* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-06-27 19:30 Mike Pagano
From: Mike Pagano @ 2022-06-27 19:30 UTC
To: gentoo-commits
commit: cbd5ac9343d107f2b67991f876d9a8fbe3160cfe
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 27 19:29:46 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 27 19:29:46 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cbd5ac93
Gentoo Linux support config settings and defaults
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
4567_distro-Gentoo-Kconfig.patch | 10 ----------
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/0000_README b/0000_README
index a7905742..639f7346 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 3000_Support-printing-firmware-info.patch
From: https://bugs.gentoo.org/732852
Desc: Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+Patch: 4567_distro-Gentoo-Kconfig.patch
+From: Tom Wijsman <TomWij@gentoo.org>
+Desc: Add Gentoo Linux support config settings and defaults.
+
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc >= v11.1 optimizations for additional CPUs.
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 1efc0fba..0a380985 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -299,16 +299,6 @@
+ See the settings that become available for more details and fine-tuning.
+
+endmenu
---- a/security/Kconfig 2022-04-25 11:20:45.487213970 -0400
-+++ b/security/Kconfig 2022-04-25 11:22:02.514143999 -0400
-@@ -167,6 +167,7 @@ config HARDENED_USERCOPY_PAGESPAN
- bool "Refuse to copy allocations that span multiple pages"
- depends on HARDENED_USERCOPY
- depends on BROKEN
-+ depends on !GENTOO_KERNEL_SELF_PROTECTION
- help
- When a multi-page allocation is done without __GFP_COMP,
- hardened usercopy will reject attempts to copy it. There are,
diff --git a/security/selinux/Kconfig b/security/selinux/Kconfig
index 9e921fc72..f29bc13fa 100644
--- a/security/selinux/Kconfig
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-02 18:20 Mike Pagano
From: Mike Pagano @ 2022-08-02 18:20 UTC
To: gentoo-commits
commit: 5937201b16fe180982c20df4fc8ec78f7e8886f0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 2 18:18:55 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 2 18:18:55 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5937201b
Add the BMQ (BitMap Queue) Scheduler.
See: https://gitlab.com/alfredchen/projectc
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch | 9956 ++++++++++++++++++++++++++
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 +
3 files changed, 9977 insertions(+)
diff --git a/0000_README b/0000_README
index 639f7346..3d9202d9 100644
--- a/0000_README
+++ b/0000_README
@@ -78,3 +78,11 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc >= v11.1 optimizations for additional CPUs.
+
+Patch: 5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
+From: https://gitlab.com/alfredchen/linux-prjc
+Desc: BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
+From: https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc: Set defaults for BMQ. Add architectures as people test them; default to N.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
new file mode 100644
index 00000000..610cfe83
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
@@ -0,0 +1,9956 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index cc3ea8febc62..ab4c5a35b999 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5299,6 +5299,12 @@
+ sa1100ir [NET]
+ See drivers/net/irda/sa1100_ir.c.
+
++ sched_timeslice=
++ [KNL] Time slice in ms for Project C BMQ/PDS scheduler.
++ Format: integer 2, 4
++ Default: 4
++ See Documentation/scheduler/sched-BMQ.txt
++
+ sched_verbose [KNL] Enables verbose scheduler debug messages.
+
+ schedstats= [KNL,X86] Enable or disable scheduled statistics.
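For reference, the sched_timeslice= parameter documented above is consumed by an
early_param() hook that appears later in this patch (alt_core.c). A condensed
sketch of that parsing, simplified from the full hunk below:

    /* Condensed from the sched_timeslice() early_param hook in alt_core.c:
     * only 2 is accepted as an alternative value; anything else falls back
     * to the 4 ms default before conversion to nanoseconds. */
    static int __init sched_timeslice(char *str)
    {
            int timeslice_ms;

            get_option(&str, &timeslice_ms);         /* parse "sched_timeslice=<n>" */
            if (timeslice_ms != 2)
                    timeslice_ms = 4;                /* default */
            sched_timeslice_ns = timeslice_ms << 20; /* ms -> ~ns (1 << 20 ns units) */
            return 0;
    }
    early_param("sched_timeslice", sched_timeslice);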
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index ddccd1077462..e24781970a3d 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1524,3 +1524,13 @@ is 10 seconds.
+
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++ 0 - No yield.
++ 1 - Deboost and requeue task. (default)
++ 2 - Set run queue skip task.
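The tunable is backed by the sched_yield_type variable defined later in
alt_core.c. Below is a hedged sketch of how a yield path can branch on it; the
helper is illustrative only, not the patch's actual sched_yield() body, though
requeue_task(), task_sched_prio_idx() and rq->skip are all taken from the patch:

    /* Illustrative sketch of consuming the yield_type tunable. */
    extern int sched_yield_type;            /* 0, 1 or 2; see alt_core.c */

    static void yield_task_sketch(struct rq *rq, struct task_struct *p)
    {
            if (sched_yield_type == 0)
                    return;                 /* 0: yield is a no-op */
            else if (sched_yield_type == 1) /* 1: deboost and requeue */
                    requeue_task(p, rq, task_sched_prio_idx(p, rq));
            else                            /* 2: mark p as the rq skip task */
                    rq->skip = p;
    }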
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++ BitMap queue CPU Scheduler
++ --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++ Overview
++ Task policy
++ Priority management
++ BitMap Queue
++ CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of the previous Priority and Deadline based Skiplist multiple queue scheduler
++(PDS), and is inspired by the Zircon scheduler. Its goal is to keep the
++scheduler code simple while remaining efficient and scalable for interactive
++tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue, and each CPU is responsible for scheduling the tasks that are put
++into its run queue.
++
++The run queue is a set of priority queues. Note that, as data structures,
++these queues are FIFO queues for non-rt tasks and priority queues for rt
++tasks; see BitMap Queue below for details. BMQ is optimized for non-rt tasks
++because most applications are non-rt tasks. Whether the queue is FIFO or
++priority, each queue is an ordered list of runnable tasks awaiting execution,
++and the data structures are the same. When it is time for a new task to run,
++the scheduler simply looks at the lowest numbered queue that contains a task
++and runs the first task from the head of that queue. The per-CPU idle task is
++also kept in the run queue, so the scheduler can always find a task to run
++from its run queue.
++
++Each task is assigned the same time slice (default 4 ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole time slice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous time slice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details of each policy.
++
++DEADLINE
+ It is squashed into a priority 0 FIFO task.
++
++FIFO/RR
+ All RT tasks share one single priority queue in the BMQ run queue design. The
++complexity of the insert operation is O(n). BMQ is not designed for systems
++that run mainly rt policy tasks.
++
++NORMAL/BATCH/IDLE
+ BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they simply don't receive the boost. To control
++the priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
++
++ISO
+ ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, there are three
++different factors used to determine the effective priority of a task; the
++effective priority is what determines which queue it will be in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded within
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it
++is modified in the following cases:
++
++* When a thread has used up its entire time slice, its boost is always
++deboosted by increasing it by one.
++* When a thread gives up CPU control (voluntarily or involuntarily) to
++reschedule, and its switch-in time (the time since it last switched in and
++ran) is below the threshold based on its priority boost, its boost is raised
++by decreasing it by one, capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
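To make the "lowest numbered queue" lookup described above concrete: the patch
keeps a per-rq bitmap of non-empty queues, so picking the next task is a
find_first_bit() plus a list-head read. A minimal sketch mirroring
sched_rq_first_task() from alt_core.c (the sched_prio2idx() index remapping of
the real code is elided here):

    static struct task_struct *bmq_pick_first(struct rq *rq)
    {
            /* lowest set bit == highest-priority non-empty queue */
            unsigned long idx = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
            struct list_head *head = &rq->queue.heads[idx];

            /* the idle task is always queued, so head is never empty */
            return list_first_entry(head, struct task_struct, sq_node);
    }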
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 8dfa36a99c74..46397c606e01 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -479,7 +479,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ seq_puts(m, "0 0 0\n");
+ else
+ seq_printf(m, "%llu %llu %lu\n",
+- (unsigned long long)task->se.sum_exec_runtime,
++ (unsigned long long)tsk_seruntime(task),
+ (unsigned long long)task->sched_info.run_delay,
+ task->sched_info.pcount);
+
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ [RLIMIT_SIGPENDING] = { 0, 0 }, \
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+- [RLIMIT_NICE] = { 0, 0 }, \
++ [RLIMIT_NICE] = { 30, 30 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
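The 30 here is easy to misread: RLIMIT_NICE encodes 20 - nice, so the mainline
default of 0 effectively forbids unprivileged tasks from lowering nice at all,
while 30 permits nice levels down to -10. That follows from mainline's
can_nice() check, sketched here with the CAP_SYS_NICE override elided:

    static int can_nice_sketch(const struct task_struct *p, const int nice)
    {
            int nice_rlim = 20 - nice;      /* mainline: nice_to_rlimit(nice) */

            return nice_rlim <= task_rlimit(p, RLIMIT_NICE);
    }
    /* rlimit 30: 20 - nice <= 30  =>  any nice >= -10 is permitted */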
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index c46f3a63b758..7c65e6317d97 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -751,8 +751,14 @@ struct task_struct {
+ unsigned int ptrace;
+
+ #ifdef CONFIG_SMP
+- int on_cpu;
+ struct __call_single_node wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++ int on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ unsigned int wakee_flips;
+ unsigned long wakee_flip_decay_ts;
+ struct task_struct *last_wakee;
+@@ -766,6 +772,7 @@ struct task_struct {
+ */
+ int recent_used_cpu;
+ int wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ int on_rq;
+
+@@ -774,6 +781,20 @@ struct task_struct {
+ int normal_prio;
+ unsigned int rt_priority;
+
++#ifdef CONFIG_SCHED_ALT
++ u64 last_ran;
++ s64 time_slice;
++ int sq_idx;
++ struct list_head sq_node;
++#ifdef CONFIG_SCHED_BMQ
++ int boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++ u64 deadline;
++#endif /* CONFIG_SCHED_PDS */
++ /* sched_clock time spent running */
++ u64 sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ struct sched_entity se;
+ struct sched_rt_entity rt;
+ struct sched_dl_entity dl;
+@@ -784,6 +805,7 @@ struct task_struct {
+ unsigned long core_cookie;
+ unsigned int core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+@@ -1517,6 +1539,15 @@ struct task_struct {
+ */
+ };
+
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t) ((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t) (0UL)
++#else /* CFS */
++#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t) ((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ return task->thread_pid;
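The tsk_seruntime()/tsk_rttimeout() macros above are what let the rest of this
patch touch accounting call sites without sprinkling #ifdefs: callers read
through the macro and compile whether or not struct sched_entity exists. A tiny
illustration (runtime_ns() is a hypothetical helper, not part of the patch):

    /* hypothetical caller, for illustration only */
    static u64 runtime_ns(struct task_struct *p)
    {
            /* sched_time under SCHED_ALT, se.sum_exec_runtime under CFS */
            return tsk_seruntime(p);
    }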
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 7c83d4d5a971..fa30f98cb2be 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -1,5 +1,24 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++ return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p) (0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p) ((p)->dl.deadline)
++
+ /*
+ * SCHED_DEADLINE tasks has negative priorities, reflecting
+ * the fact that any of them has higher prio than RT and
+@@ -21,6 +40,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
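Worth spelling out: under PDS, __tsk_deadline() packs the priority into the top
byte of a u64, so a single unsigned compare orders tasks lexicographically by
(prio, deadline); the rtmutex changes later in this patch rely on exactly that.
A sketch of the idea (deadline_before() is an illustrative name, not patch code):

    static inline int deadline_before(struct task_struct *a, struct task_struct *b)
    {
            /* prio << 56 | deadline: higher prio dominates, deadline breaks ties */
            return __tsk_deadline(a) < __tsk_deadline(b);
    }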
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..6af9ae681116 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
+
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ (7)
++
++#define MIN_NORMAL_PRIO (MAX_RT_PRIO)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO (MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ (0)
++
++#define MIN_NORMAL_PRIO (128)
++#define NORMAL_PRIO_NUM (64)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO (MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
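Plugging in the mainline constants (MAX_RT_PRIO = 100, NICE_WIDTH = 40) makes
the two layouts above concrete; a comment-style summary of the arithmetic,
assuming those mainline values:

    /* BMQ: MIN_NORMAL_PRIO = 100, MAX_PRIO = 140, DEFAULT_PRIO = 120 (nice 0),
     *      with a boost of +/- MAX_PRIORITY_ADJ (7) around the static prio.
     * PDS: MIN_NORMAL_PRIO = 128, MAX_PRIO = 128 + 64 = 192,
     *      DEFAULT_PRIO = 192 - 40/2 = 172 (nice 0).                         */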
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index e5af028c08b4..0a7565d0d3cf 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ return true;
++#ifndef CONFIG_SCHED_ALT
+ if (policy == SCHED_DEADLINE)
+ return true;
++#endif
+ return false;
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 56cffe42abbc..e020fc572b22 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -233,7 +233,8 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+
+ #endif /* !CONFIG_SMP */
+
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++ !defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index c7900e8975f1..d2b593e3807d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -812,6 +812,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ bool "Enable utilization clamping for RT/FAIR tasks"
+ depends on CPU_FREQ_GOV_SCHEDUTIL
++ depends on !SCHED_ALT
+ help
+ This feature enables the scheduler to track the clamped utilization
+ of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -858,6 +859,35 @@ config UCLAMP_BUCKETS_COUNT
+
+ If in doubt, use the default value.
+
++menuconfig SCHED_ALT
++ bool "Alternative CPU Schedulers"
++ default y
++ help
+ This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++ prompt "Alternative CPU Scheduler"
++ default SCHED_BMQ
++
++config SCHED_BMQ
++ bool "BMQ CPU scheduler"
++ help
++ The BitMap Queue CPU scheduler for excellent interactivity and
++ responsiveness on the desktop and solid scalability on normal
++ hardware and commodity servers.
++
++config SCHED_PDS
++ bool "PDS CPU scheduler"
++ help
++ The Priority and Deadline based Skip list multiple queue CPU
++ Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+
+ #
+@@ -911,6 +941,7 @@ config NUMA_BALANCING
+ depends on ARCH_SUPPORTS_NUMA_BALANCING
+ depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++ depends on !SCHED_ALT
+ help
+ This option adds support for automatic NUMA aware memory/task placement.
+ The mechanism is quite primitive and is based on migrating memory when
+@@ -1003,6 +1034,7 @@ config FAIR_GROUP_SCHED
+ depends on CGROUP_SCHED
+ default CGROUP_SCHED
+
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ depends on FAIR_GROUP_SCHED
+@@ -1025,6 +1057,7 @@ config RT_GROUP_SCHED
+ realtime bandwidth for them.
+ See Documentation/scheduler/sched-rt-group.rst for more information.
+
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+
+ config UCLAMP_TASK_GROUP
+@@ -1268,6 +1301,7 @@ config CHECKPOINT_RESTORE
+
+ config SCHED_AUTOGROUP
+ bool "Automatic process group scheduling"
++ depends on !SCHED_ALT
+ select CGROUPS
+ select CGROUP_SCHED
+ select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 73cc8f03511a..2d0bad762895 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ .stack = init_stack,
+ .usage = REFCOUNT_INIT(2),
+ .flags = PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++ .prio = DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++ .static_prio = DEFAULT_PRIO,
++ .normal_prio = DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ .prio = MAX_PRIO - 20,
+ .static_prio = MAX_PRIO - 20,
+ .normal_prio = MAX_PRIO - 20,
++#endif
+ .policy = SCHED_NORMAL,
+ .cpus_ptr = &init_task.cpus_mask,
+ .user_cpus_ptr = NULL,
+@@ -88,6 +94,17 @@ struct task_struct init_task
+ .restart_block = {
+ .fn = do_no_restart_syscall,
+ },
++#ifdef CONFIG_SCHED_ALT
++ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++ .boost_prio = 0,
++ .sq_idx = 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++ .deadline = 0,
++#endif
++ .time_slice = HZ,
++#else
+ .se = {
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ },
+@@ -95,6 +112,7 @@ struct task_struct init_task
+ .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
+ .time_slice = RR_TIMESLICE,
+ },
++#endif
+ .tasks = LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
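A note on the numbers above, assuming the BMQ constants from the sched/prio.h
hunk earlier in this patch: the init task starts fully deboosted and earns boost
by sleeping, while the non-ALT branch keeps the familiar value 120:

    /* BMQ branch (with mainline MAX_RT_PRIO = 100, NICE_WIDTH = 40):
     *   static_prio = DEFAULT_PRIO          = 120   (nice 0)
     *   prio = normal_prio = 120 + 7        = 127   (fully deboosted)
     * CFS branch: MAX_PRIO - 20 = 140 - 20  = 120                    */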
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+
+ config SCHED_CORE
+ bool "Core Scheduling for SMT"
+- depends on SCHED_SMT
++ depends on SCHED_SMT && !SCHED_ALT
+ help
+ This option permits Core Scheduling, a means of coordinated task
+ selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 71a418858a5e..7e3016873db1 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -704,7 +704,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ return ret;
+ }
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * Helper routine for generate_sched_domains().
+ * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1100,7 +1100,7 @@ static void rebuild_sched_domains_locked(void)
+ /* Have scheduler rebuild the domains */
+ partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 164ed9ef77a3..c974a84b056f 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -150,7 +150,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ */
+ t1 = tsk->sched_info.pcount;
+ t2 = tsk->sched_info.run_delay;
+- t3 = tsk->se.sum_exec_runtime;
++ t3 = tsk_seruntime(tsk);
+
+ d->cpu_count += t1;
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 64c938ce36fe..a353f7ef5392 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -124,7 +124,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->curr_target = next_thread(tsk);
+ }
+
+- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++ add_device_randomness((const void*) &tsk_seruntime(tsk),
+ sizeof(unsigned long long));
+
+ /*
+@@ -145,7 +145,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->inblock += task_io_get_inblock(tsk);
+ sig->oublock += task_io_get_oublock(tsk);
+ task_io_accounting_add(&sig->ioac, &tsk->ioac);
+- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++ sig->sum_sched_runtime += tsk_seruntime(tsk);
+ sig->nr_threads--;
+ __unhash_process(tsk, group_dead);
+ write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 7779ee8abc2a..5b9893cdfb1b 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -300,21 +300,25 @@ static __always_inline void
+ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ {
+ waiter->prio = __waiter_prio(task);
+- waiter->deadline = task->dl.deadline;
++ waiter->deadline = __tsk_deadline(task);
+ }
+
+ /*
+ * Only use with rt_mutex_waiter_{less,equal}()
+ */
+ #define task_to_waiter(p) \
+- &(struct rt_mutex_waiter){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++ &(struct rt_mutex_waiter){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+
+ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline < right->deadline);
++#else
+ if (left->prio < right->prio)
+ return 1;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -323,16 +327,22 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ */
+ if (dl_prio(left->prio))
+ return dl_time_before(left->deadline, right->deadline);
++#endif
+
+ return 0;
++#endif
+ }
+
+ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline == right->deadline);
++#else
+ if (left->prio != right->prio)
+ return 0;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -341,8 +351,10 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
++#endif
+
+ return 1;
++#endif
+ }
+
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..d0ab41c4d9ad
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7807 @@
++/*
++ * kernel/sched/alt_core.c
++ *
++ * Core alternative kernel scheduler code and related syscalls
++ *
++ * Copyright (C) 1991-2002 Linus Torvalds
++ *
++ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
++ * a whole lot of those previous things.
++ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
++ * scheduler by Alfred Chen.
++ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/profile.h>
++#include <linux/nmi.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include "pelt.h"
++
++#include "../../fs/io-wq.h"
++#include "../smpboot.h"
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x) (1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x) (0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v5.19-r0"
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p) rt_prio((p)->prio)
++#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p) (rt_policy((p)->policy))
++
++#define STOP_PRIO (MAX_RT_PRIO - 1)
++
++/* Default time slice is 4 ms; it can be set via the kernel parameter "sched_timeslice" */
++u64 sched_timeslice_ns __read_mostly = (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++static int __init sched_timeslice(char *str)
++{
++ int timeslice_ms;
++
++ get_option(&str, &timeslice_ms);
++ if (2 != timeslice_ms)
++ timeslice_ms = 4;
++ sched_timeslice_ns = timeslice_ms << 20;
++ sched_timeslice_imp(timeslice_ms);
++
++ return 0;
++}
++early_param("sched_timeslice", sched_timeslice);
++
++/* Reschedule if less than this much time is left (~100 μs, stored in ns) */
++#define RESCHED_NS (100 << 10)
++
++/**
++ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * 0: No yield.
++ * 1: Deboost and requeue task. (default)
++ * 2: Set rq skip task.
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_rq_watermark[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++ int i;
++
++ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++ for(i = 0; i < SCHED_BITS; i++)
++ INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++ struct task_struct *idle)
++{
++ idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++ INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++ list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++/* watermark related functions */
++static inline void update_sched_rq_watermark(struct rq *rq)
++{
++ unsigned long watermark = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ unsigned long last_wm = rq->watermark;
++ unsigned long i;
++ int cpu;
++
++ if (watermark == last_wm)
++ return;
++
++ rq->watermark = watermark;
++ cpu = cpu_of(rq);
++ if (watermark < last_wm) {
++ for (i = last_wm; i > watermark; i--)
++ cpumask_clear_cpu(cpu, sched_rq_watermark + SCHED_QUEUE_BITS - i);
++#ifdef CONFIG_SCHED_SMT
++ if (static_branch_likely(&sched_smt_present) &&
++ IDLE_TASK_SCHED_PRIO == last_wm)
++ cpumask_andnot(&sched_sg_idle_mask,
++ &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++ return;
++ }
++ /* last_wm < watermark */
++ for (i = watermark; i > last_wm; i--)
++ cpumask_set_cpu(cpu, sched_rq_watermark + SCHED_QUEUE_BITS - i);
++#ifdef CONFIG_SCHED_SMT
++ if (static_branch_likely(&sched_smt_present) &&
++ IDLE_TASK_SCHED_PRIO == watermark) {
++ cpumask_t tmp;
++
++ cpumask_and(&tmp, cpu_smt_mask(cpu), sched_rq_watermark);
++ if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
++ cpumask_or(&sched_sg_idle_mask,
++ &sched_sg_idle_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++ unsigned long idx = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ const struct list_head *head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++ unsigned long idx = p->sq_idx;
++ struct list_head *head = &rq->queue.heads[idx];
++
++ if (list_is_last(&p->sq_node, head)) {
++ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++ sched_idx2prio(idx, rq) + 1);
++ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++ }
++
++ return list_next_entry(p, sq_node);
++}
++
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++ struct task_struct *next = sched_rq_first_task(rq);
++
++ if (unlikely(next == rq->skip))
++ next = sched_rq_next_task(next, rq);
++
++ return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ * p->pi_lock
++ * rq->lock
++ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ * rq1->lock
++ * rq2->lock where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ * - sched_setaffinity()/
++ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
++ * - set_user_nice(): p->se.load, p->*prio
++ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
++ * p->se.load, p->rt_priority,
++ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ * - sched_setnuma(): p->numa_preferred_nid
++ * - sched_move_task()/
++ * cpu_cgroup_fork(): p->sched_task_group
++ * - uclamp_update_active() p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ * is changed locklessly using set_current_state(), __set_current_state() or
++ * set_special_state(), see their respective comments, or by
++ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ * concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ * is set by activate_task() and cleared by deactivate_task(), under
++ * rq->lock. Non-zero indicates the task is runnable, the special
++ * ON_RQ_MIGRATING state is used for migration without holding both
++ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ * is set by prepare_task() and cleared by finish_task() such that it will be
++ * set before p is scheduled-in and cleared after p is scheduled-out, both
++ * under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ * [ The astute reader will observe that it is possible for two tasks on one
++ * CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ * - Don't call set_task_cpu() on a blocked task:
++ *
++ * We don't care what CPU we're not running on, this simplifies hotplug,
++ * the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ * - for try_to_wake_up(), called under p->pi_lock:
++ *
++ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ * - for migration called under rq->lock:
++ * [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ * o move_queued_task()
++ * o detach_task()
++ *
++ * - for migration called under double_rq_lock():
++ *
++ * o __migrate_swap_task()
++ * o push_rt_task() / pull_rt_task()
++ * o push_dl_task() / pull_dl_task()
++ * o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock(&rq->lock);
++ if (likely((p->on_cpu || task_on_rq_queued(p))
++ && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ *plock = NULL;
++ return rq;
++ }
++ }
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++ if (NULL != lock)
++ raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++ unsigned long *flags)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock_irqsave(&rq->lock, *flags);
++ if (likely((p->on_cpu || task_on_rq_queued(p))
++ && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, *flags);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ raw_spin_lock_irqsave(&p->pi_lock, *flags);
++ if (likely(!p->on_cpu && !p->on_rq &&
++ rq == task_rq(p))) {
++ *plock = &p->pi_lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++ }
++ }
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++ unsigned long *flags)
++{
++ raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ lockdep_assert_held(&p->pi_lock);
++
++ for (;;) {
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++ return rq;
++ raw_spin_unlock(&rq->lock);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ for (;;) {
++ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ /*
++ * move_queued_task() task_rq_lock()
++ *
++ * ACQUIRE (rq->lock)
++ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
++ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
++ * [S] ->cpu = new_cpu [L] task_rq()
++ * [L] ->on_rq
++ * RELEASE (rq->lock)
++ *
++ * If we observe the old CPU in task_rq_lock(), the acquire of
++ * the old rq->lock will fully serialize against the stores.
++ *
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
++ */
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++ raw_spinlock_t *lock;
++
++ /* Matches synchronize_rcu() in __sched_core_enable() */
++ preempt_disable();
++
++ for (;;) {
++ lock = __rq_lockp(rq);
++ raw_spin_lock_nested(lock, subclass);
++ if (likely(lock == __rq_lockp(rq))) {
++ /* preempt_count *MUST* be > 1 */
++ preempt_enable_no_resched();
++ return;
++ }
++ raw_spin_unlock(lock);
++ }
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++ raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++ s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++ /*
++ * Since irq_time is only updated on {soft,}irq_exit, we might run into
++ * this case when a previous update_rq_clock() happened inside a
++ * {soft,}irq region.
++ *
++ * When this happens, we stop ->clock_task and only update the
++ * prev_irq_time stamp to account for the part that fit, so that a next
++ * update will consume the rest. This ensures ->clock_task is
++ * monotonic.
++ *
++ * It does however cause some slight miss-attribution of {soft,}irq
++ * time, a more accurate solution would be to update the irq_time using
++ * the current rq->clock timestamp, except that would require using
++ * atomic ops.
++ */
++ if (irq_delta > delta)
++ irq_delta = delta;
++
++ rq->prev_irq_time += irq_delta;
++ delta -= irq_delta;
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ if (static_key_false((&paravirt_steal_rq_enabled))) {
++ steal = paravirt_steal_clock(cpu_of(rq));
++ steal -= rq->prev_steal_time_rq;
++
++ if (unlikely(steal > delta))
++ steal = delta;
++
++ rq->prev_steal_time_rq += steal;
++ delta -= steal;
++ }
++#endif
++
++ rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ if ((irq_delta + steal))
++ update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++ if (unlikely(delta <= 0))
++ return;
++ rq->clock += delta;
++ update_rq_time_edge(rq);
++ update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT (8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t) ((t) >> 17)
++#define LOAD_HALF_BLOCK(t) ((t) >> 16)
++#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++ u64 time = rq->clock;
++ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
++ RQ_LOAD_HISTORY_BITS - 1);
++ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++ u64 curr = !!rq->nr_running;
++
++ if (delta) {
++ rq->load_history = rq->load_history >> delta;
++
++ if (delta < RQ_UTIL_SHIFT) {
++ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++ rq->load_history ^= LOAD_BLOCK_BIT(delta);
++ }
++
++ rq->load_block = BLOCK_MASK(time) * prev;
++ } else {
++ rq->load_block += (time - rq->load_stamp) * prev;
++ }
++ if (prev ^ curr)
++ rq->load_history ^= CURRENT_LOAD_BIT;
++ rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu, unsigned long max)
++{
++ return rq_load_util(cpu_rq(cpu), max);
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid. Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++ struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++ cpu_of(rq)));
++ if (data)
++ data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++ int cpu = cpu_of(rq);
++
++ if (!tick_nohz_full_cpu(cpu))
++ return;
++
++ if (rq->nr_running < 2)
++ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++ else
++ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++ return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++ unsigned long ip = 0;
++ unsigned int state;
++
++ if (!p || p == current)
++ return 0;
++
++ /* Only get wchan if task is blocked and we can keep it that way. */
++ raw_spin_lock_irq(&p->pi_lock);
++ state = READ_ONCE(p->__state);
++ smp_rmb(); /* see try_to_wake_up() */
++ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++ ip = __get_wchan(p);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags) \
++ psi_dequeue(p, flags & DEQUEUE_SLEEP); \
++ sched_info_dequeue(rq, p); \
++ \
++ list_del(&p->sq_node); \
++ if (list_empty(&rq->queue.heads[p->sq_idx])) \
++ clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags) \
++ sched_info_enqueue(rq, p); \
++ psi_enqueue(p, flags); \
++ \
++ p->sq_idx = task_sched_prio_idx(p, rq); \
++ list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]); \
++ set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++
++ __SCHED_DEQUEUE_TASK(p, rq, flags);
++ --rq->nr_running;
++#ifdef CONFIG_SMP
++ if (1 == rq->nr_running)
++ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: enqueue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++
++ __SCHED_ENQUEUE_TASK(p, rq, flags);
++ update_sched_rq_watermark(rq);
++ ++rq->nr_running;
++#ifdef CONFIG_SMP
++ if (2 == rq->nr_running)
++ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx)
++{
++ lockdep_assert_held(&rq->lock);
++ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++ cpu_of(rq), task_cpu(p));
++
++ list_del(&p->sq_node);
++ list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++ if (idx != p->sq_idx) {
++ if (list_empty(&rq->queue.heads[p->sq_idx]))
++ clear_bit(sched_idx2prio(p->sq_idx, rq),
++ rq->queue.bitmap);
++ p->sq_idx = idx;
++ set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++ update_sched_rq_watermark(rq);
++ }
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask) \
++ ({ \
++ typeof(ptr) _ptr = (ptr); \
++ typeof(mask) _mask = (mask); \
++ typeof(*_ptr) _old, _val = *_ptr; \
++ \
++ for (;;) { \
++ _old = cmpxchg(_ptr, _val, _val | _mask); \
++ if (_old == _val) \
++ break; \
++ _val = _old; \
++ } \
++ _old; \
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ typeof(ti->flags) old, val = READ_ONCE(ti->flags);
++
++ for (;;) {
++ if (!(val & _TIF_POLLING_NRFLAG))
++ return false;
++ if (val & _TIF_NEED_RESCHED)
++ return true;
++ old = cmpxchg(&ti->flags, val, val | _TIF_NEED_RESCHED);
++ if (old == val)
++ break;
++ val = old;
++ }
++ return true;
++}
++
++#else
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++ set_tsk_need_resched(p);
++ return true;
++}
++
++#ifdef CONFIG_SMP
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ struct wake_q_node *node = &task->wake_q;
++
++ /*
++ * Atomically grab the task, if ->wake_q is !nil already it means
++ * it's already queued (either by us or someone else) and will get the
++ * wakeup due to that.
++ *
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
++ */
++ smp_mb__before_atomic();
++ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++ return false;
++
++ /*
++ * The head is context local, there can be no concurrency.
++ */
++ *head->lastp = node;
++ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++ struct wake_q_node *node = head->first;
++
++ while (node != WAKE_Q_TAIL) {
++ struct task_struct *task;
++
++ task = container_of(node, struct task_struct, wake_q);
++ /* task can safely be re-inserted now: */
++ node = node->next;
++ task->wake_q.next = NULL;
++
++ /*
++ * wake_up_process() executes a full barrier, which pairs with
++ * the queueing in wake_q_add() so as not to miss wakeups.
++ */
++ wake_up_process(task);
++ put_task_struct(task);
++ }
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++ struct task_struct *curr = rq->curr;
++ int cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ if (test_tsk_need_resched(curr))
++ return;
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id()) {
++ set_tsk_need_resched(curr);
++ set_preempt_need_resched();
++ return;
++ }
++
++ if (set_nr_and_not_polling(curr))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(cpu_rq(cpu));
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU. This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++ int i, cpu = smp_processor_id(), default_cpu = -1;
++ struct cpumask *mask;
++ const struct cpumask *hk_mask;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++ if (!idle_cpu(cpu))
++ return cpu;
++ default_cpu = cpu;
++ }
++
++ hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++ for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++ for_each_cpu_and(i, mask, hk_mask)
++ if (!idle_cpu(i))
++ return i;
++
++ if (default_cpu == -1)
++ default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++ cpu = default_cpu;
++
++ return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (cpu == smp_processor_id())
++ return;
++
++ if (set_nr_and_not_polling(rq->idle))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++ /*
++ * We just need the target to call irq_exit() and re-evaluate
++ * the next tick. The nohz full kick at least implies that.
++ * If needed we can still optimize that later with an
++ * empty IRQ.
++ */
++ if (cpu_is_offline(cpu))
++ return true; /* Don't try to wake offline CPUs. */
++ if (tick_nohz_full_cpu(cpu)) {
++ if (cpu != smp_processor_id() ||
++ tick_nohz_tick_stopped())
++ tick_nohz_full_kick_cpu(cpu);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++ if (!wake_up_full_nohz_cpu(cpu))
++ wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++ struct rq *rq = info;
++ int cpu = cpu_of(rq);
++ unsigned int flags;
++
++ /*
++ * Release the rq::nohz_csd.
++ */
++ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++ WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++ rq->idle_balance = idle_cpu(cpu);
++ if (rq->idle_balance && !need_resched()) {
++ rq->nohz_idle_balance = flags;
++ raise_softirq_irqoff(SCHED_SOFTIRQ);
++ }
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void check_preempt_curr(struct rq *rq)
++{
++ if (sched_rq_first_task(rq) != rq->curr)
++ resched_curr(rq);
++}
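++
++/*
++ * Unlike mainline's per-class check_preempt_curr(), the alt scheduler
++ * only needs to compare the head of the priority-ordered runqueue with
++ * the task currently running: if sched_rq_first_task() is not rq->curr,
++ * a better-placed task is waiting and we reschedule. E.g. if curr runs
++ * at prio 122 and a prio 120 task was just enqueued (lower value means
++ * higher priority), the new task becomes the queue head and
++ * resched_curr() fires.
++ */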
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++ if (hrtimer_active(&rq->hrtick_timer))
++ hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++ raw_spin_lock(&rq->lock);
++ resched_curr(rq);
++ raw_spin_unlock(&rq->lock);
++
++ return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ * - enabled by features
++ * - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ /**
++ * Alt schedule FW doesn't support sched_feat yet
++ if (!sched_feat(HRTICK))
++ return 0;
++ */
++ if (!cpu_active(cpu_of(rq)))
++ return 0;
++ return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ ktime_t time = rq->hrtick_time;
++
++ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++ struct rq *rq = arg;
++
++ raw_spin_lock(&rq->lock);
++ __hrtick_restart(rq);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ s64 delta;
++
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense and can cause timer DoS.
++ */
++ delta = max_t(s64, delay, 10000LL);
++
++ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++ if (rq == this_rq())
++ __hrtick_restart(rq);
++ else
++ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
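++
++/*
++ * Why the IPI: the timer is armed with HRTIMER_MODE_ABS_PINNED_HARD, so
++ * it must be started on the CPU it will fire on. For a remote rq we
++ * bounce through rq->hrtick_csd (initialized to __hrtick_start in
++ * hrtick_rq_init() below) so that the timer is armed locally on
++ * cpu_of(rq), under that rq's lock.
++ */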
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense. Rely on vruntime for fairness.
++ */
++ delay = max_t(u64, delay, 10000LL);
++ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++ HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++ rq->hrtick_timer.function = hrtick;
++}
++#else /* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif /* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
++ static_prio + MAX_PRIORITY_ADJ;
++}
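++
++/*
++ * Worked example (illustrative, with MAX_RT_PRIO == 100): a SCHED_FIFO
++ * task with rt_priority 50 maps to 100 - 1 - 50 = 49, i.e. higher
++ * rt_priority yields a lower (better) prio value. A SCHED_NORMAL task at
++ * nice 0 has static_prio NICE_TO_PRIO(0) == 120 and maps to
++ * 120 + MAX_PRIORITY_ADJ, where MAX_PRIORITY_ADJ is the boost headroom
++ * this scheduler reserves for interactivity adjustments.
++ */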
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++ p->normal_prio = normal_prio(p);
++ /*
++ * If we are RT tasks or we were boosted to RT priority,
++ * keep the priority unchanged. Otherwise, update priority
++ * to the normal priority:
++ */
++ if (!rt_prio(p->prio))
++ return p->normal_prio;
++ return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++ enqueue_task(p, rq, ENQUEUE_WAKEUP);
++ p->on_rq = TASK_ON_RQ_QUEUED;
++
++ /*
++ * If in_iowait is set, the code below may not trigger any cpufreq
++ * utilization updates, so do it here explicitly with the IOWAIT flag
++ * passed.
++ */
++ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++ dequeue_task(p, rq, DEQUEUE_SLEEP);
++ p->on_rq = 0;
++ cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++ /*
++ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++ * successfully executed on another CPU. We must ensure that updates of
++ * per-task data have been completed by this moment.
++ */
++ smp_wmb();
++
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++ return p->migration_disabled;
++#else
++ return false;
++#endif
++}
++
++#define SCA_CHECK 0x01
++#define SCA_USER 0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * We should never call set_task_cpu() on a blocked task,
++ * ttwu() will sort out the placement.
++ */
++ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++ /*
++ * The caller should hold either p->pi_lock or rq->lock, when changing
++ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++ *
++ * sched_move_task() holds both and thus holding either pins the cgroup,
++ * see task_group().
++ */
++ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++ lockdep_is_held(&task_rq(p)->lock)));
++#endif
++ /*
++ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++ */
++ WARN_ON_ONCE(!cpu_online(new_cpu));
++
++ WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++ if (task_cpu(p) == new_cpu)
++ return;
++ trace_sched_migrate_task(p, new_cpu);
++ rseq_migrate(p);
++ perf_event_task_migrate(p);
++
++ __set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED 0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ /*
++ * This here violates the locking rules for affinity, since we're only
++ * supposed to change these variables while holding both rq->lock and
++ * p->pi_lock.
++ *
++ * HOWEVER, it magically works, because ttwu() is the only code that
++ * accesses these variables under p->pi_lock and only does so after
++ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++ * before finish_task().
++ *
++ * XXX do further audits, this smells like something putrid.
++ */
++ SCHED_WARN_ON(!p->on_cpu);
++ p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++ int cpu;
++
++ if (p->migration_disabled) {
++ p->migration_disabled++;
++ return;
++ }
++
++ preempt_disable();
++ cpu = smp_processor_id();
++ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++ cpu_rq(cpu)->nr_pinned++;
++ p->migration_disabled = 1;
++ p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++ /*
++ * Violates locking rules! see comment in __do_set_cpus_ptr().
++ */
++ if (p->cpus_ptr == &p->cpus_mask)
++ __do_set_cpus_ptr(p, cpumask_of(cpu));
++ }
++ preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++ if (0 == p->migration_disabled)
++ return;
++
++ if (p->migration_disabled > 1) {
++ p->migration_disabled--;
++ return;
++ }
++
++ if (WARN_ON_ONCE(!p->migration_disabled))
++ return;
++
++ /*
++ * Ensure stop_task runs either before or after this, and that
++ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++ */
++ preempt_disable();
++ /*
++	 * Assumption: current should be running on an allowed CPU
++ */
++ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++ if (p->cpus_ptr != &p->cpus_mask)
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ /*
++ * Mustn't clear migration_disabled() until cpus_ptr points back at the
++ * regular cpus_mask, otherwise things that race (eg.
++ * select_fallback_rq) get confused.
++ */
++ barrier();
++ p->migration_disabled = 0;
++ this_rq()->nr_pinned--;
++ preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
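++
++/*
++ * Typical usage (sketch, with a hypothetical per-CPU variable): pin the
++ * current task to its CPU across a per-CPU access while staying
++ * preemptible, e.g.
++ *
++ *	migrate_disable();
++ *	p = this_cpu_ptr(&my_percpu_data);	/* my_percpu_data: hypothetical */
++ *	...					/* p stays valid; no migration */
++ *	migrate_enable();
++ *
++ * Calls nest via the migration_disabled counter handled above.
++ */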
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++ /* When not in the task's cpumask, no point in looking further. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /* migrate_disabled() must be allowed to finish. */
++ if (is_migration_disabled(p))
++ return cpu_online(cpu);
++
++	/* Non-kernel threads are not allowed during either online or offline. */
++ if (!(p->flags & PF_KTHREAD))
++ return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++ /* KTHREAD_IS_PER_CPU is always allowed. */
++ if (kthread_is_per_cpu(p))
++ return cpu_online(cpu);
++
++ /* Regular kernel threads don't get to stay during offline. */
++ if (cpu_dying(cpu))
++ return false;
++
++ /* But are allowed during online. */
++ return cpu_online(cpu);
++}
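++
++/*
++ * Summary of the checks above, in order:
++ * - CPU not in p->cpus_ptr	-> never allowed
++ * - migration disabled	-> any online CPU (must be able to finish)
++ * - user task		-> needs cpu_active() && task_cpu_possible()
++ * - per-CPU kthread	-> any online CPU
++ * - other kthreads	-> online CPUs that are not dying
++ */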
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ * stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ * off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ * it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ * is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++ lockdep_assert_held(&rq->lock);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++ dequeue_task(p, rq, 0);
++ update_sched_rq_watermark(rq);
++ set_task_cpu(p, new_cpu);
++ raw_spin_unlock(&rq->lock);
++
++ rq = cpu_rq(new_cpu);
++
++ raw_spin_lock(&rq->lock);
++ BUG_ON(task_cpu(p) != new_cpu);
++ sched_task_sanity_check(p, rq);
++ enqueue_task(p, rq, 0);
++ p->on_rq = TASK_ON_RQ_QUEUED;
++ check_preempt_curr(rq);
++
++ return rq;
++}
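++
++/*
++ * Locking note: the task is dequeued and retagged TASK_ON_RQ_MIGRATING
++ * under the old rq->lock, that lock is dropped, and only then is the new
++ * rq->lock taken for the enqueue, so the two locks are never held at
++ * once. Callers must continue with the returned rq (and its lock), not
++ * the one they passed in.
++ */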
++
++struct migration_arg {
++ struct task_struct *task;
++ int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++ /* Affinity changed (again). */
++ if (!is_cpu_allowed(p, dest_cpu))
++ return rq;
++
++ update_rq_clock(rq);
++ return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++ struct migration_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++
++ /*
++ * The original target CPU might have gone down and we might
++ * be on another CPU but it doesn't matter.
++ */
++ local_irq_save(flags);
++ /*
++ * We need to explicitly wake pending tasks before running
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
++ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++ */
++ flush_smp_call_function_queue();
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++ /*
++ * If task_rq(p) != rq, it cannot be migrated here, because we're
++ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++ * we're holding p->pi_lock.
++ */
++ if (task_rq(p) == rq && task_on_rq_queued(p))
++ rq = __migrate_task(rq, p, arg->dest_cpu);
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
++{
++ cpumask_copy(&p->cpus_mask, new_mask);
++ p->nr_cpus_allowed = cpumask_weight(new_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ lockdep_assert_held(&p->pi_lock);
++ set_cpus_allowed_common(p, new_mask);
++}
++
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ __do_set_cpus_allowed(p, new_mask);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++ int node)
++{
++ if (!src->user_cpus_ptr)
++ return 0;
++
++ dst->user_cpus_ptr = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
++ if (!dst->user_cpus_ptr)
++ return -ENOMEM;
++
++ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++ return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = NULL;
++
++ swap(p->user_cpus_ptr, user_mask);
++
++ return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++ kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++ return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * If @match_state is nonzero, it's the @p->state value just checked and
++ * not expected to change. If it changes, i.e. @p might have woken up,
++ * then return zero. When we succeed in waiting for @p to be off its CPU,
++ * we return a positive number (its total switch count). If a second call
++ * a short while later returns the same number, the caller can be sure that
++ * @p has remained unscheduled the whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++ unsigned long flags;
++ bool running, on_rq;
++ unsigned long ncsw;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ for (;;) {
++ rq = task_rq(p);
++
++ /*
++ * If the task is actively running on another CPU
++ * still, just relax and busy-wait without holding
++ * any locks.
++ *
++ * NOTE! Since we don't hold any locks, it's not
++ * even sure that "rq" stays as the right runqueue!
++ * But we don't care, since this will return false
++ * if the runqueue has changed and p is actually now
++ * running somewhere else!
++ */
++ while (task_running(p) && p == rq->curr) {
++ if (match_state && unlikely(READ_ONCE(p->__state) != match_state))
++ return 0;
++ cpu_relax();
++ }
++
++ /*
++ * Ok, time to look more closely! We need the rq
++ * lock now, to be *sure*. If we're wrong, we'll
++ * just go back and repeat.
++ */
++ task_access_lock_irqsave(p, &lock, &flags);
++ trace_sched_wait_task(p);
++ running = task_running(p);
++ on_rq = p->on_rq;
++ ncsw = 0;
++ if (!match_state || READ_ONCE(p->__state) == match_state)
++ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ /*
++ * If it changed from the expected state, bail out now.
++ */
++ if (unlikely(!ncsw))
++ break;
++
++ /*
++ * Was it really running after all now that we
++ * checked with the proper locks actually held?
++ *
++ * Oops. Go back and try again..
++ */
++ if (unlikely(running)) {
++ cpu_relax();
++ continue;
++ }
++
++ /*
++ * It's not enough that it's not actively running,
++ * it must be off the runqueue _entirely_, and not
++ * preempted!
++ *
++ * So if it was still runnable (but just not actively
++ * running right now), it's preempted, and we should
++ * yield - it could be a while.
++ */
++ if (unlikely(on_rq)) {
++ ktime_t to = NSEC_PER_SEC / HZ;
++
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++ continue;
++ }
++
++ /*
++ * Ahh, all good. It wasn't running, and it wasn't
++ * runnable, which means that it will never become
++ * running in the future either. We're all done!
++ */
++ break;
++ }
++
++ return ncsw;
++}
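++
++/*
++ * Usage sketch, per the comment above (hypothetical caller):
++ *
++ *	ncsw = wait_task_inactive(p, TASK_TRACED);
++ *	...
++ *	if (ncsw && wait_task_inactive(p, TASK_TRACED) == ncsw)
++ *		; /* @p stayed off its CPU for the whole interval */
++ *
++ * A zero return means @p changed state (e.g. woke) before going inactive.
++ */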
++
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++ int cpu;
++
++ preempt_disable();
++ cpu = task_cpu(p);
++ if ((cpu != smp_processor_id()) && task_curr(p))
++ smp_send_reschedule(cpu);
++ preempt_enable();
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ * - cpu_active must be a subset of cpu_online
++ *
++ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ * see __set_cpus_allowed_ptr(). At this point the newly online
++ * CPU isn't yet part of the sched domains, and balancing will not
++ * see it.
++ *
++ * - on cpu-down we clear cpu_active() to mask the sched domains and
++ * avoid the load balancer to place new tasks on the to be removed
++ * CPU. Existing tasks will remain running there and will be taken
++ * off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++ int nid = cpu_to_node(cpu);
++ const struct cpumask *nodemask = NULL;
++ enum { cpuset, possible, fail } state = cpuset;
++ int dest_cpu;
++
++ /*
++ * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select a CPU on another node instead.
++ */
++ if (nid != -1) {
++ nodemask = cpumask_of_node(nid);
++
++ /* Look for allowed, online CPU in same node. */
++ for_each_cpu(dest_cpu, nodemask) {
++ if (is_cpu_allowed(p, dest_cpu))
++ return dest_cpu;
++ }
++ }
++
++ for (;;) {
++ /* Any allowed, online CPU? */
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
++ if (!is_cpu_allowed(p, dest_cpu))
++ continue;
++ goto out;
++ }
++
++ /* No more Mr. Nice Guy. */
++ switch (state) {
++ case cpuset:
++ if (cpuset_cpus_allowed_fallback(p)) {
++ state = possible;
++ break;
++ }
++ fallthrough;
++ case possible:
++ /*
++ * XXX When called from select_task_rq() we only
++ * hold p->pi_lock and again violate locking order.
++ *
++ * More yuck to audit.
++ */
++ do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++ state = fail;
++ break;
++
++ case fail:
++ BUG();
++ break;
++ }
++ }
++
++out:
++ if (state != cpuset) {
++ /*
++ * Don't tell them about moving exiting tasks or
++ * kernel threads (both mm NULL), since they never
++ * leave kernel.
++ */
++ if (p->mm && printk_ratelimit()) {
++ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++ task_pid_nr(p), p->comm, cpu);
++ }
++ }
++
++ return dest_cpu;
++}
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ cpumask_t chk_mask, tmp;
++
++ if (unlikely(!cpumask_and(&chk_mask, p->cpus_ptr, cpu_active_mask)))
++ return select_fallback_rq(task_cpu(p), p);
++
++ if (
++#ifdef CONFIG_SCHED_SMT
++ cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
++#endif
++ cpumask_and(&tmp, &chk_mask, sched_rq_watermark) ||
++ cpumask_and(&tmp, &chk_mask,
++ sched_rq_watermark + SCHED_QUEUE_BITS - 1 - task_sched_prio(p)))
++ return best_mask_cpu(task_cpu(p), &tmp);
++
++ return best_mask_cpu(task_cpu(p), &chk_mask);
++}
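++
++/*
++ * Reading the cascade above (a sketch; the exact watermark indexing is
++ * defined where sched_rq_watermark is maintained elsewhere in this
++ * patch): first prefer CPUs whose whole SMT group is idle
++ * (sched_sg_idle_mask), then fully idle CPUs, then CPUs whose runqueue
++ * watermark shows nothing queued at or above this task's priority, and
++ * finally any allowed active CPU; each step picks the topologically
++ * closest candidate via best_mask_cpu().
++ */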
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++ static struct lock_class_key stop_pi_lock;
++ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++ struct sched_param start_param = { .sched_priority = 0 };
++ struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++ if (stop) {
++ /*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused by.
++ *
++ * Also, it will make PI more or less work without too
++ * much confusion -- but then, stop work should not
++ * rely on PI working anyway.
++ */
++ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++ /*
++ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++ * adjust the effective priority of a task. As a result,
++ * rt_mutex_setprio() can trigger (RT) balancing operations,
++ * which can then trigger wakeups of the stop thread to push
++ * around the current task.
++ *
++ * The stop task itself will never be part of the PI-chain, it
++ * never blocks, therefore that ->pi_lock recursion is safe.
++ * Tell lockdep about this by placing the stop->pi_lock in its
++ * own class.
++ */
++ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++ }
++
++ cpu_rq(cpu)->stop = stop;
++
++ if (old_stop) {
++ /*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in peace.
++ */
++ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++ }
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++ raw_spinlock_t *lock, unsigned long irq_flags)
++{
++ /* Can the task run on the task's current CPU? If so, we're done */
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ if (p->migration_disabled) {
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ p->migration_flags |= MDF_FORCE_ENABLED;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++ }
++
++ if (task_running(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++ struct migration_arg arg = { p, dest_cpu };
++
++ /* Need help from migration thread: drop lock and wait. */
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ return 0;
++ }
++ if (task_on_rq_queued(p)) {
++ /*
++ * OK, since we're going to drop the lock immediately
++ * afterwards anyway.
++ */
++ update_rq_clock(rq);
++ rq = move_queued_task(rq, p, dest_cpu);
++ lock = &rq->lock;
++ }
++ }
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++ const struct cpumask *new_mask,
++ u32 flags,
++ struct rq *rq,
++ raw_spinlock_t *lock,
++ unsigned long irq_flags)
++{
++ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ bool kthread = p->flags & PF_KTHREAD;
++ struct cpumask *user_mask = NULL;
++ int dest_cpu;
++ int ret = 0;
++
++ if (kthread || is_migration_disabled(p)) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs,
++ * however, during cpu-hot-unplug, even these might get pushed
++ * away if not KTHREAD_IS_PER_CPU.
++ *
++ * Specifically, migration_disabled() tasks must not fail the
++ * cpumask_any_and_distribute() pick below, esp. so on
++ * SCA_MIGRATE_ENABLE, otherwise we'll not call
++ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++
++ if (!kthread && !cpumask_subset(new_mask, cpu_allowed_mask)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ /*
++ * Must re-check here, to close a race against __kthread_bind(),
++ * sched_setaffinity() is not guaranteed to observe the flag.
++ */
++ if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (cpumask_equal(&p->cpus_mask, new_mask))
++ goto out;
++
++ dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __do_set_cpus_allowed(p, new_mask);
++
++ if (flags & SCA_USER)
++ user_mask = clear_user_cpus_ptr(p);
++
++ ret = affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++ kfree(user_mask);
++
++ return ret;
++
++out:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++ return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++ const struct cpumask *new_mask, u32 flags)
++{
++ unsigned long irq_flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ return __set_cpus_allowed_ptr(p, new_mask, 0);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask
++ * and pointing @p->user_cpus_ptr to a copy of the old mask.
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++ struct cpumask *new_mask,
++ const struct cpumask *subset_mask)
++{
++ struct cpumask *user_mask = NULL;
++ unsigned long irq_flags;
++ raw_spinlock_t *lock;
++ struct rq *rq;
++ int err;
++
++ if (!p->user_cpus_ptr) {
++ user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
++ if (!user_mask)
++ return -ENOMEM;
++ }
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
++ err = -EINVAL;
++ goto err_unlock;
++ }
++
++ /*
++ * We're about to butcher the task affinity, so keep track of what
++ * the user asked for in case we're able to restore it later on.
++ */
++ if (user_mask) {
++ cpumask_copy(user_mask, p->cpus_ptr);
++ p->user_cpus_ptr = user_mask;
++ }
++
++ /*return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);*/
++ return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, lock, irq_flags);
++
++err_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ kfree(user_mask);
++ return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ cpumask_var_t new_mask;
++ const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++ alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++ /*
++ * __migrate_task() can fail silently in the face of concurrent
++ * offlining of the chosen destination CPU, so take the hotplug
++ * lock to ensure that the migration succeeds.
++ */
++ cpus_read_lock();
++ if (!cpumask_available(new_mask))
++ goto out_set_mask;
++
++ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++ goto out_free_mask;
++
++ /*
++ * We failed to find a valid subset of the affinity mask for the
++ * task, so override it based on its cpuset hierarchy.
++ */
++ cpuset_cpus_allowed(p, new_mask);
++ override_mask = new_mask;
++
++out_set_mask:
++ if (printk_ratelimit()) {
++ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++ task_pid_nr(p), p->comm,
++ cpumask_pr_args(override_mask));
++ }
++
++ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++ cpus_read_unlock();
++ free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
++ * @p->user_cpus_ptr.
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = p->user_cpus_ptr;
++ unsigned long flags;
++
++ /*
++ * Try to restore the old affinity mask. If this fails, then
++ * we free the mask explicitly to avoid it being inherited across
++ * a subsequent fork().
++ */
++ if (!user_mask || !__sched_setaffinity(p, user_mask))
++ return;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ user_mask = clear_user_cpus_ptr(p);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ kfree(user_mask);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++ const struct cpumask *new_mask, u32 flags)
++{
++ return set_cpus_allowed_ptr(p, new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq;
++
++ if (!schedstat_enabled())
++ return;
++
++ rq = this_rq();
++
++#ifdef CONFIG_SMP
++ if (cpu == rq->cpu) {
++ __schedstat_inc(rq->ttwu_local);
++ __schedstat_inc(p->stats.nr_wakeups_local);
++ } else {
++ /** Alt schedule FW ToDo:
++ * How to do ttwu_wake_remote
++ */
++ }
++#endif /* CONFIG_SMP */
++
++ __schedstat_inc(rq->ttwu_count);
++ __schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable and perform wakeup-preemption.
++ */
++static inline void
++ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ check_preempt_curr(rq);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible--;
++
++ if (
++#ifdef CONFIG_SMP
++ !(wake_flags & WF_MIGRATED) &&
++#endif
++ p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ activate_task(p, rq);
++ ttwu_do_wakeup(rq, p, 0);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ * for (;;) {
++ * set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ * if (CONDITION)
++ * break;
++ *
++ * schedule();
++ * }
++ * __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ * %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ int ret = 0;
++
++ rq = __task_access_lock(p, &lock);
++ if (task_on_rq_queued(p)) {
++ /* check_preempt_curr() may use rq clock */
++ update_rq_clock(rq);
++ ttwu_do_wakeup(rq, p, wake_flags);
++ ret = 1;
++ }
++ __task_access_unlock(p, lock);
++
++ return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++ struct llist_node *llist = arg;
++ struct rq *rq = this_rq();
++ struct task_struct *p, *t;
++ struct rq_flags rf;
++
++ if (!llist)
++ return;
++
++ /*
++	 * rq::ttwu_pending is a racy indication of outstanding wakeups.
++	 * The races are such that false negatives are possible, since
++	 * they are shorter lived than false positives would be.
++ */
++ WRITE_ONCE(rq->ttwu_pending, 0);
++
++ rq_lock_irqsave(rq, &rf);
++ update_rq_clock(rq);
++
++ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++ if (WARN_ON_ONCE(p->on_cpu))
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++ set_task_cpu(p, cpu_of(rq));
++
++ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++ }
++
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++void send_call_function_single_ipi(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (!set_nr_if_polling(rq->idle))
++ arch_send_call_function_single_ipi(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++ WRITE_ONCE(rq->ttwu_pending, 1);
++ __smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(int cpu, int wake_flags)
++{
++ /*
++ * Do not complicate things with the async wake_list while the CPU is
++ * in hotplug state.
++ */
++ if (!cpu_active(cpu))
++ return false;
++
++ /*
++ * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++ */
++ if (!cpus_share_cache(smp_processor_id(), cpu))
++ return true;
++
++ /*
++ * If the task is descheduling and the only running task on the
++ * CPU then use the wakelist to offload the task activation to
++ * the soon-to-be-idle CPU as the current CPU is likely busy.
++ * nr_running is checked to avoid unnecessary task stacking.
++ */
++ if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
++ return true;
++
++ return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
++ if (WARN_ON_ONCE(cpu == smp_processor_id()))
++ return false;
++
++ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++ __ttwu_queue_wakelist(p, cpu, wake_flags);
++ return true;
++ }
++
++ return false;
++}
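++
++/*
++ * __is_defined(ALT_SCHED_TTWU_QUEUE) acts as a compile-time switch here:
++ * when the macro is not defined the wakelist path above folds away
++ * entirely and every wakeup takes the direct ttwu_queue() route below.
++ */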
++
++void wake_up_if_idle(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ rcu_read_lock();
++
++ if (!is_idle_task(rcu_dereference(rq->curr)))
++ goto out;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (is_idle_task(rq->curr))
++ resched_curr(rq);
++ /* Else CPU is not idle, do nothing here */
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out:
++ rcu_read_unlock();
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++ if (this_cpu == that_cpu)
++ return true;
++
++ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (ttwu_queue_wakelist(p, cpu, wake_flags))
++ return;
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++ ttwu_do_activate(rq, p, wake_flags);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of PREEMPT_RT saved_state:
++ *
++ * The related locking code always holds p::pi_lock when updating
++ * p::saved_state, which means the code is fully serialized in both cases.
++ *
++ * The lock wait and lock wakeups happen via TASK_RTLOCK_WAIT. No other
++ * bits set. This allows to distinguish all wakeup scenarios.
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++ state != TASK_RTLOCK_WAIT);
++ }
++
++ if (READ_ONCE(p->__state) & state) {
++ *success = 1;
++ return true;
++ }
++
++#ifdef CONFIG_PREEMPT_RT
++ /*
++ * Saved state preserves the task state across blocking on
++ * an RT lock. If the state matches, set p::saved_state to
++ * TASK_RUNNING, but do not wake the task because it waits
++ * for a lock wakeup. Also indicate success because from
++ * the regular waker's point of view this has succeeded.
++ *
++ * After acquiring the lock the task will restore p::__state
++ * from p::saved_state which ensures that the regular
++ * wakeup is not lost. The restore will also set
++ * p::saved_state to TASK_RUNNING so any further tests will
++ * not result in false positives vs. @success
++ */
++ if (p->saved_state & state) {
++ p->saved_state = TASK_RUNNING;
++ *success = 1;
++ }
++#endif
++ return false;
++}
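++
++/*
++ * Example of the mask test above: wake_up_process() passes
++ * state == TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE), so a
++ * task sleeping in either state matches and *success is set, while a
++ * task in, say, TASK_STOPPED does not and the wakeup is refused.
++ */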
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ * MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ * A) UNLOCK of the rq(c0)->lock scheduling out task t
++ * B) migration for t is required to synchronize *both* rq(c0)->lock and
++ * rq(c1)->lock (if not at the same time, then in that order).
++ * C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ * CPU0 CPU1 CPU2
++ *
++ * LOCK rq(0)->lock
++ * sched-out X
++ * sched-in Y
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(0)->lock // orders against CPU0
++ * dequeue X
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(1)->lock
++ * enqueue X
++ * UNLOCK rq(1)->lock
++ *
++ * LOCK rq(1)->lock // orders against CPU2
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(1)->lock
++ *
++ *
++ * BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
++ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ * LOCK rq(0)->lock LOCK X->pi_lock
++ * dequeue X
++ * sched-out X
++ * smp_store_release(X->on_cpu, 0);
++ *
++ * smp_cond_load_acquire(&X->on_cpu, !VAL);
++ * X->state = WAKING
++ * set_task_cpu(X,2)
++ *
++ * LOCK rq(2)->lock
++ * enqueue X
++ * X->state = RUNNING
++ * UNLOCK rq(2)->lock
++ *
++ * LOCK rq(2)->lock // orders against CPU1
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(2)->lock
++ *
++ * UNLOCK X->pi_lock
++ * UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ * If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ * - p->sched_class
++ * - p->cpus_ptr
++ * - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
++ * - ttwu_queue() -- new rq, for enqueue of the task;
++ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ * %false otherwise.
++ */
++static int try_to_wake_up(struct task_struct *p, unsigned int state,
++ int wake_flags)
++{
++ unsigned long flags;
++ int cpu, success = 0;
++
++ preempt_disable();
++ if (p == current) {
++ /*
++ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++ * == smp_processor_id()'. Together this means we can special
++ * case the whole 'p->on_rq && ttwu_runnable()' case below
++ * without taking any locks.
++ *
++ * In particular:
++ * - we rely on Program-Order guarantees for all the ordering,
++ * - we're serialized against set_special_state() by virtue of
++ * it disabling IRQs (this allows not taking ->pi_lock).
++ */
++ if (!ttwu_state_match(p, state, &success))
++ goto out;
++
++ trace_sched_waking(p);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++ goto out;
++ }
++
++ /*
++ * If we are going to wake up a thread waiting for CONDITION we
++ * need to ensure that CONDITION=1 done by the caller can not be
++ * reordered with p->state check below. This pairs with smp_store_mb()
++ * in set_current_state() that the waiting thread does.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ smp_mb__after_spinlock();
++ if (!ttwu_state_match(p, state, &success))
++ goto unlock;
++
++ trace_sched_waking(p);
++
++ /*
++ * Ensure we load p->on_rq _after_ p->state, otherwise it would
++ * be possible to, falsely, observe p->on_rq == 0 and get stuck
++ * in smp_cond_load_acquire() below.
++ *
++ * sched_ttwu_pending() try_to_wake_up()
++ * STORE p->on_rq = 1 LOAD p->state
++ * UNLOCK rq->lock
++ *
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * UNLOCK rq->lock
++ *
++ * [task p]
++ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++	 * A similar smp_rmb() lives in try_invoke_on_locked_down_task().
++ */
++ smp_rmb();
++ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++ goto unlock;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++ * possible to, falsely, observe p->on_cpu == 0.
++ *
++ * One must be running (->on_cpu == 1) in order to remove oneself
++ * from the runqueue.
++ *
++ * __schedule() (switch to task 'p') try_to_wake_up()
++ * STORE p->on_cpu = 1 LOAD p->on_rq
++ * UNLOCK rq->lock
++ *
++ * __schedule() (put 'p' to sleep)
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * STORE p->on_rq = 0 LOAD p->on_cpu
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++ * schedule()'s deactivate_task() has 'happened' and p will no longer
++	 * care about its own p->state. See the comment in __schedule().
++ */
++ smp_acquire__after_ctrl_dep();
++
++ /*
++ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++ * == 0), which means we need to do an enqueue, change p->state to
++ * TASK_WAKING such that we can unlock p->pi_lock before doing the
++ * enqueue, such as ttwu_queue_wakelist().
++ */
++ WRITE_ONCE(p->__state, TASK_WAKING);
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, consider queueing p on the remote CPU's wake_list
++ * which potentially sends an IPI instead of spinning on p->on_cpu to
++ * let the waker make forward progress. This is safe because IRQs are
++ * disabled and the IPI will deliver after on_cpu is cleared.
++ *
++ * Ensure we load task_cpu(p) after p->on_cpu:
++ *
++ * set_task_cpu(p, cpu);
++ * STORE p->cpu = @cpu
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock
++ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
++ * STORE p->on_cpu = 1 LOAD p->cpu
++ *
++ * to ensure we observe the correct CPU on which the task is currently
++ * scheduling.
++ */
++ if (smp_load_acquire(&p->on_cpu) &&
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
++ goto unlock;
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, wait until it's done referencing the task.
++ *
++ * Pairs with the smp_store_release() in finish_task().
++ *
++ * This ensures that tasks getting woken will be fully ordered against
++ * their previous state and preserve Program Order.
++ */
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ sched_task_ttwu(p);
++
++ cpu = select_task_rq(p);
++
++ if (cpu != task_cpu(p)) {
++ if (p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ wake_flags |= WF_MIGRATED;
++ psi_ttwu_dequeue(p);
++ set_task_cpu(p, cpu);
++ }
++#else
++ cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++ ttwu_queue(p, cpu, wake_flags);
++unlock:
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++out:
++ if (success)
++ ttwu_stat(p, task_cpu(p), wake_flags);
++ preempt_enable();
++
++ return success;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it. This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required. Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ * Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++ struct rq *rq = NULL;
++ unsigned int state;
++ struct rq_flags rf;
++ int ret;
++
++ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++ state = READ_ONCE(p->__state);
++
++ /*
++ * Ensure we load p->on_rq after p->__state, otherwise it would be
++ * possible to, falsely, observe p->on_rq == 0.
++ *
++ * See try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++
++ /*
++ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++ * the task is blocked. Make sure to check @state since ttwu() can drop
++ * locks at the end, see ttwu_queue_wakelist().
++ */
++ if (state == TASK_RUNNING || state == TASK_WAKING || p->on_rq)
++ rq = __task_rq_lock(p, &rf);
++
++ /*
++ * At this point the task is pinned; either:
++ * - blocked and we're holding off wakeups (pi->lock)
++ * - woken, and we're holding off enqueue (rq->lock)
++ * - queued, and we're holding off schedule (rq->lock)
++ * - running, and we're holding off de-schedule (rq->lock)
++ *
++ * The called function (@func) can use: task_curr(), p->on_rq and
++ * p->__state to differentiate between these states.
++ */
++ ret = func(p, arg);
++
++ if (rq)
++ __task_rq_unlock(rq, &rf);
++
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++ return ret;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++ return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++ return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ p->on_rq = 0;
++ p->on_cpu = 0;
++ p->utime = 0;
++ p->stime = 0;
++ p->sched_time = 0;
++
++#ifdef CONFIG_SCHEDSTATS
++ /* Even if schedstat is disabled, there should not be garbage */
++ memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++ INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++ p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++ p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ __sched_fork(clone_flags, p);
++ /*
++ * We mark the process as NEW here. This guarantees that
++ * nobody will actually run it, and a signal or other external
++ * event cannot wake it up and insert it on the runqueue either.
++ */
++ p->__state = TASK_NEW;
++
++ /*
++ * Make sure we do not leak PI boosting priority to the child.
++ */
++ p->prio = current->normal_prio;
++
++ /*
++ * Revert to default priority/policy on fork if requested.
++ */
++ if (unlikely(p->sched_reset_on_fork)) {
++ if (task_has_rt_policy(p)) {
++ p->policy = SCHED_NORMAL;
++ p->static_prio = NICE_TO_PRIO(0);
++ p->rt_priority = 0;
++ } else if (PRIO_TO_NICE(p->static_prio) < 0)
++ p->static_prio = NICE_TO_PRIO(0);
++
++ p->prio = p->normal_prio = p->static_prio;
++
++ /*
++ * We don't need the reset flag anymore after the fork. It has
++ * fulfilled its duty:
++ */
++ p->sched_reset_on_fork = 0;
++ }
++
++#ifdef CONFIG_SCHED_INFO
++ if (unlikely(sched_info_on()))
++ memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++ init_task_preempt_count(p);
++
++ return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ /*
++ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++ * required yet, but lockdep gets upset if rules are violated.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ /*
++	 * Share the timeslice between parent and child, so the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * preserving scheduling fairness across fork().
++ */
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ rq->curr->time_slice /= 2;
++ p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++ hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++ if (p->time_slice < RESCHED_NS) {
++ p->time_slice = sched_timeslice_ns;
++ resched_curr(rq);
++ }
++ sched_task_fork(p, rq);
++ raw_spin_unlock(&rq->lock);
++
++ rseq_migrate(p);
++ /*
++ * We're setting the CPU for the first time, we don't migrate,
++ * so use __set_task_cpu().
++ */
++ __set_task_cpu(p, smp_processor_id());
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
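++
++/*
++ * Worked example of the timeslice split above (illustrative): if the
++ * parent had 4ms left, parent and child each end up with 2ms, so the
++ * total pending timeslice is unchanged. Only when the child's share
++ * falls below RESCHED_NS is it refilled to a full sched_timeslice_ns,
++ * with the parent marked for reschedule.
++ */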
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++ if (enabled)
++ static_branch_enable(&sched_schedstats);
++ else
++ static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++ if (!schedstat_enabled()) {
++ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++ static_branch_enable(&sched_schedstats);
++ }
++}
++
++static int __init setup_schedstats(char *str)
++{
++ int ret = 0;
++ if (!str)
++ goto out;
++
++ if (!strcmp(str, "enable")) {
++ set_schedstats(true);
++ ret = 1;
++ } else if (!strcmp(str, "disable")) {
++ set_schedstats(false);
++ ret = 1;
++ }
++out:
++ if (!ret)
++ pr_warn("Unable to parse schedstats=\n");
++
++ return ret;
++}
++__setup("schedstats=", setup_schedstats);
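++
++/*
++ * For reference: schedstats can be toggled at boot with the
++ * "schedstats=enable|disable" kernel parameter handled above, or (with
++ * CONFIG_PROC_SYSCTL) at runtime via the sysctl registered below, e.g.
++ *
++ *	sysctl kernel.sched_schedstats=1
++ */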
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++ size_t *lenp, loff_t *ppos)
++{
++ struct ctl_table t;
++ int err;
++ int state = static_branch_likely(&sched_schedstats);
++
++ if (write && !capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ t = *table;
++ t.data = &state;
++ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++ if (err < 0)
++ return err;
++ if (write)
++ set_schedstats(state);
++ return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++ {
++ .procname = "sched_schedstats",
++ .data = NULL,
++ .maxlen = sizeof(unsigned int),
++ .mode = 0644,
++ .proc_handler = sysctl_schedstats,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
++ {}
++};
++static int __init sched_core_sysctl_init(void)
++{
++ register_sysctl_init("kernel", sched_core_sysctls);
++ return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++ rseq_migrate(p);
++ /*
++ * Fork balancing, do it here and not earlier because:
++ * - cpus_ptr can change in the fork path
++ * - any previously selected CPU might disappear through hotplug
++ *
++ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++ * as we're not fully set-up yet.
++ */
++ __set_task_cpu(p, cpu_of(rq));
++#endif
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ activate_task(p, rq);
++ trace_sched_wakeup_new(p);
++ check_preempt_curr(rq);
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++ static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++ static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++ if (!static_branch_unlikely(&preempt_notifier_key))
++ WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++ hlist_add_head(¬ifier->link, ¤t->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++ hlist_del(¬ifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++ /*
++ * Claim the task as running, we do this before switching to it
++ * such that any running task will have this set.
++ *
++ * See the ttwu() WF_ON_CPU case and its ordering comment.
++ */
++ WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++ /*
++ * This must be the very last reference to @prev from this CPU. After
++ * p->on_cpu is cleared, the task can be moved to a different CPU. We
++ * must ensure this doesn't happen until the switch is completely
++ * finished.
++ *
++ * In particular, the load of prev->state in finish_task_switch() must
++ * happen before this.
++ *
++ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++ */
++ smp_store_release(&prev->on_cpu, 0);
++#else
++ prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++ void (*func)(struct rq *rq);
++ struct callback_head *next;
++
++ lockdep_assert_held(&rq->lock);
++
++ while (head) {
++ func = (void (*)(struct rq *))head->func;
++ next = head->next;
++ head->next = NULL;
++ head = next;
++
++ func(rq);
++ }
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct callback_head balance_push_callback = {
++ .next = NULL,
++ .func = (void (*)(struct callback_head *))balance_push,
++};
++
++static inline struct callback_head *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++ struct callback_head *head = rq->balance_callback;
++
++ if (likely(!head))
++ return NULL;
++
++ lockdep_assert_rq_held(rq);
++ /*
++ * Must not take balance_push_callback off the list when
++ * splice_balance_callbacks() and balance_callbacks() are not
++ * in the same rq->lock section.
++ *
++ * In that case it would be possible for __schedule() to interleave
++ * and observe the list empty.
++ */
++ if (split && head == &balance_push_callback)
++ head = NULL;
++ else
++ rq->balance_callback = NULL;
++
++ return head;
++}
++
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++ return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++ unsigned long flags;
++
++ if (unlikely(head)) {
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ do_balance_callbacks(rq, head);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++ }
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++ return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++}
++
++#endif
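++
++/*
++ * Sketch of the intended calling pattern for the helpers above (the
++ * real call sites live elsewhere, e.g. in __sched_setscheduler()):
++ * splice the queued callbacks while rq->lock is held, drop the lock,
++ * then run them; balance_callbacks() re-takes rq->lock itself:
++ *
++ *   head = splice_balance_callbacks(rq);
++ *   raw_spin_unlock(&rq->lock);
++ *   balance_callbacks(rq, head);
++ */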
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++ /*
++ * The runqueue lock will be released by the next
++ * task (which is an invalid locking op but in the case
++ * of the scheduler it's an obvious special-case), so we
++ * do an early lockdep release here:
++ */
++ spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /* this is a valid case when another task releases the spinlock */
++ rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++ /*
++ * If we are tracking spinlock dependencies then we have to
++ * fix up the runqueue lock - which gets 'carried over' from
++ * prev into current:
++ */
++ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++}
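++
++/*
++ * Pairing sketch: prepare_lock_switch() hands rq->lock's lockdep
++ * ownership to @next before switch_to(); the new task then runs
++ * finish_lock_switch() (via finish_task_switch()) to re-acquire the
++ * dep_map and finally drop rq->lock for real.
++ */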
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ kcov_prepare_switch(prev);
++ sched_info_switch(rq, prev, next);
++ perf_event_task_sched_out(prev, next);
++ rseq_preempt(prev);
++ fire_sched_out_preempt_notifiers(prev, next);
++ kmap_local_sched_out();
++ prepare_task(next);
++ prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock. (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ struct rq *rq = this_rq();
++ struct mm_struct *mm = rq->prev_mm;
++ unsigned int prev_state;
++
++ /*
++ * The previous task will have left us with a preempt_count of 2
++ * because it left us after:
++ *
++ * schedule()
++ * preempt_disable(); // 1
++ * __schedule()
++ * raw_spin_lock_irq(&rq->lock) // 2
++ *
++ * Also, see FORK_PREEMPT_COUNT.
++ */
++ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++ "corrupted preempt_count: %s/%d/0x%x\n",
++ current->comm, current->pid, preempt_count()))
++ preempt_count_set(FORK_PREEMPT_COUNT);
++
++ rq->prev_mm = NULL;
++
++ /*
++ * A task struct has one reference for the use as "current".
++ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++ * schedule one last time. The schedule call will never return, and
++ * the scheduled task must drop that reference.
++ *
++ * We must observe prev->state before clearing prev->on_cpu (in
++ * finish_task), otherwise a concurrent wakeup can get prev
++ * running on another CPU and we could race with its RUNNING -> DEAD
++ * transition, resulting in a double drop.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ vtime_task_switch(prev);
++ perf_event_task_sched_in(prev, current);
++ finish_task(prev);
++ tick_nohz_task_switch();
++ finish_lock_switch(rq);
++ finish_arch_post_lock_switch();
++ kcov_finish_switch(current);
++ /*
++ * kmap_local_sched_out() is invoked with rq::lock held and
++ * interrupts disabled. There is no requirement for that, but the
++ * sched out code does not have an interrupt enabled section.
++ * Restoring the maps on sched in does not require interrupts being
++ * disabled either.
++ */
++ kmap_local_sched_in();
++
++ fire_sched_in_preempt_notifiers(current);
++ /*
++ * When switching through a kernel thread, the loop in
++ * membarrier_{private,global}_expedited() may have observed that
++ * kernel thread and not issued an IPI. It is therefore possible to
++ * schedule between user->kernel->user threads without passing though
++ * switch_mm(). Membarrier requires a barrier after storing to
++ * rq->curr, before returning to userspace, so provide them here:
++ *
++ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++ * provided by mmdrop(),
++ * - a sync_core for SYNC_CORE.
++ */
++ if (mm) {
++ membarrier_mm_sync_core_before_usermode(mm);
++ mmdrop_sched(mm);
++ }
++ if (unlikely(prev_state == TASK_DEAD)) {
++ /* Task is done with its stack. */
++ put_task_stack(prev);
++
++ put_task_struct_rcu_user(prev);
++ }
++
++ return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ /*
++ * New tasks start with FORK_PREEMPT_COUNT, see there and
++ * finish_task_switch() for details.
++ *
++ * finish_task_switch() will drop rq->lock() and lower preempt_count
++ * and the preempt_enable() will end up enabling preemption (on
++ * PREEMPT_COUNT kernels).
++ */
++
++ finish_task_switch(prev);
++ preempt_enable();
++
++ if (current->set_child_tid)
++ put_user(task_pid_vnr(current), current->set_child_tid);
++
++ calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ prepare_task_switch(rq, prev, next);
++
++ /*
++ * For paravirt, this is coupled with an exit in switch_to to
++ * combine the page table reload and the switch backend into
++ * one hypercall.
++ */
++ arch_start_context_switch(prev);
++
++ /*
++ * kernel -> kernel lazy + transfer active
++ * user -> kernel lazy + mmgrab() active
++ *
++ * kernel -> user switch + mmdrop() active
++ * user -> user switch
++ */
++ if (!next->mm) { // to kernel
++ enter_lazy_tlb(prev->active_mm, next);
++
++ next->active_mm = prev->active_mm;
++ if (prev->mm) // from user
++ mmgrab(prev->active_mm);
++ else
++ prev->active_mm = NULL;
++ } else { // to user
++ membarrier_switch_mm(rq, prev->active_mm, next->mm);
++ /*
++ * sys_membarrier() requires an smp_mb() between setting
++ * rq->curr / membarrier_switch_mm() and returning to userspace.
++ *
++ * The below provides this either through switch_mm(), or in
++ * case 'prev->active_mm == next->mm' through
++ * finish_task_switch()'s mmdrop().
++ */
++ switch_mm_irqs_off(prev->active_mm, next->mm, next);
++
++ if (!prev->mm) { // from kernel
++ /* will mmdrop() in finish_task_switch(). */
++ rq->prev_mm = prev->active_mm;
++ prev->active_mm = NULL;
++ }
++ }
++
++ prepare_lock_switch(rq, next);
++
++ /* Here we just switch the register state and the stack. */
++ switch_to(prev, next, prev);
++ barrier();
++
++ return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_online_cpu(i)
++ sum += cpu_rq(i)->nr_running;
++
++ return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race. The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++ return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches(void)
++{
++ int i;
++ unsigned long long sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += cpu_rq(i)->nr_switches;
++
++ return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait, even though that CPU might not
++ * end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++ return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += nr_iowait_cpu(i);
++
++ return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++ s64 ns = rq->clock_task - p->last_ran;
++
++ p->sched_time += ns;
++ cgroup_account_cputime(p, ns);
++ account_group_exec_runtime(p, ns);
++
++ p->time_slice -= ns;
++ p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Separately, return the current task's pending runtime that has not
++ * been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++ /*
++ * 64-bit doesn't need locks to atomically read a 64-bit value.
++ * So we have an optimization chance when the task's delta_exec is 0.
++ * Reading ->on_cpu is racy, but this is ok.
++ *
++ * If we race with it leaving CPU, we'll take a lock. So we're correct.
++ * If we race with it entering CPU, unaccounted time is 0. This is
++ * indistinguishable from the read occurring a few cycles earlier.
++ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++ * been accounted, so we're correct here as well.
++ */
++ if (!p->on_cpu || !task_on_rq_queued(p))
++ return tsk_seruntime(p);
++#endif
++
++ rq = task_access_lock_irqsave(p, &lock, &flags);
++ /*
++ * Must be ->curr _and_ ->on_rq. If dequeued, we would
++ * project cycles that may never be accounted to this
++ * thread, breaking clock_gettime().
++ */
++ if (p == rq->curr && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ update_curr(rq, p);
++ }
++ ns = tsk_seruntime(p);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++ struct task_struct *p = rq->curr;
++
++ if (is_idle_task(p))
++ return;
++
++ update_curr(rq, p);
++ cpufreq_update_util(rq, 0);
++
++ /*
++ * Tasks with less than RESCHED_NS of time slice left will be
++ * rescheduled.
++ */
++ if (p->time_slice >= RESCHED_NS)
++ return;
++ set_tsk_need_resched(p);
++ set_preempt_need_resched();
++}
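++
++/*
++ * Worked example (values assumed for illustration only): with a 4ms
++ * time slice and HZ=1000, a task that has run for three ticks has had
++ * roughly 3ms subtracted from p->time_slice by update_curr(); once the
++ * remainder drops below RESCHED_NS, the tick above flags it to be
++ * rescheduled.
++ */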
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++ u64 resched_latency, now = rq_clock(rq);
++ static bool warned_once;
++
++ if (sysctl_resched_latency_warn_once && warned_once)
++ return 0;
++
++ if (!need_resched() || !latency_warn_ms)
++ return 0;
++
++ if (system_state == SYSTEM_BOOTING)
++ return 0;
++
++ if (!rq->last_seen_need_resched_ns) {
++ rq->last_seen_need_resched_ns = now;
++ rq->ticks_without_resched = 0;
++ return 0;
++ }
++
++ rq->ticks_without_resched++;
++ resched_latency = now - rq->last_seen_need_resched_ns;
++ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++ return 0;
++
++ warned_once = true;
++
++ return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++ long val;
++
++ if ((kstrtol(str, 0, &val))) {
++ pr_warn("Unable to set resched_latency_warn_ms\n");
++ return 1;
++ }
++
++ sysctl_resched_latency_warn_ms = val;
++ return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++ int cpu __maybe_unused = smp_processor_id();
++ struct rq *rq = cpu_rq(cpu);
++ u64 resched_latency;
++
++ arch_scale_freq_tick();
++ sched_clock_tick();
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ scheduler_task_tick(rq);
++ if (sched_feat(LATENCY_WARN))
++ resched_latency = cpu_resched_latency(rq);
++ calc_global_load_tick(rq);
++
++ rq->last_tick = rq->clock;
++ raw_spin_unlock(&rq->lock);
++
++ if (sched_feat(LATENCY_WARN) && resched_latency)
++ resched_latency_warn(cpu, resched_latency);
++
++ perf_event_task_tick();
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int sg_balance_cpu_stop(void *data)
++{
++ struct rq *rq = this_rq();
++ struct task_struct *p = data;
++ cpumask_t tmp;
++ unsigned long flags;
++
++ local_irq_save(flags);
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++
++ rq->active_balance = 0;
++ /* _something_ may have changed the task, double check again */
++ if (task_on_rq_queued(p) && task_rq(p) == rq &&
++ cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++ !is_migration_disabled(p)) {
++ int cpu = cpu_of(rq);
++ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
++ rq = move_queued_task(rq, p, dcpu);
++ }
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock(&p->pi_lock);
++
++ local_irq_restore(flags);
++
++ return 0;
++}
++
++/* sg_balance_trigger - trigger sibling group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++ struct task_struct *curr;
++ int res;
++
++ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++ return 0;
++ curr = rq->curr;
++ res = !is_idle_task(curr) && (1 == rq->nr_running) &&
++ cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
++ !is_migration_disabled(curr) && !rq->active_balance;
++
++ if (res)
++ rq->active_balance = 1;
++
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ if (res)
++ stop_one_cpu_nowait(cpu, sg_balance_cpu_stop, curr,
++ &rq->active_balance_work);
++ return res;
++}
++
++/*
++ * sg_balance - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq)
++{
++ cpumask_t chk;
++ int cpu = cpu_of(rq);
++
++ /* exit when cpu is offline */
++ if (unlikely(!rq->online))
++ return;
++
++ /*
++ * Only a cpu in the sibling idle group will do the checking and then
++ * find potential cpus which can migrate the currently running task
++ */
++ if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++ cpumask_andnot(&chk, cpu_online_mask, sched_rq_watermark) &&
++ cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++ int i;
++
++ for_each_cpu_wrap(i, &chk, cpu) {
++ if (cpumask_subset(cpu_smt_mask(i), &chk) &&
++ sg_balance_trigger(i))
++ return;
++ }
++ }
++}
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++ int cpu;
++ atomic_t state;
++ struct delayed_work work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct tick_work *twork = container_of(dwork, struct tick_work, work);
++ int cpu = twork->cpu;
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr;
++ unsigned long flags;
++ u64 delta;
++ int os;
++
++ /*
++ * Handle the tick only if it appears the remote CPU is running in full
++ * dynticks mode. The check is racy by nature, but missing a tick or
++ * having one too many is no big deal because the scheduler tick updates
++ * statistics and checks timeslices in a time-independent way, regardless
++ * of when exactly it is running.
++ */
++ if (!tick_nohz_tick_stopped_cpu(cpu))
++ goto out_requeue;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ curr = rq->curr;
++ if (cpu_is_offline(cpu))
++ goto out_unlock;
++
++ update_rq_clock(rq);
++ if (!is_idle_task(curr)) {
++ /*
++ * Make sure the next tick runs within a reasonable
++ * amount of time.
++ */
++ delta = rq_clock_task(rq) - curr->last_ran;
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ }
++ scheduler_task_tick(rq);
++
++ calc_load_nohz_remote(rq);
++out_unlock:
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out_requeue:
++ /*
++ * Run the remote tick once per second (1Hz). This arbitrary
++ * frequency is large enough to avoid overload but short enough
++ * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
++ */
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++ int os;
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ cancel_delayed_work_sync(&twork->work);
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++ tick_work_cpu = alloc_percpu(struct tick_work);
++ BUG_ON(!tick_work_cpu);
++ return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++ defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++ if (preempt_count() == val) {
++ unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++ current->preempt_disable_ip = ip;
++#endif
++ trace_preempt_off(CALLER_ADDR0, ip);
++ }
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++ return;
++#endif
++ __preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Spinlock count overflowing soon?
++ */
++ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++ PREEMPT_MASK - 10);
++#endif
++ preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++ if (preempt_count() == val)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++ return;
++ /*
++ * Is the spinlock portion underflowing?
++ */
++ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++ !(preempt_count() & PREEMPT_MASK)))
++ return;
++#endif
++
++ preempt_latency_stop(val);
++ __preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ return p->preempt_disable_ip;
++#else
++ return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++ /* Save this before calling printk(), since that will clobber it */
++ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++ if (oops_in_progress)
++ return;
++
++ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++ prev->comm, prev->pid, preempt_count());
++
++ debug_show_held_locks(prev);
++ print_modules();
++ if (irqs_disabled())
++ print_irqtrace_events(prev);
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)
++ && in_atomic_preempt_off()) {
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, preempt_disable_ip);
++ }
++ if (panic_on_warn)
++ panic("scheduling while atomic\n");
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++ if (task_stack_end_corrupted(prev))
++ panic("corrupted stack end detected inside scheduler\n");
++
++ if (task_scs_end_corrupted(prev))
++ panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++ prev->comm, prev->pid, prev->non_block_count);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++ }
++#endif
++
++ if (unlikely(in_atomic_preempt_off())) {
++ __schedule_bug(prev);
++ preempt_count_set(PREEMPT_DISABLED);
++ }
++ rcu_sleep_check();
++ SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++ schedstat_inc(this_rq()->sched_count);
++}
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++ sched_rq_pending_mask.bits[0],
++ sched_rq_watermark[0].bits[0],
++ sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef CONFIG_SMP
++
++#define SCHED_RQ_NR_MIGRATION (32U)
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ * Will try to migrate the minimum of half of @rq's nr_running tasks
++ * and SCHED_RQ_NR_MIGRATION to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++ struct task_struct *p, *skip = rq->curr;
++ int nr_migrated = 0;
++ int nr_tries = min(rq->nr_running / 2, SCHED_RQ_NR_MIGRATION);
++
++ while (skip != rq->idle && nr_tries &&
++ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++ skip = sched_rq_next_task(p, rq);
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++ __SCHED_DEQUEUE_TASK(p, rq, 0);
++ set_task_cpu(p, dest_cpu);
++ sched_task_sanity_check(p, dest_rq);
++ __SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++ nr_migrated++;
++ }
++ nr_tries--;
++ }
++
++ return nr_migrated;
++}
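++
++/*
++ * Example: for a source rq with nr_running == 10, nr_tries is
++ * min(10 / 2, SCHED_RQ_NR_MIGRATION) == 5, so at most five queued
++ * tasks (never the running or idle task) are eligible to move.
++ */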
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++ struct cpumask *topo_mask, *end_mask;
++
++ if (unlikely(!rq->online))
++ return 0;
++
++ if (cpumask_empty(&sched_rq_pending_mask))
++ return 0;
++
++ topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++ do {
++ int i;
++ for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
++ int nr_migrated;
++ struct rq *src_rq;
++
++ src_rq = cpu_rq(i);
++ if (!do_raw_spin_trylock(&src_rq->lock))
++ continue;
++ spin_acquire(&src_rq->lock.dep_map,
++ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++ src_rq->nr_running -= nr_migrated;
++ if (src_rq->nr_running < 2)
++ cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++ rq->nr_running += nr_migrated;
++ if (rq->nr_running > 1)
++ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++ cpufreq_update_util(rq, 0);
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++
++ return 1;
++ }
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++ }
++ } while (++topo_mask < end_mask);
++
++ return 0;
++}
++#endif
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, since there's
++ * no point rescheduling when so little time is left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++ if (unlikely(rq->idle == p))
++ return;
++
++ update_curr(rq, p);
++
++ if (p->time_slice < RESCHED_NS)
++ time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu, struct task_struct *prev)
++{
++ struct task_struct *next;
++
++ if (unlikely(rq->skip)) {
++ next = rq_runnable_task(rq);
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++ rq->skip = NULL;
++ schedstat_inc(rq->sched_goidle);
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = rq_runnable_task(rq);
++#endif
++ }
++ rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ return next;
++ }
++
++ next = sched_rq_first_task(rq);
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++ schedstat_inc(rq->sched_goidle);
++ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = sched_rq_first_task(rq);
++#endif
++ }
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu,
++ * next);*/
++ return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE 0x0
++#define SM_PREEMPT 0x1
++#define SM_RTLOCK_WAIT 0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT (~0U)
++#else
++# define SM_MASK_PREEMPT SM_PREEMPT
++#endif
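++
++/*
++ * How the SM_MASK_PREEMPT test in __schedule() resolves:
++ *
++ *   sched_mode      !PREEMPT_RT            PREEMPT_RT
++ *   SM_NONE         voluntary switch       voluntary switch
++ *   SM_PREEMPT      treated as preemption  treated as preemption
++ *   SM_RTLOCK_WAIT  treated as preemption  voluntary switch
++ */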
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ * paths. For example, see arch/x86/entry_64.S.
++ *
++ * To drive preemption between tasks, the scheduler sets the flag in timer
++ * interrupt handler scheduler_tick().
++ *
++ * 3. Wakeups don't really cause entry into schedule(). They add a
++ * task to the run-queue and that's it.
++ *
++ * Now, if the new task added to the run-queue preempts the current
++ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ * called on the nearest possible occasion:
++ *
++ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ * - in syscall or exception context, at the next outermost
++ * preempt_enable(). (this might be as soon as the wake_up()'s
++ * spin_unlock()!)
++ *
++ * - in IRQ context, return from interrupt-handler to
++ * preemptible context
++ *
++ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ * then at the next:
++ *
++ * - cond_resched() call
++ * - explicit schedule() call
++ * - return from syscall or exception to user-space
++ * - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++ struct task_struct *prev, *next;
++ unsigned long *switch_count;
++ unsigned long prev_state;
++ struct rq *rq;
++ int cpu;
++ int deactivated = 0;
++
++ cpu = smp_processor_id();
++ rq = cpu_rq(cpu);
++ prev = rq->curr;
++
++ schedule_debug(prev, !!sched_mode);
++
++ /* bypassing sched_feat(HRTICK) checking, which Alt schedule FW doesn't support */
++ hrtick_clear(rq);
++
++ local_irq_disable();
++ rcu_note_context_switch(!!sched_mode);
++
++ /*
++ * Make sure that signal_pending_state()->signal_pending() below
++ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++ * done by the caller to avoid the race with signal_wake_up():
++ *
++ * __set_current_state(@state) signal_wake_up()
++ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
++ * wake_up_state(p, state)
++ * LOCK rq->lock LOCK p->pi_state
++ * smp_mb__after_spinlock() smp_mb__after_spinlock()
++ * if (signal_pending_state()) if (p->state & @state)
++ *
++ * Also, the membarrier system call requires a full memory barrier
++ * after coming from user-space, before storing to rq->curr.
++ */
++ raw_spin_lock(&rq->lock);
++ smp_mb__after_spinlock();
++
++ update_rq_clock(rq);
++
++ switch_count = &prev->nivcsw;
++ /*
++ * We must load prev->state once (task_struct::state is volatile), such
++ * that we form a control dependency vs deactivate_task() below.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++ if (signal_pending_state(prev_state, prev)) {
++ WRITE_ONCE(prev->__state, TASK_RUNNING);
++ } else {
++ prev->sched_contributes_to_load =
++ (prev_state & TASK_UNINTERRUPTIBLE) &&
++ !(prev_state & TASK_NOLOAD) &&
++ !(prev->flags & PF_FROZEN);
++
++ if (prev->sched_contributes_to_load)
++ rq->nr_uninterruptible++;
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ sched_task_deactivate(prev, rq);
++ deactivate_task(prev, rq);
++ deactivated = 1;
++
++ if (prev->in_iowait) {
++ atomic_inc(&rq->nr_iowait);
++ delayacct_blkio_start();
++ }
++ }
++ switch_count = &prev->nvcsw;
++ }
++
++ check_curr(prev, rq);
++
++ next = choose_next_task(rq, cpu, prev);
++ clear_tsk_need_resched(prev);
++ clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++ rq->last_seen_need_resched_ns = 0;
++#endif
++
++ if (likely(prev != next)) {
++ if (deactivated)
++ update_sched_rq_watermark(rq);
++ next->last_ran = rq->clock_task;
++ rq->last_ts_switch = rq->clock;
++
++ rq->nr_switches++;
++ /*
++ * RCU users of rcu_dereference(rq->curr) may not see
++ * changes to task_struct made by pick_next_task().
++ */
++ RCU_INIT_POINTER(rq->curr, next);
++ /*
++ * The membarrier system call requires each architecture
++ * to have a full memory barrier after updating
++ * rq->curr, before returning to user-space.
++ *
++ * Here are the schemes providing that barrier on the
++ * various architectures:
++ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++ * switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
++ * - finish_lock_switch() for weakly-ordered
++ * architectures where spin_unlock is a full barrier,
++ * - switch_to() for arm64 (weakly-ordered, spin_unlock
++ * is a RELEASE barrier),
++ */
++ ++*switch_count;
++
++ psi_sched_switch(prev, next, !task_on_rq_queued(prev));
++
++ trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++ /* Also unlocks the rq: */
++ rq = context_switch(rq, prev, next);
++ } else {
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++ }
++
++#ifdef CONFIG_SCHED_SMT
++ sg_balance(rq);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++ /* Causes final put_task_struct in finish_task_switch(): */
++ set_special_state(TASK_DEAD);
++
++ /* Tell freezer to ignore us: */
++ current->flags |= PF_NOFREEZE;
++
++ __schedule(SM_NONE);
++ BUG();
++
++ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++ for (;;)
++ cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ unsigned int task_flags;
++
++ if (task_is_running(tsk))
++ return;
++
++ task_flags = tsk->flags;
++ /*
++ * If a worker goes to sleep, notify and ask workqueue whether it
++ * wants to wake up a task to maintain concurrency.
++ */
++ if (task_flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++ if (task_flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
++ else
++ io_wq_worker_sleeping(tsk);
++ }
++
++ if (tsk_is_pi_blocked(tsk))
++ return;
++
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ blk_flush_plug(tsk->plug, true);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++ else
++ io_wq_worker_running(tsk);
++ }
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++ struct task_struct *tsk = current;
++
++ sched_submit_work(tsk);
++ do {
++ preempt_disable();
++ __schedule(SM_NONE);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++ sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++ /*
++ * As this skips calling sched_submit_work(), which the idle task does
++ * regardless because that function is a nop when the task is in a
++ * TASK_RUNNING state, make sure this isn't used someplace that the
++ * current task can be in any other state. Note, idle is always in the
++ * TASK_RUNNING state.
++ */
++ WARN_ON_ONCE(current->__state);
++ do {
++ __schedule(SM_NONE);
++ } while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++ /*
++ * If we come here after a random call to set_need_resched(),
++ * or we have been woken up remotely but the IPI has not yet arrived,
++ * we haven't yet exited the RCU idle mode. Do it here manually until
++ * we find a better solution.
++ *
++ * NB: There are buggy callers of this function. Ideally we
++ * should warn if prev_state != CONTEXT_USER, but that will trigger
++ * too frequently to make sense yet.
++ */
++ enum ctx_state prev_state = exception_enter();
++ schedule();
++ exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++ sched_preempt_enable_no_resched();
++ schedule();
++ preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++ do {
++ preempt_disable();
++ __schedule(SM_RTLOCK_WAIT);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ __schedule(SM_PREEMPT);
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++
++ /*
++ * Check again in case we missed a preemption opportunity
++ * between schedule and now.
++ */
++ } while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++ /*
++ * If there is a non-zero preempt_count or interrupts are disabled,
++ * we do not want to preempt the current task. Just return..
++ */
++ if (likely(!preemptible()))
++ return;
++
++ preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled preempt_schedule
++#define preempt_schedule_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++ return;
++ preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++ enum ctx_state prev_ctx;
++
++ if (likely(!preemptible()))
++ return;
++
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ /*
++ * Needs preempt disabled in case user_exit() is traced
++ * and the tracer calls preempt_enable_notrace() causing
++ * an infinite recursion.
++ */
++ prev_ctx = exception_enter();
++ __schedule(SM_PREEMPT);
++ exception_exit(prev_ctx);
++
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++ } while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++ return;
++ preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note that this is called and returns with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++ enum ctx_state prev_state;
++
++ /* Catch callers which need to be fixed */
++ BUG_ON(preempt_count() || !irqs_disabled());
++
++ prev_state = exception_enter();
++
++ do {
++ preempt_disable();
++ local_irq_enable();
++ __schedule(SM_PREEMPT);
++ local_irq_disable();
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++
++ exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++ void *key)
++{
++ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
++ return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++ int idx;
++
++ /* Trigger resched if task sched_prio has been modified. */
++ if (task_on_rq_queued(p) && (idx = task_sched_prio_idx(p, rq)) != p->sq_idx) {
++ requeue_task(p, rq, idx);
++ check_preempt_curr(rq);
++ }
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++ p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++ if (pi_task)
++ prio = min(prio, pi_task->prio);
++
++ return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++ return __rt_effective_prio(pi_task, prio);
++}
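++
++/*
++ * Example (kernel prio values): a nice-0 SCHED_NORMAL lock owner has
++ * normal_prio 120; if the top rt_mutex waiter is SCHED_FIFO with
++ * p->prio 10, min(120, 10) boosts the owner to an effective prio of 10
++ * until rt_mutex_setprio() de-boosts it on unlock.
++ */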
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++ int prio;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ /* XXX used to be waiter->prio, not waiter->task->prio */
++ prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++ /*
++ * If nothing changed; bail early.
++ */
++ if (p->pi_top_task == pi_task && prio == p->prio)
++ return;
++
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Set under pi_lock && rq->lock, such that the value can be used under
++ * either lock.
++ *
++ * Note that there is a load of trickery in making this pointer cache work
++ * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++ * ensure a task is de-boosted (pi_task is set to NULL) before the
++ * task is allowed to run again (and can exit). This ensures the pointer
++ * points to a blocked task -- which guarantees the task is present.
++ */
++ p->pi_top_task = pi_task;
++
++ /*
++ * For FIFO/RR we only need to set prio, if that matches we're done.
++ */
++ if (prio == p->prio)
++ goto out_unlock;
++
++ /*
++ * Idle task boosting is a no-no in general. There is one
++ * exception, when PREEMPT_RT and NOHZ is active:
++ *
++ * The idle task calls get_next_timer_interrupt() and holds
++ * the timer wheel base->lock on the CPU and another CPU wants
++ * to access the timer (probably to cancel it). We can safely
++ * ignore the boosting request, as the idle CPU runs this code
++ * with interrupts disabled and will complete the lock
++ * protected section without being interrupted. So there is no
++ * real need to boost.
++ */
++ if (unlikely(p == rq->idle)) {
++ WARN_ON(p != rq->curr);
++ WARN_ON(p->pi_blocked_on);
++ goto out_unlock;
++ }
++
++ trace_sched_pi_setprio(p, pi_task);
++
++ __setscheduler_prio(p, prio);
++
++ check_task_changed(p, rq);
++out_unlock:
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++
++ __balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++
++ preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++ return;
++ /*
++ * We have to be careful, if called from sys_setpriority(),
++ * the task might be in the middle of scheduling on another CPU.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rq = __task_access_lock(p, &lock);
++
++ p->static_prio = NICE_TO_PRIO(nice);
++ /*
++ * The RT priorities are set via sched_setscheduler(), but we still
++ * allow the 'normal' nice value to be set - but as expected
++ * it won't have any effect on scheduling until the task
++ * becomes SCHED_NORMAL/SCHED_BATCH:
++ */
++ if (task_has_rt_policy(p))
++ goto out_unlock;
++
++ p->prio = effective_prio(p);
++
++ check_task_changed(p, rq);
++out_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++ /* Convert nice value [19,-20] to rlimit style value [1,40] */
++ int nice_rlim = nice_to_rlimit(nice);
++
++ return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) ||
++ capable(CAP_SYS_NICE));
++}
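++
++/*
++ * Example: nice_to_rlimit() maps nice [19,-20] onto [1,40], so a task
++ * with a RLIMIT_NICE of 30 may lower its nice value to -10 (which maps
++ * to 30) without CAP_SYS_NICE, but not to -11 (which maps to 31).
++ */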
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++ long nice, retval;
++
++ /*
++ * Setpriority might change our priority at the same moment.
++ * We don't have to worry. Conceptually one call occurs first
++ * and we have a single winner.
++ */
++
++ increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++ nice = task_nice(current) + increment;
++
++ nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++ if (increment < 0 && !can_nice(current, nice))
++ return -EPERM;
++
++ retval = security_task_setnice(current, nice);
++ if (retval)
++ return retval;
++
++ set_user_nice(current, nice);
++ return 0;
++}
++
++#endif
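++
++/*
++ * Example of the clamping in sys_nice() above: a task at nice 17
++ * calling nice(5) first has the increment clamped to the [-40, 40]
++ * nice width, then the result 22 clamped to MAX_NICE, ending at
++ * nice 19.
++ */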
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy               return value   kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle   [0 ... 53]     [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle   [0 ... 39]     100            0/[-20 ... 19]
++ * fifo, rr                   [-1 ... -100]  [99 ... 0]     [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++ task_sched_prio_normal(p, task_rq(p));
++}
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (rq->curr != rq->idle)
++ return 0;
++
++ if (rq->nr_running)
++ return 0;
++
++#ifdef CONFIG_SMP
++ if (rq->ttwu_pending)
++ return 0;
++#endif
++
++ return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++ return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * Return: the task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++ return pid ? find_task_by_vpid(pid) : current;
++}
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++ const struct sched_attr *attr)
++{
++ int policy = attr->sched_policy;
++
++ if (policy == SETPARAM_POLICY)
++ policy = p->policy;
++
++ p->policy = policy;
++
++ /*
++ * Allow the normal nice value to be set, but it will have no
++ * effect on scheduling until the task returns to SCHED_NORMAL/
++ * SCHED_BATCH.
++ */
++ p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++ /*
++ * __sched_setscheduler() ensures attr->sched_priority == 0 when
++ * !rt_policy. Always setting this ensures that things like
++ * getparam()/getattr() don't report silly values for !rt tasks.
++ */
++ p->rt_priority = attr->sched_priority;
++ p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++ const struct cred *cred = current_cred(), *pcred;
++ bool match;
++
++ rcu_read_lock();
++ pcred = __task_cred(p);
++ match = (uid_eq(cred->euid, pcred->euid) ||
++ uid_eq(cred->euid, pcred->uid));
++ rcu_read_unlock();
++ return match;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++ const struct sched_attr *attr,
++ bool user, bool pi)
++{
++ const struct sched_attr dl_squash_attr = {
++ .size = sizeof(struct sched_attr),
++ .sched_policy = SCHED_FIFO,
++ .sched_nice = 0,
++ .sched_priority = 99,
++ };
++ int oldpolicy = -1, policy = attr->sched_policy;
++ int retval, newprio;
++ struct callback_head *head;
++ unsigned long flags;
++ struct rq *rq;
++ int reset_on_fork;
++ raw_spinlock_t *lock;
++
++ /* The pi code expects interrupts enabled */
++ BUG_ON(pi && in_interrupt());
++
++ /*
++ * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio-0 SCHED_FIFO
++ */
++ if (unlikely(SCHED_DEADLINE == policy)) {
++ attr = &dl_squash_attr;
++ policy = attr->sched_policy;
++ }
++recheck:
++ /* Double check policy once rq lock held */
++ if (policy < 0) {
++ reset_on_fork = p->sched_reset_on_fork;
++ policy = oldpolicy = p->policy;
++ } else {
++ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++ if (policy > SCHED_IDLE)
++ return -EINVAL;
++ }
++
++ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++ return -EINVAL;
++
++ /*
++ * Valid priorities for SCHED_FIFO and SCHED_RR are
++ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++ * SCHED_BATCH and SCHED_IDLE is 0.
++ */
++ if (attr->sched_priority < 0 ||
++ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++ return -EINVAL;
++ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++ (attr->sched_priority != 0))
++ return -EINVAL;
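++	/*
++	 * E.g. (illustrative): SCHED_FIFO with sched_priority == 0 or
++	 * SCHED_NORMAL with sched_priority == 5 both fail the pairing
++	 * check above: RT policies require a non-zero priority, all
++	 * other policies require exactly 0.
++	 */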
++
++ /*
++ * Allow unprivileged RT tasks to decrease priority:
++ */
++ if (user && !capable(CAP_SYS_NICE)) {
++ if (SCHED_FIFO == policy || SCHED_RR == policy) {
++ unsigned long rlim_rtprio =
++ task_rlimit(p, RLIMIT_RTPRIO);
++
++ /* Can't set/change the rt policy */
++ if (policy != p->policy && !rlim_rtprio)
++ return -EPERM;
++
++ /* Can't increase priority */
++ if (attr->sched_priority > p->rt_priority &&
++ attr->sched_priority > rlim_rtprio)
++ return -EPERM;
++ }
++
++ /* Can't change other user's priorities */
++ if (!check_same_owner(p))
++ return -EPERM;
++
++ /* Normal users shall not reset the sched_reset_on_fork flag */
++ if (p->sched_reset_on_fork && !reset_on_fork)
++ return -EPERM;
++ }
++
++ if (user) {
++ retval = security_task_setscheduler(p);
++ if (retval)
++ return retval;
++ }
++
++ if (pi)
++ cpuset_read_lock();
++
++ /*
++ * Make sure no PI-waiters arrive (or leave) while we are
++ * changing the priority of the task:
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++ /*
++ * To be able to change p->policy safely, task_access_lock()
++ * must be called.
++ * If task_access_lock() is used here:
++ * for a task p which is not running, reading rq->stop is
++ * racy but acceptable, as ->stop doesn't change much.
++ * An enhancement could be made to read rq->stop safely.
++ */
++ rq = __task_access_lock(p, &lock);
++
++ /*
++ * Changing the policy of the stop thread is a very bad idea
++ */
++ if (p == rq->stop) {
++ retval = -EINVAL;
++ goto unlock;
++ }
++
++ /*
++ * If not changing anything there's no need to proceed further:
++ */
++ if (unlikely(policy == p->policy)) {
++ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++ goto change;
++ if (!rt_policy(policy) &&
++ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++ goto change;
++
++ p->sched_reset_on_fork = reset_on_fork;
++ retval = 0;
++ goto unlock;
++ }
++change:
++
++ /* Re-check policy now with rq lock held */
++ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++ policy = oldpolicy = -1;
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ if (pi)
++ cpuset_read_unlock();
++ goto recheck;
++ }
++
++ p->sched_reset_on_fork = reset_on_fork;
++
++ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++ if (pi) {
++ /*
++ * Take priority boosted tasks into account. If the new
++ * effective priority is unchanged, we just store the new
++ * normal parameters and do not touch the scheduler class and
++ * the runqueue. This will be done when the task deboosts
++ * itself.
++ */
++ newprio = rt_effective_prio(p, newprio);
++ }
++
++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++ __setscheduler_params(p, attr);
++ __setscheduler_prio(p, newprio);
++ }
++
++ check_task_changed(p, rq);
++
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++ head = splice_balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ if (pi) {
++ cpuset_read_unlock();
++ rt_mutex_adjust_pi(p);
++ }
++
++ /* Run balance callbacks after we've adjusted the PI chain: */
++ balance_callbacks(rq, head);
++ preempt_enable();
++
++ return 0;
++
++unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ if (pi)
++ cpuset_read_unlock();
++ return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++ const struct sched_param *param, bool check)
++{
++ struct sched_attr attr = {
++ .sched_policy = policy,
++ .sched_priority = param->sched_priority,
++ .sched_nice = PRIO_TO_NICE(p->static_prio),
++ };
++
++ /* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++ if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++ attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++ policy &= ~SCHED_RESET_ON_FORK;
++ attr.sched_policy = policy;
++ }
++
++ return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++ const struct sched_param *param)
++{
++ return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++ return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++ return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission. For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++ const struct sched_param *param)
++{
++ return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ * MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++ struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++ WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
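++
++/*
++ * Usage sketch (illustrative only; my_thread_fn, data and "mydrv" are
++ * hypothetical names): a driver wanting a boosted kthread would do
++ *
++ *	struct task_struct *tsk = kthread_create(my_thread_fn, data, "mydrv");
++ *	if (!IS_ERR(tsk)) {
++ *		sched_set_fifo(tsk);
++ *		wake_up_process(tsk);
++ *	}
++ *
++ * leaving the final RT placement to the administrator, per the comment
++ * above sched_set_fifo().
++ */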
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++ struct sched_param sp = { .sched_priority = 1 };
++ WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ .sched_nice = nice,
++ };
++ WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++ struct sched_param lparam;
++ struct task_struct *p;
++ int retval;
++
++ if (!param || pid < 0)
++ return -EINVAL;
++ if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++ return -EFAULT;
++
++ rcu_read_lock();
++ retval = -ESRCH;
++ p = find_process_by_pid(pid);
++ if (likely(p))
++ get_task_struct(p);
++ rcu_read_unlock();
++
++ if (likely(p)) {
++ retval = sched_setscheduler(p, policy, &lparam);
++ put_task_struct(p);
++ }
++
++ return retval;
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++ u32 size;
++ int ret;
++
++ /* Zero the full structure, so that a short copy will be nice: */
++ memset(attr, 0, sizeof(*attr));
++
++ ret = get_user(size, &uattr->size);
++ if (ret)
++ return ret;
++
++ /* ABI compatibility quirk: */
++ if (!size)
++ size = SCHED_ATTR_SIZE_VER0;
++
++ if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++ goto err_size;
++
++ ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++ if (ret) {
++ if (ret == -E2BIG)
++ goto err_size;
++ return ret;
++ }
++
++ /*
++ * XXX: Do we want to be lenient like existing syscalls; or do we want
++ * to be strict and return an error on out-of-bounds values?
++ */
++ attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++ /* sched/core.c uses zero here but we already know ret is zero */
++ return 0;
++
++err_size:
++ put_user(sizeof(*attr), &uattr->size);
++ return -E2BIG;
++}
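++
++/*
++ * The net effect (illustrative): a user-space struct compiled against
++ * an older ABI (size < sizeof(*attr)) is zero-extended by the memset
++ * above; a newer, larger struct is accepted only if the tail the
++ * kernel doesn't know about is all zero, otherwise
++ * copy_struct_from_user() returns -E2BIG and uattr->size is rewritten
++ * with the kernel's sizeof(*attr) as a hint.
++ */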
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++ if (policy < 0)
++ return -EINVAL;
++
++ return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++ return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++ unsigned int, flags)
++{
++ struct sched_attr attr;
++ struct task_struct *p;
++ int retval;
++
++ if (!uattr || pid < 0 || flags)
++ return -EINVAL;
++
++ retval = sched_copy_attr(uattr, &attr);
++ if (retval)
++ return retval;
++
++ if ((int)attr.sched_policy < 0)
++ return -EINVAL;
++
++ rcu_read_lock();
++ retval = -ESRCH;
++ p = find_process_by_pid(pid);
++ if (likely(p))
++ get_task_struct(p);
++ rcu_read_unlock();
++
++ if (likely(p)) {
++ retval = sched_setattr(p, &attr);
++ put_task_struct(p);
++ }
++
++ return retval;
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++ struct task_struct *p;
++ int retval = -EINVAL;
++
++ if (pid < 0)
++ goto out_nounlock;
++
++ retval = -ESRCH;
++ rcu_read_lock();
++ p = find_process_by_pid(pid);
++ if (p) {
++ retval = security_task_getscheduler(p);
++ if (!retval)
++ retval = p->policy;
++ }
++ rcu_read_unlock();
++
++out_nounlock:
++ return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++ struct sched_param lp = { .sched_priority = 0 };
++ struct task_struct *p;
++ int retval = -EINVAL;
++
++ if (!param || pid < 0)
++ goto out_nounlock;
++
++ rcu_read_lock();
++ p = find_process_by_pid(pid);
++ retval = -ESRCH;
++ if (!p)
++ goto out_unlock;
++
++ retval = security_task_getscheduler(p);
++ if (retval)
++ goto out_unlock;
++
++ if (task_has_rt_policy(p))
++ lp.sched_priority = p->rt_priority;
++ rcu_read_unlock();
++
++ /*
++ * This one might sleep, we cannot do it with a spinlock held ...
++ */
++ retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++
++out_nounlock:
++ return retval;
++
++out_unlock:
++ rcu_read_unlock();
++ return retval;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++ struct sched_attr *kattr,
++ unsigned int usize)
++{
++ unsigned int ksize = sizeof(*kattr);
++
++ if (!access_ok(uattr, usize))
++ return -EFAULT;
++
++ /*
++ * sched_getattr() ABI forwards and backwards compatibility:
++ *
++ * If usize == ksize then we just copy everything to user-space and all is good.
++ *
++ * If usize < ksize then we only copy as much as user-space has space for,
++ * this keeps ABI compatibility as well. We skip the rest.
++ *
++ * If usize > ksize then user-space is using a newer version of the ABI,
++ * which part the kernel doesn't know about. Just ignore it - tooling can
++ * detect the kernel's knowledge of attributes from the attr->size value
++ * which is set to ksize in this case.
++ */
++ kattr->size = min(usize, ksize);
++
++ if (copy_to_user(uattr, kattr, kattr->size))
++ return -EFAULT;
++
++ return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++ unsigned int, usize, unsigned int, flags)
++{
++ struct sched_attr kattr = { };
++ struct task_struct *p;
++ int retval;
++
++ if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++ usize < SCHED_ATTR_SIZE_VER0 || flags)
++ return -EINVAL;
++
++ rcu_read_lock();
++ p = find_process_by_pid(pid);
++ retval = -ESRCH;
++ if (!p)
++ goto out_unlock;
++
++ retval = security_task_getscheduler(p);
++ if (retval)
++ goto out_unlock;
++
++ kattr.sched_policy = p->policy;
++ if (p->sched_reset_on_fork)
++ kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++ if (task_has_rt_policy(p))
++ kattr.sched_priority = p->rt_priority;
++ else
++ kattr.sched_nice = task_nice(p);
++ kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++ kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++ kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++
++ rcu_read_unlock();
++
++ return sched_attr_copy_to_user(uattr, &kattr, usize);
++
++out_unlock:
++ rcu_read_unlock();
++ return retval;
++}
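++
++/*
++ * Minimal user-space sketch (assumes a libc without a wrapper, hence a
++ * raw syscall(2); error handling elided):
++ *
++ *	struct sched_attr attr;
++ *	if (!syscall(SYS_sched_getattr, 0, &attr, sizeof(attr), 0))
++ *		printf("policy=%u nice=%d\n", attr.sched_policy,
++ *		       attr.sched_nice);
++ *
++ * attr.size comes back as min(sizeof(attr), kernel size), which is how
++ * tooling detects which fields the running kernel understands.
++ */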
++
++static int
++__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
++{
++ int retval;
++ cpumask_var_t cpus_allowed, new_mask;
++
++ if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++ return -ENOMEM;
++
++ if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++ retval = -ENOMEM;
++ goto out_free_cpus_allowed;
++ }
++
++ cpuset_cpus_allowed(p, cpus_allowed);
++ cpumask_and(new_mask, mask, cpus_allowed);
++again:
++ retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
++ if (retval)
++ goto out_free_new_mask;
++
++ cpuset_cpus_allowed(p, cpus_allowed);
++ if (!cpumask_subset(new_mask, cpus_allowed)) {
++ /*
++ * We must have raced with a concurrent cpuset
++ * update. Just reset the cpus_allowed to the
++ * cpuset's cpus_allowed
++ */
++ cpumask_copy(new_mask, cpus_allowed);
++ goto again;
++ }
++
++out_free_new_mask:
++ free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++ free_cpumask_var(cpus_allowed);
++ return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++ struct task_struct *p;
++ int retval;
++
++ rcu_read_lock();
++
++ p = find_process_by_pid(pid);
++ if (!p) {
++ rcu_read_unlock();
++ return -ESRCH;
++ }
++
++ /* Prevent p going away */
++ get_task_struct(p);
++ rcu_read_unlock();
++
++ if (p->flags & PF_NO_SETAFFINITY) {
++ retval = -EINVAL;
++ goto out_put_task;
++ }
++
++ if (!check_same_owner(p)) {
++ rcu_read_lock();
++ if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
++ rcu_read_unlock();
++ retval = -EPERM;
++ goto out_put_task;
++ }
++ rcu_read_unlock();
++ }
++
++ retval = security_task_setscheduler(p);
++ if (retval)
++ goto out_put_task;
++
++ retval = __sched_setaffinity(p, in_mask);
++out_put_task:
++ put_task_struct(p);
++ return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++ struct cpumask *new_mask)
++{
++ if (len < cpumask_size())
++ cpumask_clear(new_mask);
++ else if (len > cpumask_size())
++ len = cpumask_size();
++
++ return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++ unsigned long __user *, user_mask_ptr)
++{
++ cpumask_var_t new_mask;
++ int retval;
++
++ if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++ return -ENOMEM;
++
++ retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++ if (retval == 0)
++ retval = sched_setaffinity(pid, new_mask);
++ free_cpumask_var(new_mask);
++ return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++ struct task_struct *p;
++ raw_spinlock_t *lock;
++ unsigned long flags;
++ int retval;
++
++ rcu_read_lock();
++
++ retval = -ESRCH;
++ p = find_process_by_pid(pid);
++ if (!p)
++ goto out_unlock;
++
++ retval = security_task_getscheduler(p);
++ if (retval)
++ goto out_unlock;
++
++ task_access_lock_irqsave(p, &lock, &flags);
++ cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++out_unlock:
++ rcu_read_unlock();
++
++ return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++ unsigned long __user *, user_mask_ptr)
++{
++ int ret;
++ cpumask_var_t mask;
++
++ if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++ return -EINVAL;
++ if (len & (sizeof(unsigned long)-1))
++ return -EINVAL;
++
++ if (!alloc_cpumask_var(&mask, GFP_KERNEL))
++ return -ENOMEM;
++
++ ret = sched_getaffinity(pid, mask);
++ if (ret == 0) {
++ unsigned int retlen = min_t(size_t, len, cpumask_size());
++
++ if (copy_to_user(user_mask_ptr, mask, retlen))
++ ret = -EFAULT;
++ else
++ ret = retlen;
++ }
++ free_cpumask_var(mask);
++
++ return ret;
++}
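++
++/*
++ * Example of the return convention (illustrative): with nr_cpu_ids <=
++ * 64, a caller passing len == 128 gets cpumask_size() bytes copied and
++ * that byte count as the return value, not 128; len == 4 on a 64-bit
++ * kernel fails with -EINVAL because it is not a multiple of
++ * sizeof(unsigned long).
++ */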
++
++static void do_sched_yield(void)
++{
++ struct rq *rq;
++ struct rq_flags rf;
++
++ if (!sched_yield_type)
++ return;
++
++ rq = this_rq_lock_irq(&rf);
++
++ schedstat_inc(rq->yld_count);
++
++ if (1 == sched_yield_type) {
++ if (!rt_task(current))
++ do_sched_yield_type_1(current, rq);
++ } else if (2 == sched_yield_type) {
++ if (rq->nr_running > 1)
++ rq->skip = current;
++ }
++
++ preempt_disable();
++ raw_spin_unlock_irq(&rq->lock);
++ sched_preempt_enable_no_resched();
++
++ schedule();
++}
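++
++/*
++ * Note on sched_yield_type above (the BMQ/PDS yield_type knob, as the
++ * code above implements it): 0 turns sched_yield() into a no-op, 1
++ * (the default) deboosts/requeues a non-RT caller via
++ * do_sched_yield_type_1(), and 2 marks the caller as the runqueue's
++ * skip task so the next pick passes it over.
++ */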
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++ do_sched_yield();
++ return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++ if (should_resched(0)) {
++ preempt_schedule_common();
++ return 1;
++ }
++ /*
++ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++ * whether the current CPU is in an RCU read-side critical section,
++ * so the tick can report quiescent states even for CPUs looping
++ * in kernel context. In contrast, in non-preemptible kernels,
++ * RCU readers leave no in-memory hints, which means that CPU-bound
++ * processes executing in kernel context might never report an
++ * RCU quiescent state. Therefore, the following code causes
++ * cond_resched() to report a quiescent state, but only when RCU
++ * is in urgent need of one.
++ */
++#ifndef CONFIG_PREEMPT_RCU
++ rcu_all_qs();
++#endif
++ return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled __cond_resched
++#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled __cond_resched
++#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_might_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held(lock);
++
++ if (spin_needbreak(lock) || resched) {
++ spin_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ spin_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
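++
++/*
++ * Typical usage sketch (illustrative; work_left(), do_chunk() and
++ * revalidate() are hypothetical helpers): long scans under a spinlock
++ * stay latency-friendly by offering to drop the lock on each pass:
++ *
++ *	spin_lock(&lock);
++ *	while (work_left()) {
++ *		do_chunk();
++ *		if (cond_resched_lock(&lock))
++ *			revalidate();
++ *	}
++ *	spin_unlock(&lock);
++ *
++ * A non-zero return means the lock was dropped and reacquired around a
++ * schedule, so any state derived under it must be revalidated.
++ */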
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_read(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ read_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ read_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_write(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ write_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ write_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ * cond_resched <- __cond_resched
++ * might_resched <- RET0
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ * cond_resched <- __cond_resched
++ * might_resched <- __cond_resched
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++ preempt_dynamic_undefined = -1,
++ preempt_dynamic_none,
++ preempt_dynamic_voluntary,
++ preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++ if (!strcmp(str, "none"))
++ return preempt_dynamic_none;
++
++ if (!strcmp(str, "voluntary"))
++ return preempt_dynamic_voluntary;
++
++ if (!strcmp(str, "full"))
++ return preempt_dynamic_full;
++
++ return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
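++
++/*
++ * E.g. with HAVE_PREEMPT_DYNAMIC_CALL, preempt_dynamic_enable(cond_resched)
++ * expands to static_call_update(cond_resched, __cond_resched), while the
++ * disable variant patches the call site to __static_call_return0, per
++ * the *_dynamic_enabled/_disabled defines earlier in this file.
++ */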
++
++void sched_dynamic_update(int mode)
++{
++ /*
++ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++ * the ZERO state, which is invalid.
++ */
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++ switch (mode) {
++ case preempt_dynamic_none:
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ pr_info("Dynamic Preempt: none\n");
++ break;
++
++ case preempt_dynamic_voluntary:
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ pr_info("Dynamic Preempt: voluntary\n");
++ break;
++
++ case preempt_dynamic_full:
++ preempt_dynamic_disable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ pr_info("Dynamic Preempt: full\n");
++ break;
++ }
++
++ preempt_dynamic_mode = mode;
++}
++
++static int __init setup_preempt_mode(char *str)
++{
++ int mode = sched_dynamic_mode(str);
++ if (mode < 0) {
++ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++ return 0;
++ }
++
++ sched_dynamic_update(mode);
++ return 1;
++}
++__setup("preempt=", setup_preempt_mode);
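++
++/*
++ * For example, booting with "preempt=voluntary" selects the VOLUNTARY
++ * column of the table further above; an unknown string such as
++ * "preempt=fully" just logs a warning and keeps the compile-time
++ * default chosen by preempt_dynamic_init() below.
++ */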
++
++static void __init preempt_dynamic_init(void)
++{
++ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++ sched_dynamic_update(preempt_dynamic_none);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++ sched_dynamic_update(preempt_dynamic_voluntary);
++ } else {
++ /* Default static call setting, nothing to do */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++ preempt_dynamic_mode = preempt_dynamic_full;
++ pr_info("Dynamic Preempt: full\n");
++ }
++ }
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++ bool preempt_model_##mode(void) \
++ { \
++ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++ return preempt_dynamic_mode == preempt_dynamic_##mode; \
++ } \
++ EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++ set_current_state(TASK_RUNNING);
++ do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ * true (>0) if we indeed boosted the target task.
++ * false (0) if we failed to boost the target.
++ * -ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++ return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++ int old_iowait = current->in_iowait;
++
++ current->in_iowait = 1;
++ blk_flush_plug(current->plug, true);
++ return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++ current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++ int token;
++ long ret;
++
++ token = io_schedule_prepare();
++ ret = schedule_timeout(timeout);
++ io_schedule_finish(token);
++
++ return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ schedule();
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++ int ret = -EINVAL;
++
++ switch (policy) {
++ case SCHED_FIFO:
++ case SCHED_RR:
++ ret = MAX_RT_PRIO - 1;
++ break;
++ case SCHED_NORMAL:
++ case SCHED_BATCH:
++ case SCHED_IDLE:
++ ret = 0;
++ break;
++ }
++ return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++ int ret = -EINVAL;
++
++ switch (policy) {
++ case SCHED_FIFO:
++ case SCHED_RR:
++ ret = 1;
++ break;
++ case SCHED_NORMAL:
++ case SCHED_BATCH:
++ case SCHED_IDLE:
++ ret = 0;
++ break;
++ }
++ return ret;
++}
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++ struct task_struct *p;
++ int retval;
++
++ alt_sched_debug();
++
++ if (pid < 0)
++ return -EINVAL;
++
++ retval = -ESRCH;
++ rcu_read_lock();
++ p = find_process_by_pid(pid);
++ if (!p)
++ goto out_unlock;
++
++ retval = security_task_getscheduler(p);
++ if (retval)
++ goto out_unlock;
++ rcu_read_unlock();
++
++ *t = ns_to_timespec64(sched_timeslice_ns);
++ return 0;
++
++out_unlock:
++ rcu_read_unlock();
++ return retval;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++ struct __kernel_timespec __user *, interval)
++{
++ struct timespec64 t;
++ int retval = sched_rr_get_interval(pid, &t);
++
++ if (retval == 0)
++ retval = put_timespec64(&t, interval);
++
++ return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++ struct old_timespec32 __user *, interval)
++{
++ struct timespec64 t;
++ int retval = sched_rr_get_interval(pid, &t);
++
++ if (retval == 0)
++ retval = put_old_timespec32(&t, interval);
++ return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++ unsigned long free = 0;
++ int ppid;
++
++ if (!try_get_task_stack(p))
++ return;
++
++ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++ if (task_is_running(p))
++ pr_cont(" running task ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++ free = stack_not_used(p);
++#endif
++ ppid = 0;
++ rcu_read_lock();
++ if (pid_alive(p))
++ ppid = task_pid_nr(rcu_dereference(p->real_parent));
++ rcu_read_unlock();
++ pr_cont(" stack:%5lu pid:%5d ppid:%6d flags:0x%08lx\n",
++ free, task_pid_nr(p), ppid,
++ read_task_thread_flags(p));
++
++ print_worker_info(KERN_INFO, p);
++ print_stop_info(KERN_INFO, p);
++ show_stack(p, NULL, KERN_INFO);
++ put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /* no filter, everything matches */
++ if (!state_filter)
++ return true;
++
++ /* filter, but doesn't match */
++ if (!(state & state_filter))
++ return false;
++
++ /*
++ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++ * TASK_KILLABLE).
++ */
++ if (state_filter == TASK_UNINTERRUPTIBLE && state == TASK_IDLE)
++ return false;
++
++ return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++ struct task_struct *g, *p;
++
++ rcu_read_lock();
++ for_each_process_thread(g, p) {
++ /*
++ * reset the NMI-timeout, listing all files on a slow
++ * console might take a lot of time:
++ * Also, reset softlockup watchdogs on all CPUs, because
++ * another CPU might be blocked waiting for us to process
++ * an IPI.
++ */
++ touch_nmi_watchdog();
++ touch_all_softlockup_watchdogs();
++ if (state_filter_match(state_filter, p))
++ sched_show_task(p);
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ /* TODO: Alt schedule FW should support this
++ if (!state_filter)
++ sysrq_sched_debug_show();
++ */
++#endif
++ rcu_read_unlock();
++ /*
++ * Only show locks if all tasks are dumped:
++ */
++ if (!state_filter)
++ debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++ pr_info("Task dump for CPU %d:\n", cpu);
++ sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ __sched_fork(0, idle);
++
++ raw_spin_lock_irqsave(&idle->pi_lock, flags);
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ idle->last_ran = rq->clock_task;
++ idle->__state = TASK_RUNNING;
++ /*
++ * PF_KTHREAD should already be set at this point; regardless, make it
++ * look like a proper per-CPU kthread.
++ */
++ idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
++ kthread_set_per_cpu(idle, cpu);
++
++ sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++ /*
++ * It's possible that init_idle() gets called multiple times on a task,
++ * in that case do_set_cpus_allowed() will not do the right thing.
++ *
++ * And since this is boot we can forgo the serialisation.
++ */
++ set_cpus_allowed_common(idle, cpumask_of(cpu));
++#endif
++
++ /* Silence PROVE_RCU */
++ rcu_read_lock();
++ __set_task_cpu(idle, cpu);
++ rcu_read_unlock();
++
++ rq->idle = idle;
++ rcu_assign_pointer(rq->curr, idle);
++ idle->on_cpu = 1;
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++ /* Set the preempt count _outside_ the spinlocks! */
++ init_idle_preempt_count(idle, cpu);
++
++ ftrace_graph_init_idle_task(idle, cpu);
++ vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++ const struct cpumask __maybe_unused *trial)
++{
++ return 1;
++}
++
++int task_can_attach(struct task_struct *p,
++ const struct cpumask *cs_cpus_allowed)
++{
++ int ret = 0;
++
++ /*
++ * Kthreads which disallow setaffinity shouldn't be moved
++ * to a new cpuset; we don't want to change their CPU
++ * affinity and isolating such threads by their set of
++ * allowed nodes is unnecessary. Thus, cpusets are not
++ * applicable for such threads. This prevents checking for
++ * success of set_cpus_allowed_ptr() on all attached tasks
++ * before cpus_mask may be changed.
++ */
++ if (p->flags & PF_NO_SETAFFINITY)
++ ret = -EINVAL;
++
++ return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++ struct mm_struct *mm = current->active_mm;
++
++ BUG_ON(current != this_rq()->idle);
++
++ if (mm != &init_mm) {
++ switch_mm(mm, &init_mm, current);
++ finish_arch_post_lock_switch();
++ }
++
++ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++ struct task_struct *p = arg;
++ struct rq *rq = this_rq();
++ struct rq_flags rf;
++ int cpu;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ rq_lock(rq, &rf);
++
++ update_rq_clock(rq);
++
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ cpu = select_fallback_rq(rq->cpu, p);
++ rq = __migrate_task(rq, p, cpu);
++ }
++
++ rq_unlock(rq, &rf);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ put_task_struct(p);
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but
++ * it only takes effect while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++ struct task_struct *push_task = rq->curr;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Ensure the thing is persistent until balance_push_set(.on = false);
++ */
++ rq->balance_callback = &balance_push_callback;
++
++ /*
++ * Only active while going offline and when invoked on the outgoing
++ * CPU.
++ */
++ if (!cpu_dying(rq->cpu) || rq != this_rq())
++ return;
++
++ /*
++ * Both the cpu-hotplug and stop task are in this case and are
++ * required to complete the hotplug process.
++ */
++ if (kthread_is_per_cpu(push_task) ||
++ is_migration_disabled(push_task)) {
++
++ /*
++ * If this is the idle task on the outgoing CPU try to wake
++ * up the hotplug control thread which might wait for the
++ * last task to vanish. The rcuwait_active() check is
++ * accurate here because the waiter is pinned on this CPU
++ * and can't obviously be running in parallel.
++ *
++ * On RT kernels this also has to check whether there are
++ * pinned and scheduled out tasks on the runqueue. They
++ * need to leave the migrate disabled section first.
++ */
++ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++ rcuwait_active(&rq->hotplug_wait)) {
++ raw_spin_unlock(&rq->lock);
++ rcuwait_wake_up(&rq->hotplug_wait);
++ raw_spin_lock(&rq->lock);
++ }
++ return;
++ }
++
++ get_task_struct(push_task);
++ /*
++ * Temporarily drop rq->lock such that we can wake-up the stop task.
++ * Both preemption and IRQs are still disabled.
++ */
++ raw_spin_unlock(&rq->lock);
++ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++ this_cpu_ptr(&push_work));
++ /*
++ * At this point need_resched() is true and we'll take the loop in
++ * schedule(). The next pick is obviously going to be the stop task
++ * which kthread_is_per_cpu() and will push this task away.
++ */
++ raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct rq_flags rf;
++
++ rq_lock_irqsave(rq, &rf);
++ if (on) {
++ WARN_ON_ONCE(rq->balance_callback);
++ rq->balance_callback = &balance_push_callback;
++ } else if (rq->balance_callback == &balance_push_callback) {
++ rq->balance_callback = NULL;
++ }
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++ struct rq *rq = this_rq();
++
++ rcuwait_wait_event(&rq->hotplug_wait,
++ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++ TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++ if (rq->online)
++ rq->online = false;
++}
++
++static void set_rq_online(struct rq *rq)
++{
++ if (!rq->online)
++ rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask. If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++ if (cpuhp_tasks_frozen) {
++ /*
++ * num_cpus_frozen tracks how many CPUs are involved in suspend
++ * resume sequence. As long as this is not the last online
++ * operation in the resume sequence, just build a single sched
++ * domain, ignoring cpusets.
++ */
++ partition_sched_domains(1, NULL, NULL);
++ if (--num_cpus_frozen)
++ return;
++ /*
++ * This is the last CPU online operation. So fall through and
++ * restore the original sched domains by considering the
++ * cpuset configurations.
++ */
++ cpuset_force_rebuild();
++ }
++
++ cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++ if (!cpuhp_tasks_frozen) {
++ cpuset_update_active_cpus();
++ } else {
++ num_cpus_frozen++;
++ partition_sched_domains(1, NULL, NULL);
++ }
++ return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /*
++ * Clear the balance_push callback and prepare to schedule
++ * regular tasks.
++ */
++ balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++ /*
++ * When going up, increment the number of cores with SMT present.
++ */
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++ static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++ set_cpu_active(cpu, true);
++
++ if (sched_smp_initialized)
++ cpuset_cpu_active();
++
++ /*
++ * Put the rq online, if not already. This happens:
++ *
++ * 1) In the early boot process, because we build the real domains
++ * after all cpus have been brought up.
++ *
++ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++ * domains.
++ */
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_online(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++ int ret;
++
++ set_cpu_active(cpu, false);
++
++ /*
++ * From this point forward, this CPU will refuse to run any task that
++ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++ * push those tasks away until this gets cleared, see
++ * sched_cpu_dying().
++ */
++ balance_push_set(cpu, true);
++
++ /*
++ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++ * users of this state to go away such that all new such users will
++ * observe it.
++ *
++ * Specifically, we rely on ttwu to no longer target this CPU, see
++ * ttwu_queue_cond() and is_cpu_allowed().
++ *
++ * Do the sync before parking the smpboot threads to take care of the RCU boost case.
++ */
++ synchronize_rcu();
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ update_rq_clock(rq);
++ set_rq_offline(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++ /*
++ * When going down, decrement the number of cores with SMT present.
++ */
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_dec_cpuslocked(&sched_smt_present);
++ if (!static_branch_likely(&sched_smt_present))
++ cpumask_clear(&sched_sg_idle_mask);
++ }
++#endif
++
++ if (!sched_smp_initialized)
++ return 0;
++
++ ret = cpuset_cpu_inactive(cpu);
++ if (ret) {
++ balance_push_set(cpu, false);
++ set_cpu_active(cpu, true);
++ return ret;
++ }
++
++ return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++ sched_rq_cpu_starting(cpu);
++ sched_tick_start(cpu);
++ return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++ balance_hotplug_wait();
++ return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++ long delta = calc_load_fold_active(rq, 1);
++
++ if (delta)
++ atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++ struct task_struct *g, *p;
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_held(&rq->lock);
++
++ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++ for_each_process_thread(g, p) {
++ if (task_cpu(p) != cpu)
++ continue;
++
++ if (!task_on_rq_queued(p))
++ continue;
++
++ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++ }
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /* Handle pending wakeups and then migrate everything off */
++ sched_tick_stop(cpu);
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++ WARN(true, "Dying CPU not properly vacated!");
++ dump_rq_tasks(rq, KERN_WARNING);
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ calc_load_migrate(rq);
++ hrtick_clear(rq);
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++ int cpu;
++ cpumask_t *tmp;
++
++ for_each_possible_cpu(cpu) {
++ /* init topo masks */
++ tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++ cpumask_copy(tmp, cpumask_of(cpu));
++ tmp++;
++ cpumask_copy(tmp, cpu_possible_mask);
++ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++ /*per_cpu(sd_llc_id, cpu) = cpu;*/
++ }
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++ if (cpumask_and(topo, topo, mask)) { \
++ cpumask_copy(topo, mask); \
++ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
++ cpu, (topo++)->bits[0]); \
++ } \
++ if (!last) \
++ cpumask_complement(topo, mask)
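++
++/*
++ * Illustrative expansion (assuming 4 CPUs where cpu0/cpu1 are SMT
++ * siblings and all four share one core group): for cpu0 the per-CPU
++ * mask array ends up as
++ *
++ *	0x01 (self), 0x03 (smt), 0x0f (coregroup)
++ *
++ * The core and others levels add no new CPUs, so the cpumask_and()
++ * test above skips them and sched_cpu_topo_end_mask points just past
++ * the coregroup entry.
++ */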
++
++static void sched_init_topology_cpumask(void)
++{
++ int cpu;
++ cpumask_t *topo;
++
++ for_each_online_cpu(cpu) {
++ /* take the chance to reset the time slice for idle tasks */
++ cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++
++ topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++ cpumask_complement(topo, cpumask_of(cpu));
++#ifdef CONFIG_SCHED_SMT
++ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++ per_cpu(sched_cpu_llc_mask, cpu) = topo;
++ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++ cpu, per_cpu(sd_llc_id, cpu),
++ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++ per_cpu(sched_cpu_topo_masks, cpu)));
++ }
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++ /* Move init over to a non-isolated CPU */
++ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++ BUG();
++ current->flags &= ~PF_NO_SETAFFINITY;
++
++ sched_init_topology_cpumask();
++
++ sched_smp_initialized = true;
++}
++#else
++void __init sched_init_smp(void)
++{
++ cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++ return in_lock_functions(addr) ||
++ (addr >= (unsigned long)__sched_text_start
++ && addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++ struct cgroup_subsys_state css;
++
++ struct rcu_head rcu;
++ struct list_head list;
++
++ struct task_group *parent;
++ struct list_head siblings;
++ struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++ unsigned long shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __read_mostly;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++ int i;
++ struct rq *rq;
++
++ printk(KERN_INFO ALT_SCHED_VERSION_MSG);
++
++ wait_bit_init();
++
++#ifdef CONFIG_SMP
++ for (i = 0; i < SCHED_QUEUE_BITS; i++)
++ cpumask_copy(sched_rq_watermark + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++ task_group_cache = KMEM_CACHE(task_group, 0);
++
++ list_add(&root_task_group.list, &task_groups);
++ INIT_LIST_HEAD(&root_task_group.children);
++ INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++ for_each_possible_cpu(i) {
++ rq = cpu_rq(i);
++
++ sched_queue_init(&rq->queue);
++ rq->watermark = IDLE_TASK_SCHED_PRIO;
++ rq->skip = NULL;
++
++ raw_spin_lock_init(&rq->lock);
++ rq->nr_running = rq->nr_uninterruptible = 0;
++ rq->calc_load_active = 0;
++ rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++ rq->online = false;
++ rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++ rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++ rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++ rq->nr_switches = 0;
++
++ hrtick_rq_init(rq);
++ atomic_set(&rq->nr_iowait, 0);
++ }
++#ifdef CONFIG_SMP
++ /* Set rq->online for cpu 0 */
++ cpu_rq(0)->online = true;
++#endif
++ /*
++ * The boot idle thread does lazy MMU switching as well:
++ */
++ mmgrab(&init_mm);
++ enter_lazy_tlb(&init_mm, current);
++
++ /*
++ * The idle task doesn't need the kthread struct to function, but it
++ * is dressed up as a per-CPU kthread and thus needs to play the part
++ * if we want to avoid special-casing it in code that deals with per-CPU
++ * kthreads.
++ */
++ WARN_ON(!set_kthread_struct(current));
++
++ /*
++ * Make us the idle thread. Technically, schedule() should not be
++ * called from this thread, however somewhere below it might be,
++ * but because we are the idle thread, we just pick up running again
++ * when this runqueue becomes "idle".
++ */
++ init_idle(current, smp_processor_id());
++
++ calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++ idle_thread_set_boot_cpu();
++ balance_push_set(smp_processor_id(), false);
++
++ sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++ psi_init();
++
++ preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++ unsigned int state = get_current_state();
++ /*
++ * Blocking primitives will set (and therefore destroy) current->state,
++ * since we will exit with TASK_RUNNING make sure we enter with it,
++ * otherwise we will destroy state.
++ */
++ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++ "do not call blocking ops when !TASK_RUNNING; "
++ "state=%x set at [<%p>] %pS\n", state,
++ (void *)current->task_state_change,
++ (void *)current->task_state_change);
++
++ __might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++ return;
++
++ if (preempt_count() == preempt_offset)
++ return;
++
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++ unsigned int nested = preempt_count();
++
++ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++ return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++ /* Ratelimiting timestamp: */
++ static unsigned long prev_jiffy;
++
++ unsigned long preempt_disable_ip;
++
++ /* WARN_ON_ONCE() by default, no rate limit required: */
++ rcu_sleep_check();
++
++ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++ !is_idle_task(current) && !current->non_block_count) ||
++ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++ oops_in_progress)
++ return;
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ /* Save this before calling printk(), since that will clobber it: */
++ preempt_disable_ip = get_preempt_disable_ip(current);
++
++ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++ file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), current->non_block_count,
++ current->pid, current->comm);
++ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++ offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ pr_err("RCU nest depth: %d, expected: %u\n",
++ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++ }
++
++ if (task_stack_end_corrupted(current))
++ pr_emerg("Thread overran stack, or stack corrupted\n");
++
++ debug_show_held_locks(current);
++ if (irqs_disabled())
++ print_irqtrace_events(current);
++
++ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++ preempt_disable_ip);
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > preempt_offset)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (is_migration_disabled(current))
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > 0)
++ return;
++
++ if (current->migration_flags & MDF_FORCE_ENABLED)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), is_migration_disabled(current),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++ struct task_struct *g, *p;
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ };
++
++ read_lock(&tasklist_lock);
++ for_each_process_thread(g, p) {
++ /*
++ * Only normalize user tasks:
++ */
++ if (p->flags & PF_KTHREAD)
++ continue;
++
++ schedstat_set(p->stats.wait_start, 0);
++ schedstat_set(p->stats.sleep_start, 0);
++ schedstat_set(p->stats.block_start, 0);
++
++ if (!rt_task(p)) {
++ /*
++ * Renice negative nice level userspace
++ * tasks back to 0:
++ */
++ if (task_nice(p) < 0)
++ set_user_nice(p, 0);
++ continue;
++ }
++
++ __sched_setscheduler(p, &attr, false, false);
++ }
++ read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for the IA64 MCA handling, or kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++ return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_IA64
++/**
++ * ia64_set_curr_task - set the current task for a given CPU.
++ * @cpu: the processor in question.
++ * @p: the task pointer to set.
++ *
++ * Description: This function must only be used when non-maskable interrupts
++ * are serviced on a separate stack. It allows the architecture to switch the
++ * notion of the current task on a CPU in a non-blocking manner. This function
++ * must be called with all CPUs synchronised and interrupts disabled; the
++ * caller must save the original value of the current task (see
++ * curr_task() above) and restore that value before reenabling interrupts and
++ * re-starting the system.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ */
++void ia64_set_curr_task(int cpu, struct task_struct *p)
++{
++ cpu_curr(cpu) = p;
++}
++
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++ kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++ sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++ /*
++ * We have to wait for yet another RCU grace period to expire, as
++ * print_cfs_stats() might run concurrently.
++ */
++ call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++ struct task_group *tg;
++
++ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++ if (!tg)
++ return ERR_PTR(-ENOMEM);
++
++ return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++ /* Now it should be safe to free those cfs_rqs: */
++ sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++ call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++ struct task_group *parent = css_tg(parent_css);
++ struct task_group *tg;
++
++ if (!parent) {
++ /* This is early initialization for the top cgroup */
++ return &root_task_group.css;
++ }
++
++ tg = sched_create_group(parent);
++ if (IS_ERR(tg))
++ return ERR_PTR(-ENOMEM);
++ return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++ struct task_group *parent = css_tg(css->parent);
++
++ if (parent)
++ sched_online_group(tg, parent);
++ return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ /*
++ * Relies on the RCU grace period between css_released() and this.
++ */
++ sched_unregister_group(tg);
++}
++
++static void cpu_cgroup_fork(struct task_struct *task)
++{
++}
++
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++ return 0;
++}
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++ /*
++ * We can't change the weight of the root cgroup.
++ */
++ if (&root_task_group == tg)
++ return -EINVAL;
++
++ shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++ mutex_lock(&shares_mutex);
++ if (tg->shares == shares)
++ goto done;
++
++ tg->shares = shares;
++done:
++ mutex_unlock(&shares_mutex);
++ return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 shareval)
++{
++ if (shareval > scale_load_down(ULONG_MAX))
++ shareval = MAX_SHARES;
++ return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ struct task_group *tg = css_tg(css);
++
++ return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++ {
++ .name = "shares",
++ .read_u64 = cpu_shares_read_u64,
++ .write_u64 = cpu_shares_write_u64,
++ },
++#endif
++ { } /* Terminate */
++};
++
++
++static struct cftype cpu_files[] = {
++ { } /* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++ .css_alloc = cpu_cgroup_css_alloc,
++ .css_online = cpu_cgroup_css_online,
++ .css_released = cpu_cgroup_css_released,
++ .css_free = cpu_cgroup_css_free,
++ .css_extra_stat_show = cpu_extra_stat_show,
++ .fork = cpu_cgroup_fork,
++ .can_attach = cpu_cgroup_can_attach,
++ .attach = cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++ .dfl_cftypes = cpu_files,
++ .early_init = true,
++ .threaded = true,
++};
++#endif /* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...) \
++ do { \
++ if (m) \
++ seq_printf(m, x); \
++ else \
++ pr_cont(x); \
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++ struct seq_file *m)
++{
++ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++ get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..a181bf9ce57d
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,645 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/psi.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_SCHED_BMQ
++/* bits:
++ * RT(0-99), (Low prio adj range, nice width, high prio adj range) / 2, cpu idle task */
++#define SCHED_BITS (MAX_RT_PRIO + NICE_WIDTH / 2 + MAX_PRIORITY_ADJ + 1)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++/* bits: RT(0-99), reserved(100-127), NORMAL_PRIO_NUM, cpu idle task */
++#define SCHED_BITS (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM + 1)
++#endif /* CONFIG_SCHED_PDS */
++
++#define IDLE_TASK_SCHED_PRIO (SCHED_BITS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++ unsigned long __w = (w); \
++ if (__w) \
++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++ __w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) (w)
++# define scale_load_down(w) (w)
++#endif
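/*
 * A minimal userspace sketch of the scale_load()/scale_load_down()
 * round-trip above, assuming SCHED_FIXEDPOINT_SHIFT == 10 as in mainline;
 * the demo_* names here are illustrative, not part of the patch.
 */
#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT 10

static unsigned long demo_scale_load(unsigned long w)
{
        return w << SCHED_FIXEDPOINT_SHIFT;     /* user weight -> fixed point */
}

static unsigned long demo_scale_load_down(unsigned long w)
{
        if (!w)
                return 0;
        w >>= SCHED_FIXEDPOINT_SHIFT;
        return w < 2 ? 2 : w;                   /* clamp, like max(2UL, ...) */
}

int main(void)
{
        unsigned long hi = demo_scale_load(1024);

        /* 1024 -> 1048576 -> 1024; a weight of 1 collapses to the clamp: 2 */
        printf("%lu %lu\n", demo_scale_load_down(hi), demo_scale_load_down(1));
        return 0;
}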
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large;
++ * the same goes for the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ * limitation from this.)
++ */
++#define MIN_SHARES (1UL << 1)
++#define MAX_SHARES (1UL << 18)
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED 1
++#define TASK_ON_RQ_MIGRATING 2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++ return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/*
++ * wake flags
++ */
++#define WF_SYNC 0x01 /* waker goes to sleep after wakeup */
++#define WF_FORK 0x02 /* child wakeup after fork */
++#define WF_MIGRATED 0x04 /* internal use, task got migrated */
++#define WF_ON_CPU 0x08 /* Wakee is on_rq */
++
++#define SCHED_QUEUE_BITS (SCHED_BITS - 1)
++
++struct sched_queue {
++ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++ struct list_head heads[SCHED_BITS];
++};
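/*
 * sched_queue above is the core of both BMQ and PDS: one list head per
 * priority level plus a bitmap of non-empty levels, so "pick next" is a
 * find-first-set. A toy userspace model (sizes and names are illustrative):
 */
#include <stdio.h>
#include <strings.h>    /* ffsl() */

#define NR_LEVELS 64    /* the patch uses SCHED_BITS levels */

struct toy_queue {
        unsigned long bitmap;   /* bit i set => level i non-empty */
};

static int toy_pick_level(const struct toy_queue *q)
{
        /* lowest set bit == best (numerically lowest) priority level */
        return q->bitmap ? ffsl((long)q->bitmap) - 1 : -1;
}

int main(void)
{
        struct toy_queue q = { (1UL << 20) | (1UL << 5) };

        printf("next level: %d\n", toy_pick_level(&q)); /* prints 5 */
        return 0;
}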
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++ /* runqueue lock: */
++ raw_spinlock_t lock;
++
++ struct task_struct __rcu *curr;
++ struct task_struct *idle, *stop, *skip;
++ struct mm_struct *prev_mm;
++
++ struct sched_queue queue;
++#ifdef CONFIG_SCHED_PDS
++ u64 time_edge;
++#endif
++ unsigned long watermark;
++
++ /* switch count */
++ u64 nr_switches;
++
++ atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++ u64 last_seen_need_resched_ns;
++ int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++ int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++ int cpu; /* cpu of this runqueue */
++ bool online;
++
++ unsigned int ttwu_pending;
++ unsigned char nohz_idle_balance;
++ unsigned char idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ struct sched_avg avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++ int active_balance;
++ struct cpu_stop_work active_balance_work;
++#endif
++ struct callback_head *balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ struct rcuwait hotplug_wait;
++#endif
++ unsigned int nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++ u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++ s32 load_history;
++ u64 load_block;
++ u64 load_stamp;
++
++ /* calc_load related fields */
++ unsigned long calc_load_update;
++ long calc_load_active;
++
++ u64 clock, last_tick;
++ u64 last_ts_switch;
++ u64 clock_task;
++
++ unsigned int nr_running;
++ unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++ call_single_data_t hrtick_csd;
++#endif
++ struct hrtimer hrtick_timer;
++ ktime_t hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++ /* latency stats */
++ struct sched_info rq_sched_info;
++ unsigned long long rq_cpu_time;
++ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++ /* sys_sched_yield() stats */
++ unsigned int yld_count;
++
++ /* schedule() stats */
++ unsigned int sched_switch;
++ unsigned int sched_count;
++ unsigned int sched_goidle;
++
++ /* try_to_wake_up() stats */
++ unsigned int ttwu_count;
++ unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++ /* Must be inspected within a rcu lock section */
++ struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++ call_single_data_t nohz_csd;
++#endif
++ atomic_t nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++};
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
++#define this_rq() this_cpu_ptr(&runqueues)
++#define task_rq(p) cpu_rq(task_cpu(p))
++#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
++#define raw_rq() raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++ ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++ SMT_LEVEL_SPACE_HOLDER,
++#endif
++ COREGROUP_LEVEL_SPACE_HOLDER,
++ CORE_LEVEL_SPACE_HOLDER,
++ OTHER_LEVEL_SPACE_HOLDER,
++ NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DECLARE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++ int cpu;
++
++ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++ mask++;
++
++ return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
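/*
 * A sketch of the affinity-level walk above: scan increasingly wide
 * per-CPU topology masks (SMT siblings, then LLC, then all CPUs) and
 * take the first CPU also present in the task's allowed mask. Mask
 * values below are illustrative for an 8-CPU box.
 */
#include <stdio.h>

#define NR_CPUS 8

static int first_cpu_and(unsigned long a, unsigned long b)
{
        unsigned long m = a & b;
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                if (m & (1UL << cpu))
                        return cpu;
        return NR_CPUS;         /* like nr_cpu_ids: no match at this level */
}

int main(void)
{
        /* levels for cpu0: SMT siblings, shared LLC, everything */
        unsigned long topo[] = { 0x03, 0x0f, 0xff };
        unsigned long allowed = 0x30;   /* task restricted to cpu4/cpu5 */
        int level = 0, cpu = NR_CPUS;

        while (cpu >= NR_CPUS && level < 3)
                cpu = first_cpu_and(topo[level++], allowed);
        printf("best cpu: %d\n", cpu);  /* 4, found only at the top level */
        return 0;
}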
++
++extern void flush_smp_call_function_queue(void);
++
++#else /* !CONFIG_SMP */
++static inline void flush_smp_call_function_queue(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++ return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++ return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++ /*
++	 * Relax the lockdep_assert_held() check as in VRQ: callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++ /*
++	 * Relax the lockdep_assert_held() check as in VRQ: callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP 0x01
++
++#define ENQUEUE_WAKEUP 0x01
++
++
++/*
++ * Below are the scheduler APIs used by other kernel code.
++ * They use the dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++ unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ local_irq_disable();
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++ return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++ return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++ lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++ raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++ local_irq_disable();
++ raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++ raw_spin_rq_unlock(rq);
++ local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++ return rq->curr == p;
++}
++
++static inline bool task_running(struct task_struct *p)
++{
++ return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++ rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ WARN_ON(!rcu_read_lock_held());
++ return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ return rq->cpu;
++#else
++ return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT 0
++#define NOHZ_STATS_KICK_BIT 1
++
++#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++ u64 total;
++ u64 tick_delta;
++ u64 irq_start_time;
++ struct u64_stats_sync sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++ unsigned int seq;
++ u64 total;
++
++ do {
++ seq = __u64_stats_fetch_begin(&irqtime->sync);
++ total = irqtime->total;
++ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++ return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
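/*
 * irq_time_read() above is a sequence-counter read loop: retry the
 * snapshot if a writer updated the value mid-read. A single-threaded
 * model of the pattern (no memory barriers here; the kernel's
 * u64_stats_sync supplies those):
 */
#include <stdio.h>

struct seq_u64 {
        unsigned int seq;       /* odd while a writer is in progress */
        unsigned long long val;
};

static unsigned long long seq_read(const struct seq_u64 *s)
{
        unsigned int seq;
        unsigned long long v;

        do {
                while ((seq = s->seq) & 1)
                        ;                       /* writer active: wait */
                v = s->val;
        } while (s->seq != seq);                /* changed under us: retry */
        return v;
}

int main(void)
{
        struct seq_u64 s = { 2, 123456789ULL };

        printf("%llu\n", seq_read(&s));
        return 0;
}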
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant() (true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant() (false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV 0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++ int membarrier_state;
++
++ if (prev_mm == next_mm)
++ return;
++
++ membarrier_state = atomic_read(&next_mm->membarrier_state);
++ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++ return;
++
++ WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#endif /* ALT_SCHED_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..66b77291b9d0
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,110 @@
++#define ALT_SCHED_VERSION_MSG "sched/bmq: BMQ CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++/*
++ * BMQ only routines
++ */
++#define rq_switch_time(rq) ((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sched_timeslice_ns >> \
++				 (15 - MAX_PRIORITY_ADJ - (p)->boost_prio))
++
++static inline void boost_task(struct task_struct *p)
++{
++ int limit;
++
++ switch (p->policy) {
++ case SCHED_NORMAL:
++ limit = -MAX_PRIORITY_ADJ;
++ break;
++ case SCHED_BATCH:
++ case SCHED_IDLE:
++ limit = 0;
++ break;
++ default:
++ return;
++ }
++
++ if (p->boost_prio > limit)
++ p->boost_prio--;
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++ if (p->boost_prio < MAX_PRIORITY_ADJ)
++ p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio : MAX_RT_PRIO / 2 + (p->prio + p->boost_prio) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++ return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++ return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return idx;
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sched_timeslice_ns;
++
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p)) {
++ if (SCHED_RR != p->policy)
++ deboost_task(p);
++ requeue_task(p, rq, task_sched_prio_idx(p, rq));
++ }
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++
++inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	if (this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++ boost_task(p);
++}
++#endif
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++ if (rq_switch_time(rq) < boost_threshold(p))
++ boost_task(p);
++}
++
++static inline void update_rq_time_edge(struct rq *rq) {}
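/*
 * A model of the BMQ boost_prio ladder defined above, for a SCHED_NORMAL
 * task: wakeups and early switch-outs boost (decrement) toward
 * -MAX_PRIORITY_ADJ, slice expiry deboosts (increments) toward
 * +MAX_PRIORITY_ADJ, and sched_task_fork() starts at the deboosted end.
 * The MAX_PRIORITY_ADJ value below is illustrative.
 */
#include <stdio.h>

#define MAX_PRIORITY_ADJ 4      /* illustrative */

static int boost(int bp)   { return bp > -MAX_PRIORITY_ADJ ? bp - 1 : bp; }
static int deboost(int bp) { return bp <  MAX_PRIORITY_ADJ ? bp + 1 : bp; }

int main(void)
{
        int bp = MAX_PRIORITY_ADJ;      /* fork starting point */
        int i;

        for (i = 0; i < 10; i++)        /* ten interactive wakeups */
                bp = boost(bp);
        printf("after wakeups: %d\n", bp);              /* clamped at -4 */
        printf("after expiry:  %d\n", deboost(bp));     /* -3 */
        return 0;
}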
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index d9dc9ab3773f..71a25540d65e 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,13 +42,19 @@
+
+ #include "idle.c"
+
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+
+ #include "cputime.c"
+-#include "deadline.c"
+
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 99bdd96f454f..23f80a86d2d7 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -85,7 +85,9 @@
+
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 3dbf351d12d5..b2590f961139 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -160,9 +160,14 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
+
+ sg_cpu->max = max;
++#ifndef CONFIG_SCHED_ALT
+ sg_cpu->bw_dl = cpu_bw_dl(rq);
+ sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu), max,
+ FREQUENCY_UTIL, NULL);
++#else
++ sg_cpu->bw_dl = 0;
++ sg_cpu->util = rq_load_util(rq, max);
++#endif /* CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -306,8 +311,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+ */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -607,6 +614,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ }
+
+ ret = sched_setattr_nocheck(thread, &attr);
++
+ if (ret) {
+ kthread_stop(thread);
+ pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+@@ -839,7 +847,9 @@ cpufreq_governor_init(schedutil_gov);
+ #ifdef CONFIG_ENERGY_MODEL
+ static void rebuild_sd_workfn(struct work_struct *work)
+ {
++#ifndef CONFIG_SCHED_ALT
+ rebuild_sched_domains_energy();
++#endif /* CONFIG_SCHED_ALT */
+ }
+ static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
+
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 78a233d43757..b3bbc87d4352 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -122,7 +122,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ p->utime += cputime;
+ account_group_user_time(p, cputime);
+
+- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+
+ /* Add user time to cpustat. */
+ task_group_account_field(p, index, cputime);
+@@ -146,7 +146,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ p->gtime += cputime;
+
+ /* Add guest time to cpustat. */
+- if (task_nice(p) > 0) {
++ if (task_running_nice(p)) {
+ task_group_account_field(p, CPUTIME_NICE, cputime);
+ cpustat[CPUTIME_GUEST_NICE] += cputime;
+ } else {
+@@ -269,7 +269,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+- return t->se.sum_exec_runtime;
++ return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -279,7 +279,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ struct rq *rq;
+
+ rq = task_rq_lock(t, &rf);
+- ns = t->se.sum_exec_runtime;
++ ns = tsk_seruntime(t);
+ task_rq_unlock(rq, t, &rf);
+
+ return ns;
+@@ -611,7 +611,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ struct task_cputime cputime = {
+- .sum_exec_runtime = p->se.sum_exec_runtime,
++ .sum_exec_runtime = tsk_seruntime(p),
+ };
+
+ if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index bb3d63bdf4ae..4e1680785704 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+ * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This allows printing both to /proc/sched_debug and
+ * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+
+ static const struct seq_operations sched_debug_sops;
+@@ -293,6 +296,7 @@ static const struct file_operations sched_debug_fops = {
+ .llseek = seq_lseek,
+ .release = seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+
+ static struct dentry *debugfs_sched;
+
+@@ -302,12 +306,15 @@ static __init int sched_init_debug(void)
+
+ debugfs_sched = debugfs_create_dir("sched", NULL);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
+ debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
+ debugfs_create_u32("idle_min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_idle_min_granularity);
+@@ -336,11 +343,13 @@ static __init int sched_init_debug(void)
+ #endif
+
+ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+
+ return 0;
+ }
+ late_initcall(sched_init_debug);
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+
+ static cpumask_var_t sd_sysctl_cpus;
+@@ -1067,6 +1076,7 @@ void proc_sched_set_task(struct task_struct *p)
+ memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 328cccbee444..aef991facc79 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -400,6 +400,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ do_idle();
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * idle-task scheduling class.
+ */
+@@ -521,3 +522,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ .switched_to = switched_to_idle,
+ .update_curr = update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..56a649d02e49
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,127 @@
++#define ALT_SCHED_VERSION_MSG "sched/pds: PDS CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++static int sched_timeslice_shift = 22;
++
++#define NORMAL_PRIO_MOD(x) ((x) & (NORMAL_PRIO_NUM - 1))
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++ if (2 == timeslice_ms)
++ sched_timeslice_shift = 21;
++}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ s64 delta = p->deadline - rq->time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;
++
++ if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++ "pds: task_sched_prio_normal() delta %lld\n", delta))
++ return NORMAL_PRIO_NUM - 1;
++
++ return (delta < 0) ? 0 : delta;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MAX_RT_PRIO) ? p->prio :
++ MIN_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++ return (p->prio < MAX_RT_PRIO) ? p->prio : MIN_NORMAL_PRIO +
++ NORMAL_PRIO_MOD(task_sched_prio_normal(p, rq) + rq->time_edge);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == prio || prio < MAX_RT_PRIO) ? prio :
++ MIN_NORMAL_PRIO + NORMAL_PRIO_MOD((prio - MIN_NORMAL_PRIO) +
++ rq->time_edge);
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return (idx < MAX_RT_PRIO) ? idx : MIN_NORMAL_PRIO +
++ NORMAL_PRIO_MOD((idx - MIN_NORMAL_PRIO) + NORMAL_PRIO_NUM -
++ NORMAL_PRIO_MOD(rq->time_edge));
++}
++
++static inline void sched_renew_deadline(struct task_struct *p, const struct rq *rq)
++{
++ if (p->prio >= MAX_RT_PRIO)
++ p->deadline = (rq->clock >> sched_timeslice_shift) +
++ p->static_prio - (MAX_PRIO - NICE_WIDTH);
++}
++
++int task_running_nice(struct task_struct *p)
++{
++ return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void update_rq_time_edge(struct rq *rq)
++{
++ struct list_head head;
++ u64 old = rq->time_edge;
++ u64 now = rq->clock >> sched_timeslice_shift;
++ u64 prio, delta;
++
++ if (now == old)
++ return;
++
++ delta = min_t(u64, NORMAL_PRIO_NUM, now - old);
++ INIT_LIST_HEAD(&head);
++
++ for_each_set_bit(prio, &rq->queue.bitmap[2], delta)
++ list_splice_tail_init(rq->queue.heads + MIN_NORMAL_PRIO +
++ NORMAL_PRIO_MOD(prio + old), &head);
++
++ rq->queue.bitmap[2] = (NORMAL_PRIO_NUM == delta) ? 0UL :
++ rq->queue.bitmap[2] >> delta;
++ rq->time_edge = now;
++ if (!list_empty(&head)) {
++ u64 idx = MIN_NORMAL_PRIO + NORMAL_PRIO_MOD(now);
++ struct task_struct *p;
++
++ list_for_each_entry(p, &head, sq_node)
++ p->sq_idx = idx;
++
++ list_splice(&head, rq->queue.heads + idx);
++ rq->queue.bitmap[2] |= 1UL;
++ }
++}
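/*
 * A model of the rotating window implemented above: PDS keeps normal
 * priorities in a ring indexed by (level + time_edge) mod NORMAL_PRIO_NUM,
 * so advancing time_edge ages every queued task without touching the
 * lists. The ring size below is illustrative.
 */
#include <stdio.h>

#define NORMAL_PRIO_NUM 64      /* illustrative */

static int prio2idx(int prio, unsigned long long edge)
{
        return (int)((prio + edge) % NORMAL_PRIO_NUM);
}

static int idx2prio(int idx, unsigned long long edge)
{
        return (int)((idx + NORMAL_PRIO_NUM - edge % NORMAL_PRIO_NUM) %
                     NORMAL_PRIO_NUM);
}

int main(void)
{
        unsigned long long edge = 5;
        int idx = prio2idx(10, edge);

        printf("idx=%d prio=%d\n", idx, idx2prio(idx, edge));   /* 15, 10 */
        printf("aged prio=%d\n", idx2prio(idx, edge + 1));      /* 9 */
        return 0;
}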
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sched_timeslice_ns;
++ sched_renew_deadline(p, rq);
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++ requeue_task(p, rq, task_sched_prio_idx(p, rq));
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++ u64 max_dl = rq->time_edge + NICE_WIDTH - 1;
++ if (unlikely(p->deadline > max_dl))
++ p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ sched_renew_deadline(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ time_slice_expired(p, rq);
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p) {}
++#endif
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 0f310768260c..bd38bf738fe9 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * sched_entity:
+ *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+
+ return 0;
+ }
++#endif
+
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * thermal:
+ *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 4ff2ed4f8fa1..226eeed61318 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ unsigned int enqueued;
+@@ -155,9 +158,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+
+ #else
+
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -175,6 +180,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ return 0;
+ }
++#endif
+
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 47b89a0fc6e5..de2641a32c22 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3116,4 +3120,9 @@ extern int sched_dynamic_mode(const char *str);
+ extern void sched_dynamic_update(int mode);
+ #endif
+
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 857f837f52cb..5486c63e4790 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ } else {
+ struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct sched_domain *sd;
+ int dcount = 0;
++#endif
+ #endif
+ cpu = (unsigned long)(v - 2);
+ rq = cpu_rq(cpu);
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /* domain-specific stats */
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ sd->ttwu_move_balance);
+ }
+ rcu_read_unlock();
++#endif
+ #endif
+ }
+ return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index baa839c1ba96..15238be0581b 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
+
+ #endif /* CONFIG_SCHEDSTATS */
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ struct sched_entity se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PSI
+ /*
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 05b6c2ad90b9..480ef393b3c9 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+ * Scheduler topology setup/handling methods
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ DEFINE_MUTEX(sched_domains_mutex);
+
+ /* Protected by sched_domains_mutex: */
+@@ -1413,8 +1414,10 @@ static void asym_cpu_capacity_scan(void)
+ */
+
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1647,6 +1650,7 @@ sd_init(struct sched_domain_topology_level *tl,
+
+ return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ /*
+ * Topology list, bottom-up.
+@@ -1683,6 +1687,7 @@ void set_sched_topology(struct sched_domain_topology_level *tl)
+ sched_domain_topology_saved = NULL;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2638,3 +2643,15 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++ struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return best_mask_cpu(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 35d034219513..23719c728677 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -86,6 +86,10 @@
+
+ /* Constants used for minimum and maximum */
+
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1590,6 +1594,7 @@ int proc_do_static_key(struct ctl_table *table, int write,
+ }
+
+ static struct ctl_table kern_table[] = {
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA_BALANCING
+ {
+ .procname = "numa_balancing",
+@@ -1601,6 +1606,7 @@ static struct ctl_table kern_table[] = {
+ .extra2 = SYSCTL_FOUR,
+ },
+ #endif /* CONFIG_NUMA_BALANCING */
++#endif /* !CONFIG_SCHED_ALT */
+ {
+ .procname = "panic",
+ .data = &panic_timeout,
+@@ -1902,6 +1908,17 @@ static struct ctl_table kern_table[] = {
+ .proc_handler = proc_dointvec,
+ },
+ #endif
++#ifdef CONFIG_SCHED_ALT
++ {
++ .procname = "yield_type",
++ .data = &sched_yield_type,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_TWO,
++ },
++#endif
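/*
 * The table entry above should surface as /proc/sys/kernel/yield_type
 * (kern_table maps to the kernel.* namespace), clamped to 0..2 by
 * SYSCTL_ZERO/SYSCTL_TWO. A hedged userspace probe; the path is an
 * assumption from the table placement, not stated in the patch:
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/yield_type", "r"); /* assumed path */
        int v;

        if (f && fscanf(f, "%d", &v) == 1)
                printf("yield_type = %d\n", v);
        if (f)
                fclose(f);
        return 0;
}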
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ {
+ .procname = "spin_retry",
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 0ea8702eb516..a27a0f3a654d 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2088,8 +2088,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ int ret = 0;
+ u64 slack;
+
++#ifndef CONFIG_SCHED_ALT
+ slack = current->timer_slack_ns;
+ if (dl_task(current) || rt_task(current))
++#endif
+ slack = 0;
+
+ hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index cb925e8ef9a8..67d823510f5c 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ u64 stime, utime;
+
+ task_cputime(p, &utime, &stime);
+- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++ store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -866,6 +866,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ }
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ if (tsk->dl.dl_overrun) {
+@@ -873,6 +874,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ }
+ }
++#endif
+
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -900,8 +902,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ u64 samples[CPUCLOCK_MAX];
+ unsigned long soft;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk))
+ check_dl_overrun(tsk);
++#endif
+
+ if (expiry_cache_is_inactive(pct))
+ return;
+@@ -915,7 +919,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ if (soft != RLIM_INFINITY) {
+ /* Task RT timeout is accounted in jiffies. RTTIME is usec */
+- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+
+ /* At the hard limit, send SIGKILL. No further action. */
+@@ -1151,8 +1155,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ return true;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk) && tsk->dl.dl_overrun)
+ return true;
++#endif
+
+ return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index a2d301f58ced..2ccdede8585c 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1143,10 +1143,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ /* Make this a -deadline thread */
+ static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++ /* No deadline on BMQ/PDS, use RR */
++ .sched_policy = SCHED_RR,
++#else
+ .sched_policy = SCHED_DEADLINE,
+ .sched_runtime = 100000ULL,
+ .sched_deadline = 10000000ULL,
+ .sched_period = 10000000ULL
++#endif
+ };
+ struct wakeup_test_data *x = data;
+
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6b2049da
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig 2022-07-07 13:22:00.698439887 -0400
++++ b/init/Kconfig 2022-07-07 13:23:45.152333576 -0400
+@@ -874,8 +874,9 @@ config UCLAMP_BUCKETS_COUNT
+ If in doubt, use the default value.
+
+ menuconfig SCHED_ALT
++ depends on X86_64
+ bool "Alternative CPU Schedulers"
+- default y
++ default n
+ help
+	  This feature enables alternative CPU schedulers.
+
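With this defaults patch applied the alternative schedulers stay off unless selected explicitly. A plausible .config fragment for trying BMQ on x86_64 (symbol names as used by the patches above; PDS would use CONFIG_SCHED_PDS=y instead):

CONFIG_SCHED_ALT=y
CONFIG_SCHED_BMQ=y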
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-11 12:32 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-11 12:32 UTC (permalink / raw
To: gentoo-commits
commit: 7168afb285a989bf42870f28d2f479e9ddb1bda8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 11 12:32:02 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 11 12:32:02 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7168afb2
Linux patch 5.19.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1000_linux-5.19.1.patch | 754 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 758 insertions(+)
diff --git a/0000_README b/0000_README
index 3d9202d9..6335a155 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-5.19.1.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-5.19.1.patch b/1000_linux-5.19.1.patch
new file mode 100644
index 00000000..24359699
--- /dev/null
+++ b/1000_linux-5.19.1.patch
@@ -0,0 +1,754 @@
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index 9e9556826450b..2ce2a38cdd556 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -422,6 +422,14 @@ The possible values in this file are:
+ 'RSB filling' Protection of RSB on context switch enabled
+ ============= ===========================================
+
++ - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status:
++
++ =========================== =======================================================
++ 'PBRSB-eIBRS: SW sequence' CPU is affected and protection of RSB on VMEXIT enabled
++ 'PBRSB-eIBRS: Vulnerable' CPU is vulnerable
++ 'PBRSB-eIBRS: Not affected' CPU is not affected by PBRSB
++ =========================== =======================================================
++
+ Full mitigation might require a microcode update from the CPU
+ vendor. When the necessary microcode is not available, the kernel will
+ report vulnerability.
+diff --git a/Documentation/devicetree/bindings/net/broadcom-bluetooth.yaml b/Documentation/devicetree/bindings/net/broadcom-bluetooth.yaml
+index 5aac094fd2172..58ecafc1b7f90 100644
+--- a/Documentation/devicetree/bindings/net/broadcom-bluetooth.yaml
++++ b/Documentation/devicetree/bindings/net/broadcom-bluetooth.yaml
+@@ -23,6 +23,7 @@ properties:
+ - brcm,bcm4345c5
+ - brcm,bcm43540-bt
+ - brcm,bcm4335a0
++ - brcm,bcm4349-bt
+
+ shutdown-gpios:
+ maxItems: 1
+diff --git a/Makefile b/Makefile
+index df92892325ae0..3acb329035eb9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
+index 9c3d86e397bf3..1fae18ba11ed1 100644
+--- a/arch/arm64/crypto/poly1305-glue.c
++++ b/arch/arm64/crypto/poly1305-glue.c
+@@ -52,7 +52,7 @@ static void neon_poly1305_blocks(struct poly1305_desc_ctx *dctx, const u8 *src,
+ {
+ if (unlikely(!dctx->sset)) {
+ if (!dctx->rset) {
+- poly1305_init_arch(dctx, src);
++ poly1305_init_arm64(&dctx->h, src);
+ src += POLY1305_BLOCK_SIZE;
+ len -= POLY1305_BLOCK_SIZE;
+ dctx->rset = 1;
+diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
+index 96dc0f7da258d..a971d462f531c 100644
+--- a/arch/arm64/include/asm/kernel-pgtable.h
++++ b/arch/arm64/include/asm/kernel-pgtable.h
+@@ -103,8 +103,8 @@
+ /*
+ * Initial memory map attributes.
+ */
+-#define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+-#define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
++#define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | PTE_UXN)
++#define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S | PMD_SECT_UXN)
+
+ #if ARM64_KERNEL_USES_PMD_MAPS
+ #define SWAPPER_MM_MMUFLAGS (PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 6a98f1a38c29a..8a93a0a7489b2 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -285,7 +285,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
+ subs x1, x1, #64
+ b.ne 1b
+
+- mov x7, SWAPPER_MM_MMUFLAGS
++ mov_q x7, SWAPPER_MM_MMUFLAGS
+
+ /*
+ * Create the identity mapping.
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index a77b915d36a8e..ede8990f3e416 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -303,6 +303,7 @@
+ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
+ #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
+ #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
++#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
+@@ -456,5 +457,6 @@
+ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+ #define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */
++#define X86_BUG_EIBRS_PBRSB X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index cc615be27a54b..e057e039173cb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -150,6 +150,10 @@
+ * are restricted to targets in
+ * kernel.
+ */
++#define ARCH_CAP_PBRSB_NO BIT(24) /*
++ * Not susceptible to Post-Barrier
++ * Return Stack Buffer Predictions.
++ */
+
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 38a3e86e665ef..d3a3cc6772ee1 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -60,7 +60,9 @@
+ 774: \
+ add $(BITS_PER_LONG/8) * 2, sp; \
+ dec reg; \
+- jnz 771b;
++ jnz 771b; \
++ /* barrier for jnz misprediction */ \
++ lfence;
+
+ #ifdef __ASSEMBLY__
+
+@@ -118,13 +120,28 @@
+ #endif
+ .endm
+
++.macro ISSUE_UNBALANCED_RET_GUARD
++ ANNOTATE_INTRA_FUNCTION_CALL
++ call .Lunbalanced_ret_guard_\@
++ int3
++.Lunbalanced_ret_guard_\@:
++ add $(BITS_PER_LONG/8), %_ASM_SP
++ lfence
++.endm
++
+ /*
+ * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+ * monstrosity above, manually.
+ */
+-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
++.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2
++.ifb \ftr2
+ ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
++.else
++ ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2
++.endif
+ __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
++.Lunbalanced_\@:
++ ISSUE_UNBALANCED_RET_GUARD
+ .Lskip_rsb_\@:
+ .endm
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 6761668100b9f..9f7e751b91df9 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1335,6 +1335,53 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
+ }
+ }
+
++static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
++{
++ /*
++ * Similar to context switches, there are two types of RSB attacks
++ * after VM exit:
++ *
++ * 1) RSB underflow
++ *
++ * 2) Poisoned RSB entry
++ *
++ * When retpoline is enabled, both are mitigated by filling/clearing
++ * the RSB.
++ *
++ * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
++ * prediction isolation protections, RSB still needs to be cleared
++ * because of #2. Note that SMEP provides no protection here, unlike
++ * user-space-poisoned RSB entries.
++ *
++ * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
++ * bug is present then a LITE version of RSB protection is required,
++ * just a single call needs to retire before a RET is executed.
++ */
++ switch (mode) {
++ case SPECTRE_V2_NONE:
++ return;
++
++ case SPECTRE_V2_EIBRS_LFENCE:
++ case SPECTRE_V2_EIBRS:
++ if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
++ setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
++ pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
++ }
++ return;
++
++ case SPECTRE_V2_EIBRS_RETPOLINE:
++ case SPECTRE_V2_RETPOLINE:
++ case SPECTRE_V2_LFENCE:
++ case SPECTRE_V2_IBRS:
++ setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++ pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
++ return;
++ }
++
++ pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
++ dump_stack();
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -1485,28 +1532,7 @@ static void __init spectre_v2_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+ pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+
+- /*
+- * Similar to context switches, there are two types of RSB attacks
+- * after vmexit:
+- *
+- * 1) RSB underflow
+- *
+- * 2) Poisoned RSB entry
+- *
+- * When retpoline is enabled, both are mitigated by filling/clearing
+- * the RSB.
+- *
+- * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
+- * prediction isolation protections, RSB still needs to be cleared
+- * because of #2. Note that SMEP provides no protection here, unlike
+- * user-space-poisoned RSB entries.
+- *
+- * eIBRS, on the other hand, has RSB-poisoning protections, so it
+- * doesn't need RSB clearing after vmexit.
+- */
+- if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
+- boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
+- setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++ spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+
+ /*
+ * Retpoline protects the kernel, but doesn't protect firmware. IBRS
+@@ -2292,6 +2318,19 @@ static char *ibpb_state(void)
+ return "";
+ }
+
++static char *pbrsb_eibrs_state(void)
++{
++ if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
++ if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
++ boot_cpu_has(X86_FEATURE_RSB_VMEXIT))
++ return ", PBRSB-eIBRS: SW sequence";
++ else
++ return ", PBRSB-eIBRS: Vulnerable";
++ } else {
++ return ", PBRSB-eIBRS: Not affected";
++ }
++}
++
+ static ssize_t spectre_v2_show_state(char *buf)
+ {
+ if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
+@@ -2304,12 +2343,13 @@ static ssize_t spectre_v2_show_state(char *buf)
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+ return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
+
+- return sprintf(buf, "%s%s%s%s%s%s\n",
++ return sprintf(buf, "%s%s%s%s%s%s%s\n",
+ spectre_v2_strings[spectre_v2_enabled],
+ ibpb_state(),
+ boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+ stibp_state(),
+ boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
++ pbrsb_eibrs_state(),
+ spectre_v2_module_string());
+ }
+
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 736262a76a12b..64a73f415f036 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1135,6 +1135,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #define NO_SWAPGS BIT(6)
+ #define NO_ITLB_MULTIHIT BIT(7)
+ #define NO_SPECTRE_V2 BIT(8)
++#define NO_EIBRS_PBRSB BIT(9)
+
+ #define VULNWL(vendor, family, model, whitelist) \
+ X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, whitelist)
+@@ -1177,7 +1178,7 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+
+ VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ VULNWL_INTEL(ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+
+ /*
+ * Technically, swapgs isn't serializing on AMD (despite it previously
+@@ -1187,7 +1188,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ * good enough for our purposes.
+ */
+
+- VULNWL_INTEL(ATOM_TREMONT_D, NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_TREMONT, NO_EIBRS_PBRSB),
++ VULNWL_INTEL(ATOM_TREMONT_L, NO_EIBRS_PBRSB),
++ VULNWL_INTEL(ATOM_TREMONT_D, NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+
+ /* AMD Family 0xf - 0x12 */
+ VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+@@ -1365,6 +1368,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ setup_force_cpu_bug(X86_BUG_RETBLEED);
+ }
+
++ if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
++ !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
++ !(ia32_cap & ARCH_CAP_PBRSB_NO))
++ setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
++
+ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
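+
The ARCH_CAP_PBRSB_NO bit tested above lives in MSR_IA32_ARCH_CAPABILITIES (0x10a). As a hedged sketch, with the msr module loaded the bit can be inspected from userspace as root (the pread fails with EIO on CPUs that do not expose this MSR):

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define MSR_IA32_ARCH_CAPABILITIES 0x10a
    #define ARCH_CAP_PBRSB_NO (1ULL << 24)

    int main(void)
    {
            uint64_t val;
            int fd = open("/dev/cpu/0/msr", O_RDONLY); /* needs modprobe msr + root */

            if (fd < 0 || pread(fd, &val, sizeof(val),
                                MSR_IA32_ARCH_CAPABILITIES) != sizeof(val)) {
                    perror("msr");
                    return 1;
            }
            printf("PBRSB_NO: %s\n", (val & ARCH_CAP_PBRSB_NO) ? "set" : "clear");
            close(fd);
            return 0;
    }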
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 4182c7ffc9091..6de96b9438044 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -227,11 +227,13 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
+ * entries and (in some cases) RSB underflow.
+ *
+ * eIBRS has its own protection against poisoned RSB, so it doesn't
+- * need the RSB filling sequence. But it does need to be enabled
+- * before the first unbalanced RET.
++ * need the RSB filling sequence. But it does need to be enabled, and a
++ * single call to retire, before the first unbalanced RET.
+ */
+
+- FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
++ FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
++ X86_FEATURE_RSB_VMEXIT_LITE
++
+
+ pop %_ASM_ARG2 /* @flags */
+ pop %_ASM_ARG1 /* @vmx */
+diff --git a/block/blk-ioc.c b/block/blk-ioc.c
+index df9cfe4ca5328..63fc020424082 100644
+--- a/block/blk-ioc.c
++++ b/block/blk-ioc.c
+@@ -247,6 +247,8 @@ static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
+ INIT_HLIST_HEAD(&ioc->icq_list);
+ INIT_WORK(&ioc->release_work, ioc_release_fn);
+ #endif
++ ioc->ioprio = IOPRIO_DEFAULT;
++
+ return ioc;
+ }
+
+diff --git a/block/ioprio.c b/block/ioprio.c
+index 2fe068fcaad58..2a34cbca18aed 100644
+--- a/block/ioprio.c
++++ b/block/ioprio.c
+@@ -157,9 +157,9 @@ out:
+ int ioprio_best(unsigned short aprio, unsigned short bprio)
+ {
+ if (!ioprio_valid(aprio))
+- aprio = IOPRIO_DEFAULT;
++ aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_BE_NORM);
+ if (!ioprio_valid(bprio))
+- bprio = IOPRIO_DEFAULT;
++ bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_BE_NORM);
+
+ return min(aprio, bprio);
+ }
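+
Re-inlining the BE/NORM value here is deliberate and pairs with the include/linux/ioprio.h hunk later in this patch: once IOPRIO_DEFAULT reverts to class NONE, its numeric encoding is 0 and min() would treat it as the highest possible priority. A quick worked check of the encoding (class in the bits above a shift of 13):

    IOPRIO_PRIO_VALUE(class, data)                     = (class << 13) | data
    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0)            = (0 << 13) | 0 = 0
    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_BE_NORM) = (2 << 13) | 4 = 16388

Since min(0, x) is always 0, an unset priority would otherwise win every comparison.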
+diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c
+index 598fd19b65fa4..45973aa6e06d4 100644
+--- a/drivers/acpi/apei/bert.c
++++ b/drivers/acpi/apei/bert.c
+@@ -29,16 +29,26 @@
+
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
++
++#define ACPI_BERT_PRINT_MAX_RECORDS 5
+ #define ACPI_BERT_PRINT_MAX_LEN 1024
+
+ static int bert_disable;
+
++/*
++ * Print "all" the error records in the BERT table, but avoid huge spam to
++ * the console if the BIOS included oversize records, or too many records.
++ * Skipping some records here does not lose anything because the full
++ * data is available to user tools in:
++ * /sys/firmware/acpi/tables/data/BERT
++ */
+ static void __init bert_print_all(struct acpi_bert_region *region,
+ unsigned int region_len)
+ {
+ struct acpi_hest_generic_status *estatus =
+ (struct acpi_hest_generic_status *)region;
+ int remain = region_len;
++ int printed = 0, skipped = 0;
+ u32 estatus_len;
+
+ while (remain >= sizeof(struct acpi_bert_region)) {
+@@ -46,24 +56,26 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ if (remain < estatus_len) {
+ pr_err(FW_BUG "Truncated status block (length: %u).\n",
+ estatus_len);
+- return;
++ break;
+ }
+
+ /* No more error records. */
+ if (!estatus->block_status)
+- return;
++ break;
+
+ if (cper_estatus_check(estatus)) {
+ pr_err(FW_BUG "Invalid error record.\n");
+- return;
++ break;
+ }
+
+- pr_info_once("Error records from previous boot:\n");
+- if (region_len < ACPI_BERT_PRINT_MAX_LEN)
++ if (estatus_len < ACPI_BERT_PRINT_MAX_LEN &&
++ printed < ACPI_BERT_PRINT_MAX_RECORDS) {
++ pr_info_once("Error records from previous boot:\n");
+ cper_estatus_print(KERN_INFO HW_ERR, estatus);
+- else
+- pr_info_once("Max print length exceeded, table data is available at:\n"
+- "/sys/firmware/acpi/tables/data/BERT");
++ printed++;
++ } else {
++ skipped++;
++ }
+
+ /*
+ * Because the boot error source is "one-time polled" type,
+@@ -75,6 +87,9 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ estatus = (void *)estatus + estatus_len;
+ remain -= estatus_len;
+ }
++
++ if (skipped)
++ pr_info(HW_ERR "Skipped %d error records\n", skipped);
+ }
+
+ static int __init setup_bert_disable(char *str)
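+
As the new comment stresses, skipped records are not lost: the raw table remains readable from sysfs. A minimal dump sketch (a hypothetical example program; the file exists only when firmware actually publishes a BERT, and reading it typically requires root):

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/firmware/acpi/tables/data/BERT", "rb");
            unsigned char buf[4096];
            size_t n;

            if (!f) {
                    perror("BERT");
                    return 1;
            }
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                    fwrite(buf, 1, n, stdout); /* raw generic error status blocks */
            fclose(f);
            return 0;
    }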
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index becc198e4c224..6615f59ab7fd2 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -430,7 +430,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ },
+ },
+@@ -438,59 +437,75 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+- DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+- DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+- .ident = "Clevo NL5xRU",
++ .ident = "Clevo NL5xNU",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+- DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
+ },
+ },
++ /*
++ * The TongFang PF5PU1G, PF4NU1F, PF5NU1G, and PF5LUXG/TUXEDO BA15 Gen10,
++ * Pulse 14/15 Gen1, and Pulse 15 Gen2 have the same problem as the Clevo
++ * NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2. See the description
++ * above.
++ */
+ {
+ .callback = video_detect_force_native,
+- .ident = "Clevo NL5xRU",
++ .ident = "TongFang PF5PU1G",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+- DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++ DMI_MATCH(DMI_BOARD_NAME, "PF5PU1G"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+- .ident = "Clevo NL5xNU",
++ .ident = "TongFang PF4NU1F",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PF4NU1F"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "TongFang PF4NU1F",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+- DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ DMI_MATCH(DMI_BOARD_NAME, "PULSE1401"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+- .ident = "Clevo NL5xNU",
++ .ident = "TongFang PF5NU1G",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+- DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ DMI_MATCH(DMI_BOARD_NAME, "PF5NU1G"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+- .ident = "Clevo NL5xNU",
++ .ident = "TongFang PF5NU1G",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+- DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "PULSE1501"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "TongFang PF5LUXG",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PF5LUXG"),
+ },
+ },
+-
+ /*
+ * Desktops which falsely report a backlight and which our heuristics
+ * for this do not catch.
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index de5bd02cad447..e3cff01201b80 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -4057,7 +4057,7 @@ static int mv_platform_probe(struct platform_device *pdev)
+ /*
+ * Simple resource validation ..
+ */
+- if (unlikely(pdev->num_resources != 2)) {
++ if (unlikely(pdev->num_resources != 1)) {
+ dev_err(&pdev->dev, "invalid number of resources\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 76fbb046bdbe8..c9cda681c691e 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -454,6 +454,8 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ { 0x6606, "BCM4345C5" }, /* 003.006.006 */
+ { 0x230f, "BCM4356A2" }, /* 001.003.015 */
+ { 0x220e, "BCM20702A1" }, /* 001.002.014 */
++ { 0x420d, "BCM4349B1" }, /* 002.002.013 */
++ { 0x420e, "BCM4349B1" }, /* 002.002.014 */
+ { 0x4217, "BCM4329B1" }, /* 002.002.023 */
+ { 0x6106, "BCM4359C0" }, /* 003.001.006 */
+ { 0x4106, "BCM4335A0" }, /* 002.001.006 */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e25fcd49db702..aaba2d7371781 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -427,6 +427,18 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x04ca, 0x4006), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+
++ /* Realtek 8852CE Bluetooth devices */
++ { USB_DEVICE(0x04ca, 0x4007), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x04c5, 0x1675), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0cb8, 0xc558), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3587), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3586), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++
+ /* Realtek Bluetooth devices */
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+ .driver_info = BTUSB_REALTEK },
+@@ -477,6 +489,9 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH |
+ BTUSB_VALID_LE_STATES },
++ { USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH |
++ BTUSB_VALID_LE_STATES },
+
+ /* Additional Realtek 8723AE Bluetooth devices */
+ { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 785f445dd60d5..49bed66b8c84e 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -1544,8 +1544,10 @@ static const struct of_device_id bcm_bluetooth_of_match[] = {
+ { .compatible = "brcm,bcm43430a0-bt" },
+ { .compatible = "brcm,bcm43430a1-bt" },
+ { .compatible = "brcm,bcm43438-bt", .data = &bcm43438_device_data },
++ { .compatible = "brcm,bcm4349-bt", .data = &bcm43438_device_data },
+ { .compatible = "brcm,bcm43540-bt", .data = &bcm4354_device_data },
+ { .compatible = "brcm,bcm4335a0" },
++ { .compatible = "infineon,cyw55572-bt" },
+ { },
+ };
+ MODULE_DEVICE_TABLE(of, bcm_bluetooth_of_match);
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index eab34e24d9446..8df11016fd51b 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1588,7 +1588,7 @@ static bool qca_wakeup(struct hci_dev *hdev)
+ wakeup = device_may_wakeup(hu->serdev->ctrl->dev.parent);
+ bt_dev_dbg(hu->hdev, "wakeup status : %d", wakeup);
+
+- return !wakeup;
++ return wakeup;
+ }
+
+ static int qca_regulator_init(struct hci_uart *hu)
+diff --git a/drivers/macintosh/adb.c b/drivers/macintosh/adb.c
+index 439fab4eaa850..1bbb9ca08d40f 100644
+--- a/drivers/macintosh/adb.c
++++ b/drivers/macintosh/adb.c
+@@ -647,7 +647,7 @@ do_adb_query(struct adb_request *req)
+
+ switch(req->data[1]) {
+ case ADB_QUERY_GETDEVINFO:
+- if (req->nbytes < 3)
++ if (req->nbytes < 3 || req->data[2] >= 16)
+ break;
+ mutex_lock(&adb_handler_mutex);
+ req->reply[0] = adb_handler[req->data[2]].original_address;
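+
For context: adb_handler[] in this driver is a fixed 16-entry table (ADB device addresses are 4 bits wide), so the added req->data[2] >= 16 check closes an out-of-bounds read that was reachable from userspace through the adb character device.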
+diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
+index 3f53bc27a19bf..3d088a88f8320 100644
+--- a/include/linux/ioprio.h
++++ b/include/linux/ioprio.h
+@@ -11,7 +11,7 @@
+ /*
+ * Default IO priority.
+ */
+-#define IOPRIO_DEFAULT IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_BE_NORM)
++#define IOPRIO_DEFAULT IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0)
+
+ /*
+ * Check that a priority value has a valid class.
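+
The userspace-visible effect of this revert to class NONE can be checked with the ioprio_get syscall: a task that never called ioprio_set() reports class 0 again, restoring the historical behaviour. A small sketch, with the constants mirrored from include/uapi/linux/ioprio.h:

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define IOPRIO_WHO_PROCESS 1
    #define IOPRIO_CLASS_SHIFT 13
    #define IOPRIO_PRIO_CLASS(mask) ((mask) >> IOPRIO_CLASS_SHIFT)

    int main(void)
    {
            int prio = syscall(SYS_ioprio_get, IOPRIO_WHO_PROCESS, 0); /* 0 == self */

            if (prio < 0) {
                    perror("ioprio_get");
                    return 1;
            }
            /* 0 == IOPRIO_CLASS_NONE, 1 == RT, 2 == BE, 3 == IDLE */
            printf("class=%d data=%d\n", IOPRIO_PRIO_CLASS(prio),
                   prio & ((1 << IOPRIO_CLASS_SHIFT) - 1));
            return 0;
    }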
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index a77b915d36a8e..8323ac5b7eee5 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -303,6 +303,7 @@
+ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
+ #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
+ #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
++#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
+
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index cc615be27a54b..e057e039173cb 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -150,6 +150,10 @@
+ * are restricted to targets in
+ * kernel.
+ */
++#define ARCH_CAP_PBRSB_NO BIT(24) /*
++ * Not susceptible to Post-Barrier
++ * Return Stack Buffer Predictions.
++ */
+
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index 9b68658b6bb85..5b98f3ee58a58 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -233,6 +233,24 @@ static unsigned long read_slab_obj(struct slabinfo *s, const char *name)
+ return l;
+ }
+
++static unsigned long read_debug_slab_obj(struct slabinfo *s, const char *name)
++{
++ char x[128];
++ FILE *f;
++ size_t l;
++
++ snprintf(x, 128, "/sys/kernel/debug/slab/%s/%s", s->name, name);
++ f = fopen(x, "r");
++ if (!f) {
++ buffer[0] = 0;
++ l = 0;
++ } else {
++ l = fread(buffer, 1, sizeof(buffer), f);
++ buffer[l] = 0;
++ fclose(f);
++ }
++ return l;
++}
+
+ /*
+ * Put a size string together
+@@ -409,14 +427,18 @@ static void show_tracking(struct slabinfo *s)
+ {
+ printf("\n%s: Kernel object allocation\n", s->name);
+ printf("-----------------------------------------------------------------------\n");
+- if (read_slab_obj(s, "alloc_calls"))
++ if (read_debug_slab_obj(s, "alloc_traces"))
++ printf("%s", buffer);
++ else if (read_slab_obj(s, "alloc_calls"))
+ printf("%s", buffer);
+ else
+ printf("No Data\n");
+
+ printf("\n%s: Kernel object freeing\n", s->name);
+ printf("------------------------------------------------------------------------\n");
+- if (read_slab_obj(s, "free_calls"))
++ if (read_debug_slab_obj(s, "free_traces"))
++ printf("%s", buffer);
++ else if (read_slab_obj(s, "free_calls"))
+ printf("%s", buffer);
+ else
+ printf("No Data\n");
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-17 14:30 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-17 14:30 UTC (permalink / raw
To: gentoo-commits
commit: c5708be894c34baca3cfbe5d0ccff1674ad3c365
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 17 14:30:09 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 17 14:30:09 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c5708be8
Linux patch 5.19.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-5.19.2.patch | 84691 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 84695 insertions(+)
diff --git a/0000_README b/0000_README
index 6335a155..8f7da639 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-5.19.1.patch
From: http://www.kernel.org
Desc: Linux 5.19.1
+Patch: 1001_linux-5.19.2.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-5.19.2.patch b/1001_linux-5.19.2.patch
new file mode 100644
index 00000000..b27d1555
--- /dev/null
+++ b/1001_linux-5.19.2.patch
@@ -0,0 +1,84691 @@
+diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
+index 7faf719af1650..fac0f429a869f 100644
+--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
++++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
+@@ -42,5 +42,5 @@ KernelVersion: 5.10
+ Contact: Maximilian Heyne <mheyne@amazon.de>
+ Description:
+ Whether to enable the persistent grants feature or not. Note
+- that this option only takes effect on newly created backends.
++ that this option only takes effect on newly connected backends.
+ The default is Y (enable).
+diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkfront b/Documentation/ABI/testing/sysfs-driver-xen-blkfront
+index 7f646c58832e6..4d36c5a10546e 100644
+--- a/Documentation/ABI/testing/sysfs-driver-xen-blkfront
++++ b/Documentation/ABI/testing/sysfs-driver-xen-blkfront
+@@ -15,5 +15,5 @@ KernelVersion: 5.10
+ Contact: Maximilian Heyne <mheyne@amazon.de>
+ Description:
+ Whether to enable the persistent grants feature or not. Note
+- that this option only takes effect on newly created frontends.
++ that this option only takes effect on newly connected frontends.
+ The default is Y (enable).
+diff --git a/Documentation/admin-guide/device-mapper/writecache.rst b/Documentation/admin-guide/device-mapper/writecache.rst
+index 10429779a91ab..724e028d1858b 100644
+--- a/Documentation/admin-guide/device-mapper/writecache.rst
++++ b/Documentation/admin-guide/device-mapper/writecache.rst
+@@ -78,16 +78,16 @@ Status:
+ 2. the number of blocks
+ 3. the number of free blocks
+ 4. the number of blocks under writeback
+-5. the number of read requests
+-6. the number of read requests that hit the cache
+-7. the number of write requests
+-8. the number of write requests that hit uncommitted block
+-9. the number of write requests that hit committed block
+-10. the number of write requests that bypass the cache
+-11. the number of write requests that are allocated in the cache
++5. the number of read blocks
++6. the number of read blocks that hit the cache
++7. the number of write blocks
++8. the number of write blocks that hit uncommitted block
++9. the number of write blocks that hit committed block
++10. the number of write blocks that bypass the cache
++11. the number of write blocks that are allocated in the cache
+ 12. the number of write requests that are blocked on the freelist
+ 13. the number of flush requests
+-14. the number of discard requests
++14. the number of discarded blocks
+
+ Messages:
+ flush
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index cc3ea8febc623..e4fe443bea77d 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5203,20 +5203,33 @@
+ Speculative Code Execution with Return Instructions)
+ vulnerability.
+
++ AMD-based UNRET and IBPB mitigations alone do not stop
++ sibling threads from influencing the predictions of other
++ sibling threads. For that reason, STIBP is used on pro-
++ cessors that support it, and mitigate SMT on processors
++ that don't.
++
+ off - no mitigation
+ auto - automatically select a mitigation
+ auto,nosmt - automatically select a mitigation,
+ disabling SMT if necessary for
+ the full mitigation (only on Zen1
+ and older without STIBP).
+- ibpb - mitigate short speculation windows on
+- basic block boundaries too. Safe, highest
+- perf impact.
+- unret - force enable untrained return thunks,
+- only effective on AMD f15h-f17h
+- based systems.
+- unret,nosmt - like unret, will disable SMT when STIBP
+- is not available.
++ ibpb - On AMD, mitigate short speculation
++ windows on basic block boundaries too.
++ Safe, highest perf impact. It also
++ enables STIBP if present. Not suitable
++ on Intel.
++ ibpb,nosmt - Like "ibpb" above but will disable SMT
++ when STIBP is not available. This is
++ the alternative for systems which do not
++ have STIBP.
++ unret - Force enable untrained return thunks,
++ only effective on AMD f15h-f17h based
++ systems.
++ unret,nosmt - Like unret, but will disable SMT when STIBP
++ is not available. This is the alternative for
++ systems which do not have STIBP.
+
+ Selecting 'auto' will choose a mitigation method at run
+ time according to the CPU.
+diff --git a/Documentation/admin-guide/pm/cpuidle.rst b/Documentation/admin-guide/pm/cpuidle.rst
+index aec2cd2aaea73..19754beb5a4e6 100644
+--- a/Documentation/admin-guide/pm/cpuidle.rst
++++ b/Documentation/admin-guide/pm/cpuidle.rst
+@@ -612,8 +612,8 @@ the ``menu`` governor to be used on the systems that use the ``ladder`` governor
+ by default this way, for example.
+
+ The other kernel command line parameters controlling CPU idle time management
+-described below are only relevant for the *x86* architecture and some of
+-them affect Intel processors only.
++described below are only relevant for the *x86* architecture and references
++to ``intel_idle`` affect Intel processors only.
+
+ The *x86* architecture support code recognizes three kernel command line
+ options related to CPU idle time management: ``idle=poll``, ``idle=halt``,
+@@ -635,10 +635,13 @@ idle, so it very well may hurt single-thread computations performance as well as
+ energy-efficiency. Thus using it for performance reasons may not be a good idea
+ at all.]
+
+-The ``idle=nomwait`` option disables the ``intel_idle`` driver and causes
+-``acpi_idle`` to be used (as long as all of the information needed by it is
+-there in the system's ACPI tables), but it is not allowed to use the
+-``MWAIT`` instruction of the CPUs to ask the hardware to enter idle states.
++The ``idle=nomwait`` option prevents the use of ``MWAIT`` instruction of
++the CPU to enter idle states. When this option is used, the ``acpi_idle``
++driver will use the ``HLT`` instruction instead of ``MWAIT``. On systems
++running Intel processors, this option disables the ``intel_idle`` driver
++and forces the use of the ``acpi_idle`` driver instead. Note that in either
++case, ``acpi_idle`` driver will function only if all the information needed
++by it is in the system's ACPI tables.
+
+ In addition to the architecture-level kernel command line options affecting CPU
+ idle time management, there are parameters affecting individual ``CPUIdle``
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index d27db84d585ed..0b4235b1f8c46 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -82,10 +82,14 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A57 | #1319537 | ARM64_ERRATUM_1319367 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A57 | #1742098 | ARM64_ERRATUM_1742098 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A72 | #853709 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A72 | #1319367 | ARM64_ERRATUM_1319367 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A72 | #1655431 | ARM64_ERRATUM_1742098 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A76 | #1188873,1418040| ARM64_ERRATUM_1418040 |
+diff --git a/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml b/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
+index 77f174eee424f..2ebaa43eb62e9 100644
+--- a/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
++++ b/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
+@@ -24,6 +24,15 @@ properties:
+ clock-names:
+ const: ldb
+
++ reg:
++ minItems: 2
++ maxItems: 2
++
++ reg-names:
++ items:
++ - const: ldb
++ - const: lvds
++
+ ports:
+ $ref: /schemas/graph.yaml#/properties/ports
+
+@@ -56,10 +65,15 @@ examples:
+ #include <dt-bindings/clock/imx8mp-clock.h>
+
+ blk-ctrl {
+- bridge {
++ #address-cells = <1>;
++ #size-cells = <1>;
++
++ bridge@5c {
+ compatible = "fsl,imx8mp-ldb";
+ clocks = <&clk IMX8MP_CLK_MEDIA_LDB>;
+ clock-names = "ldb";
++ reg = <0x5c 0x4>, <0x128 0x4>;
++ reg-names = "ldb", "lvds";
+
+ ports {
+ #address-cells = <1>;
+diff --git a/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml b/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
+index e4236334e7489..31a3ce208e1a1 100644
+--- a/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
++++ b/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
+@@ -17,6 +17,9 @@ description:
+ properties:
+ compatible:
+ oneOf:
++ - enum:
++ - qcom,sdhci-msm-v4
++ deprecated: true
+ - items:
+ - enum:
+ - qcom,apq8084-sdhci
+@@ -27,6 +30,9 @@ properties:
+ - qcom,msm8992-sdhci
+ - qcom,msm8994-sdhci
+ - qcom,msm8996-sdhci
++ - const: qcom,sdhci-msm-v4 # for sdcc versions less than 5.0
++ - items:
++ - enum:
+ - qcom,qcs404-sdhci
+ - qcom,sc7180-sdhci
+ - qcom,sc7280-sdhci
+@@ -38,12 +44,7 @@ properties:
+ - qcom,sm6350-sdhci
+ - qcom,sm8150-sdhci
+ - qcom,sm8250-sdhci
+- - enum:
+- - qcom,sdhci-msm-v4 # for sdcc versions less than 5.0
+- - qcom,sdhci-msm-v5 # for sdcc version 5.0
+- - items:
+- - const: qcom,sdhci-msm-v4 # Deprecated (only for backward compatibility)
+- # for sdcc versions less than 5.0
++ - const: qcom,sdhci-msm-v5 # for sdcc version 5.0
+
+ reg:
+ minItems: 1
+@@ -53,6 +54,28 @@ properties:
+ - description: CQE register map
+ - description: Inline Crypto Engine register map
+
++ reg-names:
++ minItems: 1
++ maxItems: 4
++ oneOf:
++ - items:
++ - const: hc_mem
++ - items:
++ - const: hc_mem
++ - const: core_mem
++ - items:
++ - const: hc_mem
++ - const: cqe_mem
++ - items:
++ - const: hc_mem
++ - const: cqe_mem
++ - const: ice_mem
++ - items:
++ - const: hc_mem
++ - const: core_mem
++ - const: cqe_mem
++ - const: ice_mem
++
+ clocks:
+ minItems: 3
+ items:
+@@ -121,6 +144,16 @@ properties:
+ description: A phandle to sdhci power domain node
+ maxItems: 1
+
++ mmc-ddr-1_8v: true
++
++ mmc-hs200-1_8v: true
++
++ mmc-hs400-1_8v: true
++
++ bus-width: true
++
++ max-frequency: true
++
+ patternProperties:
+ '^opp-table(-[a-z0-9]+)?$':
+ if:
+@@ -140,7 +173,10 @@ required:
+ - clock-names
+ - interrupts
+
+-additionalProperties: true
++allOf:
++ - $ref: mmc-controller.yaml#
++
++unevaluatedProperties: false
+
+ examples:
+ - |
+@@ -149,7 +185,7 @@ examples:
+ #include <dt-bindings/clock/qcom,rpmh.h>
+ #include <dt-bindings/power/qcom-rpmpd.h>
+
+- sdhc_2: sdhci@8804000 {
++ sdhc_2: mmc@8804000 {
+ compatible = "qcom,sm8250-sdhci", "qcom,sdhci-msm-v5";
+ reg = <0 0x08804000 0 0x1000>;
+
+diff --git a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+index e2d330bd4608a..69cdab18d6294 100644
+--- a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
++++ b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+@@ -46,7 +46,7 @@ properties:
+ const: 2
+
+ cache-sets:
+- const: 1024
++ enum: [1024, 2048]
+
+ cache-size:
+ const: 2097152
+@@ -84,6 +84,8 @@ then:
+ description: |
+ Must contain entries for DirError, DataError and DataFail signals.
+ maxItems: 3
++ cache-sets:
++ const: 1024
+
+ else:
+ properties:
+@@ -91,6 +93,8 @@ else:
+ description: |
+ Must contain entries for DirError, DataError, DataFail, DirFail signals.
+ minItems: 4
++ cache-sets:
++ const: 2048
+
+ additionalProperties: false
+
+diff --git a/Documentation/filesystems/ext4/blockmap.rst b/Documentation/filesystems/ext4/blockmap.rst
+index 2bd990402a5c4..cc596541ce792 100644
+--- a/Documentation/filesystems/ext4/blockmap.rst
++++ b/Documentation/filesystems/ext4/blockmap.rst
+@@ -1,7 +1,7 @@
+ .. SPDX-License-Identifier: GPL-2.0
+
+ +---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-| i.i_block Offset | Where It Points |
++| i.i_block Offset | Where It Points |
+ +=====================+==============================================================================================================================================================================================================================+
+ | 0 to 11 | Direct map to file blocks 0 to 11. |
+ +---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+index 6183f43f4d73f..004b0ec62c448 100644
+--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
++++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+@@ -2997,7 +2997,7 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+ * - __u8
+ - ``colour_plane_id``
+ -
+- * - __u16
++ * - __s32
+ - ``slice_pic_order_cnt``
+ -
+ * - __u8
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 64379c699903b..08620b9a44fc7 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -7773,9 +7773,6 @@ F: include/linux/fs.h
+ F: include/linux/fs_types.h
+ F: include/uapi/linux/fs.h
+ F: include/uapi/linux/openat2.h
+-X: fs/io-wq.c
+-X: fs/io-wq.h
+-X: fs/io_uring.c
+
+ FINTEK F75375S HARDWARE MONITOR AND FAN CONTROLLER DRIVER
+ M: Riku Voipio <riku.voipio@iki.fi>
+@@ -10476,9 +10473,7 @@ L: io-uring@vger.kernel.org
+ S: Maintained
+ T: git git://git.kernel.dk/linux-block
+ T: git git://git.kernel.dk/liburing
+-F: fs/io-wq.c
+-F: fs/io-wq.h
+-F: fs/io_uring.c
++F: io_uring/
+ F: include/linux/io_uring.h
+ F: include/uapi/linux/io_uring.h
+ F: tools/io_uring/
+diff --git a/Makefile b/Makefile
+index 3acb329035eb9..e2edc38ce52c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+@@ -1033,6 +1033,11 @@ KBUILD_CFLAGS += $(KCFLAGS)
+ KBUILD_LDFLAGS_MODULE += --build-id=sha1
+ LDFLAGS_vmlinux += --build-id=sha1
+
++KBUILD_LDFLAGS += -z noexecstack
++ifeq ($(CONFIG_LD_IS_BFD),y)
++KBUILD_LDFLAGS += $(call ld-option,--no-warn-rwx-segments)
++endif
++
+ ifeq ($(CONFIG_STRIP_ASM_SYMS),y)
+ LDFLAGS_vmlinux += $(call ld-option, -X,)
+ endif
+@@ -1097,6 +1102,7 @@ export MODULES_NSDEPS := $(extmod_prefix)modules.nsdeps
+ ifeq ($(KBUILD_EXTMOD),)
+ core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/
+ core-$(CONFIG_BLOCK) += block/
++core-$(CONFIG_IO_URING) += io_uring/
+
+ vmlinux-dirs := $(patsubst %/,%,$(filter %/, \
+ $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
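+
The -z noexecstack addition marks the kernel image's GNU_STACK program header non-executable, while --no-warn-rwx-segments merely silences the corresponding warning from newer BFD linkers. Assuming a binutils readelf is at hand, the result can be confirmed by running readelf -lW vmlinux and checking that the GNU_STACK segment is flagged RW rather than RWE.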
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 71b9272acb28b..5ea3e3838c211 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -223,6 +223,9 @@ config HAVE_FUNCTION_DESCRIPTORS
+ config TRACE_IRQFLAGS_SUPPORT
+ bool
+
++config TRACE_IRQFLAGS_NMI_SUPPORT
++ bool
++
+ #
+ # An arch should select this if it provides all these things:
+ #
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index 5112f493f4946..27eec8e670ecd 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -135,6 +135,7 @@ dtb-$(CONFIG_ARCH_BCM_5301X) += \
+ bcm47094-luxul-xwr-3150-v1.dtb \
+ bcm47094-netgear-r8500.dtb \
+ bcm47094-phicomm-k3.dtb \
++ bcm53015-meraki-mr26.dtb \
+ bcm53016-meraki-mr32.dtb \
+ bcm94708.dtb \
+ bcm94709.dtb \
+diff --git a/arch/arm/boot/dts/aspeed-ast2500-evb.dts b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+index 1d24b394ea4c3..a497dd135491b 100644
+--- a/arch/arm/boot/dts/aspeed-ast2500-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+@@ -5,7 +5,7 @@
+
+ / {
+ model = "AST2500 EVB";
+- compatible = "aspeed,ast2500";
++ compatible = "aspeed,ast2500-evb", "aspeed,ast2500";
+
+ aliases {
+ serial4 = &uart5;
+diff --git a/arch/arm/boot/dts/aspeed-ast2600-evb-a1.dts b/arch/arm/boot/dts/aspeed-ast2600-evb-a1.dts
+index dd7148060c4a3..d0a5c2ff0fec4 100644
+--- a/arch/arm/boot/dts/aspeed-ast2600-evb-a1.dts
++++ b/arch/arm/boot/dts/aspeed-ast2600-evb-a1.dts
+@@ -5,6 +5,7 @@
+
+ / {
+ model = "AST2600 A1 EVB";
++ compatible = "aspeed,ast2600-evb-a1", "aspeed,ast2600";
+
+ /delete-node/regulator-vcc-sdhci0;
+ /delete-node/regulator-vcc-sdhci1;
+diff --git a/arch/arm/boot/dts/aspeed-ast2600-evb.dts b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+index 5a6063bd4508d..c698e65382693 100644
+--- a/arch/arm/boot/dts/aspeed-ast2600-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+@@ -8,7 +8,7 @@
+
+ / {
+ model = "AST2600 EVB";
+- compatible = "aspeed,ast2600";
++ compatible = "aspeed,ast2600-evb-a1", "aspeed,ast2600";
+
+ aliases {
+ serial4 = &uart5;
+diff --git a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+new file mode 100644
+index 0000000000000..14f58033efeb9
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+@@ -0,0 +1,166 @@
++// SPDX-License-Identifier: GPL-2.0-or-later OR MIT
++/*
++ * Broadcom BCM470X / BCM5301X ARM platform code.
++ * DTS for Meraki MR26 / Codename: Venom
++ *
++ * Copyright (C) 2022 Christian Lamparter <chunkeey@gmail.com>
++ */
++
++/dts-v1/;
++
++#include "bcm4708.dtsi"
++#include "bcm5301x-nand-cs0-bch8.dtsi"
++#include <dt-bindings/leds/common.h>
++
++/ {
++ compatible = "meraki,mr26", "brcm,bcm53015", "brcm,bcm4708";
++ model = "Meraki MR26";
++
++ memory@0 {
++ reg = <0x00000000 0x08000000>;
++ device_type = "memory";
++ };
++
++ leds {
++ compatible = "gpio-leds";
++
++ led-0 {
++ function = LED_FUNCTION_FAULT;
++ color = <LED_COLOR_ID_AMBER>;
++ gpios = <&chipcommon 13 GPIO_ACTIVE_HIGH>;
++ panic-indicator;
++ };
++ led-1 {
++ function = LED_FUNCTION_INDICATOR;
++ color = <LED_COLOR_ID_WHITE>;
++ gpios = <&chipcommon 12 GPIO_ACTIVE_HIGH>;
++ };
++ };
++
++ keys {
++ compatible = "gpio-keys";
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ key-restart {
++ label = "Reset";
++ linux,code = <KEY_RESTART>;
++ gpios = <&chipcommon 11 GPIO_ACTIVE_LOW>;
++ };
++ };
++};
++
++&uart0 {
++ clock-frequency = <50000000>;
++ /delete-property/ clocks;
++};
++
++&uart1 {
++ status = "disabled";
++};
++
++&gmac0 {
++ status = "okay";
++};
++
++&gmac1 {
++ status = "disabled";
++};
++&gmac2 {
++ status = "disabled";
++};
++&gmac3 {
++ status = "disabled";
++};
++
++&nandcs {
++ nand-ecc-algo = "hw";
++
++ partitions {
++ compatible = "fixed-partitions";
++ #address-cells = <0x1>;
++ #size-cells = <0x1>;
++
++ partition@0 {
++ label = "u-boot";
++ reg = <0x0 0x200000>;
++ read-only;
++ };
++
++ partition@200000 {
++ label = "u-boot-env";
++ reg = <0x200000 0x200000>;
++ /* empty */
++ };
++
++ partition@400000 {
++ label = "u-boot-backup";
++ reg = <0x400000 0x200000>;
++ /* empty */
++ };
++
++ partition@600000 {
++ label = "u-boot-env-backup";
++ reg = <0x600000 0x200000>;
++ /* empty */
++ };
++
++ partition@800000 {
++ label = "ubi";
++ reg = <0x800000 0x7780000>;
++ };
++ };
++};
++
++&srab {
++ status = "okay";
++
++ ports {
++ port@0 {
++ reg = <0>;
++ label = "poe";
++ };
++
++ port@5 {
++ reg = <5>;
++ label = "cpu";
++ ethernet = <&gmac0>;
++
++ fixed-link {
++ speed = <1000>;
++ duplex-full;
++ };
++ };
++ };
++};
++
++&i2c0 {
++ status = "okay";
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinmux_i2c>;
++
++ clock-frequency = <100000>;
++
++ ina219@40 {
++ compatible = "ti,ina219"; /* PoE power */
++ reg = <0x40>;
++ shunt-resistor = <60000>; /* = 60 mOhms */
++ };
++
++ eeprom@56 {
++ compatible = "atmel,24c64";
++ reg = <0x56>;
++ pagesize = <32>;
++ read-only;
++ #address-cells = <1>;
++ #size-cells = <1>;
++
++ /* it's empty */
++ };
++};
++
++&thermal {
++ status = "disabled";
++ /* does not work, reads 418 degree Celsius */
++};
+diff --git a/arch/arm/boot/dts/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+index bd763bae596b0..da919d0544a80 100644
+--- a/arch/arm/boot/dts/imx6qdl-apalis.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+@@ -315,7 +315,7 @@
+ /* ADC conversion time: 80 clocks */
+ st,sample-time = <4>;
+
+- stmpe_touchscreen: stmpe-touchscreen {
++ stmpe_touchscreen: stmpe_touchscreen {
+ compatible = "st,stmpe-ts";
+ /* 8 sample average control */
+ st,ave-ctrl = <3>;
+@@ -332,7 +332,7 @@
+ st,touch-det-delay = <5>;
+ };
+
+- stmpe_adc: stmpe-adc {
++ stmpe_adc: stmpe_adc {
+ compatible = "st,stmpe-adc";
+ /* forbid to use ADC channels 3-0 (touch) */
+ st,norequest-mask = <0x0F>;
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index afeec01f65228..eca8bf89ab88f 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -64,20 +64,18 @@
+ clock-frequency = <696000000>;
+ clock-latency = <61036>; /* two CLK32 periods */
+ #cooling-cells = <2>;
+- operating-points = <
++ operating-points =
+ /* kHz uV */
+- 696000 1275000
+- 528000 1175000
+- 396000 1025000
+- 198000 950000
+- >;
+- fsl,soc-operating-points = <
++ <696000 1275000>,
++ <528000 1175000>,
++ <396000 1025000>,
++ <198000 950000>;
++ fsl,soc-operating-points =
+ /* KHz uV */
+- 696000 1275000
+- 528000 1175000
+- 396000 1175000
+- 198000 1175000
+- >;
++ <696000 1275000>,
++ <528000 1175000>,
++ <396000 1175000>,
++ <198000 1175000>;
+ clocks = <&clks IMX6UL_CLK_ARM>,
+ <&clks IMX6UL_CLK_PLL2_BUS>,
+ <&clks IMX6UL_CLK_PLL2_PFD2>,
+@@ -149,6 +147,9 @@
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x20000>;
++ ranges = <0 0x00900000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ };
+
+ intc: interrupt-controller@a01000 {
+@@ -543,7 +544,7 @@
+ };
+
+ kpp: keypad@20b8000 {
+- compatible = "fsl,imx6ul-kpp", "fsl,imx6q-kpp", "fsl,imx21-kpp";
++ compatible = "fsl,imx6ul-kpp", "fsl,imx21-kpp";
+ reg = <0x020b8000 0x4000>;
+ interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_KPP>;
+@@ -998,7 +999,7 @@
+ };
+
+ csi: csi@21c4000 {
+- compatible = "fsl,imx6ul-csi", "fsl,imx7-csi";
++ compatible = "fsl,imx6ul-csi";
+ reg = <0x021c4000 0x4000>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_CSI>;
+@@ -1007,7 +1008,7 @@
+ };
+
+ lcdif: lcdif@21c8000 {
+- compatible = "fsl,imx6ul-lcdif", "fsl,imx28-lcdif";
++ compatible = "fsl,imx6ul-lcdif", "fsl,imx6sx-lcdif";
+ reg = <0x021c8000 0x4000>;
+ interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_LCDIF_PIX>,
+@@ -1028,7 +1029,7 @@
+ qspi: spi@21e0000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+- compatible = "fsl,imx6ul-qspi", "fsl,imx6sx-qspi";
++ compatible = "fsl,imx6ul-qspi";
+ reg = <0x021e0000 0x4000>, <0x60000000 0x10000000>;
+ reg-names = "QuadSPI", "QuadSPI-memory";
+ interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx7-colibri-aster.dtsi b/arch/arm/boot/dts/imx7-colibri-aster.dtsi
+index b770fc9379707..02c49ed686a86 100644
+--- a/arch/arm/boot/dts/imx7-colibri-aster.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri-aster.dtsi
+@@ -4,41 +4,7 @@
+ *
+ */
+
+-
+-#include <dt-bindings/input/input.h>
+-#include <dt-bindings/pwm/pwm.h>
+-
+ / {
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+- gpio-keys {
+- compatible = "gpio-keys";
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_gpiokeys>;
+-
+- power {
+- label = "Wake-Up";
+- gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
+- linux,code = <KEY_WAKEUP>;
+- debounce-interval = <10>;
+- wakeup-source;
+- };
+- };
+-
+- panel: panel {
+- compatible = "edt,et057090dhu";
+- backlight = <&bl>;
+- power-supply = <®_3v3>;
+-
+- port {
+- panel_in: endpoint {
+- remote-endpoint = <&lcdif_out>;
+- };
+- };
+- };
+-
+ reg_3v3: regulator-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "3.3V";
+@@ -77,13 +43,6 @@
+ status = "disabled";
+ };
+
+-&bl {
+- brightness-levels = <0 4 8 16 32 64 128 255>;
+- default-brightness-level = <6>;
+- power-supply = <®_3v3>;
+- status = "okay";
+-};
+-
+ &fec1 {
+ status = "okay";
+ };
+@@ -91,17 +50,6 @@
+ &i2c4 {
+ status = "okay";
+
+- /* Microchip/Atmel maxtouch controller */
+- touchscreen@4a {
+- compatible = "atmel,maxtouch";
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_gpiotouch>;
+- reg = <0x4a>;
+- interrupt-parent = <&gpio2>;
+- interrupts = <15 IRQ_TYPE_EDGE_FALLING>; /* SODIMM 107 */
+- reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* SODIMM 106 */
+- };
+-
+ /* M41T0M6 real time clock on carrier board */
+ rtc: rtc@68 {
+ compatible = "st,m41t0";
+@@ -109,25 +57,6 @@
+ };
+ };
+
+-&iomuxc {
+- pinctrl_gpiotouch: touchgpios {
+- fsl,pins = <
+- MX7D_PAD_EPDC_DATA15__GPIO2_IO15 0x74
+- MX7D_PAD_EPDC_BDR0__GPIO2_IO28 0x14
+- >;
+- };
+-};
+-
+-&lcdif {
+- status = "okay";
+-
+- port {
+- lcdif_out: endpoint {
+- remote-endpoint = <&panel_in>;
+- };
+- };
+-};
+-
+ &pwm1 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi b/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
+index 3b9df8c82ae30..b5f632921df2a 100644
+--- a/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri-eval-v3.dtsi
+@@ -4,48 +4,13 @@
+ */
+
+ / {
+- aliases {
+- rtc0 = &rtc;
+- rtc1 = &snvs_rtc;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+- /* fixed crystal dedicated to mpc258x */
++ /* Fixed crystal dedicated to MCP2515. */
+ clk16m: clk16m {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <16000000>;
+ };
+
+- gpio-keys {
+- compatible = "gpio-keys";
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_gpiokeys>;
+-
+- power {
+- label = "Wake-Up";
+- gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
+- linux,code = <KEY_WAKEUP>;
+- debounce-interval = <10>;
+- wakeup-source;
+- };
+- };
+-
+- panel: panel {
+- compatible = "edt,et057090dhu";
+- backlight = <&bl>;
+- power-supply = <®_3v3>;
+-
+- port {
+- panel_in: endpoint {
+- remote-endpoint = <&lcdif_out>;
+- };
+- };
+- };
+-
+ reg_3v3: regulator-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "3.3V";
+@@ -72,14 +37,6 @@
+ };
+ };
+
+-&bl {
+- brightness-levels = <0 4 8 16 32 64 128 255>;
+- default-brightness-level = <6>;
+- power-supply = <®_3v3>;
+-
+- status = "okay";
+-};
+-
+ &adc1 {
+ status = "okay";
+ };
+@@ -88,6 +45,18 @@
+ status = "okay";
+ };
+
++/*
++ * The Atmel maxtouch controller uses SODIMM 28/30, also used for PWM<B>, PWM<C>, aka pwm2, pwm3.
++ * So if you enable following capacitive touch controller, disable pwm2/pwm3 first.
++ */
++&atmel_mxt_ts {
++ interrupt-parent = <&gpio1>;
++ interrupts = <9 IRQ_TYPE_EDGE_FALLING>; /* SODIMM 28 / INT */
++ pinctrl-0 = <&pinctrl_atmel_adapter>;
++ reset-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>; /* SODIMM 30 / RST */
++ status = "disabled";
++};
++
+ &ecspi3 {
+ status = "okay";
+
+@@ -113,21 +82,6 @@
+ &i2c4 {
+ status = "okay";
+
+- /*
+- * Touchscreen is using SODIMM 28/30, also used for PWM<B>, PWM<C>,
+- * aka pwm2, pwm3. so if you enable touchscreen, disable the pwms
+- */
+- touchscreen@4a {
+- compatible = "atmel,maxtouch";
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_gpiotouch>;
+- reg = <0x4a>;
+- interrupt-parent = <&gpio1>;
+- interrupts = <9 IRQ_TYPE_EDGE_FALLING>; /* SODIMM 28 */
+- reset-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>; /* SODIMM 30 */
+- status = "disabled";
+- };
+-
+ /* M41T0M6 real time clock on carrier board */
+ rtc: rtc@68 {
+ compatible = "st,m41t0";
+@@ -135,16 +89,6 @@
+ };
+ };
+
+-&lcdif {
+- status = "okay";
+-
+- port {
+- lcdif_out: endpoint {
+- remote-endpoint = <&panel_in>;
+- };
+- };
+-};
+-
+ &pwm1 {
+ status = "okay";
+ };
+@@ -183,12 +127,3 @@
+ vmmc-supply = <®_3v3>;
+ status = "okay";
+ };
+-
+-&iomuxc {
+- pinctrl_gpiotouch: touchgpios {
+- fsl,pins = <
+- MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x74
+- MX7D_PAD_GPIO1_IO10__GPIO1_IO10 0x14
+- >;
+- };
+-};
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index f1c60b0cb143e..79f041988c7b3 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -3,13 +3,63 @@
+ * Copyright 2016-2020 Toradex
+ */
+
++#include <dt-bindings/pwm/pwm.h>
++
+ / {
+- bl: backlight {
++ aliases {
++ rtc0 = &rtc;
++ rtc1 = &snvs_rtc;
++ };
++
++ backlight: backlight {
++ brightness-levels = <0 45 63 88 119 158 203 255>;
+ compatible = "pwm-backlight";
++ default-brightness-level = <4>;
++ enable-gpios = <&gpio5 1 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_gpio_bl_on>;
+- pwms = <&pwm1 0 5000000 0>;
+- enable-gpios = <&gpio5 1 GPIO_ACTIVE_HIGH>;
++ power-supply = <®_module_3v3>;
++ pwms = <&pwm1 0 6666667 PWM_POLARITY_INVERTED>;
++ status = "disabled";
++ };
++
++ chosen {
++ stdout-path = "serial0:115200n8";
++ };
++
++ extcon_usbc_det: usbc-det {
++ compatible = "linux,extcon-usb-gpio";
++ debounce = <25>;
++ id-gpio = <&gpio7 14 GPIO_ACTIVE_HIGH>; /* SODIMM 137 / USBC_DET */
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_usbc_det>;
++ };
++
++ gpio-keys {
++ compatible = "gpio-keys";
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_gpiokeys>;
++
++ wakeup {
++ debounce-interval = <10>;
++ gpios = <&gpio1 1 (GPIO_ACTIVE_HIGH | GPIO_PULL_DOWN)>; /* SODIMM 45 */
++ label = "Wake-Up";
++ linux,code = <KEY_WAKEUP>;
++ wakeup-source;
++ };
++ };
++
++ panel_dpi: panel-dpi {
++ backlight = <&backlight>;
++ compatible = "edt,et057090dhu";
++ power-supply = <®_3v3>;
++ status = "disabled";
++
++ port {
++ lcd_panel_in: endpoint {
++ remote-endpoint = <&lcdif_out>;
++ };
++ };
+ };
+
+ reg_module_3v3: regulator-module-3v3 {
+@@ -301,18 +351,19 @@
+ VDDD-supply = <®_DCDC3>;
+ };
+
+- ad7879@2c {
++ ad7879_ts: touchscreen@2c {
++ adi,acquisition-time = /bits/ 8 <1>;
++ adi,averaging = /bits/ 8 <1>;
++ adi,conversion-interval = /bits/ 8 <255>;
++ adi,first-conversion-delay = /bits/ 8 <3>;
++ adi,median-filter-size = /bits/ 8 <2>;
++ adi,resistance-plate-x = <120>;
+ compatible = "adi,ad7879-1";
+- reg = <0x2c>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <13 IRQ_TYPE_EDGE_FALLING>;
++ reg = <0x2c>;
+ touchscreen-max-pressure = <4096>;
+- adi,resistance-plate-x = <120>;
+- adi,first-conversion-delay = /bits/ 8 <3>;
+- adi,acquisition-time = /bits/ 8 <1>;
+- adi,median-filter-size = /bits/ 8 <2>;
+- adi,averaging = /bits/ 8 <1>;
+- adi,conversion-interval = /bits/ 8 <255>;
++ status = "disabled";
+ };
+
+ pmic@33 {
+@@ -392,12 +443,32 @@
+ pinctrl-1 = <&pinctrl_i2c4_recovery>;
+ scl-gpios = <&gpio7 8 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ sda-gpios = <&gpio7 9 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++ status = "disabled";
++
++ /* Atmel maxtouch controller */
++ atmel_mxt_ts: touchscreen@4a {
++ compatible = "atmel,maxtouch";
++ interrupt-parent = <&gpio2>;
++ interrupts = <15 IRQ_TYPE_EDGE_FALLING>; /* SODIMM 107 / INT */
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_atmel_connector>;
++ reg = <0x4a>;
++ reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* SODIMM 106 / RST */
++ status = "disabled";
++ };
+ };
+
+ &lcdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcdif_dat
+ &pinctrl_lcdif_ctrl>;
++ status = "disabled";
++
++ port {
++ lcdif_out: endpoint {
++ remote-endpoint = <&lcd_panel_in>;
++ };
++ };
+ };
+
+ &pwm1 {
+@@ -457,7 +528,8 @@
+ };
+
+ &usbotg1 {
+- dr_mode = "host";
++ dr_mode = "otg";
++ extcon = <0>, <&extcon_usbc_det>;
+ };
+
+ &usdhc1 {
+@@ -485,8 +557,27 @@
+
+ &iomuxc {
+ pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_gpio1 &pinctrl_gpio2 &pinctrl_gpio3 &pinctrl_gpio4
+- &pinctrl_gpio7 &pinctrl_usbc_det>;
++ pinctrl-0 = <&pinctrl_gpio1 &pinctrl_gpio2 &pinctrl_gpio3 &pinctrl_gpio4>;
++
++ /*
++ * Atmel MXT touchscreen + Capacitive Touch Adapter
++ * NOTE: This pin group conflicts with pin groups pinctrl_pwm2/pinctrl_pwm3.
++ * Don't use them simultaneously.
++ */
++ pinctrl_atmel_adapter: atmelconnectorgrp {
++ fsl,pins = <
++ MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x74 /* SODIMM 28 / INT */
++ MX7D_PAD_GPIO1_IO10__GPIO1_IO10 0x14 /* SODIMM 30 / RST */
++ >;
++ };
++
++ /* Atmel MXT touchscreen + boards with built-in Capacitive Touch Connector */
++ pinctrl_atmel_connector: atmeladaptergrp {
++ fsl,pins = <
++ MX7D_PAD_EPDC_BDR0__GPIO2_IO28 0x14 /* SODIMM 106 / RST */
++ MX7D_PAD_EPDC_DATA15__GPIO2_IO15 0x74 /* SODIMM 107 / INT */
++ >;
++ };
+
+ pinctrl_gpio1: gpio1-grp {
+ fsl,pins = <
+@@ -494,8 +585,6 @@
+ MX7D_PAD_EPDC_DATA09__GPIO2_IO9 0x14 /* SODIMM 89 */
+ MX7D_PAD_EPDC_DATA08__GPIO2_IO8 0x74 /* SODIMM 91 */
+ MX7D_PAD_LCD_RESET__GPIO3_IO4 0x14 /* SODIMM 93 */
+- MX7D_PAD_EPDC_DATA13__GPIO2_IO13 0x14 /* SODIMM 95 */
+- MX7D_PAD_ENET1_RGMII_TXC__GPIO7_IO11 0x14 /* SODIMM 99 */
+ MX7D_PAD_EPDC_DATA10__GPIO2_IO10 0x74 /* SODIMM 105 */
+ MX7D_PAD_EPDC_DATA00__GPIO2_IO0 0x14 /* SODIMM 111 */
+ MX7D_PAD_EPDC_DATA01__GPIO2_IO1 0x14 /* SODIMM 113 */
+@@ -729,6 +818,15 @@
+ >;
+ };
+
++ pinctrl_lvds_transceiver: lvdstx {
++ fsl,pins = <
++ MX7D_PAD_ENET1_RGMII_RD2__GPIO7_IO2 0x14 /* SODIMM 63 */
++ MX7D_PAD_ENET1_RGMII_RD3__GPIO7_IO3 0x74 /* SODIMM 55 */
++ MX7D_PAD_ENET1_RGMII_TXC__GPIO7_IO11 0x14 /* SODIMM 99 */
++ MX7D_PAD_EPDC_DATA13__GPIO2_IO13 0x14 /* SODIMM 95 */
++ >;
++ };
++
+ pinctrl_pwm1: pwm1-grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO08__PWM1_OUT 0x79
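
The shape of the imx7-colibri.dtsi changes above: display, backlight and touch devices (backlight, panel_dpi, ad7879_ts, atmel_mxt_ts, lcdif) now carry status = "disabled" in the shared module dtsi, and each carrier board below (aster, eval-v3) re-enables exactly the hardware it has. One canonical node definition, per-board opt-in.
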
+diff --git a/arch/arm/boot/dts/imx7d-colibri-aster.dts b/arch/arm/boot/dts/imx7d-colibri-aster.dts
+index f3f0537d5a375..ce0e6bb7db37c 100644
+--- a/arch/arm/boot/dts/imx7d-colibri-aster.dts
++++ b/arch/arm/boot/dts/imx7d-colibri-aster.dts
+@@ -14,6 +14,26 @@
+ "fsl,imx7d";
+ };
+
++&ad7879_ts {
++ status = "okay";
++};
++
++&atmel_mxt_ts {
++ status = "okay";
++};
++
++&backlight {
++ status = "okay";
++};
++
++&lcdif {
++ status = "okay";
++};
++
++&panel_dpi {
++ status = "okay";
++};
++
+ &usbotg2 {
+ vbus-supply = <&reg_usbh_vbus>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi b/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
+index af39e5370fa12..045e4413d3390 100644
+--- a/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
++++ b/arch/arm/boot/dts/imx7d-colibri-emmc.dtsi
+@@ -13,6 +13,10 @@
+ };
+ };
+
++&cpu1 {
++ cpu-supply = <&reg_DCDC2>;
++};
++
+ &gpio6 {
+ gpio-line-names = "",
+ "",
+diff --git a/arch/arm/boot/dts/imx7d-colibri-eval-v3.dts b/arch/arm/boot/dts/imx7d-colibri-eval-v3.dts
+index 87b132bcd272d..c610c50c003a8 100644
+--- a/arch/arm/boot/dts/imx7d-colibri-eval-v3.dts
++++ b/arch/arm/boot/dts/imx7d-colibri-eval-v3.dts
+@@ -13,6 +13,38 @@
+ "fsl,imx7d";
+ };
+
++&ad7879_ts {
++ status = "okay";
++};
++
++/*
++ * The Atmel maxtouch controller uses SODIMM 28/30, also used for PWM<B>, PWM<C>, aka pwm2, pwm3.
++ * So if you enable the following capacitive touch controller, disable pwm2/pwm3 first.
++ */
++&atmel_mxt_ts {
++ status = "disabled";
++};
++
++&backlight {
++ status = "okay";
++};
++
++&lcdif {
++ status = "okay";
++};
++
++&panel_dpi {
++ status = "okay";
++};
++
++&pwm2 {
++ status = "okay";
++};
++
++&pwm3 {
++ status = "okay";
++};
++
+ &usbotg2 {
+ vbus-supply = <&reg_usbh_vbus>;
+ status = "okay";
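
Note the asymmetry above: eval-v3 keeps pwm2/pwm3 enabled and leaves the maxtouch controller off, since both claim SODIMM 28/30. A board preferring the capacitive touch would flip the three statuses; a minimal sketch of such an overlay (hypothetical, not part of this patch):

	&pwm2 {
		status = "disabled";
	};

	&pwm3 {
		status = "disabled";
	};

	/* SODIMM 28/30 now free for the controller's INT/RST lines */
	&atmel_mxt_ts {
		status = "okay";
	};
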
+diff --git a/arch/arm/boot/dts/imx7s-colibri-aster.dts b/arch/arm/boot/dts/imx7s-colibri-aster.dts
+index fca4e0a95c1b3..87f9e0e079a86 100644
+--- a/arch/arm/boot/dts/imx7s-colibri-aster.dts
++++ b/arch/arm/boot/dts/imx7s-colibri-aster.dts
+@@ -13,3 +13,23 @@
+ compatible = "toradex,colibri-imx7s-aster", "toradex,colibri-imx7s",
+ "fsl,imx7s";
+ };
++
++&ad7879_ts {
++ status = "okay";
++};
++
++&atmel_mxt_ts {
++ status = "okay";
++};
++
++&backlight {
++ status = "okay";
++};
++
++&lcdif {
++ status = "okay";
++};
++
++&panel_dpi {
++ status = "okay";
++};
+diff --git a/arch/arm/boot/dts/imx7s-colibri-eval-v3.dts b/arch/arm/boot/dts/imx7s-colibri-eval-v3.dts
+index aa70d3f2e2e2f..81956c16b95bc 100644
+--- a/arch/arm/boot/dts/imx7s-colibri-eval-v3.dts
++++ b/arch/arm/boot/dts/imx7s-colibri-eval-v3.dts
+@@ -12,3 +12,35 @@
+ compatible = "toradex,colibri-imx7s-eval-v3", "toradex,colibri-imx7s",
+ "fsl,imx7s";
+ };
++
++&ad7879_ts {
++ status = "okay";
++};
++
++/*
++ * The Atmel maxtouch controller uses SODIMM 28/30, also used for PWM<B>, PWM<C>, aka pwm2, pwm3.
++ * So if you enable the following capacitive touch controller, disable pwm2/pwm3 first.
++ */
++&atmel_mxt_ts {
++ status = "disabled";
++};
++
++&backlight {
++ status = "okay";
++};
++
++&lcdif {
++ status = "okay";
++};
++
++&panel_dpi {
++ status = "okay";
++};
++
++&pwm2 {
++ status = "okay";
++};
++
++&pwm3 {
++ status = "okay";
++};
+diff --git a/arch/arm/boot/dts/qcom-ipq8064.dtsi b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+index 808ea18622835..d09354ca100d4 100644
+--- a/arch/arm/boot/dts/qcom-ipq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+@@ -784,7 +784,7 @@
+ l2cc: clock-controller@2011000 {
+ compatible = "qcom,kpss-gcc", "syscon";
+ reg = <0x2011000 0x1000>;
+- clocks = <&gcc PLL8_VOTE>, <&gcc PXO_SRC>;
++ clocks = <&gcc PLL8_VOTE>, <&pxo_board>;
+ clock-names = "pll8_vote", "pxo";
+ clock-output-names = "acpu_l2_aux";
+ };
+diff --git a/arch/arm/boot/dts/qcom-mdm9615.dtsi b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+index 8f0752ce1c7ba..0ce0d04bd9940 100644
+--- a/arch/arm/boot/dts/qcom-mdm9615.dtsi
++++ b/arch/arm/boot/dts/qcom-mdm9615.dtsi
+@@ -321,6 +321,7 @@
+
+ pmicgpio: gpio@150 {
+ compatible = "qcom,pm8018-gpio", "qcom,ssbi-gpio";
++ reg = <0x150>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
+index c3b8a6d630275..2d9416d1a6c89 100644
+--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
+@@ -580,7 +580,7 @@
+ blsp2_uart1: serial@f995d000 {
+ compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
+ reg = <0xf995d000 0x1000>;
+- interrupts = <GIC_SPI 113 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&gcc GCC_BLSP2_UART1_APPS_CLK>, <&gcc GCC_BLSP2_AHB_CLK>;
+ clock-names = "core", "iface";
+ pinctrl-names = "default", "sleep";
+@@ -1182,6 +1182,8 @@
+ qcom,smem-states = <&modem_smp2p_out 0>;
+ qcom,smem-state-names = "stop";
+
++ status = "disabled";
++
+ mba {
+ memory-region = <&mba_region>;
+ };
+@@ -1630,6 +1632,7 @@
+ reg = <0xfdd00000 0x2000>,
+ <0xfec00000 0x180000>;
+ reg-names = "ctrl", "mem";
++ ranges = <0 0xfec00000 0x180000>;
+ clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>,
+ <&mmcc OCMEMCX_OCMEMNOC_CLK>;
+ clock-names = "core", "iface";
+@@ -1661,6 +1664,8 @@
+ qcom,smem-states = <&adsp_smp2p_out 0>;
+ qcom,smem-state-names = "stop";
+
++ status = "disabled";
++
+ smd-edge {
+ interrupts = <GIC_SPI 156 IRQ_TYPE_EDGE_RISING>;
+
+diff --git a/arch/arm/boot/dts/qcom-msm8974pro-fairphone-fp2.dts b/arch/arm/boot/dts/qcom-msm8974pro-fairphone-fp2.dts
+index 58cb2ce1e4dfe..8a6b8e4de8878 100644
+--- a/arch/arm/boot/dts/qcom-msm8974pro-fairphone-fp2.dts
++++ b/arch/arm/boot/dts/qcom-msm8974pro-fairphone-fp2.dts
+@@ -147,10 +147,12 @@
+ };
+
+ &remoteproc_adsp {
++ status = "okay";
+ cx-supply = <&pm8841_s2>;
+ };
+
+ &remoteproc_mss {
++ status = "okay";
+ cx-supply = <&pm8841_s2>;
+ mss-supply = <&pm8841_s3>;
+ mx-supply = <&pm8841_s1>;
+diff --git a/arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts b/arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts
+index d6b2300a82231..577cbffad0100 100644
+--- a/arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts
++++ b/arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts
+@@ -457,10 +457,12 @@
+ };
+
+ &remoteproc_adsp {
++ status = "okay";
+ cx-supply = <&pma8084_s2>;
+ };
+
+ &remoteproc_mss {
++ status = "okay";
+ cx-supply = <&pma8084_s2>;
+ mss-supply = <&pma8084_s6>;
+ mx-supply = <&pma8084_s1>;
+diff --git a/arch/arm/boot/dts/qcom-pm8841.dtsi b/arch/arm/boot/dts/qcom-pm8841.dtsi
+index 2caf71eacb520..b5cdde034d188 100644
+--- a/arch/arm/boot/dts/qcom-pm8841.dtsi
++++ b/arch/arm/boot/dts/qcom-pm8841.dtsi
+@@ -24,6 +24,7 @@
+ compatible = "qcom,spmi-temp-alarm";
+ reg = <0x2400>;
+ interrupts = <4 0x24 0 IRQ_TYPE_EDGE_RISING>;
++ #thermal-sensor-cells = <0>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
+index 1c2b208a5670b..ef1da28f567c8 100644
+--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
+@@ -206,7 +206,7 @@
+ blsp1_uart3: serial@831000 {
+ compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
+ reg = <0x00831000 0x200>;
+- interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_LOW>;
++ interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&gcc 30>,
+ <&gcc 9>;
+ clock-names = "core", "iface";
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-codina.dts b/arch/arm/boot/dts/ste-ux500-samsung-codina.dts
+index b6746ac167bc1..5f41256d7f4b4 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-codina.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-codina.dts
+@@ -598,8 +598,8 @@
+ reg = <0x19>;
+ vdd-supply = <&ab8500_ldo_aux1_reg>; // 3V
+ vddio-supply = <&ab8500_ldo_aux2_reg>; // 1.8V
+- mount-matrix = "0", "-1", "0",
+- "1", "0", "0",
++ mount-matrix = "0", "1", "0",
++ "-1", "0", "0",
+ "0", "0", "1";
+ };
+ };
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-gavini.dts b/arch/arm/boot/dts/ste-ux500-samsung-gavini.dts
+index 53062d50e455a..806da3fc33cd7 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-gavini.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-gavini.dts
+@@ -527,8 +527,8 @@
+ accelerometer@18 {
+ compatible = "bosch,bma222e";
+ reg = <0x18>;
+- mount-matrix = "0", "1", "0",
+- "-1", "0", "0",
++ mount-matrix = "0", "-1", "0",
++ "1", "0", "0",
+ "0", "0", "1";
+ vddio-supply = <&ab8500_ldo_aux2_reg>; // 1.8V
+ vdd-supply = <&ab8500_ldo_aux1_reg>; // 3V
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-janice.dts b/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
+index e6d4fd0eb5f42..ed5c79c3d04b0 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
+@@ -633,8 +633,8 @@
+ accelerometer@8 {
+ compatible = "bosch,bma222";
+ reg = <0x08>;
+- mount-matrix = "0", "1", "0",
+- "-1", "0", "0",
++ mount-matrix = "0", "-1", "0",
++ "1", "0", "0",
+ "0", "0", "1";
+ vddio-supply = <&ab8500_ldo_aux2_reg>; // 1.8V
+ vdd-supply = <&ab8500_ldo_aux1_reg>; // 3V
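
The three mount-matrix fixes above (codina, gavini, janice) all toggle between the two 90-degree orientations about the z axis. Per the IIO mount-matrix convention, the nine strings are the rows of a 3x3 matrix M applied as device_reading = M * sensor_reading: with "0", "-1", "0" / "1", "0", "0" / "0", "0", "1", a sensor reading of (0, 1g, 0) is reported as (-1g, 0, 0), while the inverse rotation "0", "1", "0" / "-1", "0", "0" / "0", "0", "1" reports (1g, 0, 0). Each board is switched to the rotation matching how its accelerometer is actually mounted.
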
+diff --git a/arch/arm/boot/dts/uniphier-pxs2.dtsi b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+index e81e5937a60ae..03301ddb3403a 100644
+--- a/arch/arm/boot/dts/uniphier-pxs2.dtsi
++++ b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+@@ -597,8 +597,8 @@
+ compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ status = "disabled";
+ reg = <0x65a00000 0xcd00>;
+- interrupt-names = "host", "peripheral";
+- interrupts = <0 134 4>, <0 135 4>;
++ interrupt-names = "dwc_usb3";
++ interrupts = <0 134 4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb0>, <&pinctrl_usb2>;
+ clock-names = "ref", "bus_early", "suspend";
+@@ -693,8 +693,8 @@
+ compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ status = "disabled";
+ reg = <0x65c00000 0xcd00>;
+- interrupt-names = "host", "peripheral";
+- interrupts = <0 137 4>, <0 138 4>;
++ interrupt-names = "dwc_usb3";
++ interrupts = <0 137 4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb1>, <&pinctrl_usb3>;
+ clock-names = "ref", "bus_early", "suspend";
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index e4dba5461cb3e..149a5bd6b88c1 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -63,7 +63,7 @@ config CRYPTO_SHA512_ARM
+ using optimized ARM assembler and NEON, when available.
+
+ config CRYPTO_BLAKE2S_ARM
+- tristate "BLAKE2s digest algorithm (ARM)"
++ bool "BLAKE2s digest algorithm (ARM)"
+ select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
+ help
+ BLAKE2s digest algorithm optimized with ARM scalar instructions. This
+diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
+index 0274f81cc8ea0..971e74546fb1b 100644
+--- a/arch/arm/crypto/Makefile
++++ b/arch/arm/crypto/Makefile
+@@ -9,8 +9,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o
+ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
+ obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
+ obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
+-obj-$(CONFIG_CRYPTO_BLAKE2S_ARM) += blake2s-arm.o
+-obj-$(if $(CONFIG_CRYPTO_BLAKE2S_ARM),y) += libblake2s-arm.o
++obj-$(CONFIG_CRYPTO_BLAKE2S_ARM) += libblake2s-arm.o
+ obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o
+ obj-$(CONFIG_CRYPTO_POLY1305_ARM) += poly1305-arm.o
+@@ -32,7 +31,6 @@ sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o
+ sha256-arm-y := sha256-core.o sha256_glue.o $(sha256-arm-neon-y)
+ sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
+ sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
+-blake2s-arm-y := blake2s-shash.o
+ libblake2s-arm-y:= blake2s-core.o blake2s-glue.o
+ blake2b-neon-y := blake2b-neon-core.o blake2b-neon-glue.o
+ sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o
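
Context for the Kconfig/Makefile pair above: while CRYPTO_BLAKE2S_ARM was a tristate, the obj-$(if $(CONFIG_CRYPTO_BLAKE2S_ARM),y) construct forced libblake2s-arm.o to be built in even when the option was set to m. With the shash wrapper (blake2s-shash.c, deleted below) gone, only the library implementation remains, the option becomes a bool, and the plain obj-$(CONFIG_...) form suffices.
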
+diff --git a/arch/arm/crypto/blake2s-shash.c b/arch/arm/crypto/blake2s-shash.c
+deleted file mode 100644
+index 763c73beea2d0..0000000000000
+--- a/arch/arm/crypto/blake2s-shash.c
++++ /dev/null
+@@ -1,75 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/*
+- * BLAKE2s digest algorithm, ARM scalar implementation
+- *
+- * Copyright 2020 Google LLC
+- */
+-
+-#include <crypto/internal/blake2s.h>
+-#include <crypto/internal/hash.h>
+-
+-#include <linux/module.h>
+-
+-static int crypto_blake2s_update_arm(struct shash_desc *desc,
+- const u8 *in, unsigned int inlen)
+-{
+- return crypto_blake2s_update(desc, in, inlen, false);
+-}
+-
+-static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out)
+-{
+- return crypto_blake2s_final(desc, out, false);
+-}
+-
+-#define BLAKE2S_ALG(name, driver_name, digest_size) \
+- { \
+- .base.cra_name = name, \
+- .base.cra_driver_name = driver_name, \
+- .base.cra_priority = 200, \
+- .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
+- .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
+- .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
+- .base.cra_module = THIS_MODULE, \
+- .digestsize = digest_size, \
+- .setkey = crypto_blake2s_setkey, \
+- .init = crypto_blake2s_init, \
+- .update = crypto_blake2s_update_arm, \
+- .final = crypto_blake2s_final_arm, \
+- .descsize = sizeof(struct blake2s_state), \
+- }
+-
+-static struct shash_alg blake2s_arm_algs[] = {
+- BLAKE2S_ALG("blake2s-128", "blake2s-128-arm", BLAKE2S_128_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-160", "blake2s-160-arm", BLAKE2S_160_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-224", "blake2s-224-arm", BLAKE2S_224_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-256", "blake2s-256-arm", BLAKE2S_256_HASH_SIZE),
+-};
+-
+-static int __init blake2s_arm_mod_init(void)
+-{
+- return IS_REACHABLE(CONFIG_CRYPTO_HASH) ?
+- crypto_register_shashes(blake2s_arm_algs,
+- ARRAY_SIZE(blake2s_arm_algs)) : 0;
+-}
+-
+-static void __exit blake2s_arm_mod_exit(void)
+-{
+- if (IS_REACHABLE(CONFIG_CRYPTO_HASH))
+- crypto_unregister_shashes(blake2s_arm_algs,
+- ARRAY_SIZE(blake2s_arm_algs));
+-}
+-
+-module_init(blake2s_arm_mod_init);
+-module_exit(blake2s_arm_mod_exit);
+-
+-MODULE_DESCRIPTION("BLAKE2s digest algorithm, ARM scalar implementation");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("blake2s-128");
+-MODULE_ALIAS_CRYPTO("blake2s-128-arm");
+-MODULE_ALIAS_CRYPTO("blake2s-160");
+-MODULE_ALIAS_CRYPTO("blake2s-160-arm");
+-MODULE_ALIAS_CRYPTO("blake2s-224");
+-MODULE_ALIAS_CRYPTO("blake2s-224-arm");
+-MODULE_ALIAS_CRYPTO("blake2s-256");
+-MODULE_ALIAS_CRYPTO("blake2s-256-arm");
+diff --git a/arch/arm/mach-bcm/bcm_kona_smc.c b/arch/arm/mach-bcm/bcm_kona_smc.c
+index 43829e49ad93f..347bfb7f03e2c 100644
+--- a/arch/arm/mach-bcm/bcm_kona_smc.c
++++ b/arch/arm/mach-bcm/bcm_kona_smc.c
+@@ -52,6 +52,7 @@ int __init bcm_kona_smc_init(void)
+ return -ENODEV;
+
+ prop_val = of_get_address(node, 0, &prop_size, NULL);
++ of_node_put(node);
+ if (!prop_val)
+ return -EINVAL;
+
+diff --git a/arch/arm/mach-dove/Kconfig b/arch/arm/mach-dove/Kconfig
+index c30c69c664ea8..a568ef90633ea 100644
+--- a/arch/arm/mach-dove/Kconfig
++++ b/arch/arm/mach-dove/Kconfig
+@@ -8,6 +8,7 @@ menuconfig ARCH_DOVE
+ select PINCTRL_DOVE
+ select PLAT_ORION_LEGACY
+ select PM_GENERIC_DOMAINS if PM
++ select PCI_QUIRKS if PCI
+ help
+ Support for the Marvell Dove SoC 88AP510
+
+diff --git a/arch/arm/mach-dove/pcie.c b/arch/arm/mach-dove/pcie.c
+index 2a493bdfffc6e..f90f42fc495e3 100644
+--- a/arch/arm/mach-dove/pcie.c
++++ b/arch/arm/mach-dove/pcie.c
+@@ -136,14 +136,19 @@ static struct pci_ops pcie_ops = {
+ .write = pcie_wr_conf,
+ };
+
++/*
++ * The root complex has a hardwired class of PCI_CLASS_MEMORY_OTHER; when it
++ * is operating as a root complex, this needs to be switched to
++ * PCI_CLASS_BRIDGE_HOST or Linux will errantly try to process the BARs on
++ * the device. Decoding setup is handled by the orion code.
++ */
+ static void rc_pci_fixup(struct pci_dev *dev)
+ {
+- /*
+- * Prevent enumeration of root complex.
+- */
+ if (dev->bus->parent == NULL && dev->devfn == 0) {
+ int i;
+
++ dev->class &= 0xff;
++ dev->class |= PCI_CLASS_BRIDGE_HOST << 8;
+ for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+ dev->resource[i].start = 0;
+ dev->resource[i].end = 0;
+diff --git a/arch/arm/mach-mv78xx0/pcie.c b/arch/arm/mach-mv78xx0/pcie.c
+index e15646af7f26d..4f1847babef2a 100644
+--- a/arch/arm/mach-mv78xx0/pcie.c
++++ b/arch/arm/mach-mv78xx0/pcie.c
+@@ -180,14 +180,19 @@ static struct pci_ops pcie_ops = {
+ .write = pcie_wr_conf,
+ };
+
++/*
++ * The root complex has a hardwired class of PCI_CLASS_MEMORY_OTHER; when it
++ * is operating as a root complex, this needs to be switched to
++ * PCI_CLASS_BRIDGE_HOST or Linux will errantly try to process the BARs on
++ * the device. Decoding setup is handled by the orion code.
++ */
+ static void rc_pci_fixup(struct pci_dev *dev)
+ {
+- /*
+- * Prevent enumeration of root complex.
+- */
+ if (dev->bus->parent == NULL && dev->devfn == 0) {
+ int i;
+
++ dev->class &= 0xff;
++ dev->class |= PCI_CLASS_BRIDGE_HOST << 8;
+ for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+ dev->resource[i].start = 0;
+ dev->resource[i].end = 0;
+diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c
+index 21413a9b7b6c6..8d829f3dafe76 100644
+--- a/arch/arm/mach-omap2/display.c
++++ b/arch/arm/mach-omap2/display.c
+@@ -211,6 +211,7 @@ static int __init omapdss_init_fbdev(void)
+ node = of_find_node_by_name(NULL, "omap4_padconf_global");
+ if (node)
+ omap4_dsi_mux_syscon = syscon_node_to_regmap(node);
++ of_node_put(node);
+
+ return 0;
+ }
+@@ -259,11 +260,13 @@ static int __init omapdss_init_of(void)
+
+ if (!pdev) {
+ pr_err("Unable to find DSS platform device\n");
++ of_node_put(node);
+ return -ENODEV;
+ }
+
+ r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+ put_device(&pdev->dev);
++ of_node_put(node);
+ if (r) {
+ pr_err("Unable to populate DSS submodule devices\n");
+ return r;
+diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c
+index 13f1b89f74b82..5b99d602c87bc 100644
+--- a/arch/arm/mach-omap2/pdata-quirks.c
++++ b/arch/arm/mach-omap2/pdata-quirks.c
+@@ -540,6 +540,8 @@ pdata_quirks_init_clocks(const struct of_device_id *omap_dt_match_table)
+
+ of_platform_populate(np, omap_dt_match_table,
+ omap_auxdata_lookup, NULL);
++
++ of_node_put(np);
+ }
+ }
+
+diff --git a/arch/arm/mach-omap2/prm3xxx.c b/arch/arm/mach-omap2/prm3xxx.c
+index 1b442b1285693..63e73e9b82bc6 100644
+--- a/arch/arm/mach-omap2/prm3xxx.c
++++ b/arch/arm/mach-omap2/prm3xxx.c
+@@ -708,6 +708,7 @@ static int omap3xxx_prm_late_init(void)
+ }
+
+ irq_num = of_irq_get(np, 0);
++ of_node_put(np);
+ if (irq_num == -EPROBE_DEFER)
+ return irq_num;
+
+diff --git a/arch/arm/mach-orion5x/Kconfig b/arch/arm/mach-orion5x/Kconfig
+index bf833b51931d1..aeac281c87647 100644
+--- a/arch/arm/mach-orion5x/Kconfig
++++ b/arch/arm/mach-orion5x/Kconfig
+@@ -7,6 +7,7 @@ menuconfig ARCH_ORION5X
+ select GPIOLIB
+ select MVEBU_MBUS
+ select FORCE_PCI
++ select PCI_QUIRKS
+ select PHYLIB if NETDEVICES
+ select PLAT_ORION_LEGACY
+ help
+diff --git a/arch/arm/mach-orion5x/pci.c b/arch/arm/mach-orion5x/pci.c
+index 92e938bba20d4..9574c73f3c039 100644
+--- a/arch/arm/mach-orion5x/pci.c
++++ b/arch/arm/mach-orion5x/pci.c
+@@ -515,14 +515,20 @@ static int __init pci_setup(struct pci_sys_data *sys)
+ /*****************************************************************************
+ * General PCIe + PCI
+ ****************************************************************************/
++
++/*
++ * The root complex has a hardwired class of PCI_CLASS_MEMORY_OTHER; when it
++ * is operating as a root complex, this needs to be switched to
++ * PCI_CLASS_BRIDGE_HOST or Linux will errantly try to process the BARs on
++ * the device. Decoding setup is handled by the orion code.
++ */
+ static void rc_pci_fixup(struct pci_dev *dev)
+ {
+- /*
+- * Prevent enumeration of root complex.
+- */
+ if (dev->bus->parent == NULL && dev->devfn == 0) {
+ int i;
+
++ dev->class &= 0xff;
++ dev->class |= PCI_CLASS_BRIDGE_HOST << 8;
+ for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+ dev->resource[i].start = 0;
+ dev->resource[i].end = 0;
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index abea41f7782e5..117e7b07995b9 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -125,6 +125,7 @@ remove:
+
+ list_for_each_entry_safe(pos, tmp, &quirk_list, list) {
+ list_del(&pos->list);
++ of_node_put(pos->np);
+ kfree(pos);
+ }
+
+@@ -174,11 +175,12 @@ static int __init rcar_gen2_regulator_quirk(void)
+ memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
+
+ quirk->id = id;
+- quirk->np = np;
++ quirk->np = of_node_get(np);
+ quirk->i2c_msg.addr = addr;
+
+ ret = of_irq_parse_one(np, 0, argsa);
+ if (ret) { /* Skip invalid entry and continue */
++ of_node_put(np);
+ kfree(quirk);
+ continue;
+ }
+@@ -225,6 +227,7 @@ err_free:
+ err_mem:
+ list_for_each_entry_safe(pos, tmp, &quirk_list, list) {
+ list_del(&pos->list);
++ of_node_put(pos->np);
+ kfree(pos);
+ }
+
+diff --git a/arch/arm/mach-zynq/common.c b/arch/arm/mach-zynq/common.c
+index e1ca6a5732d27..15e8a321a713b 100644
+--- a/arch/arm/mach-zynq/common.c
++++ b/arch/arm/mach-zynq/common.c
+@@ -77,6 +77,7 @@ static int __init zynq_get_revision(void)
+ }
+
+ zynq_devcfg_base = of_iomap(np, 0);
++ of_node_put(np);
+ if (!zynq_devcfg_base) {
+ pr_err("%s: Unable to map I/O memory\n", __func__);
+ return -1;
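
The of_node_put() additions across the mach-bcm, mach-omap2, mach-shmobile and mach-zynq hunks above all plug the same leak: of_find_node_by_name(), of_find_compatible_node() and friends return a device_node with an elevated reference count, and every exit path must drop it once the node (or anything derived from it, such as an of_iomap() mapping) is no longer needed. The rcar-gen2 quirk goes further: because it stores the node in a long-lived list, it now takes an explicit of_node_get() and releases the reference both on the skip path and when the quirk list is torn down.
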
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index 1f9c3ba328333..93c8ccbf29828 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -34,6 +34,7 @@
+ #include <linux/timekeeping.h>
+ #include <linux/timekeeper_internal.h>
+ #include <linux/acpi.h>
++#include <linux/virtio_anchor.h>
+
+ #include <linux/mm.h>
+
+@@ -443,7 +444,8 @@ static int __init xen_guest_init(void)
+ if (!xen_domain())
+ return 0;
+
+- xen_set_restricted_virtio_memory_access();
++ if (IS_ENABLED(CONFIG_XEN_VIRTIO))
++ virtio_set_mem_acc_cb(xen_virtio_mem_acc);
+
+ if (!acpi_disabled)
+ xen_acpi_guest_init();
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1652a9800ebee..a5d1b561ed53f 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -226,6 +226,7 @@ config ARM64
+ select THREAD_INFO_IN_TASK
+ select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
+ select TRACE_IRQFLAGS_SUPPORT
++ select TRACE_IRQFLAGS_NMI_SUPPORT
+ help
+ ARM 64-bit (AArch64) Linux support.
+
+@@ -503,6 +504,22 @@ config ARM64_ERRATUM_834220
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_1742098
++ bool "Cortex-A57/A72: 1742098: ELR recorded incorrectly on interrupt taken between cryptographic instructions in a sequence"
++ depends on COMPAT
++ default y
++ help
++ This option removes the AES hwcap for aarch32 user-space to
++ work around erratum 1742098 on Cortex-A57 and Cortex-A72.
++
++ Affected parts may corrupt the AES state if an interrupt is
++ taken between a pair of AES instructions. These instructions
++ are only present if the cryptography extensions are present.
++ All software should have a fallback implementation for CPUs
++ that don't implement the cryptography extensions.
++
++ If unsure, say Y.
++
+ config ARM64_ERRATUM_845719
+ bool "Cortex-A53: 845719: a load might read incorrect data"
+ depends on COMPAT
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
+index c519d9fa6967c..3d2c68d58f49c 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-orangepi-win.dts
+@@ -40,7 +40,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- status {
++ led-0 {
+ label = "orangepi:green:status";
+ gpios = <&pio 7 11 GPIO_ACTIVE_HIGH>; /* PH11 */
+ };
+diff --git a/arch/arm64/boot/dts/exynos/exynosautov9-pinctrl.dtsi b/arch/arm64/boot/dts/exynos/exynosautov9-pinctrl.dtsi
+index ef0349d1c3d09..68f4a0fae7cf5 100644
+--- a/arch/arm64/boot/dts/exynos/exynosautov9-pinctrl.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynosautov9-pinctrl.dtsi
+@@ -1089,21 +1089,21 @@
+
+ /* PERIC1 USI11_SPI */
+ spi11_bus: spi11-pins {
+- samsung,pins = "gpp3-6", "gpp3-5", "gpp3-4";
++ samsung,pins = "gpp5-6", "gpp5-5", "gpp5-4";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
+ };
+
+ spi11_cs: spi11-cs-pins {
+- samsung,pins = "gpp3-7";
++ samsung,pins = "gpp5-7";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_OUTPUT>;
+ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
+ };
+
+ spi11_cs_func: spi11-cs-func-pins {
+- samsung,pins = "gpp3-7";
++ samsung,pins = "gpp5-7";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ samsung,pin-drv = <EXYNOS5420_PIN_DRV_LV1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 2b9bf8dd14ecc..7538918c7a829 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -49,7 +49,7 @@
+ wps {
+ label = "wps";
+ linux,code = <KEY_WPS_BUTTON>;
+- gpios = <&pio 102 GPIO_ACTIVE_HIGH>;
++ gpios = <&pio 102 GPIO_ACTIVE_LOW>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 733aec2e7f77e..d5cae38c842a6 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -43,7 +43,7 @@
+ reg = <0x000>;
+ enable-method = "psci";
+ clock-frequency = <1701000000>;
+- cpu-idle-states = <&cpuoff_l &clusteroff_l>;
++ cpu-idle-states = <&cpu_sleep_l &cluster_sleep_l>;
+ next-level-cache = <&l2_0>;
+ capacity-dmips-mhz = <530>;
+ };
+@@ -54,7 +54,7 @@
+ reg = <0x100>;
+ enable-method = "psci";
+ clock-frequency = <1701000000>;
+- cpu-idle-states = <&cpuoff_l &clusteroff_l>;
++ cpu-idle-states = <&cpu_sleep_l &cluster_sleep_l>;
+ next-level-cache = <&l2_0>;
+ capacity-dmips-mhz = <530>;
+ };
+@@ -65,7 +65,7 @@
+ reg = <0x200>;
+ enable-method = "psci";
+ clock-frequency = <1701000000>;
+- cpu-idle-states = <&cpuoff_l &clusteroff_l>;
++ cpu-idle-states = <&cpu_sleep_l &cluster_sleep_l>;
+ next-level-cache = <&l2_0>;
+ capacity-dmips-mhz = <530>;
+ };
+@@ -76,7 +76,7 @@
+ reg = <0x300>;
+ enable-method = "psci";
+ clock-frequency = <1701000000>;
+- cpu-idle-states = <&cpuoff_l &clusteroff_l>;
++ cpu-idle-states = <&cpu_sleep_l &cluster_sleep_l>;
+ next-level-cache = <&l2_0>;
+ capacity-dmips-mhz = <530>;
+ };
+@@ -87,7 +87,7 @@
+ reg = <0x400>;
+ enable-method = "psci";
+ clock-frequency = <2171000000>;
+- cpu-idle-states = <&cpuoff_b &clusteroff_b>;
++ cpu-idle-states = <&cpu_sleep_b &cluster_sleep_b>;
+ next-level-cache = <&l2_1>;
+ capacity-dmips-mhz = <1024>;
+ };
+@@ -98,7 +98,7 @@
+ reg = <0x500>;
+ enable-method = "psci";
+ clock-frequency = <2171000000>;
+- cpu-idle-states = <&cpuoff_b &clusteroff_b>;
++ cpu-idle-states = <&cpu_sleep_b &cluster_sleep_b>;
+ next-level-cache = <&l2_1>;
+ capacity-dmips-mhz = <1024>;
+ };
+@@ -109,7 +109,7 @@
+ reg = <0x600>;
+ enable-method = "psci";
+ clock-frequency = <2171000000>;
+- cpu-idle-states = <&cpuoff_b &clusteroff_b>;
++ cpu-idle-states = <&cpu_sleep_b &cluster_sleep_b>;
+ next-level-cache = <&l2_1>;
+ capacity-dmips-mhz = <1024>;
+ };
+@@ -120,7 +120,7 @@
+ reg = <0x700>;
+ enable-method = "psci";
+ clock-frequency = <2171000000>;
+- cpu-idle-states = <&cpuoff_b &clusteroff_b>;
++ cpu-idle-states = <&cpu_sleep_b &cluster_sleep_b>;
+ next-level-cache = <&l2_1>;
+ capacity-dmips-mhz = <1024>;
+ };
+@@ -172,8 +172,8 @@
+ };
+
+ idle-states {
+- entry-method = "arm,psci";
+- cpuoff_l: cpuoff_l {
++ entry-method = "psci";
++ cpu_sleep_l: cpu-sleep-l {
+ compatible = "arm,idle-state";
+ arm,psci-suspend-param = <0x00010001>;
+ local-timer-stop;
+@@ -181,7 +181,7 @@
+ exit-latency-us = <140>;
+ min-residency-us = <780>;
+ };
+- cpuoff_b: cpuoff_b {
++ cpu_sleep_b: cpu-sleep-b {
+ compatible = "arm,idle-state";
+ arm,psci-suspend-param = <0x00010001>;
+ local-timer-stop;
+@@ -189,7 +189,7 @@
+ exit-latency-us = <145>;
+ min-residency-us = <720>;
+ };
+- clusteroff_l: clusteroff_l {
++ cluster_sleep_l: cluster-sleep-l {
+ compatible = "arm,idle-state";
+ arm,psci-suspend-param = <0x01010002>;
+ local-timer-stop;
+@@ -197,7 +197,7 @@
+ exit-latency-us = <155>;
+ min-residency-us = <860>;
+ };
+- clusteroff_b: clusteroff_b {
++ cluster_sleep_b: cluster-sleep-b {
+ compatible = "arm,idle-state";
+ arm,psci-suspend-param = <0x01010002>;
+ local-timer-stop;
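
Two fixes travel together in the mt8192 hunks above: the idle-states binding spells the entry method as the literal "psci", not "arm,psci", and the states are renamed from cpuoff/clusteroff to cpu-sleep/cluster-sleep (hyphenated node names, underscored labels), with every cpu node's cpu-idle-states phandles updated to match.
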
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 0e9afc3e2f268..9eca18b546983 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -1820,6 +1820,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x30000000 0x50000>;
++ no-memory-wc;
+
+ cpu_bpmp_tx: sram@4e000 {
+ reg = <0x4e000 0x1000>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+index a7d7cfd66379f..b0f9393dd39cc 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+@@ -75,7 +75,7 @@
+
+ /* SDMMC1 (SD/MMC) */
+ mmc@3400000 {
+- cd-gpios = <&gpio TEGRA194_MAIN_GPIO(A, 0) GPIO_ACTIVE_LOW>;
++ cd-gpios = <&gpio TEGRA194_MAIN_GPIO(G, 7) GPIO_ACTIVE_LOW>;
+ };
+
+ /* SDMMC4 (eMMC) */
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index d1f8248c00f41..3fdb0b8527185 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -2684,6 +2684,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x40000000 0x50000>;
++ no-memory-wc;
+
+ cpu_bpmp_tx: sram@4e000 {
+ reg = <0x4e000 0x1000>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index cb3af539e4770..0213a7e3dad09 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -1325,6 +1325,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x40000000 0x80000>;
++ no-memory-wc;
+
+ cpu_bpmp_tx: sram@70000 {
+ reg = <0x70000 0x1000>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index c89499e366d30..748575ed1490d 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -525,9 +525,9 @@
+ };
+
+ timer@b120000 {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x10000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0x0 0x0b120000 0x0 0x1000>;
+
+@@ -535,49 +535,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b121000 0x0 0x1000>,
+- <0x0 0x0b122000 0x0 0x1000>;
++ reg = <0x0b121000 0x1000>,
++ <0x0b122000 0x1000>;
+ };
+
+ frame@b123000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0xb123000 0x0 0x1000>;
++ reg = <0x0b123000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@b124000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b124000 0x0 0x1000>;
++ reg = <0x0b124000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@b125000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b125000 0x0 0x1000>;
++ reg = <0x0b125000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@b126000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b126000 0x0 0x1000>;
++ reg = <0x0b126000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@b127000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b127000 0x0 0x1000>;
++ reg = <0x0b127000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@b128000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x0b128000 0x0 0x1000>;
++ reg = <0x0b128000 0x1000>;
+ status = "disabled";
+ };
+ };
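
This timer node (and the matching ones in the sc7180, sc7280, sdm845, sm6350, sm8150, sm8250, sm8350 and sm8450 files below) shrinks the frames to one address and one size cell by giving the parent an explicit ranges translation instead of an empty passthrough. The resulting shape, using the ipq6018 values:

	timer@b120000 {
		compatible = "arm,armv7-timer-mem";
		#address-cells = <1>;
		#size-cells = <1>;
		/* child 0x0 maps to parent 0x0; 256 MiB window */
		ranges = <0 0 0 0x10000000>;

		frame@b123000 {
			/* one-cell reg, still decoding to CPU address 0x0b123000 */
			reg = <0x0b123000 0x1000>;
		};
	};
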
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 4c38b15c6fd41..697f46e179030 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -534,7 +534,7 @@
+ status = "disabled";
+ };
+
+- qpic_nand: nand@79b0000 {
++ qpic_nand: nand-controller@79b0000 {
+ compatible = "qcom,ipq8074-nand";
+ reg = <0x079b0000 0x10000>;
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 05472510e29d5..15c91fb59dbae 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1788,8 +1788,8 @@
+ <&rpmpd MSM8916_VDDMX>;
+ power-domain-names = "cx", "mx";
+
+- qcom,state = <&wcnss_smp2p_out 0>;
+- qcom,state-names = "stop";
++ qcom,smem-states = <&wcnss_smp2p_out 0>;
++ qcom,smem-state-names = "stop";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&wcnss_pin_a>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 1ac2913b182cc..8cc3cb79ed056 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -1074,6 +1074,7 @@
+ reg = <0xfdd00000 0x2000>,
+ <0xfec00000 0x200000>;
+ reg-names = "ctrl", "mem";
++ ranges = <0 0xfec00000 0x200000>;
+ clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>,
+ <&mmcc OCMEMCX_OCMEMNOC_CLK>;
+ clock-names = "core", "iface";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 9932186f7ceb0..b670d0412760e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -609,7 +609,7 @@
+ <0x00035400 0x1dc>;
+ #phy-cells = <0>;
+
+- #clock-cells = <1>;
++ #clock-cells = <0>;
+ clock-output-names = "pcie_0_pipe_clk_src";
+ clocks = <&gcc GCC_PCIE_0_PIPE_CLK>;
+ clock-names = "pipe0";
+@@ -623,6 +623,7 @@
+ <0x00036400 0x1dc>;
+ #phy-cells = <0>;
+
++ #clock-cells = <0>;
+ clock-output-names = "pcie_1_pipe_clk_src";
+ clocks = <&gcc GCC_PCIE_1_PIPE_CLK>;
+ clock-names = "pipe1";
+@@ -636,6 +637,7 @@
+ <0x00037400 0x1dc>;
+ #phy-cells = <0>;
+
++ #clock-cells = <0>;
+ clock-output-names = "pcie_2_pipe_clk_src";
+ clocks = <&gcc GCC_PCIE_2_PIPE_CLK>;
+ clock-names = "pipe2";
+@@ -2769,7 +2771,7 @@
+ <0x07410600 0x1a8>;
+ #phy-cells = <0>;
+
+- #clock-cells = <1>;
++ #clock-cells = <0>;
+ clock-output-names = "usb3_phy_pipe_clk_src";
+ clocks = <&gcc GCC_USB3_PHY_PIPE_CLK>;
+ clock-names = "pipe0";
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino-poplar.dts b/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino-poplar.dts
+index 4a1f98a210319..c21333aa73c29 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino-poplar.dts
++++ b/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino-poplar.dts
+@@ -26,11 +26,13 @@
+ };
+
+ &vreg_l18a_2p85 {
+- regulator-min-microvolt = <2850000>;
+- regulator-max-microvolt = <2850000>;
++ /* Note: rounded down from 2850000 to a multiple of the PLDO step size (8000) */
++ regulator-min-microvolt = <2848000>;
++ regulator-max-microvolt = <2848000>;
+ };
+
+ &vreg_l22a_2p85 {
+- regulator-min-microvolt = <2700000>;
+- regulator-max-microvolt = <2700000>;
++ /* Note: rounded down from 2700000 to a multiple of the PLDO step size (8000) */
++ regulator-min-microvolt = <2696000>;
++ regulator-max-microvolt = <2696000>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index d912166b7552a..c8b7d8eb59967 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -548,7 +548,7 @@
+ compatible = "snps,dwc3";
+ reg = <0x07580000 0xcd00>;
+ interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
+- phys = <&usb2_phy_sec>, <&usb3_phy>;
++ phys = <&usb2_phy_prim>, <&usb3_phy>;
+ phy-names = "usb2-phy", "usb3-phy";
+ snps,has-lpm-erratum;
+ snps,hird-threshold = /bits/ 8 <0x10>;
+@@ -577,7 +577,7 @@
+ compatible = "snps,dwc3";
+ reg = <0x078c0000 0xcc00>;
+ interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
+- phys = <&usb2_phy_prim>;
++ phys = <&usb2_phy_sec>;
+ phy-names = "usb2-phy";
+ snps,has-lpm-erratum;
+ snps,hird-threshold = /bits/ 8 <0x10>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index e55dbaa6dc128..a071b8f5d7dc7 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -43,6 +43,7 @@
+ */
+
+ /delete-node/ &hyp_mem;
++/delete-node/ &ipa_fw_mem;
+ /delete-node/ &xbl_mem;
+ /delete-node/ &aop_mem;
+ /delete-node/ &sec_apps_mem;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 5dcaac23a1381..8769ad30f1c7b 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3215,7 +3215,7 @@
+ };
+
+ aoss_qmp: power-controller@c300000 {
+- compatible = "qcom,sc7180-aoss-qmp";
++ compatible = "qcom,sc7180-aoss-qmp", "qcom,aoss-qmp";
+ reg = <0 0x0c300000 0 0x400>;
+ interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
+ mboxes = <&apss_shared 0>;
+@@ -3384,9 +3384,9 @@
+ };
+
+ timer@17c20000{
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0 0x17c20000 0 0x1000>;
+
+@@ -3394,49 +3394,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c21000 0 0x1000>,
+- <0 0x17c22000 0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c23000 0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c25000 0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c27000 0 0x1000>;
++ reg = <0x17c27000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c29000 0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c2b000 0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c2d000 0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
+index 9cb1bc8ed6b5c..8b96fad5fdd4c 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
+@@ -388,7 +388,7 @@ ap_sar_sensor_i2c: &i2c1 {
+
+ vdd-supply = <&pp1800_prox>;
+
+- label = "proximity-wifi-lte0";
++ label = "proximity-wifi_cellular-0";
+ status = "disabled";
+ };
+
+@@ -404,7 +404,7 @@ ap_sar_sensor_i2c: &i2c1 {
+
+ vdd-supply = <&pp1800_prox>;
+
+- label = "proximity-wifi-lte1";
++ label = "proximity-wifi_cellular-1";
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index e66fc67de206a..75e174316d00f 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -818,7 +818,7 @@
+ reg = <0 0x00100000 0 0x1f0000>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>,
+ <&rpmhcc RPMH_CXO_CLK_A>, <&sleep_clk>,
+- <0>, <&pcie1_lane 0>,
++ <0>, <&pcie1_lane>,
+ <0>, <0>, <0>, <0>;
+ clock-names = "bi_tcxo", "bi_tcxo_ao", "sleep_clk",
+ "pcie_0_pipe_clk", "pcie_1_pipe_clk",
+@@ -2035,7 +2035,7 @@
+
+ clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
+ <&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
+- <&pcie1_lane 0>,
++ <&pcie1_lane>,
+ <&rpmhcc RPMH_CXO_CLK>,
+ <&gcc GCC_PCIE_1_AUX_CLK>,
+ <&gcc GCC_PCIE_1_CFG_AHB_CLK>,
+@@ -2110,7 +2110,7 @@
+ clock-names = "pipe0";
+
+ #phy-cells = <0>;
+- #clock-cells = <1>;
++ #clock-cells = <0>;
+ clock-output-names = "pcie_1_pipe_clk";
+ };
+ };
+@@ -3843,7 +3843,7 @@
+ };
+
+ aoss_qmp: power-controller@c300000 {
+- compatible = "qcom,sc7280-aoss-qmp";
++ compatible = "qcom,sc7280-aoss-qmp", "qcom,aoss-qmp";
+ reg = <0 0x0c300000 0 0x400>;
+ interrupts-extended = <&ipcc IPCC_CLIENT_AOP
+ IPCC_MPROC_SIGNAL_GLINK_QMP
+@@ -4771,9 +4771,9 @@
+ };
+
+ timer@17c20000 {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0 0x17c20000 0 0x1000>;
+
+@@ -4781,49 +4781,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c21000 0 0x1000>,
+- <0 0x17c22000 0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c23000 0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c25000 0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c27000 0 0x1000>;
++ reg = <0x17c27000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c29000 0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c2b000 0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17c2d000 0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index b72e8e6c52f35..2acd55bd3e5b6 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -8,6 +8,7 @@
+ #include <dt-bindings/clock/qcom,gpucc-sdm660.h>
+ #include <dt-bindings/clock/qcom,mmcc-sdm660.h>
+ #include <dt-bindings/clock/qcom,rpmcc.h>
++#include <dt-bindings/interconnect/qcom,sdm660.h>
+ #include <dt-bindings/power/qcom-rpmpd.h>
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+@@ -1045,11 +1046,13 @@
+ nvmem-cells = <&gpu_speed_bin>;
+ nvmem-cell-names = "speed_bin";
+
+- interconnects = <&gnoc 1 &bimc 5>;
++ interconnects = <&bimc MASTER_OXILI &bimc SLAVE_EBI>;
+ interconnect-names = "gfx-mem";
+
+ operating-points-v2 = <&gpu_sdm630_opp_table>;
+
++ status = "disabled";
++
+ gpu_sdm630_opp_table: opp-table {
+ compatible = "operating-points-v2";
+ opp-775000000 {
+@@ -1264,7 +1267,7 @@
+ #phy-cells = <0>;
+
+ clocks = <&gcc GCC_USB_PHY_CFG_AHB2PHY_CLK>,
+- <&gcc GCC_RX1_USB2_CLKREF_CLK>;
++ <&gcc GCC_RX0_USB2_CLKREF_CLK>;
+ clock-names = "cfg_ahb", "ref";
+
+ resets = <&gcc GCC_QUSB2PHY_PRIM_BCR>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm636-sony-xperia-ganges-mermaid.dts b/arch/arm64/boot/dts/qcom/sdm636-sony-xperia-ganges-mermaid.dts
+index b96da53f2f1ee..58f687fc49e04 100644
+--- a/arch/arm64/boot/dts/qcom/sdm636-sony-xperia-ganges-mermaid.dts
++++ b/arch/arm64/boot/dts/qcom/sdm636-sony-xperia-ganges-mermaid.dts
+@@ -19,7 +19,7 @@
+ };
+
+ &sdc2_state_on {
+- pinconf-clk {
++ clk {
+ drive-strength = <14>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama-akatsuki.dts b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama-akatsuki.dts
+index 8a0d94e7f5985..2f5e12deaadab 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama-akatsuki.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama-akatsuki.dts
+@@ -19,8 +19,9 @@
+ };
+
+ &vreg_l22a_2p8 {
+- regulator-min-microvolt = <2700000>;
+- regulator-max-microvolt = <2700000>;
++ /* Note: rounded down from 2700000 to a multiple of the PLDO step size (8000) */
++ regulator-min-microvolt = <2696000>;
++ regulator-max-microvolt = <2696000>;
+ };
+
+ &vreg_l28a_2p8 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 038538c8c6141..7783005c8028c 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4948,9 +4948,9 @@
+ };
+
+ timer@17c90000 {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0 0x17c90000 0 0x1000>;
+
+@@ -4958,49 +4958,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17ca0000 0 0x1000>,
+- <0 0x17cb0000 0 0x1000>;
++ reg = <0x17ca0000 0x1000>,
++ <0x17cb0000 0x1000>;
+ };
+
+ frame@17cc0000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17cc0000 0 0x1000>;
++ reg = <0x17cc0000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17cd0000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17cd0000 0 0x1000>;
++ reg = <0x17cd0000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17ce0000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17ce0000 0 0x1000>;
++ reg = <0x17ce0000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17cf0000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17cf0000 0 0x1000>;
++ reg = <0x17cf0000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17d00000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17d00000 0 0x1000>;
++ reg = <0x17d00000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17d10000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0 0x17d10000 0 0x1000>;
++ reg = <0x17d10000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts b/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
+index 871ccbba445bb..038970c0b68e2 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
++++ b/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
+@@ -88,11 +88,19 @@
+ status = "okay";
+ };
+
+-&sdc2_state_off {
++&sdc2_off_state {
+ sd-cd {
+ pins = "gpio98";
++ drive-strength = <2>;
+ bias-disable;
++ };
++};
++
++&sdc2_on_state {
++ sd-cd {
++ pins = "gpio98";
+ drive-strength = <2>;
++ bias-pull-up;
+ };
+ };
+
+@@ -102,32 +110,6 @@
+
+ &tlmm {
+ gpio-reserved-ranges = <22 2>, <28 6>;
+-
+- sdc2_state_on: sdc2-on {
+- clk {
+- pins = "sdc2_clk";
+- bias-disable;
+- drive-strength = <16>;
+- };
+-
+- cmd {
+- pins = "sdc2_cmd";
+- bias-pull-up;
+- drive-strength = <10>;
+- };
+-
+- data {
+- pins = "sdc2_data";
+- bias-pull-up;
+- drive-strength = <10>;
+- };
+-
+- sd-cd {
+- pins = "gpio98";
+- bias-pull-up;
+- drive-strength = <2>;
+- };
+- };
+ };
+
+ &usb3 {
+diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+index 135e6e0da27ac..5ee1e4b203019 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+@@ -386,23 +386,43 @@
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- sdc2_state_off: sdc2-off {
++ sdc2_off_state: sdc2-off-state {
+ clk {
+ pins = "sdc2_clk";
+- bias-disable;
+ drive-strength = <2>;
++ bias-disable;
+ };
+
+ cmd {
+ pins = "sdc2_cmd";
++ drive-strength = <2>;
+ bias-pull-up;
++ };
++
++ data {
++ pins = "sdc2_data";
+ drive-strength = <2>;
++ bias-pull-up;
++ };
++ };
++
++ sdc2_on_state: sdc2-on-state {
++ clk {
++ pins = "sdc2_clk";
++ drive-strength = <16>;
++ bias-disable;
++ };
++
++ cmd {
++ pins = "sdc2_cmd";
++ drive-strength = <10>;
++ bias-pull-up;
+ };
+
+ data {
+ pins = "sdc2_data";
++ drive-strength = <10>;
+ bias-pull-up;
+- drive-strength = <2>;
+ };
+ };
+ };
+@@ -470,8 +490,8 @@
+ <&xo_board>;
+ clock-names = "iface", "core", "xo";
+
+- pinctrl-0 = <&sdc2_state_on>;
+- pinctrl-1 = <&sdc2_state_off>;
++ pinctrl-0 = <&sdc2_on_state>;
++ pinctrl-1 = <&sdc2_off_state>;
+ pinctrl-names = "default", "sleep";
+
+ power-domains = <&rpmpd SM6125_VDDCX>;
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index d4f8f33f3f0ca..b44734cd8d6fa 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1304,57 +1304,57 @@
+ compatible = "arm,armv7-timer-mem";
+ reg = <0x0 0x17c20000 0x0 0x1000>;
+ clock-frequency = <19200000>;
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+
+ frame@17c21000 {
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c21000 0x0 0x1000>,
+- <0x0 0x17c22000 0x0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c23000 0x0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c25000 0x0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c27000 0x0 0x1000>;
++ reg = <0x17c27000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c29000 0x0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2b000 0x0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2d000 0x0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 8ea44c4b56b42..8abaa28cebbc2 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -3718,7 +3718,7 @@
+ };
+
+ aoss_qmp: power-controller@c300000 {
+- compatible = "qcom,sm8150-aoss-qmp";
++ compatible = "qcom,sm8150-aoss-qmp", "qcom,aoss-qmp";
+ reg = <0x0 0x0c300000 0x0 0x400>;
+ interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
+ mboxes = <&apss_shared 0>;
+@@ -3944,9 +3944,9 @@
+ };
+
+ timer@17c20000 {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0x0 0x17c20000 0x0 0x1000>;
+ clock-frequency = <19200000>;
+@@ -3955,49 +3955,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c21000 0x0 0x1000>,
+- <0x0 0x17c22000 0x0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c23000 0x0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c25000 0x0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c26000 0x0 0x1000>;
++ reg = <0x17c26000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c29000 0x0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2b000 0x0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2d000 0x0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index cf0c97bd5ad3e..e8cdca50bc837 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -1884,6 +1884,8 @@
+ clock-names = "pipe0";
+
+ #phy-cells = <0>;
++
++ #clock-cells = <0>;
+ clock-output-names = "pcie_0_pipe_clk";
+ };
+ };
+@@ -1990,6 +1992,8 @@
+ clock-names = "pipe0";
+
+ #phy-cells = <0>;
++
++ #clock-cells = <0>;
+ clock-output-names = "pcie_1_pipe_clk";
+ };
+ };
+@@ -2096,6 +2100,8 @@
+ clock-names = "pipe0";
+
+ #phy-cells = <0>;
++
++ #clock-cells = <0>;
+ clock-output-names = "pcie_2_pipe_clk";
+ };
+ };
+@@ -3734,7 +3740,7 @@
+ };
+
+ aoss_qmp: power-controller@c300000 {
+- compatible = "qcom,sm8250-aoss-qmp";
++ compatible = "qcom,sm8250-aoss-qmp", "qcom,aoss-qmp";
+ reg = <0 0x0c300000 0 0x400>;
+ interrupts-extended = <&ipcc IPCC_CLIENT_AOP
+ IPCC_MPROC_SIGNAL_GLINK_QMP
+@@ -4867,9 +4873,9 @@
+ };
+
+ timer@17c20000 {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0x0 0x17c20000 0x0 0x1000>;
+ clock-frequency = <19200000>;
+@@ -4878,49 +4884,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c21000 0x0 0x1000>,
+- <0x0 0x17c22000 0x0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c23000 0x0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c25000 0x0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c27000 0x0 0x1000>;
++ reg = <0x17c27000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c29000 0x0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2b000 0x0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2d000 0x0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 743cba9b683cd..3293f76478df4 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1718,7 +1718,7 @@
+ };
+
+ aoss_qmp: power-controller@c300000 {
+- compatible = "qcom,sm8350-aoss-qmp";
++ compatible = "qcom,sm8350-aoss-qmp", "qcom,aoss-qmp";
+ reg = <0 0x0c300000 0 0x400>;
+ interrupts-extended = <&ipcc IPCC_CLIENT_AOP IPCC_MPROC_SIGNAL_GLINK_QMP
+ IRQ_TYPE_EDGE_RISING>;
+@@ -1933,9 +1933,9 @@
+
+ timer@17c20000 {
+ compatible = "arm,armv7-timer-mem";
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ reg = <0x0 0x17c20000 0x0 0x1000>;
+ clock-frequency = <19200000>;
+
+@@ -1943,49 +1943,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c21000 0x0 0x1000>,
+- <0x0 0x17c22000 0x0 0x1000>;
++ reg = <0x17c21000 0x1000>,
++ <0x17c22000 0x1000>;
+ };
+
+ frame@17c23000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c23000 0x0 0x1000>;
++ reg = <0x17c23000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c25000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c25000 0x0 0x1000>;
++ reg = <0x17c25000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c27000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c27000 0x0 0x1000>;
++ reg = <0x17c27000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c29000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c29000 0x0 0x1000>;
++ reg = <0x17c29000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2b000 0x0 0x1000>;
++ reg = <0x17c2b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17c2d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17c2d000 0x0 0x1000>;
++ reg = <0x17c2d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index b87756bf1ce44..c958f5d4adc26 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -2867,9 +2867,9 @@
+
+ timer@17420000 {
+ compatible = "arm,armv7-timer-mem";
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0 0 0 0x20000000>;
+ reg = <0x0 0x17420000 0x0 0x1000>;
+ clock-frequency = <19200000>;
+
+@@ -2877,49 +2877,49 @@
+ frame-number = <0>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17421000 0x0 0x1000>,
+- <0x0 0x17422000 0x0 0x1000>;
++ reg = <0x17421000 0x1000>,
++ <0x17422000 0x1000>;
+ };
+
+ frame@17423000 {
+ frame-number = <1>;
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17423000 0x0 0x1000>;
++ reg = <0x17423000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17425000 {
+ frame-number = <2>;
+ interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17425000 0x0 0x1000>;
++ reg = <0x17425000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17427000 {
+ frame-number = <3>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17427000 0x0 0x1000>;
++ reg = <0x17427000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@17429000 {
+ frame-number = <4>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x17429000 0x0 0x1000>;
++ reg = <0x17429000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@1742b000 {
+ frame-number = <5>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x1742b000 0x0 0x1000>;
++ reg = <0x1742b000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@1742d000 {
+ frame-number = <6>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0x0 0x1742d000 0x0 0x1000>;
++ reg = <0x1742d000 0x1000>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index 142e7ffbd2bd4..63e7a39e100e3 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -146,7 +146,7 @@
+ };
+ };
+
+- reg_audio: regulator_audio {
++ reg_audio: regulator-audio {
+ compatible = "regulator-fixed";
+ regulator-name = "audio-1.8V";
+ regulator-min-microvolt = <1800000>;
+@@ -174,7 +174,7 @@
+ vin-supply = <®_lcd>;
+ };
+
+- reg_cam0: regulator_camera {
++ reg_cam0: regulator-cam0 {
+ compatible = "regulator-fixed";
+ regulator-name = "reg_cam0";
+ regulator-min-microvolt = <1800000>;
+@@ -183,7 +183,7 @@
+ enable-active-high;
+ };
+
+- reg_cam1: regulator_camera {
++ reg_cam1: regulator-cam1 {
+ compatible = "regulator-fixed";
+ regulator-name = "reg_cam1";
+ regulator-min-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index b6aeb22e88364..86e8c9b5147a3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -1952,7 +1952,7 @@
+ cpu-thermal {
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+- thermal-sensors = <&thermal 0>;
++ thermal-sensors = <&thermal>;
+ sustainable-power = <717>;
+
+ cooling-maps {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index d330212026376..800274de1fe07 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -2129,7 +2129,7 @@
+ cpu-thermal {
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+- thermal-sensors = <&thermal 0>;
++ thermal-sensors = <&thermal>;
+ sustainable-power = <717>;
+
+ cooling-maps {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779m8.dtsi b/arch/arm64/boot/dts/renesas/r8a779m8.dtsi
+index 752440b0c40f7..750bd8ccdb7f1 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779m8.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779m8.dtsi
+@@ -10,3 +10,8 @@
+ / {
+ compatible = "renesas,r8a779m8", "renesas,r8a7795";
+ };
++
++&cluster0_opp {
++ /delete-node/ opp-1600000000;
++ /delete-node/ opp-1700000000;
++};
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054l2-smarc.dts b/arch/arm64/boot/dts/renesas/r9a07g054l2-smarc.dts
+index 4e07e1a0fb668..3d01a4cf0fbe7 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054l2-smarc.dts
++++ b/arch/arm64/boot/dts/renesas/r9a07g054l2-smarc.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /*
+- * Device Tree Source for the RZ/G2L SMARC EVK board
++ * Device Tree Source for the RZ/V2L SMARC EVK board
+ *
+ * Copyright (C) 2021 Renesas Electronics Corp.
+ */
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+index be97da1322580..ba75adedbf79b 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+@@ -599,8 +599,8 @@
+ compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ status = "disabled";
+ reg = <0x65a00000 0xcd00>;
+- interrupt-names = "host", "peripheral";
+- interrupts = <0 134 4>, <0 135 4>;
++ interrupt-names = "dwc_usb3";
++ interrupts = <0 134 4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb0>, <&pinctrl_usb2>;
+ clock-names = "ref", "bus_early", "suspend";
+@@ -701,8 +701,8 @@
+ compatible = "socionext,uniphier-dwc3", "snps,dwc3";
+ status = "disabled";
+ reg = <0x65c00000 0xcd00>;
+- interrupt-names = "host", "peripheral";
+- interrupts = <0 137 4>, <0 138 4>;
++ interrupt-names = "dwc_usb3";
++ interrupts = <0 137 4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb1>, <&pinctrl_usb3>;
+ clock-names = "ref", "bus_early", "suspend";
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index ac85682c013c1..e3aaa971d6600 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -71,6 +71,7 @@ config CRYPTO_GHASH_ARM64_CE
+ select CRYPTO_HASH
+ select CRYPTO_GF128MUL
+ select CRYPTO_LIB_AES
++ select CRYPTO_AEAD
+
+ config CRYPTO_CRCT10DIF_ARM64_CE
+ tristate "CRCT10DIF digest algorithm using PMULL instructions"
+diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
+index 9839bfc163d71..78d272b26ebd1 100644
+--- a/arch/arm64/include/asm/kexec.h
++++ b/arch/arm64/include/asm/kexec.h
+@@ -115,7 +115,9 @@ extern const struct kexec_file_ops kexec_image_ops;
+
+ struct kimage;
+
+-extern int arch_kimage_file_post_load_cleanup(struct kimage *image);
++int arch_kimage_file_post_load_cleanup(struct kimage *image);
++#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
++
+ extern int load_other_segments(struct kimage *image,
+ unsigned long kernel_load_addr, unsigned long kernel_size,
+ char *initrd, unsigned long initrd_len,
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 9e58749db21df..86eb0bfe3b380 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -272,8 +272,9 @@ void tls_preserve_current_state(void);
+
+ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
+ {
++ s32 previous_syscall = regs->syscallno;
+ memset(regs, 0, sizeof(*regs));
+- forget_syscall(regs);
++ regs->syscallno = previous_syscall;
+ regs->pc = pc;
+
+ if (system_uses_irq_prio_masking())
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index fa7981d0d9170..7075a9c6a4a61 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -14,6 +14,11 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_syscall.o = -fstack-protector -fstack-protector-strong
+ CFLAGS_syscall.o += -fno-stack-protector
+
++# When KASAN is enabled, a stack trace is recorded for every alloc/free, which
++# can significantly impact performance. Avoid instrumenting the stack trace
++# collection code to minimize this impact.
++KASAN_SANITIZE_stacktrace.o := n
++
+ # It's not safe to invoke KCOV when portions of the kernel environment aren't
+ # available or are out-of-sync with HW state. Since `noinstr` doesn't always
+ # inhibit KCOV instrumentation, disable it for the entire compilation unit.
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 6875a16b09d29..fb0e7c7b2e209 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -59,6 +59,7 @@ struct insn_emulation {
+ static LIST_HEAD(insn_emulation);
+ static int nr_insn_emulated __initdata;
+ static DEFINE_RAW_SPINLOCK(insn_emulation_lock);
++static DEFINE_MUTEX(insn_emulation_mutex);
+
+ static void register_emulation_hooks(struct insn_emulation_ops *ops)
+ {
+@@ -207,10 +208,10 @@ static int emulation_proc_handler(struct ctl_table *table, int write,
+ loff_t *ppos)
+ {
+ int ret = 0;
+- struct insn_emulation *insn = (struct insn_emulation *) table->data;
++ struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
+ enum insn_emulation_mode prev_mode = insn->current_mode;
+
+- table->data = &insn->current_mode;
++ mutex_lock(&insn_emulation_mutex);
+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+ if (ret || !write || prev_mode == insn->current_mode)
+@@ -223,7 +224,7 @@ static int emulation_proc_handler(struct ctl_table *table, int write,
+ update_insn_emulation_mode(insn, INSN_UNDEF);
+ }
+ ret:
+- table->data = insn;
++ mutex_unlock(&insn_emulation_mutex);
+ return ret;
+ }
+
+@@ -247,7 +248,7 @@ static void __init register_insn_emulation_sysctl(void)
+ sysctl->maxlen = sizeof(int);
+
+ sysctl->procname = insn->ops->name;
+- sysctl->data = insn;
++ sysctl->data = &insn->current_mode;
+ sysctl->extra1 = &insn->min;
+ sysctl->extra2 = &insn->max;
+ sysctl->proc_handler = emulation_proc_handler;
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index c05cc3b6162e9..6b92989f4cc27 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -395,6 +395,14 @@ static struct midr_range trbe_write_out_of_range_cpus[] = {
+ };
+ #endif /* CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE */
+
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++static struct midr_range broken_aarch32_aes[] = {
++ MIDR_RANGE(MIDR_CORTEX_A57, 0, 1, 0xf, 0xf),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++ {},
++};
++#endif /* CONFIG_ARM64_ERRATUM_1742098 */
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ {
+@@ -657,6 +665,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ /* Cortex-A510 r0p0 - r0p1 */
+ ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 1)
+ },
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++ {
++ .desc = "ARM erratum 1742098",
++ .capability = ARM64_WORKAROUND_1742098,
++ CAP_MIDR_RANGE_LIST(broken_aarch32_aes),
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++ },
+ #endif
+ {
+ }
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 8d88433de81da..ebdfbd1cf207b 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -79,6 +79,7 @@
+ #include <asm/cpufeature.h>
+ #include <asm/cpu_ops.h>
+ #include <asm/fpsimd.h>
++#include <asm/hwcap.h>
+ #include <asm/insn.h>
+ #include <asm/kvm_host.h>
+ #include <asm/mmu_context.h>
+@@ -561,7 +562,7 @@ static const struct arm64_ftr_bits ftr_id_pfr2[] = {
+
+ static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+ /* [31:28] TraceFilt */
+- S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
++ S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_DFR0_PERFMON_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
+@@ -1971,6 +1972,14 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
+ }
+ #endif /* CONFIG_ARM64_MTE */
+
++static void elf_hwcap_fixup(void)
++{
++#ifdef CONFIG_ARM64_ERRATUM_1742098
++ if (cpus_have_const_cap(ARM64_WORKAROUND_1742098))
++ compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
++#endif /* ARM64_ERRATUM_1742098 */
++}
++
+ #ifdef CONFIG_KVM
+ static bool is_kvm_protected_mode(const struct arm64_cpu_capabilities *entry, int __unused)
+ {
+@@ -3143,8 +3152,10 @@ void __init setup_cpu_features(void)
+ setup_system_capabilities();
+ setup_elf_hwcaps(arm64_elf_hwcaps);
+
+- if (system_supports_32bit_el0())
++ if (system_supports_32bit_el0()) {
+ setup_elf_hwcaps(compat_elf_hwcaps);
++ elf_hwcap_fixup();
++ }
+
+ if (system_uses_ttbr0_pan())
+ pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
+@@ -3197,6 +3208,7 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
+ cpu_active_mask);
+ get_cpu_device(lucky_winner)->offline_disabled = true;
+ setup_elf_hwcaps(compat_elf_hwcaps);
++ elf_hwcap_fixup();
+ pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
+ cpu, lucky_winner);
+ return 0;
+diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
+index 2e248342476ea..af5df48ba915b 100644
+--- a/arch/arm64/kernel/hibernate.c
++++ b/arch/arm64/kernel/hibernate.c
+@@ -300,11 +300,6 @@ static void swsusp_mte_restore_tags(void)
+ unsigned long pfn = xa_state.xa_index;
+ struct page *page = pfn_to_online_page(pfn);
+
+- /*
+- * It is not required to invoke page_kasan_tag_reset(page)
+- * at this point since the tags stored in page->flags are
+- * already restored.
+- */
+ mte_restore_page_tags(page_address(page), tags);
+
+ mte_free_tag_storage(tags);
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index f6b00743c3994..b2b730233274b 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -48,15 +48,6 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
+ if (!pte_is_tagged)
+ return;
+
+- page_kasan_tag_reset(page);
+- /*
+- * We need smp_wmb() in between setting the flags and clearing the
+- * tags because if another thread reads page->flags and builds a
+- * tagged address out of it, there is an actual dependency to the
+- * memory access, but on the current thread we do not guarantee that
+- * the new page->flags are visible before the tags were updated.
+- */
+- smp_wmb();
+ mte_clear_page_tags(page_address(page));
+ }
+
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index 0467cb79f080a..d6bef106e37e5 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -117,15 +117,15 @@ static int notrace unwind_next(struct task_struct *tsk,
+ if (fp <= state->prev_fp)
+ return -EINVAL;
+ } else {
+- set_bit(state->prev_type, state->stacks_done);
++ __set_bit(state->prev_type, state->stacks_done);
+ }
+
+ /*
+ * Record this frame record's values and location. The prev_fp and
+ * prev_type are only meaningful to the next unwind_next() invocation.
+ */
+- state->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
+- state->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
++ state->fp = READ_ONCE(*(unsigned long *)(fp));
++ state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
+ state->prev_fp = fp;
+ state->prev_type = info.type;
+
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index f66c0142b3357..e43926ef2bc2a 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -347,10 +347,10 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
+ kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
+ else
+ kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
+- (void *)panic_addr);
++ (void *)(panic_addr + kaslr_offset()));
+ } else {
+ kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
+- (void *)panic_addr);
++ (void *)(panic_addr + kaslr_offset()));
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
+index 6db801db8f271..925b34b7708d6 100644
+--- a/arch/arm64/kvm/hyp/nvhe/switch.c
++++ b/arch/arm64/kvm/hyp/nvhe/switch.c
+@@ -386,5 +386,5 @@ asmlinkage void __noreturn hyp_panic_bad_stack(void)
+
+ asmlinkage void kvm_unexpected_el2_exception(void)
+ {
+- return __kvm_unexpected_el2_exception();
++ __kvm_unexpected_el2_exception();
+ }
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index 969f20daf97aa..390af1a6a9b4f 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -249,5 +249,5 @@ void __noreturn hyp_panic(void)
+
+ asmlinkage void kvm_unexpected_el2_exception(void)
+ {
+- return __kvm_unexpected_el2_exception();
++ __kvm_unexpected_el2_exception();
+ }
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index 0dea80bf6de46..24913271e898c 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -23,15 +23,6 @@ void copy_highpage(struct page *to, struct page *from)
+
+ if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
+ set_bit(PG_mte_tagged, &to->flags);
+- page_kasan_tag_reset(to);
+- /*
+- * We need smp_wmb() in between setting the flags and clearing the
+- * tags because if another thread reads page->flags and builds a
+- * tagged address out of it, there is an actual dependency to the
+- * memory access, but on the current thread we do not guarantee that
+- * the new page->flags are visible before the tags were updated.
+- */
+- smp_wmb();
+ mte_copy_page_tags(kto, kfrom);
+ }
+ }
+diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
+index a9e50e930484a..4334dec93bd44 100644
+--- a/arch/arm64/mm/mteswap.c
++++ b/arch/arm64/mm/mteswap.c
+@@ -53,15 +53,6 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
+ if (!tags)
+ return false;
+
+- page_kasan_tag_reset(page);
+- /*
+- * We need smp_wmb() in between setting the flags and clearing the
+- * tags because if another thread reads page->flags and builds a
+- * tagged address out of it, there is an actual dependency to the
+- * memory access, but on the current thread we do not guarantee that
+- * the new page->flags are visible before the tags were updated.
+- */
+- smp_wmb();
+ mte_restore_page_tags(page_address(page), tags);
+
+ return true;
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index 507b203739539..8809e14cf86a2 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -61,6 +61,7 @@ WORKAROUND_1418040
+ WORKAROUND_1463225
+ WORKAROUND_1508412
+ WORKAROUND_1542419
++WORKAROUND_1742098
+ WORKAROUND_1902691
+ WORKAROUND_2038923
+ WORKAROUND_2064142
+diff --git a/arch/csky/abiv1/inc/abi/string.h b/arch/csky/abiv1/inc/abi/string.h
+index 9d95594b0febf..de50117b904d9 100644
+--- a/arch/csky/abiv1/inc/abi/string.h
++++ b/arch/csky/abiv1/inc/abi/string.h
+@@ -6,4 +6,10 @@
+ #define __HAVE_ARCH_MEMCPY
+ extern void *memcpy(void *, const void *, __kernel_size_t);
+
++#define __HAVE_ARCH_MEMMOVE
++extern void *memmove(void *, const void *, __kernel_size_t);
++
++#define __HAVE_ARCH_MEMSET
++extern void *memset(void *, int, __kernel_size_t);
++
+ #endif /* __ABI_CSKY_STRING_H */
+diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
+index 7cbce290f4e5a..757c2f6d8d4b8 100644
+--- a/arch/ia64/include/asm/processor.h
++++ b/arch/ia64/include/asm/processor.h
+@@ -538,7 +538,7 @@ ia64_get_irr(unsigned int vector)
+ {
+ unsigned int reg = vector / 64;
+ unsigned int bit = vector % 64;
+- u64 irr;
++ unsigned long irr;
+
+ switch (reg) {
+ case 0: irr = ia64_getreg(_IA64_REG_CR_IRR0); break;
+diff --git a/arch/loongarch/kernel/proc.c b/arch/loongarch/kernel/proc.c
+index 1effc73850fea..5c67cc4fd56d5 100644
+--- a/arch/loongarch/kernel/proc.c
++++ b/arch/loongarch/kernel/proc.c
+@@ -106,7 +106,7 @@ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+ unsigned long i = *pos;
+
+- return i < NR_CPUS ? (void *)(i + 1) : NULL;
++ return i < nr_cpu_ids ? (void *)(i + 1) : NULL;
+ }
+
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/m68k/virt/platform.c b/arch/m68k/virt/platform.c
+index cb820f19a2219..1560c4140ab91 100644
+--- a/arch/m68k/virt/platform.c
++++ b/arch/m68k/virt/platform.c
+@@ -8,20 +8,15 @@
+
+ #define VIRTIO_BUS_NB 128
+
+-static int __init virt_virtio_init(unsigned int id)
++static struct platform_device * __init virt_virtio_init(unsigned int id)
+ {
+ const struct resource res[] = {
+ DEFINE_RES_MEM(virt_bi_data.virtio.mmio + id * 0x200, 0x200),
+ DEFINE_RES_IRQ(virt_bi_data.virtio.irq + id),
+ };
+- struct platform_device *pdev;
+
+- pdev = platform_device_register_simple("virtio-mmio", id,
++ return platform_device_register_simple("virtio-mmio", id,
+ res, ARRAY_SIZE(res));
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
+-
+- return 0;
+ }
+
+ static int __init virt_platform_init(void)
+@@ -35,8 +30,10 @@ static int __init virt_platform_init(void)
+ DEFINE_RES_MEM(virt_bi_data.rtc.mmio + 0x1000, 0x1000),
+ DEFINE_RES_IRQ(virt_bi_data.rtc.irq + 1),
+ };
+- struct platform_device *pdev;
++ struct platform_device *pdev1, *pdev2;
++ struct platform_device *pdevs[VIRTIO_BUS_NB];
+ unsigned int i;
++ int ret = 0;
+
+ if (!MACH_IS_VIRT)
+ return -ENODEV;
+@@ -44,29 +41,40 @@ static int __init virt_platform_init(void)
+ /* We need this to have DMA'able memory provided to goldfish-tty */
+ min_low_pfn = 0;
+
+- pdev = platform_device_register_simple("goldfish_tty",
+- PLATFORM_DEVID_NONE,
+- goldfish_tty_res,
+- ARRAY_SIZE(goldfish_tty_res));
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ pdev1 = platform_device_register_simple("goldfish_tty",
++ PLATFORM_DEVID_NONE,
++ goldfish_tty_res,
++ ARRAY_SIZE(goldfish_tty_res));
++ if (IS_ERR(pdev1))
++ return PTR_ERR(pdev1);
+
+- pdev = platform_device_register_simple("goldfish_rtc",
+- PLATFORM_DEVID_NONE,
+- goldfish_rtc_res,
+- ARRAY_SIZE(goldfish_rtc_res));
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ pdev2 = platform_device_register_simple("goldfish_rtc",
++ PLATFORM_DEVID_NONE,
++ goldfish_rtc_res,
++ ARRAY_SIZE(goldfish_rtc_res));
++ if (IS_ERR(pdev2)) {
++ ret = PTR_ERR(pdev2);
++ goto err_unregister_tty;
++ }
+
+ for (i = 0; i < VIRTIO_BUS_NB; i++) {
+- int err;
+-
+- err = virt_virtio_init(i);
+- if (err)
+- return err;
++ pdevs[i] = virt_virtio_init(i);
++ if (IS_ERR(pdevs[i])) {
++ ret = PTR_ERR(pdevs[i]);
++ goto err_unregister_rtc_virtio;
++ }
+ }
+
+ return 0;
++
++err_unregister_rtc_virtio:
++ while (i > 0)
++ platform_device_unregister(pdevs[--i]);
++ platform_device_unregister(pdev2);
++err_unregister_tty:
++ platform_device_unregister(pdev1);
++
++ return ret;
+ }
+
+ arch_initcall(virt_platform_init);
+diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
+index bb43bf850314a..8eba5a1ed664c 100644
+--- a/arch/mips/kernel/proc.c
++++ b/arch/mips/kernel/proc.c
+@@ -311,7 +311,7 @@ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+ unsigned long i = *pos;
+
+- return i < NR_CPUS ? (void *) (i + 1) : NULL;
++ return i < nr_cpu_ids ? (void *) (i + 1) : NULL;
+ }
+
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 3d0cf471f2fe1..b2cc2c2dd4bfc 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -159,7 +159,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ /* Map GIC user page. */
+ if (gic_size) {
+ gic_base = (unsigned long)mips_gic_base + MIPS_GIC_USER_OFS;
+- gic_pfn = virt_to_phys((void *)gic_base) >> PAGE_SHIFT;
++ gic_pfn = PFN_DOWN(__pa(gic_base));
+
+ ret = io_remap_pfn_range(vma, base, gic_pfn, gic_size,
+ pgprot_noncached(vma->vm_page_prot));
+diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
+index 69a533148efdd..8f61e93c0c5bc 100644
+--- a/arch/mips/loongson64/numa.c
++++ b/arch/mips/loongson64/numa.c
+@@ -196,7 +196,6 @@ void __init prom_init_numa_memory(void)
+ pr_info("CP0_PageGrain: CP0 5.1 (0x%x)\n", read_c0_pagegrain());
+ prom_meminit();
+ }
+-EXPORT_SYMBOL(prom_init_numa_memory);
+
+ pg_data_t * __init arch_alloc_nodedata(int nid)
+ {
+diff --git a/arch/mips/mm/physaddr.c b/arch/mips/mm/physaddr.c
+index a1ced5e449511..f9b8c85e98433 100644
+--- a/arch/mips/mm/physaddr.c
++++ b/arch/mips/mm/physaddr.c
+@@ -5,6 +5,7 @@
+ #include <linux/mmdebug.h>
+ #include <linux/mm.h>
+
++#include <asm/addrspace.h>
+ #include <asm/sections.h>
+ #include <asm/io.h>
+ #include <asm/page.h>
+@@ -12,15 +13,6 @@
+
+ static inline bool __debug_virt_addr_valid(unsigned long x)
+ {
+- /* high_memory does not get immediately defined, and there
+- * are early callers of __pa() against PAGE_OFFSET
+- */
+- if (!high_memory && x >= PAGE_OFFSET)
+- return true;
+-
+- if (high_memory && x >= PAGE_OFFSET && x < (unsigned long)high_memory)
+- return true;
+-
+ /*
+ * MAX_DMA_ADDRESS is a virtual address that may not correspond to an
+ * actual physical address. Enough code relies on
+@@ -30,7 +22,9 @@ static inline bool __debug_virt_addr_valid(unsigned long x)
+ if (x == MAX_DMA_ADDRESS)
+ return true;
+
+- return false;
++ return x >= PAGE_OFFSET && (KSEGX(x) < KSEG2 ||
++ IS_ENABLED(CONFIG_EVA) ||
++ !IS_ENABLED(CONFIG_HIGHMEM));
+ }
+
+ phys_addr_t __virt_to_phys(volatile const void *x)
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index a9bc578e4c52e..af3d7cdc1541b 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -50,9 +50,6 @@ void flush_instruction_cache_local(void); /* flushes local code-cache only */
+ */
+ DEFINE_SPINLOCK(pa_tlb_flush_lock);
+
+-/* Swapper page setup lock. */
+-DEFINE_SPINLOCK(pa_swapper_pg_lock);
+-
+ #if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
+ int pa_serialize_tlb_flushes __ro_after_init;
+ #endif
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index 776d624a7207b..d126e78e101ae 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -520,7 +520,6 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
+ dev->id.hversion_rev = iodc_data[1] & 0x0f;
+ dev->id.sversion = ((iodc_data[4] & 0x0f) << 16) |
+ (iodc_data[5] << 8) | iodc_data[6];
+- dev->hpa.name = parisc_pathname(dev);
+ dev->hpa.start = hpa;
+ /* This is awkward. The STI spec says that gfx devices may occupy
+ * 32MB or 64MB. Unfortunately, we don't know how to tell whether
+@@ -534,10 +533,10 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
+ dev->hpa.end = hpa + 0xfff;
+ }
+ dev->hpa.flags = IORESOURCE_MEM;
+- name = parisc_hardware_description(&dev->id);
+- if (name) {
+- strlcpy(dev->name, name, sizeof(dev->name));
+- }
++ dev->hpa.name = dev->name;
++ name = parisc_hardware_description(&dev->id) ? : "unknown";
++ snprintf(dev->name, sizeof(dev->name), "%s [%s]",
++ name, parisc_pathname(dev));
+
+ /* Silently fail things like mouse ports which are subsumed within
+ * the keyboard controller
+diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
+index 68b46fe2f17c5..8a99c998da9bb 100644
+--- a/arch/parisc/kernel/syscalls/syscall.tbl
++++ b/arch/parisc/kernel/syscalls/syscall.tbl
+@@ -413,7 +413,7 @@
+ 412 32 utimensat_time64 sys_utimensat sys_utimensat
+ 413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
+ 414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
+-416 32 io_pgetevents_time64 sys_io_pgetevents sys_io_pgetevents
++416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
+ 417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
+ 418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
+ 419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
+diff --git a/arch/powerpc/configs/44x/akebono_defconfig b/arch/powerpc/configs/44x/akebono_defconfig
+index 4bc549c6edc5a..fde4824f235ef 100644
+--- a/arch/powerpc/configs/44x/akebono_defconfig
++++ b/arch/powerpc/configs/44x/akebono_defconfig
+@@ -118,7 +118,7 @@ CONFIG_CRAMFS=y
+ CONFIG_NLS_DEFAULT="n"
+ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ISO8859_1=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_XMON=y
+diff --git a/arch/powerpc/configs/44x/currituck_defconfig b/arch/powerpc/configs/44x/currituck_defconfig
+index 7178272199217..7283b7d4a1a57 100644
+--- a/arch/powerpc/configs/44x/currituck_defconfig
++++ b/arch/powerpc/configs/44x/currituck_defconfig
+@@ -73,7 +73,7 @@ CONFIG_NFS_FS=y
+ CONFIG_NFS_V3_ACL=y
+ CONFIG_NFS_V4=y
+ CONFIG_NLS_DEFAULT="n"
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_XMON=y
+diff --git a/arch/powerpc/configs/44x/fsp2_defconfig b/arch/powerpc/configs/44x/fsp2_defconfig
+index 8da316e61a08c..3fdfbb29b8548 100644
+--- a/arch/powerpc/configs/44x/fsp2_defconfig
++++ b/arch/powerpc/configs/44x/fsp2_defconfig
+@@ -110,7 +110,7 @@ CONFIG_XZ_DEC=y
+ CONFIG_PRINTK_TIME=y
+ CONFIG_MESSAGE_LOGLEVEL_DEFAULT=3
+ CONFIG_DYNAMIC_DEBUG=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_CRYPTO_CBC=y
+diff --git a/arch/powerpc/configs/44x/iss476-smp_defconfig b/arch/powerpc/configs/44x/iss476-smp_defconfig
+index c11e777b2f3d6..0f6380e1e6125 100644
+--- a/arch/powerpc/configs/44x/iss476-smp_defconfig
++++ b/arch/powerpc/configs/44x/iss476-smp_defconfig
+@@ -56,7 +56,7 @@ CONFIG_PROC_KCORE=y
+ CONFIG_TMPFS=y
+ CONFIG_CRAMFS=y
+ # CONFIG_NETWORK_FILESYSTEMS is not set
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_PPC_EARLY_DEBUG=y
+diff --git a/arch/powerpc/configs/44x/warp_defconfig b/arch/powerpc/configs/44x/warp_defconfig
+index 47252c2d7669a..20891c413149c 100644
+--- a/arch/powerpc/configs/44x/warp_defconfig
++++ b/arch/powerpc/configs/44x/warp_defconfig
+@@ -88,7 +88,7 @@ CONFIG_NLS_UTF8=y
+ CONFIG_CRC_CCITT=y
+ CONFIG_CRC_T10DIF=y
+ CONFIG_PRINTK_TIME=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DEBUG_FS=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/configs/52xx/lite5200b_defconfig b/arch/powerpc/configs/52xx/lite5200b_defconfig
+index 63368e6775064..7db479dcbc0c4 100644
+--- a/arch/powerpc/configs/52xx/lite5200b_defconfig
++++ b/arch/powerpc/configs/52xx/lite5200b_defconfig
+@@ -58,6 +58,6 @@ CONFIG_NFS_FS=y
+ CONFIG_NFS_V4=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_PRINTK_TIME=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DETECT_HUNG_TASK=y
+ # CONFIG_DEBUG_BUGVERBOSE is not set
+diff --git a/arch/powerpc/configs/52xx/motionpro_defconfig b/arch/powerpc/configs/52xx/motionpro_defconfig
+index 72762da94846f..6186ead1e1056 100644
+--- a/arch/powerpc/configs/52xx/motionpro_defconfig
++++ b/arch/powerpc/configs/52xx/motionpro_defconfig
+@@ -84,7 +84,7 @@ CONFIG_ROOT_NFS=y
+ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_PRINTK_TIME=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DETECT_HUNG_TASK=y
+ # CONFIG_DEBUG_BUGVERBOSE is not set
+ CONFIG_CRYPTO_ECB=y
+diff --git a/arch/powerpc/configs/52xx/tqm5200_defconfig b/arch/powerpc/configs/52xx/tqm5200_defconfig
+index a3c8ca74032c4..e6735b945327e 100644
+--- a/arch/powerpc/configs/52xx/tqm5200_defconfig
++++ b/arch/powerpc/configs/52xx/tqm5200_defconfig
+@@ -85,7 +85,7 @@ CONFIG_ROOT_NFS=y
+ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_PRINTK_TIME=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DETECT_HUNG_TASK=y
+ # CONFIG_DEBUG_BUGVERBOSE is not set
+ CONFIG_CRYPTO_ECB=y
+diff --git a/arch/powerpc/configs/adder875_defconfig b/arch/powerpc/configs/adder875_defconfig
+index 5326bc7392790..7f35d5bc12299 100644
+--- a/arch/powerpc/configs/adder875_defconfig
++++ b/arch/powerpc/configs/adder875_defconfig
+@@ -45,7 +45,7 @@ CONFIG_CRAMFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CRC32_SLICEBY4=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DEBUG_FS=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/configs/ep8248e_defconfig b/arch/powerpc/configs/ep8248e_defconfig
+index 00d69965f898b..8df6d3a293e3c 100644
+--- a/arch/powerpc/configs/ep8248e_defconfig
++++ b/arch/powerpc/configs/ep8248e_defconfig
+@@ -59,7 +59,7 @@ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ASCII=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_NLS_UTF8=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ # CONFIG_SCHED_DEBUG is not set
+ CONFIG_BDI_SWITCH=y
+diff --git a/arch/powerpc/configs/ep88xc_defconfig b/arch/powerpc/configs/ep88xc_defconfig
+index f5c3e72da7196..a98ef6a4abef6 100644
+--- a/arch/powerpc/configs/ep88xc_defconfig
++++ b/arch/powerpc/configs/ep88xc_defconfig
+@@ -48,6 +48,6 @@ CONFIG_CRAMFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CRC32_SLICEBY4=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/configs/fsl-emb-nonhw.config b/arch/powerpc/configs/fsl-emb-nonhw.config
+index df37efed0aec3..f14c6dbd7346c 100644
+--- a/arch/powerpc/configs/fsl-emb-nonhw.config
++++ b/arch/powerpc/configs/fsl-emb-nonhw.config
+@@ -24,7 +24,7 @@ CONFIG_CRYPTO_PCBC=m
+ CONFIG_CRYPTO_SHA256=y
+ CONFIG_CRYPTO_SHA512=y
+ CONFIG_DEBUG_FS=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DEBUG_KERNEL=y
+ CONFIG_DEBUG_SHIRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/configs/mgcoge_defconfig b/arch/powerpc/configs/mgcoge_defconfig
+index dcc8dccf54f3b..498d35db78331 100644
+--- a/arch/powerpc/configs/mgcoge_defconfig
++++ b/arch/powerpc/configs/mgcoge_defconfig
+@@ -73,7 +73,7 @@ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ASCII=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_NLS_UTF8=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DEBUG_FS=y
+ CONFIG_MAGIC_SYSRQ=y
+ # CONFIG_SCHED_DEBUG is not set
+diff --git a/arch/powerpc/configs/mpc5200_defconfig b/arch/powerpc/configs/mpc5200_defconfig
+index 83d801307178d..c0fe5e76604a0 100644
+--- a/arch/powerpc/configs/mpc5200_defconfig
++++ b/arch/powerpc/configs/mpc5200_defconfig
+@@ -122,6 +122,6 @@ CONFIG_ROOT_NFS=y
+ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_PRINTK_TIME=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_DEBUG_KERNEL=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/configs/mpc8272_ads_defconfig b/arch/powerpc/configs/mpc8272_ads_defconfig
+index 00a4d2bf43b2a..4145ef5689caa 100644
+--- a/arch/powerpc/configs/mpc8272_ads_defconfig
++++ b/arch/powerpc/configs/mpc8272_ads_defconfig
+@@ -67,7 +67,7 @@ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ASCII=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_NLS_UTF8=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_BDI_SWITCH=y
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index c74dc76b1d0d1..700115d85d6fb 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -71,7 +71,7 @@ CONFIG_ROOT_NFS=y
+ CONFIG_CRYPTO=y
+ CONFIG_CRYPTO_DEV_TALITOS=y
+ CONFIG_CRC32_SLICEBY4=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DEBUG_FS=y
+ CONFIG_DEBUG_VM_PGTABLE=y
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index b622ecd73286c..91967824272ef 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -1065,7 +1065,7 @@ CONFIG_NLS_ISO8859_14=m
+ CONFIG_NLS_ISO8859_15=m
+ CONFIG_NLS_KOI8_R=m
+ CONFIG_NLS_KOI8_U=m
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_HEADERS_INSTALL=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DEBUG_KERNEL=y
+diff --git a/arch/powerpc/configs/pq2fads_defconfig b/arch/powerpc/configs/pq2fads_defconfig
+index 9d8a76857c6fc..9d63e2e652115 100644
+--- a/arch/powerpc/configs/pq2fads_defconfig
++++ b/arch/powerpc/configs/pq2fads_defconfig
+@@ -68,7 +68,7 @@ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ASCII=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_NLS_UTF8=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+ # CONFIG_SCHED_DEBUG is not set
+diff --git a/arch/powerpc/configs/ps3_defconfig b/arch/powerpc/configs/ps3_defconfig
+index 7c95fab4b9206..2d9ac233da685 100644
+--- a/arch/powerpc/configs/ps3_defconfig
++++ b/arch/powerpc/configs/ps3_defconfig
+@@ -153,7 +153,7 @@ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ISO8859_1=y
+ CONFIG_CRC_CCITT=m
+ CONFIG_CRC_T10DIF=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DEBUG_MEMORY_INIT=y
+ CONFIG_DEBUG_STACKOVERFLOW=y
+diff --git a/arch/powerpc/configs/tqm8xx_defconfig b/arch/powerpc/configs/tqm8xx_defconfig
+index 77857d5130223..083c2e57520a0 100644
+--- a/arch/powerpc/configs/tqm8xx_defconfig
++++ b/arch/powerpc/configs/tqm8xx_defconfig
+@@ -55,6 +55,6 @@ CONFIG_CRAMFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CRC32_SLICEBY4=y
+-CONFIG_DEBUG_INFO=y
++CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_DETECT_HUNG_TASK=y
+diff --git a/arch/powerpc/include/asm/archrandom.h b/arch/powerpc/include/asm/archrandom.h
+index 9a53e29680f41..258174304904b 100644
+--- a/arch/powerpc/include/asm/archrandom.h
++++ b/arch/powerpc/include/asm/archrandom.h
+@@ -38,12 +38,7 @@ static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+ #endif /* CONFIG_ARCH_RANDOM */
+
+ #ifdef CONFIG_PPC_POWERNV
+-int powernv_hwrng_present(void);
+ int powernv_get_random_long(unsigned long *v);
+-int powernv_get_random_real_mode(unsigned long *v);
+-#else
+-static inline int powernv_hwrng_present(void) { return 0; }
+-static inline int powernv_get_random_real_mode(unsigned long *v) { return 0; }
+ #endif
+
+ #endif /* _ASM_POWERPC_ARCHRANDOM_H */
+diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
+index 2aefe14e14422..1e5e9b6ec78d9 100644
+--- a/arch/powerpc/include/asm/kexec.h
++++ b/arch/powerpc/include/asm/kexec.h
+@@ -120,6 +120,15 @@ int setup_purgatory(struct kimage *image, const void *slave_code,
+ #ifdef CONFIG_PPC64
+ struct kexec_buf;
+
++int arch_kexec_kernel_image_probe(struct kimage *image, void *buf, unsigned long buf_len);
++#define arch_kexec_kernel_image_probe arch_kexec_kernel_image_probe
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image);
++#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
++
++int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
++#define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole
++
+ int load_crashdump_segments_ppc64(struct kimage *image,
+ struct kexec_buf *kbuf);
+ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+diff --git a/arch/powerpc/include/asm/simple_spinlock.h b/arch/powerpc/include/asm/simple_spinlock.h
+index 7ae6aeef8464e..9dcc7e9993b90 100644
+--- a/arch/powerpc/include/asm/simple_spinlock.h
++++ b/arch/powerpc/include/asm/simple_spinlock.h
+@@ -48,10 +48,11 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
+ static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
+ {
+ unsigned long tmp, token;
++ unsigned int eh = IS_ENABLED(CONFIG_PPC64);
+
+ token = LOCK_TOKEN;
+ __asm__ __volatile__(
+-"1: lwarx %0,0,%2,1\n\
++"1: lwarx %0,0,%2,%[eh]\n\
+ cmpwi 0,%0,0\n\
+ bne- 2f\n\
+ stwcx. %1,0,%2\n\
+@@ -59,7 +60,7 @@ static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
+ PPC_ACQUIRE_BARRIER
+ "2:"
+ : "=&r" (tmp)
+- : "r" (token), "r" (&lock->slock)
++ : "r" (token), "r" (&lock->slock), [eh] "n" (eh)
+ : "cr0", "memory");
+
+ return tmp;
+@@ -156,9 +157,10 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
+ static inline long __arch_read_trylock(arch_rwlock_t *rw)
+ {
+ long tmp;
++ unsigned int eh = IS_ENABLED(CONFIG_PPC64);
+
+ __asm__ __volatile__(
+-"1: lwarx %0,0,%1,1\n"
++"1: lwarx %0,0,%1,%[eh]\n"
+ __DO_SIGN_EXTEND
+ " addic. %0,%0,1\n\
+ ble- 2f\n"
+@@ -166,7 +168,7 @@ static inline long __arch_read_trylock(arch_rwlock_t *rw)
+ bne- 1b\n"
+ PPC_ACQUIRE_BARRIER
+ "2:" : "=&r" (tmp)
+- : "r" (&rw->lock)
++ : "r" (&rw->lock), [eh] "n" (eh)
+ : "cr0", "xer", "memory");
+
+ return tmp;
+@@ -179,17 +181,18 @@ static inline long __arch_read_trylock(arch_rwlock_t *rw)
+ static inline long __arch_write_trylock(arch_rwlock_t *rw)
+ {
+ long tmp, token;
++ unsigned int eh = IS_ENABLED(CONFIG_PPC64);
+
+ token = WRLOCK_TOKEN;
+ __asm__ __volatile__(
+-"1: lwarx %0,0,%2,1\n\
++"1: lwarx %0,0,%2,%[eh]\n\
+ cmpwi 0,%0,0\n\
+ bne- 2f\n"
+ " stwcx. %1,0,%2\n\
+ bne- 1b\n"
+ PPC_ACQUIRE_BARRIER
+ "2:" : "=&r" (tmp)
+- : "r" (token), "r" (&rw->lock)
++ : "r" (token), "r" (&rw->lock), [eh] "n" (eh)
+ : "cr0", "memory");
+
+ return tmp;
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 7e56ddb3e0b94..caebe1431596e 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -775,6 +775,11 @@ bool iommu_table_in_use(struct iommu_table *tbl)
+ /* ignore reserved bit0 */
+ if (tbl->it_offset == 0)
+ start = 1;
++
++ /* Simple case with no reserved MMIO32 region */
++ if (!tbl->it_reserved_start && !tbl->it_reserved_end)
++ return find_next_bit(tbl->it_map, tbl->it_size, start) != tbl->it_size;
++
+ end = tbl->it_reserved_start - tbl->it_offset;
+ if (find_next_bit(tbl->it_map, end, start) != end)
+ return true;
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index 068410cd54a3d..c787df126ada2 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -74,16 +74,32 @@ void __init set_pci_dma_ops(const struct dma_map_ops *dma_ops)
+ static int get_phb_number(struct device_node *dn)
+ {
+ int ret, phb_id = -1;
+- u32 prop_32;
+ u64 prop;
+
+ /*
+ * Try fixed PHB numbering first, by checking archs and reading
+- * the respective device-tree properties. Firstly, try powernv by
+- * reading "ibm,opal-phbid", only present in OPAL environment.
++ * the respective device-tree properties. Firstly, try reading
++ * standard "linux,pci-domain", then try reading "ibm,opal-phbid"
++ * (only present in powernv OPAL environment), then try device-tree
++ * alias and as the last try to use lower bits of "reg" property.
+ */
+- ret = of_property_read_u64(dn, "ibm,opal-phbid", &prop);
++ ret = of_get_pci_domain_nr(dn);
++ if (ret >= 0) {
++ prop = ret;
++ ret = 0;
++ }
++ if (ret)
++ ret = of_property_read_u64(dn, "ibm,opal-phbid", &prop);
++
+ if (ret) {
++ ret = of_alias_get_id(dn, "pci");
++ if (ret >= 0) {
++ prop = ret;
++ ret = 0;
++ }
++ }
++ if (ret) {
++ u32 prop_32;
+ ret = of_property_read_u32_index(dn, "reg", 1, &prop_32);
+ prop = prop_32;
+ }
+@@ -95,10 +111,7 @@ static int get_phb_number(struct device_node *dn)
+ if ((phb_id >= 0) && !test_and_set_bit(phb_id, phb_bitmap))
+ return phb_id;
+
+- /*
+- * If not pseries nor powernv, or if fixed PHB numbering tried to add
+- * the same PHB number twice, then fallback to dynamic PHB numbering.
+- */
++ /* If everything fails then fallback to dynamic PHB numbering. */
+ phb_id = find_first_zero_bit(phb_bitmap, MAX_PHBS);
+ BUG_ON(phb_id >= MAX_PHBS);
+ set_bit(phb_id, phb_bitmap);
+diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
+index 2a893e06e4f1f..58e9a2d9b284f 100644
+--- a/arch/powerpc/kernel/trace/ftrace.c
++++ b/arch/powerpc/kernel/trace/ftrace.c
+@@ -392,11 +392,11 @@ int ftrace_make_nop(struct module *mod,
+ */
+ static bool expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1)
+ {
+- if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1))
++ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
++ return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP()));
++ else
+ return ppc_inst_equal(op0, ppc_inst(PPC_RAW_BRANCH(8))) &&
+ ppc_inst_equal(op1, ppc_inst(PPC_INST_LD_TOC));
+- else
+- return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP()));
+ }
+
+ static int
+@@ -411,7 +411,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ if (copy_inst_from_kernel_nofault(op, ip))
+ return -EFAULT;
+
+- if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1) &&
++ if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
+ copy_inst_from_kernel_nofault(op + 1, ip + 4))
+ return -EFAULT;
+
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index b4981b651d9aa..349a781cea0b3 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -23,6 +23,7 @@
+ #include <linux/vmalloc.h>
+ #include <asm/setup.h>
+ #include <asm/drmem.h>
++#include <asm/firmware.h>
+ #include <asm/kexec_ranges.h>
+ #include <asm/crashdump-ppc64.h>
+
+@@ -1038,6 +1039,48 @@ out:
+ return ret;
+ }
+
++static int copy_property(void *fdt, int node_offset, const struct device_node *dn,
++ const char *propname)
++{
++ const void *prop, *fdtprop;
++ int len = 0, fdtlen = 0;
++
++ prop = of_get_property(dn, propname, &len);
++ fdtprop = fdt_getprop(fdt, node_offset, propname, &fdtlen);
++
++ if (fdtprop && !prop)
++ return fdt_delprop(fdt, node_offset, propname);
++ else if (prop)
++ return fdt_setprop(fdt, node_offset, propname, prop, len);
++ else
++ return -FDT_ERR_NOTFOUND;
++}
++
++static int update_pci_dma_nodes(void *fdt, const char *dmapropname)
++{
++ struct device_node *dn;
++ int pci_offset, root_offset, ret = 0;
++
++ if (!firmware_has_feature(FW_FEATURE_LPAR))
++ return 0;
++
++ root_offset = fdt_path_offset(fdt, "/");
++ for_each_node_with_property(dn, dmapropname) {
++ pci_offset = fdt_subnode_offset(fdt, root_offset, of_node_full_name(dn));
++ if (pci_offset < 0)
++ continue;
++
++ ret = copy_property(fdt, pci_offset, dn, "ibm,dma-window");
++ if (ret < 0)
++ break;
++ ret = copy_property(fdt, pci_offset, dn, dmapropname);
++ if (ret < 0)
++ break;
++ }
++
++ return ret;
++}
++
+ /**
+ * setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel
+ * being loaded.
+@@ -1099,6 +1142,18 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
+ if (ret < 0)
+ goto out;
+
++#define DIRECT64_PROPNAME "linux,direct64-ddr-window-info"
++#define DMA64_PROPNAME "linux,dma64-ddr-window-info"
++ ret = update_pci_dma_nodes(fdt, DIRECT64_PROPNAME);
++ if (ret < 0)
++ goto out;
++
++ ret = update_pci_dma_nodes(fdt, DMA64_PROPNAME);
++ if (ret < 0)
++ goto out;
++#undef DMA64_PROPNAME
++#undef DIRECT64_PROPNAME
++
+ /* Update memory reserve map */
+ ret = get_reserved_memory_ranges(&rmem);
+ if (ret)
+diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
+index 88a8f6473c4e0..3abaef5f9ac27 100644
+--- a/arch/powerpc/kvm/book3s_hv_builtin.c
++++ b/arch/powerpc/kvm/book3s_hv_builtin.c
+@@ -19,7 +19,7 @@
+ #include <asm/interrupt.h>
+ #include <asm/kvm_ppc.h>
+ #include <asm/kvm_book3s.h>
+-#include <asm/archrandom.h>
++#include <asm/machdep.h>
+ #include <asm/xics.h>
+ #include <asm/xive.h>
+ #include <asm/dbell.h>
+@@ -176,13 +176,14 @@ EXPORT_SYMBOL_GPL(kvmppc_hcall_impl_hv_realmode);
+
+ int kvmppc_hwrng_present(void)
+ {
+- return powernv_hwrng_present();
++ return ppc_md.get_random_seed != NULL;
+ }
+ EXPORT_SYMBOL_GPL(kvmppc_hwrng_present);
+
+ long kvmppc_rm_h_random(struct kvm_vcpu *vcpu)
+ {
+- if (powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]))
++ if (ppc_md.get_random_seed &&
++ ppc_md.get_random_seed(&vcpu->arch.regs.gpr[4]))
+ return H_SUCCESS;
+
+ return H_HARDWARE;
+diff --git a/arch/powerpc/kvm/book3s_xics.h b/arch/powerpc/kvm/book3s_xics.h
+index 8e4c79e2fcd84..08fb0843faf58 100644
+--- a/arch/powerpc/kvm/book3s_xics.h
++++ b/arch/powerpc/kvm/book3s_xics.h
+@@ -143,6 +143,7 @@ static inline struct kvmppc_ics *kvmppc_xics_find_ics(struct kvmppc_xics *xics,
+ }
+
+ extern unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu);
++extern unsigned long xics_rm_h_xirr_x(struct kvm_vcpu *vcpu);
+ extern int xics_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
+ unsigned long mfrr);
+ extern int xics_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr);
+diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
+index f3e4d069e0ba7..a70828a6d9357 100644
+--- a/arch/powerpc/mm/kasan/init_32.c
++++ b/arch/powerpc/mm/kasan/init_32.c
+@@ -25,7 +25,7 @@ static void __init kasan_populate_pte(pte_t *ptep, pgprot_t prot)
+ int i;
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++)
+- __set_pte_at(&init_mm, va, ptep, pfn_pte(PHYS_PFN(pa), prot), 0);
++ __set_pte_at(&init_mm, va, ptep, pfn_pte(PHYS_PFN(pa), prot), 1);
+ }
+
+ int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end)
+diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
+index 27f9186ae3740..1ee08c3efe5b6 100644
+--- a/arch/powerpc/mm/nohash/8xx.c
++++ b/arch/powerpc/mm/nohash/8xx.c
+@@ -179,8 +179,8 @@ void mmu_mark_initmem_nx(void)
+ unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
+ unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
+
+- mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, false);
+- mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
++ if (!debug_pagealloc_enabled_or_kfence())
++ mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
+
+ mmu_pin_tlb(block_mapped_ram, false);
+ }
+diff --git a/arch/powerpc/mm/nohash/tlb_low_64e.S b/arch/powerpc/mm/nohash/tlb_low_64e.S
+index 8b97c4acfebfa..9e9ab3803fb2f 100644
+--- a/arch/powerpc/mm/nohash/tlb_low_64e.S
++++ b/arch/powerpc/mm/nohash/tlb_low_64e.S
+@@ -583,7 +583,7 @@ itlb_miss_fault_e6500:
+ */
+ rlwimi r11,r14,32-19,27,27
+ rlwimi r11,r14,32-16,19,19
+- beq normal_tlb_miss
++ beq normal_tlb_miss_user
+ /* XXX replace the RMW cycles with immediate loads + writes */
+ 1: mfspr r10,SPRN_MAS1
+ cmpldi cr0,r15,8 /* Check for vmalloc region */
+@@ -626,7 +626,7 @@ itlb_miss_fault_e6500:
+
+ cmpldi cr0,r15,0 /* Check for user region */
+ std r14,EX_TLB_ESR(r12) /* write crazy -1 to frame */
+- beq normal_tlb_miss
++ beq normal_tlb_miss_user
+
+ li r11,_PAGE_PRESENT|_PAGE_BAP_SX /* Base perm */
+ oris r11,r11,_PAGE_ACCESSED@h
+@@ -653,6 +653,12 @@ itlb_miss_fault_e6500:
+ * r11 = PTE permission mask
+ * r10 = crap (free to use)
+ */
++normal_tlb_miss_user:
++#ifdef CONFIG_PPC_KUAP
++ mfspr r14,SPRN_MAS1
++ rlwinm. r14,r14,0,0x3fff0000
++ beq- normal_tlb_miss_access_fault /* KUAP fault */
++#endif
+ normal_tlb_miss:
+ /* So we first construct the page table address. We do that by
+ * shifting the bottom of the address (not the region ID) by
+@@ -683,11 +689,6 @@ finish_normal_tlb_miss:
+ /* Check if required permissions are met */
+ andc. r15,r11,r14
+ bne- normal_tlb_miss_access_fault
+-#ifdef CONFIG_PPC_KUAP
+- mfspr r11,SPRN_MAS1
+- rlwinm. r10,r11,0,0x3fff0000
+- beq- normal_tlb_miss_access_fault /* KUAP fault */
+-#endif
+
+ /* Now we build the MAS:
+ *
+@@ -709,9 +710,7 @@ finish_normal_tlb_miss:
+ rldicl r10,r14,64-8,64-8
+ cmpldi cr0,r10,BOOK3E_PAGESZ_4K
+ beq- 1f
+-#ifndef CONFIG_PPC_KUAP
+ mfspr r11,SPRN_MAS1
+-#endif
+ rlwimi r11,r14,31,21,24
+ rlwinm r11,r11,0,21,19
+ mtspr SPRN_MAS1,r11
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index a56ade39dc68a..3ac73f9fb5d59 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -135,9 +135,9 @@ void mark_initmem_nx(void)
+ unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+ PFN_DOWN((unsigned long)_sinittext);
+
+- if (v_block_mapped((unsigned long)_sinittext)) {
+- mmu_mark_initmem_nx();
+- } else {
++ mmu_mark_initmem_nx();
++
++ if (!v_block_mapped((unsigned long)_sinittext)) {
+ set_memory_nx((unsigned long)_sinittext, numpages);
+ set_memory_rw((unsigned long)_sinittext, numpages);
+ }
+diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
+index 03607ab90c66f..f884760ca5cfe 100644
+--- a/arch/powerpc/mm/ptdump/shared.c
++++ b/arch/powerpc/mm/ptdump/shared.c
+@@ -17,9 +17,9 @@ static const struct flag_info flag_array[] = {
+ .clear = " ",
+ }, {
+ .mask = _PAGE_RW,
+- .val = _PAGE_RW,
+- .set = "rw",
+- .clear = "r ",
++ .val = 0,
++ .set = "r ",
++ .clear = "rw",
+ }, {
+ .mask = _PAGE_EXEC,
+ .val = _PAGE_EXEC,
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 140502a7fdf86..03c64a0195df2 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1349,27 +1349,22 @@ static void power_pmu_disable(struct pmu *pmu)
+ * a PMI happens during interrupt replay and perf counter
+ * values are cleared by PMU callbacks before replay.
+ *
+- * If any PMC corresponding to the active PMU events are
+- * overflown, disable the interrupt by clearing the paca
+- * bit for PMI since we are disabling the PMU now.
+- * Otherwise provide a warning if there is PMI pending, but
+- * no counter is found overflown.
++ * Disable the interrupt by clearing the paca bit for PMI
++ * since we are disabling the PMU now. Otherwise provide a
++ * warning if there is PMI pending, but no counter is found
++ * overflown.
++ *
++ * Since power_pmu_disable runs under local_irq_save, it
++ * could happen that code hits a PMC overflow without PMI
++ * pending in paca. Hence only clear PMI pending if it was
++ * set.
++ *
++ * If a PMI is pending, then MSR[EE] must be disabled (because
++ * the masked PMI handler disabling EE). So it is safe to
++ * call clear_pmi_irq_pending().
+ */
+- if (any_pmc_overflown(cpuhw)) {
+- /*
+- * Since power_pmu_disable runs under local_irq_save, it
+- * could happen that code hits a PMC overflow without PMI
+- * pending in paca. Hence only clear PMI pending if it was
+- * set.
+- *
+- * If a PMI is pending, then MSR[EE] must be disabled (because
+- * the masked PMI handler disabling EE). So it is safe to
+- * call clear_pmi_irq_pending().
+- */
+- if (pmi_irq_pending())
+- clear_pmi_irq_pending();
+- } else
+- WARN_ON(pmi_irq_pending());
++ if (pmi_irq_pending())
++ clear_pmi_irq_pending();
+
+ val = mmcra = cpuhw->mmcr.mmcra;
+
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 9e2df4b664788..3629fd73083e2 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -174,11 +174,11 @@ config POWER9_CPU
+
+ config E5500_CPU
+ bool "Freescale e5500"
+- depends on E500
++ depends on PPC64 && E500
+
+ config E6500_CPU
+ bool "Freescale e6500"
+- depends on E500
++ depends on PPC64 && E500
+
+ config 860_CPU
+ bool "8xx family"
+diff --git a/arch/powerpc/platforms/cell/axon_msi.c b/arch/powerpc/platforms/cell/axon_msi.c
+index f3291e957a19d..5b012abca773d 100644
+--- a/arch/powerpc/platforms/cell/axon_msi.c
++++ b/arch/powerpc/platforms/cell/axon_msi.c
+@@ -223,6 +223,7 @@ static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg)
+ if (!prop) {
+ dev_dbg(&dev->dev,
+ "axon_msi: no msi-address-(32|64) properties found\n");
++ of_node_put(dn);
+ return -ENOENT;
+ }
+
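
Several hunks in this patch (here, and in spufs/inode.c and xive/spapr.c below) plug the same kind of leak: of_find_*/of_get_* return a device node with an elevated reference count, and every exit path, including the early error returns, must drop it with of_node_put(). A rough userspace sketch of the bug and the fix, using a toy refcounted handle in place of struct device_node:

    #include <stdio.h>

    /* Toy stand-in for a refcounted device node. */
    struct node { int refs; };

    static struct node *node_get(struct node *n) { n->refs++; return n; }
    static void node_put(struct node *n) { n->refs--; }

    /* Before the fix: the early error return never dropped the ref. */
    static int setup_leaky(struct node *dn, int have_prop)
    {
        node_get(dn);
        if (!have_prop)
            return -1;      /* leak: refs stays elevated */
        node_put(dn);
        return 0;
    }

    /* After the fix: every exit path drops the reference it took. */
    static int setup_fixed(struct node *dn, int have_prop)
    {
        node_get(dn);
        if (!have_prop) {
            node_put(dn);
            return -1;
        }
        node_put(dn);
        return 0;
    }

    int main(void)
    {
        struct node n = { .refs = 1 };

        setup_leaky(&n, 0);
        printf("leaky path: refs=%d (should be 1)\n", n.refs);
        n.refs = 1;
        setup_fixed(&n, 0);
        printf("fixed path: refs=%d\n", n.refs);
        return 0;
    }
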
+diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
+index 34334c32b7f58..320008528edd9 100644
+--- a/arch/powerpc/platforms/cell/spufs/inode.c
++++ b/arch/powerpc/platforms/cell/spufs/inode.c
+@@ -660,6 +660,7 @@ spufs_init_isolated_loader(void)
+ return;
+
+ loader = of_get_property(dn, "loader", &size);
++ of_node_put(dn);
+ if (!loader)
+ return;
+
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 3805ad13b8f3d..d19305292e1e3 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -29,15 +29,6 @@ struct powernv_rng {
+
+ static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng);
+
+-int powernv_hwrng_present(void)
+-{
+- struct powernv_rng *rng;
+-
+- rng = get_cpu_var(powernv_rng);
+- put_cpu_var(rng);
+- return rng != NULL;
+-}
+-
+ static unsigned long rng_whiten(struct powernv_rng *rng, unsigned long val)
+ {
+ unsigned long parity;
+@@ -58,17 +49,6 @@ static unsigned long rng_whiten(struct powernv_rng *rng, unsigned long val)
+ return val;
+ }
+
+-int powernv_get_random_real_mode(unsigned long *v)
+-{
+- struct powernv_rng *rng;
+-
+- rng = raw_cpu_read(powernv_rng);
+-
+- *v = rng_whiten(rng, __raw_rm_readq(rng->regs_real));
+-
+- return 1;
+-}
+-
+ static int powernv_get_random_darn(unsigned long *v)
+ {
+ unsigned long val;
+@@ -105,12 +85,14 @@ int powernv_get_random_long(unsigned long *v)
+ {
+ struct powernv_rng *rng;
+
+- rng = get_cpu_var(powernv_rng);
+-
+- *v = rng_whiten(rng, in_be64(rng->regs));
+-
+- put_cpu_var(rng);
+-
++ if (mfmsr() & MSR_DR) {
++ rng = get_cpu_var(powernv_rng);
++ *v = rng_whiten(rng, in_be64(rng->regs));
++ put_cpu_var(rng);
++ } else {
++ rng = raw_cpu_read(powernv_rng);
++ *v = rng_whiten(rng, __raw_rm_readq(rng->regs_real));
++ }
+ return 1;
+ }
+ EXPORT_SYMBOL_GPL(powernv_get_random_long);
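
The powernv change above folds the old real-mode helper into the one exported entry point, which now dispatches at run time on MSR[DR] (data relocation). A minimal sketch of that dispatch-on-mode shape; read_virt/read_real are made-up stand-ins for the two register accessors:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the cached and real-mode mappings. */
    static uint64_t read_virt(void) { return 0x1111; }
    static uint64_t read_real(void) { return 0x2222; }

    /* One entry point that picks the accessor from the current mode,
     * mirroring the mfmsr() & MSR_DR test in the patched function. */
    static int get_random_long(uint64_t *v, bool translation_on)
    {
        *v = translation_on ? read_virt() : read_real();
        return 1;
    }

    int main(void)
    {
        uint64_t v;

        get_random_long(&v, true);
        printf("virtual mode path: %#llx\n", (unsigned long long)v);
        get_random_long(&v, false);
        printf("real mode path:    %#llx\n", (unsigned long long)v);
        return 0;
    }
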
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index fba64304e8597..c3d425ef7b39a 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -700,6 +700,33 @@ struct iommu_table_ops iommu_table_lpar_multi_ops = {
+ .get = tce_get_pSeriesLP
+ };
+
++/*
++ * Find nearest ibm,dma-window (default DMA window) or direct DMA window or
++ * dynamic 64bit DMA window, walking up the device tree.
++ */
++static struct device_node *pci_dma_find(struct device_node *dn,
++ const __be32 **dma_window)
++{
++ const __be32 *dw = NULL;
++
++ for ( ; dn && PCI_DN(dn); dn = dn->parent) {
++ dw = of_get_property(dn, "ibm,dma-window", NULL);
++ if (dw) {
++ if (dma_window)
++ *dma_window = dw;
++ return dn;
++ }
++ dw = of_get_property(dn, DIRECT64_PROPNAME, NULL);
++ if (dw)
++ return dn;
++ dw = of_get_property(dn, DMA64_PROPNAME, NULL);
++ if (dw)
++ return dn;
++ }
++
++ return NULL;
++}
++
+ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ {
+ struct iommu_table *tbl;
+@@ -712,20 +739,10 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ pr_debug("pci_dma_bus_setup_pSeriesLP: setting up bus %pOF\n",
+ dn);
+
+- /*
+- * Find nearest ibm,dma-window (default DMA window), walking up the
+- * device tree
+- */
+- for (pdn = dn; pdn != NULL; pdn = pdn->parent) {
+- dma_window = of_get_property(pdn, "ibm,dma-window", NULL);
+- if (dma_window != NULL)
+- break;
+- }
++ pdn = pci_dma_find(dn, &dma_window);
+
+- if (dma_window == NULL) {
++ if (dma_window == NULL)
+ pr_debug(" no ibm,dma-window property !\n");
+- return;
+- }
+
+ ppci = PCI_DN(pdn);
+
+@@ -735,11 +752,13 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ if (!ppci->table_group) {
+ ppci->table_group = iommu_pseries_alloc_group(ppci->phb->node);
+ tbl = ppci->table_group->tables[0];
+- iommu_table_setparms_lpar(ppci->phb, pdn, tbl,
+- ppci->table_group, dma_window);
++ if (dma_window) {
++ iommu_table_setparms_lpar(ppci->phb, pdn, tbl,
++ ppci->table_group, dma_window);
+
+- if (!iommu_init_table(tbl, ppci->phb->node, 0, 0))
+- panic("Failed to initialize iommu table");
++ if (!iommu_init_table(tbl, ppci->phb->node, 0, 0))
++ panic("Failed to initialize iommu table");
++ }
+ iommu_register_group(ppci->table_group,
+ pci_domain_nr(bus), 0);
+ pr_debug(" created table: %p\n", ppci->table_group);
+@@ -1232,7 +1251,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ bool default_win_removed = false, direct_mapping = false;
+ bool pmem_present;
+ struct pci_dn *pci = PCI_DN(pdn);
+- struct iommu_table *tbl = pci->table_group->tables[0];
++ struct property *default_win = NULL;
+
+ dn = of_find_node_by_type(NULL, "ibm,pmemory");
+ pmem_present = dn != NULL;
+@@ -1289,11 +1308,10 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ * for extensions presence.
+ */
+ if (query.windows_available == 0) {
+- struct property *default_win;
+ int reset_win_ext;
+
+ /* DDW + IOMMU on single window may fail if there is any allocation */
+- if (iommu_table_in_use(tbl)) {
++ if (iommu_table_in_use(pci->table_group->tables[0])) {
+ dev_warn(&dev->dev, "current IOMMU table in use, can't be replaced.\n");
+ goto out_failed;
+ }
+@@ -1429,16 +1447,18 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+
+ pci->table_group->tables[1] = newtbl;
+
+- /* Keep default DMA window struct if removed */
+- if (default_win_removed) {
+- tbl->it_size = 0;
+- vfree(tbl->it_map);
+- tbl->it_map = NULL;
+- }
+-
+ set_iommu_table_base(&dev->dev, newtbl);
+ }
+
++ if (default_win_removed) {
++ iommu_tce_table_put(pci->table_group->tables[0]);
++ pci->table_group->tables[0] = NULL;
++
++ /* default_win is valid here because default_win_removed == true */
++ of_remove_property(pdn, default_win);
++ dev_info(&dev->dev, "Removed default DMA window for %pOF\n", pdn);
++ }
++
+ spin_lock(&dma_win_list_lock);
+ list_add(&window->list, &dma_win_list);
+ spin_unlock(&dma_win_list_lock);
+@@ -1503,13 +1523,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
+ dn = pci_device_to_OF_node(dev);
+ pr_debug(" node is %pOF\n", dn);
+
+- for (pdn = dn; pdn && PCI_DN(pdn) && !PCI_DN(pdn)->table_group;
+- pdn = pdn->parent) {
+- dma_window = of_get_property(pdn, "ibm,dma-window", NULL);
+- if (dma_window)
+- break;
+- }
+-
++ pdn = pci_dma_find(dn, &dma_window);
+ if (!pdn || !PCI_DN(pdn)) {
+ printk(KERN_WARNING "pci_dma_dev_setup_pSeriesLP: "
+ "no DMA window found for pci dev=%s dn=%pOF\n",
+@@ -1540,7 +1554,6 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
+ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
+ {
+ struct device_node *dn = pci_device_to_OF_node(pdev), *pdn;
+- const __be32 *dma_window = NULL;
+
+ /* only attempt to use a new window if 64-bit DMA is requested */
+ if (dma_mask < DMA_BIT_MASK(64))
+@@ -1554,13 +1567,7 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
+ * search upwards in the tree until we either hit a dma-window
+ * property, OR find a parent with a table already allocated.
+ */
+- for (pdn = dn; pdn && PCI_DN(pdn) && !PCI_DN(pdn)->table_group;
+- pdn = pdn->parent) {
+- dma_window = of_get_property(pdn, "ibm,dma-window", NULL);
+- if (dma_window)
+- break;
+- }
+-
++ pdn = pci_dma_find(dn, NULL);
+ if (pdn && PCI_DN(pdn))
+ return enable_ddw(pdev, pdn);
+
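
pci_dma_find() above replaces three near-identical walk-up loops in this file: starting at the device's node it climbs toward the root, checking at each node for "ibm,dma-window" and then for the direct and dynamic 64-bit window properties, and returns the first node that carries any of them. A compact userspace analogue of that walk (the two 64-bit property names below are invented stand-ins for DIRECT64_PROPNAME and DMA64_PROPNAME):

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct tnode {
        struct tnode *parent;
        const char *props[4];   /* NULL-terminated property names */
        const char *name;
    };

    static int has_prop(const struct tnode *n, const char *prop)
    {
        for (size_t i = 0; n->props[i]; i++)
            if (!strcmp(n->props[i], prop))
                return 1;
        return 0;
    }

    /* Walk from dn up to the root; return the first node carrying any
     * candidate property, checked in priority order at each node. */
    static struct tnode *dma_find(struct tnode *dn)
    {
        static const char *cands[] = {
            "ibm,dma-window", "direct64-window", "dma64-window", NULL
        };

        for (; dn; dn = dn->parent)
            for (size_t i = 0; cands[i]; i++)
                if (has_prop(dn, cands[i]))
                    return dn;
        return NULL;
    }

    int main(void)
    {
        struct tnode root = { NULL, { "ibm,dma-window", NULL }, "root" };
        struct tnode leaf = { &root, { NULL }, "leaf" };
        struct tnode *hit = dma_find(&leaf);

        printf("window found on: %s\n", hit ? hit->name : "(none)");
        return 0;
    }
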
+diff --git a/arch/powerpc/sysdev/fsl_pci.c b/arch/powerpc/sysdev/fsl_pci.c
+index 1011cfea2e327..bfbb8c8fc9aaa 100644
+--- a/arch/powerpc/sysdev/fsl_pci.c
++++ b/arch/powerpc/sysdev/fsl_pci.c
+@@ -521,6 +521,7 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
+ struct resource rsrc;
+ const int *bus_range;
+ u8 hdr_type, progif;
++ u32 class_code;
+ struct device_node *dev;
+ struct ccsr_pci __iomem *pci;
+ u16 temp;
+@@ -594,6 +595,13 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
+ PPC_INDIRECT_TYPE_SURPRESS_PRIMARY_BUS;
+ if (fsl_pcie_check_link(hose))
+ hose->indirect_type |= PPC_INDIRECT_TYPE_NO_PCIE_LINK;
++ /* Fix Class Code to PCI_CLASS_BRIDGE_PCI_NORMAL for pre-3.0 controller */
++ if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0) {
++ early_read_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, &class_code);
++ class_code &= 0xff;
++ class_code |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
++ early_write_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, class_code);
++ }
+ } else {
+ /*
+ * Set PBFR(PCI Bus Function Register)[10] = 1 to
+diff --git a/arch/powerpc/sysdev/fsl_pci.h b/arch/powerpc/sysdev/fsl_pci.h
+index cdbde2e0c96ef..093a875d7d1ec 100644
+--- a/arch/powerpc/sysdev/fsl_pci.h
++++ b/arch/powerpc/sysdev/fsl_pci.h
+@@ -18,6 +18,7 @@ struct platform_device;
+
+ #define PCIE_LTSSM 0x0404 /* PCIE Link Training and Status */
+ #define PCIE_LTSSM_L0 0x16 /* L0 state */
++#define PCIE_FSL_CSR_CLASSCODE 0x474 /* FSL GPEX CSR */
+ #define PCIE_IP_REV_2_2 0x02080202 /* PCIE IP block version Rev2.2 */
+ #define PCIE_IP_REV_3_0 0x02080300 /* PCIE IP block version Rev3.0 */
+ #define PIWAR_EN 0x80000000 /* Enable */
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index d02911e78cfc1..e2c8f93b535ba 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -718,6 +718,7 @@ static bool __init xive_get_max_prio(u8 *max_prio)
+ }
+
+ reg = of_get_property(rootdn, "ibm,plat-res-int-priorities", &len);
++ of_node_put(rootdn);
+ if (!reg) {
+ pr_err("Failed to read 'ibm,plat-res-int-priorities' property\n");
+ return false;
+diff --git a/arch/riscv/boot/dts/starfive/jh7100.dtsi b/arch/riscv/boot/dts/starfive/jh7100.dtsi
+index 69f22f9aad9db..f48e232a72a74 100644
+--- a/arch/riscv/boot/dts/starfive/jh7100.dtsi
++++ b/arch/riscv/boot/dts/starfive/jh7100.dtsi
+@@ -118,7 +118,7 @@
+ interrupt-controller;
+ #address-cells = <0>;
+ #interrupt-cells = <1>;
+- riscv,ndev = <127>;
++ riscv,ndev = <133>;
+ };
+
+ clkgen: clock-controller@11800000 {
+diff --git a/arch/riscv/include/asm/cpu_ops.h b/arch/riscv/include/asm/cpu_ops.h
+index 134590f1b8435..aa128466c4d4e 100644
+--- a/arch/riscv/include/asm/cpu_ops.h
++++ b/arch/riscv/include/asm/cpu_ops.h
+@@ -38,6 +38,7 @@ struct cpu_operations {
+ #endif
+ };
+
++extern const struct cpu_operations cpu_ops_spinwait;
+ extern const struct cpu_operations *cpu_ops[NR_CPUS];
+ void __init cpu_set_ops(int cpu);
+
+diff --git a/arch/riscv/kernel/cpu_ops.c b/arch/riscv/kernel/cpu_ops.c
+index 170d07e577215..f92c0e6eddb16 100644
+--- a/arch/riscv/kernel/cpu_ops.c
++++ b/arch/riscv/kernel/cpu_ops.c
+@@ -15,9 +15,7 @@
+ const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;
+
+ extern const struct cpu_operations cpu_ops_sbi;
+-#ifdef CONFIG_RISCV_BOOT_SPINWAIT
+-extern const struct cpu_operations cpu_ops_spinwait;
+-#else
++#ifndef CONFIG_RISCV_BOOT_SPINWAIT
+ const struct cpu_operations cpu_ops_spinwait = {
+ .name = "",
+ .cpu_prepare = NULL,
+diff --git a/arch/riscv/kernel/cpu_ops_spinwait.c b/arch/riscv/kernel/cpu_ops_spinwait.c
+index 346847f6c41c8..d98d19226b5f5 100644
+--- a/arch/riscv/kernel/cpu_ops_spinwait.c
++++ b/arch/riscv/kernel/cpu_ops_spinwait.c
+@@ -11,6 +11,8 @@
+ #include <asm/sbi.h>
+ #include <asm/smp.h>
+
++#include "head.h"
++
+ const struct cpu_operations cpu_ops_spinwait;
+ void *__cpu_spinwait_stack_pointer[NR_CPUS] __section(".data");
+ void *__cpu_spinwait_task_pointer[NR_CPUS] __section(".data");
+@@ -18,7 +20,7 @@ void *__cpu_spinwait_task_pointer[NR_CPUS] __section(".data");
+ static void cpu_update_secondary_bootdata(unsigned int cpuid,
+ struct task_struct *tidle)
+ {
+- int hartid = cpuid_to_hartid_map(cpuid);
++ unsigned long hartid = cpuid_to_hartid_map(cpuid);
+
+ /*
+ * The hartid must be less than NR_CPUS to avoid out-of-bound access
+@@ -27,7 +29,7 @@ static void cpu_update_secondary_bootdata(unsigned int cpuid,
+ * spinwait booting is not the recommended approach for any platforms
+ * booting Linux in S-mode and can be disabled in the future.
+ */
+- if (hartid == INVALID_HARTID || hartid >= NR_CPUS)
++ if (hartid == INVALID_HARTID || hartid >= (unsigned long) NR_CPUS)
+ return;
+
+ /* Make sure tidle is updated */
+diff --git a/arch/riscv/kernel/crash_save_regs.S b/arch/riscv/kernel/crash_save_regs.S
+index 7832fb763abac..b2a1908c0463e 100644
+--- a/arch/riscv/kernel/crash_save_regs.S
++++ b/arch/riscv/kernel/crash_save_regs.S
+@@ -44,7 +44,7 @@ SYM_CODE_START(riscv_crash_save_regs)
+ REG_S t6, PT_T6(a0) /* x31 */
+
+ csrr t1, CSR_STATUS
+- csrr t2, CSR_EPC
++ auipc t2, 0x0
+ csrr t3, CSR_TVAL
+ csrr t4, CSR_CAUSE
+
+diff --git a/arch/riscv/kernel/machine_kexec.c b/arch/riscv/kernel/machine_kexec.c
+index df8e24559035c..ee79e6839b863 100644
+--- a/arch/riscv/kernel/machine_kexec.c
++++ b/arch/riscv/kernel/machine_kexec.c
+@@ -138,19 +138,37 @@ void machine_shutdown(void)
+ #endif
+ }
+
++/* Override the weak function in kernel/panic.c */
++void crash_smp_send_stop(void)
++{
++ static int cpus_stopped;
++
++ /*
++ * This function can be called twice in panic path, but obviously
++ * we execute this only once.
++ */
++ if (cpus_stopped)
++ return;
++
++ smp_send_stop();
++ cpus_stopped = 1;
++}
++
+ /*
+ * machine_crash_shutdown - Prepare to kexec after a kernel crash
+ *
+ * This function is called by crash_kexec just before machine_kexec
+- * below and its goal is similar to machine_shutdown, but in case of
+- * a kernel crash. Since we don't handle such cases yet, this function
+- * is empty.
++ * and its goal is to shutdown non-crashing cpus and save registers.
+ */
+ void
+ machine_crash_shutdown(struct pt_regs *regs)
+ {
++ local_irq_disable();
++
++ /* shutdown non-crashing cpus */
++ crash_smp_send_stop();
++
+ crash_save_cpu(regs, smp_processor_id());
+- machine_shutdown();
+ pr_info("Starting crashdump kernel...\n");
+ }
+
+@@ -171,7 +189,7 @@ machine_kexec(struct kimage *image)
+ struct kimage_arch *internal = &image->arch;
+ unsigned long jump_addr = (unsigned long) image->start;
+ unsigned long first_ind_entry = (unsigned long) &image->head;
+- unsigned long this_cpu_id = smp_processor_id();
++ unsigned long this_cpu_id = __smp_processor_id();
+ unsigned long this_hart_id = cpuid_to_hartid_map(this_cpu_id);
+ unsigned long fdt_addr = internal->fdt_addr;
+ void *control_code_buffer = page_address(image->control_code_page);
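
crash_smp_send_stop() above is the usual panic-path idempotence guard: as its comment notes, the function can be reached twice on the panic path, so a function-local static ensures the stop IPIs go out only once. The shape in isolation:

    #include <stdio.h>

    static void stop_other_cpus(void)
    {
        puts("stopping secondary CPUs");
    }

    /* Safe to call more than once on the panic path: only the first
     * call does the work, as with the cpus_stopped flag above. */
    static void crash_stop(void)
    {
        static int cpus_stopped;

        if (cpus_stopped)
            return;
        stop_other_cpus();
        cpus_stopped = 1;
    }

    int main(void)
    {
        crash_stop();
        crash_stop();   /* second call is a no-op */
        return 0;
    }
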
+diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
+index 7a057b5f0adc7..c976a21cd4bd5 100644
+--- a/arch/riscv/kernel/probes/uprobes.c
++++ b/arch/riscv/kernel/probes/uprobes.c
+@@ -59,8 +59,6 @@ int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+
+ instruction_pointer_set(regs, utask->xol_vaddr);
+
+- regs->status &= ~SR_SPIE;
+-
+ return 0;
+ }
+
+@@ -72,8 +70,6 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+
+ instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
+
+- regs->status |= SR_SPIE;
+-
+ return 0;
+ }
+
+@@ -111,8 +107,6 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ * address.
+ */
+ instruction_pointer_set(regs, utask->vaddr);
+-
+- regs->status &= ~SR_SPIE;
+ }
+
+ bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
+index 8c475f4da3084..ec486e5369d9b 100644
+--- a/arch/riscv/lib/uaccess.S
++++ b/arch/riscv/lib/uaccess.S
+@@ -175,7 +175,7 @@ ENTRY(__asm_copy_from_user)
+ /* Exception fixup code */
+ 10:
+ /* Disable access to user memory */
+- csrs CSR_STATUS, t6
++ csrc CSR_STATUS, t6
+ mv a0, t5
+ ret
+ ENDPROC(__asm_copy_to_user)
+@@ -227,7 +227,7 @@ ENTRY(__clear_user)
+ /* Exception fixup code */
+ 11:
+ /* Disable access to user memory */
+- csrs CSR_STATUS, t6
++ csrc CSR_STATUS, t6
+ mv a0, a1
+ ret
+ ENDPROC(__clear_user)
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index d466ec670e1fa..2c4a64e97aec1 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -135,6 +135,10 @@ static void __init print_vm_layout(void)
+ (unsigned long)VMEMMAP_END);
+ print_ml("vmalloc", (unsigned long)VMALLOC_START,
+ (unsigned long)VMALLOC_END);
++#ifdef CONFIG_64BIT
++ print_ml("modules", (unsigned long)MODULES_VADDR,
++ (unsigned long)MODULES_END);
++#endif
+ print_ml("lowmem", (unsigned long)PAGE_OFFSET,
+ (unsigned long)high_memory);
+ if (IS_ENABLED(CONFIG_64BIT)) {
+diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
+index 40264f60b0da9..f4073106e1f39 100644
+--- a/arch/s390/include/asm/gmap.h
++++ b/arch/s390/include/asm/gmap.h
+@@ -148,4 +148,6 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
+ unsigned long gaddr, unsigned long vmaddr);
+ int gmap_mark_unmergeable(void);
+ void s390_reset_acc(struct mm_struct *mm);
++void s390_unlist_old_asce(struct gmap *gmap);
++int s390_replace_asce(struct gmap *gmap);
+ #endif /* _ASM_S390_GMAP_H */
+diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
+index 649ecdcc87345..8886aadc11a3a 100644
+--- a/arch/s390/include/asm/kexec.h
++++ b/arch/s390/include/asm/kexec.h
+@@ -92,5 +92,8 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ const Elf_Shdr *relsec,
+ const Elf_Shdr *symtab);
+ #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image);
++#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
+ #endif
+ #endif /*_S390_KEXEC_H */
+diff --git a/arch/s390/include/asm/unwind.h b/arch/s390/include/asm/unwind.h
+index 0bf06f1682d81..02462e7100c1c 100644
+--- a/arch/s390/include/asm/unwind.h
++++ b/arch/s390/include/asm/unwind.h
+@@ -47,7 +47,7 @@ struct unwind_state {
+ static inline unsigned long unwind_recover_ret_addr(struct unwind_state *state,
+ unsigned long ip)
+ {
+- ip = ftrace_graph_ret_addr(state->task, &state->graph_idx, ip, NULL);
++ ip = ftrace_graph_ret_addr(state->task, &state->graph_idx, ip, (void *)state->sp);
+ if (is_kretprobe_trampoline(ip))
+ ip = kretprobe_find_ret_addr(state->task, (void *)state->sp, &state->kr_cur);
+ return ip;
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 28124d0fa1d5e..f8ebdd70dd317 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -199,7 +199,7 @@ static int copy_oldmem_user(void __user *dst, unsigned long src, size_t count)
+ } else {
+ len = count;
+ }
+- rc = copy_to_user_real(dst, src, count);
++ rc = copy_to_user_real(dst, src, len);
+ if (rc)
+ return rc;
+ }
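
The crash_dump.c hunk above fixes a classic chunked-copy slip: the loop computes a clamped per-iteration length (len) but then passed the full remaining count to the copy, overrunning the chunk. A runnable model of the corrected loop:

    #include <stdio.h>
    #include <string.h>

    /* Copy count bytes in pieces of at most chunk, always handing the
     * clamped len (not the remaining count) to the low-level copy. */
    static void copy_chunked(char *dst, const char *src,
                             size_t count, size_t chunk)
    {
        while (count) {
            size_t len = count < chunk ? count : chunk;

            memcpy(dst, src, len);  /* the bug passed count here */
            dst += len;
            src += len;
            count -= len;
        }
    }

    int main(void)
    {
        char src[10] = "123456789";
        char dst[10] = { 0 };

        copy_chunked(dst, src, sizeof(src), 4);
        printf("%s\n", dst);
        return 0;
    }
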
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index 8f43575a4dd32..fc6d5f58debeb 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -31,6 +31,7 @@ int s390_verify_sig(const char *kernel, unsigned long kernel_len)
+ const unsigned long marker_len = sizeof(MODULE_SIG_STRING) - 1;
+ struct module_signature *ms;
+ unsigned long sig_len;
++ int ret;
+
+ /* Skip signature verification when not secure IPLed. */
+ if (!ipl_secure_flag)
+@@ -65,11 +66,18 @@ int s390_verify_sig(const char *kernel, unsigned long kernel_len)
+ return -EBADMSG;
+ }
+
+- return verify_pkcs7_signature(kernel, kernel_len,
+- kernel + kernel_len, sig_len,
+- VERIFY_USE_PLATFORM_KEYRING,
+- VERIFYING_MODULE_SIGNATURE,
+- NULL, NULL);
++ ret = verify_pkcs7_signature(kernel, kernel_len,
++ kernel + kernel_len, sig_len,
++ VERIFY_USE_SECONDARY_KEYRING,
++ VERIFYING_MODULE_SIGNATURE,
++ NULL, NULL);
++ if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING))
++ ret = verify_pkcs7_signature(kernel, kernel_len,
++ kernel + kernel_len, sig_len,
++ VERIFY_USE_PLATFORM_KEYRING,
++ VERIFYING_MODULE_SIGNATURE,
++ NULL, NULL);
++ return ret;
+ }
+ #endif /* CONFIG_KEXEC_SIG */
+
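
The machine_kexec_file.c hunk switches s390_verify_sig() to try the secondary trusted keyring first and to retry against the platform keyring only when the first attempt fails with -ENOKEY (and only if that keyring is configured in). The fall-back-on-one-specific-error shape, as a sketch with an invented verifier:

    #include <errno.h>
    #include <stdio.h>

    enum keyring { SECONDARY, PLATFORM };

    /* Illustrative verifier: pretend only the platform keyring
     * holds the signing key. */
    static int verify_with(enum keyring kr)
    {
        return kr == PLATFORM ? 0 : -ENOKEY;
    }

    static int verify_sig(void)
    {
        int ret = verify_with(SECONDARY);

        /* Fall back only on "no matching key", not on real
         * verification failures such as -EBADMSG. */
        if (ret == -ENOKEY)
            ret = verify_with(PLATFORM);
        return ret;
    }

    int main(void)
    {
        printf("verify_sig() = %d\n", verify_sig());
        return 0;
    }
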
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index 8bd42a20d924e..88112065d9411 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -528,12 +528,27 @@ static int handle_pv_uvc(struct kvm_vcpu *vcpu)
+
+ static int handle_pv_notification(struct kvm_vcpu *vcpu)
+ {
++ int ret;
++
+ if (vcpu->arch.sie_block->ipa == 0xb210)
+ return handle_pv_spx(vcpu);
+ if (vcpu->arch.sie_block->ipa == 0xb220)
+ return handle_pv_sclp(vcpu);
+ if (vcpu->arch.sie_block->ipa == 0xb9a4)
+ return handle_pv_uvc(vcpu);
++ if (vcpu->arch.sie_block->ipa >> 8 == 0xae) {
++ /*
++ * Besides external call, other SIGP orders also cause a
++ * 108 (pv notify) intercept. In contrast to external call,
++ * these orders need to be emulated and hence the appropriate
++ * place to handle them is in handle_instruction().
++ * So first try kvm_s390_handle_sigp_pei() and if that isn't
++ * successful, go on with handle_instruction().
++ */
++ ret = kvm_s390_handle_sigp_pei(vcpu);
++ if (!ret)
++ return ret;
++ }
+
+ return handle_instruction(vcpu);
+ }
+diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
+index cc7c9599f43ee..8eee3fc414e5b 100644
+--- a/arch/s390/kvm/pv.c
++++ b/arch/s390/kvm/pv.c
+@@ -161,10 +161,13 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+ atomic_set(&kvm->mm->context.is_protected, 0);
+ KVM_UV_EVENT(kvm, 3, "PROTVIRT DESTROY VM: rc %x rrc %x", *rc, *rrc);
+ WARN_ONCE(cc, "protvirt destroy vm failed rc %x rrc %x", *rc, *rrc);
+- /* Inteded memory leak on "impossible" error */
+- if (!cc)
++ /* Intended memory leak on "impossible" error */
++ if (!cc) {
+ kvm_s390_pv_dealloc_vm(kvm);
+- return cc ? -EIO : 0;
++ return 0;
++ }
++ s390_replace_asce(kvm->arch.gmap);
++ return -EIO;
+ }
+
+ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
+index 8aaee2892ec35..cb747bf6c7982 100644
+--- a/arch/s390/kvm/sigp.c
++++ b/arch/s390/kvm/sigp.c
+@@ -480,9 +480,9 @@ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *dest_vcpu;
+ u8 order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);
+
+- trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);
+-
+ if (order_code == SIGP_EXTERNAL_CALL) {
++ trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);
++
+ dest_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
+ BUG_ON(dest_vcpu == NULL);
+
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index b8ae4a4aa2ba4..85cab61d87a96 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2735,3 +2735,89 @@ void s390_reset_acc(struct mm_struct *mm)
+ mmput(mm);
+ }
+ EXPORT_SYMBOL_GPL(s390_reset_acc);
++
++/**
++ * s390_unlist_old_asce - Remove the topmost level of page tables from the
++ * list of page tables of the gmap.
++ * @gmap: the gmap whose table is to be removed
++ *
++ * On s390x, KVM keeps a list of all pages containing the page tables of the
++ * gmap (the CRST list). This list is used at tear down time to free all
++ * pages that are now not needed anymore.
++ *
++ * This function removes the topmost page of the tree (the one pointed to by
++ * the ASCE) from the CRST list.
++ *
++ * This means that it will not be freed when the VM is torn down, and needs
++ * to be handled separately by the caller, unless a leak is actually
++ * intended. Notice that this function will only remove the page from the
++ * list, the page will still be used as a top level page table (and ASCE).
++ */
++void s390_unlist_old_asce(struct gmap *gmap)
++{
++ struct page *old;
++
++ old = virt_to_page(gmap->table);
++ spin_lock(&gmap->guest_table_lock);
++ list_del(&old->lru);
++ /*
++ * Sometimes the topmost page might need to be "removed" multiple
++ * times, for example if the VM is rebooted into secure mode several
++ * times concurrently, or if s390_replace_asce fails after calling
++ * s390_unlist_old_asce and is attempted again later. In that case
++ * the old asce has been removed from the list, and therefore it
++ * will not be freed when the VM terminates, but the ASCE is still
++ * in use and still pointed to.
++ * A subsequent call to replace_asce will follow the pointer and try
++ * to remove the same page from the list again.
++ * Therefore it's necessary that the page of the ASCE has valid
++ * pointers, so list_del can work (and do nothing) without
++ * dereferencing stale or invalid pointers.
++ */
++ INIT_LIST_HEAD(&old->lru);
++ spin_unlock(&gmap->guest_table_lock);
++}
++EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
++
++/**
++ * s390_replace_asce - Try to replace the current ASCE of a gmap with a copy
++ * @gmap: the gmap whose ASCE needs to be replaced
++ *
++ * If the allocation of the new top level page table fails, the ASCE is not
++ * replaced.
++ * In any case, the old ASCE is always removed from the gmap CRST list.
++ * Therefore the caller has to make sure to save a pointer to it
++ * beforehand, unless a leak is actually intended.
++ */
++int s390_replace_asce(struct gmap *gmap)
++{
++ unsigned long asce;
++ struct page *page;
++ void *table;
++
++ s390_unlist_old_asce(gmap);
++
++ page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
++ if (!page)
++ return -ENOMEM;
++ table = page_to_virt(page);
++ memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
++
++ /*
++ * The caller has to deal with the old ASCE, but here we make sure
++ * the new one is properly added to the CRST list, so that
++ * it will be freed when the VM is torn down.
++ */
++ spin_lock(&gmap->guest_table_lock);
++ list_add(&page->lru, &gmap->crst_list);
++ spin_unlock(&gmap->guest_table_lock);
++
++ /* Set new table origin while preserving existing ASCE control bits */
++ asce = (gmap->asce & ~_ASCE_ORIGIN) | __pa(table);
++ WRITE_ONCE(gmap->asce, asce);
++ WRITE_ONCE(gmap->mm->context.gmap_asce, asce);
++ WRITE_ONCE(gmap->table, table);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(s390_replace_asce);
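
The long comment in s390_unlist_old_asce() leans on a property of doubly linked lists: deleting the same node twice corrupts (or, with the kernel's pointer poisoning, faults on) its neighbours, but re-initializing the node after deletion, as INIT_LIST_HEAD() does above, turns a second deletion into a harmless self-unlink. A self-contained model without the kernel's list.h:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    static void init_node(struct list_head *h) { h->next = h->prev = h; }

    static void list_add(struct list_head *entry, struct list_head *head)
    {
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
    }

    /* Unlink; unlike the kernel's list_del() this does not poison the
     * pointers, but without re-init a repeat call would still corrupt
     * whatever the stale pointers refer to. */
    static void list_del(struct list_head *e)
    {
        e->next->prev = e->prev;
        e->prev->next = e->next;
    }

    int main(void)
    {
        struct list_head head, node;

        init_node(&head);
        list_add(&node, &head);

        list_del(&node);
        init_node(&node);   /* the trick used in the hunk above */
        list_del(&node);    /* now a no-op self-unlink */

        printf("list empty: %d\n", head.next == &head);
        return 0;
    }
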
+diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
+index 6a0ac00d5a42b..4a154a0849660 100644
+--- a/arch/s390/mm/init.c
++++ b/arch/s390/mm/init.c
+@@ -31,7 +31,6 @@
+ #include <linux/cma.h>
+ #include <linux/gfp.h>
+ #include <linux/dma-direct.h>
+-#include <linux/platform-feature.h>
+ #include <asm/processor.h>
+ #include <linux/uaccess.h>
+ #include <asm/pgalloc.h>
+@@ -48,6 +47,7 @@
+ #include <asm/kasan.h>
+ #include <asm/dma-mapping.h>
+ #include <asm/uv.h>
++#include <linux/virtio_anchor.h>
+ #include <linux/virtio_config.h>
+
+ pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(".bss..swapper_pg_dir");
+@@ -175,7 +175,7 @@ static void pv_init(void)
+ if (!is_prot_virt_guest())
+ return;
+
+- platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
++ virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
+
+ /* make sure bounce buffers are shared */
+ swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE);
+diff --git a/arch/um/drivers/random.c b/arch/um/drivers/random.c
+index 433a3f8f2ef3e..32b3341fe9707 100644
+--- a/arch/um/drivers/random.c
++++ b/arch/um/drivers/random.c
+@@ -28,7 +28,7 @@
+ * protects against a module being loaded twice at the same time.
+ */
+ static int random_fd = -1;
+-static struct hwrng hwrng = { 0, };
++static struct hwrng hwrng;
+ static DECLARE_COMPLETION(have_data);
+
+ static int rng_dev_read(struct hwrng *rng, void *buf, size_t max, bool block)
+diff --git a/arch/um/include/asm/archrandom.h b/arch/um/include/asm/archrandom.h
+new file mode 100644
+index 0000000000000..2f24cb96391d7
+--- /dev/null
++++ b/arch/um/include/asm/archrandom.h
+@@ -0,0 +1,30 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __ASM_UM_ARCHRANDOM_H__
++#define __ASM_UM_ARCHRANDOM_H__
++
++#include <linux/types.h>
++
++/* This is from <os.h>, but better not to #include that in a global header here. */
++ssize_t os_getrandom(void *buf, size_t len, unsigned int flags);
++
++static inline bool __must_check arch_get_random_long(unsigned long *v)
++{
++ return os_getrandom(v, sizeof(*v), 0) == sizeof(*v);
++}
++
++static inline bool __must_check arch_get_random_int(unsigned int *v)
++{
++ return os_getrandom(v, sizeof(*v), 0) == sizeof(*v);
++}
++
++static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
++{
++ return false;
++}
++
++static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
++{
++ return false;
++}
++
++#endif
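
The new UML archrandom.h above treats anything but a full-size read from the host as failure; os_getrandom() itself (added in os-Linux/util.c further down) is a one-line wrapper over the host's getrandom(2). The host side of that contract can be exercised directly on any glibc >= 2.25:

    #include <stdio.h>
    #include <sys/random.h>

    /* Mirrors the arch_get_random_long() wrapper: only a full-size
     * read counts as success. */
    static int get_random_long(unsigned long *v)
    {
        return getrandom(v, sizeof(*v), 0) == (ssize_t)sizeof(*v);
    }

    int main(void)
    {
        unsigned long v;

        if (get_random_long(&v))
            printf("random long: %#lx\n", v);
        else
            puts("getrandom failed");
        return 0;
    }
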
+diff --git a/arch/um/include/asm/xor.h b/arch/um/include/asm/xor.h
+index 22b39de73c246..647fae200c5d3 100644
+--- a/arch/um/include/asm/xor.h
++++ b/arch/um/include/asm/xor.h
+@@ -18,7 +18,7 @@
+ #undef XOR_SELECT_TEMPLATE
+ /* pick an arbitrary one - measuring isn't possible with inf-cpu */
+ #define XOR_SELECT_TEMPLATE(x) \
+- (time_travel_mode == TT_MODE_INFCPU ? TT_CPU_INF_XOR_DEFAULT : x))
++ (time_travel_mode == TT_MODE_INFCPU ? TT_CPU_INF_XOR_DEFAULT : x)
+ #endif
+
+ #endif
+diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
+index fafde1d5416ed..0df646c6651ea 100644
+--- a/arch/um/include/shared/os.h
++++ b/arch/um/include/shared/os.h
+@@ -11,6 +11,12 @@
+ #include <irq_user.h>
+ #include <longjmp.h>
+ #include <mm_id.h>
++/* This is to get size_t */
++#ifndef __UM_HOST__
++#include <linux/types.h>
++#else
++#include <sys/types.h>
++#endif
+
+ #define CATCH_EINTR(expr) while ((errno = 0, ((expr) < 0)) && (errno == EINTR))
+
+@@ -243,6 +249,7 @@ extern void stack_protections(unsigned long address);
+ extern int raw(int fd);
+ extern void setup_machinename(char *machine_out);
+ extern void setup_hostinfo(char *buf, int len);
++extern ssize_t os_getrandom(void *buf, size_t len, unsigned int flags);
+ extern void os_dump_core(void) __attribute__ ((noreturn));
+ extern void um_early_printk(const char *s, unsigned int n);
+ extern void os_fix_helper_signals(void);
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 9838967d0b2f1..e0de60e503b98 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -16,6 +16,7 @@
+ #include <linux/sched/task.h>
+ #include <linux/kmsg_dump.h>
+ #include <linux/suspend.h>
++#include <linux/random.h>
+
+ #include <asm/processor.h>
+ #include <asm/cpufeature.h>
+@@ -406,6 +407,8 @@ int __init __weak read_initrd(void)
+
+ void __init setup_arch(char **cmdline_p)
+ {
++ u8 rng_seed[32];
++
+ stack_protections((unsigned long) &init_thread_info);
+ setup_physmem(uml_physmem, uml_reserved, physmem_size, highmem);
+ mem_total_pages(physmem_size, iomem_size, highmem);
+@@ -416,6 +419,11 @@ void __init setup_arch(char **cmdline_p)
+ strlcpy(boot_command_line, command_line, COMMAND_LINE_SIZE);
+ *cmdline_p = command_line;
+ setup_hostinfo(host_info, sizeof host_info);
++
++ if (os_getrandom(rng_seed, sizeof(rng_seed), 0) == sizeof(rng_seed)) {
++ add_bootloader_randomness(rng_seed, sizeof(rng_seed));
++ memzero_explicit(rng_seed, sizeof(rng_seed));
++ }
+ }
+
+ void __init check_bugs(void)
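
Note the memzero_explicit() in setup_arch() above: a plain memset() of a buffer that is never read again may legally be elided by the compiler, leaving the seed bytes behind on the stack. Userspace has the same trap and an equivalent cure, explicit_bzero() (glibc >= 2.25):

    #include <stdio.h>
    #include <string.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char seed[32];

        if (getrandom(seed, sizeof(seed), 0) != (ssize_t)sizeof(seed))
            return 1;

        /* ... hand the seed to a PRNG here ... */
        printf("first seed byte: %#x\n", seed[0]);

        /* memset(seed, 0, sizeof(seed)) could be optimized away at
         * this point, since seed is dead; explicit_bzero() cannot. */
        explicit_bzero(seed, sizeof(seed));
        return 0;
    }
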
+diff --git a/arch/um/os-Linux/util.c b/arch/um/os-Linux/util.c
+index 41297ec404bf9..fc0f2a9dee5af 100644
+--- a/arch/um/os-Linux/util.c
++++ b/arch/um/os-Linux/util.c
+@@ -14,6 +14,7 @@
+ #include <sys/wait.h>
+ #include <sys/mman.h>
+ #include <sys/utsname.h>
++#include <sys/random.h>
+ #include <init.h>
+ #include <os.h>
+
+@@ -96,6 +97,11 @@ static inline void __attribute__ ((noreturn)) uml_abort(void)
+ exit(127);
+ }
+
++ssize_t os_getrandom(void *buf, size_t len, unsigned int flags)
++{
++ return getrandom(buf, len, flags);
++}
++
+ /*
+ * UML helper threads must not handle SIGWINCH/INT/TERM
+ */
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 52a7f91527fe0..25e2b8b75e40c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -278,6 +278,7 @@ config X86
+ select SYSCTL_EXCEPTION_TRACE
+ select THREAD_INFO_IN_TASK
+ select TRACE_IRQFLAGS_SUPPORT
++ select TRACE_IRQFLAGS_NMI_SUPPORT
+ select USER_STACKTRACE_SUPPORT
+ select VIRT_TO_BUS
+ select HAVE_ARCH_KCSAN if X86_64
+diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
+index 340399f699544..bdfe08f1a9304 100644
+--- a/arch/x86/Kconfig.debug
++++ b/arch/x86/Kconfig.debug
+@@ -1,8 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-config TRACE_IRQFLAGS_NMI_SUPPORT
+- def_bool y
+-
+ config EARLY_PRINTK_USB
+ bool
+
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index b5aecb524a8aa..ffec8bb01ba8c 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -103,7 +103,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
+ AFLAGS_header.o += -I$(objtree)/$(obj)
+ $(obj)/header.o: $(obj)/zoffset.h
+
+-LDFLAGS_setup.elf := -m elf_i386 -T
++LDFLAGS_setup.elf := -m elf_i386 -z noexecstack -T
+ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+ $(call if_changed,ld)
+
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 19e1905dcbf6f..35ce1a64068b7 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -69,6 +69,10 @@ LDFLAGS_vmlinux := -pie $(call ld-option, --no-dynamic-linker)
+ ifdef CONFIG_LD_ORPHAN_WARN
+ LDFLAGS_vmlinux += --orphan-handling=warn
+ endif
++LDFLAGS_vmlinux += -z noexecstack
++ifeq ($(CONFIG_LD_IS_BFD),y)
++LDFLAGS_vmlinux += $(call ld-option,--no-warn-rwx-segments)
++endif
+ LDFLAGS_vmlinux += -T
+
+ hostprogs := mkpiggy
+diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
+index 2831685adf6fb..8ed4597fdf6a0 100644
+--- a/arch/x86/crypto/Makefile
++++ b/arch/x86/crypto/Makefile
+@@ -61,9 +61,7 @@ sha256-ssse3-$(CONFIG_AS_SHA256_NI) += sha256_ni_asm.o
+ obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
+ sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o
+
+-obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o
+-blake2s-x86_64-y := blake2s-shash.o
+-obj-$(if $(CONFIG_CRYPTO_BLAKE2S_X86),y) += libblake2s-x86_64.o
++obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += libblake2s-x86_64.o
+ libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o
+
+ obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
+diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c
+index 69853c13e8fb0..aaba212305288 100644
+--- a/arch/x86/crypto/blake2s-glue.c
++++ b/arch/x86/crypto/blake2s-glue.c
+@@ -4,7 +4,6 @@
+ */
+
+ #include <crypto/internal/blake2s.h>
+-#include <crypto/internal/simd.h>
+
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
+@@ -33,7 +32,7 @@ void blake2s_compress(struct blake2s_state *state, const u8 *block,
+ /* SIMD disables preemption, so relax after processing each page. */
+ BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
+
+- if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
++ if (!static_branch_likely(&blake2s_use_ssse3) || !may_use_simd()) {
+ blake2s_compress_generic(state, block, nblocks, inc);
+ return;
+ }
+diff --git a/arch/x86/crypto/blake2s-shash.c b/arch/x86/crypto/blake2s-shash.c
+deleted file mode 100644
+index 59ae28abe35cc..0000000000000
+--- a/arch/x86/crypto/blake2s-shash.c
++++ /dev/null
+@@ -1,77 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0 OR MIT
+-/*
+- * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+- */
+-
+-#include <crypto/internal/blake2s.h>
+-#include <crypto/internal/simd.h>
+-#include <crypto/internal/hash.h>
+-
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/sizes.h>
+-
+-#include <asm/cpufeature.h>
+-#include <asm/processor.h>
+-
+-static int crypto_blake2s_update_x86(struct shash_desc *desc,
+- const u8 *in, unsigned int inlen)
+-{
+- return crypto_blake2s_update(desc, in, inlen, false);
+-}
+-
+-static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
+-{
+- return crypto_blake2s_final(desc, out, false);
+-}
+-
+-#define BLAKE2S_ALG(name, driver_name, digest_size) \
+- { \
+- .base.cra_name = name, \
+- .base.cra_driver_name = driver_name, \
+- .base.cra_priority = 200, \
+- .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
+- .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
+- .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
+- .base.cra_module = THIS_MODULE, \
+- .digestsize = digest_size, \
+- .setkey = crypto_blake2s_setkey, \
+- .init = crypto_blake2s_init, \
+- .update = crypto_blake2s_update_x86, \
+- .final = crypto_blake2s_final_x86, \
+- .descsize = sizeof(struct blake2s_state), \
+- }
+-
+-static struct shash_alg blake2s_algs[] = {
+- BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE),
+-};
+-
+-static int __init blake2s_mod_init(void)
+-{
+- if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
+- return crypto_register_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
+- return 0;
+-}
+-
+-static void __exit blake2s_mod_exit(void)
+-{
+- if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
+- crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
+-}
+-
+-module_init(blake2s_mod_init);
+-module_exit(blake2s_mod_exit);
+-
+-MODULE_ALIAS_CRYPTO("blake2s-128");
+-MODULE_ALIAS_CRYPTO("blake2s-128-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-160");
+-MODULE_ALIAS_CRYPTO("blake2s-160-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-224");
+-MODULE_ALIAS_CRYPTO("blake2s-224-x86");
+-MODULE_ALIAS_CRYPTO("blake2s-256");
+-MODULE_ALIAS_CRYPTO("blake2s-256-x86");
+-MODULE_LICENSE("GPL v2");
+diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
+index eeadbd7d92cc5..ca2fe186994b0 100644
+--- a/arch/x86/entry/Makefile
++++ b/arch/x86/entry/Makefile
+@@ -11,12 +11,13 @@ CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE)
+
+ CFLAGS_common.o += -fno-stack-protector
+
+-obj-y := entry.o entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
++obj-y := entry.o entry_$(BITS).o syscall_$(BITS).o
+ obj-y += common.o
+
+ obj-y += vdso/
+ obj-y += vsyscall/
+
++obj-$(CONFIG_PREEMPTION) += thunk_$(BITS).o
+ obj-$(CONFIG_IA32_EMULATION) += entry_64_compat.o syscall_32.o
+ obj-$(CONFIG_X86_X32_ABI) += syscall_x32.o
+
+diff --git a/arch/x86/entry/thunk_32.S b/arch/x86/entry/thunk_32.S
+index 7591bab060f70..ff6e7003da974 100644
+--- a/arch/x86/entry/thunk_32.S
++++ b/arch/x86/entry/thunk_32.S
+@@ -29,10 +29,8 @@ SYM_CODE_START_NOALIGN(\name)
+ SYM_CODE_END(\name)
+ .endm
+
+-#ifdef CONFIG_PREEMPTION
+ THUNK preempt_schedule_thunk, preempt_schedule
+ THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
+ EXPORT_SYMBOL(preempt_schedule_thunk)
+ EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
+-#endif
+
+diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
+index 505b488fcc655..f38b07d2768bb 100644
+--- a/arch/x86/entry/thunk_64.S
++++ b/arch/x86/entry/thunk_64.S
+@@ -31,14 +31,11 @@ SYM_FUNC_END(\name)
+ _ASM_NOKPROBE(\name)
+ .endm
+
+-#ifdef CONFIG_PREEMPTION
+ THUNK preempt_schedule_thunk, preempt_schedule
+ THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
+ EXPORT_SYMBOL(preempt_schedule_thunk)
+ EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
+-#endif
+
+-#ifdef CONFIG_PREEMPTION
+ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ popq %r11
+ popq %r10
+@@ -53,4 +50,3 @@ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore)
+ RET
+ _ASM_NOKPROBE(__thunk_restore)
+ SYM_CODE_END(__thunk_restore)
+-#endif
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 76cd790ed0bd5..12f6c4d714cd6 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -180,7 +180,7 @@ quiet_cmd_vdso = VDSO $@
+ sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
+
+ VDSO_LDFLAGS = -shared --hash-style=both --build-id=sha1 \
+- $(call ld-option, --eh-frame-hdr) -Bsymbolic
++ $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
+ GCOV_PROFILE := n
+
+ quiet_cmd_vdso_and_check = VDSO $@
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 45024abd929f0..bd8b988576097 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4141,6 +4141,8 @@ tnt_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ {
+ struct event_constraint *c;
+
++ c = intel_get_event_constraints(cpuc, idx, event);
++
+ /*
+ * :ppp means to do reduced skid PEBS,
+ * which is available on PMC0 and fixed counter 0.
+@@ -4153,8 +4155,6 @@ tnt_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ return &counter0_constraint;
+ }
+
+- c = intel_get_event_constraints(cpuc, idx, event);
+-
+ return c;
+ }
+
+@@ -6241,7 +6241,8 @@ __init int intel_pmu_init(void)
+ x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
+ x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
+ x86_pmu.lbr_pt_coexist = true;
+- intel_pmu_pebs_data_source_skl(false);
++ intel_pmu_pebs_data_source_adl();
++ x86_pmu.pebs_latency_data = adl_latency_data_small;
+ x86_pmu.num_topdown_events = 8;
+ x86_pmu.update_topdown_event = adl_update_topdown_event;
+ x86_pmu.set_topdown_event_period = adl_set_topdown_event_period;
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 376cc3d66094c..ba60427caa6d3 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -94,15 +94,40 @@ void __init intel_pmu_pebs_data_source_nhm(void)
+ pebs_data_source[0x07] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOP, HITM);
+ }
+
+-void __init intel_pmu_pebs_data_source_skl(bool pmem)
++static void __init __intel_pmu_pebs_data_source_skl(bool pmem, u64 *data_source)
+ {
+ u64 pmem_or_l4 = pmem ? LEVEL(PMEM) : LEVEL(L4);
+
+- pebs_data_source[0x08] = OP_LH | pmem_or_l4 | P(SNOOP, HIT);
+- pebs_data_source[0x09] = OP_LH | pmem_or_l4 | REM | P(SNOOP, HIT);
+- pebs_data_source[0x0b] = OP_LH | LEVEL(RAM) | REM | P(SNOOP, NONE);
+- pebs_data_source[0x0c] = OP_LH | LEVEL(ANY_CACHE) | REM | P(SNOOPX, FWD);
+- pebs_data_source[0x0d] = OP_LH | LEVEL(ANY_CACHE) | REM | P(SNOOP, HITM);
++ data_source[0x08] = OP_LH | pmem_or_l4 | P(SNOOP, HIT);
++ data_source[0x09] = OP_LH | pmem_or_l4 | REM | P(SNOOP, HIT);
++ data_source[0x0b] = OP_LH | LEVEL(RAM) | REM | P(SNOOP, NONE);
++ data_source[0x0c] = OP_LH | LEVEL(ANY_CACHE) | REM | P(SNOOPX, FWD);
++ data_source[0x0d] = OP_LH | LEVEL(ANY_CACHE) | REM | P(SNOOP, HITM);
++}
++
++void __init intel_pmu_pebs_data_source_skl(bool pmem)
++{
++ __intel_pmu_pebs_data_source_skl(pmem, pebs_data_source);
++}
++
++static void __init intel_pmu_pebs_data_source_grt(u64 *data_source)
++{
++ data_source[0x05] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOP, HIT);
++ data_source[0x06] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOP, HITM);
++ data_source[0x08] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOPX, FWD);
++}
++
++void __init intel_pmu_pebs_data_source_adl(void)
++{
++ u64 *data_source;
++
++ data_source = x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX].pebs_data_source;
++ memcpy(data_source, pebs_data_source, sizeof(pebs_data_source));
++ __intel_pmu_pebs_data_source_skl(false, data_source);
++
++ data_source = x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX].pebs_data_source;
++ memcpy(data_source, pebs_data_source, sizeof(pebs_data_source));
++ intel_pmu_pebs_data_source_grt(data_source);
+ }
+
+ static u64 precise_store_data(u64 status)
+@@ -171,7 +196,50 @@ static u64 precise_datala_hsw(struct perf_event *event, u64 status)
+ return dse.val;
+ }
+
+-static u64 load_latency_data(u64 status)
++static inline void pebs_set_tlb_lock(u64 *val, bool tlb, bool lock)
++{
++ /*
++ * TLB access
++ * 0 = did not miss 2nd level TLB
++ * 1 = missed 2nd level TLB
++ */
++ if (tlb)
++ *val |= P(TLB, MISS) | P(TLB, L2);
++ else
++ *val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
++
++ /* locked prefix */
++ if (lock)
++ *val |= P(LOCK, LOCKED);
++}
++
++/* Retrieve the latency data for e-core of ADL */
++u64 adl_latency_data_small(struct perf_event *event, u64 status)
++{
++ union intel_x86_pebs_dse dse;
++ u64 val;
++
++ WARN_ON_ONCE(hybrid_pmu(event->pmu)->cpu_type == hybrid_big);
++
++ dse.val = status;
++
++ val = hybrid_var(event->pmu, pebs_data_source)[dse.ld_dse];
++
++ /*
++ * For the atom core on ADL,
++ * bit 4: lock, bit 5: TLB access.
++ */
++ pebs_set_tlb_lock(&val, dse.ld_locked, dse.ld_stlb_miss);
++
++ if (dse.ld_data_blk)
++ val |= P(BLK, DATA);
++ else
++ val |= P(BLK, NA);
++
++ return val;
++}
++
++static u64 load_latency_data(struct perf_event *event, u64 status)
+ {
+ union intel_x86_pebs_dse dse;
+ u64 val;
+@@ -181,7 +249,7 @@ static u64 load_latency_data(u64 status)
+ /*
+ * use the mapping table for bit 0-3
+ */
+- val = pebs_data_source[dse.ld_dse];
++ val = hybrid_var(event->pmu, pebs_data_source)[dse.ld_dse];
+
+ /*
+ * Nehalem models do not support TLB, Lock infos
+@@ -190,21 +258,8 @@ static u64 load_latency_data(u64 status)
+ val |= P(TLB, NA) | P(LOCK, NA);
+ return val;
+ }
+- /*
+- * bit 4: TLB access
+- * 0 = did not miss 2nd level TLB
+- * 1 = missed 2nd level TLB
+- */
+- if (dse.ld_stlb_miss)
+- val |= P(TLB, MISS) | P(TLB, L2);
+- else
+- val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
+
+- /*
+- * bit 5: locked prefix
+- */
+- if (dse.ld_locked)
+- val |= P(LOCK, LOCKED);
++ pebs_set_tlb_lock(&val, dse.ld_stlb_miss, dse.ld_locked);
+
+ /*
+ * Ice Lake and earlier models do not support block infos.
+@@ -233,7 +288,7 @@ static u64 load_latency_data(u64 status)
+ return val;
+ }
+
+-static u64 store_latency_data(u64 status)
++static u64 store_latency_data(struct perf_event *event, u64 status)
+ {
+ union intel_x86_pebs_dse dse;
+ u64 val;
+@@ -243,23 +298,9 @@ static u64 store_latency_data(u64 status)
+ /*
+ * use the mapping table for bit 0-3
+ */
+- val = pebs_data_source[dse.st_lat_dse];
++ val = hybrid_var(event->pmu, pebs_data_source)[dse.st_lat_dse];
+
+- /*
+- * bit 4: TLB access
+- * 0 = did not miss 2nd level TLB
+- * 1 = missed 2nd level TLB
+- */
+- if (dse.st_lat_stlb_miss)
+- val |= P(TLB, MISS) | P(TLB, L2);
+- else
+- val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
+-
+- /*
+- * bit 5: locked prefix
+- */
+- if (dse.st_lat_locked)
+- val |= P(LOCK, LOCKED);
++ pebs_set_tlb_lock(&val, dse.st_lat_stlb_miss, dse.st_lat_locked);
+
+ val |= P(BLK, NA);
+
+@@ -781,8 +822,8 @@ struct event_constraint intel_glm_pebs_event_constraints[] = {
+
+ struct event_constraint intel_grt_pebs_event_constraints[] = {
+ /* Allow all events as PEBS with no flags */
+- INTEL_PLD_CONSTRAINT(0x5d0, 0xf),
+- INTEL_PSD_CONSTRAINT(0x6d0, 0xf),
++ INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xf),
++ INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xf),
+ EVENT_CONSTRAINT_END
+ };
+
+@@ -1443,9 +1484,11 @@ static u64 get_data_src(struct perf_event *event, u64 aux)
+ bool fst = fl & (PERF_X86_EVENT_PEBS_ST | PERF_X86_EVENT_PEBS_HSW_PREC);
+
+ if (fl & PERF_X86_EVENT_PEBS_LDLAT)
+- val = load_latency_data(aux);
++ val = load_latency_data(event, aux);
+ else if (fl & PERF_X86_EVENT_PEBS_STLAT)
+- val = store_latency_data(aux);
++ val = store_latency_data(event, aux);
++ else if (fl & PERF_X86_EVENT_PEBS_LAT_HYBRID)
++ val = x86_pmu.pebs_latency_data(event, aux);
+ else if (fst && (fl & PERF_X86_EVENT_PEBS_HSW_PREC))
+ val = precise_datala_hsw(event, aux);
+ else if (fst)
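
The ds.c rework above replaces the single global pebs_data_source[] with one table per hybrid PMU: the big-core and atom tables are populated differently, and the decoders index the table belonging to the event's PMU via hybrid_var(event->pmu, pebs_data_source). Stripped of the perf machinery, the per-instance lookup looks roughly like this (the table values are made up):

    #include <stdint.h>
    #include <stdio.h>

    #define NSRC 16

    struct pmu {
        const char *name;
        uint64_t data_source[NSRC];   /* per-PMU decode table */
    };

    /* Decode a raw source field against the table of the PMU that
     * produced the event, instead of one global table. */
    static uint64_t decode(const struct pmu *pmu, unsigned int dse)
    {
        return pmu->data_source[dse & (NSRC - 1)];
    }

    int main(void)
    {
        struct pmu big  = { .name = "core" };
        struct pmu atom = { .name = "atom" };

        /* The same encoding means different things per PMU. */
        big.data_source[0x08]  = 0x1000;
        atom.data_source[0x08] = 0x2000;

        printf("%s decodes 0x08 -> %#llx\n", big.name,
               (unsigned long long)decode(&big, 0x08));
        printf("%s decodes 0x08 -> %#llx\n", atom.name,
               (unsigned long long)decode(&atom, 0x08));
        return 0;
    }
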
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 21a5482bcf845..821098aebf78c 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -84,6 +84,7 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
+ #define PERF_X86_EVENT_TOPDOWN 0x04000 /* Count Topdown slots/metrics events */
+ #define PERF_X86_EVENT_PEBS_STLAT 0x08000 /* st+stlat data address sampling */
+ #define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
++#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
+
+ static inline bool is_topdown_count(struct perf_event *event)
+ {
+@@ -460,6 +461,10 @@ struct cpu_hw_events {
+ __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+ HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST)
+
++#define INTEL_HYBRID_LAT_CONSTRAINT(c, n) \
++ __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
++ HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LAT_HYBRID)
++
+ /* Event constraint, but match on all event flags too. */
+ #define INTEL_FLAGS_EVENT_CONSTRAINT(c, n) \
+ EVENT_CONSTRAINT(c, n, ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS)
+@@ -638,6 +643,8 @@ enum {
+ x86_lbr_exclusive_max,
+ };
+
++#define PERF_PEBS_DATA_SOURCE_MAX 0x10
++
+ struct x86_hybrid_pmu {
+ struct pmu pmu;
+ const char *name;
+@@ -665,6 +672,8 @@ struct x86_hybrid_pmu {
+ unsigned int late_ack :1,
+ mid_ack :1,
+ enabled_ack :1;
++
++ u64 pebs_data_source[PERF_PEBS_DATA_SOURCE_MAX];
+ };
+
+ static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
+@@ -825,6 +834,7 @@ struct x86_pmu {
+ void (*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
+ struct event_constraint *pebs_constraints;
+ void (*pebs_aliases)(struct perf_event *event);
++ u64 (*pebs_latency_data)(struct perf_event *event, u64 status);
+ unsigned long large_pebs_flags;
+ u64 rtm_abort_event;
+
+@@ -1392,6 +1402,8 @@ void intel_pmu_disable_bts(void);
+
+ int intel_pmu_drain_bts_buffer(void);
+
++u64 adl_latency_data_small(struct perf_event *event, u64 status);
++
+ extern struct event_constraint intel_core2_pebs_event_constraints[];
+
+ extern struct event_constraint intel_atom_pebs_event_constraints[];
+@@ -1499,6 +1511,8 @@ void intel_pmu_pebs_data_source_nhm(void);
+
+ void intel_pmu_pebs_data_source_skl(bool pmem);
+
++void intel_pmu_pebs_data_source_adl(void);
++
+ int intel_pmu_setup_lbr_filter(struct perf_event *event);
+
+ void intel_pt_interrupt(void);
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 6ad8d946cd3eb..5ec359c1b50cb 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -193,6 +193,12 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ const Elf_Shdr *relsec,
+ const Elf_Shdr *symtab);
+ #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++
++void *arch_kexec_kernel_image_load(struct kimage *image);
++#define arch_kexec_kernel_image_load arch_kexec_kernel_image_load
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image);
++#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
+ #endif
+ #endif
+
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 9217bd6cf0d14..4c0e812f2f044 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -505,6 +505,7 @@ struct kvm_pmu {
+ unsigned nr_arch_fixed_counters;
+ unsigned available_event_types;
+ u64 fixed_ctr_ctrl;
++ u64 fixed_ctr_ctrl_mask;
+ u64 global_ctrl;
+ u64 global_status;
+ u64 counter_bitmask[2];
+@@ -1654,7 +1655,7 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+ #define kvm_arch_pmi_in_guest(vcpu) \
+ ((vcpu) && (vcpu)->arch.handling_intr_from_guest)
+
+-void kvm_mmu_x86_module_init(void);
++void __init kvm_mmu_x86_module_init(void);
+ int kvm_mmu_vendor_module_init(void);
+ void kvm_mmu_vendor_module_exit(void);
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 9f7e751b91df9..510d85261132b 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -152,7 +152,7 @@ void __init check_bugs(void)
+ /*
+ * spectre_v2_user_select_mitigation() relies on the state set by
+ * retbleed_select_mitigation(); specifically the STIBP selection is
+- * forced for UNRET.
++ * forced for UNRET or IBPB.
+ */
+ spectre_v2_user_select_mitigation();
+ ssb_select_mitigation();
+@@ -1179,7 +1179,8 @@ spectre_v2_user_select_mitigation(void)
+ boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+
+- if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++ if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
++ retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+ if (mode != SPECTRE_V2_USER_STRICT &&
+ mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+ pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
+@@ -2360,10 +2361,11 @@ static ssize_t srbds_show_state(char *buf)
+
+ static ssize_t retbleed_show_state(char *buf)
+ {
+- if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
++ if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
++ retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+- return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n");
++ return sprintf(buf, "Vulnerable: untrained return thunk / IBPB on non-AMD based uarch\n");
+
+ return sprintf(buf, "%s; SMT %s\n",
+ retbleed_strings[retbleed_mitigation],
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index fd5dead8371cc..cb796ca6eff58 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -1216,22 +1216,23 @@ static void bus_lock_init(void)
+ {
+ u64 val;
+
+- /*
+- * Warn and fatal are handled by #AC for split lock if #AC for
+- * split lock is supported.
+- */
+- if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) ||
+- (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
+- (sld_state == sld_warn || sld_state == sld_fatal)) ||
+- sld_state == sld_off)
++ if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
+ return;
+
+- /*
+- * Enable #DB for bus lock. All bus locks are handled in #DB except
+- * split locks are handled in #AC in the fatal case.
+- */
+ rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
+- val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
++
++ if ((boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
++ (sld_state == sld_warn || sld_state == sld_fatal)) ||
++ sld_state == sld_off) {
++ /*
++ * Warn and fatal are handled by #AC for split lock if #AC for
++ * split lock is supported.
++ */
++ val &= ~DEBUGCTLMSR_BUS_LOCK_DETECT;
++ } else {
++ val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
++ }
++
+ wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
+ }
+
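
bus_lock_init() used to return early whenever #AC already covered split lock, which left DEBUGCTLMSR_BUS_LOCK_DETECT in whatever state it happened to be; the rework always performs the read-modify-write and clears the bit explicitly in that case. The set-or-clear-explicitly shape, with a plain variable standing in for the MSR and an illustrative bit position:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BUS_LOCK_DETECT (1u << 2)    /* illustrative bit */

    static uint32_t fake_msr = BUS_LOCK_DETECT;  /* stale state */

    /* Always read-modify-write: clear the bit when #DB should stay
     * off, instead of returning early and leaving a stale value. */
    static void bus_lock_init(bool want_db)
    {
        uint32_t val = fake_msr;             /* rdmsrl */

        if (want_db)
            val |= BUS_LOCK_DETECT;
        else
            val &= ~BUS_LOCK_DETECT;
        fake_msr = val;                      /* wrmsrl */
    }

    int main(void)
    {
        bus_lock_init(false);
        printf("bit after disable: %d\n", !!(fake_msr & BUS_LOCK_DETECT));
        bus_lock_init(true);
        printf("bit after enable:  %d\n", !!(fake_msr & BUS_LOCK_DETECT));
        return 0;
    }
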
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 24b9fa89aa276..bd165004776d9 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -91,6 +91,7 @@ static int ftrace_verify_code(unsigned long ip, const char *old_code)
+
+ /* Make sure it is what we expect it to be */
+ if (memcmp(cur_code, old_code, MCOUNT_INSN_SIZE) != 0) {
++ ftrace_expected = old_code;
+ WARN_ON(1);
+ return -EINVAL;
+ }
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 7c4ab8870da44..74167dc5f55ec 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -814,16 +814,20 @@ set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
+ static void kprobe_post_process(struct kprobe *cur, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb)
+ {
+- if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
+- kcb->kprobe_status = KPROBE_HIT_SSDONE;
+- cur->post_handler(cur, regs, 0);
+- }
+-
+ /* Restore back the original saved kprobes variables and continue. */
+- if (kcb->kprobe_status == KPROBE_REENTER)
++ if (kcb->kprobe_status == KPROBE_REENTER) {
++ /* This will restore both kcb and current_kprobe */
+ restore_previous_kprobe(kcb);
+- else
++ } else {
++ /*
++ * Always update the kcb status because
++ * reset_current_kprobe() doesn't update kcb.
++ */
++ kcb->kprobe_status = KPROBE_HIT_SSDONE;
++ if (cur->post_handler)
++ cur->post_handler(cur, regs, 0);
+ reset_current_kprobe();
++ }
+ }
+ NOKPROBE_SYMBOL(kprobe_post_process);
+
+diff --git a/arch/x86/kernel/pmem.c b/arch/x86/kernel/pmem.c
+index 6b07faaa15798..23154d24b1173 100644
+--- a/arch/x86/kernel/pmem.c
++++ b/arch/x86/kernel/pmem.c
+@@ -27,6 +27,11 @@ static __init int register_e820_pmem(void)
+ * simply here to trigger the module to load on demand.
+ */
+ pdev = platform_device_alloc("e820_pmem", -1);
+- return platform_device_add(pdev);
++
++ rc = platform_device_add(pdev);
++ if (rc)
++ platform_device_put(pdev);
++
++ return rc;
+ }
+ device_initcall(register_e820_pmem);
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index d456ce21c2552..9346c95e88794 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -821,6 +821,10 @@ static void amd_e400_idle(void)
+ */
+ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
+ {
++ /* User has disallowed the use of MWAIT. Fallback to HALT */
++ if (boot_option_idle_override == IDLE_NOMWAIT)
++ return 0;
++
+ if (c->x86_vendor != X86_VENDOR_INTEL)
+ return 0;
+
+@@ -932,9 +936,8 @@ static int __init idle_setup(char *str)
+ } else if (!strcmp(str, "nomwait")) {
+ /*
+ * If the boot option of "idle=nomwait" is added,
+- * it means that mwait will be disabled for CPU C2/C3
+- * states. In such case it won't touch the variable
+- * of boot_option_idle_override.
++ * it means that mwait will be disabled for CPU C1/C2/C3
++ * states.
+ */
+ boot_option_idle_override = IDLE_NOMWAIT;
+ } else
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index f8382abe22ff8..aa907cec09187 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1687,16 +1687,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ case VCPU_SREG_TR:
+ if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
+ goto exception;
+- if (!seg_desc.p) {
+- err_vec = NP_VECTOR;
+- goto exception;
+- }
+- old_desc = seg_desc;
+- seg_desc.type |= 2; /* busy */
+- ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+- sizeof(seg_desc), &ctxt->exception);
+- if (ret != X86EMUL_CONTINUE)
+- return ret;
+ break;
+ case VCPU_SREG_LDTR:
+ if (seg_desc.s || seg_desc.type != 2)
+@@ -1734,8 +1724,17 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+ if (emul_is_noncanonical_address(get_desc_base(&seg_desc) |
+- ((u64)base3 << 32), ctxt))
+- return emulate_gp(ctxt, 0);
++ ((u64)base3 << 32), ctxt))
++ return emulate_gp(ctxt, err_code);
++ }
++
++ if (seg == VCPU_SREG_TR) {
++ old_desc = seg_desc;
++ seg_desc.type |= 2; /* busy */
++ ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
++ sizeof(seg_desc), &ctxt->exception);
++ if (ret != X86EMUL_CONTINUE)
++ return ret;
+ }
+ load:
+ ctxt->ops->set_segment(ctxt, selector, &seg_desc, base3, seg);
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index f8192864b496f..24d1fb29ea2e4 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -11,6 +11,8 @@
+ #define PT32_PT_BITS 10
+ #define PT32_ENT_PER_PAGE (1 << PT32_PT_BITS)
+
++extern bool __read_mostly enable_mmio_caching;
++
+ #define PT_WRITABLE_SHIFT 1
+ #define PT_USER_SHIFT 2
+
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 17252f39bd7c2..356226c7ebbdc 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4567,7 +4567,7 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
+
+ if (boot_cpu_is_amd())
+ __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
+- context->root_role.level, false,
++ context->root_role.level, true,
+ boot_cpu_has(X86_FEATURE_GBPAGES),
+ false, true);
+ else
+@@ -6274,11 +6274,15 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ /*
+ * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
+ * its default value of -1 is technically undefined behavior for a boolean.
++ * Forward the module init call to SPTE code so that it too can handle module
++ * params that need to be resolved/snapshot.
+ */
+-void kvm_mmu_x86_module_init(void)
++void __init kvm_mmu_x86_module_init(void)
+ {
+ if (nx_huge_pages == -1)
+ __set_nx_huge_pages(get_nx_auto_mode());
++
++ kvm_mmu_spte_module_init();
+ }
+
+ /*
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index db80f7ccaa4e3..1576e65b3b1f0 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -1053,7 +1053,14 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+ if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
+ continue;
+
+- if (gfn != sp->gfns[i]) {
++ /*
++ * Drop the SPTE if the new protections would result in a RWX=0
++ * SPTE or if the gfn is changing. The RWX=0 case only affects
++ * EPT with execute-only support, i.e. EPT without an effective
++ * "present" bit, as all other paging modes will create a
++ * read-only SPTE if pte_access is zero.
++ */
++ if ((!pte_access && !shadow_present_mask) || gfn != sp->gfns[i]) {
+ drop_spte(vcpu->kvm, &sp->spt[i]);
+ flush = true;
+ continue;
+diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
+index b5960bbde7f74..186fa97d43756 100644
+--- a/arch/x86/kvm/mmu/spte.c
++++ b/arch/x86/kvm/mmu/spte.c
+@@ -20,7 +20,9 @@
+ #include <asm/vmx.h>
+
+ bool __read_mostly enable_mmio_caching = true;
++static bool __ro_after_init allow_mmio_caching;
+ module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
++EXPORT_SYMBOL_GPL(enable_mmio_caching);
+
+ u64 __read_mostly shadow_host_writable_mask;
+ u64 __read_mostly shadow_mmu_writable_mask;
+@@ -42,6 +44,18 @@ u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
+
+ u8 __read_mostly shadow_phys_bits;
+
++void __init kvm_mmu_spte_module_init(void)
++{
++ /*
++ * Snapshot userspace's desire to allow MMIO caching. Whether or not
++ * KVM can actually enable MMIO caching depends on vendor-specific
++ * hardware capabilities and other module params that can't be resolved
++ * until the vendor module is loaded, i.e. enable_mmio_caching can and
++ * will change when the vendor module is (re)loaded.
++ */
++ allow_mmio_caching = enable_mmio_caching;
++}
++
+ static u64 generation_mmio_spte_mask(u64 gen)
+ {
+ u64 mask;
+@@ -129,6 +143,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ u64 spte = SPTE_MMU_PRESENT_MASK;
+ bool wrprot = false;
+
++ WARN_ON_ONCE(!pte_access && !shadow_present_mask);
++
+ if (sp->role.ad_disabled)
+ spte |= SPTE_TDP_AD_DISABLED_MASK;
+ else if (kvm_mmu_page_ad_need_write_protect(sp))
+@@ -337,6 +353,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
+ BUG_ON((u64)(unsigned)access_mask != access_mask);
+ WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
+
++ /*
++ * Reset to the original module param value to honor userspace's desire
++ * to (dis)allow MMIO caching. Update the param itself so that
++ * userspace can see whether or not KVM is actually using MMIO caching.
++ */
++ enable_mmio_caching = allow_mmio_caching;
+ if (!enable_mmio_caching)
+ mmio_value = 0;
+
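
The allow_mmio_caching snapshot above is a reusable idiom for module parameters that the kernel may later override: capture the user's request once at init, let runtime probing rewrite the live variable so sysfs reports what is actually in effect, and restore from the snapshot whenever a consumer re-initializes. A generic sketch with hypothetical names:

#include <linux/init.h>
#include <linux/module.h>

static bool enable_feature = true;              /* visible via sysfs (0444) */
module_param(enable_feature, bool, 0444);

static bool __ro_after_init allow_feature;      /* user's original request */

static int __init demo_init(void)
{
        /* Snapshot before any hardware probing can clear enable_feature. */
        allow_feature = enable_feature;
        return 0;
}

static void demo_reconfigure(bool hw_supported)
{
        /* Start from the user's request, then apply hardware constraints;
         * enable_feature ends up reflecting the effective state. */
        enable_feature = allow_feature && hw_supported;
}
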
+diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
+index 0127bb6e3c7de..f80dbb628df57 100644
+--- a/arch/x86/kvm/mmu/spte.h
++++ b/arch/x86/kvm/mmu/spte.h
+@@ -5,8 +5,6 @@
+
+ #include "mmu_internal.h"
+
+-extern bool __read_mostly enable_mmio_caching;
+-
+ /*
+ * A MMU present SPTE is backed by actual memory and may or may not be present
+ * in hardware. E.g. MMIO SPTEs are not considered present. Use bit 11, as it
+@@ -446,6 +444,7 @@ static inline u64 restore_acc_track_spte(u64 spte)
+
+ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn);
+
++void __init kvm_mmu_spte_module_init(void);
+ void kvm_mmu_reset_all_pte_masks(void);
+
+ #endif
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index ba7cd26f438fc..1773080976caf 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -320,7 +320,8 @@ static bool __nested_vmcb_check_save(struct kvm_vcpu *vcpu,
+ return false;
+ }
+
+- if (CC(!kvm_is_valid_cr4(vcpu, save->cr4)))
++ /* Note, SVM doesn't have any additional restrictions on CR4. */
++ if (CC(!__kvm_is_valid_cr4(vcpu, save->cr4)))
+ return false;
+
+ if (CC(!kvm_valid_efer(vcpu, save->efer)))
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 0c240ed04f96c..eb7a088a80a43 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -22,6 +22,7 @@
+ #include <asm/trapnr.h>
+ #include <asm/fpu/xcr.h>
+
++#include "mmu.h"
+ #include "x86.h"
+ #include "svm.h"
+ #include "svm_ops.h"
+@@ -2221,6 +2222,15 @@ void __init sev_hardware_setup(void)
+ if (!sev_es_enabled)
+ goto out;
+
++ /*
++ * SEV-ES requires MMIO caching as KVM doesn't have access to the guest
++ * instruction stream, i.e. can't emulate in response to a #NPF and
++ * instead relies on #NPF(RSVD) being reflected into the guest as #VC
++ * (the guest can then do a #VMGEXIT to request MMIO emulation).
++ */
++ if (!enable_mmio_caching)
++ goto out;
++
+ /* Does the CPU support SEV-ES? */
+ if (!boot_cpu_has(X86_FEATURE_SEV_ES))
+ goto out;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 44bbf25dfeb95..92b30b4937fcf 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -392,6 +392,10 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
+ */
+ (void)svm_skip_emulated_instruction(vcpu);
+ rip = kvm_rip_read(vcpu);
++
++ if (boot_cpu_has(X86_FEATURE_NRIPS))
++ svm->vmcb->control.next_rip = rip;
++
+ svm->int3_rip = rip + svm->vmcb->save.cs.base;
+ svm->int3_injected = rip - old_rip;
+ }
+@@ -3385,8 +3389,6 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+- BUG_ON(!(gif_set(svm)));
+-
+ trace_kvm_inj_virq(vcpu->arch.interrupt.nr);
+ ++vcpu->stat.irq_injections;
+
+@@ -3701,6 +3703,18 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
+ vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
+ type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
+
++ /*
++ * If NextRIP isn't enabled, KVM must manually advance RIP prior to
++ * injecting the soft exception/interrupt. That advancement needs to
++ * be unwound if vectoring didn't complete. Note, the new event may
++ * not be the injected event, e.g. if KVM injected an INTn, the INTn
++	 * hit a #NP in the guest, and the #NP encountered a #PF, the #PF will
++ * be the reported vectored event, but RIP still needs to be unwound.
++ */
++ if (int3_injected && type == SVM_EXITINTINFO_TYPE_EXEPT &&
++ kvm_is_linear_rip(vcpu, svm->int3_rip))
++ kvm_rip_write(vcpu, kvm_rip_read(vcpu) - int3_injected);
++
+ switch (type) {
+ case SVM_EXITINTINFO_TYPE_NMI:
+ vcpu->arch.nmi_injected = true;
+@@ -3714,16 +3728,11 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
+
+ /*
+ * In case of software exceptions, do not reinject the vector,
+- * but re-execute the instruction instead. Rewind RIP first
+- * if we emulated INT3 before.
++ * but re-execute the instruction instead.
+ */
+- if (kvm_exception_is_soft(vector)) {
+- if (vector == BP_VECTOR && int3_injected &&
+- kvm_is_linear_rip(vcpu, svm->int3_rip))
+- kvm_rip_write(vcpu,
+- kvm_rip_read(vcpu) - int3_injected);
++ if (kvm_exception_is_soft(vector))
+ break;
+- }
++
+ if (exitintinfo & SVM_EXITINTINFO_VALID_ERR) {
+ u32 err = svm->vmcb->control.exit_int_info_err;
+ kvm_requeue_exception_e(vcpu, vector, err);
+@@ -4899,13 +4908,16 @@ static __init int svm_hardware_setup(void)
+ /* Setup shadow_me_value and shadow_me_mask */
+ kvm_mmu_set_me_spte_mask(sme_me_mask, sme_me_mask);
+
+- /* Note, SEV setup consumes npt_enabled. */
++ svm_adjust_mmio_mask();
++
++ /*
++ * Note, SEV setup consumes npt_enabled and enable_mmio_caching (which
++ * may be modified by svm_adjust_mmio_mask()).
++ */
+ sev_hardware_setup();
+
+ svm_hv_hardware_setup();
+
+- svm_adjust_mmio_mask();
+-
+ for_each_possible_cpu(cpu) {
+ r = svm_cpu_init(cpu);
+ if (r)
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index ab135f9ef52f2..67215fd6bd4a5 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1223,7 +1223,7 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
+ BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
+ /* reserved */
+ BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
+- u64 vmx_basic = vmx->nested.msrs.basic;
++ u64 vmx_basic = vmcs_config.nested.basic;
+
+ if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
+ return -EINVAL;
+@@ -1246,36 +1246,42 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
+ return 0;
+ }
+
+-static int
+-vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++static void vmx_get_control_msr(struct nested_vmx_msrs *msrs, u32 msr_index,
++ u32 **low, u32 **high)
+ {
+- u64 supported;
+- u32 *lowp, *highp;
+-
+ switch (msr_index) {
+ case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+- lowp = &vmx->nested.msrs.pinbased_ctls_low;
+- highp = &vmx->nested.msrs.pinbased_ctls_high;
++ *low = &msrs->pinbased_ctls_low;
++ *high = &msrs->pinbased_ctls_high;
+ break;
+ case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+- lowp = &vmx->nested.msrs.procbased_ctls_low;
+- highp = &vmx->nested.msrs.procbased_ctls_high;
++ *low = &msrs->procbased_ctls_low;
++ *high = &msrs->procbased_ctls_high;
+ break;
+ case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+- lowp = &vmx->nested.msrs.exit_ctls_low;
+- highp = &vmx->nested.msrs.exit_ctls_high;
++ *low = &msrs->exit_ctls_low;
++ *high = &msrs->exit_ctls_high;
+ break;
+ case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+- lowp = &vmx->nested.msrs.entry_ctls_low;
+- highp = &vmx->nested.msrs.entry_ctls_high;
++ *low = &msrs->entry_ctls_low;
++ *high = &msrs->entry_ctls_high;
+ break;
+ case MSR_IA32_VMX_PROCBASED_CTLS2:
+- lowp = &vmx->nested.msrs.secondary_ctls_low;
+- highp = &vmx->nested.msrs.secondary_ctls_high;
++ *low = &msrs->secondary_ctls_low;
++ *high = &msrs->secondary_ctls_high;
+ break;
+ default:
+ BUG();
+ }
++}
++
++static int
++vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++{
++ u32 *lowp, *highp;
++ u64 supported;
++
++ vmx_get_control_msr(&vmcs_config.nested, msr_index, &lowp, &highp);
+
+ supported = vmx_control_msr(*lowp, *highp);
+
+@@ -1287,6 +1293,7 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
+ if (!is_bitwise_subset(supported, data, GENMASK_ULL(63, 32)))
+ return -EINVAL;
+
++ vmx_get_control_msr(&vmx->nested.msrs, msr_index, &lowp, &highp);
+ *lowp = data;
+ *highp = data >> 32;
+ return 0;
+@@ -1300,10 +1307,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
+ BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30) |
+ /* reserved */
+ GENMASK_ULL(13, 9) | BIT_ULL(31);
+- u64 vmx_misc;
+-
+- vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
+- vmx->nested.msrs.misc_high);
++ u64 vmx_misc = vmx_control_msr(vmcs_config.nested.misc_low,
++ vmcs_config.nested.misc_high);
+
+ if (!is_bitwise_subset(vmx_misc, data, feature_and_reserved_bits))
+ return -EINVAL;
+@@ -1331,10 +1336,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
+
+ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
+ {
+- u64 vmx_ept_vpid_cap;
+-
+- vmx_ept_vpid_cap = vmx_control_msr(vmx->nested.msrs.ept_caps,
+- vmx->nested.msrs.vpid_caps);
++ u64 vmx_ept_vpid_cap = vmx_control_msr(vmcs_config.nested.ept_caps,
++ vmcs_config.nested.vpid_caps);
+
+ /* Every bit is either reserved or a feature bit. */
+ if (!is_bitwise_subset(vmx_ept_vpid_cap, data, -1ULL))
+@@ -1345,20 +1348,21 @@ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
+ return 0;
+ }
+
+-static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++static u64 *vmx_get_fixed0_msr(struct nested_vmx_msrs *msrs, u32 msr_index)
+ {
+- u64 *msr;
+-
+ switch (msr_index) {
+ case MSR_IA32_VMX_CR0_FIXED0:
+- msr = &vmx->nested.msrs.cr0_fixed0;
+- break;
++ return &msrs->cr0_fixed0;
+ case MSR_IA32_VMX_CR4_FIXED0:
+- msr = &vmx->nested.msrs.cr4_fixed0;
+- break;
++ return &msrs->cr4_fixed0;
+ default:
+ BUG();
+ }
++}
++
++static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
++{
++ const u64 *msr = vmx_get_fixed0_msr(&vmcs_config.nested, msr_index);
+
+ /*
+ * 1 bits (which indicates bits which "must-be-1" during VMX operation)
+@@ -1367,7 +1371,7 @@ static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
+ if (!is_bitwise_subset(data, *msr, -1ULL))
+ return -EINVAL;
+
+- *msr = data;
++ *vmx_get_fixed0_msr(&vmx->nested.msrs, msr_index) = data;
+ return 0;
+ }
+
+@@ -1428,7 +1432,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
+ vmx->nested.msrs.vmcs_enum = data;
+ return 0;
+ case MSR_IA32_VMX_VMFUNC:
+- if (data & ~vmx->nested.msrs.vmfunc_controls)
++ if (data & ~vmcs_config.nested.vmfunc_controls)
+ return -EINVAL;
+ vmx->nested.msrs.vmfunc_controls = data;
+ return 0;
+@@ -2613,6 +2617,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ }
+
+ if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
++ intel_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu)) &&
+ WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
+ vmcs12->guest_ia32_perf_global_ctrl))) {
+ *entry_failure_code = ENTRY_FAIL_DEFAULT;
+@@ -3373,10 +3378,12 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ if (likely(!evaluate_pending_interrupts) && kvm_vcpu_apicv_active(vcpu))
+ evaluate_pending_interrupts |= vmx_has_apicv_interrupt(vcpu);
+
+- if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
++ if (!vmx->nested.nested_run_pending ||
++ !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
+ if (kvm_mpx_supported() &&
+- !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++ (!vmx->nested.nested_run_pending ||
++ !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+ vmx->nested.vmcs01_guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+
+ /*
+@@ -4336,7 +4343,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
+ vmcs_write64(GUEST_IA32_PAT, vmcs12->host_ia32_pat);
+ vcpu->arch.pat = vmcs12->host_ia32_pat;
+ }
+- if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
++ if ((vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL) &&
++ intel_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu)))
+ WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
+ vmcs12->host_ia32_perf_global_ctrl));
+
+@@ -4962,20 +4970,25 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
+
+ /*
+- * The Intel VMX Instruction Reference lists a bunch of bits that are
+- * prerequisite to running VMXON, most notably cr4.VMXE must be set to
+- * 1 (see vmx_is_valid_cr4() for when we allow the guest to set this).
+- * Otherwise, we should fail with #UD. But most faulting conditions
+- * have already been checked by hardware, prior to the VM-exit for
+- * VMXON. We do test guest cr4.VMXE because processor CR4 always has
+- * that bit set to 1 in non-root mode.
++ * Note, KVM cannot rely on hardware to perform the CR0/CR4 #UD checks
++ * that have higher priority than VM-Exit (see Intel SDM's pseudocode
++ * for VMXON), as KVM must load valid CR0/CR4 values into hardware while
++ * running the guest, i.e. KVM needs to check the _guest_ values.
++ *
++ * Rely on hardware for the other two pre-VM-Exit checks, !VM86 and
++ * !COMPATIBILITY modes. KVM may run the guest in VM86 to emulate Real
++ * Mode, but KVM will never take the guest out of those modes.
+ */
+- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
++ if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
++ !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
+ kvm_queue_exception(vcpu, UD_VECTOR);
+ return 1;
+ }
+
+- /* CPL=0 must be checked manually. */
++ /*
++ * CPL=0 and all other checks that are lower priority than VM-Exit must
++ * be checked manually.
++ */
+ if (vmx_get_cpl(vcpu)) {
+ kvm_inject_gp(vcpu, 0);
+ return 1;
+@@ -6775,6 +6788,9 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
+ rdmsrl(MSR_IA32_VMX_CR0_FIXED1, msrs->cr0_fixed1);
+ rdmsrl(MSR_IA32_VMX_CR4_FIXED1, msrs->cr4_fixed1);
+
++ if (vmx_umip_emulated())
++ msrs->cr4_fixed1 |= X86_CR4_UMIP;
++
+ msrs->vmcs_enum = nested_vmx_calc_vmcs_enum_msr();
+ }
+
+diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
+index c92cea0b8cccf..129ae4e01f7c1 100644
+--- a/arch/x86/kvm/vmx/nested.h
++++ b/arch/x86/kvm/vmx/nested.h
+@@ -281,7 +281,8 @@ static inline bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
+ u64 fixed0 = to_vmx(vcpu)->nested.msrs.cr4_fixed0;
+ u64 fixed1 = to_vmx(vcpu)->nested.msrs.cr4_fixed1;
+
+- return fixed_bits_valid(val, fixed0, fixed1);
++ return fixed_bits_valid(val, fixed0, fixed1) &&
++ __kvm_is_valid_cr4(vcpu, val);
+ }
+
+ /* No difference in the restrictions on guest and host CR4 in VMX operation. */
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 37e9eb32e3d90..a9280ebf78f5f 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -98,6 +98,9 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
+ {
+ struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
++ if (!intel_pmu_has_perf_global_ctrl(pmu))
++ return true;
++
+ return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
+ }
+
+@@ -212,7 +215,7 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
+ case MSR_CORE_PERF_GLOBAL_STATUS:
+ case MSR_CORE_PERF_GLOBAL_CTRL:
+ case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+- ret = pmu->version > 1;
++ return intel_pmu_has_perf_global_ctrl(pmu);
+ break;
+ default:
+ ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
+@@ -395,7 +398,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_CORE_PERF_FIXED_CTR_CTRL:
+ if (pmu->fixed_ctr_ctrl == data)
+ return 0;
+- if (!(data & 0xfffffffffffff444ull)) {
++ if (!(data & pmu->fixed_ctr_ctrl_mask)) {
+ reprogram_fixed_counters(pmu, data);
+ return 0;
+ }
+@@ -479,6 +482,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ struct kvm_cpuid_entry2 *entry;
+ union cpuid10_eax eax;
+ union cpuid10_edx edx;
++ int i;
+
+ pmu->nr_arch_gp_counters = 0;
+ pmu->nr_arch_fixed_counters = 0;
+@@ -487,6 +491,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ pmu->version = 0;
+ pmu->reserved_bits = 0xffffffff00200000ull;
+ pmu->raw_event_mask = X86_RAW_EVENT_MASK;
++ pmu->global_ctrl_mask = ~0ull;
++ pmu->global_ovf_ctrl_mask = ~0ull;
++ pmu->fixed_ctr_ctrl_mask = ~0ull;
+
+ entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+ if (!entry || !vcpu->kvm->arch.enable_pmu)
+@@ -522,6 +529,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ setup_fixed_pmc_eventsel(pmu);
+ }
+
++ for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
++ pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));
+ pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |
+ (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED);
+ pmu->global_ctrl_mask = ~pmu->global_ctrl;
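
The fixed_ctr_ctrl_mask loop above encodes the layout of IA32_FIXED_CTR_CTRL: each fixed counter owns a 4-bit nibble in which bits 0 (ring-0 enable), 1 (ring-3 enable) and 3 (PMI on overflow) are writable, hence the 0xb pattern, while bit 2 (AnyThread) remains reserved here. Starting from all-ones and clearing 0xb per supported counter leaves exactly the reserved bits set; for three counters this reproduces the old hard-coded 0xfffffffffffff444 literal the hunk removes. A standalone check:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        uint64_t mask = ~0ULL;
        int nr_fixed = 3;       /* e.g. three architectural fixed counters */

        for (int i = 0; i < nr_fixed; i++)
                mask &= ~(0xbULL << (i * 4));

        /* Prints 0xfffffffffffff444 for nr_fixed == 3. */
        printf("reserved-bit mask: %#" PRIx64 "\n", mask);
        return 0;
}
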
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index be7c19374fdd9..0aaea87a14597 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -3230,8 +3230,8 @@ static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ /*
+ * We operate under the default treatment of SMM, so VMX cannot be
+- * enabled under SMM. Note, whether or not VMXE is allowed at all is
+- * handled by kvm_is_valid_cr4().
++ * enabled under SMM. Note, whether or not VMXE is allowed at all,
++ * i.e. is a reserved bit, is handled by common x86 code.
+ */
+ if ((cr4 & X86_CR4_VMXE) && is_smm(vcpu))
+ return false;
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 1e7f9453894b1..93aa1f3ea01e5 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -92,6 +92,18 @@ union vmx_exit_reason {
+ u32 full;
+ };
+
++static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
++{
++ /*
++ * Architecturally, Intel's SDM states that IA32_PERF_GLOBAL_CTRL is
++ * supported if "CPUID.0AH: EAX[7:0] > 0", i.e. if the PMU version is
++ * greater than zero. However, KVM only exposes and emulates the MSR
++ * to/for the guest if the guest PMU supports at least "Architectural
++ * Performance Monitoring Version 2".
++ */
++ return pmu->version > 1;
++}
++
+ #define vcpu_to_lbr_desc(vcpu) (&to_vmx(vcpu)->lbr_desc)
+ #define vcpu_to_lbr_records(vcpu) (&to_vmx(vcpu)->lbr_desc.records)
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e5fa335a4ea79..bc411d19dac08 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1094,7 +1094,7 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_xsetbv);
+
+-bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ if (cr4 & cr4_reserved_bits)
+ return false;
+@@ -1102,9 +1102,15 @@ bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
+ return false;
+
+- return static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
++ return true;
++}
++EXPORT_SYMBOL_GPL(__kvm_is_valid_cr4);
++
++static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++{
++ return __kvm_is_valid_cr4(vcpu, cr4) &&
++ static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
+ }
+-EXPORT_SYMBOL_GPL(kvm_is_valid_cr4);
+
+ void kvm_post_set_cr4(struct kvm_vcpu *vcpu, unsigned long old_cr4, unsigned long cr4)
+ {
+@@ -3239,17 +3245,20 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ /* only 0 or all 1s can be written to IA32_MCi_CTL
+ * some Linux kernels though clear bit 10 in bank 4 to
+ * workaround a BIOS/GART TBL issue on AMD K8s, ignore
+- * this to avoid an uncatched #GP in the guest
++	 * this to avoid an uncaught #GP in the guest.
++ *
++ * UNIXWARE clears bit 0 of MC1_CTL to ignore
++ * correctable, single-bit ECC data errors.
+ */
+ if ((offset & 0x3) == 0 &&
+- data != 0 && (data | (1 << 10)) != ~(u64)0)
+- return -1;
++ data != 0 && (data | (1 << 10) | 1) != ~(u64)0)
++ return 1;
+
+ /* MCi_STATUS */
+ if (!msr_info->host_initiated &&
+ (offset & 0x3) == 1 && data != 0) {
+ if (!can_set_mci_status(vcpu))
+- return -1;
++ return 1;
+ }
+
+ vcpu->arch.mce_banks[offset] = data;
+@@ -3380,6 +3389,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
+ struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
+ struct kvm_steal_time __user *st;
+ struct kvm_memslots *slots;
++ gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
+ u64 steal;
+ u32 version;
+
+@@ -3397,13 +3407,12 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
+ slots = kvm_memslots(vcpu->kvm);
+
+ if (unlikely(slots->generation != ghc->generation ||
++ gpa != ghc->gpa ||
+ kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
+- gfn_t gfn = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
+-
+ /* We rely on the fact that it fits in a single page. */
+ BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
+
+- if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gfn, sizeof(*st)) ||
++ if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gpa, sizeof(*st)) ||
+ kvm_is_error_hva(ghc->hva) || !ghc->memslot)
+ return;
+ }
+@@ -4392,10 +4401,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ if (r < sizeof(struct kvm_xsave))
+ r = sizeof(struct kvm_xsave);
+ break;
++ }
+ case KVM_CAP_PMU_CAPABILITY:
+ r = enable_pmu ? KVM_CAP_PMU_VALID_MASK : 0;
+ break;
+- }
+ case KVM_CAP_DISABLE_QUIRKS2:
+ r = KVM_X86_VALID_QUIRKS;
+ break;
+@@ -4629,6 +4638,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ struct kvm_steal_time __user *st;
+ struct kvm_memslots *slots;
+ static const u8 preempted = KVM_VCPU_PREEMPTED;
++ gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
+
+ /*
+ * The vCPU can be marked preempted if and only if the VM-Exit was on
+@@ -4656,6 +4666,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ slots = kvm_memslots(vcpu->kvm);
+
+ if (unlikely(slots->generation != ghc->generation ||
++ gpa != ghc->gpa ||
+ kvm_is_error_hva(ghc->hva) || !ghc->memslot))
+ return;
+
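
Both steal-time hunks in x86.c above add the same missing revalidation input: the cached gpa-to-hva translation was only checked against the memslot generation, so rewriting the steal-time MSR to a different address within the same generation could keep using the stale mapping. The generic shape of the fix, with illustrative types and a hypothetical slow path:

#include <stdint.h>

struct hva_cache {
        uint64_t generation;    /* memslot-layout generation when cached */
        uint64_t gpa;           /* the address the cache was built for */
        void *hva;
};

void *slow_translate(uint64_t gpa);     /* hypothetical slow path */

void *cached_translate(struct hva_cache *c, uint64_t cur_gen, uint64_t gpa)
{
        /* Recheck every input the cached value was derived from,
         * not just the generation counter. */
        if (c->generation != cur_gen || c->gpa != gpa || !c->hva) {
                c->hva = slow_translate(gpa);
                c->gpa = gpa;
                c->generation = cur_gen;
        }
        return c->hva;
}
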
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 588792f003345..80417761fe4ac 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -407,7 +407,7 @@ static inline void kvm_machine_check(void)
+ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
+ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
+ int kvm_spec_ctrl_test_value(u64 value);
+-bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
++bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
+ struct x86_exception *e);
+ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index 610beba35907a..0a4785cbc8d1a 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -707,23 +707,24 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
+ break;
+
+ case KVM_XEN_VCPU_ATTR_TYPE_TIMER:
+- if (data->u.timer.port) {
+- if (data->u.timer.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL) {
+- r = -EINVAL;
+- break;
+- }
+- vcpu->arch.xen.timer_virq = data->u.timer.port;
++ if (data->u.timer.port &&
++ data->u.timer.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL) {
++ r = -EINVAL;
++ break;
++ }
++
++ if (!vcpu->arch.xen.timer.function)
+ kvm_xen_init_timer(vcpu);
+
+- /* Restart the timer if it's set */
+- if (data->u.timer.expires_ns)
+- kvm_xen_start_timer(vcpu, data->u.timer.expires_ns,
+- data->u.timer.expires_ns -
+- get_kvmclock_ns(vcpu->kvm));
+- } else if (kvm_xen_timer_enabled(vcpu)) {
+- kvm_xen_stop_timer(vcpu);
+- vcpu->arch.xen.timer_virq = 0;
+- }
++ /* Stop the timer (if it's running) before changing the vector */
++ kvm_xen_stop_timer(vcpu);
++ vcpu->arch.xen.timer_virq = data->u.timer.port;
++
++ /* Start the timer if the new value has a valid vector+expiry. */
++ if (data->u.timer.port && data->u.timer.expires_ns)
++ kvm_xen_start_timer(vcpu, data->u.timer.expires_ns,
++ data->u.timer.expires_ns -
++ get_kvmclock_ns(vcpu->kvm));
+
+ r = 0;
+ break;
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index dba2197c05c30..331310c293492 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -94,16 +94,18 @@ static bool ex_handler_copy(const struct exception_table_entry *fixup,
+ static bool ex_handler_msr(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, bool wrmsr, bool safe, int reg)
+ {
+- if (!safe && wrmsr &&
+- pr_warn_once("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n",
+- (unsigned int)regs->cx, (unsigned int)regs->dx,
+- (unsigned int)regs->ax, regs->ip, (void *)regs->ip))
++ if (__ONCE_LITE_IF(!safe && wrmsr)) {
++ pr_warn("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n",
++ (unsigned int)regs->cx, (unsigned int)regs->dx,
++ (unsigned int)regs->ax, regs->ip, (void *)regs->ip);
+ show_stack_regs(regs);
++ }
+
+- if (!safe && !wrmsr &&
+- pr_warn_once("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n",
+- (unsigned int)regs->cx, regs->ip, (void *)regs->ip))
++ if (__ONCE_LITE_IF(!safe && !wrmsr)) {
++ pr_warn("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n",
++ (unsigned int)regs->cx, regs->ip, (void *)regs->ip);
+ show_stack_regs(regs);
++ }
+
+ if (!wrmsr) {
+ /* Pretend that the read succeeded and returned 0. */
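
The extable.c conversion above replaces the idiom of hiding the once-only gate in pr_warn_once()'s return value: __ONCE_LITE_IF() from <linux/once_lite.h> evaluates a condition and latches a static flag, so a whole block (here the printk plus show_stack_regs()) shares explicit one-shot semantics. A sketch with a hypothetical report helper:

#include <linux/kernel.h>
#include <linux/once_lite.h>

static void demo_report(int err, const void *ctx)
{
        /* True at most once for err != 0; both statements are gated. */
        if (__ONCE_LITE_IF(err != 0)) {
                pr_warn("demo: first failure, err=%d\n", err);
                print_hex_dump_bytes("demo ctx: ", DUMP_PREFIX_OFFSET, ctx, 16);
        }
}
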
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index f6d038e2cd8e8..97452688f99fe 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -20,8 +20,8 @@
+ #include <linux/bitops.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/virtio_config.h>
++#include <linux/virtio_anchor.h>
+ #include <linux/cc_platform.h>
+-#include <linux/platform-feature.h>
+
+ #include <asm/tlbflush.h>
+ #include <asm/fixmap.h>
+@@ -245,7 +245,7 @@ void __init sev_setup_arch(void)
+ swiotlb_adjust_size(size);
+
+ /* Set restricted memory access for virtio. */
+- platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
++ virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
+ }
+
+ static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index e8b061557887d..2aadb2019b4f2 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -867,7 +867,7 @@ void debug_cpumask_set_cpu(int cpu, int node, bool enable)
+ return;
+ }
+ mask = node_to_cpumask_map[node];
+- if (!mask) {
++ if (!cpumask_available(mask)) {
+ pr_err("node_to_cpumask_map[%i] NULL\n", node);
+ dump_stack();
+ return;
+@@ -913,7 +913,7 @@ const struct cpumask *cpumask_of_node(int node)
+ dump_stack();
+ return cpu_none_mask;
+ }
+- if (node_to_cpumask_map[node] == NULL) {
++ if (!cpumask_available(node_to_cpumask_map[node])) {
+ printk(KERN_WARNING
+ "cpumask_of_node(%d): no node_to_cpumask_map!\n",
+ node);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
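
The numa.c hunks above replace raw NULL tests because node_to_cpumask_map entries are cpumask_var_t: with CONFIG_CPUMASK_OFFSTACK=n that type is a one-element array, so a NULL comparison is always false (and draws a compiler warning), while cpumask_available() compiles to the right check in both configurations. Sketch:

#include <linux/cpumask.h>
#include <linux/gfp.h>

static int demo_mask_init(cpumask_var_t *mask)
{
        if (!zalloc_cpumask_var(mask, GFP_KERNEL))
                return -ENOMEM;

        /* Correct whether cpumask_var_t is a pointer (OFFSTACK=y)
         * or an on-stack array (OFFSTACK=n). */
        if (cpumask_available(*mask))
                cpumask_set_cpu(0, *mask);

        return 0;
}
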
+index b808c9a80d1be..41d170653e8d9 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -2506,3 +2506,34 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
+ return ERR_PTR(-EINVAL);
+ return dst;
+ }
++
++/* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
++bool bpf_jit_supports_subprog_tailcalls(void)
++{
++ return true;
++}
++
++void bpf_jit_free(struct bpf_prog *prog)
++{
++ if (prog->jited) {
++ struct x64_jit_data *jit_data = prog->aux->jit_data;
++ struct bpf_binary_header *hdr;
++
++ /*
++ * If we fail the final pass of JIT (from jit_subprogs),
++ * the program may not be finalized yet. Call finalize here
++ * before freeing it.
++ */
++ if (jit_data) {
++ bpf_jit_binary_pack_finalize(prog, jit_data->header,
++ jit_data->rw_header);
++ kvfree(jit_data->addrs);
++ kfree(jit_data);
++ }
++ hdr = bpf_jit_binary_pack_hdr(prog);
++ bpf_jit_binary_pack_free(hdr, NULL);
++ WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(prog));
++ }
++
++ bpf_prog_unlock_free(prog);
++}
+diff --git a/arch/x86/platform/olpc/olpc-xo1-sci.c b/arch/x86/platform/olpc/olpc-xo1-sci.c
+index f03a6883dcc6d..89f25af4b3c33 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-sci.c
++++ b/arch/x86/platform/olpc/olpc-xo1-sci.c
+@@ -80,7 +80,7 @@ static void send_ebook_state(void)
+ return;
+ }
+
+- if (!!test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == state)
++ if (test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == !!state)
+ return; /* Nothing new to report. */
+
+ input_report_switch(ebook_switch_idev, SW_TABLET_MODE, state);
+diff --git a/arch/x86/um/Makefile b/arch/x86/um/Makefile
+index ba5789c358094..a8cde4e8ab114 100644
+--- a/arch/x86/um/Makefile
++++ b/arch/x86/um/Makefile
+@@ -28,7 +28,8 @@ else
+
+ obj-y += syscalls_64.o vdso/
+
+-subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o ../entry/thunk_64.o
++subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o
++subarch-$(CONFIG_PREEMPTION) += ../entry/thunk_64.o
+
+ endif
+
+diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
+index 8b71b1dd76396..28762f8005961 100644
+--- a/arch/x86/xen/enlighten_hvm.c
++++ b/arch/x86/xen/enlighten_hvm.c
+@@ -4,6 +4,7 @@
+ #include <linux/cpu.h>
+ #include <linux/kexec.h>
+ #include <linux/memblock.h>
++#include <linux/virtio_anchor.h>
+
+ #include <xen/features.h>
+ #include <xen/events.h>
+@@ -195,7 +196,8 @@ static void __init xen_hvm_guest_init(void)
+ if (xen_pv_domain())
+ return;
+
+- xen_set_restricted_virtio_memory_access();
++ if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
++ virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
+
+ init_hvm_pv_info();
+
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 70fb2ea85e907..0ed2e487a693f 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -31,6 +31,7 @@
+ #include <linux/gfp.h>
+ #include <linux/edd.h>
+ #include <linux/reboot.h>
++#include <linux/virtio_anchor.h>
+
+ #include <xen/xen.h>
+ #include <xen/events.h>
+@@ -109,7 +110,9 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
+
+ static void __init xen_pv_init_platform(void)
+ {
+- xen_set_restricted_virtio_memory_access();
++ /* PV guests can't operate virtio devices without grants. */
++ if (IS_ENABLED(CONFIG_XEN_VIRTIO))
++ virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
+
+ populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
+
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index fd84d48917589..3805dc2c259ce 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -472,16 +472,24 @@ static const struct net_device_ops iss_netdev_ops = {
+ .ndo_set_rx_mode = iss_net_set_multicast_list,
+ };
+
+-static int iss_net_configure(int index, char *init)
++static void iss_net_pdev_release(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++ struct iss_net_private *lp =
++ container_of(pdev, struct iss_net_private, pdev);
++
++ free_netdev(lp->dev);
++}
++
++static void iss_net_configure(int index, char *init)
+ {
+ struct net_device *dev;
+ struct iss_net_private *lp;
+- int err;
+
+ dev = alloc_etherdev(sizeof(*lp));
+ if (dev == NULL) {
+ pr_err("eth_configure: failed to allocate device\n");
+- return 1;
++ return;
+ }
+
+ /* Initialize private element. */
+@@ -509,7 +517,7 @@ static int iss_net_configure(int index, char *init)
+ if (!tuntap_probe(lp, index, init)) {
+ pr_err("%s: invalid arguments. Skipping device!\n",
+ dev->name);
+- goto errout;
++ goto err_free_netdev;
+ }
+
+ pr_info("Netdevice %d (%pM)\n", index, dev->dev_addr);
+@@ -517,7 +525,8 @@ static int iss_net_configure(int index, char *init)
+ /* sysfs register */
+
+ if (!driver_registered) {
+- platform_driver_register(&iss_net_driver);
++ if (platform_driver_register(&iss_net_driver))
++ goto err_free_netdev;
+ driver_registered = 1;
+ }
+
+@@ -527,7 +536,9 @@ static int iss_net_configure(int index, char *init)
+
+ lp->pdev.id = index;
+ lp->pdev.name = DRIVER_NAME;
+- platform_device_register(&lp->pdev);
++ lp->pdev.dev.release = iss_net_pdev_release;
++ if (platform_device_register(&lp->pdev))
++ goto err_free_netdev;
+ SET_NETDEV_DEV(dev, &lp->pdev.dev);
+
+ dev->netdev_ops = &iss_netdev_ops;
+@@ -536,23 +547,20 @@ static int iss_net_configure(int index, char *init)
+ dev->irq = -1;
+
+ rtnl_lock();
+- err = register_netdevice(dev);
+- rtnl_unlock();
+-
+- if (err) {
++ if (register_netdevice(dev)) {
++ rtnl_unlock();
+ pr_err("%s: error registering net device!\n", dev->name);
+- /* XXX: should we call ->remove() here? */
+- free_netdev(dev);
+- return 1;
++ platform_device_unregister(&lp->pdev);
++ return;
+ }
++ rtnl_unlock();
+
+ timer_setup(&lp->tl, iss_net_user_timer_expire, 0);
+
+- return 0;
++ return;
+
+-errout:
+- /* FIXME: unregister; free, etc.. */
+- return -EIO;
++err_free_netdev:
++ free_netdev(dev);
+ }
+
+ /* ------------------------------------------------------------------------- */
+diff --git a/block/bio.c b/block/bio.c
+index 51c99f2c5c908..eb7cc591ee931 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1159,6 +1159,37 @@ static void bio_put_pages(struct page **pages, size_t size, size_t off)
+ put_page(pages[i]);
+ }
+
++static int bio_iov_add_page(struct bio *bio, struct page *page,
++ unsigned int len, unsigned int offset)
++{
++ bool same_page = false;
++
++ if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
++ if (WARN_ON_ONCE(bio_full(bio, len)))
++ return -EINVAL;
++ __bio_add_page(bio, page, len, offset);
++ return 0;
++ }
++
++ if (same_page)
++ put_page(page);
++ return 0;
++}
++
++static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
++ unsigned int len, unsigned int offset)
++{
++ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
++ bool same_page = false;
++
++ if (bio_add_hw_page(q, bio, page, len, offset,
++ queue_max_zone_append_sectors(q), &same_page) != len)
++ return -EINVAL;
++ if (same_page)
++ put_page(page);
++ return 0;
++}
++
+ #define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))
+
+ /**
+@@ -1177,61 +1208,11 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
+ struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+ struct page **pages = (struct page **)bv;
+- bool same_page = false;
+- ssize_t size, left;
+- unsigned len, i;
+- size_t offset;
+-
+- /*
+- * Move page array up in the allocated memory for the bio vecs as far as
+- * possible so that we can start filling biovecs from the beginning
+- * without overwriting the temporary page array.
+- */
+- BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
+- pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
+-
+- size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+- if (unlikely(size <= 0))
+- return size ? size : -EFAULT;
+-
+- for (left = size, i = 0; left > 0; left -= len, i++) {
+- struct page *page = pages[i];
+-
+- len = min_t(size_t, PAGE_SIZE - offset, left);
+-
+- if (__bio_try_merge_page(bio, page, len, offset, &same_page)) {
+- if (same_page)
+- put_page(page);
+- } else {
+- if (WARN_ON_ONCE(bio_full(bio, len))) {
+- bio_put_pages(pages + i, left, offset);
+- return -EINVAL;
+- }
+- __bio_add_page(bio, page, len, offset);
+- }
+- offset = 0;
+- }
+-
+- iov_iter_advance(iter, size);
+- return 0;
+-}
+-
+-static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+-{
+- unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+- unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
+- struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+- unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
+- struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+- struct page **pages = (struct page **)bv;
+ ssize_t size, left;
+ unsigned len, i;
+ size_t offset;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(!max_append_sectors))
+- return 0;
+-
+ /*
+ * Move page array up in the allocated memory for the bio vecs as far as
+ * possible so that we can start filling biovecs from the beginning
+@@ -1246,17 +1227,18 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+
+ for (left = size, i = 0; left > 0; left -= len, i++) {
+ struct page *page = pages[i];
+- bool same_page = false;
+
+ len = min_t(size_t, PAGE_SIZE - offset, left);
+- if (bio_add_hw_page(q, bio, page, len, offset,
+- max_append_sectors, &same_page) != len) {
++ if (bio_op(bio) == REQ_OP_ZONE_APPEND)
++ ret = bio_iov_add_zone_append_page(bio, page, len,
++ offset);
++ else
++ ret = bio_iov_add_page(bio, page, len, offset);
++
++ if (ret) {
+ bio_put_pages(pages + i, left, offset);
+- ret = -EINVAL;
+ break;
+ }
+- if (same_page)
+- put_page(page);
+ offset = 0;
+ }
+
+@@ -1298,10 +1280,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ }
+
+ do {
+- if (bio_op(bio) == REQ_OP_ZONE_APPEND)
+- ret = __bio_iov_append_get_pages(bio, iter);
+- else
+- ret = __bio_iov_iter_get_pages(bio, iter);
++ ret = __bio_iov_iter_get_pages(bio, iter);
+ } while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
+
+ /* don't account direct I/O as memory stall */
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 33a11ba971eaf..c6181357e545b 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2886,15 +2886,21 @@ static int blk_iocost_init(struct request_queue *q)
+ * called before policy activation completion, can't assume that the
+ * target bio has an iocg associated and need to test for NULL iocg.
+ */
+- rq_qos_add(q, rqos);
++ ret = rq_qos_add(q, rqos);
++ if (ret)
++ goto err_free_ioc;
++
+ ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
+- if (ret) {
+- rq_qos_del(q, rqos);
+- free_percpu(ioc->pcpu_stat);
+- kfree(ioc);
+- return ret;
+- }
++ if (ret)
++ goto err_del_qos;
+ return 0;
++
++err_del_qos:
++ rq_qos_del(q, rqos);
++err_free_ioc:
++ free_percpu(ioc->pcpu_stat);
++ kfree(ioc);
++ return ret;
+ }
+
+ static struct blkcg_policy_data *ioc_cpd_alloc(gfp_t gfp)
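
blk_iocost_init() above and blk_iolatency_init() just below are both reshaped into the kernel's standard goto-unwind ladder, which the now-fallible rq_qos_add() makes necessary: each failure jumps to a label that undoes exactly the steps completed so far, in reverse order. The generic shape, with hypothetical helpers:

struct demo;
int demo_add_qos(struct demo *d);
void demo_del_qos(struct demo *d);
int demo_activate_policy(struct demo *d);
void demo_free(struct demo *d);

int demo_init(struct demo *d)
{
        int ret;

        ret = demo_add_qos(d);
        if (ret)
                goto err_free;

        ret = demo_activate_policy(d);
        if (ret)
                goto err_del_qos;

        return 0;

err_del_qos:
        demo_del_qos(d);        /* undo demo_add_qos() */
err_free:
        demo_free(d);           /* release the object's own allocations */
        return ret;
}
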
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 9568bf8dfe82b..7845dca5fcfdb 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -773,19 +773,23 @@ int blk_iolatency_init(struct request_queue *q)
+ rqos->ops = &blkcg_iolatency_ops;
+ rqos->q = q;
+
+- rq_qos_add(q, rqos);
+-
++ ret = rq_qos_add(q, rqos);
++ if (ret)
++ goto err_free;
+ ret = blkcg_activate_policy(q, &blkcg_policy_iolatency);
+- if (ret) {
+- rq_qos_del(q, rqos);
+- kfree(blkiolat);
+- return ret;
+- }
++ if (ret)
++ goto err_qos_del;
+
+ timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0);
+ INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn);
+
+ return 0;
++
++err_qos_del:
++ rq_qos_del(q, rqos);
++err_free:
++ kfree(blkiolat);
++ return ret;
+ }
+
+ static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 4d1ce9ef43187..61f179e5f151a 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -730,6 +730,9 @@ void blk_mq_debugfs_register_hctx(struct request_queue *q,
+ char name[20];
+ int i;
+
++ if (!q->debugfs_dir)
++ return;
++
+ snprintf(name, sizeof(name), "hctx%u", hctx->queue_num);
+ hctx->debugfs_dir = debugfs_create_dir(name, q->debugfs_dir);
+
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 0e46052b018a4..08b856570ad10 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -86,7 +86,7 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
+ init_waitqueue_head(&rq_wait->wait);
+ }
+
+-static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
++static inline int rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+ {
+ /*
+ * No IO can be in-flight when adding rqos, so freeze queue, which
+@@ -98,6 +98,8 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+ blk_mq_freeze_queue(q);
+
+ spin_lock_irq(&q->queue_lock);
++ if (rq_qos_id(q, rqos->id))
++ goto ebusy;
+ rqos->next = q->rq_qos;
+ q->rq_qos = rqos;
+ spin_unlock_irq(&q->queue_lock);
+@@ -109,6 +111,13 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+ blk_mq_debugfs_register_rqos(rqos);
+ mutex_unlock(&q->debugfs_mutex);
+ }
++
++ return 0;
++ebusy:
++ spin_unlock_irq(&q->queue_lock);
++ blk_mq_unfreeze_queue(q);
++ return -EBUSY;
++
+ }
+
+ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 0c119be0e8133..ae6ea0b545799 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -820,6 +820,7 @@ int wbt_init(struct request_queue *q)
+ {
+ struct rq_wb *rwb;
+ int i;
++ int ret;
+
+ rwb = kzalloc(sizeof(*rwb), GFP_KERNEL);
+ if (!rwb)
+@@ -846,7 +847,10 @@ int wbt_init(struct request_queue *q)
+ /*
+ * Assign rwb and add the stats callback.
+ */
+- rq_qos_add(q, &rwb->rqos);
++ ret = rq_qos_add(q, &rwb->rqos);
++ if (ret)
++ goto err_free;
++
+ blk_stat_add_callback(q, rwb->cb);
+
+ rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+@@ -855,4 +859,10 @@ int wbt_init(struct request_queue *q)
+ wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
+
+ return 0;
++
++err_free:
++ blk_stat_free_callback(rwb->cb);
++ kfree(rwb);
++ return ret;
++
+ }
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 7b81685b56550..c730eca940de5 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -704,26 +704,8 @@ config CRYPTO_BLAKE2B
+
+ See https://blake2.net for further information.
+
+-config CRYPTO_BLAKE2S
+- tristate "BLAKE2s digest algorithm"
+- select CRYPTO_LIB_BLAKE2S_GENERIC
+- select CRYPTO_HASH
+- help
+- Implementation of cryptographic hash function BLAKE2s
+- optimized for 8-32bit platforms and can produce digests of any size
+- between 1 to 32. The keyed hash is also implemented.
+-
+- This module provides the following algorithms:
+-
+- - blake2s-128
+- - blake2s-160
+- - blake2s-224
+- - blake2s-256
+-
+- See https://blake2.net for further information.
+-
+ config CRYPTO_BLAKE2S_X86
+- tristate "BLAKE2s digest algorithm (x86 accelerated version)"
++ bool "BLAKE2s digest algorithm (x86 accelerated version)"
+ depends on X86 && 64BIT
+ select CRYPTO_LIB_BLAKE2S_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
+diff --git a/crypto/Makefile b/crypto/Makefile
+index ceaaa9f34145a..5243f8908e8da 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -84,7 +84,6 @@ obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o
+ obj-$(CONFIG_CRYPTO_WP512) += wp512.o
+ CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
+ obj-$(CONFIG_CRYPTO_BLAKE2B) += blake2b_generic.o
+-obj-$(CONFIG_CRYPTO_BLAKE2S) += blake2s_generic.o
+ obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
+ obj-$(CONFIG_CRYPTO_ECB) += ecb.o
+ obj-$(CONFIG_CRYPTO_CBC) += cbc.o
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 7c9e6be35c30c..2f8352e888602 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -304,6 +304,10 @@ static int cert_sig_digest_update(const struct public_key_signature *sig,
+
+ BUG_ON(!sig->data);
+
++ /* SM2 signatures always use the SM3 hash algorithm */
++ if (!sig->hash_algo || strcmp(sig->hash_algo, "sm3") != 0)
++ return -EINVAL;
++
+ ret = sm2_compute_z_digest(tfm_pkey, SM2_DEFAULT_USERID,
+ SM2_DEFAULT_USERID_LEN, dgst);
+ if (ret)
+@@ -414,8 +418,7 @@ int public_key_verify_signature(const struct public_key *pkey,
+ if (ret)
+ goto error_free_key;
+
+- if (sig->pkey_algo && strcmp(sig->pkey_algo, "sm2") == 0 &&
+- sig->data_size) {
++ if (strcmp(pkey->pkey_algo, "sm2") == 0 && sig->data_size) {
+ ret = cert_sig_digest_update(sig, tfm);
+ if (ret)
+ goto error_free_key;
+diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c
+deleted file mode 100644
+index 5f96a21f87883..0000000000000
+--- a/crypto/blake2s_generic.c
++++ /dev/null
+@@ -1,75 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0 OR MIT
+-/*
+- * shash interface to the generic implementation of BLAKE2s
+- *
+- * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+- */
+-
+-#include <crypto/internal/blake2s.h>
+-#include <crypto/internal/hash.h>
+-
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-static int crypto_blake2s_update_generic(struct shash_desc *desc,
+- const u8 *in, unsigned int inlen)
+-{
+- return crypto_blake2s_update(desc, in, inlen, true);
+-}
+-
+-static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out)
+-{
+- return crypto_blake2s_final(desc, out, true);
+-}
+-
+-#define BLAKE2S_ALG(name, driver_name, digest_size) \
+- { \
+- .base.cra_name = name, \
+- .base.cra_driver_name = driver_name, \
+- .base.cra_priority = 100, \
+- .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
+- .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
+- .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
+- .base.cra_module = THIS_MODULE, \
+- .digestsize = digest_size, \
+- .setkey = crypto_blake2s_setkey, \
+- .init = crypto_blake2s_init, \
+- .update = crypto_blake2s_update_generic, \
+- .final = crypto_blake2s_final_generic, \
+- .descsize = sizeof(struct blake2s_state), \
+- }
+-
+-static struct shash_alg blake2s_algs[] = {
+- BLAKE2S_ALG("blake2s-128", "blake2s-128-generic",
+- BLAKE2S_128_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-160", "blake2s-160-generic",
+- BLAKE2S_160_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-224", "blake2s-224-generic",
+- BLAKE2S_224_HASH_SIZE),
+- BLAKE2S_ALG("blake2s-256", "blake2s-256-generic",
+- BLAKE2S_256_HASH_SIZE),
+-};
+-
+-static int __init blake2s_mod_init(void)
+-{
+- return crypto_register_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
+-}
+-
+-static void __exit blake2s_mod_exit(void)
+-{
+- crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
+-}
+-
+-subsys_initcall(blake2s_mod_init);
+-module_exit(blake2s_mod_exit);
+-
+-MODULE_ALIAS_CRYPTO("blake2s-128");
+-MODULE_ALIAS_CRYPTO("blake2s-128-generic");
+-MODULE_ALIAS_CRYPTO("blake2s-160");
+-MODULE_ALIAS_CRYPTO("blake2s-160-generic");
+-MODULE_ALIAS_CRYPTO("blake2s-224");
+-MODULE_ALIAS_CRYPTO("blake2s-224-generic");
+-MODULE_ALIAS_CRYPTO("blake2s-256");
+-MODULE_ALIAS_CRYPTO("blake2s-256-generic");
+-MODULE_LICENSE("GPL v2");
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 2bacf8384f59f..66b7ca1ccb23c 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1669,10 +1669,6 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
+ ret += tcrypt_test("rmd160");
+ break;
+
+- case 41:
+- ret += tcrypt_test("blake2s-256");
+- break;
+-
+ case 42:
+ ret += tcrypt_test("blake2b-512");
+ break;
+@@ -2240,10 +2236,6 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
+ test_hash_speed("rmd160", sec, generic_hash_speed_template);
+ if (mode > 300 && mode < 400) break;
+ fallthrough;
+- case 316:
+- test_hash_speed("blake2s-256", sec, generic_hash_speed_template);
+- if (mode > 300 && mode < 400) break;
+- fallthrough;
+ case 317:
+ test_hash_speed("blake2b-512", sec, generic_hash_speed_template);
+ if (mode > 300 && mode < 400) break;
+@@ -2352,10 +2344,6 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
+ test_ahash_speed("rmd160", sec, generic_hash_speed_template);
+ if (mode > 400 && mode < 500) break;
+ fallthrough;
+- case 416:
+- test_ahash_speed("blake2s-256", sec, generic_hash_speed_template);
+- if (mode > 400 && mode < 500) break;
+- fallthrough;
+ case 417:
+ test_ahash_speed("blake2b-512", sec, generic_hash_speed_template);
+ if (mode > 400 && mode < 500) break;
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 5801a8f9f7134..38acebbb3ed11 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -4375,30 +4375,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ .suite = {
+ .hash = __VECS(blake2b_512_tv_template)
+ }
+- }, {
+- .alg = "blake2s-128",
+- .test = alg_test_hash,
+- .suite = {
+- .hash = __VECS(blakes2s_128_tv_template)
+- }
+- }, {
+- .alg = "blake2s-160",
+- .test = alg_test_hash,
+- .suite = {
+- .hash = __VECS(blakes2s_160_tv_template)
+- }
+- }, {
+- .alg = "blake2s-224",
+- .test = alg_test_hash,
+- .suite = {
+- .hash = __VECS(blakes2s_224_tv_template)
+- }
+- }, {
+- .alg = "blake2s-256",
+- .test = alg_test_hash,
+- .suite = {
+- .hash = __VECS(blakes2s_256_tv_template)
+- }
+ }, {
+ .alg = "cbc(aes)",
+ .test = alg_test_skcipher,
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index 4d7449fc6a655..c29658337d963 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -34034,221 +34034,4 @@ static const struct hash_testvec blake2b_512_tv_template[] = {{
+ 0xae, 0x15, 0x81, 0x15, 0xd0, 0x88, 0xa0, 0x3c, },
+ }};
+
+-static const struct hash_testvec blakes2s_128_tv_template[] = {{
+- .digest = (u8[]){ 0x64, 0x55, 0x0d, 0x6f, 0xfe, 0x2c, 0x0a, 0x01,
+- 0xa1, 0x4a, 0xba, 0x1e, 0xad, 0xe0, 0x20, 0x0c, },
+-}, {
+- .plaintext = blake2_ordered_sequence,
+- .psize = 64,
+- .digest = (u8[]){ 0xdc, 0x66, 0xca, 0x8f, 0x03, 0x86, 0x58, 0x01,
+- 0xb0, 0xff, 0xe0, 0x6e, 0xd8, 0xa1, 0xa9, 0x0e, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 1,
+- .digest = (u8[]){ 0x88, 0x1e, 0x42, 0xe7, 0xbb, 0x35, 0x80, 0x82,
+- 0x63, 0x7c, 0x0a, 0x0f, 0xd7, 0xec, 0x6c, 0x2f, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 7,
+- .digest = (u8[]){ 0xcf, 0x9e, 0x07, 0x2a, 0xd5, 0x22, 0xf2, 0xcd,
+- 0xa2, 0xd8, 0x25, 0x21, 0x80, 0x86, 0x73, 0x1c, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 15,
+- .digest = (u8[]){ 0xf6, 0x33, 0x5a, 0x2c, 0x22, 0xa0, 0x64, 0xb2,
+- 0xb6, 0x3f, 0xeb, 0xbc, 0xd1, 0xc3, 0xe5, 0xb2, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 247,
+- .digest = (u8[]){ 0x72, 0x66, 0x49, 0x60, 0xf9, 0x4a, 0xea, 0xbe,
+- 0x1f, 0xf4, 0x60, 0xce, 0xb7, 0x81, 0xcb, 0x09, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 256,
+- .digest = (u8[]){ 0xd5, 0xa4, 0x0e, 0xc3, 0x16, 0xc7, 0x51, 0xa6,
+- 0x3c, 0xd0, 0xd9, 0x11, 0x57, 0xfa, 0x1e, 0xbb, },
+-}};
+-
+-static const struct hash_testvec blakes2s_160_tv_template[] = {{
+- .plaintext = blake2_ordered_sequence,
+- .psize = 7,
+- .digest = (u8[]){ 0xb4, 0xf2, 0x03, 0x49, 0x37, 0xed, 0xb1, 0x3e,
+- 0x5b, 0x2a, 0xca, 0x64, 0x82, 0x74, 0xf6, 0x62,
+- 0xe3, 0xf2, 0x84, 0xff, },
+-}, {
+- .plaintext = blake2_ordered_sequence,
+- .psize = 256,
+- .digest = (u8[]){ 0xaa, 0x56, 0x9b, 0xdc, 0x98, 0x17, 0x75, 0xf2,
+- 0xb3, 0x68, 0x83, 0xb7, 0x9b, 0x8d, 0x48, 0xb1,
+- 0x9b, 0x2d, 0x35, 0x05, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .digest = (u8[]){ 0x50, 0x16, 0xe7, 0x0c, 0x01, 0xd0, 0xd3, 0xc3,
+- 0xf4, 0x3e, 0xb1, 0x6e, 0x97, 0xa9, 0x4e, 0xd1,
+- 0x79, 0x65, 0x32, 0x93, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 1,
+- .digest = (u8[]){ 0x1c, 0x2b, 0xcd, 0x9a, 0x68, 0xca, 0x8c, 0x71,
+- 0x90, 0x29, 0x6c, 0x54, 0xfa, 0x56, 0x4a, 0xef,
+- 0xa2, 0x3a, 0x56, 0x9c, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 15,
+- .digest = (u8[]){ 0x36, 0xc3, 0x5f, 0x9a, 0xdc, 0x7e, 0xbf, 0x19,
+- 0x68, 0xaa, 0xca, 0xd8, 0x81, 0xbf, 0x09, 0x34,
+- 0x83, 0x39, 0x0f, 0x30, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 64,
+- .digest = (u8[]){ 0x86, 0x80, 0x78, 0xa4, 0x14, 0xec, 0x03, 0xe5,
+- 0xb6, 0x9a, 0x52, 0x0e, 0x42, 0xee, 0x39, 0x9d,
+- 0xac, 0xa6, 0x81, 0x63, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 247,
+- .digest = (u8[]){ 0x2d, 0xd8, 0xd2, 0x53, 0x66, 0xfa, 0xa9, 0x01,
+- 0x1c, 0x9c, 0xaf, 0xa3, 0xe2, 0x9d, 0x9b, 0x10,
+- 0x0a, 0xf6, 0x73, 0xe8, },
+-}};
+-
+-static const struct hash_testvec blakes2s_224_tv_template[] = {{
+- .plaintext = blake2_ordered_sequence,
+- .psize = 1,
+- .digest = (u8[]){ 0x61, 0xb9, 0x4e, 0xc9, 0x46, 0x22, 0xa3, 0x91,
+- 0xd2, 0xae, 0x42, 0xe6, 0x45, 0x6c, 0x90, 0x12,
+- 0xd5, 0x80, 0x07, 0x97, 0xb8, 0x86, 0x5a, 0xfc,
+- 0x48, 0x21, 0x97, 0xbb, },
+-}, {
+- .plaintext = blake2_ordered_sequence,
+- .psize = 247,
+- .digest = (u8[]){ 0x9e, 0xda, 0xc7, 0x20, 0x2c, 0xd8, 0x48, 0x2e,
+- 0x31, 0x94, 0xab, 0x46, 0x6d, 0x94, 0xd8, 0xb4,
+- 0x69, 0xcd, 0xae, 0x19, 0x6d, 0x9e, 0x41, 0xcc,
+- 0x2b, 0xa4, 0xd5, 0xf6, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .digest = (u8[]){ 0x32, 0xc0, 0xac, 0xf4, 0x3b, 0xd3, 0x07, 0x9f,
+- 0xbe, 0xfb, 0xfa, 0x4d, 0x6b, 0x4e, 0x56, 0xb3,
+- 0xaa, 0xd3, 0x27, 0xf6, 0x14, 0xbf, 0xb9, 0x32,
+- 0xa7, 0x19, 0xfc, 0xb8, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 7,
+- .digest = (u8[]){ 0x73, 0xad, 0x5e, 0x6d, 0xb9, 0x02, 0x8e, 0x76,
+- 0xf2, 0x66, 0x42, 0x4b, 0x4c, 0xfa, 0x1f, 0xe6,
+- 0x2e, 0x56, 0x40, 0xe5, 0xa2, 0xb0, 0x3c, 0xe8,
+- 0x7b, 0x45, 0xfe, 0x05, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 15,
+- .digest = (u8[]){ 0x16, 0x60, 0xfb, 0x92, 0x54, 0xb3, 0x6e, 0x36,
+- 0x81, 0xf4, 0x16, 0x41, 0xc3, 0x3d, 0xd3, 0x43,
+- 0x84, 0xed, 0x10, 0x6f, 0x65, 0x80, 0x7a, 0x3e,
+- 0x25, 0xab, 0xc5, 0x02, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 64,
+- .digest = (u8[]){ 0xca, 0xaa, 0x39, 0x67, 0x9c, 0xf7, 0x6b, 0xc7,
+- 0xb6, 0x82, 0xca, 0x0e, 0x65, 0x36, 0x5b, 0x7c,
+- 0x24, 0x00, 0xfa, 0x5f, 0xda, 0x06, 0x91, 0x93,
+- 0x6a, 0x31, 0x83, 0xb5, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 256,
+- .digest = (u8[]){ 0x90, 0x02, 0x26, 0xb5, 0x06, 0x9c, 0x36, 0x86,
+- 0x94, 0x91, 0x90, 0x1e, 0x7d, 0x2a, 0x71, 0xb2,
+- 0x48, 0xb5, 0xe8, 0x16, 0xfd, 0x64, 0x33, 0x45,
+- 0xb3, 0xd7, 0xec, 0xcc, },
+-}};
+-
+-static const struct hash_testvec blakes2s_256_tv_template[] = {{
+- .plaintext = blake2_ordered_sequence,
+- .psize = 15,
+- .digest = (u8[]){ 0xd9, 0x7c, 0x82, 0x8d, 0x81, 0x82, 0xa7, 0x21,
+- 0x80, 0xa0, 0x6a, 0x78, 0x26, 0x83, 0x30, 0x67,
+- 0x3f, 0x7c, 0x4e, 0x06, 0x35, 0x94, 0x7c, 0x04,
+- 0xc0, 0x23, 0x23, 0xfd, 0x45, 0xc0, 0xa5, 0x2d, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .digest = (u8[]){ 0x48, 0xa8, 0x99, 0x7d, 0xa4, 0x07, 0x87, 0x6b,
+- 0x3d, 0x79, 0xc0, 0xd9, 0x23, 0x25, 0xad, 0x3b,
+- 0x89, 0xcb, 0xb7, 0x54, 0xd8, 0x6a, 0xb7, 0x1a,
+- 0xee, 0x04, 0x7a, 0xd3, 0x45, 0xfd, 0x2c, 0x49, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 1,
+- .digest = (u8[]){ 0x22, 0x27, 0xae, 0xaa, 0x6e, 0x81, 0x56, 0x03,
+- 0xa7, 0xe3, 0xa1, 0x18, 0xa5, 0x9a, 0x2c, 0x18,
+- 0xf4, 0x63, 0xbc, 0x16, 0x70, 0xf1, 0xe7, 0x4b,
+- 0x00, 0x6d, 0x66, 0x16, 0xae, 0x9e, 0x74, 0x4e, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 7,
+- .digest = (u8[]){ 0x58, 0x5d, 0xa8, 0x60, 0x1c, 0xa4, 0xd8, 0x03,
+- 0x86, 0x86, 0x84, 0x64, 0xd7, 0xa0, 0x8e, 0x15,
+- 0x2f, 0x05, 0xa2, 0x1b, 0xbc, 0xef, 0x7a, 0x34,
+- 0xb3, 0xc5, 0xbc, 0x4b, 0xf0, 0x32, 0xeb, 0x12, },
+-}, {
+- .ksize = 32,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 64,
+- .digest = (u8[]){ 0x89, 0x75, 0xb0, 0x57, 0x7f, 0xd3, 0x55, 0x66,
+- 0xd7, 0x50, 0xb3, 0x62, 0xb0, 0x89, 0x7a, 0x26,
+- 0xc3, 0x99, 0x13, 0x6d, 0xf0, 0x7b, 0xab, 0xab,
+- 0xbd, 0xe6, 0x20, 0x3f, 0xf2, 0x95, 0x4e, 0xd4, },
+-}, {
+- .ksize = 1,
+- .key = "B",
+- .plaintext = blake2_ordered_sequence,
+- .psize = 247,
+- .digest = (u8[]){ 0x2e, 0x74, 0x1c, 0x1d, 0x03, 0xf4, 0x9d, 0x84,
+- 0x6f, 0xfc, 0x86, 0x32, 0x92, 0x49, 0x7e, 0x66,
+- 0xd7, 0xc3, 0x10, 0x88, 0xfe, 0x28, 0xb3, 0xe0,
+- 0xbf, 0x50, 0x75, 0xad, 0x8e, 0xa4, 0xe6, 0xb2, },
+-}, {
+- .ksize = 16,
+- .key = blake2_ordered_sequence,
+- .plaintext = blake2_ordered_sequence,
+- .psize = 256,
+- .digest = (u8[]){ 0xb9, 0xd2, 0x81, 0x0e, 0x3a, 0xb1, 0x62, 0x9b,
+- 0xad, 0x44, 0x05, 0xf4, 0x92, 0x2e, 0x99, 0xc1,
+- 0x4a, 0x47, 0xbb, 0x5b, 0x6f, 0xb2, 0x96, 0xed,
+- 0xd5, 0x06, 0xb5, 0x3a, 0x7c, 0x7a, 0x65, 0x1d, },
+-}};
+-
+ #endif /* _CRYPTO_TESTMGR_H */
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index fbe0756259c5a..c4d4d21391d7b 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -422,6 +422,9 @@ static int register_device_clock(struct acpi_device *adev,
+ if (!lpss_clk_dev)
+ lpt_register_clock_device();
+
++ if (IS_ERR(lpss_clk_dev))
++ return PTR_ERR(lpss_clk_dev);
++
+ clk_data = platform_get_drvdata(lpss_clk_dev);
+ if (!clk_data)
+ return -ENODEV;
+diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
+index d4326ec12d296..6b583373c58a2 100644
+--- a/drivers/acpi/apei/einj.c
++++ b/drivers/acpi/apei/einj.c
+@@ -546,6 +546,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
+ != REGION_INTERSECTS) &&
+ (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
+ != REGION_INTERSECTS) &&
++ (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_SOFT_RESERVED)
++ != REGION_INTERSECTS) &&
+ !arch_is_platform_page(base_addr)))
+ return -EINVAL;
+
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index e2db1bdd9dd25..1d36bb684f5cc 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -1399,6 +1399,7 @@ static int __init acpi_init(void)
+
+ pci_mmcfg_late_init();
+ acpi_iort_init();
++ acpi_viot_early_init();
+ acpi_hest_init();
+ acpi_ghes_init();
+ acpi_scan_init();
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 3c6d4ef87be0f..1e15a9f25ae97 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -618,33 +618,6 @@ static int pcc_data_alloc(int pcc_ss_id)
+ return 0;
+ }
+
+-/* Check if CPPC revision + num_ent combination is supported */
+-static bool is_cppc_supported(int revision, int num_ent)
+-{
+- int expected_num_ent;
+-
+- switch (revision) {
+- case CPPC_V2_REV:
+- expected_num_ent = CPPC_V2_NUM_ENT;
+- break;
+- case CPPC_V3_REV:
+- expected_num_ent = CPPC_V3_NUM_ENT;
+- break;
+- default:
+- pr_debug("Firmware exports unsupported CPPC revision: %d\n",
+- revision);
+- return false;
+- }
+-
+- if (expected_num_ent != num_ent) {
+- pr_debug("Firmware exports %d entries. Expected: %d for CPPC rev:%d\n",
+- num_ent, expected_num_ent, revision);
+- return false;
+- }
+-
+- return true;
+-}
+-
+ /*
+ * An example CPC table looks like the following.
+ *
+@@ -733,7 +706,6 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ cpc_obj->type, pr->id);
+ goto out_free;
+ }
+- cpc_ptr->num_entries = num_ent;
+
+ /* Second entry should be revision. */
+ cpc_obj = &out_obj->package.elements[1];
+@@ -744,10 +716,32 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ cpc_obj->type, pr->id);
+ goto out_free;
+ }
+- cpc_ptr->version = cpc_rev;
+
+- if (!is_cppc_supported(cpc_rev, num_ent))
++ if (cpc_rev < CPPC_V2_REV) {
++ pr_debug("Unsupported _CPC Revision (%d) for CPU:%d\n", cpc_rev,
++ pr->id);
++ goto out_free;
++ }
++
++ /*
++	 * Disregard _CPC if the number of entries in the return package is not
++ * as expected, but support future revisions being proper supersets of
++ * the v3 and only causing more entries to be returned by _CPC.
++ */
++ if ((cpc_rev == CPPC_V2_REV && num_ent != CPPC_V2_NUM_ENT) ||
++ (cpc_rev == CPPC_V3_REV && num_ent != CPPC_V3_NUM_ENT) ||
++ (cpc_rev > CPPC_V3_REV && num_ent <= CPPC_V3_NUM_ENT)) {
++ pr_debug("Unexpected number of _CPC return package entries (%d) for CPU:%d\n",
++ num_ent, pr->id);
+ goto out_free;
++ }
++ if (cpc_rev > CPPC_V3_REV) {
++ num_ent = CPPC_V3_NUM_ENT;
++ cpc_rev = CPPC_V3_REV;
++ }
++
++ cpc_ptr->num_entries = num_ent;
++ cpc_ptr->version = cpc_rev;
+
+ /* Iterate through remaining entries in _CPC */
+ for (i = 2; i < num_ent; i++) {
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index a1b871a418f87..488c9ec0da0bc 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -180,7 +180,6 @@ static struct workqueue_struct *ec_wq;
+ static struct workqueue_struct *ec_query_wq;
+
+ static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
+-static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
+ static int EC_FLAGS_TRUST_DSDT_GPE; /* Needs DSDT GPE as correction setting */
+ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+
+@@ -1407,24 +1406,16 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
+ if (ec->data_addr == 0 || ec->command_addr == 0)
+ return AE_OK;
+
+- if (boot_ec && boot_ec_is_ecdt && EC_FLAGS_IGNORE_DSDT_GPE) {
+- /*
+- * Always inherit the GPE number setting from the ECDT
+- * EC.
+- */
+- ec->gpe = boot_ec->gpe;
+- } else {
+- /* Get GPE bit assignment (EC events). */
+- /* TODO: Add support for _GPE returning a package */
+- status = acpi_evaluate_integer(handle, "_GPE", NULL, &tmp);
+- if (ACPI_SUCCESS(status))
+- ec->gpe = tmp;
++ /* Get GPE bit assignment (EC events). */
++ /* TODO: Add support for _GPE returning a package */
++ status = acpi_evaluate_integer(handle, "_GPE", NULL, &tmp);
++ if (ACPI_SUCCESS(status))
++ ec->gpe = tmp;
++ /*
++ * Errors are non-fatal, allowing for ACPI Reduced Hardware
++ * platforms which use GpioInt instead of GPE.
++ */
+
+- /*
+- * Errors are non-fatal, allowing for ACPI Reduced Hardware
+- * platforms which use GpioInt instead of GPE.
+- */
+- }
+ /* Use the global lock for all EC transactions? */
+ tmp = 0;
+ acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
+@@ -1862,60 +1853,12 @@ static int ec_honor_dsdt_gpe(const struct dmi_system_id *id)
+ return 0;
+ }
+
+-/*
+- * Some DSDTs contain wrong GPE setting.
+- * Asus FX502VD/VE, GL702VMK, X550VXK, X580VD
+- * https://bugzilla.kernel.org/show_bug.cgi?id=195651
+- */
+-static int ec_honor_ecdt_gpe(const struct dmi_system_id *id)
+-{
+- pr_debug("Detected system needing ignore DSDT GPE setting.\n");
+- EC_FLAGS_IGNORE_DSDT_GPE = 1;
+- return 0;
+-}
+-
+ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ {
+ ec_correct_ecdt, "MSI MS-171F", {
+ DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "MS-171F"),}, NULL},
+ {
+- ec_honor_ecdt_gpe, "ASUS FX502VD", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "FX502VD"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUS FX502VE", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "FX502VE"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUS GL702VMK", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "GL702VMK"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BA", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X505BA"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BP", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X505BP"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BA", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X542BA"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BP", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X542BP"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUS X550VXK", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X550VXK"),}, NULL},
+- {
+- ec_honor_ecdt_gpe, "ASUS X580VD", {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
+- {
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=209989 */
+ ec_honor_dsdt_gpe, "HP Pavilion Gaming Laptop 15-cx0xxx", {
+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+@@ -2207,13 +2150,6 @@ static const struct dmi_system_id acpi_ec_no_wakeup[] = {
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "Thinkpad X1 Carbon 6th"),
+ },
+ },
+- {
+- .ident = "ThinkPad X1 Carbon 6th",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Carbon 6th"),
+- },
+- },
+ {
+ .ident = "ThinkPad X1 Yoga 3rd",
+ .matches = {
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 6a5572a1a80cc..13200969ccf35 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -607,7 +607,7 @@ static DEFINE_RAW_SPINLOCK(c3_lock);
+ * @cx: Target state context
+ * @index: index of target state
+ */
+-static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
++static int __cpuidle acpi_idle_enter_bm(struct cpuidle_driver *drv,
+ struct acpi_processor *pr,
+ struct acpi_processor_cx *cx,
+ int index)
+@@ -664,7 +664,7 @@ static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
+ return index;
+ }
+
+-static int acpi_idle_enter(struct cpuidle_device *dev,
++static int __cpuidle acpi_idle_enter(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int index)
+ {
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+@@ -693,7 +693,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
+ return index;
+ }
+
+-static int acpi_idle_enter_s2idle(struct cpuidle_device *dev,
++static int __cpuidle acpi_idle_enter_s2idle(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int index)
+ {
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 04ea1569df789..974746e6e59d9 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -360,6 +360,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "80E3"),
+ },
+ },
++ {
++ .callback = init_nvs_save_s3,
++ .ident = "Lenovo G40-45",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "80E1"),
++ },
++ },
+ /*
+ * ThinkPad X1 Tablet(2016) cannot do suspend-to-idle using
+ * the Low Power S0 Idle firmware interface (see
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 6615f59ab7fd2..5d7f38016a243 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -347,6 +347,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro12,1"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Dell Inspiron N4010 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron N4010"),
++ },
++ },
+ {
+ .callback = video_detect_force_native,
+ /* Dell Vostro V131 */
+diff --git a/drivers/acpi/viot.c b/drivers/acpi/viot.c
+index d2256326c73ae..647f11cf165d7 100644
+--- a/drivers/acpi/viot.c
++++ b/drivers/acpi/viot.c
+@@ -248,6 +248,26 @@ err_free:
+ return ret;
+ }
+
++/**
++ * acpi_viot_early_init - Test the presence of VIOT and enable ACS
++ *
++ * If the VIOT does exist, ACS must be enabled. This cannot be
++ * done in acpi_viot_init() which is called after the bus scan
++ */
++void __init acpi_viot_early_init(void)
++{
++#ifdef CONFIG_PCI
++ acpi_status status;
++ struct acpi_table_header *hdr;
++
++ status = acpi_get_table(ACPI_SIG_VIOT, 0, &hdr);
++ if (ACPI_FAILURE(status))
++ return;
++ pci_request_acs();
++ acpi_put_table(hdr);
++#endif
++}
++
+ /**
+ * acpi_viot_init - Parse the VIOT table
+ *
+@@ -319,12 +339,6 @@ static int viot_pci_dev_iommu_init(struct pci_dev *pdev, u16 dev_id, void *data)
+ epid = ((domain_nr - ep->segment_start) << 16) +
+ dev_id - ep->bdf_start + ep->endpoint_id;
+
+- /*
+- * If we found a PCI range managed by the viommu, we're
+- * the one that has to request ACS.
+- */
+- pci_request_acs();
+-
+ return viot_dev_iommu_init(&pdev->dev, ep->viommu,
+ epid);
+ }
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 362c0deb65f11..54ac94fed0151 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -197,8 +197,32 @@ static inline void binder_stats_created(enum binder_stat_types type)
+ atomic_inc(&binder_stats.obj_created[type]);
+ }
+
+-struct binder_transaction_log binder_transaction_log;
+-struct binder_transaction_log binder_transaction_log_failed;
++struct binder_transaction_log_entry {
++ int debug_id;
++ int debug_id_done;
++ int call_type;
++ int from_proc;
++ int from_thread;
++ int target_handle;
++ int to_proc;
++ int to_thread;
++ int to_node;
++ int data_size;
++ int offsets_size;
++ int return_error_line;
++ uint32_t return_error;
++ uint32_t return_error_param;
++ char context_name[BINDERFS_MAX_NAME + 1];
++};
++
++struct binder_transaction_log {
++ atomic_t cur;
++ bool full;
++ struct binder_transaction_log_entry entry[32];
++};
++
++static struct binder_transaction_log binder_transaction_log;
++static struct binder_transaction_log binder_transaction_log_failed;
+
+ static struct binder_transaction_log_entry *binder_transaction_log_add(
+ struct binder_transaction_log *log)
+@@ -6197,8 +6221,7 @@ static void print_binder_proc_stats(struct seq_file *m,
+ print_binder_stats(m, " ", &proc->stats);
+ }
+
+-
+-int binder_state_show(struct seq_file *m, void *unused)
++static int state_show(struct seq_file *m, void *unused)
+ {
+ struct binder_proc *proc;
+ struct binder_node *node;
+@@ -6237,7 +6260,7 @@ int binder_state_show(struct seq_file *m, void *unused)
+ return 0;
+ }
+
+-int binder_stats_show(struct seq_file *m, void *unused)
++static int stats_show(struct seq_file *m, void *unused)
+ {
+ struct binder_proc *proc;
+
+@@ -6253,7 +6276,7 @@ int binder_stats_show(struct seq_file *m, void *unused)
+ return 0;
+ }
+
+-int binder_transactions_show(struct seq_file *m, void *unused)
++static int transactions_show(struct seq_file *m, void *unused)
+ {
+ struct binder_proc *proc;
+
+@@ -6309,7 +6332,7 @@ static void print_binder_transaction_log_entry(struct seq_file *m,
+ "\n" : " (incomplete)\n");
+ }
+
+-int binder_transaction_log_show(struct seq_file *m, void *unused)
++static int transaction_log_show(struct seq_file *m, void *unused)
+ {
+ struct binder_transaction_log *log = m->private;
+ unsigned int log_cur = atomic_read(&log->cur);
+@@ -6341,6 +6364,45 @@ const struct file_operations binder_fops = {
+ .release = binder_release,
+ };
+
++DEFINE_SHOW_ATTRIBUTE(state);
++DEFINE_SHOW_ATTRIBUTE(stats);
++DEFINE_SHOW_ATTRIBUTE(transactions);
++DEFINE_SHOW_ATTRIBUTE(transaction_log);
++
++const struct binder_debugfs_entry binder_debugfs_entries[] = {
++ {
++ .name = "state",
++ .mode = 0444,
++ .fops = &state_fops,
++ .data = NULL,
++ },
++ {
++ .name = "stats",
++ .mode = 0444,
++ .fops = &stats_fops,
++ .data = NULL,
++ },
++ {
++ .name = "transactions",
++ .mode = 0444,
++ .fops = &transactions_fops,
++ .data = NULL,
++ },
++ {
++ .name = "transaction_log",
++ .mode = 0444,
++ .fops = &transaction_log_fops,
++ .data = &binder_transaction_log,
++ },
++ {
++ .name = "failed_transaction_log",
++ .mode = 0444,
++ .fops = &transaction_log_fops,
++ .data = &binder_transaction_log_failed,
++ },
++ {} /* terminator */
++};
++
+ static int __init init_binder_device(const char *name)
+ {
+ int ret;
+@@ -6386,36 +6448,18 @@ static int __init binder_init(void)
+ atomic_set(&binder_transaction_log_failed.cur, ~0U);
+
+ binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
+- if (binder_debugfs_dir_entry_root)
++ if (binder_debugfs_dir_entry_root) {
++ const struct binder_debugfs_entry *db_entry;
++
++ binder_for_each_debugfs_entry(db_entry)
++ debugfs_create_file(db_entry->name,
++ db_entry->mode,
++ binder_debugfs_dir_entry_root,
++ db_entry->data,
++ db_entry->fops);
++
+ binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
+ binder_debugfs_dir_entry_root);
+-
+- if (binder_debugfs_dir_entry_root) {
+- debugfs_create_file("state",
+- 0444,
+- binder_debugfs_dir_entry_root,
+- NULL,
+- &binder_state_fops);
+- debugfs_create_file("stats",
+- 0444,
+- binder_debugfs_dir_entry_root,
+- NULL,
+- &binder_stats_fops);
+- debugfs_create_file("transactions",
+- 0444,
+- binder_debugfs_dir_entry_root,
+- NULL,
+- &binder_transactions_fops);
+- debugfs_create_file("transaction_log",
+- 0444,
+- binder_debugfs_dir_entry_root,
+- &binder_transaction_log,
+- &binder_transaction_log_fops);
+- debugfs_create_file("failed_transaction_log",
+- 0444,
+- binder_debugfs_dir_entry_root,
+- &binder_transaction_log_failed,
+- &binder_transaction_log_fops);
+ }
+
+ if (!IS_ENABLED(CONFIG_ANDROID_BINDERFS) &&
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 5649a0371a1f2..d044418294f94 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -213,7 +213,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+
+ if (mm) {
+ mmap_read_lock(mm);
+- vma = alloc->vma;
++ vma = vma_lookup(mm, alloc->vma_addr);
+ }
+
+ if (!vma && need_mm) {
+@@ -313,16 +313,15 @@ err_no_vma:
+ static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
+ struct vm_area_struct *vma)
+ {
+- if (vma)
++ unsigned long vm_start = 0;
++
++ if (vma) {
++ vm_start = vma->vm_start;
+ alloc->vma_vm_mm = vma->vm_mm;
+- /*
+- * If we see alloc->vma is not NULL, buffer data structures set up
+- * completely. Look at smp_rmb side binder_alloc_get_vma.
+- * We also want to guarantee new alloc->vma_vm_mm is always visible
+- * if alloc->vma is set.
+- */
+- smp_wmb();
+- alloc->vma = vma;
++ }
++
++ mmap_assert_write_locked(alloc->vma_vm_mm);
++ alloc->vma_addr = vm_start;
+ }
+
+ static inline struct vm_area_struct *binder_alloc_get_vma(
+@@ -330,11 +329,9 @@ static inline struct vm_area_struct *binder_alloc_get_vma(
+ {
+ struct vm_area_struct *vma = NULL;
+
+- if (alloc->vma) {
+- /* Look at description in binder_alloc_set_vma */
+- smp_rmb();
+- vma = alloc->vma;
+- }
++ if (alloc->vma_addr)
++ vma = vma_lookup(alloc->vma_vm_mm, alloc->vma_addr);
++
+ return vma;
+ }
+
+@@ -817,7 +814,8 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+
+ buffers = 0;
+ mutex_lock(&alloc->mutex);
+- BUG_ON(alloc->vma);
++ BUG_ON(alloc->vma_addr &&
++ vma_lookup(alloc->vma_vm_mm, alloc->vma_addr));
+
+ while ((n = rb_first(&alloc->allocated_buffers))) {
+ buffer = rb_entry(n, struct binder_buffer, rb_node);
+diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
+index 7dea57a84c79b..1e4fd37af5e03 100644
+--- a/drivers/android/binder_alloc.h
++++ b/drivers/android/binder_alloc.h
+@@ -100,7 +100,7 @@ struct binder_lru_page {
+ */
+ struct binder_alloc {
+ struct mutex mutex;
+- struct vm_area_struct *vma;
++ unsigned long vma_addr;
+ struct mm_struct *vma_vm_mm;
+ void __user *buffer;
+ struct list_head buffers;
+diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
+index c2b323bc3b3a5..43a881073a428 100644
+--- a/drivers/android/binder_alloc_selftest.c
++++ b/drivers/android/binder_alloc_selftest.c
+@@ -287,7 +287,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
+ if (!binder_selftest_run)
+ return;
+ mutex_lock(&binder_selftest_lock);
+- if (!binder_selftest_run || !alloc->vma)
++ if (!binder_selftest_run || !alloc->vma_addr)
+ goto done;
+ pr_info("STARTED\n");
+ binder_selftest_alloc_offset(alloc, end_offset, 0);
+diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
+index 8dc0bccf85139..abe19d88c6ecc 100644
+--- a/drivers/android/binder_internal.h
++++ b/drivers/android/binder_internal.h
+@@ -107,41 +107,19 @@ static inline int __init init_binderfs(void)
+ }
+ #endif
+
+-int binder_stats_show(struct seq_file *m, void *unused);
+-DEFINE_SHOW_ATTRIBUTE(binder_stats);
+-
+-int binder_state_show(struct seq_file *m, void *unused);
+-DEFINE_SHOW_ATTRIBUTE(binder_state);
+-
+-int binder_transactions_show(struct seq_file *m, void *unused);
+-DEFINE_SHOW_ATTRIBUTE(binder_transactions);
+-
+-int binder_transaction_log_show(struct seq_file *m, void *unused);
+-DEFINE_SHOW_ATTRIBUTE(binder_transaction_log);
+-
+-struct binder_transaction_log_entry {
+- int debug_id;
+- int debug_id_done;
+- int call_type;
+- int from_proc;
+- int from_thread;
+- int target_handle;
+- int to_proc;
+- int to_thread;
+- int to_node;
+- int data_size;
+- int offsets_size;
+- int return_error_line;
+- uint32_t return_error;
+- uint32_t return_error_param;
+- char context_name[BINDERFS_MAX_NAME + 1];
++struct binder_debugfs_entry {
++ const char *name;
++ umode_t mode;
++ const struct file_operations *fops;
++ void *data;
+ };
+
+-struct binder_transaction_log {
+- atomic_t cur;
+- bool full;
+- struct binder_transaction_log_entry entry[32];
+-};
++extern const struct binder_debugfs_entry binder_debugfs_entries[];
++
++#define binder_for_each_debugfs_entry(entry) \
++ for ((entry) = binder_debugfs_entries; \
++ (entry)->name; \
++ (entry)++)
+
+ enum binder_stat_types {
+ BINDER_STAT_PROC,
+@@ -580,6 +558,4 @@ struct binder_object {
+ };
+ };
+
+-extern struct binder_transaction_log binder_transaction_log;
+-extern struct binder_transaction_log binder_transaction_log_failed;
+ #endif /* _LINUX_BINDER_INTERNAL_H */
+diff --git a/drivers/android/binderfs.c b/drivers/android/binderfs.c
+index 6c5e94f6cb3a4..588d753a7a199 100644
+--- a/drivers/android/binderfs.c
++++ b/drivers/android/binderfs.c
+@@ -629,6 +629,7 @@ static int init_binder_features(struct super_block *sb)
+ static int init_binder_logs(struct super_block *sb)
+ {
+ struct dentry *binder_logs_root_dir, *dentry, *proc_log_dir;
++ const struct binder_debugfs_entry *db_entry;
+ struct binderfs_info *info;
+ int ret = 0;
+
+@@ -639,43 +640,15 @@ static int init_binder_logs(struct super_block *sb)
+ goto out;
+ }
+
+- dentry = binderfs_create_file(binder_logs_root_dir, "stats",
+- &binder_stats_fops, NULL);
+- if (IS_ERR(dentry)) {
+- ret = PTR_ERR(dentry);
+- goto out;
+- }
+-
+- dentry = binderfs_create_file(binder_logs_root_dir, "state",
+- &binder_state_fops, NULL);
+- if (IS_ERR(dentry)) {
+- ret = PTR_ERR(dentry);
+- goto out;
+- }
+-
+- dentry = binderfs_create_file(binder_logs_root_dir, "transactions",
+- &binder_transactions_fops, NULL);
+- if (IS_ERR(dentry)) {
+- ret = PTR_ERR(dentry);
+- goto out;
+- }
+-
+- dentry = binderfs_create_file(binder_logs_root_dir,
+- "transaction_log",
+- &binder_transaction_log_fops,
+- &binder_transaction_log);
+- if (IS_ERR(dentry)) {
+- ret = PTR_ERR(dentry);
+- goto out;
+- }
+-
+- dentry = binderfs_create_file(binder_logs_root_dir,
+- "failed_transaction_log",
+- &binder_transaction_log_fops,
+- &binder_transaction_log_failed);
+- if (IS_ERR(dentry)) {
+- ret = PTR_ERR(dentry);
+- goto out;
++ binder_for_each_debugfs_entry(db_entry) {
++ dentry = binderfs_create_file(binder_logs_root_dir,
++ db_entry->name,
++ db_entry->fops,
++ db_entry->data);
++ if (IS_ERR(dentry)) {
++ ret = PTR_ERR(dentry);
++ goto out;
++ }
+ }
+
+ proc_log_dir = binderfs_create_dir(binder_logs_root_dir, "proc");
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 11b0fb6414d37..b766968a873ce 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -1115,6 +1115,7 @@ static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
+ static int __driver_attach(struct device *dev, void *data)
+ {
+ struct device_driver *drv = data;
++ bool async = false;
+ int ret;
+
+ /*
+@@ -1153,9 +1154,11 @@ static int __driver_attach(struct device *dev, void *data)
+ if (!dev->driver && !dev->p->async_driver) {
+ get_device(dev);
+ dev->p->async_driver = drv;
+- async_schedule_dev(__driver_attach_async_helper, dev);
++ async = true;
+ }
+ device_unlock(dev);
++ if (async)
++ async_schedule_dev(__driver_attach_async_helper, dev);
+ return 0;
+ }
+
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 0ac6376ef7a10..eb0f43784c2b3 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -45,7 +45,7 @@ static inline ssize_t cpumap_read(struct file *file, struct kobject *kobj,
+ return n;
+ }
+
+-static BIN_ATTR_RO(cpumap, 0);
++static BIN_ATTR_RO(cpumap, CPUMAP_FILE_MAX_BYTES);
+
+ static inline ssize_t cpulist_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+@@ -66,7 +66,7 @@ static inline ssize_t cpulist_read(struct file *file, struct kobject *kobj,
+ return n;
+ }
+
+-static BIN_ATTR_RO(cpulist, 0);
++static BIN_ATTR_RO(cpulist, CPULIST_FILE_MAX_BYTES);
+
+ /**
+ * struct node_access_nodes - Access class device to hold user visible
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 739e52cd4aba5..55a10e6d4e2a7 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -222,6 +222,9 @@ static void genpd_debug_remove(struct generic_pm_domain *genpd)
+ {
+ struct dentry *d;
+
++ if (!genpd_debugfs_dir)
++ return;
++
+ d = debugfs_lookup(genpd->name, genpd_debugfs_dir);
+ debugfs_remove(d);
+ }
+diff --git a/drivers/base/topology.c b/drivers/base/topology.c
+index ac6ad9ab67f94..89f98be5c5b99 100644
+--- a/drivers/base/topology.c
++++ b/drivers/base/topology.c
+@@ -62,47 +62,47 @@ define_id_show_func(ppin, "0x%llx");
+ static DEVICE_ATTR_ADMIN_RO(ppin);
+
+ define_siblings_read_func(thread_siblings, sibling_cpumask);
+-static BIN_ATTR_RO(thread_siblings, 0);
+-static BIN_ATTR_RO(thread_siblings_list, 0);
++static BIN_ATTR_RO(thread_siblings, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(thread_siblings_list, CPULIST_FILE_MAX_BYTES);
+
+ define_siblings_read_func(core_cpus, sibling_cpumask);
+-static BIN_ATTR_RO(core_cpus, 0);
+-static BIN_ATTR_RO(core_cpus_list, 0);
++static BIN_ATTR_RO(core_cpus, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(core_cpus_list, CPULIST_FILE_MAX_BYTES);
+
+ define_siblings_read_func(core_siblings, core_cpumask);
+-static BIN_ATTR_RO(core_siblings, 0);
+-static BIN_ATTR_RO(core_siblings_list, 0);
++static BIN_ATTR_RO(core_siblings, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(core_siblings_list, CPULIST_FILE_MAX_BYTES);
+
+ #ifdef TOPOLOGY_CLUSTER_SYSFS
+ define_siblings_read_func(cluster_cpus, cluster_cpumask);
+-static BIN_ATTR_RO(cluster_cpus, 0);
+-static BIN_ATTR_RO(cluster_cpus_list, 0);
++static BIN_ATTR_RO(cluster_cpus, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(cluster_cpus_list, CPULIST_FILE_MAX_BYTES);
+ #endif
+
+ #ifdef TOPOLOGY_DIE_SYSFS
+ define_siblings_read_func(die_cpus, die_cpumask);
+-static BIN_ATTR_RO(die_cpus, 0);
+-static BIN_ATTR_RO(die_cpus_list, 0);
++static BIN_ATTR_RO(die_cpus, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(die_cpus_list, CPULIST_FILE_MAX_BYTES);
+ #endif
+
+ define_siblings_read_func(package_cpus, core_cpumask);
+-static BIN_ATTR_RO(package_cpus, 0);
+-static BIN_ATTR_RO(package_cpus_list, 0);
++static BIN_ATTR_RO(package_cpus, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(package_cpus_list, CPULIST_FILE_MAX_BYTES);
+
+ #ifdef TOPOLOGY_BOOK_SYSFS
+ define_id_show_func(book_id, "%d");
+ static DEVICE_ATTR_RO(book_id);
+ define_siblings_read_func(book_siblings, book_cpumask);
+-static BIN_ATTR_RO(book_siblings, 0);
+-static BIN_ATTR_RO(book_siblings_list, 0);
++static BIN_ATTR_RO(book_siblings, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(book_siblings_list, CPULIST_FILE_MAX_BYTES);
+ #endif
+
+ #ifdef TOPOLOGY_DRAWER_SYSFS
+ define_id_show_func(drawer_id, "%d");
+ static DEVICE_ATTR_RO(drawer_id);
+ define_siblings_read_func(drawer_siblings, drawer_cpumask);
+-static BIN_ATTR_RO(drawer_siblings, 0);
+-static BIN_ATTR_RO(drawer_siblings_list, 0);
++static BIN_ATTR_RO(drawer_siblings, CPUMAP_FILE_MAX_BYTES);
++static BIN_ATTR_RO(drawer_siblings_list, CPULIST_FILE_MAX_BYTES);
+ #endif
+
+ static struct bin_attribute *bin_attrs[] = {
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index 27386a572ba49..6699e4b2f7f43 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -146,11 +146,8 @@ static bool mtip_check_surprise_removal(struct driver_data *dd)
+ pci_read_config_word(dd->pdev, 0x00, &vendor_id);
+ if (vendor_id == 0xFFFF) {
+ dd->sr = true;
+- if (dd->queue)
+- blk_queue_flag_set(QUEUE_FLAG_DEAD, dd->queue);
+- else
+- dev_warn(&dd->pdev->dev,
+- "%s: dd->queue is NULL\n", __func__);
++ if (dd->disk)
++ blk_mark_disk_dead(dd->disk);
+ return true; /* device removed */
+ }
+
+@@ -3297,26 +3294,12 @@ static int mtip_block_getgeo(struct block_device *dev,
+ return 0;
+ }
+
+-static int mtip_block_open(struct block_device *dev, fmode_t mode)
++static void mtip_block_free_disk(struct gendisk *disk)
+ {
+- struct driver_data *dd;
+-
+- if (dev && dev->bd_disk) {
+- dd = (struct driver_data *) dev->bd_disk->private_data;
+-
+- if (dd) {
+- if (test_bit(MTIP_DDF_REMOVAL_BIT,
+- &dd->dd_flag)) {
+- return -ENODEV;
+- }
+- return 0;
+- }
+- }
+- return -ENODEV;
+-}
++ struct driver_data *dd = disk->private_data;
+
+-static void mtip_block_release(struct gendisk *disk, fmode_t mode)
+-{
++ ida_free(&rssd_index_ida, dd->index);
++ kfree(dd);
+ }
+
+ /*
+@@ -3326,13 +3309,12 @@ static void mtip_block_release(struct gendisk *disk, fmode_t mode)
+ * layer.
+ */
+ static const struct block_device_operations mtip_block_ops = {
+- .open = mtip_block_open,
+- .release = mtip_block_release,
+ .ioctl = mtip_block_ioctl,
+ #ifdef CONFIG_COMPAT
+ .compat_ioctl = mtip_block_compat_ioctl,
+ #endif
+ .getgeo = mtip_block_getgeo,
++ .free_disk = mtip_block_free_disk,
+ .owner = THIS_MODULE
+ };
+
+@@ -3673,72 +3655,6 @@ protocol_init_error:
+ return rv;
+ }
+
+-static bool mtip_no_dev_cleanup(struct request *rq, void *data, bool reserv)
+-{
+- struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+-
+- cmd->status = BLK_STS_IOERR;
+- blk_mq_complete_request(rq);
+- return true;
+-}
+-
+-/*
+- * Block layer deinitialization function.
+- *
+- * Called by the PCI layer as each P320 device is removed.
+- *
+- * @dd Pointer to the driver data structure.
+- *
+- * return value
+- * 0
+- */
+-static int mtip_block_remove(struct driver_data *dd)
+-{
+- mtip_hw_debugfs_exit(dd);
+-
+- if (dd->mtip_svc_handler) {
+- set_bit(MTIP_PF_SVC_THD_STOP_BIT, &dd->port->flags);
+- wake_up_interruptible(&dd->port->svc_wait);
+- kthread_stop(dd->mtip_svc_handler);
+- }
+-
+- if (!dd->sr) {
+- /*
+- * Explicitly wait here for IOs to quiesce,
+- * as mtip_standby_drive usually won't wait for IOs.
+- */
+- if (!mtip_quiesce_io(dd->port, MTIP_QUIESCE_IO_TIMEOUT_MS))
+- mtip_standby_drive(dd);
+- }
+- else
+- dev_info(&dd->pdev->dev, "device %s surprise removal\n",
+- dd->disk->disk_name);
+-
+- blk_freeze_queue_start(dd->queue);
+- blk_mq_quiesce_queue(dd->queue);
+- blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
+- blk_mq_unquiesce_queue(dd->queue);
+-
+- if (dd->disk) {
+- if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
+- del_gendisk(dd->disk);
+- if (dd->disk->queue) {
+- blk_cleanup_queue(dd->queue);
+- blk_mq_free_tag_set(&dd->tags);
+- dd->queue = NULL;
+- }
+- put_disk(dd->disk);
+- }
+- dd->disk = NULL;
+-
+- ida_free(&rssd_index_ida, dd->index);
+-
+- /* De-initialize the protocol layer. */
+- mtip_hw_exit(dd);
+-
+- return 0;
+-}
+-
+ /*
+ * Function called by the PCI layer when just before the
+ * machine shuts down.
+@@ -3755,23 +3671,15 @@ static int mtip_block_shutdown(struct driver_data *dd)
+ {
+ mtip_hw_shutdown(dd);
+
+- /* Delete our gendisk structure, and cleanup the blk queue. */
+- if (dd->disk) {
+- dev_info(&dd->pdev->dev,
+- "Shutting down %s ...\n", dd->disk->disk_name);
++ dev_info(&dd->pdev->dev,
++ "Shutting down %s ...\n", dd->disk->disk_name);
+
+- if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
+- del_gendisk(dd->disk);
+- if (dd->disk->queue) {
+- blk_cleanup_queue(dd->queue);
+- blk_mq_free_tag_set(&dd->tags);
+- }
+- put_disk(dd->disk);
+- dd->disk = NULL;
+- dd->queue = NULL;
+- }
++ if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
++ del_gendisk(dd->disk);
+
+- ida_free(&rssd_index_ida, dd->index);
++ blk_cleanup_queue(dd->queue);
++ blk_mq_free_tag_set(&dd->tags);
++ put_disk(dd->disk);
+ return 0;
+ }
+
+@@ -4087,8 +3995,6 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ struct driver_data *dd = pci_get_drvdata(pdev);
+ unsigned long flags, to;
+
+- set_bit(MTIP_DDF_REMOVAL_BIT, &dd->dd_flag);
+-
+ spin_lock_irqsave(&dev_lock, flags);
+ list_del_init(&dd->online_list);
+ list_add(&dd->remove_list, &removing_list);
+@@ -4109,11 +4015,36 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ "Completion workers still active!\n");
+ }
+
+- blk_mark_disk_dead(dd->disk);
+ set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
+
+- /* Clean up the block layer. */
+- mtip_block_remove(dd);
++ if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
++ del_gendisk(dd->disk);
++
++ mtip_hw_debugfs_exit(dd);
++
++ if (dd->mtip_svc_handler) {
++ set_bit(MTIP_PF_SVC_THD_STOP_BIT, &dd->port->flags);
++ wake_up_interruptible(&dd->port->svc_wait);
++ kthread_stop(dd->mtip_svc_handler);
++ }
++
++ if (!dd->sr) {
++ /*
++ * Explicitly wait here for IOs to quiesce,
++ * as mtip_standby_drive usually won't wait for IOs.
++ */
++ if (!mtip_quiesce_io(dd->port, MTIP_QUIESCE_IO_TIMEOUT_MS))
++ mtip_standby_drive(dd);
++ }
++ else
++ dev_info(&dd->pdev->dev, "device %s surprise removal\n",
++ dd->disk->disk_name);
++
++ blk_cleanup_queue(dd->queue);
++ blk_mq_free_tag_set(&dd->tags);
++
++ /* De-initialize the protocol layer. */
++ mtip_hw_exit(dd);
+
+ if (dd->isr_workq) {
+ destroy_workqueue(dd->isr_workq);
+@@ -4128,10 +4059,10 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ list_del_init(&dd->remove_list);
+ spin_unlock_irqrestore(&dev_lock, flags);
+
+- kfree(dd);
+-
+ pcim_iounmap_regions(pdev, 1 << MTIP_ABAR);
+ pci_set_drvdata(pdev, NULL);
++
++ put_disk(dd->disk);
+ }
+
+ /*
+diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h
+index 6816beb45352b..9c1e45b745dc5 100644
+--- a/drivers/block/mtip32xx/mtip32xx.h
++++ b/drivers/block/mtip32xx/mtip32xx.h
+@@ -149,7 +149,6 @@ enum {
+ MTIP_DDF_RESUME_BIT = 6,
+ MTIP_DDF_INIT_DONE_BIT = 7,
+ MTIP_DDF_REBUILD_FAILED_BIT = 8,
+- MTIP_DDF_REMOVAL_BIT = 9,
+
+ MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) |
+ (1 << MTIP_DDF_SEC_LOCK_BIT) |
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 07f3c139a3d77..20e9c53eec53f 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -11,6 +11,8 @@
+ * (part of code stolen from loop.c)
+ */
+
++#define pr_fmt(fmt) "nbd: " fmt
++
+ #include <linux/major.h>
+
+ #include <linux/blkdev.h>
+@@ -1951,7 +1953,7 @@ again:
+ test_bit(NBD_DISCONNECT_REQUESTED, &nbd->flags)) ||
+ !refcount_inc_not_zero(&nbd->refs)) {
+ mutex_unlock(&nbd_index_mutex);
+- pr_err("nbd: device at index %d is going down\n",
++ pr_err("device at index %d is going down\n",
+ index);
+ return -EINVAL;
+ }
+@@ -1962,7 +1964,7 @@ again:
+ if (!nbd) {
+ nbd = nbd_dev_add(index, 2);
+ if (IS_ERR(nbd)) {
+- pr_err("nbd: failed to add new device\n");
++ pr_err("failed to add new device\n");
+ return PTR_ERR(nbd);
+ }
+ }
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 6b67088f4ea71..c0a0474be574d 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -2043,8 +2043,13 @@ static int null_add_dev(struct nullb_device *dev)
+ blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, nullb->q);
+
+ mutex_lock(&lock);
+- nullb->index = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
+- dev->index = nullb->index;
++ rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
++ if (rv < 0) {
++ mutex_unlock(&lock);
++ goto out_cleanup_zone;
++ }
++ nullb->index = rv;
++ dev->index = rv;
+ mutex_unlock(&lock);
+
+ blk_queue_logical_block_size(nullb->q, dev->blocksize);
+@@ -2070,7 +2075,7 @@ static int null_add_dev(struct nullb_device *dev)
+
+ rv = null_gendisk_register(nullb);
+ if (rv)
+- goto out_cleanup_zone;
++ goto out_ida_free;
+
+ mutex_lock(&lock);
+ list_add_tail(&nullb->list, &nullb_list);
+@@ -2079,6 +2084,9 @@ static int null_add_dev(struct nullb_device *dev)
+ pr_info("disk %s created\n", nullb->disk_name);
+
+ return 0;
++
++out_ida_free:
++ ida_free(&nullb_indexes, nullb->index);
+ out_cleanup_zone:
+ null_free_zoned_dev(dev);
+ out_cleanup_disk:
+diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
+index beaef43a67b9d..cf9e29a08db21 100644
+--- a/drivers/block/rnbd/rnbd-srv.c
++++ b/drivers/block/rnbd/rnbd-srv.c
+@@ -323,10 +323,11 @@ void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev,
+ {
+ struct rnbd_srv_session *sess = sess_dev->sess;
+
+- sess_dev->keep_id = true;
+ /* It is already started to close by client's close message. */
+ if (!mutex_trylock(&sess->lock))
+ return;
++
++ sess_dev->keep_id = true;
+ /* first remove sysfs itself to avoid deadlock */
+ sysfs_remove_file_self(&sess_dev->kobj, &attr->attr);
+ rnbd_srv_destroy_dev_session_sysfs(sess_dev);
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index 97de13b14175e..ee7ad2fb432d1 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -157,6 +157,11 @@ static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
+ return 0;
+ }
+
++/* Enable the persistent grants feature. */
++static bool feature_persistent = true;
++module_param(feature_persistent, bool, 0644);
++MODULE_PARM_DESC(feature_persistent, "Enables the persistent grants feature");
++
+ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
+ {
+ struct xen_blkif *blkif;
+@@ -472,12 +477,6 @@ static void xen_vbd_free(struct xen_vbd *vbd)
+ vbd->bdev = NULL;
+ }
+
+-/* Enable the persistent grants feature. */
+-static bool feature_persistent = true;
+-module_param(feature_persistent, bool, 0644);
+-MODULE_PARM_DESC(feature_persistent,
+- "Enables the persistent grants feature");
+-
+ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
+ unsigned major, unsigned minor, int readonly,
+ int cdrom)
+@@ -520,8 +519,6 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
+ if (bdev_max_secure_erase_sectors(bdev))
+ vbd->discard_secure = true;
+
+- vbd->feature_gnt_persistent = feature_persistent;
+-
+ pr_debug("Successful creation of handle=%04x (dom=%u)\n",
+ handle, blkif->domid);
+ return 0;
+@@ -1087,10 +1084,9 @@ static int connect_ring(struct backend_info *be)
+ xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
+ return -ENOSYS;
+ }
+- if (blkif->vbd.feature_gnt_persistent)
+- blkif->vbd.feature_gnt_persistent =
+- xenbus_read_unsigned(dev->otherend,
+- "feature-persistent", 0);
++
++ blkif->vbd.feature_gnt_persistent = feature_persistent &&
++ xenbus_read_unsigned(dev->otherend, "feature-persistent", 0);
+
+ blkif->vbd.overflow_max_grants = 0;
+
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 3646c0cae672a..4e763701b3720 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1988,8 +1988,6 @@ static int blkfront_probe(struct xenbus_device *dev,
+ info->vdevice = vdevice;
+ info->connected = BLKIF_STATE_DISCONNECTED;
+
+- info->feature_persistent = feature_persistent;
+-
+ /* Front end dir is a number, which is used as the id. */
+ info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
+ dev_set_drvdata(&dev->dev, info);
+@@ -2283,7 +2281,7 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
+ blkfront_setup_discard(info);
+
+- if (info->feature_persistent)
++ if (feature_persistent)
+ info->feature_persistent =
+ !!xenbus_read_unsigned(info->xbdev->otherend,
+ "feature-persistent", 0);
+diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c
+index 7249b91d9b91a..78afb9a348e70 100644
+--- a/drivers/bluetooth/hci_intel.c
++++ b/drivers/bluetooth/hci_intel.c
+@@ -1217,7 +1217,11 @@ static struct platform_driver intel_driver = {
+
+ int __init intel_init(void)
+ {
+- platform_driver_register(&intel_driver);
++ int err;
++
++ err = platform_driver_register(&intel_driver);
++ if (err)
++ return err;
+
+ return hci_uart_register_proto(&intel_proto);
+ }
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 4cda890ce6470..c0e5f42ec6b7d 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -231,6 +231,15 @@ static int hci_uart_setup(struct hci_dev *hdev)
+ return 0;
+ }
+
++/* Check if the device is wakeable */
++static bool hci_uart_wakeup(struct hci_dev *hdev)
++{
++ /* HCI UART devices are assumed to be wakeable by default.
++ * Implement wakeup callback to override this behavior.
++ */
++ return true;
++}
++
+ /** hci_uart_write_wakeup - transmit buffer wakeup
+ * @serdev: serial device
+ *
+@@ -342,6 +351,8 @@ int hci_uart_register_device(struct hci_uart *hu,
+ hdev->flush = hci_uart_flush;
+ hdev->send = hci_uart_send_frame;
+ hdev->setup = hci_uart_setup;
++ if (!hdev->wakeup)
++ hdev->wakeup = hci_uart_wakeup;
+ SET_HCIDEV_DEV(hdev, &hu->serdev->dev);
+
+ if (test_bit(HCI_UART_NO_SUSPEND_NOTIFIER, &hu->flags))
+diff --git a/drivers/bus/hisi_lpc.c b/drivers/bus/hisi_lpc.c
+index 378f5d62a9912..e7eaa8784fee0 100644
+--- a/drivers/bus/hisi_lpc.c
++++ b/drivers/bus/hisi_lpc.c
+@@ -503,13 +503,13 @@ static int hisi_lpc_acpi_probe(struct device *hostdev)
+ {
+ struct acpi_device *adev = ACPI_COMPANION(hostdev);
+ struct acpi_device *child;
++ struct platform_device *pdev;
+ int ret;
+
+ /* Only consider the children of the host */
+ list_for_each_entry(child, &adev->children, node) {
+ const char *hid = acpi_device_hid(child);
+ const struct hisi_lpc_acpi_cell *cell;
+- struct platform_device *pdev;
+ const struct resource *res;
+ bool found = false;
+ int num_res;
+@@ -571,22 +571,24 @@ static int hisi_lpc_acpi_probe(struct device *hostdev)
+
+ ret = platform_device_add_resources(pdev, res, num_res);
+ if (ret)
+- goto fail;
++ goto fail_put_device;
+
+ ret = platform_device_add_data(pdev, cell->pdata,
+ cell->pdata_size);
+ if (ret)
+- goto fail;
++ goto fail_put_device;
+
+ ret = platform_device_add(pdev);
+ if (ret)
+- goto fail;
++ goto fail_put_device;
+
+ acpi_device_set_enumerated(child);
+ }
+
+ return 0;
+
++fail_put_device:
++ platform_device_put(pdev);
+ fail:
+ hisi_lpc_acpi_remove(hostdev);
+ return ret;
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index c1eb5d2238395..65d03867e114c 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -752,6 +752,12 @@ int tpm2_auto_startup(struct tpm_chip *chip)
+ }
+
+ rc = tpm2_get_cc_attrs_tbl(chip);
++ if (rc == TPM2_RC_FAILURE || (rc < 0 && rc != -ENOMEM)) {
++ dev_info(&chip->dev,
++ "TPM in field failure mode, requires firmware upgrade\n");
++ chip->flags |= TPM_CHIP_FLAG_FIRMWARE_UPGRADE;
++ rc = 0;
++ }
+
+ out:
+ /*
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 71c102d950ab0..025b73229cddd 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -64,10 +64,10 @@ struct clk_fracn_gppll {
+ * Fout = Fvco / (rdiv * odiv)
+ */
+ static const struct imx_fracn_gppll_rate_table fracn_tbl[] = {
+- PLL_FRACN_GP(650000000U, 81, 0, 0, 0, 3),
+- PLL_FRACN_GP(594000000U, 198, 0, 0, 0, 8),
+- PLL_FRACN_GP(560000000U, 70, 0, 0, 0, 3),
+- PLL_FRACN_GP(400000000U, 50, 0, 0, 0, 3),
++ PLL_FRACN_GP(650000000U, 81, 0, 1, 0, 3),
++ PLL_FRACN_GP(594000000U, 198, 0, 1, 0, 8),
++ PLL_FRACN_GP(560000000U, 70, 0, 1, 0, 3),
++ PLL_FRACN_GP(400000000U, 50, 0, 1, 0, 3),
+ PLL_FRACN_GP(393216000U, 81, 92, 100, 0, 5)
+ };
+
+@@ -131,18 +131,7 @@ static unsigned long clk_fracn_gppll_recalc_rate(struct clk_hw *hw, unsigned lon
+ mfi = FIELD_GET(PLL_MFI_MASK, pll_div);
+
+ rdiv = FIELD_GET(PLL_RDIV_MASK, pll_div);
+- rdiv = rdiv + 1;
+ odiv = FIELD_GET(PLL_ODIV_MASK, pll_div);
+- switch (odiv) {
+- case 0:
+- odiv = 2;
+- break;
+- case 1:
+- odiv = 3;
+- break;
+- default:
+- break;
+- }
+
+ /*
+ * Sometimes, the recalculated rate has deviation due to
+@@ -160,6 +149,20 @@ static unsigned long clk_fracn_gppll_recalc_rate(struct clk_hw *hw, unsigned lon
+ if (rate)
+ return (unsigned long)rate;
+
++ if (!rdiv)
++ rdiv = rdiv + 1;
++
++ switch (odiv) {
++ case 0:
++ odiv = 2;
++ break;
++ case 1:
++ odiv = 3;
++ break;
++ default:
++ break;
++ }
++
+ /* Fvco = Fref * (MFI + MFN / MFD) */
+ fvco = fvco * mfi * mfd + fvco * mfn;
+ do_div(fvco, mfd * rdiv * odiv);
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index edcc87661d1f6..26885bd3971c4 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -150,7 +150,7 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_A55_GATE, "a55", "a55_root", 0x8000, },
+ /* M33 critical clk for system run */
+ { IMX93_CLK_CM33_GATE, "cm33", "m33_root", 0x8040, CLK_IS_CRITICAL },
+- { IMX93_CLK_ADC1_GATE, "adc1", "osc_24m", 0x82c0, },
++ { IMX93_CLK_ADC1_GATE, "adc1", "adc_root", 0x82c0, },
+ { IMX93_CLK_WDOG1_GATE, "wdog1", "osc_24m", 0x8300, },
+ { IMX93_CLK_WDOG2_GATE, "wdog2", "osc_24m", 0x8340, },
+ { IMX93_CLK_WDOG3_GATE, "wdog3", "osc_24m", 0x8380, },
+@@ -219,7 +219,7 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_LCDIF_GATE, "lcdif", "media_apb_root", 0x9640, },
+ { IMX93_CLK_PXP_GATE, "pxp", "media_apb_root", 0x9680, },
+ { IMX93_CLK_ISI_GATE, "isi", "media_apb_root", 0x96c0, },
+- { IMX93_CLK_NIC_MEDIA_GATE, "nic_media", "media_apb_root", 0x9700, },
++ { IMX93_CLK_NIC_MEDIA_GATE, "nic_media", "media_axi_root", 0x9700, },
+ { IMX93_CLK_USB_CONTROLLER_GATE, "usb_controller", "hsio_root", 0x9a00, },
+ { IMX93_CLK_USB_TEST_60M_GATE, "usb_test_60m", "hsio_usb_test_60m_root", 0x9a40, },
+ { IMX93_CLK_HSIO_TROUT_24M_GATE, "hsio_trout_24m", "osc_24m", 0x9a80, },
+diff --git a/drivers/clk/mediatek/reset.c b/drivers/clk/mediatek/reset.c
+index bcec4b89f449a..834d26e9bdfde 100644
+--- a/drivers/clk/mediatek/reset.c
++++ b/drivers/clk/mediatek/reset.c
+@@ -25,7 +25,7 @@ static int mtk_reset_assert_set_clr(struct reset_controller_dev *rcdev,
+ struct mtk_reset *data = container_of(rcdev, struct mtk_reset, rcdev);
+ unsigned int reg = data->regofs + ((id / 32) << 4);
+
+- return regmap_write(data->regmap, reg, 1);
++ return regmap_write(data->regmap, reg, BIT(id % 32));
+ }
+
+ static int mtk_reset_deassert_set_clr(struct reset_controller_dev *rcdev,
+@@ -34,7 +34,7 @@ static int mtk_reset_deassert_set_clr(struct reset_controller_dev *rcdev,
+ struct mtk_reset *data = container_of(rcdev, struct mtk_reset, rcdev);
+ unsigned int reg = data->regofs + ((id / 32) << 4) + 0x4;
+
+- return regmap_write(data->regmap, reg, 1);
++ return regmap_write(data->regmap, reg, BIT(id % 32));
+ }
+
+ static int mtk_reset_assert(struct reset_controller_dev *rcdev,
+diff --git a/drivers/clk/qcom/camcc-sdm845.c b/drivers/clk/qcom/camcc-sdm845.c
+index be3f953269657..27d44188a7abb 100644
+--- a/drivers/clk/qcom/camcc-sdm845.c
++++ b/drivers/clk/qcom/camcc-sdm845.c
+@@ -1534,6 +1534,8 @@ static struct clk_branch cam_cc_sys_tmr_clk = {
+ },
+ };
+
++static struct gdsc titan_top_gdsc;
++
+ static struct gdsc bps_gdsc = {
+ .gdscr = 0x6004,
+ .pd = {
+@@ -1567,6 +1569,7 @@ static struct gdsc ife_0_gdsc = {
+ .name = "ife_0_gdsc",
+ },
+ .flags = POLL_CFG_GDSCR,
++ .parent = &titan_top_gdsc.pd,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+@@ -1576,6 +1579,7 @@ static struct gdsc ife_1_gdsc = {
+ .name = "ife_1_gdsc",
+ },
+ .flags = POLL_CFG_GDSCR,
++ .parent = &titan_top_gdsc.pd,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+diff --git a/drivers/clk/qcom/camcc-sm8250.c b/drivers/clk/qcom/camcc-sm8250.c
+index 439eaafdcc862..9b32c56a5bc5a 100644
+--- a/drivers/clk/qcom/camcc-sm8250.c
++++ b/drivers/clk/qcom/camcc-sm8250.c
+@@ -2205,6 +2205,8 @@ static struct clk_branch cam_cc_sleep_clk = {
+ },
+ };
+
++static struct gdsc titan_top_gdsc;
++
+ static struct gdsc bps_gdsc = {
+ .gdscr = 0x7004,
+ .pd = {
+@@ -2238,6 +2240,7 @@ static struct gdsc ife_0_gdsc = {
+ .name = "ife_0_gdsc",
+ },
+ .flags = POLL_CFG_GDSCR,
++ .parent = &titan_top_gdsc.pd,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+@@ -2247,6 +2250,7 @@ static struct gdsc ife_1_gdsc = {
+ .name = "ife_1_gdsc",
+ },
+ .flags = POLL_CFG_GDSCR,
++ .parent = &titan_top_gdsc.pd,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+@@ -2440,17 +2444,7 @@ static struct platform_driver cam_cc_sm8250_driver = {
+ },
+ };
+
+-static int __init cam_cc_sm8250_init(void)
+-{
+- return platform_driver_register(&cam_cc_sm8250_driver);
+-}
+-subsys_initcall(cam_cc_sm8250_init);
+-
+-static void __exit cam_cc_sm8250_exit(void)
+-{
+- platform_driver_unregister(&cam_cc_sm8250_driver);
+-}
+-module_exit(cam_cc_sm8250_exit);
++module_platform_driver(cam_cc_sm8250_driver);
+
+ MODULE_DESCRIPTION("QTI CAMCC SM8250 Driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/clk/qcom/clk-krait.c b/drivers/clk/qcom/clk-krait.c
+index 59f1af415b580..90046428693c2 100644
+--- a/drivers/clk/qcom/clk-krait.c
++++ b/drivers/clk/qcom/clk-krait.c
+@@ -32,11 +32,16 @@ static void __krait_mux_set_sel(struct krait_mux_clk *mux, int sel)
+ regval |= (sel & mux->mask) << (mux->shift + LPL_SHIFT);
+ }
+ krait_set_l2_indirect_reg(mux->offset, regval);
+- spin_unlock_irqrestore(&krait_clock_reg_lock, flags);
+
+ /* Wait for switch to complete. */
+ mb();
+ udelay(1);
++
++ /*
++ * Unlock now to make sure the mux register is not
++ * modified while switching to the new parent.
++ */
++ spin_unlock_irqrestore(&krait_clock_reg_lock, flags);
+ }
+
+ static int krait_mux_set_parent(struct clk_hw *hw, u8 index)
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 8e5dce09d162e..28019edd2a508 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -13,6 +13,7 @@
+ #include <linux/rational.h>
+ #include <linux/regmap.h>
+ #include <linux/math64.h>
++#include <linux/minmax.h>
+ #include <linux/slab.h>
+
+ #include <asm/div64.h>
+@@ -437,7 +438,7 @@ static int clk_rcg2_get_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
+ static int clk_rcg2_set_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
+ {
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
+- u32 notn_m, n, m, d, not2d, mask, duty_per;
++ u32 notn_m, n, m, d, not2d, mask, duty_per, cfg;
+ int ret;
+
+ /* Duty-cycle cannot be modified for non-MND RCGs */
+@@ -448,6 +449,11 @@ static int clk_rcg2_set_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
+
+	regmap_read(rcg->clkr.regmap, RCG_N_OFFSET(rcg), &notn_m);
+ regmap_read(rcg->clkr.regmap, RCG_M_OFFSET(rcg), &m);
++ regmap_read(rcg->clkr.regmap, RCG_CFG_OFFSET(rcg), &cfg);
++
++ /* Duty-cycle cannot be modified if MND divider is in bypass mode. */
++ if (!(cfg & CFG_MODE_MASK))
++ return -EINVAL;
+
+ n = (~(notn_m) + m) & mask;
+
+@@ -456,9 +462,11 @@ static int clk_rcg2_set_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
+ /* Calculate 2d value */
+ d = DIV_ROUND_CLOSEST(n * duty_per * 2, 100);
+
+- /* Check bit widths of 2d. If D is too big reduce duty cycle. */
+- if (d > mask)
+- d = mask;
++ /*
++ * Check bit widths of 2d. If D is too big reduce duty cycle.
++ * Also make sure it is never zero.
++ */
++ d = clamp_val(d, 1, mask);
+
+ if ((d / 2) > (n - m))
+ d = (n - m) * 2;
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index db9379634fb22..f646fdfe6f154 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -1134,7 +1134,6 @@ static struct gdsc mdss_gdsc = {
+ },
+ .pwrsts = PWRSTS_OFF_ON,
+ .flags = HW_CTRL,
+- .supply = "mmcx",
+ };
+
+ static struct clk_regmap *disp_cc_sm8250_clocks[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 541016db3c4bb..2c2ecfc5e61f5 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -1788,8 +1788,10 @@ static struct clk_regmap_div nss_port4_tx_div_clk_src = {
+ static const struct freq_tbl ftbl_nss_port5_rx_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(25000000, P_UNIPHY1_RX, 12.5, 0, 0),
++ F(25000000, P_UNIPHY0_RX, 5, 0, 0),
+ F(78125000, P_UNIPHY1_RX, 4, 0, 0),
+ F(125000000, P_UNIPHY1_RX, 2.5, 0, 0),
++ F(125000000, P_UNIPHY0_RX, 1, 0, 0),
+ F(156250000, P_UNIPHY1_RX, 2, 0, 0),
+ F(312500000, P_UNIPHY1_RX, 1, 0, 0),
+ { }
+@@ -1828,8 +1830,10 @@ static struct clk_regmap_div nss_port5_rx_div_clk_src = {
+ static const struct freq_tbl ftbl_nss_port5_tx_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(25000000, P_UNIPHY1_TX, 12.5, 0, 0),
++ F(25000000, P_UNIPHY0_TX, 5, 0, 0),
+ F(78125000, P_UNIPHY1_TX, 4, 0, 0),
+ F(125000000, P_UNIPHY1_TX, 2.5, 0, 0),
++ F(125000000, P_UNIPHY0_TX, 1, 0, 0),
+ F(156250000, P_UNIPHY1_TX, 2, 0, 0),
+ F(312500000, P_UNIPHY1_TX, 1, 0, 0),
+ { }
+@@ -1867,8 +1871,10 @@ static struct clk_regmap_div nss_port5_tx_div_clk_src = {
+
+ static const struct freq_tbl ftbl_nss_port6_rx_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
++ F(25000000, P_UNIPHY2_RX, 5, 0, 0),
+ F(25000000, P_UNIPHY2_RX, 12.5, 0, 0),
+ F(78125000, P_UNIPHY2_RX, 4, 0, 0),
++ F(125000000, P_UNIPHY2_RX, 1, 0, 0),
+ F(125000000, P_UNIPHY2_RX, 2.5, 0, 0),
+ F(156250000, P_UNIPHY2_RX, 2, 0, 0),
+ F(312500000, P_UNIPHY2_RX, 1, 0, 0),
+@@ -1907,8 +1913,10 @@ static struct clk_regmap_div nss_port6_rx_div_clk_src = {
+
+ static const struct freq_tbl ftbl_nss_port6_tx_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
++ F(25000000, P_UNIPHY2_TX, 5, 0, 0),
+ F(25000000, P_UNIPHY2_TX, 12.5, 0, 0),
+ F(78125000, P_UNIPHY2_TX, 4, 0, 0),
++ F(125000000, P_UNIPHY2_TX, 1, 0, 0),
+ F(125000000, P_UNIPHY2_TX, 2.5, 0, 0),
+ F(156250000, P_UNIPHY2_TX, 2, 0, 0),
+ F(312500000, P_UNIPHY2_TX, 1, 0, 0),
+@@ -3346,6 +3354,7 @@ static struct clk_branch gcc_nssnoc_ubi1_ahb_clk = {
+
+ static struct clk_branch gcc_ubi0_ahb_clk = {
+ .halt_reg = 0x6820c,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x6820c,
+ .enable_mask = BIT(0),
+@@ -3363,6 +3372,7 @@ static struct clk_branch gcc_ubi0_ahb_clk = {
+
+ static struct clk_branch gcc_ubi0_axi_clk = {
+ .halt_reg = 0x68200,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68200,
+ .enable_mask = BIT(0),
+@@ -3380,6 +3390,7 @@ static struct clk_branch gcc_ubi0_axi_clk = {
+
+ static struct clk_branch gcc_ubi0_nc_axi_clk = {
+ .halt_reg = 0x68204,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68204,
+ .enable_mask = BIT(0),
+@@ -3397,6 +3408,7 @@ static struct clk_branch gcc_ubi0_nc_axi_clk = {
+
+ static struct clk_branch gcc_ubi0_core_clk = {
+ .halt_reg = 0x68210,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68210,
+ .enable_mask = BIT(0),
+@@ -3414,6 +3426,7 @@ static struct clk_branch gcc_ubi0_core_clk = {
+
+ static struct clk_branch gcc_ubi0_mpt_clk = {
+ .halt_reg = 0x68208,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68208,
+ .enable_mask = BIT(0),
+@@ -3431,6 +3444,7 @@ static struct clk_branch gcc_ubi0_mpt_clk = {
+
+ static struct clk_branch gcc_ubi1_ahb_clk = {
+ .halt_reg = 0x6822c,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x6822c,
+ .enable_mask = BIT(0),
+@@ -3448,6 +3462,7 @@ static struct clk_branch gcc_ubi1_ahb_clk = {
+
+ static struct clk_branch gcc_ubi1_axi_clk = {
+ .halt_reg = 0x68220,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68220,
+ .enable_mask = BIT(0),
+@@ -3465,6 +3480,7 @@ static struct clk_branch gcc_ubi1_axi_clk = {
+
+ static struct clk_branch gcc_ubi1_nc_axi_clk = {
+ .halt_reg = 0x68224,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68224,
+ .enable_mask = BIT(0),
+@@ -3482,6 +3498,7 @@ static struct clk_branch gcc_ubi1_nc_axi_clk = {
+
+ static struct clk_branch gcc_ubi1_core_clk = {
+ .halt_reg = 0x68230,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68230,
+ .enable_mask = BIT(0),
+@@ -3499,6 +3516,7 @@ static struct clk_branch gcc_ubi1_core_clk = {
+
+ static struct clk_branch gcc_ubi1_mpt_clk = {
+ .halt_reg = 0x68228,
++ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+ .enable_reg = 0x68228,
+ .enable_mask = BIT(0),
+@@ -4371,6 +4389,33 @@ static struct clk_branch gcc_pcie0_axi_s_bridge_clk = {
+ },
+ };
+
++static const struct alpha_pll_config ubi32_pll_config = {
++ .l = 0x4e,
++ .config_ctl_val = 0x200d4aa8,
++ .config_ctl_hi_val = 0x3c2,
++ .main_output_mask = BIT(0),
++ .aux_output_mask = BIT(1),
++ .pre_div_val = 0x0,
++ .pre_div_mask = BIT(12),
++ .post_div_val = 0x0,
++ .post_div_mask = GENMASK(9, 8),
++};
++
++static const struct alpha_pll_config nss_crypto_pll_config = {
++ .l = 0x3e,
++ .alpha = 0x0,
++ .alpha_hi = 0x80,
++ .config_ctl_val = 0x4001055b,
++ .main_output_mask = BIT(0),
++ .pre_div_val = 0x0,
++ .pre_div_mask = GENMASK(14, 12),
++ .post_div_val = 0x1 << 8,
++ .post_div_mask = GENMASK(11, 8),
++ .vco_mask = GENMASK(21, 20),
++ .vco_val = 0x0,
++ .alpha_en_mask = BIT(24),
++};
++
+ static struct clk_hw *gcc_ipq8074_hws[] = {
+ &gpll0_out_main_div2.hw,
+ &gpll6_out_main_div2.hw,
+@@ -4772,7 +4817,20 @@ static const struct qcom_cc_desc gcc_ipq8074_desc = {
+
+ static int gcc_ipq8074_probe(struct platform_device *pdev)
+ {
+- return qcom_cc_probe(pdev, &gcc_ipq8074_desc);
++ struct regmap *regmap;
++
++ regmap = qcom_cc_map(pdev, &gcc_ipq8074_desc);
++ if (IS_ERR(regmap))
++ return PTR_ERR(regmap);
++
++ /* SW Workaround for UBI32 Huayra PLL */
++ regmap_update_bits(regmap, 0x2501c, BIT(26), BIT(26));
++
++ clk_alpha_pll_configure(&ubi32_pll_main, regmap, &ubi32_pll_config);
++ clk_alpha_pll_configure(&nss_crypto_pll_main, regmap,
++ &nss_crypto_pll_config);
++
++ return qcom_cc_really_probe(pdev, &gcc_ipq8074_desc, regmap);
+ }
+
+ static struct platform_driver gcc_ipq8074_driver = {
+diff --git a/drivers/clk/qcom/gcc-msm8939.c b/drivers/clk/qcom/gcc-msm8939.c
+index 39ebb443ae3d5..de0022e5450de 100644
+--- a/drivers/clk/qcom/gcc-msm8939.c
++++ b/drivers/clk/qcom/gcc-msm8939.c
+@@ -632,7 +632,7 @@ static struct clk_rcg2 system_noc_bfdcd_clk_src = {
+ };
+
+ static struct clk_rcg2 bimc_ddr_clk_src = {
+- .cmd_rcgr = 0x32004,
++ .cmd_rcgr = 0x32024,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_bimc_map,
+ .clkr.hw.init = &(struct clk_init_data){
+@@ -644,6 +644,18 @@ static struct clk_rcg2 bimc_ddr_clk_src = {
+ },
+ };
+
++static struct clk_rcg2 system_mm_noc_bfdcd_clk_src = {
++ .cmd_rcgr = 0x2600c,
++ .hid_width = 5,
++ .parent_map = gcc_xo_gpll0_gpll6a_map,
++ .clkr.hw.init = &(struct clk_init_data){
++ .name = "system_mm_noc_bfdcd_clk_src",
++ .parent_data = gcc_xo_gpll0_gpll6a_parent_data,
++ .num_parents = 3,
++ .ops = &clk_rcg2_ops,
++ },
++};
++
+ static const struct freq_tbl ftbl_gcc_camss_ahb_clk[] = {
+ F(40000000, P_GPLL0, 10, 1, 2),
+ F(80000000, P_GPLL0, 10, 0, 0),
+@@ -1002,7 +1014,7 @@ static struct clk_rcg2 blsp1_uart2_apps_clk_src = {
+ };
+
+ static const struct freq_tbl ftbl_gcc_camss_cci_clk[] = {
+- F(19200000, P_XO, 1, 0, 0),
++ F(19200000, P_XO, 1, 0, 0),
+ { }
+ };
+
+@@ -2441,7 +2453,7 @@ static struct clk_branch gcc_camss_jpeg_axi_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_jpeg_axi_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -2645,7 +2657,7 @@ static struct clk_branch gcc_camss_vfe_axi_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_vfe_axi_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -2801,7 +2813,7 @@ static struct clk_branch gcc_mdss_axi_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_axi_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3193,7 +3205,7 @@ static struct clk_branch gcc_mdp_tbu_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdp_tbu_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3211,7 +3223,7 @@ static struct clk_branch gcc_venus_tbu_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus_tbu_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3229,7 +3241,7 @@ static struct clk_branch gcc_vfe_tbu_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_vfe_tbu_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3247,7 +3259,7 @@ static struct clk_branch gcc_jpeg_tbu_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_jpeg_tbu_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3484,7 +3496,7 @@ static struct clk_branch gcc_venus0_axi_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus0_axi_clk",
+ .parent_data = &(const struct clk_parent_data){
+- .hw = &system_noc_bfdcd_clk_src.clkr.hw,
++ .hw = &system_mm_noc_bfdcd_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -3623,6 +3635,7 @@ static struct clk_regmap *gcc_msm8939_clocks[] = {
+ [GPLL2_VOTE] = &gpll2_vote,
+ [PCNOC_BFDCD_CLK_SRC] = &pcnoc_bfdcd_clk_src.clkr,
+ [SYSTEM_NOC_BFDCD_CLK_SRC] = &system_noc_bfdcd_clk_src.clkr,
++ [SYSTEM_MM_NOC_BFDCD_CLK_SRC] = &system_mm_noc_bfdcd_clk_src.clkr,
+ [CAMSS_AHB_CLK_SRC] = &camss_ahb_clk_src.clkr,
+ [APSS_AHB_CLK_SRC] = &apss_ahb_clk_src.clkr,
+ [CSI0_CLK_SRC] = &csi0_clk_src.clkr,
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index 44520efc6c72b..2db0938f8dd3f 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -420,6 +420,14 @@ static int gdsc_init(struct gdsc *sc)
+ return ret;
+ }
+
++ /* ...and the power-domain */
++ ret = gdsc_pm_runtime_get(sc);
++ if (ret) {
++ if (sc->rsupply)
++ regulator_disable(sc->rsupply);
++ return ret;
++ }
++
+ /*
+ * Votable GDSCs can be ON due to Vote from other masters.
+ * If a Votable GDSC is ON, make sure we have a Vote.
+diff --git a/drivers/clk/qcom/videocc-sm8250.c b/drivers/clk/qcom/videocc-sm8250.c
+index 8617454e4a77c..f28f2cb051d72 100644
+--- a/drivers/clk/qcom/videocc-sm8250.c
++++ b/drivers/clk/qcom/videocc-sm8250.c
+@@ -277,7 +277,6 @@ static struct gdsc mvs0c_gdsc = {
+ },
+ .flags = 0,
+ .pwrsts = PWRSTS_OFF_ON,
+- .supply = "mmcx",
+ };
+
+ static struct gdsc mvs1c_gdsc = {
+@@ -287,7 +286,6 @@ static struct gdsc mvs1c_gdsc = {
+ },
+ .flags = 0,
+ .pwrsts = PWRSTS_OFF_ON,
+- .supply = "mmcx",
+ };
+
+ static struct gdsc mvs0_gdsc = {
+@@ -297,7 +295,6 @@ static struct gdsc mvs0_gdsc = {
+ },
+ .flags = HW_CTRL,
+ .pwrsts = PWRSTS_OFF_ON,
+- .supply = "mmcx",
+ };
+
+ static struct gdsc mvs1_gdsc = {
+@@ -307,7 +304,6 @@ static struct gdsc mvs1_gdsc = {
+ },
+ .flags = HW_CTRL,
+ .pwrsts = PWRSTS_OFF_ON,
+- .supply = "mmcx",
+ };
+
+ static struct clk_regmap *video_cc_sm8250_clocks[] = {
+diff --git a/drivers/clk/renesas/r9a06g032-clocks.c b/drivers/clk/renesas/r9a06g032-clocks.c
+index 35ffc462af1ae..864b3dabecd9b 100644
+--- a/drivers/clk/renesas/r9a06g032-clocks.c
++++ b/drivers/clk/renesas/r9a06g032-clocks.c
+@@ -290,8 +290,8 @@ static const struct r9a06g032_clkdesc r9a06g032_clocks[] = {
+ .name = "uart_group_012",
+ .type = K_BITSEL,
+ .source = 1 + R9A06G032_DIV_UART,
+- /* R9A06G032_SYSCTRL_REG_PWRCTRL_PG1_PR2 */
+- .dual.sel = ((0xec / 4) << 5) | 24,
++ /* R9A06G032_SYSCTRL_REG_PWRCTRL_PG0_0 */
++ .dual.sel = ((0x34 / 4) << 5) | 30,
+ .dual.group = 0,
+ },
+ {
+@@ -299,8 +299,8 @@ static const struct r9a06g032_clkdesc r9a06g032_clocks[] = {
+ .name = "uart_group_34567",
+ .type = K_BITSEL,
+ .source = 1 + R9A06G032_DIV_P2_PG,
+- /* R9A06G032_SYSCTRL_REG_PWRCTRL_PG0_0 */
+- .dual.sel = ((0x34 / 4) << 5) | 30,
++ /* R9A06G032_SYSCTRL_REG_PWRCTRL_PG1_PR2 */
++ .dual.sel = ((0xec / 4) << 5) | 24,
+ .dual.group = 1,
+ },
+ D_UGATE(CLK_UART0, "clk_uart0", UART_GROUP_012, 0, 0, 0x1b2, 0x1b3, 0x1b4, 0x1b5),
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index e2999ab2b53c4..3ff6ecd617565 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -1180,7 +1180,7 @@ static int rzg2l_cpg_status(struct reset_controller_dev *rcdev,
+ s8 monbit = info->resets[id].monbit;
+
+ if (info->has_clk_mon_regs) {
+- return !(readl(priv->base + CLK_MRST_R(reg)) & bitmask);
++ return !!(readl(priv->base + CLK_MRST_R(reg)) & bitmask);
+ } else if (monbit >= 0) {
+ u32 monbitmask = BIT(monbit);
+
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index 813cccbfe9348..f0e0a35c7f217 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -51,7 +51,7 @@ static const u16 cpufreq_mtk_offsets[REG_ARRAY_SIZE] = {
+ };
+
+ static int __maybe_unused
+-mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *mW,
++mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
+ unsigned long *KHz)
+ {
+ struct mtk_cpufreq_data *data;
+@@ -71,8 +71,9 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *mW,
+ i--;
+
+ *KHz = data->table[i].frequency;
+- *mW = readl_relaxed(data->reg_bases[REG_EM_POWER_TBL] +
+- i * LUT_ROW_SIZE) / 1000;
++ /* Provide micro-Watts value to the Energy Model */
++ *uW = readl_relaxed(data->reg_bases[REG_EM_POWER_TBL] +
++ i * LUT_ROW_SIZE);
+
+ return 0;
+ }
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 76f6b3884e6b2..7f2680bc9a0f4 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -478,6 +478,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
+ if (info->soc_data->ccifreq_supported) {
+ info->vproc_on_boot = regulator_get_voltage(info->proc_reg);
+ if (info->vproc_on_boot < 0) {
++ ret = info->vproc_on_boot;
+ dev_err(info->cpu_dev,
+ "invalid Vproc value: %d\n", info->vproc_on_boot);
+ goto out_disable_inter_clock;
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 6d2a4cf46db70..bfd35583d6532 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -19,6 +19,7 @@
+ #include <linux/slab.h>
+ #include <linux/scmi_protocol.h>
+ #include <linux/types.h>
++#include <linux/units.h>
+
+ struct scmi_data {
+ int domain_id;
+@@ -99,6 +100,7 @@ static int __maybe_unused
+ scmi_get_cpu_power(struct device *cpu_dev, unsigned long *power,
+ unsigned long *KHz)
+ {
++ bool power_scale_mw = perf_ops->power_scale_mw_get(ph);
+ unsigned long Hz;
+ int ret, domain;
+
+@@ -112,6 +114,10 @@ scmi_get_cpu_power(struct device *cpu_dev, unsigned long *power,
+ if (ret)
+ return ret;
+
++ /* Provide bigger resolution power to the Energy Model */
++ if (power_scale_mw)
++ *power *= MICROWATT_PER_MILLIWATT;
++
+ /* The EM framework specifies the frequency in KHz. */
+ *KHz = Hz / 1000;
+
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 5bb950182026f..910d6751644cf 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -170,6 +170,7 @@ dma_iv_error:
+ while (i >= 0) {
+ dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
+ memzero_explicit(sf->iv[i], ivsize);
++ i--;
+ }
+ return err;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 98593a0cff694..ac2329e2b0e58 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -528,25 +528,33 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
+
+ ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
+ GFP_KERNEL | GFP_DMA);
+- if (!ss->flows[i].biv)
++ if (!ss->flows[i].biv) {
++ err = -ENOMEM;
+ goto error_engine;
++ }
+
+ for (j = 0; j < MAX_SG; j++) {
+ ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
+ GFP_KERNEL | GFP_DMA);
+- if (!ss->flows[i].iv[j])
++ if (!ss->flows[i].iv[j]) {
++ err = -ENOMEM;
+ goto error_engine;
++ }
+ }
+
+ /* the padding could be up to two blocks. */
+ ss->flows[i].pad = devm_kmalloc(ss->dev, MAX_PAD_SIZE,
+ GFP_KERNEL | GFP_DMA);
+- if (!ss->flows[i].pad)
++ if (!ss->flows[i].pad) {
++ err = -ENOMEM;
+ goto error_engine;
++ }
+ ss->flows[i].result = devm_kmalloc(ss->dev, SHA256_DIGEST_SIZE,
+ GFP_KERNEL | GFP_DMA);
+- if (!ss->flows[i].result)
++ if (!ss->flows[i].result) {
++ err = -ENOMEM;
+ goto error_engine;
++ }
+
+ ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
+ if (!ss->flows[i].engine) {
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index ac417a6b39e5f..36a82b22953cd 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -30,8 +30,8 @@ static int sun8i_ss_hashkey(struct sun8i_ss_hash_tfm_ctx *tfmctx, const u8 *key,
+ int ret = 0;
+
+ xtfm = crypto_alloc_shash("sha1", 0, CRYPTO_ALG_NEED_FALLBACK);
+- if (!xtfm)
+- return -ENOMEM;
++ if (IS_ERR(xtfm))
++ return PTR_ERR(xtfm);
+
+ len = sizeof(*sdesc) + crypto_shash_descsize(xtfm);
+ sdesc = kmalloc(len, GFP_KERNEL);
+@@ -586,7 +586,8 @@ retry:
+ rctx->t_dst[k + 1].len = rctx->t_dst[k].len;
+ }
+ addr_xpad = dma_map_single(ss->dev, tfmctx->ipad, bs, DMA_TO_DEVICE);
+- if (dma_mapping_error(ss->dev, addr_xpad)) {
++ err = dma_mapping_error(ss->dev, addr_xpad);
++ if (err) {
+ dev_err(ss->dev, "Fail to create DMA mapping of ipad\n");
+ goto err_dma_xpad;
+ }
+@@ -612,7 +613,8 @@ retry:
+ goto err_dma_result;
+ }
+ addr_xpad = dma_map_single(ss->dev, tfmctx->opad, bs, DMA_TO_DEVICE);
+- if (dma_mapping_error(ss->dev, addr_xpad)) {
++ err = dma_mapping_error(ss->dev, addr_xpad);
++ if (err) {
+ dev_err(ss->dev, "Fail to create DMA mapping of opad\n");
+ goto err_dma_xpad;
+ }
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 799b476fc3e82..9f588c9728f8b 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -503,7 +503,7 @@ static int __sev_platform_shutdown_locked(int *error)
+ struct sev_device *sev = psp_master->sev_data;
+ int ret;
+
+- if (sev->state == SEV_STATE_UNINIT)
++ if (!sev || sev->state == SEV_STATE_UNINIT)
+ return 0;
+
+ ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+@@ -577,6 +577,8 @@ static int sev_ioctl_do_platform_status(struct sev_issue_cmd *argp)
+ struct sev_user_data_status data;
+ int ret;
+
++ memset(&data, 0, sizeof(data));
++
+ ret = __sev_do_cmd_locked(SEV_CMD_PLATFORM_STATUS, &data, &argp->error);
+ if (ret)
+ return ret;
+@@ -630,7 +632,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
+ if (input.length > SEV_FW_BLOB_MAX_SIZE)
+ return -EFAULT;
+
+- blob = kmalloc(input.length, GFP_KERNEL);
++ blob = kzalloc(input.length, GFP_KERNEL);
+ if (!blob)
+ return -ENOMEM;
+
+@@ -854,7 +856,7 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ input_address = (void __user *)input.address;
+
+ if (input.address && input.length) {
+- id_blob = kmalloc(input.length, GFP_KERNEL);
++ id_blob = kzalloc(input.length, GFP_KERNEL);
+ if (!id_blob)
+ return -ENOMEM;
+
+@@ -973,14 +975,14 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
+ if (input.cert_chain_len > SEV_FW_BLOB_MAX_SIZE)
+ return -EFAULT;
+
+- pdh_blob = kmalloc(input.pdh_cert_len, GFP_KERNEL);
++ pdh_blob = kzalloc(input.pdh_cert_len, GFP_KERNEL);
+ if (!pdh_blob)
+ return -ENOMEM;
+
+ data.pdh_cert_address = __psp_pa(pdh_blob);
+ data.pdh_cert_len = input.pdh_cert_len;
+
+- cert_blob = kmalloc(input.cert_chain_len, GFP_KERNEL);
++ cert_blob = kzalloc(input.cert_chain_len, GFP_KERNEL);
+ if (!cert_blob) {
+ ret = -ENOMEM;
+ goto e_free_pdh;
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index 97d54c1465c2b..3ba6f15deafc6 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -252,7 +252,7 @@ static int hpre_prepare_dma_buf(struct hpre_asym_request *hpre_req,
+ if (unlikely(shift < 0))
+ return -EINVAL;
+
+- ptr = dma_alloc_coherent(dev, ctx->key_sz, tmp, GFP_KERNEL);
++ ptr = dma_alloc_coherent(dev, ctx->key_sz, tmp, GFP_ATOMIC);
+ if (unlikely(!ptr))
+ return -ENOMEM;
+
+diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
+index 0a3c8f019b025..490e1542305e1 100644
+--- a/drivers/crypto/hisilicon/sec/sec_algs.c
++++ b/drivers/crypto/hisilicon/sec/sec_algs.c
+@@ -449,7 +449,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
+ */
+ }
+
+- mutex_lock(&ctx->queue->queuelock);
++ spin_lock_bh(&ctx->queue->queuelock);
+ /* Put the IV in place for chained cases */
+ switch (ctx->cipher_alg) {
+ case SEC_C_AES_CBC_128:
+@@ -509,7 +509,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
+ list_del(&backlog_req->backlog_head);
+ }
+ }
+- mutex_unlock(&ctx->queue->queuelock);
++ spin_unlock_bh(&ctx->queue->queuelock);
+
+ mutex_lock(&sec_req->lock);
+ list_del(&sec_req_el->head);
+@@ -798,7 +798,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ */
+
+ /* Grab a big lock for a long time to avoid concurrency issues */
+- mutex_lock(&queue->queuelock);
++ spin_lock_bh(&queue->queuelock);
+
+ /*
+ * Can go on to queue if we have space in either:
+@@ -814,15 +814,15 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ ret = -EBUSY;
+ if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+ list_add_tail(&sec_req->backlog_head, &ctx->backlog);
+- mutex_unlock(&queue->queuelock);
++ spin_unlock_bh(&queue->queuelock);
+ goto out;
+ }
+
+- mutex_unlock(&queue->queuelock);
++ spin_unlock_bh(&queue->queuelock);
+ goto err_free_elements;
+ }
+ ret = sec_send_request(sec_req, queue);
+- mutex_unlock(&queue->queuelock);
++ spin_unlock_bh(&queue->queuelock);
+ if (ret)
+ goto err_free_elements;
+
+@@ -881,7 +881,7 @@ static int sec_alg_skcipher_init(struct crypto_skcipher *tfm)
+ if (IS_ERR(ctx->queue))
+ return PTR_ERR(ctx->queue);
+
+- mutex_init(&ctx->queue->queuelock);
++ spin_lock_init(&ctx->queue->queuelock);
+ ctx->queue->havesoftqueue = false;
+
+ return 0;
+diff --git a/drivers/crypto/hisilicon/sec/sec_drv.h b/drivers/crypto/hisilicon/sec/sec_drv.h
+index 179a8250d691c..e2a50bf2234b9 100644
+--- a/drivers/crypto/hisilicon/sec/sec_drv.h
++++ b/drivers/crypto/hisilicon/sec/sec_drv.h
+@@ -347,7 +347,7 @@ struct sec_queue {
+ DECLARE_BITMAP(unprocessed, SEC_QUEUE_LEN);
+ DECLARE_KFIFO_PTR(softqueue, typeof(struct sec_request_el *));
+ bool havesoftqueue;
+- struct mutex queuelock;
++ spinlock_t queuelock;
+ void *shadow[SEC_QUEUE_LEN];
+ };
+
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index c2e9b01187a74..a44c8dba3cda6 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -119,7 +119,7 @@ struct sec_qp_ctx {
+ struct idr req_idr;
+ struct sec_alg_res res[QM_Q_DEPTH];
+ struct sec_ctx *ctx;
+- struct mutex req_lock;
++ spinlock_t req_lock;
+ struct list_head backlog;
+ struct hisi_acc_sgl_pool *c_in_pool;
+ struct hisi_acc_sgl_pool *c_out_pool;
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 6eebe739893c5..77c9f13cf69ac 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -127,11 +127,11 @@ static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx)
+ {
+ int req_id;
+
+- mutex_lock(&qp_ctx->req_lock);
++ spin_lock_bh(&qp_ctx->req_lock);
+
+ req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL,
+ 0, QM_Q_DEPTH, GFP_ATOMIC);
+- mutex_unlock(&qp_ctx->req_lock);
++ spin_unlock_bh(&qp_ctx->req_lock);
+ if (unlikely(req_id < 0)) {
+ dev_err(req->ctx->dev, "alloc req id fail!\n");
+ return req_id;
+@@ -156,9 +156,9 @@ static void sec_free_req_id(struct sec_req *req)
+ qp_ctx->req_list[req_id] = NULL;
+ req->qp_ctx = NULL;
+
+- mutex_lock(&qp_ctx->req_lock);
++ spin_lock_bh(&qp_ctx->req_lock);
+ idr_remove(&qp_ctx->req_idr, req_id);
+- mutex_unlock(&qp_ctx->req_lock);
++ spin_unlock_bh(&qp_ctx->req_lock);
+ }
+
+ static u8 pre_parse_finished_bd(struct bd_status *status, void *resp)
+@@ -273,7 +273,7 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
+ !(req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG))
+ return -EBUSY;
+
+- mutex_lock(&qp_ctx->req_lock);
++ spin_lock_bh(&qp_ctx->req_lock);
+ ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
+
+ if (ctx->fake_req_limit <=
+@@ -281,10 +281,10 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
+ list_add_tail(&req->backlog_head, &qp_ctx->backlog);
+ atomic64_inc(&ctx->sec->debug.dfx.send_cnt);
+ atomic64_inc(&ctx->sec->debug.dfx.send_busy_cnt);
+- mutex_unlock(&qp_ctx->req_lock);
++ spin_unlock_bh(&qp_ctx->req_lock);
+ return -EBUSY;
+ }
+- mutex_unlock(&qp_ctx->req_lock);
++ spin_unlock_bh(&qp_ctx->req_lock);
+
+ if (unlikely(ret == -EBUSY))
+ return -ENOBUFS;
+@@ -487,7 +487,7 @@ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
+
+ qp->req_cb = sec_req_cb;
+
+- mutex_init(&qp_ctx->req_lock);
++ spin_lock_init(&qp_ctx->req_lock);
+ idr_init(&qp_ctx->req_idr);
+ INIT_LIST_HEAD(&qp_ctx->backlog);
+
+@@ -620,7 +620,7 @@ static int sec_auth_init(struct sec_ctx *ctx)
+ {
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+
+- a_ctx->a_key = dma_alloc_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
++ a_ctx->a_key = dma_alloc_coherent(ctx->dev, SEC_MAX_AKEY_SIZE,
+ &a_ctx->a_key_dma, GFP_KERNEL);
+ if (!a_ctx->a_key)
+ return -ENOMEM;
+@@ -632,8 +632,8 @@ static void sec_auth_uninit(struct sec_ctx *ctx)
+ {
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+
+- memzero_explicit(a_ctx->a_key, SEC_MAX_KEY_SIZE);
+- dma_free_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
++ memzero_explicit(a_ctx->a_key, SEC_MAX_AKEY_SIZE);
++ dma_free_coherent(ctx->dev, SEC_MAX_AKEY_SIZE,
+ a_ctx->a_key, a_ctx->a_key_dma);
+ }
+
+@@ -1382,7 +1382,7 @@ static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
+ {
+ struct sec_req *backlog_req = NULL;
+
+- mutex_lock(&qp_ctx->req_lock);
++ spin_lock_bh(&qp_ctx->req_lock);
+ if (ctx->fake_req_limit >=
+ atomic_read(&qp_ctx->qp->qp_status.used) &&
+ !list_empty(&qp_ctx->backlog)) {
+@@ -1390,7 +1390,7 @@ static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
+ typeof(*backlog_req), backlog_head);
+ list_del(&backlog_req->backlog_head);
+ }
+- mutex_unlock(&qp_ctx->req_lock);
++ spin_unlock_bh(&qp_ctx->req_lock);
+
+ return backlog_req;
+ }
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+index 5e039b50e9d4c..d033f63b583f8 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+@@ -7,6 +7,7 @@
+ #define SEC_AIV_SIZE 12
+ #define SEC_IV_SIZE 24
+ #define SEC_MAX_KEY_SIZE 64
++#define SEC_MAX_AKEY_SIZE 128
+ #define SEC_COMM_SCENE 0
+ #define SEC_MIN_BLOCK_SZ 1
+
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index 9b1a158aec299..ad0d8c4a71ac1 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -1831,6 +1831,8 @@ static const struct of_device_id safexcel_of_match_table[] = {
+ {},
+ };
+
++MODULE_DEVICE_TABLE(of, safexcel_of_match_table);
++
+ static struct platform_driver crypto_safexcel = {
+ .probe = safexcel_probe,
+ .remove = safexcel_remove,
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index 468d1097a1ece..f23569e4b0bde 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -423,7 +423,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ chunk->ll_region.sz += burst->sz;
+ desc->alloc_sz += burst->sz;
+
+- if (chan->dir == EDMA_DIR_WRITE) {
++ if (dir == DMA_DEV_TO_MEM) {
+ burst->sar = src_addr;
+ if (xfer->type == EDMA_XFER_CYCLIC) {
+ burst->dar = xfer->xfer.cyclic.paddr;
+diff --git a/drivers/dma/dw/rzn1-dmamux.c b/drivers/dma/dw/rzn1-dmamux.c
+index 11d254e450b0c..f9912c3dd4d7c 100644
+--- a/drivers/dma/dw/rzn1-dmamux.c
++++ b/drivers/dma/dw/rzn1-dmamux.c
+@@ -102,10 +102,12 @@ free_map:
+ return ERR_PTR(ret);
+ }
+
++#ifdef CONFIG_OF
+ static const struct of_device_id rzn1_dmac_match[] = {
+ { .compatible = "renesas,rzn1-dma" },
+ {}
+ };
++#endif
+
+ static int rzn1_dmamux_probe(struct platform_device *pdev)
+ {
+@@ -140,6 +142,7 @@ static const struct of_device_id rzn1_dmamux_match[] = {
+ { .compatible = "renesas,rzn1-dmamux" },
+ {}
+ };
++MODULE_DEVICE_TABLE(of, rzn1_dmamux_match);
+
+ static struct platform_driver rzn1_dmamux_driver = {
+ .driver = {
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 3bffe3ecbd1b6..65c6094ce0639 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -1047,7 +1047,7 @@ static int __init imxdma_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ imxdma->dev = &pdev->dev;
+- imxdma->devtype = (enum imx_dma_type)of_device_get_match_data(&pdev->dev);
++ imxdma->devtype = (uintptr_t)of_device_get_match_data(&pdev->dev);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ imxdma->base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/dma/sf-pdma/sf-pdma.c b/drivers/dma/sf-pdma/sf-pdma.c
+index db5a4ef760773..4f8b8498c5c62 100644
+--- a/drivers/dma/sf-pdma/sf-pdma.c
++++ b/drivers/dma/sf-pdma/sf-pdma.c
+@@ -52,16 +52,6 @@ static inline struct sf_pdma_desc *to_sf_pdma_desc(struct virt_dma_desc *vd)
+ static struct sf_pdma_desc *sf_pdma_alloc_desc(struct sf_pdma_chan *chan)
+ {
+ struct sf_pdma_desc *desc;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&chan->lock, flags);
+-
+- if (chan->desc && !chan->desc->in_use) {
+- spin_unlock_irqrestore(&chan->lock, flags);
+- return chan->desc;
+- }
+-
+- spin_unlock_irqrestore(&chan->lock, flags);
+
+ desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+ if (!desc)
+@@ -111,7 +101,6 @@ sf_pdma_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest, dma_addr_t src,
+ desc->async_tx = vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
+
+ spin_lock_irqsave(&chan->vchan.lock, iflags);
+- chan->desc = desc;
+ sf_pdma_fill_desc(desc, dest, src, len);
+ spin_unlock_irqrestore(&chan->vchan.lock, iflags);
+
+@@ -170,11 +159,17 @@ static size_t sf_pdma_desc_residue(struct sf_pdma_chan *chan,
+ unsigned long flags;
+ u64 residue = 0;
+ struct sf_pdma_desc *desc;
+- struct dma_async_tx_descriptor *tx;
++ struct dma_async_tx_descriptor *tx = NULL;
+
+ spin_lock_irqsave(&chan->vchan.lock, flags);
+
+- tx = &chan->desc->vdesc.tx;
++ list_for_each_entry(vd, &chan->vchan.desc_submitted, node)
++ if (vd->tx.cookie == cookie)
++ tx = &vd->tx;
++
++ if (!tx)
++ goto out;
++
+ if (cookie == tx->chan->completed_cookie)
+ goto out;
+
+@@ -241,6 +236,19 @@ static void sf_pdma_enable_request(struct sf_pdma_chan *chan)
+ writel(v, regs->ctrl);
+ }
+
++static struct sf_pdma_desc *sf_pdma_get_first_pending_desc(struct sf_pdma_chan *chan)
++{
++ struct virt_dma_chan *vchan = &chan->vchan;
++ struct virt_dma_desc *vdesc;
++
++ if (list_empty(&vchan->desc_issued))
++ return NULL;
++
++ vdesc = list_first_entry(&vchan->desc_issued, struct virt_dma_desc, node);
++
++ return container_of(vdesc, struct sf_pdma_desc, vdesc);
++}
++
+ static void sf_pdma_xfer_desc(struct sf_pdma_chan *chan)
+ {
+ struct sf_pdma_desc *desc = chan->desc;
+@@ -268,8 +276,11 @@ static void sf_pdma_issue_pending(struct dma_chan *dchan)
+
+ spin_lock_irqsave(&chan->vchan.lock, flags);
+
+- if (vchan_issue_pending(&chan->vchan) && chan->desc)
++ if (!chan->desc && vchan_issue_pending(&chan->vchan)) {
++ /* vchan_issue_pending has made a check that desc is not NULL */
++ chan->desc = sf_pdma_get_first_pending_desc(chan);
+ sf_pdma_xfer_desc(chan);
++ }
+
+ spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ }
+@@ -298,6 +309,11 @@ static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
+ spin_lock_irqsave(&chan->vchan.lock, flags);
+ list_del(&chan->desc->vdesc.node);
+ vchan_cookie_complete(&chan->desc->vdesc);
++
++ chan->desc = sf_pdma_get_first_pending_desc(chan);
++ if (chan->desc)
++ sf_pdma_xfer_desc(chan);
++
+ spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ }
+
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index ddf0b9ff9e15c..435d0e2658a42 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -815,7 +815,7 @@ static int scpi_init_versions(struct scpi_drvinfo *info)
+ info->firmware_version = le32_to_cpu(caps.platform_version);
+ }
+ /* Ignore error if not implemented */
+- if (scpi_info->is_legacy && ret == -EOPNOTSUPP)
++ if (info->is_legacy && ret == -EOPNOTSUPP)
+ return 0;
+
+ return ret;
+@@ -913,13 +913,14 @@ static int scpi_probe(struct platform_device *pdev)
+ struct resource res;
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
++ struct scpi_drvinfo *scpi_drvinfo;
+
+- scpi_info = devm_kzalloc(dev, sizeof(*scpi_info), GFP_KERNEL);
+- if (!scpi_info)
++ scpi_drvinfo = devm_kzalloc(dev, sizeof(*scpi_drvinfo), GFP_KERNEL);
++ if (!scpi_drvinfo)
+ return -ENOMEM;
+
+ if (of_match_device(legacy_scpi_of_match, &pdev->dev))
+- scpi_info->is_legacy = true;
++ scpi_drvinfo->is_legacy = true;
+
+ count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
+ if (count < 0) {
+@@ -927,19 +928,19 @@ static int scpi_probe(struct platform_device *pdev)
+ return -ENODEV;
+ }
+
+- scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
+- GFP_KERNEL);
+- if (!scpi_info->channels)
++ scpi_drvinfo->channels =
++ devm_kcalloc(dev, count, sizeof(struct scpi_chan), GFP_KERNEL);
++ if (!scpi_drvinfo->channels)
+ return -ENOMEM;
+
+- ret = devm_add_action(dev, scpi_free_channels, scpi_info);
++ ret = devm_add_action(dev, scpi_free_channels, scpi_drvinfo);
+ if (ret)
+ return ret;
+
+- for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
++ for (; scpi_drvinfo->num_chans < count; scpi_drvinfo->num_chans++) {
+ resource_size_t size;
+- int idx = scpi_info->num_chans;
+- struct scpi_chan *pchan = scpi_info->channels + idx;
++ int idx = scpi_drvinfo->num_chans;
++ struct scpi_chan *pchan = scpi_drvinfo->channels + idx;
+ struct mbox_client *cl = &pchan->cl;
+ struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
+
+@@ -986,45 +987,53 @@ static int scpi_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- scpi_info->commands = scpi_std_commands;
++ scpi_drvinfo->commands = scpi_std_commands;
+
+- platform_set_drvdata(pdev, scpi_info);
++ platform_set_drvdata(pdev, scpi_drvinfo);
+
+- if (scpi_info->is_legacy) {
++ if (scpi_drvinfo->is_legacy) {
+ /* Replace with legacy variants */
+ scpi_ops.clk_set_val = legacy_scpi_clk_set_val;
+- scpi_info->commands = scpi_legacy_commands;
++ scpi_drvinfo->commands = scpi_legacy_commands;
+
+ /* Fill priority bitmap */
+ for (idx = 0; idx < ARRAY_SIZE(legacy_hpriority_cmds); idx++)
+ set_bit(legacy_hpriority_cmds[idx],
+- scpi_info->cmd_priority);
++ scpi_drvinfo->cmd_priority);
+ }
+
+- ret = scpi_init_versions(scpi_info);
++ scpi_info = scpi_drvinfo;
++
++ ret = scpi_init_versions(scpi_drvinfo);
+ if (ret) {
+ dev_err(dev, "incorrect or no SCP firmware found\n");
++ scpi_info = NULL;
+ return ret;
+ }
+
+- if (scpi_info->is_legacy && !scpi_info->protocol_version &&
+- !scpi_info->firmware_version)
++ if (scpi_drvinfo->is_legacy && !scpi_drvinfo->protocol_version &&
++ !scpi_drvinfo->firmware_version)
+ dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n");
+ else
+ dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
+ FIELD_GET(PROTO_REV_MAJOR_MASK,
+- scpi_info->protocol_version),
++ scpi_drvinfo->protocol_version),
+ FIELD_GET(PROTO_REV_MINOR_MASK,
+- scpi_info->protocol_version),
++ scpi_drvinfo->protocol_version),
+ FIELD_GET(FW_REV_MAJOR_MASK,
+- scpi_info->firmware_version),
++ scpi_drvinfo->firmware_version),
+ FIELD_GET(FW_REV_MINOR_MASK,
+- scpi_info->firmware_version),
++ scpi_drvinfo->firmware_version),
+ FIELD_GET(FW_REV_PATCH_MASK,
+- scpi_info->firmware_version));
+- scpi_info->scpi_ops = &scpi_ops;
++ scpi_drvinfo->firmware_version));
++
++ scpi_drvinfo->scpi_ops = &scpi_ops;
+
+- return devm_of_platform_populate(dev);
++ ret = devm_of_platform_populate(dev);
++ if (ret)
++ scpi_info = NULL;
++
++ return ret;
+ }
+
+ static const struct of_device_id scpi_of_match[] = {
+diff --git a/drivers/firmware/tegra/bpmp-debugfs.c b/drivers/firmware/tegra/bpmp-debugfs.c
+index fd89899aeeed9..0c440afd52247 100644
+--- a/drivers/firmware/tegra/bpmp-debugfs.c
++++ b/drivers/firmware/tegra/bpmp-debugfs.c
+@@ -474,7 +474,7 @@ static int bpmp_populate_debugfs_inband(struct tegra_bpmp *bpmp,
+ mode |= attrs & DEBUGFS_S_IWUSR ? 0200 : 0;
+ dentry = debugfs_create_file(name, mode, parent, bpmp,
+ &bpmp_debug_fops);
+- if (!dentry) {
++ if (IS_ERR(dentry)) {
+ err = -ENOMEM;
+ goto out;
+ }
+@@ -725,7 +725,7 @@ static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
+
+ if (t & DEBUGFS_S_ISDIR) {
+ dentry = debugfs_create_dir(name, parent);
+- if (!dentry)
++ if (IS_ERR(dentry))
+ return -ENOMEM;
+ err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1);
+ if (err < 0)
+@@ -738,7 +738,7 @@ static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
+ dentry = debugfs_create_file(name, mode,
+ parent, bpmp,
+ &debugfs_fops);
+- if (!dentry)
++ if (IS_ERR(dentry))
+ return -ENOMEM;
+ }
+ }
+@@ -788,11 +788,11 @@ int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
+ return 0;
+
+ root = debugfs_create_dir("bpmp", NULL);
+- if (!root)
++ if (IS_ERR(root))
+ return -ENOMEM;
+
+ bpmp->debugfs_mirror = debugfs_create_dir("debug", root);
+- if (!bpmp->debugfs_mirror) {
++ if (IS_ERR(bpmp->debugfs_mirror)) {
+ err = -ENOMEM;
+ goto out;
+ }
+diff --git a/drivers/fpga/altera-pr-ip-core.c b/drivers/fpga/altera-pr-ip-core.c
+index be0667968d33b..df8671af4a92a 100644
+--- a/drivers/fpga/altera-pr-ip-core.c
++++ b/drivers/fpga/altera-pr-ip-core.c
+@@ -108,7 +108,7 @@ static int alt_pr_fpga_write(struct fpga_manager *mgr, const char *buf,
+ u32 *buffer_32 = (u32 *)buf;
+ size_t i = 0;
+
+- if (count <= 0)
++ if (!count)
+ return -EINVAL;
+
+ /* Write out the complete 32-bit chunks */
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 3d6c3ffd55766..de100b0217dad 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -860,7 +860,8 @@ int of_mm_gpiochip_add_data(struct device_node *np,
+ if (mm_gc->save_regs)
+ mm_gc->save_regs(mm_gc);
+
+- mm_gc->gc.of_node = np;
++ of_node_put(mm_gc->gc.of_node);
++ mm_gc->gc.of_node = of_node_get(np);
+
+ ret = gpiochip_add_data(gc, data);
+ if (ret)
+@@ -868,6 +869,7 @@ int of_mm_gpiochip_add_data(struct device_node *np,
+
+ return 0;
+ err2:
++ of_node_put(np);
+ iounmap(mm_gc->regs);
+ err1:
+ kfree(gc->label);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 7dc92ef36b2b0..8534c4c3b337a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -271,32 +271,6 @@ static ktime_t amdgpu_ctx_fini_entity(struct amdgpu_ctx_entity *entity)
+ return res;
+ }
+
+-static int amdgpu_ctx_init(struct amdgpu_ctx_mgr *mgr, int32_t priority,
+- struct drm_file *filp, struct amdgpu_ctx *ctx)
+-{
+- int r;
+-
+- r = amdgpu_ctx_priority_permit(filp, priority);
+- if (r)
+- return r;
+-
+- memset(ctx, 0, sizeof(*ctx));
+-
+- kref_init(&ctx->refcount);
+- ctx->mgr = mgr;
+- spin_lock_init(&ctx->ring_lock);
+- mutex_init(&ctx->lock);
+-
+- ctx->reset_counter = atomic_read(&mgr->adev->gpu_reset_counter);
+- ctx->reset_counter_query = ctx->reset_counter;
+- ctx->vram_lost_counter = atomic_read(&mgr->adev->vram_lost_counter);
+- ctx->init_priority = priority;
+- ctx->override_priority = AMDGPU_CTX_PRIORITY_UNSET;
+- ctx->stable_pstate = AMDGPU_CTX_STABLE_PSTATE_NONE;
+-
+- return 0;
+-}
+-
+ static int amdgpu_ctx_get_stable_pstate(struct amdgpu_ctx *ctx,
+ u32 *stable_pstate)
+ {
+@@ -325,6 +299,38 @@ static int amdgpu_ctx_get_stable_pstate(struct amdgpu_ctx *ctx,
+ return 0;
+ }
+
++static int amdgpu_ctx_init(struct amdgpu_ctx_mgr *mgr, int32_t priority,
++ struct drm_file *filp, struct amdgpu_ctx *ctx)
++{
++ u32 current_stable_pstate;
++ int r;
++
++ r = amdgpu_ctx_priority_permit(filp, priority);
++ if (r)
++ return r;
++
++ memset(ctx, 0, sizeof(*ctx));
++
++ kref_init(&ctx->refcount);
++ ctx->mgr = mgr;
++ spin_lock_init(&ctx->ring_lock);
++ mutex_init(&ctx->lock);
++
++ ctx->reset_counter = atomic_read(&mgr->adev->gpu_reset_counter);
++ ctx->reset_counter_query = ctx->reset_counter;
++ ctx->vram_lost_counter = atomic_read(&mgr->adev->vram_lost_counter);
++ ctx->init_priority = priority;
++ ctx->override_priority = AMDGPU_CTX_PRIORITY_UNSET;
++
++ r = amdgpu_ctx_get_stable_pstate(ctx, &current_stable_pstate);
++ if (r)
++ return r;
++
++ ctx->stable_pstate = current_stable_pstate;
++
++ return 0;
++}
++
+ static int amdgpu_ctx_set_stable_pstate(struct amdgpu_ctx *ctx,
+ u32 stable_pstate)
+ {
+@@ -396,7 +402,7 @@ static void amdgpu_ctx_fini(struct kref *ref)
+ }
+
+ if (drm_dev_enter(&adev->ddev, &idx)) {
+- amdgpu_ctx_set_stable_pstate(ctx, AMDGPU_CTX_STABLE_PSTATE_NONE);
++ amdgpu_ctx_set_stable_pstate(ctx, ctx->stable_pstate);
+ drm_dev_exit(idx);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 47f0344205edb..c1636d311fe54 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -2194,12 +2194,9 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ break;
+ case IP_VERSION(7, 4, 0):
+ case IP_VERSION(7, 4, 1):
+- adev->nbio.funcs = &nbio_v7_4_funcs;
+- adev->nbio.hdp_flush_reg = &nbio_v7_4_hdp_flush_reg;
+- break;
+ case IP_VERSION(7, 4, 4):
+ adev->nbio.funcs = &nbio_v7_4_funcs;
+- adev->nbio.hdp_flush_reg = &nbio_v7_4_hdp_flush_reg_ald;
++ adev->nbio.hdp_flush_reg = &nbio_v7_4_hdp_flush_reg;
+ break;
+ case IP_VERSION(7, 2, 0):
+ case IP_VERSION(7, 2, 1):
+@@ -2213,15 +2210,12 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ case IP_VERSION(2, 3, 0):
+ case IP_VERSION(2, 3, 1):
+ case IP_VERSION(2, 3, 2):
+- adev->nbio.funcs = &nbio_v2_3_funcs;
+- adev->nbio.hdp_flush_reg = &nbio_v2_3_hdp_flush_reg;
+- break;
+ case IP_VERSION(3, 3, 0):
+ case IP_VERSION(3, 3, 1):
+ case IP_VERSION(3, 3, 2):
+ case IP_VERSION(3, 3, 3):
+ adev->nbio.funcs = &nbio_v2_3_funcs;
+- adev->nbio.hdp_flush_reg = &nbio_v2_3_hdp_flush_reg_sc;
++ adev->nbio.hdp_flush_reg = &nbio_v2_3_hdp_flush_reg;
+ break;
+ case IP_VERSION(4, 3, 0):
+ case IP_VERSION(4, 3, 1):
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 2c82b1d5a0d79..4570ad4493905 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -882,6 +882,10 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ if (WARN_ON_ONCE(min_offset > max_offset))
+ return -EINVAL;
+
++ /* Check domain to be pinned to against preferred domains */
++ if (bo->preferred_domains & domain)
++ domain = bo->preferred_domains & domain;
++
+ /* A shared bo cannot be migrated to VRAM */
+ if (bo->tbo.base.import_attach) {
+ if (domain & AMDGPU_GEM_DOMAIN_GTT)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index c5f46d264b23d..ecbaf92759b73 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -3780,11 +3780,12 @@ static void gfx_v10_0_wait_reg_mem(struct amdgpu_ring *ring, int eng_sel,
+ static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
++ uint32_t scratch = SOC15_REG_OFFSET(GC, 0, mmSCRATCH_REG0);
+ uint32_t tmp = 0;
+ unsigned i;
+ int r;
+
+- WREG32_SOC15(GC, 0, mmSCRATCH_REG0, 0xCAFEDEAD);
++ WREG32(scratch, 0xCAFEDEAD);
+ r = amdgpu_ring_alloc(ring, 3);
+ if (r) {
+ DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+@@ -3793,13 +3794,13 @@ static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring)
+ }
+
+ amdgpu_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
+- amdgpu_ring_write(ring, SOC15_REG_OFFSET(GC, 0, mmSCRATCH_REG0) -
++ amdgpu_ring_write(ring, scratch -
+ PACKET3_SET_UCONFIG_REG_START);
+ amdgpu_ring_write(ring, 0xDEADBEEF);
+ amdgpu_ring_commit(ring);
+
+ for (i = 0; i < adev->usec_timeout; i++) {
+- tmp = RREG32_SOC15(GC, 0, mmSCRATCH_REG0);
++ tmp = RREG32(scratch);
+ if (tmp == 0xDEADBEEF)
+ break;
+ if (amdgpu_emu_mode == 1)
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+index 6cd1fb2eb9131..f49db13b3fbee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+@@ -328,27 +328,6 @@ const struct nbio_hdp_flush_reg nbio_v2_3_hdp_flush_reg = {
+ .ref_and_mask_sdma1 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__SDMA1_MASK,
+ };
+
+-const struct nbio_hdp_flush_reg nbio_v2_3_hdp_flush_reg_sc = {
+- .ref_and_mask_cp0 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP0_MASK,
+- .ref_and_mask_cp1 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP1_MASK,
+- .ref_and_mask_cp2 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP2_MASK,
+- .ref_and_mask_cp3 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP3_MASK,
+- .ref_and_mask_cp4 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP4_MASK,
+- .ref_and_mask_cp5 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP5_MASK,
+- .ref_and_mask_cp6 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP6_MASK,
+- .ref_and_mask_cp7 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP7_MASK,
+- .ref_and_mask_cp8 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP8_MASK,
+- .ref_and_mask_cp9 = BIF_BX_PF_GPU_HDP_FLUSH_DONE__CP9_MASK,
+- .ref_and_mask_sdma0 = GPU_HDP_FLUSH_DONE__RSVD_ENG1_MASK,
+- .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__RSVD_ENG2_MASK,
+- .ref_and_mask_sdma2 = GPU_HDP_FLUSH_DONE__RSVD_ENG3_MASK,
+- .ref_and_mask_sdma3 = GPU_HDP_FLUSH_DONE__RSVD_ENG4_MASK,
+- .ref_and_mask_sdma4 = GPU_HDP_FLUSH_DONE__RSVD_ENG5_MASK,
+- .ref_and_mask_sdma5 = GPU_HDP_FLUSH_DONE__RSVD_ENG6_MASK,
+- .ref_and_mask_sdma6 = GPU_HDP_FLUSH_DONE__RSVD_ENG7_MASK,
+- .ref_and_mask_sdma7 = GPU_HDP_FLUSH_DONE__RSVD_ENG8_MASK,
+-};
+-
+ static void nbio_v2_3_init_registers(struct amdgpu_device *adev)
+ {
+ uint32_t def, data;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.h b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.h
+index 6074dd3a1ed8f..a43b60acf7f63 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.h
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.h
+@@ -27,7 +27,6 @@
+ #include "soc15_common.h"
+
+ extern const struct nbio_hdp_flush_reg nbio_v2_3_hdp_flush_reg;
+-extern const struct nbio_hdp_flush_reg nbio_v2_3_hdp_flush_reg_sc;
+ extern const struct amdgpu_nbio_funcs nbio_v2_3_funcs;
+
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+index 4531761dcf77f..11848d1e238b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+@@ -339,27 +339,6 @@ const struct nbio_hdp_flush_reg nbio_v7_4_hdp_flush_reg = {
+ .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__SDMA1_MASK,
+ };
+
+-const struct nbio_hdp_flush_reg nbio_v7_4_hdp_flush_reg_ald = {
+- .ref_and_mask_cp0 = GPU_HDP_FLUSH_DONE__CP0_MASK,
+- .ref_and_mask_cp1 = GPU_HDP_FLUSH_DONE__CP1_MASK,
+- .ref_and_mask_cp2 = GPU_HDP_FLUSH_DONE__CP2_MASK,
+- .ref_and_mask_cp3 = GPU_HDP_FLUSH_DONE__CP3_MASK,
+- .ref_and_mask_cp4 = GPU_HDP_FLUSH_DONE__CP4_MASK,
+- .ref_and_mask_cp5 = GPU_HDP_FLUSH_DONE__CP5_MASK,
+- .ref_and_mask_cp6 = GPU_HDP_FLUSH_DONE__CP6_MASK,
+- .ref_and_mask_cp7 = GPU_HDP_FLUSH_DONE__CP7_MASK,
+- .ref_and_mask_cp8 = GPU_HDP_FLUSH_DONE__CP8_MASK,
+- .ref_and_mask_cp9 = GPU_HDP_FLUSH_DONE__CP9_MASK,
+- .ref_and_mask_sdma0 = GPU_HDP_FLUSH_DONE__RSVD_ENG1_MASK,
+- .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__RSVD_ENG2_MASK,
+- .ref_and_mask_sdma2 = GPU_HDP_FLUSH_DONE__RSVD_ENG3_MASK,
+- .ref_and_mask_sdma3 = GPU_HDP_FLUSH_DONE__RSVD_ENG4_MASK,
+- .ref_and_mask_sdma4 = GPU_HDP_FLUSH_DONE__RSVD_ENG5_MASK,
+- .ref_and_mask_sdma5 = GPU_HDP_FLUSH_DONE__RSVD_ENG6_MASK,
+- .ref_and_mask_sdma6 = GPU_HDP_FLUSH_DONE__RSVD_ENG7_MASK,
+- .ref_and_mask_sdma7 = GPU_HDP_FLUSH_DONE__RSVD_ENG8_MASK,
+-};
+-
+ static void nbio_v7_4_init_registers(struct amdgpu_device *adev)
+ {
+ uint32_t baco_cntl;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.h b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.h
+index 7490022d79d4f..f27c417288224 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.h
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.h
+@@ -27,7 +27,6 @@
+ #include "soc15_common.h"
+
+ extern const struct nbio_hdp_flush_reg nbio_v7_4_hdp_flush_reg;
+-extern const struct nbio_hdp_flush_reg nbio_v7_4_hdp_flush_reg_ald;
+ extern const struct amdgpu_nbio_funcs nbio_v7_4_funcs;
+ extern struct amdgpu_nbio_ras nbio_v7_4_ras;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index a08769c5e94b0..d9f57a20a8bc5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -75,7 +75,6 @@ static void kfd_device_info_set_sdma_info(struct kfd_dev *kfd)
+ case IP_VERSION(5, 2, 3):/* YELLOW_CARP */
+ case IP_VERSION(5, 2, 6):/* GC 10.3.6 */
+ case IP_VERSION(5, 2, 7):/* GC 10.3.7 */
+- case IP_VERSION(6, 0, 1):
+ kfd->device_info.num_sdma_queues_per_engine = 2;
+ break;
+ case IP_VERSION(4, 2, 0):/* VEGA20 */
+@@ -90,6 +89,7 @@ static void kfd_device_info_set_sdma_info(struct kfd_dev *kfd)
+ case IP_VERSION(5, 2, 4):/* DIMGREY_CAVEFISH */
+ case IP_VERSION(5, 2, 5):/* BEIGE_GOBY */
+ case IP_VERSION(6, 0, 0):
++ case IP_VERSION(6, 0, 1):
+ case IP_VERSION(6, 0, 2):
+ kfd->device_info.num_sdma_queues_per_engine = 8;
+ break;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3087dd1a1856c..d055d3c7eed6a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1563,6 +1563,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ DRM_INFO("Seamless boot condition check passed\n");
+ }
+
++ init_data.flags.enable_mipi_converter_optimization = true;
++
+ INIT_LIST_HEAD(&adev->dm.da_list);
+
+ retrieve_dmi_info(&adev->dm);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 7c799ddc1d278..82c04af09d18a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -571,7 +571,7 @@ static bool execute_synaptics_rc_command(struct drm_dp_aux *aux,
+ unsigned char rc_cmd = 0;
+ unsigned char rc_result = 0xFF;
+ unsigned char i = 0;
+- uint8_t ret = 0;
++ int ret;
+
+ if (is_write_cmd) {
+ // write rc data
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index a789ea8af27f1..55a8f58ee2392 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -235,7 +235,8 @@ bool dc_link_detect_sink(struct dc_link *link, enum dc_connection_type *type)
+
+ if (link->connector_signal == SIGNAL_TYPE_EDP) {
+ /*in case it is not on*/
+- link->dc->hwss.edp_power_control(link, true);
++ if (!link->dc->config.edp_no_power_sequencing)
++ link->dc->hwss.edp_power_control(link, true);
+ link->dc->hwss.edp_wait_for_hpd_ready(link, true);
+ }
+
+@@ -1016,6 +1017,7 @@ static bool detect_link_and_local_sink(struct dc_link *link,
+ bool same_edid = false;
+ enum dc_edid_status edid_status;
+ struct dc_context *dc_ctx = link->ctx;
++ struct dc *dc = dc_ctx->dc;
+ struct dc_sink *sink = NULL;
+ struct dc_sink *prev_sink = NULL;
+ struct dpcd_caps prev_dpcd_caps;
+@@ -1095,6 +1097,16 @@ static bool detect_link_and_local_sink(struct dc_link *link,
+
+ detect_edp_sink_caps(link);
+ read_current_link_settings_on_detect(link);
++
++ /* Disable power sequence on MIPI panel + converter
++ */
++ if (dc->config.enable_mipi_converter_optimization &&
++ dc_ctx->dce_version == DCN_VERSION_3_01 &&
++ link->dpcd_caps.sink_dev_id == DP_BRANCH_DEVICE_ID_0022B9 &&
++ memcmp(&link->dpcd_caps.branch_dev_name, DP_SINK_BRANCH_DEV_NAME_7580,
++ sizeof(link->dpcd_caps.branch_dev_name)) == 0)
++ dc->config.edp_no_power_sequencing = true;
++
+ sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C_OVER_AUX;
+ sink_caps.signal = SIGNAL_TYPE_EDP;
+ break;
+@@ -1993,7 +2005,8 @@ static enum dc_status enable_link_dp(struct dc_state *state,
+
+ if (pipe_ctx->stream->signal == SIGNAL_TYPE_EDP) {
+ /*in case it is not on*/
+- link->dc->hwss.edp_power_control(link, true);
++ if (!link->dc->config.edp_no_power_sequencing)
++ link->dc->hwss.edp_power_control(link, true);
+ link->dc->hwss.edp_wait_for_hpd_ready(link, true);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index d8eee89e63ce3..a4fc9a6c850ed 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2074,7 +2074,8 @@ static enum link_training_result dp_perform_128b_132b_channel_eq_done_sequence(
+ uint32_t wait_time = 0;
+ union lane_align_status_updated dpcd_lane_status_updated = {0};
+ union lane_status dpcd_lane_status[LANE_COUNT_DP_MAX] = {0};
+- enum link_training_result status = LINK_TRAINING_SUCCESS;
++ enum dc_status status = DC_OK;
++ enum link_training_result result = LINK_TRAINING_SUCCESS;
+ union lane_adjust dpcd_lane_adjust[LANE_COUNT_DP_MAX] = {0};
+
+ /* Transmit 128b/132b_TPS1 over Main-Link */
+@@ -2099,22 +2100,24 @@ static enum link_training_result dp_perform_128b_132b_channel_eq_done_sequence(
+ lt_settings->pattern_for_eq, DPRX);
+
+ /* poll for channel EQ done */
+- while (status == LINK_TRAINING_SUCCESS) {
++ while (result == LINK_TRAINING_SUCCESS) {
+ dp_wait_for_training_aux_rd_interval(link, aux_rd_interval);
+ wait_time += aux_rd_interval;
+- dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
++ status = dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
+ &dpcd_lane_status_updated, dpcd_lane_adjust, DPRX);
+ dp_decide_lane_settings(lt_settings, dpcd_lane_adjust,
+ lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings);
+ dpcd_128b_132b_get_aux_rd_interval(link, &aux_rd_interval);
+- if (dp_is_ch_eq_done(lt_settings->link_settings.lane_count,
++ if (status != DC_OK) {
++ result = LINK_TRAINING_ABORT;
++ } else if (dp_is_ch_eq_done(lt_settings->link_settings.lane_count,
+ dpcd_lane_status)) {
+ /* pass */
+ break;
+ } else if (loop_count >= lt_settings->eq_loop_count_limit) {
+- status = DP_128b_132b_MAX_LOOP_COUNT_REACHED;
++ result = DP_128b_132b_MAX_LOOP_COUNT_REACHED;
+ } else if (dpcd_lane_status_updated.bits.LT_FAILED_128b_132b) {
+- status = DP_128b_132b_LT_FAILED;
++ result = DP_128b_132b_LT_FAILED;
+ } else {
+ dp_set_hw_lane_settings(link, link_res, lt_settings, DPRX);
+ dpcd_set_lane_settings(link, lt_settings, DPRX);
+@@ -2123,24 +2126,26 @@ static enum link_training_result dp_perform_128b_132b_channel_eq_done_sequence(
+ }
+
+ /* poll for EQ interlane align done */
+- while (status == LINK_TRAINING_SUCCESS) {
+- if (dpcd_lane_status_updated.bits.EQ_INTERLANE_ALIGN_DONE_128b_132b) {
++ while (result == LINK_TRAINING_SUCCESS) {
++ if (status != DC_OK) {
++ result = LINK_TRAINING_ABORT;
++ } else if (dpcd_lane_status_updated.bits.EQ_INTERLANE_ALIGN_DONE_128b_132b) {
+ /* pass */
+ break;
+ } else if (wait_time >= lt_settings->eq_wait_time_limit) {
+- status = DP_128b_132b_CHANNEL_EQ_DONE_TIMEOUT;
++ result = DP_128b_132b_CHANNEL_EQ_DONE_TIMEOUT;
+ } else if (dpcd_lane_status_updated.bits.LT_FAILED_128b_132b) {
+- status = DP_128b_132b_LT_FAILED;
++ result = DP_128b_132b_LT_FAILED;
+ } else {
+ dp_wait_for_training_aux_rd_interval(link,
+ lt_settings->eq_pattern_time);
+ wait_time += lt_settings->eq_pattern_time;
+- dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
++ status = dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
+ &dpcd_lane_status_updated, dpcd_lane_adjust, DPRX);
+ }
+ }
+
+- return status;
++ return result;
+ }
+
+ static enum link_training_result dp_perform_128b_132b_cds_done_sequence(
+@@ -2149,7 +2154,8 @@ static enum link_training_result dp_perform_128b_132b_cds_done_sequence(
+ struct link_training_settings *lt_settings)
+ {
+ /* Assumption: assume hardware has transmitted eq pattern */
+- enum link_training_result status = LINK_TRAINING_SUCCESS;
++ enum dc_status status = DC_OK;
++ enum link_training_result result = LINK_TRAINING_SUCCESS;
+ union lane_align_status_updated dpcd_lane_status_updated = {0};
+ union lane_status dpcd_lane_status[LANE_COUNT_DP_MAX] = {0};
+ union lane_adjust dpcd_lane_adjust[LANE_COUNT_DP_MAX] = { { {0} } };
+@@ -2159,24 +2165,26 @@ static enum link_training_result dp_perform_128b_132b_cds_done_sequence(
+ dpcd_set_training_pattern(link, lt_settings->pattern_for_cds);
+
+ /* poll for CDS interlane align done and symbol lock */
+- while (status == LINK_TRAINING_SUCCESS) {
++ while (result == LINK_TRAINING_SUCCESS) {
+ dp_wait_for_training_aux_rd_interval(link,
+ lt_settings->cds_pattern_time);
+ wait_time += lt_settings->cds_pattern_time;
+- dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
++ status = dp_get_lane_status_and_lane_adjust(link, lt_settings, dpcd_lane_status,
+ &dpcd_lane_status_updated, dpcd_lane_adjust, DPRX);
+- if (dp_is_symbol_locked(lt_settings->link_settings.lane_count, dpcd_lane_status) &&
++ if (status != DC_OK) {
++ result = LINK_TRAINING_ABORT;
++ } else if (dp_is_symbol_locked(lt_settings->link_settings.lane_count, dpcd_lane_status) &&
+ dpcd_lane_status_updated.bits.CDS_INTERLANE_ALIGN_DONE_128b_132b) {
+ /* pass */
+ break;
+ } else if (dpcd_lane_status_updated.bits.LT_FAILED_128b_132b) {
+- status = DP_128b_132b_LT_FAILED;
++ result = DP_128b_132b_LT_FAILED;
+ } else if (wait_time >= lt_settings->cds_wait_time_limit) {
+- status = DP_128b_132b_CDS_DONE_TIMEOUT;
++ result = DP_128b_132b_CDS_DONE_TIMEOUT;
+ }
+ }
+
+- return status;
++ return result;
+ }
+
+ static enum link_training_result dp_perform_8b_10b_link_training(
+@@ -7099,7 +7107,8 @@ void dp_enable_link_phy(
+ unsigned int i;
+
+ if (link->connector_signal == SIGNAL_TYPE_EDP) {
+- link->dc->hwss.edp_power_control(link, true);
++ if (!link->dc->config.edp_no_power_sequencing)
++ link->dc->hwss.edp_power_control(link, true);
+ link->dc->hwss.edp_wait_for_hpd_ready(link, true);
+ }
+
+@@ -7226,7 +7235,8 @@ void dp_disable_link_phy(struct dc_link *link, const struct link_resource *link_
+ link->dc->hwss.edp_backlight_control(link, false);
+ if (link_hwss->ext.disable_dp_link_output)
+ link_hwss->ext.disable_dp_link_output(link, link_res, signal);
+- link->dc->hwss.edp_power_control(link, false);
++ if (!link->dc->config.edp_no_power_sequencing)
++ link->dc->hwss.edp_power_control(link, false);
+ } else {
+ if (dmcu != NULL && dmcu->funcs->lock_phy)
+ dmcu->funcs->lock_phy(dmcu);
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 817028d3c4a0c..11b02a98cf0f9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -337,6 +337,7 @@ struct dc_config {
+ bool is_single_rank_dimm;
+ bool use_pipe_ctx_sync_logic;
+ bool ignore_dpref_ss;
++ bool enable_mipi_converter_optimization;
+ };
+
+ enum visual_confirm {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 5f2afa5b48142..aee31c785aa9f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1245,8 +1245,18 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ * has changed or they enter protection state and hang.
+ */
+ msleep(60);
+- } else if (pipe_ctx->stream->signal == SIGNAL_TYPE_EDP)
+- edp_receiver_ready_T9(link);
++ } else if (pipe_ctx->stream->signal == SIGNAL_TYPE_EDP) {
++ if (!link->dc->config.edp_no_power_sequencing) {
++ /*
++ * Sometimes, DP receiver chip power-controlled externally by an
++ * Embedded Controller could be treated and used as eDP,
++ * if it drives mobile display. In this case,
++ * we shouldn't be doing power-sequencing, hence we can skip
++ * waiting for T9-ready.
++ */
++ edp_receiver_ready_T9(link);
++ }
++ }
+ }
+
+ }
+@@ -2161,15 +2171,18 @@ static void dce110_setup_audio_dto(
+ build_audio_output(context, pipe_ctx, &audio_output);
+
+ if (dc->res_pool->dccg && dc->res_pool->dccg->funcs->set_audio_dtbclk_dto) {
+- /* disable audio DTBCLK DTO */
+- dc->res_pool->dccg->funcs->set_audio_dtbclk_dto(
+- dc->res_pool->dccg, 0);
++ struct dtbclk_dto_params dto_params = {0};
+
+ pipe_ctx->stream_res.audio->funcs->wall_dto_setup(
+ pipe_ctx->stream_res.audio,
+ pipe_ctx->stream->signal,
+ &audio_output.crtc_info,
+ &audio_output.pll_info);
++
++ dc->res_pool->dccg->funcs->set_audio_dtbclk_dto(
++ dc->res_pool->dccg,
++ &dto_params);
++
+ } else
+ pipe_ctx->stream_res.audio->funcs->wall_dto_setup(
+ pipe_ctx->stream_res.audio,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+index bbc58d167c630..4519ecef2e7b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+@@ -513,7 +513,7 @@ void dccg31_set_physymclk(
+ /* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
+ static void dccg31_set_dtbclk_dto(
+ struct dccg *dccg,
+- struct dtbclk_dto_params *params)
++ const struct dtbclk_dto_params *params)
+ {
+ struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ int req_dtbclk_khz = params->pixclk_khz;
+@@ -579,18 +579,17 @@ static void dccg31_set_dtbclk_dto(
+
+ void dccg31_set_audio_dtbclk_dto(
+ struct dccg *dccg,
+- uint32_t req_audio_dtbclk_khz)
++ const struct dtbclk_dto_params *params)
+ {
+ struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+- if (dccg->ref_dtbclk_khz && req_audio_dtbclk_khz) {
++ if (params->ref_dtbclk_khz && params->req_audio_dtbclk_khz) {
+ uint32_t modulo, phase;
+
+ // phase / modulo = dtbclk / dtbclk ref
+- modulo = dccg->ref_dtbclk_khz * 1000;
+- phase = div_u64((((unsigned long long)modulo * req_audio_dtbclk_khz) + dccg->ref_dtbclk_khz - 1),
+- dccg->ref_dtbclk_khz);
+-
++ modulo = params->ref_dtbclk_khz * 1000;
++ phase = div_u64((((unsigned long long)modulo * params->req_audio_dtbclk_khz) + params->ref_dtbclk_khz - 1),
++ params->ref_dtbclk_khz);
+
+ REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_MODULO, modulo);
+ REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_PHASE, phase);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
+index 269cabbea72ab..f158c1ea214b3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
+@@ -192,7 +192,7 @@ void dccg31_set_physymclk(
+
+ void dccg31_set_audio_dtbclk_dto(
+ struct dccg *dccg,
+- uint32_t req_audio_dtbclk_khz);
++ const struct dtbclk_dto_params *params);
+
+ void dccg31_set_hdmistreamclk(
+ struct dccg *dccg,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+index c7021915bac88..c1023cc84f553 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+@@ -120,11 +120,11 @@ struct dccg_funcs {
+
+ void (*set_dtbclk_dto)(
+ struct dccg *dccg,
+- struct dtbclk_dto_params *dto_params);
++ const struct dtbclk_dto_params *params);
+
+ void (*set_audio_dtbclk_dto)(
+ struct dccg *dccg,
+- uint32_t req_audio_dtbclk_khz);
++ const struct dtbclk_dto_params *params);
+
+ void (*set_dispclk_change_mode)(
+ struct dccg *dccg,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
+index f5fd2a0673230..5097037e39625 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
+@@ -346,6 +346,11 @@ struct mpc_funcs {
+ int mpcc_id,
+ const struct mpc_grph_gamut_adjustment *adjust);
+
++ bool (*program_1dlut)(
++ struct mpc *mpc,
++ const struct pwl_params *params,
++ uint32_t rmu_idx);
++
+ bool (*program_shaper)(
+ struct mpc *mpc,
+ const struct pwl_params *params,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+index 8c2f190c47124..d2cb0e7945000 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+@@ -140,6 +140,8 @@ struct hwseq_private_funcs {
+ const struct dc_plane_state *plane_state);
+ bool (*set_shaper_3dlut)(struct pipe_ctx *pipe_ctx,
+ const struct dc_plane_state *plane_state);
++ bool (*set_mcm_luts)(struct pipe_ctx *pipe_ctx,
++ const struct dc_plane_state *plane_state);
+ void (*PLAT_58856_wa)(struct dc_state *context,
+ struct pipe_ctx *pipe_ctx);
+ void (*setup_hpo_hw_control)(const struct dce_hwseq *hws, bool enable);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index ef9b56de143bb..5aa08c031f721 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -714,6 +714,8 @@ int smu_v13_0_get_vbios_bootup_values(struct smu_context *smu)
+ smu->smu_table.boot_values.vclk = smu_info_v3_6->bootup_vclk_10khz;
+ smu->smu_table.boot_values.dclk = smu_info_v3_6->bootup_dclk_10khz;
+ smu->smu_table.boot_values.fclk = smu_info_v3_6->bootup_fclk_10khz;
++ } else if ((frev == 3) && (crev == 1)) {
++ return 0;
+ } else if ((frev == 4) && (crev == 0)) {
+ smu_info_v4_0 = (struct atom_smu_info_v4_0 *)header;
+
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index 307b135da2f66..745c68735dd25 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -78,6 +78,7 @@ config DRM_DISPLAY_CONNECTOR
+ config DRM_FSL_LDB
+ tristate "Freescale i.MX8MP LDB bridge"
+ depends on OF
++ depends on ARCH_MXC || COMPILE_TEST
+ select DRM_KMS_HELPER
+ select DRM_PANEL_BRIDGE
+ help
+@@ -93,6 +94,8 @@ config DRM_ITE_IT6505
+ select DRM_KMS_HELPER
+ select DRM_DP_HELPER
+ select EXTCON
++ select CRYPTO
++ select CRYPTO_HASH
+ help
+ ITE IT6505 DisplayPort bridge chip driver.
+
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index 9e3bb8a8ee409..a031a0cd1f181 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -226,18 +226,6 @@
+ #define ADV7511_REG_CEC_CLK_DIV 0x4e
+ #define ADV7511_REG_CEC_SOFT_RESET 0x50
+
+-static const u8 ADV7511_REG_CEC_RX_FRAME_HDR[] = {
+- ADV7511_REG_CEC_RX1_FRAME_HDR,
+- ADV7511_REG_CEC_RX2_FRAME_HDR,
+- ADV7511_REG_CEC_RX3_FRAME_HDR,
+-};
+-
+-static const u8 ADV7511_REG_CEC_RX_FRAME_LEN[] = {
+- ADV7511_REG_CEC_RX1_FRAME_LEN,
+- ADV7511_REG_CEC_RX2_FRAME_LEN,
+- ADV7511_REG_CEC_RX3_FRAME_LEN,
+-};
+-
+ #define ADV7533_REG_CEC_OFFSET 0x70
+
+ enum adv7511_input_clock {
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+index 399f625a50c8d..0b266f28f150f 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+@@ -15,6 +15,18 @@
+
+ #include "adv7511.h"
+
++static const u8 ADV7511_REG_CEC_RX_FRAME_HDR[] = {
++ ADV7511_REG_CEC_RX1_FRAME_HDR,
++ ADV7511_REG_CEC_RX2_FRAME_HDR,
++ ADV7511_REG_CEC_RX3_FRAME_HDR,
++};
++
++static const u8 ADV7511_REG_CEC_RX_FRAME_LEN[] = {
++ ADV7511_REG_CEC_RX1_FRAME_LEN,
++ ADV7511_REG_CEC_RX2_FRAME_LEN,
++ ADV7511_REG_CEC_RX3_FRAME_LEN,
++};
++
+ #define ADV7511_INT1_CEC_MASK \
+ (ADV7511_INT1_CEC_TX_READY | ADV7511_INT1_CEC_TX_ARBIT_LOST | \
+ ADV7511_INT1_CEC_TX_RETRY_TIMEOUT | ADV7511_INT1_CEC_RX_READY1 | \
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 5bb9300040dd6..38bf28720f3a2 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1065,6 +1065,10 @@ static int adv7511_init_cec_regmap(struct adv7511 *adv)
+ ADV7511_CEC_I2C_ADDR_DEFAULT);
+ if (IS_ERR(adv->i2c_cec))
+ return PTR_ERR(adv->i2c_cec);
++
++ regmap_write(adv->regmap, ADV7511_REG_CEC_I2C_ADDR,
++ adv->i2c_cec->addr << 1);
++
+ i2c_set_clientdata(adv->i2c_cec, adv);
+
+ adv->regmap_cec = devm_regmap_init_i2c(adv->i2c_cec,
+@@ -1271,9 +1275,6 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
+ if (ret)
+ goto err_i2c_unregister_packet;
+
+- regmap_write(adv7511->regmap, ADV7511_REG_CEC_I2C_ADDR,
+- adv7511->i2c_cec->addr << 1);
+-
+ INIT_WORK(&adv7511->hpd_work, adv7511_hpd_work);
+
+ if (i2c->irq) {
+@@ -1392,10 +1393,21 @@ static struct i2c_driver adv7511_driver = {
+
+ static int __init adv7511_init(void)
+ {
+- if (IS_ENABLED(CONFIG_DRM_MIPI_DSI))
+- mipi_dsi_driver_register(&adv7533_dsi_driver);
++ int ret;
+
+- return i2c_add_driver(&adv7511_driver);
++ if (IS_ENABLED(CONFIG_DRM_MIPI_DSI)) {
++ ret = mipi_dsi_driver_register(&adv7533_dsi_driver);
++ if (ret)
++ return ret;
++ }
++
++ ret = i2c_add_driver(&adv7511_driver);
++ if (ret) {
++ if (IS_ENABLED(CONFIG_DRM_MIPI_DSI))
++ mipi_dsi_driver_unregister(&adv7533_dsi_driver);
++ }
++
++ return ret;
+ }
+ module_init(adv7511_init);
+
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 53a5da6c49dd3..d5f6686b7603a 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1657,8 +1657,10 @@ static int anx7625_parse_dt(struct device *dev,
+
+ pdata->panel_bridge = devm_drm_of_get_bridge(dev, np, 1, 0);
+ if (IS_ERR(pdata->panel_bridge)) {
+- if (PTR_ERR(pdata->panel_bridge) == -ENODEV)
++ if (PTR_ERR(pdata->panel_bridge) == -ENODEV) {
++ pdata->panel_bridge = NULL;
+ return 0;
++ }
+
+ return PTR_ERR(pdata->panel_bridge);
+ }
+@@ -2654,14 +2656,6 @@ static int anx7625_i2c_probe(struct i2c_client *client,
+ platform->aux.dev = dev;
+ platform->aux.transfer = anx7625_aux_transfer;
+ drm_dp_aux_init(&platform->aux);
+- devm_of_dp_aux_populate_ep_devices(&platform->aux);
+-
+- ret = anx7625_parse_dt(dev, pdata);
+- if (ret) {
+- if (ret != -EPROBE_DEFER)
+- DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
+- goto free_wq;
+- }
+
+ if (anx7625_register_i2c_dummy_clients(platform, client) != 0) {
+ ret = -ENOMEM;
+@@ -2677,6 +2671,15 @@ static int anx7625_i2c_probe(struct i2c_client *client,
+ if (ret)
+ goto free_wq;
+
++ devm_of_dp_aux_populate_ep_devices(&platform->aux);
++
++ ret = anx7625_parse_dt(dev, pdata);
++ if (ret) {
++ if (ret != -EPROBE_DEFER)
++ DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
++ goto free_wq;
++ }
++
+ if (!platform->pdata.low_power_mode) {
+ anx7625_disable_pd_protocol(platform);
+ pm_runtime_get_sync(dev);
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 7ef8fe5abc12e..c0b182d1374e4 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -586,7 +586,7 @@ lt9611_connector_detect(struct drm_connector *connector, bool force)
+ int connected = 0;
+
+ regmap_read(lt9611->regmap, 0x825e, &reg_val);
+- connected = (reg_val & BIT(0));
++ connected = (reg_val & (BIT(2) | BIT(0)));
+
+ lt9611->status = connected ? connector_status_connected :
+ connector_status_disconnected;
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+index 3d62e6bf68926..310b3b1944919 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+@@ -982,7 +982,7 @@ static int lt9611uxc_remove(struct i2c_client *client)
+ struct lt9611uxc *lt9611uxc = i2c_get_clientdata(client);
+
+ disable_irq(client->irq);
+- flush_scheduled_work();
++ cancel_work_sync(&lt9611uxc->work);
+ lt9611uxc_audio_exit(lt9611uxc);
+ drm_bridge_remove(&lt9611uxc->bridge);
+
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index ec7745c31da07..ab0bce4a988c5 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -605,7 +605,7 @@ static void *sii8620_burst_get_tx_buf(struct sii8620 *ctx, int len)
+ u8 *buf = &ctx->burst.tx_buf[ctx->burst.tx_count];
+ int size = len + 2;
+
+- if (ctx->burst.tx_count + size > ARRAY_SIZE(ctx->burst.tx_buf)) {
++ if (ctx->burst.tx_count + size >= ARRAY_SIZE(ctx->burst.tx_buf)) {
+ dev_err(ctx->dev, "TX-BLK buffer exhausted\n");
+ ctx->error = -EINVAL;
+ return NULL;
+@@ -622,7 +622,7 @@ static u8 *sii8620_burst_get_rx_buf(struct sii8620 *ctx, int len)
+ u8 *buf = &ctx->burst.rx_buf[ctx->burst.rx_count];
+ int size = len + 1;
+
+- if (ctx->burst.tx_count + size > ARRAY_SIZE(ctx->burst.tx_buf)) {
++ if (ctx->burst.rx_count + size >= ARRAY_SIZE(ctx->burst.rx_buf)) {
+ dev_err(ctx->dev, "RX-BLK buffer exhausted\n");
+ ctx->error = -EINVAL;
+ return NULL;
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 485717c8f0b40..16affb42086ad 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1871,7 +1871,7 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
+ of_node_put(host_node);
+ of_node_put(endpoint);
+
+- if (dsi_lanes < 0 || dsi_lanes > 4)
++ if (dsi_lanes <= 0 || dsi_lanes > 4)
+ return -EINVAL;
+
+ if (!host)
+@@ -2004,6 +2004,13 @@ static int tc_probe_bridge_endpoint(struct tc_data *tc)
+ return -EINVAL;
+ }
+
++static void tc_clk_disable(void *data)
++{
++ struct clk *refclk = data;
++
++ clk_disable_unprepare(refclk);
++}
++
+ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ {
+ struct device *dev = &client->dev;
+@@ -2020,6 +2027,24 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ if (ret)
+ return ret;
+
++ tc->refclk = devm_clk_get(dev, "ref");
++ if (IS_ERR(tc->refclk)) {
++ ret = PTR_ERR(tc->refclk);
++ dev_err(dev, "Failed to get refclk: %d\n", ret);
++ return ret;
++ }
++
++ ret = clk_prepare_enable(tc->refclk);
++ if (ret)
++ return ret;
++
++ ret = devm_add_action_or_reset(dev, tc_clk_disable, tc->refclk);
++ if (ret)
++ return ret;
++
++ /* tRSTW = 100 cycles , at 13 MHz that is ~7.69 us */
++ usleep_range(10, 15);
++
+ /* Shut down GPIO is optional */
+ tc->sd_gpio = devm_gpiod_get_optional(dev, "shutdown", GPIOD_OUT_HIGH);
+ if (IS_ERR(tc->sd_gpio))
+@@ -2040,13 +2065,6 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ usleep_range(5000, 10000);
+ }
+
+- tc->refclk = devm_clk_get(dev, "ref");
+- if (IS_ERR(tc->refclk)) {
+- ret = PTR_ERR(tc->refclk);
+- dev_err(dev, "Failed to get refclk: %d\n", ret);
+- return ret;
+- }
+-
+ tc->regmap = devm_regmap_init_i2c(client, &tc_regmap_config);
+ if (IS_ERR(tc->regmap)) {
+ ret = PTR_ERR(tc->regmap);
+diff --git a/drivers/gpu/drm/display/Kconfig b/drivers/gpu/drm/display/Kconfig
+index 1b6e6af375467..09712b88a5b83 100644
+--- a/drivers/gpu/drm/display/Kconfig
++++ b/drivers/gpu/drm/display/Kconfig
+@@ -3,7 +3,7 @@
+ config DRM_DP_AUX_BUS
+ tristate
+ depends on DRM
+- depends on OF
++ depends on OF || COMPILE_TEST
+
+ config DRM_DISPLAY_HELPER
+ tristate
+diff --git a/drivers/gpu/drm/display/drm_dp_aux_bus.c b/drivers/gpu/drm/display/drm_dp_aux_bus.c
+index dccf3e2ea3234..552f949cff597 100644
+--- a/drivers/gpu/drm/display/drm_dp_aux_bus.c
++++ b/drivers/gpu/drm/display/drm_dp_aux_bus.c
+@@ -66,7 +66,6 @@ static int dp_aux_ep_probe(struct device *dev)
+ * @dev: The device to remove.
+ *
+ * Calls through to the endpoint driver remove.
+- *
+ */
+ static void dp_aux_ep_remove(struct device *dev)
+ {
+@@ -120,8 +119,6 @@ ATTRIBUTE_GROUPS(dp_aux_ep_dev);
+ /**
+ * dp_aux_ep_dev_release() - Free memory for the dp_aux_ep device
+ * @dev: The device to free.
+- *
+- * Return: 0 if no error or negative error code.
+ */
+ static void dp_aux_ep_dev_release(struct device *dev)
+ {
+@@ -256,6 +253,7 @@ int of_dp_aux_populate_ep_devices(struct drm_dp_aux *aux)
+
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(of_dp_aux_populate_ep_devices);
+
+ static void of_dp_aux_depopulate_ep_devices_void(void *data)
+ {
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 67b3b9697da7f..18f2b6075b780 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -3860,9 +3860,7 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
+ if (!mgr->mst_primary)
+ goto out_fail;
+
+- ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,
+- DP_RECEIVER_CAP_SIZE);
+- if (ret != DP_RECEIVER_CAP_SIZE) {
++ if (drm_dp_read_dpcd_caps(mgr->aux, mgr->dpcd) < 0) {
+ drm_dbg_kms(mgr->dev, "dpcd read failed - undocked during suspend?\n");
+ goto out_fail;
+ }
+@@ -4911,8 +4909,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ u8 buf[DP_PAYLOAD_TABLE_SIZE];
+ int ret;
+
+- ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);
+- if (ret) {
++ if (drm_dp_read_dpcd_caps(mgr->aux, buf) < 0) {
+ seq_printf(m, "dpcd read failed\n");
+ goto out;
+ }
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index bc43e1b320921..1dea0e2f0cabb 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5697,6 +5697,7 @@ static int drm_edid_connector_update(struct drm_connector *connector,
+ u32 quirks;
+
+ if (edid == NULL) {
++ drm_reset_display_info(connector);
+ clear_eld(connector);
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 5ad2b6a2778ca..1705e8d345aba 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -680,7 +680,11 @@ static void drm_fb_helper_damage(struct fb_info *info, u32 x, u32 y,
+ schedule_work(&helper->damage_work);
+ }
+
+-/* Convert memory region into area of scanlines and pixels per scanline */
++/*
++ * Convert memory region into area of scanlines and pixels per
++ * scanline. The parameters off and len must not reach beyond
++ * the end of the framebuffer.
++ */
+ static void drm_fb_helper_memory_range_to_clip(struct fb_info *info, off_t off, size_t len,
+ struct drm_rect *clip)
+ {
+@@ -715,22 +719,29 @@ static void drm_fb_helper_memory_range_to_clip(struct fb_info *info, off_t off,
+ */
+ void drm_fb_helper_deferred_io(struct fb_info *info, struct list_head *pagereflist)
+ {
+- unsigned long start, end, min, max;
++ unsigned long start, end, min_off, max_off;
+ struct fb_deferred_io_pageref *pageref;
+ struct drm_rect damage_area;
+
+- min = ULONG_MAX;
+- max = 0;
++ min_off = ULONG_MAX;
++ max_off = 0;
+ list_for_each_entry(pageref, pagereflist, list) {
+ start = pageref->offset;
+ end = start + PAGE_SIZE;
+- min = min(min, start);
+- max = max(max, end);
++ min_off = min(min_off, start);
++ max_off = max(max_off, end);
+ }
+- if (min >= max)
++ if (min_off >= max_off)
+ return;
+
+- drm_fb_helper_memory_range_to_clip(info, min, max - min, &damage_area);
++ /*
++ * As we can only track pages, we might reach beyond the end
++ * of the screen and account for non-existing scanlines. Hence,
++ * keep the covered memory area within the screen buffer.
++ */
++ max_off = min(max_off, info->screen_size);
++
++ drm_fb_helper_memory_range_to_clip(info, min_off, max_off - min_off, &damage_area);
+ drm_fb_helper_damage(info, damage_area.x1, damage_area.y1,
+ drm_rect_width(&damage_area),
+ drm_rect_height(&damage_area));
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index eb0c2d041f138..86d670c712867 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -1226,7 +1226,7 @@ retry:
+ ret = dma_resv_lock_slow_interruptible(obj->resv,
+ acquire_ctx);
+ if (ret) {
+- ww_acquire_done(acquire_ctx);
++ ww_acquire_fini(acquire_ctx);
+ return ret;
+ }
+ }
+@@ -1251,7 +1251,7 @@ retry:
+ goto retry;
+ }
+
+- ww_acquire_done(acquire_ctx);
++ ww_acquire_fini(acquire_ctx);
+ return ret;
+ }
+ }
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index 8ad0e02991ca0..904fc893c905b 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -302,6 +302,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
++ dma_buf_vunmap(obj->import_attach->dmabuf, map);
+ ret = -EIO;
+ goto err_put_pages;
+ }
+diff --git a/drivers/gpu/drm/drm_mipi_dbi.c b/drivers/gpu/drm/drm_mipi_dbi.c
+index 9314f2ead79fe..09e4edb5a9927 100644
+--- a/drivers/gpu/drm/drm_mipi_dbi.c
++++ b/drivers/gpu/drm/drm_mipi_dbi.c
+@@ -1199,6 +1199,13 @@ int mipi_dbi_spi_transfer(struct spi_device *spi, u32 speed_hz,
+ size_t chunk;
+ int ret;
+
++ /* In __spi_validate, there's a validation that no partial transfers
++ * are accepted (xfer->len % w_size must be zero).
++ * Here we align max_chunk to multiple of 2 (16bits),
++ * to prevent transfers from being rejected.
++ */
++ max_chunk = ALIGN_DOWN(max_chunk, 2);
++
+ spi_message_init_with_transfers(&m, &tr, 1);
+
+ while (len) {
+diff --git a/drivers/gpu/drm/exynos/exynos7_drm_decon.c b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+index c04264f70ad17..3c31405600f0b 100644
+--- a/drivers/gpu/drm/exynos/exynos7_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+@@ -800,31 +800,40 @@ static int exynos7_decon_resume(struct device *dev)
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to prepare_enable the pclk [%d]\n",
+ ret);
+- return ret;
++ goto err_pclk_enable;
+ }
+
+ ret = clk_prepare_enable(ctx->aclk);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to prepare_enable the aclk [%d]\n",
+ ret);
+- return ret;
++ goto err_aclk_enable;
+ }
+
+ ret = clk_prepare_enable(ctx->eclk);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to prepare_enable the eclk [%d]\n",
+ ret);
+- return ret;
++ goto err_eclk_enable;
+ }
+
+ ret = clk_prepare_enable(ctx->vclk);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to prepare_enable the vclk [%d]\n",
+ ret);
+- return ret;
++ goto err_vclk_enable;
+ }
+
+ return 0;
++
++err_vclk_enable:
++ clk_disable_unprepare(ctx->eclk);
++err_eclk_enable:
++ clk_disable_unprepare(ctx->aclk);
++err_aclk_enable:
++ clk_disable_unprepare(ctx->pclk);
++err_pclk_enable:
++ return ret;
+ }
+ #endif
+
+diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_modeset.c b/drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
+index 27f4fcb058f93..b8e64dd8d3a60 100644
+--- a/drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
++++ b/drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
+@@ -7,9 +7,11 @@
+
+ #include <drm/drm_damage_helper.h>
+ #include <drm/drm_drv.h>
++#include <drm/drm_edid.h>
+ #include <drm/drm_fb_helper.h>
+ #include <drm/drm_format_helper.h>
+ #include <drm/drm_fourcc.h>
++#include <drm/drm_framebuffer.h>
+ #include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_gem_shmem_helper.h>
+diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
+index d0752e5553db4..b7b257f54d2e2 100644
+--- a/drivers/gpu/drm/i915/i915_gem.h
++++ b/drivers/gpu/drm/i915/i915_gem.h
+@@ -54,9 +54,7 @@ struct drm_i915_private;
+ } while(0)
+ #define GEM_WARN_ON(expr) WARN_ON(expr)
+
+-#define GEM_DEBUG_DECL(var) var
+ #define GEM_DEBUG_EXEC(expr) expr
+-#define GEM_DEBUG_BUG_ON(expr) GEM_BUG_ON(expr)
+ #define GEM_DEBUG_WARN_ON(expr) GEM_WARN_ON(expr)
+
+ #else
+@@ -66,9 +64,7 @@ struct drm_i915_private;
+ #define GEM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
+ #define GEM_WARN_ON(expr) ({ unlikely(!!(expr)); })
+
+-#define GEM_DEBUG_DECL(var)
+ #define GEM_DEBUG_EXEC(expr) do { } while (0)
+-#define GEM_DEBUG_BUG_ON(expr)
+ #define GEM_DEBUG_WARN_ON(expr) ({ BUILD_BUG_ON_INVALID(expr); 0; })
+ #endif
+
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index 8eb0ad501a7b9..150a973c60010 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -69,6 +69,7 @@ struct jz_soc_info {
+ bool map_noncoherent;
+ bool use_extended_hwdesc;
+ bool plane_f0_not_working;
++ u32 max_burst;
+ unsigned int max_width, max_height;
+ const u32 *formats_f0, *formats_f1;
+ unsigned int num_formats_f0, num_formats_f1;
+@@ -318,8 +319,9 @@ static void ingenic_drm_crtc_update_timings(struct ingenic_drm *priv,
+ regmap_write(priv->map, JZ_REG_LCD_REV, mode->htotal << 16);
+ }
+
+- regmap_set_bits(priv->map, JZ_REG_LCD_CTRL,
+- JZ_LCD_CTRL_OFUP | JZ_LCD_CTRL_BURST_16);
++ regmap_update_bits(priv->map, JZ_REG_LCD_CTRL,
++ JZ_LCD_CTRL_OFUP | JZ_LCD_CTRL_BURST_MASK,
++ JZ_LCD_CTRL_OFUP | priv->soc_info->max_burst);
+
+ /*
+ * IPU restart - specify how much time the LCDC will wait before
+@@ -1518,6 +1520,7 @@ static const struct jz_soc_info jz4740_soc_info = {
+ .map_noncoherent = false,
+ .max_width = 800,
+ .max_height = 600,
++ .max_burst = JZ_LCD_CTRL_BURST_16,
+ .formats_f1 = jz4740_formats,
+ .num_formats_f1 = ARRAY_SIZE(jz4740_formats),
+ /* JZ4740 has only one plane */
+@@ -1529,6 +1532,7 @@ static const struct jz_soc_info jz4725b_soc_info = {
+ .map_noncoherent = false,
+ .max_width = 800,
+ .max_height = 600,
++ .max_burst = JZ_LCD_CTRL_BURST_16,
+ .formats_f1 = jz4725b_formats_f1,
+ .num_formats_f1 = ARRAY_SIZE(jz4725b_formats_f1),
+ .formats_f0 = jz4725b_formats_f0,
+@@ -1541,6 +1545,7 @@ static const struct jz_soc_info jz4770_soc_info = {
+ .map_noncoherent = true,
+ .max_width = 1280,
+ .max_height = 720,
++ .max_burst = JZ_LCD_CTRL_BURST_64,
+ .formats_f1 = jz4770_formats_f1,
+ .num_formats_f1 = ARRAY_SIZE(jz4770_formats_f1),
+ .formats_f0 = jz4770_formats_f0,
+@@ -1555,6 +1560,7 @@ static const struct jz_soc_info jz4780_soc_info = {
+ .plane_f0_not_working = true, /* REVISIT */
+ .max_width = 4096,
+ .max_height = 2048,
++ .max_burst = JZ_LCD_CTRL_BURST_64,
+ .formats_f1 = jz4770_formats_f1,
+ .num_formats_f1 = ARRAY_SIZE(jz4770_formats_f1),
+ .formats_f0 = jz4770_formats_f0,
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.h b/drivers/gpu/drm/ingenic/ingenic-drm.h
+index cb1d09b625881..e5bd007ea93d8 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.h
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.h
+@@ -106,6 +106,9 @@
+ #define JZ_LCD_CTRL_BURST_4 (0x0 << 28)
+ #define JZ_LCD_CTRL_BURST_8 (0x1 << 28)
+ #define JZ_LCD_CTRL_BURST_16 (0x2 << 28)
++#define JZ_LCD_CTRL_BURST_32 (0x3 << 28)
++#define JZ_LCD_CTRL_BURST_64 (0x4 << 28)
++#define JZ_LCD_CTRL_BURST_MASK (0x7 << 28)
+ #define JZ_LCD_CTRL_RGB555 BIT(27)
+ #define JZ_LCD_CTRL_OFUP BIT(26)
+ #define JZ_LCD_CTRL_FRC_GRAYSCALE_16 (0x0 << 24)
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index 5651734ce977f..9f9ac8699310d 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -1111,6 +1111,7 @@ static int mcde_dsi_bind(struct device *dev, struct device *master,
+ bridge = of_drm_find_bridge(child);
+ if (!bridge) {
+ dev_err(dev, "failed to find bridge\n");
++ of_node_put(child);
+ return -EINVAL;
+ }
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index e61cd67b978ff..41c783349321e 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -54,13 +54,7 @@ enum mtk_dpi_out_channel_swap {
+ };
+
+ enum mtk_dpi_out_color_format {
+- MTK_DPI_COLOR_FORMAT_RGB,
+- MTK_DPI_COLOR_FORMAT_RGB_FULL,
+- MTK_DPI_COLOR_FORMAT_YCBCR_444,
+- MTK_DPI_COLOR_FORMAT_YCBCR_422,
+- MTK_DPI_COLOR_FORMAT_XV_YCC,
+- MTK_DPI_COLOR_FORMAT_YCBCR_444_FULL,
+- MTK_DPI_COLOR_FORMAT_YCBCR_422_FULL
++ MTK_DPI_COLOR_FORMAT_RGB
+ };
+
+ struct mtk_dpi {
+@@ -364,24 +358,11 @@ static void mtk_dpi_config_disable_edge(struct mtk_dpi *dpi)
+ static void mtk_dpi_config_color_format(struct mtk_dpi *dpi,
+ enum mtk_dpi_out_color_format format)
+ {
+- if ((format == MTK_DPI_COLOR_FORMAT_YCBCR_444) ||
+- (format == MTK_DPI_COLOR_FORMAT_YCBCR_444_FULL)) {
+- mtk_dpi_config_yuv422_enable(dpi, false);
+- mtk_dpi_config_csc_enable(dpi, true);
+- mtk_dpi_config_swap_input(dpi, false);
+- mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_BGR);
+- } else if ((format == MTK_DPI_COLOR_FORMAT_YCBCR_422) ||
+- (format == MTK_DPI_COLOR_FORMAT_YCBCR_422_FULL)) {
+- mtk_dpi_config_yuv422_enable(dpi, true);
+- mtk_dpi_config_csc_enable(dpi, true);
+- mtk_dpi_config_swap_input(dpi, true);
+- mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+- } else {
+- mtk_dpi_config_yuv422_enable(dpi, false);
+- mtk_dpi_config_csc_enable(dpi, false);
+- mtk_dpi_config_swap_input(dpi, false);
+- mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+- }
++ /* only support RGB888 */
++ mtk_dpi_config_yuv422_enable(dpi, false);
++ mtk_dpi_config_csc_enable(dpi, false);
++ mtk_dpi_config_swap_input(dpi, false);
++ mtk_dpi_config_channel_swap(dpi, MTK_DPI_OUT_CHANNEL_SWAP_RGB);
+ }
+
+ static void mtk_dpi_dual_edge(struct mtk_dpi *dpi)
+@@ -436,7 +417,6 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ if (dpi->pinctrl && dpi->pins_dpi)
+ pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
+
+- mtk_dpi_enable(dpi);
+ return 0;
+
+ err_pixel:
+@@ -658,6 +638,7 @@ static void mtk_dpi_bridge_enable(struct drm_bridge *bridge)
+
+ mtk_dpi_power_on(dpi);
+ mtk_dpi_set_display_mode(dpi, &dpi->mode);
++ mtk_dpi_enable(dpi);
+ }
+
+ static enum drm_mode_status
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index d9f10a33e6fad..af2f123e9a9a9 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -203,6 +203,7 @@ struct mtk_dsi {
+ struct mtk_phy_timing phy_timing;
+ int refcount;
+ bool enabled;
++ bool lanes_ready;
+ u32 irq_data;
+ wait_queue_head_t irq_wait_queue;
+ const struct mtk_dsi_driver_data *driver_data;
+@@ -661,18 +662,11 @@ static int mtk_dsi_poweron(struct mtk_dsi *dsi)
+ mtk_dsi_reset_engine(dsi);
+ mtk_dsi_phy_timconfig(dsi);
+
+- mtk_dsi_rxtx_control(dsi);
+- usleep_range(30, 100);
+- mtk_dsi_reset_dphy(dsi);
+ mtk_dsi_ps_control_vact(dsi);
+ mtk_dsi_set_vm_cmd(dsi);
+ mtk_dsi_config_vdo_timing(dsi);
+ mtk_dsi_set_interrupt_enable(dsi);
+
+- mtk_dsi_clk_ulp_mode_leave(dsi);
+- mtk_dsi_lane0_ulp_mode_leave(dsi);
+- mtk_dsi_clk_hs_mode(dsi, 0);
+-
+ return 0;
+ err_disable_engine_clk:
+ clk_disable_unprepare(dsi->engine_clk);
+@@ -691,19 +685,11 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ if (--dsi->refcount != 0)
+ return;
+
+- /*
+- * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
+- * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
+- * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
+- * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
+- * after dsi is fully set.
+- */
+- mtk_dsi_stop(dsi);
+-
+- mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+ mtk_dsi_reset_engine(dsi);
+ mtk_dsi_lane0_ulp_mode_enter(dsi);
+ mtk_dsi_clk_ulp_mode_enter(dsi);
++ /* set the lane number as 0 to pull down mipi */
++ writel(0, dsi->regs + DSI_TXRX_CTRL);
+
+ mtk_dsi_disable(dsi);
+
+@@ -711,21 +697,31 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ clk_disable_unprepare(dsi->digital_clk);
+
+ phy_power_off(dsi->phy);
++
++ dsi->lanes_ready = false;
+ }
+
+-static void mtk_output_dsi_enable(struct mtk_dsi *dsi)
++static void mtk_dsi_lane_ready(struct mtk_dsi *dsi)
+ {
+- int ret;
++ if (!dsi->lanes_ready) {
++ dsi->lanes_ready = true;
++ mtk_dsi_rxtx_control(dsi);
++ usleep_range(30, 100);
++ mtk_dsi_reset_dphy(dsi);
++ mtk_dsi_clk_ulp_mode_leave(dsi);
++ mtk_dsi_lane0_ulp_mode_leave(dsi);
++ mtk_dsi_clk_hs_mode(dsi, 0);
++ msleep(20);
++ /* The reaction time after pulling up the mipi signal for dsi_rx */
++ }
++}
+
++static void mtk_output_dsi_enable(struct mtk_dsi *dsi)
++{
+ if (dsi->enabled)
+ return;
+
+- ret = mtk_dsi_poweron(dsi);
+- if (ret < 0) {
+- DRM_ERROR("failed to power on dsi\n");
+- return;
+- }
+-
++ mtk_dsi_lane_ready(dsi);
+ mtk_dsi_set_mode(dsi);
+ mtk_dsi_clk_hs_mode(dsi, 1);
+
+@@ -739,7 +735,16 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
+ if (!dsi->enabled)
+ return;
+
+- mtk_dsi_poweroff(dsi);
++ /*
++ * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
++ * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
++ * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
++ * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
++ * after dsi is fully set.
++ */
++ mtk_dsi_stop(dsi);
++
++ mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+
+ dsi->enabled = false;
+ }
+@@ -763,24 +768,50 @@ static void mtk_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ drm_display_mode_to_videomode(adjusted, &dsi->vm);
+ }
+
+-static void mtk_dsi_bridge_disable(struct drm_bridge *bridge)
++static void mtk_dsi_bridge_atomic_disable(struct drm_bridge *bridge,
++ struct drm_bridge_state *old_bridge_state)
+ {
+ struct mtk_dsi *dsi = bridge_to_dsi(bridge);
+
+ mtk_output_dsi_disable(dsi);
+ }
+
+-static void mtk_dsi_bridge_enable(struct drm_bridge *bridge)
++static void mtk_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
++ struct drm_bridge_state *old_bridge_state)
+ {
+ struct mtk_dsi *dsi = bridge_to_dsi(bridge);
+
++ if (dsi->refcount == 0)
++ return;
++
+ mtk_output_dsi_enable(dsi);
+ }
+
++static void mtk_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++ struct drm_bridge_state *old_bridge_state)
++{
++ struct mtk_dsi *dsi = bridge_to_dsi(bridge);
++ int ret;
++
++ ret = mtk_dsi_poweron(dsi);
++ if (ret < 0)
++ DRM_ERROR("failed to power on dsi\n");
++}
++
++static void mtk_dsi_bridge_atomic_post_disable(struct drm_bridge *bridge,
++ struct drm_bridge_state *old_bridge_state)
++{
++ struct mtk_dsi *dsi = bridge_to_dsi(bridge);
++
++ mtk_dsi_poweroff(dsi);
++}
++
+ static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {
+ .attach = mtk_dsi_bridge_attach,
+- .disable = mtk_dsi_bridge_disable,
+- .enable = mtk_dsi_bridge_enable,
++ .atomic_disable = mtk_dsi_bridge_atomic_disable,
++ .atomic_enable = mtk_dsi_bridge_atomic_enable,
++ .atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,
++ .atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,
+ .mode_set = mtk_dsi_bridge_mode_set,
+ };
+
+@@ -1000,6 +1031,8 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ if (MTK_DSI_HOST_IS_READ(msg->type))
+ irq_flag |= LPRX_RD_RDY_INT_FLAG;
+
++ mtk_dsi_lane_ready(dsi);
++
+ ret = mtk_dsi_host_send_cmd(dsi, msg, irq_flag);
+ if (ret)
+ goto restore_dsi_mode;
+diff --git a/drivers/gpu/drm/meson/meson_encoder_cvbs.c b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+index fd8db97ba8ba2..8110a6e39320f 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_cvbs.c
++++ b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+@@ -238,6 +238,7 @@ int meson_encoder_cvbs_init(struct meson_drm *priv)
+ }
+
+ meson_encoder_cvbs->next_bridge = of_drm_find_bridge(remote);
++ of_node_put(remote);
+ if (!meson_encoder_cvbs->next_bridge) {
+ dev_err(priv->dev, "Failed to find CVBS Connector bridge\n");
+ return -EPROBE_DEFER;
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index 5e306de6f4853..a7692584487cc 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -365,7 +365,8 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ meson_encoder_hdmi->next_bridge = of_drm_find_bridge(remote);
+ if (!meson_encoder_hdmi->next_bridge) {
+ dev_err(priv->dev, "Failed to find HDMI transceiver bridge\n");
+- return -EPROBE_DEFER;
++ ret = -EPROBE_DEFER;
++ goto err_put_node;
+ }
+
+ /* HDMI Encoder Bridge */
+@@ -383,7 +384,7 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ DRM_MODE_ENCODER_TMDS);
+ if (ret) {
+ dev_err(priv->dev, "Failed to init HDMI encoder: %d\n", ret);
+- return ret;
++ goto err_put_node;
+ }
+
+ meson_encoder_hdmi->encoder.possible_crtcs = BIT(0);
+@@ -393,7 +394,7 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ if (ret) {
+ dev_err(priv->dev, "Failed to attach bridge: %d\n", ret);
+- return ret;
++ goto err_put_node;
+ }
+
+ /* Initialize & attach Bridge Connector */
+@@ -401,7 +402,8 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ &meson_encoder_hdmi->encoder);
+ if (IS_ERR(meson_encoder_hdmi->connector)) {
+ dev_err(priv->dev, "Unable to create HDMI bridge connector\n");
+- return PTR_ERR(meson_encoder_hdmi->connector);
++ ret = PTR_ERR(meson_encoder_hdmi->connector);
++ goto err_put_node;
+ }
+ drm_connector_attach_encoder(meson_encoder_hdmi->connector,
+ &meson_encoder_hdmi->encoder);
+@@ -428,6 +430,7 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ meson_encoder_hdmi->connector->ycbcr_420_allowed = true;
+
+ pdev = of_find_device_by_node(remote);
++ of_node_put(remote);
+ if (pdev) {
+ struct cec_connector_info conn_info;
+ struct cec_notifier *notifier;
+@@ -435,8 +438,10 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ cec_fill_conn_info_from_drm(&conn_info, meson_encoder_hdmi->connector);
+
+ notifier = cec_notifier_conn_register(&pdev->dev, NULL, &conn_info);
+- if (!notifier)
++ if (!notifier) {
++ put_device(&pdev->dev);
+ return -ENOMEM;
++ }
+
+ meson_encoder_hdmi->cec_notifier = notifier;
+ }
+@@ -444,4 +449,8 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ dev_dbg(priv->dev, "HDMI encoder initialized\n");
+
+ return 0;
++
++err_put_node:
++ of_node_put(remote);
++ return ret;
+ }
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index abde7655477db..4ad8d62c5631d 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -667,16 +667,26 @@ static void mgag200_disable_display(struct mga_device *mdev)
+
+ static int mga_vga_get_modes(struct drm_connector *connector)
+ {
++ struct mga_device *mdev = to_mga_device(connector->dev);
+ struct mga_connector *mga_connector = to_mga_connector(connector);
+ struct edid *edid;
+ int ret = 0;
+
++ /*
++ * Protect access to I/O registers from concurrent modesetting
++ * by acquiring the I/O-register lock.
++ */
++ mutex_lock(&mdev->rmmio_lock);
++
+ edid = drm_get_edid(connector, &mga_connector->i2c->adapter);
+ if (edid) {
+ drm_connector_update_edid_property(connector, edid);
+ ret = drm_add_edid_modes(connector, edid);
+ kfree(edid);
+ }
++
++ mutex_unlock(&mdev->rmmio_lock);
++
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index c424e9a376696..3dcec7acb3840 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1666,18 +1666,10 @@ static u64 a5xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ {
+ u64 busy_cycles;
+
+- /* Only read the gpu busy if the hardware is already active */
+- if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0) {
+- *out_sample_rate = 1;
+- return 0;
+- }
+-
+ busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
+ REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+ *out_sample_rate = clk_get_rate(gpu->core_clk);
+
+- pm_runtime_put(&gpu->pdev->dev);
+-
+ return busy_cycles;
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 9f76f5b157598..dc715d88ff214 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -102,7 +102,8 @@ bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
+ A6XX_GMU_SPTPRAC_PWR_CLK_STATUS_GX_HM_CLK_OFF));
+ }
+
+-void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
++void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
++ bool suspended)
+ {
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+@@ -127,15 +128,16 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+
+ /*
+ * This can get called from devfreq while the hardware is idle. Don't
+- * bring up the power if it isn't already active
++ * bring up the power if it isn't already active. All we're doing here
++ * is updating the frequency so that when we come back online we're at
++ * the right rate.
+ */
+- if (pm_runtime_get_if_in_use(gmu->dev) == 0)
++ if (suspended)
+ return;
+
+ if (!gmu->legacy) {
+ a6xx_hfi_set_freq(gmu, perf_index);
+ dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
+- pm_runtime_put(gmu->dev);
+ return;
+ }
+
+@@ -159,7 +161,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+ dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
+
+ dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
+- pm_runtime_put(gmu->dev);
+ }
+
+ unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu)
+@@ -895,7 +896,7 @@ static void a6xx_gmu_set_initial_freq(struct msm_gpu *gpu, struct a6xx_gmu *gmu)
+ return;
+
+ gmu->freq = 0; /* so a6xx_gmu_set_freq() doesn't exit early */
+- a6xx_gmu_set_freq(gpu, gpu_opp);
++ a6xx_gmu_set_freq(gpu, gpu_opp, false);
+ dev_pm_opp_put(gpu_opp);
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 42ed9a3c49055..8c02a67f29f25 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1658,27 +1658,21 @@ static u64 a6xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ /* 19.2MHz */
+ *out_sample_rate = 19200000;
+
+- /* Only read the gpu busy if the hardware is already active */
+- if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
+- return 0;
+-
+ busy_cycles = gmu_read64(&a6xx_gpu->gmu,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
+
+-
+- pm_runtime_put(a6xx_gpu->gmu.dev);
+-
+ return busy_cycles;
+ }
+
+-static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
++static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
++ bool suspended)
+ {
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+ mutex_lock(&a6xx_gpu->gmu.lock);
+- a6xx_gmu_set_freq(gpu, opp);
++ a6xx_gmu_set_freq(gpu, opp, suspended);
+ mutex_unlock(&a6xx_gpu->gmu.lock);
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 86e0a7c3fe6df..ab853f61db632 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -77,7 +77,8 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state);
+ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
+ void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
+
+-void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp);
++void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
++ bool suspended);
+ unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
+
+ void a6xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index b56f777dbd0ea..4c5c1f627cb88 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -361,6 +361,9 @@ static void _dpu_crtc_blend_setup_mixer(struct drm_crtc *crtc,
+ if (!state)
+ continue;
+
++ if (!state->visible)
++ continue;
++
+ pstate = to_dpu_plane_state(state);
+ fb = state->fb;
+
+@@ -1134,6 +1137,9 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+ if (cnt >= DPU_STAGE_MAX * 4)
+ continue;
+
++ if (!pstate->visible)
++ continue;
++
+ pstates[cnt].dpu_pstate = dpu_pstate;
+ pstates[cnt].drm_pstate = pstate;
+ pstates[cnt].stage = pstate->normalized_zpos;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index a1b8c45929437..9b4df3084366b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1048,24 +1048,6 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ phys->hw_pp = dpu_enc->hw_pp[i];
+ phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]);
+
+- if (phys->intf_idx >= INTF_0 && phys->intf_idx < INTF_MAX)
+- phys->hw_intf = dpu_rm_get_intf(&dpu_kms->rm, phys->intf_idx);
+-
+- if (phys->wb_idx >= WB_0 && phys->wb_idx < WB_MAX)
+- phys->hw_wb = dpu_rm_get_wb(&dpu_kms->rm, phys->wb_idx);
+-
+- if (!phys->hw_intf && !phys->hw_wb) {
+- DPU_ERROR_ENC(dpu_enc,
+- "no intf or wb block assigned at idx: %d\n", i);
+- return;
+- }
+-
+- if (phys->hw_intf && phys->hw_wb) {
+- DPU_ERROR_ENC(dpu_enc,
+- "invalid phys both intf and wb block at idx: %d\n", i);
+- return;
+- }
+-
+ phys->cached_mode = crtc_state->adjusted_mode;
+ if (phys->ops.atomic_mode_set)
+ phys->ops.atomic_mode_set(phys, crtc_state, conn_state);
+@@ -2294,7 +2276,25 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc,
+ struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
+ atomic_set(&phys->vsync_cnt, 0);
+ atomic_set(&phys->underrun_cnt, 0);
++
++ if (phys->intf_idx >= INTF_0 && phys->intf_idx < INTF_MAX)
++ phys->hw_intf = dpu_rm_get_intf(&dpu_kms->rm, phys->intf_idx);
++
++ if (phys->wb_idx >= WB_0 && phys->wb_idx < WB_MAX)
++ phys->hw_wb = dpu_rm_get_wb(&dpu_kms->rm, phys->wb_idx);
++
++ if (!phys->hw_intf && !phys->hw_wb) {
++ DPU_ERROR_ENC(dpu_enc, "no intf or wb block assigned at idx: %d\n", i);
++ ret = -EINVAL;
++ }
++
++ if (phys->hw_intf && phys->hw_wb) {
++ DPU_ERROR_ENC(dpu_enc,
++ "invalid phys both intf and wb block at idx: %d\n", i);
++ ret = -EINVAL;
++ }
+ }
++
+ mutex_unlock(&dpu_enc->enc_lock);
+
+ return ret;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+index 0ec809ab06e72..15919e1a8dc3b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+@@ -20,8 +20,6 @@
+ #include "dpu_crtc.h"
+ #include "disp/msm_disp_snapshot.h"
+
+-#define DEFAULT_MAX_WRITEBACK_WIDTH 2048
+-
+ #define to_dpu_encoder_phys_wb(x) \
+ container_of(x, struct dpu_encoder_phys_wb, base)
+
+@@ -278,9 +276,9 @@ static int dpu_encoder_phys_wb_atomic_check(
+ DPU_ERROR("invalid fb h=%d, mode h=%d\n", fb->height,
+ mode->vdisplay);
+ return -EINVAL;
+- } else if (fb->width > DEFAULT_MAX_WRITEBACK_WIDTH) {
++ } else if (fb->width > phys_enc->hw_wb->caps->maxlinewidth) {
+ DPU_ERROR("invalid fb w=%d, maxlinewidth=%u\n",
+- fb->width, DEFAULT_MAX_WRITEBACK_WIDTH);
++ fb->width, phys_enc->hw_wb->caps->maxlinewidth);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 400ebceb56bb6..dd7537e32f885 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -1285,7 +1285,7 @@ static const struct dpu_intf_cfg qcm2290_intf[] = {
+ * Writeback blocks config
+ *************************************************************/
+ #define WB_BLK(_name, _id, _base, _features, _clk_ctrl, \
+- __xin_id, vbif_id, _reg, _wb_done_bit) \
++ __xin_id, vbif_id, _reg, _max_linewidth, _wb_done_bit) \
+ { \
+ .name = _name, .id = _id, \
+ .base = _base, .len = 0x2c8, \
+@@ -1295,13 +1295,13 @@ static const struct dpu_intf_cfg qcm2290_intf[] = {
+ .clk_ctrl = _clk_ctrl, \
+ .xin_id = __xin_id, \
+ .vbif_idx = vbif_id, \
+- .maxlinewidth = DEFAULT_DPU_LINE_WIDTH, \
++ .maxlinewidth = _max_linewidth, \
+ .intr_wb_done = DPU_IRQ_IDX(_reg, _wb_done_bit) \
+ }
+
+ static const struct dpu_wb_cfg sm8250_wb[] = {
+ WB_BLK("wb_2", WB_2, 0x65000, WB_SM8250_MASK, DPU_CLK_CTRL_WB2, 6,
+- VBIF_RT, MDP_SSPP_TOP0_INTR, 4),
++ VBIF_RT, MDP_SSPP_TOP0_INTR, 4096, 4),
+ };
+
+ /*************************************************************
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+index a4f5cb90f3e80..e4b8a789835a4 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+@@ -123,12 +123,13 @@ int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ {
+ struct msm_drm_private *priv = s->dev->dev_private;
+ struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+- struct mdp5_global_state *state = mdp5_get_global_state(s);
++ struct mdp5_global_state *state;
+ struct mdp5_hw_pipe_state *new_state;
+
+ if (!hwpipe)
+ return 0;
+
++ state = mdp5_get_global_state(s);
+ if (IS_ERR(state))
+ return PTR_ERR(state);
+
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index cf24e68864ba0..73070ec1a9361 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -180,6 +180,9 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ goto fail;
+ }
+
++ for (i = 0; i < config->pwr_reg_cnt; i++)
++ hdmi->pwr_regs[i].supply = config->pwr_reg_names[i];
++
+ ret = devm_regulator_bulk_get(&pdev->dev, config->pwr_reg_cnt, hdmi->pwr_regs);
+ if (ret) {
+ DRM_DEV_ERROR(&pdev->dev, "failed to get pwr regulator: %d\n", ret);
+diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
+index 38e3323bc2324..a47e5837c528f 100644
+--- a/drivers/gpu/drm/msm/msm_fence.c
++++ b/drivers/gpu/drm/msm/msm_fence.c
+@@ -28,6 +28,14 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
+ fctx->fenceptr = fenceptr;
+ spin_lock_init(&fctx->spinlock);
+
++ /*
++ * Start out close to the 32b fence rollover point, so we can
++ * catch bugs with fence comparisons.
++ */
++ fctx->last_fence = 0xffffff00;
++ fctx->completed_fence = fctx->last_fence;
++ *fctx->fenceptr = fctx->last_fence;
++
+ return fctx;
+ }
+
+@@ -52,7 +60,8 @@ void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
+ unsigned long flags;
+
+ spin_lock_irqsave(&fctx->spinlock, flags);
+- fctx->completed_fence = max(fence, fctx->completed_fence);
++ if (fence_after(fence, fctx->completed_fence))
++ fctx->completed_fence = fence;
+ spin_unlock_irqrestore(&fctx->spinlock, flags);
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
+index 6def008830463..31269c1c896b5 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.h
++++ b/drivers/gpu/drm/msm/msm_gpu.h
+@@ -64,11 +64,14 @@ struct msm_gpu_funcs {
+ /* for generation specific debugfs: */
+ void (*debugfs_init)(struct msm_gpu *gpu, struct drm_minor *minor);
+ #endif
++ /* note: gpu_busy() can assume that we have been pm_resumed */
+ u64 (*gpu_busy)(struct msm_gpu *gpu, unsigned long *out_sample_rate);
+ struct msm_gpu_state *(*gpu_state_get)(struct msm_gpu *gpu);
+ int (*gpu_state_put)(struct msm_gpu_state *state);
+ unsigned long (*gpu_get_freq)(struct msm_gpu *gpu);
+- void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp);
++ /* note: gpu_set_freq() can assume that we have been pm_resumed */
++ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
++ bool suspended);
+ struct msm_gem_address_space *(*create_address_space)
+ (struct msm_gpu *gpu, struct platform_device *pdev);
+ struct msm_gem_address_space *(*create_private_address_space)
+@@ -92,6 +95,9 @@ struct msm_gpu_devfreq {
+ /** devfreq: devfreq instance */
+ struct devfreq *devfreq;
+
++ /** lock: lock for "suspended", "busy_cycles", and "time" */
++ struct mutex lock;
++
+ /**
+ * idle_constraint:
+ *
+@@ -135,6 +141,9 @@ struct msm_gpu_devfreq {
+ * elapsed
+ */
+ struct msm_hrtimer_work boost_work;
++
++ /** suspended: tracks if we're suspended */
++ bool suspended;
+ };
+
+ struct msm_gpu {
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index d2539ca78c296..ea94bc18e72eb 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -20,6 +20,7 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
+ u32 flags)
+ {
+ struct msm_gpu *gpu = dev_to_gpu(dev);
++ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ struct dev_pm_opp *opp;
+
+ /*
+@@ -32,10 +33,13 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
+
+ trace_msm_gpu_freq_change(dev_pm_opp_get_freq(opp));
+
+- if (gpu->funcs->gpu_set_freq)
+- gpu->funcs->gpu_set_freq(gpu, opp);
+- else
++ if (gpu->funcs->gpu_set_freq) {
++ mutex_lock(&df->lock);
++ gpu->funcs->gpu_set_freq(gpu, opp, df->suspended);
++ mutex_unlock(&df->lock);
++ } else {
+ clk_set_rate(gpu->core_clk, *freq);
++ }
+
+ dev_pm_opp_put(opp);
+
+@@ -58,15 +62,24 @@ static void get_raw_dev_status(struct msm_gpu *gpu,
+ unsigned long sample_rate;
+ ktime_t time;
+
++ mutex_lock(&df->lock);
++
+ status->current_frequency = get_freq(gpu);
+- busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
+ time = ktime_get();
+-
+- busy_time = busy_cycles - df->busy_cycles;
+ status->total_time = ktime_us_delta(time, df->time);
++ df->time = time;
+
++ if (df->suspended) {
++ mutex_unlock(&df->lock);
++ status->busy_time = 0;
++ return;
++ }
++
++ busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
++ busy_time = busy_cycles - df->busy_cycles;
+ df->busy_cycles = busy_cycles;
+- df->time = time;
++
++ mutex_unlock(&df->lock);
+
+ busy_time *= USEC_PER_SEC;
+ do_div(busy_time, sample_rate);
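The reworked get_raw_dev_status() still ends in the same arithmetic: a cycle delta scaled by the counter's sample rate, with do_div() standing in for 64-by-32 division on 32-bit kernels. A sketch of that conversion with a hypothetical sample rate:

        u64 busy_time = busy_cycles - prev_busy_cycles; /* cycles since last poll */

        busy_time *= USEC_PER_SEC;       /* scale first so precision survives */
        do_div(busy_time, sample_rate);  /* 64-by-32 divide helper */
        /* e.g. a 19.2 MHz counter with 9600000 elapsed cycles -> 500000 us busy */
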
+@@ -175,6 +188,8 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+ if (!gpu->funcs->gpu_busy)
+ return;
+
++ mutex_init(&df->lock);
++
+ dev_pm_qos_add_request(&gpu->pdev->dev, &df->idle_freq,
+ DEV_PM_QOS_MAX_FREQUENCY,
+ PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
+@@ -244,12 +259,16 @@ void msm_devfreq_cleanup(struct msm_gpu *gpu)
+ void msm_devfreq_resume(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
++ unsigned long sample_rate;
+
+ if (!has_devfreq(gpu))
+ return;
+
+- df->busy_cycles = 0;
++ mutex_lock(&df->lock);
++ df->busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
+ df->time = ktime_get();
++ df->suspended = false;
++ mutex_unlock(&df->lock);
+
+ devfreq_resume_device(df->devfreq);
+ }
+@@ -261,6 +280,10 @@ void msm_devfreq_suspend(struct msm_gpu *gpu)
+ if (!has_devfreq(gpu))
+ return;
+
++ mutex_lock(&df->lock);
++ df->suspended = true;
++ mutex_unlock(&df->lock);
++
+ devfreq_suspend_device(df->devfreq);
+
+ cancel_idle_work(df);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 22b83a6577eb0..df83c4654e269 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -1361,13 +1361,11 @@ nouveau_connector_create(struct drm_device *dev,
+ snprintf(aux_name, sizeof(aux_name), "sor-%04x-%04x",
+ dcbe->hasht, dcbe->hashm);
+ nv_connector->aux.name = kstrdup(aux_name, GFP_KERNEL);
+- drm_dp_aux_init(&nv_connector->aux);
+- if (ret) {
+- NV_ERROR(drm, "Failed to init AUX adapter for sor-%04x-%04x: %d\n",
+- dcbe->hasht, dcbe->hashm, ret);
++ if (!nv_connector->aux.name) {
+ kfree(nv_connector);
+- return ERR_PTR(ret);
++ return ERR_PTR(-ENOMEM);
+ }
++ drm_dp_aux_init(&nv_connector->aux);
+ fallthrough;
+ default:
+ funcs = &nouveau_connector_funcs;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index 2cd0932b3d687..a2f5df568ca54 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -515,7 +515,7 @@ nouveau_display_hpd_work(struct work_struct *work)
+
+ pm_runtime_mark_last_busy(drm->dev->dev);
+ noop:
+- pm_runtime_put_sync(drm->dev->dev);
++ pm_runtime_put_autosuspend(dev->dev);
+ }
+
+ #ifdef CONFIG_ACPI
+@@ -537,7 +537,7 @@ nouveau_display_acpi_ntfy(struct notifier_block *nb, unsigned long val,
+ * it's own hotplug events.
+ */
+ pm_runtime_put_autosuspend(drm->dev->dev);
+- } else if (ret == 0) {
++ } else if (ret == 0 || ret == -EINPROGRESS) {
+ /* We've started resuming the GPU already, so
+ * it will handle scheduling a full reprobe
+ * itself
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 4f9b3aa5deda9..20ac1ce2c0f14 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -466,7 +466,7 @@ nouveau_fbcon_set_suspend_work(struct work_struct *work)
+ if (state == FBINFO_STATE_RUNNING) {
+ nouveau_fbcon_hotplug_resume(drm->fbcon);
+ pm_runtime_mark_last_busy(drm->dev->dev);
+- pm_runtime_put_sync(drm->dev->dev);
++ pm_runtime_put_autosuspend(drm->dev->dev);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+index 64e423dddd9e7..6c318e41bde04 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+@@ -33,7 +33,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
+ {
+ u32 p = *addr;
+
+- if (*addr > bios->image0_size && bios->imaged_addr) {
++ if (*addr >= bios->image0_size && bios->imaged_addr) {
+ *addr -= bios->image0_size;
+ *addr += bios->imaged_addr;
+ }
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index 38799effd00ad..4f1f004b3c543 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -438,6 +438,8 @@ config DRM_PANEL_SAMSUNG_ATNA33XC20
+ depends on OF
+ depends on BACKLIGHT_CLASS_DEVICE
+ depends on PM
++ select DRM_DISPLAY_DP_HELPER
++ select DRM_DISPLAY_HELPER
+ select DRM_DP_AUX_BUS
+ help
+ DRM panel driver for the Samsung ATNA33XC20 panel. This panel can't
+diff --git a/drivers/gpu/drm/radeon/.gitignore b/drivers/gpu/drm/radeon/.gitignore
+index 9c1a941539836..d8777383a64aa 100644
+--- a/drivers/gpu/drm/radeon/.gitignore
++++ b/drivers/gpu/drm/radeon/.gitignore
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0-only
++# SPDX-License-Identifier: MIT
+ mkregtable
+ *_reg_safe.h
+
+diff --git a/drivers/gpu/drm/radeon/Kconfig b/drivers/gpu/drm/radeon/Kconfig
+index 6f60f4840cc58..52819e7f1fca1 100644
+--- a/drivers/gpu/drm/radeon/Kconfig
++++ b/drivers/gpu/drm/radeon/Kconfig
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0-only
++# SPDX-License-Identifier: MIT
+ config DRM_RADEON_USERPTR
+ bool "Always enable userptr support"
+ depends on DRM_RADEON
+diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
+index ea5380e24c3cc..e3ab3aca13967 100644
+--- a/drivers/gpu/drm/radeon/Makefile
++++ b/drivers/gpu/drm/radeon/Makefile
+@@ -1,4 +1,4 @@
+-# SPDX-License-Identifier: GPL-2.0
++# SPDX-License-Identifier: MIT
+ #
+ # Makefile for the drm device driver. This driver provides support for the
+ # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index 769f666335ac4..672d2239293e0 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2741,10 +2741,10 @@ static int ni_set_mc_special_registers(struct radeon_device *rdev,
+ table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
+ }
+ j++;
+- if (j > SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
+- return -EINVAL;
+ break;
+ case MC_SEQ_RESERVE_M >> 2:
++ if (j >= SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
++ return -EINVAL;
+ temp_reg = RREG32(MC_PMG_CMD_MRS1);
+ table->mc_reg_address[j].s1 = MC_PMG_CMD_MRS1 >> 2;
+ table->mc_reg_address[j].s0 = MC_SEQ_PMG_CMD_MRS1_LP >> 2;
+@@ -2753,8 +2753,6 @@ static int ni_set_mc_special_registers(struct radeon_device *rdev,
+ (temp_reg & 0xffff0000) |
+ (table->mc_reg_table_entry[k].mc_data[i] & 0x0000ffff);
+ j++;
+- if (j > SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE)
+- return -EINVAL;
+ break;
+ default:
+ break;
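The ni_dpm hunks fix a classic off-by-one: the old code wrote slot j, incremented, and only then tested with '>', so j equal to SMC_NISLANDS_MC_REGISTER_ARRAY_SIZE slipped through and the following write landed one past the end of the array. The safe shape, sketched with hypothetical names:

        /* validate the index before the store, not after the increment */
        if (j >= ARRAY_SIZE(table->mc_reg_address))
                return -EINVAL;
        table->mc_reg_address[j].s1 = reg;
        j++;
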
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 15692cb241fc0..429644d5ddc69 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1113,7 +1113,7 @@ static int radeon_gart_size_auto(enum radeon_family family)
+ static void radeon_check_arguments(struct radeon_device *rdev)
+ {
+ /* vramlimit must be a power of two */
+- if (!is_power_of_2(radeon_vram_limit)) {
++ if (radeon_vram_limit != 0 && !is_power_of_2(radeon_vram_limit)) {
+ dev_warn(rdev->dev, "vram limit (%d) must be a power of 2\n",
+ radeon_vram_limit);
+ radeon_vram_limit = 0;
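The radeon check leans on a detail of is_power_of_2(): it returns false for zero, and here zero is the "no limit" default, so it has to be excluded before the warning can fire. Paraphrasing the log2.h helper:

        static inline bool is_power_of_2(unsigned long n)
        {
                return n != 0 && (n & (n - 1)) == 0;    /* note: false for 0 */
        }
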
+diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+index 70be64ca0a000..ad2d3ae7e6211 100644
+--- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+@@ -408,7 +408,15 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ if (IS_ERR(dp->adp))
+ return PTR_ERR(dp->adp);
+
+- return component_add(dev, &rockchip_dp_component_ops);
++ ret = component_add(dev, &rockchip_dp_component_ops);
++ if (ret)
++ goto err_dp_remove;
++
++ return 0;
++
++err_dp_remove:
++ analogix_dp_remove(dp->adp);
++ return ret;
+ }
+
+ static int rockchip_dp_remove(struct platform_device *pdev)
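The rockchip probe fix is the standard unwind pattern: once an earlier step has succeeded, every later failure must undo it before returning. In its general form, with hypothetical helpers:

        ret = setup_a(dev);
        if (ret)
                return ret;             /* nothing to undo yet */

        ret = setup_b(dev);
        if (ret)
                goto err_teardown_a;    /* unwind in reverse order */

        return 0;

err_teardown_a:
        teardown_a(dev);
        return ret;
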
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 74562d40f6396..daf1928813533 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1570,6 +1570,9 @@ static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
+ {
+ struct rockchip_crtc_state *rockchip_state;
+
++ if (WARN_ON(!crtc->state))
++ return NULL;
++
+ rockchip_state = kzalloc(sizeof(*rockchip_state), GFP_KERNEL);
+ if (!rockchip_state)
+ return NULL;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 26ac91db0f35b..d6e831576cd2b 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -1524,6 +1524,7 @@ static void vop2_crtc_atomic_enable(struct drm_crtc *crtc,
+ if (ret < 0) {
+ drm_err(vop2->drm, "failed to enable dclk for video port%d - %d\n",
+ vp->id, ret);
++ vop2_unlock(vop2);
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/solomon/ssd130x-spi.c b/drivers/gpu/drm/solomon/ssd130x-spi.c
+index 43722adab1f82..07802907e39ad 100644
+--- a/drivers/gpu/drm/solomon/ssd130x-spi.c
++++ b/drivers/gpu/drm/solomon/ssd130x-spi.c
+@@ -143,6 +143,7 @@ static const struct of_device_id ssd130x_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ssd130x_of_match);
+
++#if IS_MODULE(CONFIG_DRM_SSD130X_SPI)
+ /*
+ * The SPI core always reports a MODALIAS uevent of the form "spi:<dev>", even
+ * if the device was registered via OF. This means that the module will not be
+@@ -160,6 +161,7 @@ static const struct spi_device_id ssd130x_spi_table[] = {
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(spi, ssd130x_spi_table);
++#endif
+
+ static struct spi_driver ssd130x_spi_driver = {
+ .driver = {
+diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
+index 7c7dd84e6db84..81991090adcc9 100644
+--- a/drivers/gpu/drm/tegra/gem.c
++++ b/drivers/gpu/drm/tegra/gem.c
+@@ -704,14 +704,23 @@ static int tegra_gem_prime_vmap(struct dma_buf *buf, struct iosys_map *map)
+ {
+ struct drm_gem_object *gem = buf->priv;
+ struct tegra_bo *bo = to_tegra_bo(gem);
++ void *vaddr;
+
+- iosys_map_set_vaddr(map, bo->vaddr);
++ vaddr = tegra_bo_mmap(&bo->base);
++ if (IS_ERR(vaddr))
++ return PTR_ERR(vaddr);
++
++ iosys_map_set_vaddr(map, vaddr);
+
+ return 0;
+ }
+
+ static void tegra_gem_prime_vunmap(struct dma_buf *buf, struct iosys_map *map)
+ {
++ struct drm_gem_object *gem = buf->priv;
++ struct tegra_bo *bo = to_tegra_bo(gem);
++
++ tegra_bo_munmap(&bo->base, map->vaddr);
+ }
+
+ static const struct dma_buf_ops tegra_gem_prime_dmabuf_ops = {
+diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
+index 29d618093e946..e0f02d367d880 100644
+--- a/drivers/gpu/drm/tiny/st7735r.c
++++ b/drivers/gpu/drm/tiny/st7735r.c
+@@ -174,6 +174,7 @@ MODULE_DEVICE_TABLE(of, st7735r_of_match);
+
+ static const struct spi_device_id st7735r_id[] = {
+ { "jd-t18003-t01", (uintptr_t)&jd_t18003_t01_cfg },
++ { "rh128128t", (uintptr_t)&rh128128t_cfg },
+ { },
+ };
+ MODULE_DEVICE_TABLE(spi, st7735r_id);
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 9355213dc883c..af0fcb41e420f 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -316,10 +316,13 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ struct drm_crtc_state *crtc_state = crtc->state;
+ struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ bool interlace = mode->flags & DRM_MODE_FLAG_INTERLACE;
+- u32 pixel_rep = (mode->flags & DRM_MODE_FLAG_DBLCLK) ? 2 : 1;
++ bool is_hdmi = vc4_encoder->type == VC4_ENCODER_TYPE_HDMI0 ||
++ vc4_encoder->type == VC4_ENCODER_TYPE_HDMI1;
++ u32 pixel_rep = ((mode->flags & DRM_MODE_FLAG_DBLCLK) && !is_hdmi) ? 2 : 1;
+ bool is_dsi = (vc4_encoder->type == VC4_ENCODER_TYPE_DSI0 ||
+ vc4_encoder->type == VC4_ENCODER_TYPE_DSI1);
+- u32 format = is_dsi ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24;
++ bool is_dsi1 = vc4_encoder->type == VC4_ENCODER_TYPE_DSI1;
++ u32 format = is_dsi1 ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24;
+ u8 ppc = pv_data->pixels_per_clock;
+ bool debug_dump_regs = false;
+
+@@ -345,7 +348,8 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ PV_HORZB_HACTIVE));
+
+ CRTC_WRITE(PV_VERTA,
+- VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++ VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++ interlace,
+ PV_VERTA_VBP) |
+ VC4_SET_FIELD(mode->crtc_vsync_end - mode->crtc_vsync_start,
+ PV_VERTA_VSYNC));
+@@ -357,7 +361,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ if (interlace) {
+ CRTC_WRITE(PV_VERTA_EVEN,
+ VC4_SET_FIELD(mode->crtc_vtotal -
+- mode->crtc_vsync_end - 1,
++ mode->crtc_vsync_end,
+ PV_VERTA_VBP) |
+ VC4_SET_FIELD(mode->crtc_vsync_end -
+ mode->crtc_vsync_start,
+@@ -377,7 +381,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ PV_VCONTROL_CONTINUOUS |
+ (is_dsi ? PV_VCONTROL_DSI : 0) |
+ PV_VCONTROL_INTERLACE |
+- VC4_SET_FIELD(mode->htotal * pixel_rep / 2,
++ VC4_SET_FIELD(mode->htotal * pixel_rep / (2 * ppc),
+ PV_VCONTROL_ODD_DELAY));
+ CRTC_WRITE(PV_VSYNCD_EVEN, 0);
+ } else {
+diff --git a/drivers/gpu/drm/vc4/vc4_dsi.c b/drivers/gpu/drm/vc4/vc4_dsi.c
+index 98308a17e4ed7..b7b2c76770dc6 100644
+--- a/drivers/gpu/drm/vc4/vc4_dsi.c
++++ b/drivers/gpu/drm/vc4/vc4_dsi.c
+@@ -181,8 +181,50 @@
+
+ #define DSI0_TXPKT_PIX_FIFO 0x20 /* AKA PIX_FIFO */
+
+-#define DSI0_INT_STAT 0x24
+-#define DSI0_INT_EN 0x28
++#define DSI0_INT_STAT 0x24
++#define DSI0_INT_EN 0x28
++# define DSI0_INT_FIFO_ERR BIT(25)
++# define DSI0_INT_CMDC_DONE_MASK VC4_MASK(24, 23)
++# define DSI0_INT_CMDC_DONE_SHIFT 23
++# define DSI0_INT_CMDC_DONE_NO_REPEAT 1
++# define DSI0_INT_CMDC_DONE_REPEAT 3
++# define DSI0_INT_PHY_DIR_RTF BIT(22)
++# define DSI0_INT_PHY_D1_ULPS BIT(21)
++# define DSI0_INT_PHY_D1_STOP BIT(20)
++# define DSI0_INT_PHY_RXLPDT BIT(19)
++# define DSI0_INT_PHY_RXTRIG BIT(18)
++# define DSI0_INT_PHY_D0_ULPS BIT(17)
++# define DSI0_INT_PHY_D0_LPDT BIT(16)
++# define DSI0_INT_PHY_D0_FTR BIT(15)
++# define DSI0_INT_PHY_D0_STOP BIT(14)
++/* Signaled when the clock lane enters the given state. */
++# define DSI0_INT_PHY_CLK_ULPS BIT(13)
++# define DSI0_INT_PHY_CLK_HS BIT(12)
++# define DSI0_INT_PHY_CLK_FTR BIT(11)
++/* Signaled on timeouts */
++# define DSI0_INT_PR_TO BIT(10)
++# define DSI0_INT_TA_TO BIT(9)
++# define DSI0_INT_LPRX_TO BIT(8)
++# define DSI0_INT_HSTX_TO BIT(7)
++/* Contention on a line when trying to drive the line low */
++# define DSI0_INT_ERR_CONT_LP1 BIT(6)
++# define DSI0_INT_ERR_CONT_LP0 BIT(5)
++/* Control error: incorrect line state sequence on data lane 0. */
++# define DSI0_INT_ERR_CONTROL BIT(4)
++# define DSI0_INT_ERR_SYNC_ESC BIT(3)
++# define DSI0_INT_RX2_PKT BIT(2)
++# define DSI0_INT_RX1_PKT BIT(1)
++# define DSI0_INT_CMD_PKT BIT(0)
++
++#define DSI0_INTERRUPTS_ALWAYS_ENABLED (DSI0_INT_ERR_SYNC_ESC | \
++ DSI0_INT_ERR_CONTROL | \
++ DSI0_INT_ERR_CONT_LP0 | \
++ DSI0_INT_ERR_CONT_LP1 | \
++ DSI0_INT_HSTX_TO | \
++ DSI0_INT_LPRX_TO | \
++ DSI0_INT_TA_TO | \
++ DSI0_INT_PR_TO)
++
+ # define DSI1_INT_PHY_D3_ULPS BIT(30)
+ # define DSI1_INT_PHY_D3_STOP BIT(29)
+ # define DSI1_INT_PHY_D2_ULPS BIT(28)
+@@ -761,6 +803,9 @@ static void vc4_dsi_encoder_disable(struct drm_encoder *encoder)
+ list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) {
+ if (iter->funcs->disable)
+ iter->funcs->disable(iter);
++
++ if (iter == dsi->bridge)
++ break;
+ }
+
+ vc4_dsi_ulps(dsi, true);
+@@ -805,11 +850,9 @@ static bool vc4_dsi_encoder_mode_fixup(struct drm_encoder *encoder,
+ /* Find what divider gets us a faster clock than the requested
+ * pixel clock.
+ */
+- for (divider = 1; divider < 8; divider++) {
+- if (parent_rate / divider < pll_clock) {
+- divider--;
++ for (divider = 1; divider < 255; divider++) {
++ if (parent_rate / (divider + 1) < pll_clock)
+ break;
+- }
+ }
+
+ /* Now that we've picked a PLL divider, calculate back to its
+@@ -894,6 +937,9 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+
+ DSI_PORT_WRITE(PHY_AFEC0, afec0);
+
++ /* AFEC reset hold time */
++ mdelay(1);
++
+ DSI_PORT_WRITE(PHY_AFEC1,
+ VC4_SET_FIELD(6, DSI0_PHY_AFEC1_IDR_DLANE1) |
+ VC4_SET_FIELD(6, DSI0_PHY_AFEC1_IDR_DLANE0) |
+@@ -1060,12 +1106,9 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ DSI_PORT_WRITE(CTRL, DSI_PORT_READ(CTRL) | DSI1_CTRL_EN);
+
+ /* Bring AFE out of reset. */
+- if (dsi->variant->port == 0) {
+- } else {
+- DSI_PORT_WRITE(PHY_AFEC0,
+- DSI_PORT_READ(PHY_AFEC0) &
+- ~DSI1_PHY_AFEC0_RESET);
+- }
++ DSI_PORT_WRITE(PHY_AFEC0,
++ DSI_PORT_READ(PHY_AFEC0) &
++ ~DSI_PORT_BIT(PHY_AFEC0_RESET));
+
+ vc4_dsi_ulps(dsi, false);
+
+@@ -1184,13 +1227,28 @@ static ssize_t vc4_dsi_host_transfer(struct mipi_dsi_host *host,
+ /* Enable the appropriate interrupt for the transfer completion. */
+ dsi->xfer_result = 0;
+ reinit_completion(&dsi->xfer_completion);
+- DSI_PORT_WRITE(INT_STAT, DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF);
+- if (msg->rx_len) {
+- DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
+- DSI1_INT_PHY_DIR_RTF));
++ if (dsi->variant->port == 0) {
++ DSI_PORT_WRITE(INT_STAT,
++ DSI0_INT_CMDC_DONE_MASK | DSI1_INT_PHY_DIR_RTF);
++ if (msg->rx_len) {
++ DSI_PORT_WRITE(INT_EN, (DSI0_INTERRUPTS_ALWAYS_ENABLED |
++ DSI0_INT_PHY_DIR_RTF));
++ } else {
++ DSI_PORT_WRITE(INT_EN,
++ (DSI0_INTERRUPTS_ALWAYS_ENABLED |
++ VC4_SET_FIELD(DSI0_INT_CMDC_DONE_NO_REPEAT,
++ DSI0_INT_CMDC_DONE)));
++ }
+ } else {
+- DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
+- DSI1_INT_TXPKT1_DONE));
++ DSI_PORT_WRITE(INT_STAT,
++ DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF);
++ if (msg->rx_len) {
++ DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
++ DSI1_INT_PHY_DIR_RTF));
++ } else {
++ DSI_PORT_WRITE(INT_EN, (DSI1_INTERRUPTS_ALWAYS_ENABLED |
++ DSI1_INT_TXPKT1_DONE));
++ }
+ }
+
+ /* Send the packet. */
+@@ -1207,7 +1265,7 @@ static ssize_t vc4_dsi_host_transfer(struct mipi_dsi_host *host,
+ ret = dsi->xfer_result;
+ }
+
+- DSI_PORT_WRITE(INT_EN, DSI1_INTERRUPTS_ALWAYS_ENABLED);
++ DSI_PORT_WRITE(INT_EN, DSI_PORT_BIT(INTERRUPTS_ALWAYS_ENABLED));
+
+ if (ret)
+ goto reset_fifo_and_return;
+@@ -1253,7 +1311,7 @@ reset_fifo_and_return:
+ DSI_PORT_BIT(CTRL_RESET_FIFOS));
+
+ DSI_PORT_WRITE(TXPKT1C, 0);
+- DSI_PORT_WRITE(INT_EN, DSI1_INTERRUPTS_ALWAYS_ENABLED);
++ DSI_PORT_WRITE(INT_EN, DSI_PORT_BIT(INTERRUPTS_ALWAYS_ENABLED));
+ return ret;
+ }
+
+@@ -1390,26 +1448,28 @@ static irqreturn_t vc4_dsi_irq_handler(int irq, void *data)
+ DSI_PORT_WRITE(INT_STAT, stat);
+
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_ERR_SYNC_ESC, "LPDT sync");
++ DSI_PORT_BIT(INT_ERR_SYNC_ESC), "LPDT sync");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_ERR_CONTROL, "data lane 0 sequence");
++ DSI_PORT_BIT(INT_ERR_CONTROL), "data lane 0 sequence");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_ERR_CONT_LP0, "LP0 contention");
++ DSI_PORT_BIT(INT_ERR_CONT_LP0), "LP0 contention");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_ERR_CONT_LP1, "LP1 contention");
++ DSI_PORT_BIT(INT_ERR_CONT_LP1), "LP1 contention");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_HSTX_TO, "HSTX timeout");
++ DSI_PORT_BIT(INT_HSTX_TO), "HSTX timeout");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_LPRX_TO, "LPRX timeout");
++ DSI_PORT_BIT(INT_LPRX_TO), "LPRX timeout");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_TA_TO, "turnaround timeout");
++ DSI_PORT_BIT(INT_TA_TO), "turnaround timeout");
+ dsi_handle_error(dsi, &ret, stat,
+- DSI1_INT_PR_TO, "peripheral reset timeout");
++ DSI_PORT_BIT(INT_PR_TO), "peripheral reset timeout");
+
+- if (stat & (DSI1_INT_TXPKT1_DONE | DSI1_INT_PHY_DIR_RTF)) {
++ if (stat & ((dsi->variant->port ? DSI1_INT_TXPKT1_DONE :
++ DSI0_INT_CMDC_DONE_MASK) |
++ DSI_PORT_BIT(INT_PHY_DIR_RTF))) {
+ complete(&dsi->xfer_completion);
+ ret = IRQ_HANDLED;
+- } else if (stat & DSI1_INT_HSTX_TO) {
++ } else if (stat & DSI_PORT_BIT(INT_HSTX_TO)) {
+ complete(&dsi->xfer_completion);
+ dsi->xfer_result = -ETIMEDOUT;
+ ret = IRQ_HANDLED;
+@@ -1487,13 +1547,29 @@ vc4_dsi_init_phy_clocks(struct vc4_dsi *dsi)
+ dsi->clk_onecell);
+ }
+
++static void vc4_dsi_dma_mem_release(void *ptr)
++{
++ struct vc4_dsi *dsi = ptr;
++ struct device *dev = &dsi->pdev->dev;
++
++ dma_free_coherent(dev, 4, dsi->reg_dma_mem, dsi->reg_dma_paddr);
++ dsi->reg_dma_mem = NULL;
++}
++
++static void vc4_dsi_dma_chan_release(void *ptr)
++{
++ struct vc4_dsi *dsi = ptr;
++
++ dma_release_channel(dsi->reg_dma_chan);
++ dsi->reg_dma_chan = NULL;
++}
++
+ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ {
+ struct platform_device *pdev = to_platform_device(dev);
+ struct drm_device *drm = dev_get_drvdata(master);
+ struct vc4_dsi *dsi = dev_get_drvdata(dev);
+ struct vc4_dsi_encoder *vc4_dsi_encoder;
+- dma_cap_mask_t dma_mask;
+ int ret;
+
+ dsi->variant = of_device_get_match_data(dev);
+@@ -1504,7 +1580,8 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&dsi->bridge_chain);
+- vc4_dsi_encoder->base.type = VC4_ENCODER_TYPE_DSI1;
++ vc4_dsi_encoder->base.type = dsi->variant->port ?
++ VC4_ENCODER_TYPE_DSI1 : VC4_ENCODER_TYPE_DSI0;
+ vc4_dsi_encoder->dsi = dsi;
+ dsi->encoder = &vc4_dsi_encoder->base.base;
+
+@@ -1527,6 +1604,8 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ * so set up a channel for talking to it.
+ */
+ if (dsi->variant->broken_axi_workaround) {
++ dma_cap_mask_t dma_mask;
++
+ dsi->reg_dma_mem = dma_alloc_coherent(dev, 4,
+ &dsi->reg_dma_paddr,
+ GFP_KERNEL);
+@@ -1535,8 +1614,13 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ return -ENOMEM;
+ }
+
++ ret = devm_add_action_or_reset(dev, vc4_dsi_dma_mem_release, dsi);
++ if (ret)
++ return ret;
++
+ dma_cap_zero(dma_mask);
+ dma_cap_set(DMA_MEMCPY, dma_mask);
++
+ dsi->reg_dma_chan = dma_request_chan_by_mask(&dma_mask);
+ if (IS_ERR(dsi->reg_dma_chan)) {
+ ret = PTR_ERR(dsi->reg_dma_chan);
+@@ -1546,6 +1630,10 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
+ return ret;
+ }
+
++ ret = devm_add_action_or_reset(dev, vc4_dsi_dma_chan_release, dsi);
++ if (ret)
++ return ret;
++
+ /* Get the physical address of the device's registers. The
+ * struct resource for the regs gives us the bus address
+ * instead.
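The DMA-workaround hunks above move cleanup into devres actions. devm_add_action_or_reset() registers a callback that runs automatically on driver detach, and, per the "_or_reset" suffix, runs it immediately when registration itself fails, so the caller cannot leak either way. A sketch with hypothetical names:

        static void my_mem_release(void *ptr)
        {
                struct my_priv *p = ptr;

                dma_free_coherent(p->dev, 4, p->mem, p->paddr);
                p->mem = NULL;
        }

        /* on registration failure this calls my_mem_release() and returns an error */
        ret = devm_add_action_or_reset(dev, my_mem_release, p);
        if (ret)
                return ret;
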
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index ce9d16666d91f..23ff6aa5e8f60 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -79,6 +79,11 @@
+ #define VC5_HDMI_VERTB_VSPO_SHIFT 16
+ #define VC5_HDMI_VERTB_VSPO_MASK VC4_MASK(29, 16)
+
++#define VC4_HDMI_MISC_CONTROL_PIXEL_REP_SHIFT 0
++#define VC4_HDMI_MISC_CONTROL_PIXEL_REP_MASK VC4_MASK(3, 0)
++#define VC5_HDMI_MISC_CONTROL_PIXEL_REP_SHIFT 0
++#define VC5_HDMI_MISC_CONTROL_PIXEL_REP_MASK VC4_MASK(3, 0)
++
+ #define VC5_HDMI_SCRAMBLER_CTL_ENABLE BIT(0)
+
+ #define VC5_HDMI_DEEP_COLOR_CONFIG_1_INIT_PACK_PHASE_SHIFT 8
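The new MISC_CONTROL defines follow the driver's paired SHIFT/MASK naming so they can feed VC4_SET_FIELD(). The packing those macros perform is roughly the following; this is a paraphrase, not the vc4_regs.h text:

        /* bits [hi:lo] inclusive, e.g. MASK(3, 0) == 0x0000000f */
        #define MY_MASK(hi, lo) (((1U << ((hi) - (lo) + 1)) - 1) << (lo))
        /* shift the value into position, clamped to the field's mask */
        #define MY_SET_FIELD(val, FIELD) \
                (((val) << FIELD##_SHIFT) & FIELD##_MASK)
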
+@@ -145,6 +150,12 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+
+ drm_print_regset32(&p, &vc4_hdmi->hdmi_regset);
+ drm_print_regset32(&p, &vc4_hdmi->hd_regset);
++ drm_print_regset32(&p, &vc4_hdmi->cec_regset);
++ drm_print_regset32(&p, &vc4_hdmi->csc_regset);
++ drm_print_regset32(&p, &vc4_hdmi->dvp_regset);
++ drm_print_regset32(&p, &vc4_hdmi->phy_regset);
++ drm_print_regset32(&p, &vc4_hdmi->ram_regset);
++ drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+
+ return 0;
+ }
+@@ -455,9 +466,11 @@ static void vc4_hdmi_write_infoframe(struct drm_encoder *encoder,
+ const struct vc4_hdmi_register *ram_packet_start =
+ &vc4_hdmi->variant->registers[HDMI_RAM_PACKET_START];
+ u32 packet_reg = ram_packet_start->offset + VC4_HDMI_PACKET_STRIDE * packet_id;
++ u32 packet_reg_next = ram_packet_start->offset +
++ VC4_HDMI_PACKET_STRIDE * (packet_id + 1);
+ void __iomem *base = __vc4_hdmi_get_field_base(vc4_hdmi,
+ ram_packet_start->reg);
+- uint8_t buffer[VC4_HDMI_PACKET_STRIDE];
++ uint8_t buffer[VC4_HDMI_PACKET_STRIDE] = {};
+ unsigned long flags;
+ ssize_t len, i;
+ int ret;
+@@ -493,6 +506,13 @@ static void vc4_hdmi_write_infoframe(struct drm_encoder *encoder,
+ packet_reg += 4;
+ }
+
++ /*
++	 * clear the remainder of the packet RAM, as it is included in the
++	 * infoframe and stale bytes trigger a checksum error on an HDMI analyser
++ */
++ for (; packet_reg < packet_reg_next; packet_reg += 4)
++ writel(0, base + packet_reg);
++
+ HDMI_WRITE(HDMI_RAM_PACKET_CONFIG,
+ HDMI_READ(HDMI_RAM_PACKET_CONFIG) | BIT(packet_id));
+
+@@ -970,14 +990,15 @@ static void vc4_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ VC4_HDMI_VERTA_VFP) |
+ VC4_SET_FIELD(mode->crtc_vdisplay, VC4_HDMI_VERTA_VAL));
+ u32 vertb = (VC4_SET_FIELD(0, VC4_HDMI_VERTB_VSPO) |
+- VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++ VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++ interlaced,
+ VC4_HDMI_VERTB_VBP));
+ u32 vertb_even = (VC4_SET_FIELD(0, VC4_HDMI_VERTB_VSPO) |
+ VC4_SET_FIELD(mode->crtc_vtotal -
+- mode->crtc_vsync_end -
+- interlaced,
++ mode->crtc_vsync_end,
+ VC4_HDMI_VERTB_VBP));
+ unsigned long flags;
++ u32 reg;
+
+ spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
+
+@@ -1004,6 +1025,11 @@ static void vc4_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ HDMI_WRITE(HDMI_VERTB0, vertb_even);
+ HDMI_WRITE(HDMI_VERTB1, vertb);
+
++ reg = HDMI_READ(HDMI_MISC_CONTROL);
++ reg &= ~VC4_HDMI_MISC_CONTROL_PIXEL_REP_MASK;
++ reg |= VC4_SET_FIELD(pixel_rep - 1, VC4_HDMI_MISC_CONTROL_PIXEL_REP);
++ HDMI_WRITE(HDMI_MISC_CONTROL, reg);
++
+ spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
+ }
+
+@@ -1022,13 +1048,13 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ VC4_SET_FIELD(mode->crtc_vsync_start - mode->crtc_vdisplay,
+ VC5_HDMI_VERTA_VFP) |
+ VC4_SET_FIELD(mode->crtc_vdisplay, VC5_HDMI_VERTA_VAL));
+- u32 vertb = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
++ u32 vertb = (VC4_SET_FIELD(mode->htotal >> (2 - pixel_rep),
++ VC5_HDMI_VERTB_VSPO) |
+ VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
+ VC4_HDMI_VERTB_VBP));
+ u32 vertb_even = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
+ VC4_SET_FIELD(mode->crtc_vtotal -
+- mode->crtc_vsync_end -
+- interlaced,
++ mode->crtc_vsync_end - interlaced,
+ VC4_HDMI_VERTB_VBP));
+ unsigned long flags;
+ unsigned char gcp;
+@@ -1102,6 +1128,11 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ reg |= gcp_en ? VC5_HDMI_GCP_CONFIG_GCP_ENABLE : 0;
+ HDMI_WRITE(HDMI_GCP_CONFIG, reg);
+
++ reg = HDMI_READ(HDMI_MISC_CONTROL);
++ reg &= ~VC5_HDMI_MISC_CONTROL_PIXEL_REP_MASK;
++ reg |= VC4_SET_FIELD(pixel_rep - 1, VC5_HDMI_MISC_CONTROL_PIXEL_REP);
++ HDMI_WRITE(HDMI_MISC_CONTROL, reg);
++
+ HDMI_WRITE(HDMI_CLOCK_STOP, 0);
+
+ spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
+@@ -1597,18 +1628,37 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,
+ struct drm_crtc_state *crtc_state,
+ struct drm_connector_state *conn_state)
+ {
++ struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
++ struct drm_connector *connector = &vc4_hdmi->connector;
++ struct drm_connector_state *old_conn_state =
++ drm_atomic_get_old_connector_state(conn_state->state, connector);
++ struct vc4_hdmi_connector_state *old_vc4_state =
++ conn_state_to_vc4_hdmi_conn_state(old_conn_state);
+ struct vc4_hdmi_connector_state *vc4_state = conn_state_to_vc4_hdmi_conn_state(conn_state);
+ struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+- struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+ unsigned long long tmds_char_rate = mode->clock * 1000;
+ unsigned long long tmds_bit_rate;
+ int ret;
+
+- if (vc4_hdmi->variant->unsupported_odd_h_timings &&
+- !(mode->flags & DRM_MODE_FLAG_DBLCLK) &&
+- ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
+- (mode->hsync_end % 2) || (mode->htotal % 2)))
+- return -EINVAL;
++ if (vc4_hdmi->variant->unsupported_odd_h_timings) {
++ if (mode->flags & DRM_MODE_FLAG_DBLCLK) {
++ /* Only try to fixup DBLCLK modes to get 480i and 576i
++ * working.
++ * A generic solution for all modes with odd horizontal
++ * timing values seems impossible based on trying to
++ * solve it for 1366x768 monitors.
++ */
++ if ((mode->hsync_start - mode->hdisplay) & 1)
++ mode->hsync_start--;
++ if ((mode->hsync_end - mode->hsync_start) & 1)
++ mode->hsync_end--;
++ }
++
++ /* Now check whether we still have odd values remaining */
++ if ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
++ (mode->hsync_end % 2) || (mode->htotal % 2))
++ return -EINVAL;
++ }
+
+ /*
+ * The 1440p@60 pixel rate is in the same range than the first
+@@ -1628,6 +1678,11 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,
+ if (ret)
+ return ret;
+
++ /* vc4_hdmi_encoder_compute_config may have changed output_bpc and/or output_format */
++ if (vc4_state->output_bpc != old_vc4_state->output_bpc ||
++ vc4_state->output_format != old_vc4_state->output_format)
++ crtc_state->mode_changed = true;
++
+ return 0;
+ }
+
+@@ -1941,10 +1996,10 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+
+ /* Set the MAI threshold */
+ HDMI_WRITE(HDMI_MAI_THR,
+- VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) |
+- VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) |
+- VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_DREQHIGH) |
+- VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_DREQLOW));
++ VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW));
+
+ HDMI_WRITE(HDMI_MAI_CONFIG,
+ VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
+@@ -2035,12 +2090,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ struct device *dev = &vc4_hdmi->pdev->dev;
+ struct platform_device *codec_pdev;
+ const __be32 *addr;
+- int index;
++ int index, len;
+ int ret;
+
+- if (!of_find_property(dev->of_node, "dmas", NULL)) {
++ if (!of_find_property(dev->of_node, "dmas", &len) || !len) {
+ dev_warn(dev,
+- "'dmas' DT property is missing, no HDMI audio\n");
++ "'dmas' DT property is missing or empty, no HDMI audio\n");
+ return 0;
+ }
+
+@@ -2521,8 +2576,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ struct cec_connector_info conn_info;
+ struct platform_device *pdev = vc4_hdmi->pdev;
+ struct device *dev = &pdev->dev;
+- unsigned long flags;
+- u32 value;
+ int ret;
+
+ if (!of_find_property(dev->of_node, "interrupts", NULL)) {
+@@ -2541,15 +2594,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ cec_fill_conn_info_from_drm(&conn_info, &vc4_hdmi->connector);
+ cec_s_conn_info(vc4_hdmi->cec_adap, &conn_info);
+
+- spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
+- value = HDMI_READ(HDMI_CEC_CNTRL_1);
+- /* Set the logical address to Unregistered */
+- value |= VC4_HDMI_CEC_ADDR_MASK;
+- HDMI_WRITE(HDMI_CEC_CNTRL_1, value);
+- spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
+-
+- vc4_hdmi_cec_update_clk_div(vc4_hdmi);
+-
+ if (vc4_hdmi->variant->external_irq_controller) {
+ ret = request_threaded_irq(platform_get_irq_byname(pdev, "cec-rx"),
+ vc4_cec_irq_handler_rx_bare,
+@@ -2565,10 +2609,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ if (ret)
+ goto err_remove_cec_rx_handler;
+ } else {
+- spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
+- HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, 0xffffffff);
+- spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
+-
+ ret = request_threaded_irq(platform_get_irq(pdev, 0),
+ vc4_cec_irq_handler,
+ vc4_cec_irq_handler_thread, 0,
+@@ -2619,7 +2659,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ }
+
+ static void vc4_hdmi_cec_exit(struct vc4_hdmi *vc4_hdmi) {};
+-
+ #endif
+
+ static int vc4_hdmi_build_regset(struct vc4_hdmi *vc4_hdmi,
+@@ -2704,6 +2743,7 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ struct platform_device *pdev = vc4_hdmi->pdev;
+ struct device *dev = &pdev->dev;
+ struct resource *res;
++ int ret;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hdmi");
+ if (!res)
+@@ -2800,6 +2840,38 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ return PTR_ERR(vc4_hdmi->reset);
+ }
+
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->hdmi_regset, VC4_HDMI);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->hd_regset, VC4_HD);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->cec_regset, VC5_CEC);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->csc_regset, VC5_CSC);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->dvp_regset, VC5_DVP);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->phy_regset, VC5_PHY);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->ram_regset, VC5_RAM);
++ if (ret)
++ return ret;
++
++ ret = vc4_hdmi_build_regset(vc4_hdmi, &vc4_hdmi->rm_regset, VC5_RM);
++ if (ret)
++ return ret;
++
+ return 0;
+ }
+
+@@ -2815,12 +2887,34 @@ static int __maybe_unused vc4_hdmi_runtime_suspend(struct device *dev)
+ static int vc4_hdmi_runtime_resume(struct device *dev)
+ {
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
++ unsigned long __maybe_unused flags;
++ u32 __maybe_unused value;
+ int ret;
+
+ ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
+ if (ret)
+ return ret;
+
++ if (vc4_hdmi->variant->reset)
++ vc4_hdmi->variant->reset(vc4_hdmi);
++
++#ifdef CONFIG_DRM_VC4_HDMI_CEC
++ spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
++ value = HDMI_READ(HDMI_CEC_CNTRL_1);
++ /* Set the logical address to Unregistered */
++ value |= VC4_HDMI_CEC_ADDR_MASK;
++ HDMI_WRITE(HDMI_CEC_CNTRL_1, value);
++ spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
++
++ vc4_hdmi_cec_update_clk_div(vc4_hdmi);
++
++ if (!vc4_hdmi->variant->external_irq_controller) {
++ spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
++ HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, 0xffffffff);
++ spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
++ }
++#endif
++
+ return 0;
+ }
+
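Moving the reset and CEC setup out of bind and into vc4_hdmi_runtime_resume() follows a general runtime-PM rule: register state programmed while the block was powered is lost across a runtime suspend, so one-time init belongs on the resume path. The minimal shape of such a callback, with hypothetical names:

        static int my_runtime_resume(struct device *dev)
        {
                struct my_hw *hw = dev_get_drvdata(dev);
                int ret;

                ret = clk_prepare_enable(hw->clk);
                if (ret)
                        return ret;

                /* the block just regained power: redo register setup here */
                my_hw_reset(hw);
                my_hw_program_defaults(hw);

                return 0;
        }
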
+@@ -2910,9 +3004,6 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
+- if (vc4_hdmi->variant->reset)
+- vc4_hdmi->variant->reset(vc4_hdmi);
+-
+ if ((of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi0") ||
+ of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi1")) &&
+ HDMI_READ(HDMI_VID_CTL) & VC4_HD_VID_CTL_ENABLE) {
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.h b/drivers/gpu/drm/vc4/vc4_hdmi.h
+index 51b27dcdcd9b7..1520387b317f0 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.h
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.h
+@@ -179,6 +179,14 @@ struct vc4_hdmi {
+ struct debugfs_regset32 hdmi_regset;
+ struct debugfs_regset32 hd_regset;
+
++ /* VC5 only */
++ struct debugfs_regset32 cec_regset;
++ struct debugfs_regset32 csc_regset;
++ struct debugfs_regset32 dvp_regset;
++ struct debugfs_regset32 phy_regset;
++ struct debugfs_regset32 ram_regset;
++ struct debugfs_regset32 rm_regset;
++
+ /**
+ * @hw_lock: Spinlock protecting device register access.
+ */
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
+index a040356b6bdc1..0198de96c7b22 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h
+@@ -127,6 +127,7 @@ enum vc4_hdmi_field {
+ HDMI_VERTB0,
+ HDMI_VERTB1,
+ HDMI_VID_CTL,
++ HDMI_MISC_CONTROL,
+ };
+
+ struct vc4_hdmi_register {
+@@ -237,6 +238,7 @@ static const struct vc4_hdmi_register __maybe_unused vc5_hdmi_hdmi0_fields[] = {
+ VC4_HDMI_REG(HDMI_VERTB0, 0x0f0),
+ VC4_HDMI_REG(HDMI_VERTA1, 0x0f4),
+ VC4_HDMI_REG(HDMI_VERTB1, 0x0f8),
++ VC4_HDMI_REG(HDMI_MISC_CONTROL, 0x100),
+ VC4_HDMI_REG(HDMI_MAI_CHANNEL_MAP, 0x09c),
+ VC4_HDMI_REG(HDMI_MAI_CONFIG, 0x0a0),
+ VC4_HDMI_REG(HDMI_DEEP_COLOR_CONFIG_1, 0x170),
+@@ -319,6 +321,7 @@ static const struct vc4_hdmi_register __maybe_unused vc5_hdmi_hdmi1_fields[] = {
+ VC4_HDMI_REG(HDMI_VERTB0, 0x0f0),
+ VC4_HDMI_REG(HDMI_VERTA1, 0x0f4),
+ VC4_HDMI_REG(HDMI_VERTB1, 0x0f8),
++ VC4_HDMI_REG(HDMI_MISC_CONTROL, 0x100),
+ VC4_HDMI_REG(HDMI_MAI_CHANNEL_MAP, 0x09c),
+ VC4_HDMI_REG(HDMI_MAI_CONFIG, 0x0a0),
+ VC4_HDMI_REG(HDMI_DEEP_COLOR_CONFIG_1, 0x170),
+@@ -420,7 +423,7 @@ static inline u32 vc4_hdmi_read(struct vc4_hdmi *hdmi,
+ const struct vc4_hdmi_variant *variant = hdmi->variant;
+ void __iomem *base;
+
+- WARN_ON(!pm_runtime_active(&hdmi->pdev->dev));
++ WARN_ON(pm_runtime_status_suspended(&hdmi->pdev->dev));
+
+ if (reg >= variant->num_registers) {
+ dev_warn(&hdmi->pdev->dev,
+@@ -450,7 +453,7 @@ static inline void vc4_hdmi_write(struct vc4_hdmi *hdmi,
+
+ lockdep_assert_held(&hdmi->hw_lock);
+
+- WARN_ON(!pm_runtime_active(&hdmi->pdev->dev));
++ WARN_ON(pm_runtime_status_suspended(&hdmi->pdev->dev));
+
+ if (reg >= variant->num_registers) {
+ dev_warn(&hdmi->pdev->dev,
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 893d831b24aa0..b7353d4c08110 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -950,7 +950,9 @@ vc4_core_clock_atomic_check(struct drm_atomic_state *state)
+ continue;
+
+ num_outputs++;
+- cob_rate += hvs_new_state->fifo_state[i].fifo_load;
++ cob_rate = max_t(unsigned long,
++ hvs_new_state->fifo_state[i].fifo_load,
++ cob_rate);
+ }
+
+ pixel_rate = load_state->hvs_load;
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 1e866dc00ac32..568371aa89c55 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -310,16 +310,16 @@ static int vc4_plane_margins_adj(struct drm_plane_state *pstate)
+ adjhdisplay,
+ crtc_state->mode.hdisplay);
+ vc4_pstate->crtc_x += left;
+- if (vc4_pstate->crtc_x > crtc_state->mode.hdisplay - left)
+- vc4_pstate->crtc_x = crtc_state->mode.hdisplay - left;
++ if (vc4_pstate->crtc_x > crtc_state->mode.hdisplay - right)
++ vc4_pstate->crtc_x = crtc_state->mode.hdisplay - right;
+
+ adjvdisplay = crtc_state->mode.vdisplay - (top + bottom);
+ vc4_pstate->crtc_y = DIV_ROUND_CLOSEST(vc4_pstate->crtc_y *
+ adjvdisplay,
+ crtc_state->mode.vdisplay);
+ vc4_pstate->crtc_y += top;
+- if (vc4_pstate->crtc_y > crtc_state->mode.vdisplay - top)
+- vc4_pstate->crtc_y = crtc_state->mode.vdisplay - top;
++ if (vc4_pstate->crtc_y > crtc_state->mode.vdisplay - bottom)
++ vc4_pstate->crtc_y = crtc_state->mode.vdisplay - bottom;
+
+ vc4_pstate->crtc_w = DIV_ROUND_CLOSEST(vc4_pstate->crtc_w *
+ adjhdisplay,
+@@ -339,7 +339,6 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ struct vc4_plane_state *vc4_state = to_vc4_plane_state(state);
+ struct drm_framebuffer *fb = state->fb;
+ struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0);
+- u32 subpixel_src_mask = (1 << 16) - 1;
+ int num_planes = fb->format->num_planes;
+ struct drm_crtc_state *crtc_state;
+ u32 h_subsample = fb->format->hsub;
+@@ -361,18 +360,15 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ for (i = 0; i < num_planes; i++)
+ vc4_state->offsets[i] = bo->paddr + fb->offsets[i];
+
+- /* We don't support subpixel source positioning for scaling. */
+- if ((state->src.x1 & subpixel_src_mask) ||
+- (state->src.x2 & subpixel_src_mask) ||
+- (state->src.y1 & subpixel_src_mask) ||
+- (state->src.y2 & subpixel_src_mask)) {
+- return -EINVAL;
+- }
+-
+- vc4_state->src_x = state->src.x1 >> 16;
+- vc4_state->src_y = state->src.y1 >> 16;
+- vc4_state->src_w[0] = (state->src.x2 - state->src.x1) >> 16;
+- vc4_state->src_h[0] = (state->src.y2 - state->src.y1) >> 16;
++ /*
++ * We don't support subpixel source positioning for scaling,
++ * but fractional coordinates can be generated by clipping
++ * so just round for now
++ */
++ vc4_state->src_x = DIV_ROUND_CLOSEST(state->src.x1, 1 << 16);
++ vc4_state->src_y = DIV_ROUND_CLOSEST(state->src.y1, 1 << 16);
++ vc4_state->src_w[0] = DIV_ROUND_CLOSEST(state->src.x2, 1 << 16) - vc4_state->src_x;
++ vc4_state->src_h[0] = DIV_ROUND_CLOSEST(state->src.y2, 1 << 16) - vc4_state->src_y;
+
+ vc4_state->crtc_x = state->dst.x1;
+ vc4_state->crtc_y = state->dst.y1;
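Plane source coordinates are 16.16 fixed point (16 fractional bits). Rather than rejecting fractional values that clipping can legitimately produce, the new code rounds to the nearest pixel, which is exactly what DIV_ROUND_CLOSEST on the raw value does:

        /* 16.16 fixed point to nearest integer pixel */
        int x = DIV_ROUND_CLOSEST(src_x1, 1 << 16);

        /* 1.5 in 16.16 is 0x18000: (98304 + 32768) / 65536 == 2 */
        /* just below 1.5, 0x17fff: (98303 + 32768) / 65536 == 1 */
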
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index f8d83358d2a0f..9b2702116f93e 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -580,8 +580,10 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
+ spin_unlock(&vgdev->display_info_lock);
+
+ /* not in cache - need to talk to hw */
+- virtio_gpu_cmd_get_capset(vgdev, found_valid, args->cap_set_ver,
+- &cache_ent);
++ ret = virtio_gpu_cmd_get_capset(vgdev, found_valid, args->cap_set_ver,
++ &cache_ent);
++ if (ret)
++ return ret;
+ virtio_gpu_notify(vgdev);
+
+ copy_exit:
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index f293e6ad52daf..1cc8f3fc8e4ba 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -168,9 +168,9 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
+ * since virtio_gpu doesn't support dma-buf import from other devices.
+ */
+ shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
+- if (!shmem->pages) {
++ if (IS_ERR(shmem->pages)) {
+ drm_gem_shmem_unpin(&bo->base);
+- return -EINVAL;
++ return PTR_ERR(shmem->pages);
+ }
+
+ if (use_dma_api) {
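The virtio fix corrects a mismatched error convention: drm_gem_shmem_get_sg_table() reports failure through an ERR_PTR-encoded pointer, never NULL, so a NULL test lets errors sail through. The canonical check:

        struct sg_table *sgt = drm_gem_shmem_get_sg_table(&bo->base);

        if (IS_ERR(sgt))                /* failure is an encoded errno, not NULL */
                return PTR_ERR(sgt);    /* recover the negative errno */
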
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index c6a1036bf2ea7..b47ac170108ca 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -157,7 +157,7 @@ static void compose_plane(struct vkms_composer *primary_composer,
+ void *vaddr;
+ void (*pixel_blend)(const u8 *p_src, u8 *p_dst);
+
+- if (WARN_ON(iosys_map_is_null(&primary_composer->map[0])))
++ if (WARN_ON(iosys_map_is_null(&plane_composer->map[0])))
+ return;
+
+ vaddr = plane_composer->map[0].vaddr;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 0f770a2b47ff5..e27ee18710666 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -173,6 +173,8 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ dev = &privdata->pdev->dev;
+
+ cl_data->num_hid_devices = amd_mp2_get_sensor_num(privdata, &cl_data->sensor_idx[0]);
++ if (cl_data->num_hid_devices == 0)
++ return -ENODEV;
+
+ INIT_DELAYED_WORK(&cl_data->work, amd_sfh_work);
+ INIT_DELAYED_WORK(&cl_data->work_buffer, amd_sfh_work_buffer);
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+index 1089134030b0c..1b18291fc5afe 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+@@ -101,11 +101,15 @@ static int amdtp_wait_for_response(struct hid_device *hid)
+
+ void amdtp_hid_wakeup(struct hid_device *hid)
+ {
+- struct amdtp_hid_data *hid_data = hid->driver_data;
+- struct amdtp_cl_data *cli_data = hid_data->cli_data;
++ struct amdtp_hid_data *hid_data;
++ struct amdtp_cl_data *cli_data;
+
+- cli_data->request_done[cli_data->cur_hid_dev] = true;
+- wake_up_interruptible(&hid_data->hid_wait);
++ if (hid) {
++ hid_data = hid->driver_data;
++ cli_data = hid_data->cli_data;
++ cli_data->request_done[cli_data->cur_hid_dev] = true;
++ wake_up_interruptible(&hid_data->hid_wait);
++ }
+ }
+
+ static struct hid_ll_driver amdtp_hid_ll_driver = {
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index dadc491bbf6b2..1441787a154a8 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -327,7 +327,8 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ rc = amd_sfh_hid_client_init(privdata);
+ if (rc) {
+ amd_sfh_clear_intr(privdata);
+- dev_err(&pdev->dev, "amd_sfh_hid_client_init failed\n");
++ if (rc != -EOPNOTSUPP)
++ dev_err(&pdev->dev, "amd_sfh_hid_client_init failed\n");
+ return rc;
+ }
+
+diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
+index 2b986d0dbde46..db146d0f7937e 100644
+--- a/drivers/hid/hid-alps.c
++++ b/drivers/hid/hid-alps.c
+@@ -830,6 +830,8 @@ static const struct hid_device_id alps_id[] = {
+ USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1_DUAL) },
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
+ USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1) },
++ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
++ USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY) },
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY,
+ USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_T4_BTNLESS) },
+ { }
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index ece147d1a2789..1e16b0fa310d1 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -790,6 +790,11 @@ static int cp2112_xfer(struct i2c_adapter *adap, u16 addr,
+ data->word = le16_to_cpup((__le16 *)buf);
+ break;
+ case I2C_SMBUS_I2C_BLOCK_DATA:
++ if (read_length > I2C_SMBUS_BLOCK_MAX) {
++ ret = -EINVAL;
++ goto power_normal;
++ }
++
+ memcpy(data->block + 1, buf, read_length);
+ break;
+ case I2C_SMBUS_BLOCK_DATA:
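This cp2112 hunk and the mcp2221 one further down guard the same hazard: the transfer length is device-controlled while data->block is a fixed buffer sized around I2C_SMBUS_BLOCK_MAX (32 bytes of payload), so an unchecked memcpy lets buggy or malicious hardware overrun it. The pattern:

        /* length comes from the device; clamp before touching the fixed buffer */
        if (read_length > I2C_SMBUS_BLOCK_MAX)
                return -EINVAL;
        memcpy(data->block + 1, buf, read_length);      /* block[0] is the length */
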
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index d9eb676abe960..9c4e92a9c6460 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -413,6 +413,7 @@
+ #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
+ #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
++#define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN 0x2A1C
+
+ #define USB_VENDOR_ID_ELECOM 0x056e
+ #define USB_DEVICE_ID_ELECOM_BM084 0x0061
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index c6b27aab90414..48c1c02c69f4e 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -381,6 +381,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ HID_BATTERY_QUIRK_IGNORE },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
+ HID_BATTERY_QUIRK_IGNORE },
++ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN),
++ HID_BATTERY_QUIRK_IGNORE },
+ {}
+ };
+
+diff --git a/drivers/hid/hid-mcp2221.c b/drivers/hid/hid-mcp2221.c
+index 4211b9839209b..de52e9f7bb8cb 100644
+--- a/drivers/hid/hid-mcp2221.c
++++ b/drivers/hid/hid-mcp2221.c
+@@ -385,6 +385,9 @@ static int mcp_smbus_write(struct mcp2221 *mcp, u16 addr,
+ data_len = 7;
+ break;
+ default:
++ if (len > I2C_SMBUS_BLOCK_MAX)
++ return -EINVAL;
++
+ memcpy(&mcp->txbuf[5], buf, len);
+ data_len = len + 5;
+ }
+diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
+index 2204de889739f..4b1173957c17c 100644
+--- a/drivers/hid/hid-nintendo.c
++++ b/drivers/hid/hid-nintendo.c
+@@ -1586,6 +1586,7 @@ static const unsigned int joycon_button_inputs_r[] = {
+ /* We report joy-con d-pad inputs as buttons and pro controller as a hat. */
+ static const unsigned int joycon_dpad_inputs_jc[] = {
+ BTN_DPAD_UP, BTN_DPAD_DOWN, BTN_DPAD_LEFT, BTN_DPAD_RIGHT,
++ 0 /* 0 signals end of array */
+ };
+
+ static int joycon_input_create(struct joycon_ctlr *ctlr)
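The joycon table is walked until a zero entry, so without an explicit terminator the loop reads past the array. A sketch of a sentinel-terminated table together with an assumed consumer loop:

        static const unsigned int buttons[] = {
                BTN_DPAD_UP, BTN_DPAD_DOWN,
                0       /* sentinel: marks the end of the table */
        };

        for (i = 0; buttons[i] != 0; i++)
                input_set_capability(input, EV_KEY, buttons[i]);
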
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 620fe74f56769..98384b911288e 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2121,7 +2121,7 @@ static int wacom_register_inputs(struct wacom *wacom)
+
+ error = wacom_setup_pad_input_capabilities(pad_input_dev, wacom_wac);
+ if (error) {
+- /* no pad in use on this interface */
++ /* no pad events using this interface */
+ input_free_device(pad_input_dev);
+ wacom_wac->pad_input = NULL;
+ pad_input_dev = NULL;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 9470c2b0b5294..f8cc4bb3e3a72 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -638,9 +638,26 @@ static int wacom_intuos_id_mangle(int tool_id)
+ return (tool_id & ~0xFFF) << 4 | (tool_id & 0xFFF);
+ }
+
++static bool wacom_is_art_pen(int tool_id)
++{
++ bool is_art_pen = false;
++
++ switch (tool_id) {
++ case 0x885: /* Intuos3 Marker Pen */
++ case 0x804: /* Intuos4/5 13HD/24HD Marker Pen */
++ case 0x10804: /* Intuos4/5 13HD/24HD Art Pen */
++ is_art_pen = true;
++ break;
++ }
++ return is_art_pen;
++}
++
+ static int wacom_intuos_get_tool_type(int tool_id)
+ {
+- int tool_type;
++ int tool_type = BTN_TOOL_PEN;
++
++ if (wacom_is_art_pen(tool_id))
++ return tool_type;
+
+ switch (tool_id) {
+ case 0x812: /* Inking pen */
+@@ -655,12 +672,9 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ case 0x852:
+ case 0x823: /* Intuos3 Grip Pen */
+ case 0x813: /* Intuos3 Classic Pen */
+- case 0x885: /* Intuos3 Marker Pen */
+ case 0x802: /* Intuos4/5 13HD/24HD General Pen */
+- case 0x804: /* Intuos4/5 13HD/24HD Marker Pen */
+ case 0x8e2: /* IntuosHT2 pen */
+ case 0x022:
+- case 0x10804: /* Intuos4/5 13HD/24HD Art Pen */
+ case 0x10842: /* MobileStudio Pro Pro Pen slim */
+ case 0x14802: /* Intuos4/5 13HD/24HD Classic Pen */
+ case 0x16802: /* Cintiq 13HD Pro Pen */
+@@ -718,10 +732,6 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ case 0x10902: /* Intuos4/5 13HD/24HD Airbrush */
+ tool_type = BTN_TOOL_AIRBRUSH;
+ break;
+-
+- default: /* Unknown tool */
+- tool_type = BTN_TOOL_PEN;
+- break;
+ }
+ return tool_type;
+ }
+@@ -2009,7 +2019,6 @@ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev,
+ wacom_wac->has_mute_touch_switch = true;
+ usage->type = EV_SW;
+ usage->code = SW_MUTE_DEVICE;
+- features->device_type |= WACOM_DEVICETYPE_PAD;
+ break;
+ case WACOM_HID_WD_TOUCHSTRIP:
+ wacom_map_usage(input, usage, field, EV_ABS, ABS_RX, 0);
+@@ -2089,6 +2098,30 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ wacom_wac->hid_data.inrange_state |= value;
+ }
+
++ /* Process touch switch state first since it is reported through touch interface,
++	 * which is independent of the pad interface. When there are no other pad
++ * events, the pad interface will not even be created.
++ */
++ if ((equivalent_usage == WACOM_HID_WD_MUTE_DEVICE) ||
++ (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)) {
++ if (wacom_wac->shared->touch_input) {
++ bool *is_touch_on = &wacom_wac->shared->is_touch_on;
++
++ if (equivalent_usage == WACOM_HID_WD_MUTE_DEVICE && value)
++ *is_touch_on = !(*is_touch_on);
++ else if (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)
++ *is_touch_on = value;
++
++ input_report_switch(wacom_wac->shared->touch_input,
++ SW_MUTE_DEVICE, !(*is_touch_on));
++ input_sync(wacom_wac->shared->touch_input);
++ }
++ return;
++ }
++
++ if (!input)
++ return;
++
+ switch (equivalent_usage) {
+ case WACOM_HID_WD_TOUCHRING:
+ /*
+@@ -2124,22 +2157,6 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ input_event(input, usage->type, usage->code, 0);
+ break;
+
+- case WACOM_HID_WD_MUTE_DEVICE:
+- case WACOM_HID_WD_TOUCHONOFF:
+- if (wacom_wac->shared->touch_input) {
+- bool *is_touch_on = &wacom_wac->shared->is_touch_on;
+-
+- if (equivalent_usage == WACOM_HID_WD_MUTE_DEVICE && value)
+- *is_touch_on = !(*is_touch_on);
+- else if (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)
+- *is_touch_on = value;
+-
+- input_report_switch(wacom_wac->shared->touch_input,
+- SW_MUTE_DEVICE, !(*is_touch_on));
+- input_sync(wacom_wac->shared->touch_input);
+- }
+- break;
+-
+ case WACOM_HID_WD_MODE_CHANGE:
+ if (wacom_wac->is_direct_mode != value) {
+ wacom_wac->is_direct_mode = value;
+@@ -2336,6 +2353,9 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ }
+ return;
+ case HID_DG_TWIST:
++ /* don't modify the value if the pen doesn't support the feature */
++ if (!wacom_is_art_pen(wacom_wac->id[0])) return;
++
+ /*
+ * Userspace expects pen twist to have its zero point when
+ * the buttons/finger is on the tablet's left. HID values
+@@ -2822,7 +2842,7 @@ void wacom_wac_event(struct hid_device *hdev, struct hid_field *field,
+ /* usage tests must precede field tests */
+ if (WACOM_BATTERY_USAGE(usage))
+ wacom_wac_battery_event(hdev, field, usage, value);
+- else if (WACOM_PAD_FIELD(field) && wacom->wacom_wac.pad_input)
++ else if (WACOM_PAD_FIELD(field))
+ wacom_wac_pad_event(hdev, field, usage, value);
+ else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
+ wacom_wac_pen_event(hdev, field, usage, value);
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index 071aa6f4e109b..16c10ac84a91e 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -1365,6 +1365,14 @@ static const struct dmi_system_id i8k_whitelist_fan_control[] __initconst = {
+ },
+ .driver_data = (void *)&i8k_fan_control_data[I8K_FAN_34A3_35A3],
+ },
++ {
++ .ident = "Dell XPS 13 7390",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 13 7390"),
++ },
++ .driver_data = (void *)&i8k_fan_control_data[I8K_FAN_34A3_35A3],
++ },
+ { }
+ };
+
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 1eb37106a220b..5bac2b0fc7bb6 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -621,3 +621,4 @@ module_exit(drivetemp_exit);
+ MODULE_AUTHOR("Guenter Roeck <linus@roeck-us.net>");
+ MODULE_DESCRIPTION("Hard drive temperature monitor");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:drivetemp");
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 446964cbae4c0..da9ec6983e139 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -1480,7 +1480,7 @@ static int nct6775_update_pwm_limits(struct device *dev)
+ return 0;
+ }
+
+-static struct nct6775_data *nct6775_update_device(struct device *dev)
++struct nct6775_data *nct6775_update_device(struct device *dev)
+ {
+ struct nct6775_data *data = dev_get_drvdata(dev);
+ int i, j, err = 0;
+@@ -1615,6 +1615,7 @@ out:
+ mutex_unlock(&data->update_lock);
+ return err ? ERR_PTR(err) : data;
+ }
++EXPORT_SYMBOL_GPL(nct6775_update_device);
+
+ /*
+ * Sysfs callback functions
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index 6d46c94018984..8c108f4cc5037 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -359,7 +359,7 @@ static int __maybe_unused nct6775_suspend(struct device *dev)
+ {
+ int err;
+ u16 tmp;
+- struct nct6775_data *data = dev_get_drvdata(dev);
++ struct nct6775_data *data = nct6775_update_device(dev);
+
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+diff --git a/drivers/hwmon/nct6775.h b/drivers/hwmon/nct6775.h
+index 93f708148e658..be41848c3cd29 100644
+--- a/drivers/hwmon/nct6775.h
++++ b/drivers/hwmon/nct6775.h
+@@ -196,6 +196,8 @@ static inline int nct6775_write_value(struct nct6775_data *data, u16 reg, u16 va
+ return regmap_write(data->regmap, reg, value);
+ }
+
++struct nct6775_data *nct6775_update_device(struct device *dev);
++
+ bool nct6775_reg_is_word_sized(struct nct6775_data *data, u16 reg);
+ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ const struct regmap_config *regmapcfg);
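The nct6775 change is the three-step recipe for sharing a formerly file-local function between the core and platform modules: drop static on the definition, export the symbol, and declare it in the shared header. In outline:

        /* core module: the definition loses its static qualifier */
        struct nct6775_data *nct6775_update_device(struct device *dev)
        {
                /* ... */
        }
        EXPORT_SYMBOL_GPL(nct6775_update_device);       /* visible to GPL modules */

        /* shared header: prototype for the consuming module */
        struct nct6775_data *nct6775_update_device(struct device *dev);
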
+diff --git a/drivers/hwmon/sch56xx-common.c b/drivers/hwmon/sch56xx-common.c
+index 3ece53adabd62..de3a0886c2f72 100644
+--- a/drivers/hwmon/sch56xx-common.c
++++ b/drivers/hwmon/sch56xx-common.c
+@@ -523,6 +523,28 @@ static int __init sch56xx_device_add(int address, const char *name)
+ return PTR_ERR_OR_ZERO(sch56xx_pdev);
+ }
+
++static const struct dmi_system_id sch56xx_dmi_override_table[] __initconst = {
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS W380"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO P710"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO E9900"),
++ },
++ },
++ { }
++};
++
+ /* For autoloading only */
+ static const struct dmi_system_id sch56xx_dmi_table[] __initconst = {
+ {
+@@ -543,16 +565,18 @@ static int __init sch56xx_init(void)
+ if (!dmi_check_system(sch56xx_dmi_table))
+ return -ENODEV;
+
+- /*
+- * Some machines like the Esprimo P720 and Esprimo C700 have
+- * onboard devices named " Antiope"/" Theseus" instead of
+- * "Antiope"/"Theseus", so we need to check for both.
+- */
+- if (!dmi_find_device(DMI_DEV_TYPE_OTHER, "Antiope", NULL) &&
+- !dmi_find_device(DMI_DEV_TYPE_OTHER, " Antiope", NULL) &&
+- !dmi_find_device(DMI_DEV_TYPE_OTHER, "Theseus", NULL) &&
+- !dmi_find_device(DMI_DEV_TYPE_OTHER, " Theseus", NULL))
+- return -ENODEV;
++ if (!dmi_check_system(sch56xx_dmi_override_table)) {
++ /*
++ * Some machines like the Esprimo P720 and Esprimo C700 have
++ * onboard devices named " Antiope"/" Theseus" instead of
++ * "Antiope"/"Theseus", so we need to check for both.
++ */
++ if (!dmi_find_device(DMI_DEV_TYPE_OTHER, "Antiope", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OTHER, " Antiope", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OTHER, "Theseus", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OTHER, " Theseus", NULL))
++ return -ENODEV;
++ }
+ }
+
+ /*
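dmi_check_system() returns the number of table entries matching the running firmware, so the override table above lets the listed Fujitsu boards bypass the "Antiope"/"Theseus" device-name scan that their BIOSes fail. A condensed, self-contained sketch of the resulting control flow (the one-entry table here is illustrative; the patch uses sch56xx_dmi_override_table):

#include <linux/dmi.h>
#include <linux/errno.h>
#include <linux/init.h>

static const struct dmi_system_id example_override[] __initconst = {
        { .matches = { DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
                       DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS W380") } },
        { }     /* terminator */
};

static int __init example_init(void)
{
        /* Non-zero return = number of entries matching this board. */
        if (dmi_check_system(example_override))
                return 0;       /* allow-listed: skip the name scan */

        /* Otherwise require one of the (possibly space-prefixed) names. */
        if (!dmi_find_device(DMI_DEV_TYPE_OTHER, "Antiope", NULL) &&
            !dmi_find_device(DMI_DEV_TYPE_OTHER, " Antiope", NULL))
                return -ENODEV;

        return 0;
}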
+diff --git a/drivers/hwmon/sht15.c b/drivers/hwmon/sht15.c
+index 7f4a639597306..ae4d14257a11d 100644
+--- a/drivers/hwmon/sht15.c
++++ b/drivers/hwmon/sht15.c
+@@ -1020,25 +1020,20 @@ err_release_reg:
+ static int sht15_remove(struct platform_device *pdev)
+ {
+ struct sht15_data *data = platform_get_drvdata(pdev);
++ int ret;
+
+- /*
+- * Make sure any reads from the device are done and
+- * prevent new ones beginning
+- */
+- mutex_lock(&data->read_lock);
+- if (sht15_soft_reset(data)) {
+- mutex_unlock(&data->read_lock);
+- return -EFAULT;
+- }
+ hwmon_device_unregister(data->hwmon_dev);
+ sysfs_remove_group(&pdev->dev.kobj, &sht15_attr_group);
++
++ ret = sht15_soft_reset(data);
++ if (ret)
++ dev_err(&pdev->dev, "Failed to reset device (%pe)\n", ERR_PTR(ret));
++
+ if (!IS_ERR(data->reg)) {
+ regulator_unregister_notifier(data->reg, &data->nb);
+ regulator_disable(data->reg);
+ }
+
+- mutex_unlock(&data->read_lock);
+-
+ return 0;
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight-config.h b/drivers/hwtracing/coresight/coresight-config.h
+index 2e1670523461c..6ba0139757418 100644
+--- a/drivers/hwtracing/coresight/coresight-config.h
++++ b/drivers/hwtracing/coresight/coresight-config.h
+@@ -134,6 +134,7 @@ struct cscfg_feature_desc {
+ * @active_cnt: ref count for activate on this configuration.
+ * @load_owner: handle to load owner for dynamic load and unload of configs.
+ * @fs_group: reference to configfs group for dynamic unload.
++ * @available: config can be activated - multi-stage load sets true on completion.
+ */
+ struct cscfg_config_desc {
+ const char *name;
+@@ -148,6 +149,7 @@ struct cscfg_config_desc {
+ atomic_t active_cnt;
+ void *load_owner;
+ struct config_group *fs_group;
++ bool available;
+ };
+
+ /**
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index ee6ce92ab4c31..1edfec1e9d18e 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1424,6 +1424,7 @@ static int coresight_remove_match(struct device *dev, void *data)
+ * platform data.
+ */
+ fwnode_handle_put(conn->child_fwnode);
++ conn->child_fwnode = NULL;
+ /* No need to continue */
+ break;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
+index 11850fd8c3b5b..11138a9762b01 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.c
++++ b/drivers/hwtracing/coresight/coresight-syscfg.c
+@@ -414,6 +414,27 @@ static void cscfg_remove_owned_csdev_features(struct coresight_device *csdev, vo
+ }
+ }
+
++/*
++ * Unregister all configurations and features owned by load_owner from
++ * configfs. Although this is called without the list mutex being held,
++ * it is in the context of an unload operation; such operations are
++ * strictly serialised, so the lists cannot change during this call.
++ */
++static void cscfg_fs_unregister_cfgs_feats(void *load_owner)
++{
++ struct cscfg_config_desc *config_desc;
++ struct cscfg_feature_desc *feat_desc;
++
++ list_for_each_entry(config_desc, &cscfg_mgr->config_desc_list, item) {
++ if (config_desc->load_owner == load_owner)
++ cscfg_configfs_del_config(config_desc);
++ }
++ list_for_each_entry(feat_desc, &cscfg_mgr->feat_desc_list, item) {
++ if (feat_desc->load_owner == load_owner)
++ cscfg_configfs_del_feature(feat_desc);
++ }
++}
++
+ /*
+ * removal is relatively easy - just remove from all lists, anything that
+ * matches the owner. Memory for the descriptors will be managed by the owner,
+@@ -426,6 +447,8 @@ static void cscfg_unload_owned_cfgs_feats(void *load_owner)
+ struct cscfg_feature_desc *feat_desc, *feat_tmp;
+ struct cscfg_registered_csdev *csdev_item;
+
++ lockdep_assert_held(&cscfg_mutex);
++
+ /* remove from each csdev instance feature and config lists */
+ list_for_each_entry(csdev_item, &cscfg_mgr->csdev_desc_list, item) {
+ /*
+@@ -439,7 +462,6 @@ static void cscfg_unload_owned_cfgs_feats(void *load_owner)
+ /* remove from the config descriptor lists */
+ list_for_each_entry_safe(config_desc, cfg_tmp, &cscfg_mgr->config_desc_list, item) {
+ if (config_desc->load_owner == load_owner) {
+- cscfg_configfs_del_config(config_desc);
+ etm_perf_del_symlink_cscfg(config_desc);
+ list_del(&config_desc->item);
+ }
+@@ -448,12 +470,90 @@ static void cscfg_unload_owned_cfgs_feats(void *load_owner)
+ /* remove from the feature descriptor lists */
+ list_for_each_entry_safe(feat_desc, feat_tmp, &cscfg_mgr->feat_desc_list, item) {
+ if (feat_desc->load_owner == load_owner) {
+- cscfg_configfs_del_feature(feat_desc);
+ list_del(&feat_desc->item);
+ }
+ }
+ }
+
++/*
++ * load the features and configs to the lists - called with list mutex held
++ */
++static int cscfg_load_owned_cfgs_feats(struct cscfg_config_desc **config_descs,
++ struct cscfg_feature_desc **feat_descs,
++ struct cscfg_load_owner_info *owner_info)
++{
++ int i, err;
++
++ lockdep_assert_held(&cscfg_mutex);
++
++ /* load features first */
++ if (feat_descs) {
++ for (i = 0; feat_descs[i]; i++) {
++ err = cscfg_load_feat(feat_descs[i]);
++ if (err) {
++ pr_err("coresight-syscfg: Failed to load feature %s\n",
++ feat_descs[i]->name);
++ return err;
++ }
++ feat_descs[i]->load_owner = owner_info;
++ }
++ }
++
++ /* next any configurations to check feature dependencies */
++ if (config_descs) {
++ for (i = 0; config_descs[i]; i++) {
++ err = cscfg_load_config(config_descs[i]);
++ if (err) {
++ pr_err("coresight-syscfg: Failed to load configuration %s\n",
++ config_descs[i]->name);
++ return err;
++ }
++ config_descs[i]->load_owner = owner_info;
++ config_descs[i]->available = false;
++ }
++ }
++ return 0;
++}
++
++/* set configurations as available to activate at the end of the load process */
++static void cscfg_set_configs_available(struct cscfg_config_desc **config_descs)
++{
++ int i;
++
++ lockdep_assert_held(&cscfg_mutex);
++
++ if (config_descs) {
++ for (i = 0; config_descs[i]; i++)
++ config_descs[i]->available = true;
++ }
++}
++
++/*
++ * Create and register each of the configurations and features with configfs.
++ * Called without mutex being held.
++ */
++static int cscfg_fs_register_cfgs_feats(struct cscfg_config_desc **config_descs,
++ struct cscfg_feature_desc **feat_descs)
++{
++ int i, err;
++
++ if (feat_descs) {
++ for (i = 0; feat_descs[i]; i++) {
++ err = cscfg_configfs_add_feature(feat_descs[i]);
++ if (err)
++ return err;
++ }
++ }
++ if (config_descs) {
++ for (i = 0; config_descs[i]; i++) {
++ err = cscfg_configfs_add_config(config_descs[i]);
++ if (err)
++ return err;
++ }
++ }
++ return 0;
++}
++
+ /**
+ * cscfg_load_config_sets - API function to load feature and config sets.
+ *
+@@ -476,57 +576,63 @@ int cscfg_load_config_sets(struct cscfg_config_desc **config_descs,
+ struct cscfg_feature_desc **feat_descs,
+ struct cscfg_load_owner_info *owner_info)
+ {
+- int err = 0, i = 0;
++ int err = 0;
+
+ mutex_lock(&cscfg_mutex);
+-
+- /* load features first */
+- if (feat_descs) {
+- while (feat_descs[i]) {
+- err = cscfg_load_feat(feat_descs[i]);
+- if (!err)
+- err = cscfg_configfs_add_feature(feat_descs[i]);
+- if (err) {
+- pr_err("coresight-syscfg: Failed to load feature %s\n",
+- feat_descs[i]->name);
+- cscfg_unload_owned_cfgs_feats(owner_info);
+- goto exit_unlock;
+- }
+- feat_descs[i]->load_owner = owner_info;
+- i++;
+- }
++ if (cscfg_mgr->load_state != CSCFG_NONE) {
++ mutex_unlock(&cscfg_mutex);
++ return -EBUSY;
+ }
++ cscfg_mgr->load_state = CSCFG_LOAD;
+
+- /* next any configurations to check feature dependencies */
+- i = 0;
+- if (config_descs) {
+- while (config_descs[i]) {
+- err = cscfg_load_config(config_descs[i]);
+- if (!err)
+- err = cscfg_configfs_add_config(config_descs[i]);
+- if (err) {
+- pr_err("coresight-syscfg: Failed to load configuration %s\n",
+- config_descs[i]->name);
+- cscfg_unload_owned_cfgs_feats(owner_info);
+- goto exit_unlock;
+- }
+- config_descs[i]->load_owner = owner_info;
+- i++;
+- }
+- }
++ /* first load and add to the lists */
++ err = cscfg_load_owned_cfgs_feats(config_descs, feat_descs, owner_info);
++ if (err)
++ goto err_clean_load;
+
+ /* add the load owner to the load order list */
+ list_add_tail(&owner_info->item, &cscfg_mgr->load_order_list);
+ if (!list_is_singular(&cscfg_mgr->load_order_list)) {
+ /* lock previous item in load order list */
+ err = cscfg_owner_get(list_prev_entry(owner_info, item));
+- if (err) {
+- cscfg_unload_owned_cfgs_feats(owner_info);
+- list_del(&owner_info->item);
+- }
++ if (err)
++ goto err_clean_owner_list;
+ }
+
++ /*
++ * make visible to configfs - configfs manipulation must occur outside
++ * the list mutex lock to avoid circular lockdep issues with configfs
++ * built in mutexes and semaphores. This is safe as it is not possible
++ * to start a new load/unload operation till the current one is done.
++ */
++ mutex_unlock(&cscfg_mutex);
++
++ /* create the configfs elements */
++ err = cscfg_fs_register_cfgs_feats(config_descs, feat_descs);
++ mutex_lock(&cscfg_mutex);
++
++ if (err)
++ goto err_clean_cfs;
++
++ /* mark any new configs as available for activation */
++ cscfg_set_configs_available(config_descs);
++ goto exit_unlock;
++
++err_clean_cfs:
++ /* cleanup after error registering with configfs */
++ cscfg_fs_unregister_cfgs_feats(owner_info);
++
++ if (!list_is_singular(&cscfg_mgr->load_order_list))
++ cscfg_owner_put(list_prev_entry(owner_info, item));
++
++err_clean_owner_list:
++ list_del(&owner_info->item);
++
++err_clean_load:
++ cscfg_unload_owned_cfgs_feats(owner_info);
++
+ exit_unlock:
++ cscfg_mgr->load_state = CSCFG_NONE;
+ mutex_unlock(&cscfg_mutex);
+ return err;
+ }
+@@ -543,6 +649,9 @@ EXPORT_SYMBOL_GPL(cscfg_load_config_sets);
+ * 1) no configurations are active.
+ * 2) the set being unloaded was the last to be loaded to maintain dependencies.
+ *
++ * Once the unload operation commences, we disallow any configuration being
++ * made active until it is complete.
++ *
+ * @owner_info: Information on owner for set being unloaded.
+ */
+ int cscfg_unload_config_sets(struct cscfg_load_owner_info *owner_info)
+@@ -551,6 +660,13 @@ int cscfg_unload_config_sets(struct cscfg_load_owner_info *owner_info)
+ struct cscfg_load_owner_info *load_list_item = NULL;
+
+ mutex_lock(&cscfg_mutex);
++ if (cscfg_mgr->load_state != CSCFG_NONE) {
++ mutex_unlock(&cscfg_mutex);
++ return -EBUSY;
++ }
++
++ /* unload op in progress also prevents activation of any config */
++ cscfg_mgr->load_state = CSCFG_UNLOAD;
+
+ /* cannot unload if anything is active */
+ if (atomic_read(&cscfg_mgr->sys_active_cnt)) {
+@@ -571,7 +687,12 @@ int cscfg_unload_config_sets(struct cscfg_load_owner_info *owner_info)
+ goto exit_unlock;
+ }
+
+- /* unload all belonging to load_owner */
++ /* remove from configfs - again outside the scope of the list mutex */
++ mutex_unlock(&cscfg_mutex);
++ cscfg_fs_unregister_cfgs_feats(owner_info);
++ mutex_lock(&cscfg_mutex);
++
++ /* unload everything from lists belonging to load_owner */
+ cscfg_unload_owned_cfgs_feats(owner_info);
+
+ /* remove from load order list */
+@@ -582,6 +703,7 @@ int cscfg_unload_config_sets(struct cscfg_load_owner_info *owner_info)
+ list_del(&owner_info->item);
+
+ exit_unlock:
++ cscfg_mgr->load_state = CSCFG_NONE;
+ mutex_unlock(&cscfg_mutex);
+ return err;
+ }
+@@ -759,8 +881,15 @@ static int _cscfg_activate_config(unsigned long cfg_hash)
+ struct cscfg_config_desc *config_desc;
+ int err = -EINVAL;
+
++ if (cscfg_mgr->load_state == CSCFG_UNLOAD)
++ return -EBUSY;
++
+ list_for_each_entry(config_desc, &cscfg_mgr->config_desc_list, item) {
+ if ((unsigned long)config_desc->event_ea->var == cfg_hash) {
++ /* if we happen upon a partly loaded config, can't use it */
++ if (config_desc->available == false)
++ return -EBUSY;
++
+ /* must ensure that config cannot be unloaded in use */
+ err = cscfg_owner_get(config_desc->load_owner);
+ if (err)
+@@ -1022,8 +1151,10 @@ struct device *cscfg_device(void)
+ /* Must have a release function or the kernel will complain on module unload */
+ static void cscfg_dev_release(struct device *dev)
+ {
++ mutex_lock(&cscfg_mutex);
+ kfree(cscfg_mgr);
+ cscfg_mgr = NULL;
++ mutex_unlock(&cscfg_mutex);
+ }
+
+ /* a device is needed to "own" some kernel elements such as sysfs entries. */
+@@ -1042,6 +1173,14 @@ static int cscfg_create_device(void)
+ if (!cscfg_mgr)
+ goto create_dev_exit_unlock;
+
++ /* initialise the cscfg_mgr structure */
++ INIT_LIST_HEAD(&cscfg_mgr->csdev_desc_list);
++ INIT_LIST_HEAD(&cscfg_mgr->feat_desc_list);
++ INIT_LIST_HEAD(&cscfg_mgr->config_desc_list);
++ INIT_LIST_HEAD(&cscfg_mgr->load_order_list);
++ atomic_set(&cscfg_mgr->sys_active_cnt, 0);
++ cscfg_mgr->load_state = CSCFG_NONE;
++
+ /* setup the device */
+ dev = cscfg_device();
+ dev->release = cscfg_dev_release;
+@@ -1056,17 +1195,73 @@ create_dev_exit_unlock:
+ return err;
+ }
+
+-static void cscfg_clear_device(void)
++/*
++ * Loading and unloading are generally at the user's discretion.
++ * If exiting due to coresight module unload, we need to unload any
++ * configurations that remain before we unregister the configfs
++ * infrastructure.
++ *
++ * Do this by walking the load_owner list and taking appropriate action, depending on the load
++ * owner type.
++ */
++static void cscfg_unload_cfgs_on_exit(void)
+ {
+- struct cscfg_config_desc *cfg_desc;
++ struct cscfg_load_owner_info *owner_info = NULL;
+
++ /*
++ * grab the mutex - even though we are exiting, some configfs files
++ * may still be live till we dump them, so ensure list data is
++ * protected from a race condition.
++ */
+ mutex_lock(&cscfg_mutex);
+- list_for_each_entry(cfg_desc, &cscfg_mgr->config_desc_list, item) {
+- etm_perf_del_symlink_cscfg(cfg_desc);
++ while (!list_empty(&cscfg_mgr->load_order_list)) {
++
++ /* remove in reverse order of loading */
++ owner_info = list_last_entry(&cscfg_mgr->load_order_list,
++ struct cscfg_load_owner_info, item);
++
++ /* action according to type */
++ switch (owner_info->type) {
++ case CSCFG_OWNER_PRELOAD:
++ /*
++ * preloaded descriptors are statically allocated in
++ * this module - just need to unload dynamic items from
++ * csdev lists, and remove from configfs directories.
++ */
++ pr_info("cscfg: unloading preloaded configurations\n");
++ break;
++
++ case CSCFG_OWNER_MODULE:
++ /*
++ * this is an error - the loadable module must have been unloaded prior
++ * to the coresight module unload. Therefore that module has not
++ * correctly unloaded configs in its own exit code.
++ * Nothing to do other than emit an error string as the static descriptor
++ * references we need to unload will have disappeared with the module.
++ */
++ pr_err("cscfg: ERROR: prior module failed to unload configuration\n");
++ goto list_remove;
++ }
++
++ /* remove from configfs - outside the scope of the list mutex */
++ mutex_unlock(&cscfg_mutex);
++ cscfg_fs_unregister_cfgs_feats(owner_info);
++ mutex_lock(&cscfg_mutex);
++
++ /* Next unload from csdev lists. */
++ cscfg_unload_owned_cfgs_feats(owner_info);
++
++list_remove:
++ /* remove from load order list */
++ list_del(&owner_info->item);
+ }
++ mutex_unlock(&cscfg_mutex);
++}
++
++static void cscfg_clear_device(void)
++{
++ cscfg_unload_cfgs_on_exit();
+ cscfg_configfs_release(cscfg_mgr);
+ device_unregister(cscfg_device());
+- mutex_unlock(&cscfg_mutex);
+ }
+
+ /* Initialise system config management API device */
+@@ -1074,20 +1269,16 @@ int __init cscfg_init(void)
+ {
+ int err = 0;
+
++ /* create the device and init cscfg_mgr */
+ err = cscfg_create_device();
+ if (err)
+ return err;
+
++ /* initialise configfs subsystem */
+ err = cscfg_configfs_init(cscfg_mgr);
+ if (err)
+ goto exit_err;
+
+- INIT_LIST_HEAD(&cscfg_mgr->csdev_desc_list);
+- INIT_LIST_HEAD(&cscfg_mgr->feat_desc_list);
+- INIT_LIST_HEAD(&cscfg_mgr->config_desc_list);
+- INIT_LIST_HEAD(&cscfg_mgr->load_order_list);
+- atomic_set(&cscfg_mgr->sys_active_cnt, 0);
+-
+ /* preload built-in configurations */
+ err = cscfg_preload(THIS_MODULE);
+ if (err)
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.h b/drivers/hwtracing/coresight/coresight-syscfg.h
+index 9106ffab48337..66e2db890d820 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.h
++++ b/drivers/hwtracing/coresight/coresight-syscfg.h
+@@ -12,6 +12,17 @@
+
+ #include "coresight-config.h"
+
++/*
++ * Load operation types.
++ * When loading or unloading, another load operation cannot be run.
++ * When unloading, configurations cannot be activated.
++ */
++enum cscfg_load_ops {
++ CSCFG_NONE,
++ CSCFG_LOAD,
++ CSCFG_UNLOAD
++};
++
+ /**
+ * System configuration manager device.
+ *
+@@ -30,6 +41,7 @@
+ * @cfgfs_subsys: configfs subsystem used to manage configurations.
+ * @sysfs_active_config:Active config hash used if CoreSight controlled from sysfs.
+ * @sysfs_active_preset:Active preset index used if CoreSight controlled from sysfs.
++ * @load_state: A multi-stage load/unload operation is in progress.
+ */
+ struct cscfg_manager {
+ struct device dev;
+@@ -41,6 +53,7 @@ struct cscfg_manager {
+ struct configfs_subsystem cfgfs_subsys;
+ u32 sysfs_active_config;
+ int sysfs_active_preset;
++ enum cscfg_load_ops load_state;
+ };
+
+ /* get reference to dev in cscfg_manager */
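The new load_state field is effectively a busy flag guarded by cscfg_mutex: a multi-stage load or unload marks itself in progress, drops the mutex around the configfs calls (which take their own locks and would otherwise invert lock order), and clears the flag when done, failing concurrent operations with -EBUSY. Stripped of the CoreSight types, a minimal sketch of the pattern, with register_with_configfs() as a placeholder:

#include <linux/errno.h>
#include <linux/mutex.h>

enum op_state { OP_NONE, OP_LOAD, OP_UNLOAD };

static DEFINE_MUTEX(op_lock);
static enum op_state cur_state = OP_NONE;

static int register_with_configfs(void) { return 0; }  /* placeholder */

static int do_load(void)
{
        int err;

        mutex_lock(&op_lock);
        if (cur_state != OP_NONE) {     /* another multi-stage op running */
                mutex_unlock(&op_lock);
                return -EBUSY;
        }
        cur_state = OP_LOAD;
        /* ...mutate the descriptor lists while still under the lock... */
        mutex_unlock(&op_lock);

        err = register_with_configfs(); /* sleeps, takes its own locks */

        mutex_lock(&op_lock);
        cur_state = OP_NONE;            /* operation complete either way */
        mutex_unlock(&op_lock);
        return err;
}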
+diff --git a/drivers/hwtracing/intel_th/msu-sink.c b/drivers/hwtracing/intel_th/msu-sink.c
+index 2c7f5116be126..891b28ea25fe6 100644
+--- a/drivers/hwtracing/intel_th/msu-sink.c
++++ b/drivers/hwtracing/intel_th/msu-sink.c
+@@ -71,6 +71,9 @@ static int msu_sink_alloc_window(void *data, struct sg_table **sgt, size_t size)
+ block = dma_alloc_coherent(priv->dev->parent->parent,
+ PAGE_SIZE, &sg_dma_address(sg_ptr),
+ GFP_KERNEL);
++ if (!block)
++ return -ENOMEM;
++
+ sg_set_buf(sg_ptr, block, PAGE_SIZE);
+ }
+
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index 70a07b4e99673..6c8215a47a601 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -1067,6 +1067,16 @@ msc_buffer_set_uc(struct msc *msc) {}
+ static inline void msc_buffer_set_wb(struct msc *msc) {}
+ #endif /* CONFIG_X86 */
+
++static struct page *msc_sg_page(struct scatterlist *sg)
++{
++ void *addr = sg_virt(sg);
++
++ if (is_vmalloc_addr(addr))
++ return vmalloc_to_page(addr);
++
++ return sg_page(sg);
++}
++
+ /**
+ * msc_buffer_win_alloc() - alloc a window for a multiblock mode
+ * @msc: MSC device
+@@ -1137,7 +1147,7 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
+ int i;
+
+ for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) {
+- struct page *page = sg_page(sg);
++ struct page *page = msc_sg_page(sg);
+
+ page->mapping = NULL;
+ dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
+@@ -1401,7 +1411,7 @@ found:
+ pgoff -= win->pgoff;
+
+ for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
+- struct page *page = sg_page(sg);
++ struct page *page = msc_sg_page(sg);
+ size_t pgsz = PFN_DOWN(sg->length);
+
+ if (pgoff < pgsz)
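sg_page() assumes the scatterlist entry points at a page-backed lowmem buffer; when the MSU window was carved from vmalloc space, the backing struct page must instead be recovered from the virtual address. A self-contained restatement of the dispatch (is_vmalloc_addr() lives in <linux/mm.h>):

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/* Resolve the backing page whether the segment is lowmem or vmalloc. */
static struct page *example_sg_page(struct scatterlist *sg)
{
        void *addr = sg_virt(sg);

        return is_vmalloc_addr(addr) ? vmalloc_to_page(addr) : sg_page(sg);
}

Both the window-free path and the mmap fault path above switch to this helper, so they agree on which page backs each segment.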
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 7da4f298ed01e..147d338c191e7 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -100,8 +100,10 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
+ }
+
+ th = intel_th_alloc(&pdev->dev, drvdata, resource, r);
+- if (IS_ERR(th))
+- return PTR_ERR(th);
++ if (IS_ERR(th)) {
++ err = PTR_ERR(th);
++ goto err_free_irq;
++ }
+
+ th->activate = intel_th_pci_activate;
+ th->deactivate = intel_th_pci_deactivate;
+@@ -109,6 +111,10 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
+ pci_set_master(pdev);
+
+ return 0;
++
++err_free_irq:
++ pci_free_irq_vectors(pdev);
++ return err;
+ }
+
+ static void intel_th_pci_remove(struct pci_dev *pdev)
+@@ -278,6 +284,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Meteor Lake-P */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7e24),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
++ {
++ /* Raptor Lake-S */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7a26),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
++ {
++ /* Raptor Lake-S CPU */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa76f),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Alder Lake CPU */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
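The probe fix above is the standard goto-unwind shape: once pci_alloc_irq_vectors() has succeeded, every later failure must funnel through a label that releases the vectors instead of returning directly. A reduced sketch, with do_second_step() standing in for intel_th_alloc():

#include <linux/pci.h>

static int do_second_step(struct pci_dev *pdev) { return 0; }  /* placeholder */

static int example_probe(struct pci_dev *pdev)
{
        int err;

        err = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_ALL_TYPES);
        if (err < 0)
                return err;             /* nothing acquired yet */

        err = do_second_step(pdev);
        if (err)
                goto err_free_irq;      /* undo the first step */

        return 0;

err_free_irq:
        pci_free_irq_vectors(pdev);
        return err;
}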
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 630cfa4ddd468..33f5588a50c07 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -573,8 +573,13 @@ static void cdns_i2c_mrecv(struct cdns_i2c *id)
+ ctrl_reg = cdns_i2c_readreg(CDNS_I2C_CR_OFFSET);
+ ctrl_reg |= CDNS_I2C_CR_RW | CDNS_I2C_CR_CLR_FIFO;
+
++ /*
++ * Receive up to I2C_SMBUS_BLOCK_MAX data bytes, plus one message length
++ * byte, plus one checksum byte if PEC is enabled. p_msg->len will be 2 if
++ * PEC is enabled, otherwise 1.
++ */
+ if (id->p_msg->flags & I2C_M_RECV_LEN)
+- id->recv_count = I2C_SMBUS_BLOCK_MAX + 1;
++ id->recv_count = I2C_SMBUS_BLOCK_MAX + id->p_msg->len;
+
+ id->curr_recv_count = id->recv_count;
+
+@@ -789,6 +794,9 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ if (id->err_status & CDNS_I2C_IXR_ARB_LOST)
+ return -EAGAIN;
+
++ if (msg->flags & I2C_M_RECV_LEN)
++ msg->len += min_t(unsigned int, msg->buf[0], I2C_SMBUS_BLOCK_MAX);
++
+ return 0;
+ }
+
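For I2C_M_RECV_LEN transfers the client preloads msg->len with the non-payload byte count: 1 for the length byte itself, 2 when a PEC byte follows. The controller therefore has to be armed for I2C_SMBUS_BLOCK_MAX (32) payload bytes plus msg->len before the device's count byte has even been seen, and afterwards msg->len grows by the clamped count; a device answering "4" with PEC disabled ends with msg->len = 1 + min(4, 32) = 5. A hedged sketch of both halves:

#include <linux/i2c.h>
#include <linux/minmax.h>

static void recv_len_example(struct i2c_msg *msg)
{
        /* Before the transfer: worst case the hardware may clock in. */
        u16 hw_count = I2C_SMBUS_BLOCK_MAX + msg->len;  /* 33, or 34 w/ PEC */

        /* ...program hw_count into the controller, run the transfer... */
        (void)hw_count;

        /* After the transfer: buf[0] is the device-reported byte count. */
        msg->len += min_t(unsigned int, msg->buf[0], I2C_SMBUS_BLOCK_MAX);
}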
+diff --git a/drivers/i2c/busses/i2c-mxs.c b/drivers/i2c/busses/i2c-mxs.c
+index 864a3f1bd4e14..68f67d084c63a 100644
+--- a/drivers/i2c/busses/i2c-mxs.c
++++ b/drivers/i2c/busses/i2c-mxs.c
+@@ -799,7 +799,7 @@ static int mxs_i2c_probe(struct platform_device *pdev)
+ if (!i2c)
+ return -ENOMEM;
+
+- i2c->dev_type = (enum mxs_i2c_devtype)of_device_get_match_data(&pdev->dev);
++ i2c->dev_type = (uintptr_t)of_device_get_match_data(&pdev->dev);
+
+ i2c->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(i2c->regs))
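of_device_get_match_data() hands back the const void * stashed in the of_device_id table; casting it straight to an enum triggers pointer-to-enum-cast warnings on some toolchains, so it is laundered through uintptr_t first. A sketch with an illustrative enum:

#include <linux/of_device.h>

enum example_devtype { EXAMPLE_V1, EXAMPLE_V2 };        /* illustrative */

static enum example_devtype example_devtype(struct device *dev)
{
        /* .data in the match table was set to e.g. (void *)EXAMPLE_V2;
         * uintptr_t makes the pointer-to-integer narrowing explicit. */
        return (enum example_devtype)(uintptr_t)of_device_get_match_data(dev);
}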
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index aede9d551130b..7b112be5e35ca 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -123,11 +123,11 @@ enum i2c_addr {
+ * Since the addr regs are sprinkled all over the address space,
+ * use this array to get the address of each register.
+ */
+-#define I2C_NUM_OWN_ADDR 10
++#define I2C_NUM_OWN_ADDR 2
++#define I2C_NUM_OWN_ADDR_SUPPORTED 2
++
+ static const int npcm_i2caddr[I2C_NUM_OWN_ADDR] = {
+- NPCM_I2CADDR1, NPCM_I2CADDR2, NPCM_I2CADDR3, NPCM_I2CADDR4,
+- NPCM_I2CADDR5, NPCM_I2CADDR6, NPCM_I2CADDR7, NPCM_I2CADDR8,
+- NPCM_I2CADDR9, NPCM_I2CADDR10,
++ NPCM_I2CADDR1, NPCM_I2CADDR2,
+ };
+ #endif
+
+@@ -392,14 +392,10 @@ static void npcm_i2c_disable(struct npcm_i2c *bus)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ int i;
+
+- /* select bank 0 for I2C addresses */
+- npcm_i2c_select_bank(bus, I2C_BANK_0);
+-
+ /* Slave addresses removal */
+- for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR; i++)
++ for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR_SUPPORTED; i++)
+ iowrite8(0, bus->reg + npcm_i2caddr[i]);
+
+- npcm_i2c_select_bank(bus, I2C_BANK_1);
+ #endif
+ /* Disable module */
+ i2cctl2 = ioread8(bus->reg + NPCM_I2CCTL2);
+@@ -604,8 +600,7 @@ static int npcm_i2c_slave_enable(struct npcm_i2c *bus, enum i2c_addr addr_type,
+ i2cctl1 &= ~NPCM_I2CCTL1_GCMEN;
+ iowrite8(i2cctl1, bus->reg + NPCM_I2CCTL1);
+ return 0;
+- }
+- if (addr_type == I2C_ARP_ADDR) {
++ } else if (addr_type == I2C_ARP_ADDR) {
+ i2cctl3 = ioread8(bus->reg + NPCM_I2CCTL3);
+ if (enable)
+ i2cctl3 |= I2CCTL3_ARPMEN;
+@@ -614,16 +609,16 @@ static int npcm_i2c_slave_enable(struct npcm_i2c *bus, enum i2c_addr addr_type,
+ iowrite8(i2cctl3, bus->reg + NPCM_I2CCTL3);
+ return 0;
+ }
++ if (addr_type > I2C_SLAVE_ADDR2 && addr_type <= I2C_SLAVE_ADDR10)
++ dev_err(bus->dev, "try to enable more than 2 SA not supported\n");
++
+ if (addr_type >= I2C_ARP_ADDR)
+ return -EFAULT;
+- /* select bank 0 for address 3 to 10 */
+- if (addr_type > I2C_SLAVE_ADDR2)
+- npcm_i2c_select_bank(bus, I2C_BANK_0);
++
+ /* Set and enable the address */
+ iowrite8(sa_reg, bus->reg + npcm_i2caddr[addr_type]);
+ npcm_i2c_slave_int_enable(bus, enable);
+- if (addr_type > I2C_SLAVE_ADDR2)
+- npcm_i2c_select_bank(bus, I2C_BANK_1);
++
+ return 0;
+ }
+ #endif
+@@ -846,15 +841,11 @@ static u8 npcm_i2c_get_slave_addr(struct npcm_i2c *bus, enum i2c_addr addr_type)
+ {
+ u8 slave_add;
+
+- /* select bank 0 for address 3 to 10 */
+- if (addr_type > I2C_SLAVE_ADDR2)
+- npcm_i2c_select_bank(bus, I2C_BANK_0);
++ if (addr_type > I2C_SLAVE_ADDR2 && addr_type <= I2C_SLAVE_ADDR10)
++ dev_err(bus->dev, "get slave: try to use more than 2 SA not supported\n");
+
+ slave_add = ioread8(bus->reg + npcm_i2caddr[(int)addr_type]);
+
+- if (addr_type > I2C_SLAVE_ADDR2)
+- npcm_i2c_select_bank(bus, I2C_BANK_1);
+-
+ return slave_add;
+ }
+
+@@ -864,12 +855,12 @@ static int npcm_i2c_remove_slave_addr(struct npcm_i2c *bus, u8 slave_add)
+
+ /* Set the enable bit */
+ slave_add |= 0x80;
+- npcm_i2c_select_bank(bus, I2C_BANK_0);
+- for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR; i++) {
++
++ for (i = I2C_SLAVE_ADDR1; i < I2C_NUM_OWN_ADDR_SUPPORTED; i++) {
+ if (ioread8(bus->reg + npcm_i2caddr[i]) == slave_add)
+ iowrite8(0, bus->reg + npcm_i2caddr[i]);
+ }
+- npcm_i2c_select_bank(bus, I2C_BANK_1);
++
+ return 0;
+ }
+
+@@ -924,11 +915,15 @@ static int npcm_i2c_slave_get_wr_buf(struct npcm_i2c *bus)
+ for (i = 0; i < I2C_HW_FIFO_SIZE; i++) {
+ if (bus->slv_wr_size >= I2C_HW_FIFO_SIZE)
+ break;
+- i2c_slave_event(bus->slave, I2C_SLAVE_READ_REQUESTED, &value);
++ if (bus->state == I2C_SLAVE_MATCH) {
++ i2c_slave_event(bus->slave, I2C_SLAVE_READ_REQUESTED, &value);
++ bus->state = I2C_OPER_STARTED;
++ } else {
++ i2c_slave_event(bus->slave, I2C_SLAVE_READ_PROCESSED, &value);
++ }
+ ind = (bus->slv_wr_ind + bus->slv_wr_size) % I2C_HW_FIFO_SIZE;
+ bus->slv_wr_buf[ind] = value;
+ bus->slv_wr_size++;
+- i2c_slave_event(bus->slave, I2C_SLAVE_READ_PROCESSED, &value);
+ }
+ return I2C_HW_FIFO_SIZE - ret;
+ }
+@@ -976,7 +971,6 @@ static void npcm_i2c_slave_xmit(struct npcm_i2c *bus, u16 nwrite,
+ if (nwrite == 0)
+ return;
+
+- bus->state = I2C_OPER_STARTED;
+ bus->operation = I2C_WRITE_OPER;
+
+ /* get the next buffer */
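The slave FIFO fix moves the REQUESTED/PROCESSED distinction to where bytes are actually produced: the first byte after an address match is fetched with I2C_SLAVE_READ_REQUESTED (the backend supplies byte 0 without advancing its cursor), the state flips to I2C_OPER_STARTED, and every later byte uses I2C_SLAVE_READ_PROCESSED. The contract, as a minimal sketch (requires CONFIG_I2C_SLAVE):

#include <linux/i2c.h>

static u8 next_slave_byte(struct i2c_client *slave, bool *first)
{
        u8 value;

        if (*first) {
                /* byte 0: backend supplies it, cursor stays put */
                i2c_slave_event(slave, I2C_SLAVE_READ_REQUESTED, &value);
                *first = false;
        } else {
                /* byte N: the previous byte was consumed, advance */
                i2c_slave_event(slave, I2C_SLAVE_READ_PROCESSED, &value);
        }
        return value;
}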
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 6ac402ea58fbe..3bec7c782824a 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -688,7 +688,7 @@ static int geni_i2c_xfer(struct i2c_adapter *adap,
+ pm_runtime_put_autosuspend(gi2c->se.dev);
+ gi2c->cur = NULL;
+ gi2c->err = 0;
+- return num;
++ return ret;
+ }
+
+ static u32 geni_i2c_func(struct i2c_adapter *adap)
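The geni one-liner matters because of the master_xfer contract: return the number of messages completed on success, or a negative errno on failure. Returning num unconditionally swallowed transfer errors. Roughly, with do_transfer() as a hypothetical helper:

#include <linux/i2c.h>

static int do_transfer(struct i2c_msg *msgs, int num) { return num; } /* placeholder */

static int example_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
                        int num)
{
        int ret;

        ret = do_transfer(msgs, num);
        if (ret < 0)
                return ret;     /* propagate -errno, never mask it */

        return num;             /* all messages completed */
}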
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index d43db2c3876e7..19a317fdcf5bf 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -2467,8 +2467,9 @@ void i2c_put_adapter(struct i2c_adapter *adap)
+ if (!adap)
+ return;
+
+- put_device(&adap->dev);
+ module_put(adap->owner);
++ /* Should be last, otherwise we risk use-after-free with 'adap' */
++ put_device(&adap->dev);
+ }
+ EXPORT_SYMBOL(i2c_put_adapter);
+
+diff --git a/drivers/i2c/muxes/i2c-mux-gpmux.c b/drivers/i2c/muxes/i2c-mux-gpmux.c
+index d3acd8d66c323..33024acaac02b 100644
+--- a/drivers/i2c/muxes/i2c-mux-gpmux.c
++++ b/drivers/i2c/muxes/i2c-mux-gpmux.c
+@@ -134,6 +134,7 @@ static int i2c_mux_probe(struct platform_device *pdev)
+ return 0;
+
+ err_children:
++ of_node_put(child);
+ i2c_mux_del_adapters(muxc);
+ err_parent:
+ i2c_put_adapter(parent);
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 907700d1e78eb..9515a3146dc97 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -911,16 +911,6 @@ static struct cpuidle_state adl_l_cstates[] __initdata = {
+ .enter = NULL }
+ };
+
+-/*
+- * On Sapphire Rapids Xeon C1 has to be disabled if C1E is enabled, and vice
+- * versa. On SPR C1E is enabled only if "C1E promotion" bit is set in
+- * MSR_IA32_POWER_CTL. But in this case there effectively no C1, because C1
+- * requests are promoted to C1E. If the "C1E promotion" bit is cleared, then
+- * both C1 and C1E requests end up with C1, so there is effectively no C1E.
+- *
+- * By default we enable C1 and disable C1E by marking it with
+- * 'CPUIDLE_FLAG_UNUSABLE'.
+- */
+ static struct cpuidle_state spr_cstates[] __initdata = {
+ {
+ .name = "C1",
+@@ -933,8 +923,7 @@ static struct cpuidle_state spr_cstates[] __initdata = {
+ {
+ .name = "C1E",
+ .desc = "MWAIT 0x01",
+- .flags = MWAIT2flg(0x01) | CPUIDLE_FLAG_ALWAYS_ENABLE |
+- CPUIDLE_FLAG_UNUSABLE,
++ .flags = MWAIT2flg(0x01) | CPUIDLE_FLAG_ALWAYS_ENABLE,
+ .exit_latency = 2,
+ .target_residency = 4,
+ .enter = &intel_idle,
+@@ -1756,17 +1745,6 @@ static void __init spr_idle_state_table_update(void)
+ {
+ unsigned long long msr;
+
+- /* Check if user prefers C1E over C1. */
+- if ((preferred_states_mask & BIT(2)) &&
+- !(preferred_states_mask & BIT(1))) {
+- /* Disable C1 and enable C1E. */
+- spr_cstates[0].flags |= CPUIDLE_FLAG_UNUSABLE;
+- spr_cstates[1].flags &= ~CPUIDLE_FLAG_UNUSABLE;
+-
+- /* Enable C1E using the "C1E promotion" bit. */
+- c1e_promotion = C1E_PROMOTION_ENABLE;
+- }
+-
+ /*
+ * By default, the C6 state assumes the worst-case scenario of package
+ * C6. However, if PC6 is disabled, we update the numbers to match
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index b53f010f3e403..35798712f8118 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -204,6 +204,8 @@ config BMA220
+ config BMA400
+ tristate "Bosch BMA400 3-Axis Accelerometer Driver"
+ select REGMAP
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ select BMA400_I2C if I2C
+ select BMA400_SPI if SPI
+ help
+diff --git a/drivers/iio/accel/adxl313_core.c b/drivers/iio/accel/adxl313_core.c
+index 9e4193e64765f..afeef779e1d08 100644
+--- a/drivers/iio/accel/adxl313_core.c
++++ b/drivers/iio/accel/adxl313_core.c
+@@ -46,7 +46,7 @@ EXPORT_SYMBOL_NS_GPL(adxl313_writable_regs_table, IIO_ADXL313);
+ struct adxl313_data {
+ struct regmap *regmap;
+ struct mutex lock; /* lock to protect transf_buf */
+- __le16 transf_buf ____cacheline_aligned;
++ __le16 transf_buf __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const int adxl313_odr_freqs[][2] = {
+diff --git a/drivers/iio/accel/adxl355_core.c b/drivers/iio/accel/adxl355_core.c
+index 7561399daef32..4bc648eac8b29 100644
+--- a/drivers/iio/accel/adxl355_core.c
++++ b/drivers/iio/accel/adxl355_core.c
+@@ -177,7 +177,7 @@ struct adxl355_data {
+ u8 buf[14];
+ s64 ts;
+ } buffer;
+- } ____cacheline_aligned;
++ } __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int adxl355_set_op_mode(struct adxl355_data *data,
+diff --git a/drivers/iio/accel/adxl367.c b/drivers/iio/accel/adxl367.c
+index 0289ed8cf2c6a..0168329ec5055 100644
+--- a/drivers/iio/accel/adxl367.c
++++ b/drivers/iio/accel/adxl367.c
+@@ -179,7 +179,7 @@ struct adxl367_state {
+ unsigned int fifo_set_size;
+ unsigned int fifo_watermark;
+
+- __be16 fifo_buf[ADXL367_FIFO_SIZE] ____cacheline_aligned;
++ __be16 fifo_buf[ADXL367_FIFO_SIZE] __aligned(IIO_DMA_MINALIGN);
+ __be16 sample_buf;
+ u8 act_threshold_buf[2];
+ u8 inact_time_buf[2];
+diff --git a/drivers/iio/accel/adxl367_spi.c b/drivers/iio/accel/adxl367_spi.c
+index 26dfc821ebbe0..118c894015a57 100644
+--- a/drivers/iio/accel/adxl367_spi.c
++++ b/drivers/iio/accel/adxl367_spi.c
+@@ -9,6 +9,8 @@
+ #include <linux/regmap.h>
+ #include <linux/spi/spi.h>
+
++#include <linux/iio/iio.h>
++
+ #include "adxl367.h"
+
+ #define ADXL367_SPI_WRITE_COMMAND 0x0A
+@@ -28,10 +30,10 @@ struct adxl367_spi_state {
+ struct spi_transfer fifo_xfer[2];
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
+- * transfer buffers to live in their own cache lines.
++ * DMA (thus cache coherency maintenance) may require the
++ * transfer buffers live in their own cache lines.
+ */
+- u8 reg_write_tx_buf[1] ____cacheline_aligned;
++ u8 reg_write_tx_buf[1] __aligned(IIO_DMA_MINALIGN);
+ u8 reg_read_tx_buf[2];
+ u8 fifo_tx_buf[1];
+ };
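All of the ____cacheline_aligned to __aligned(IIO_DMA_MINALIGN) conversions in this series follow one layout rule: DMA-bound buffers go last in the state struct, aligned to IIO_DMA_MINALIGN, so no cache line holding them is also dirtied by CPU-side fields while a transfer is in flight (and on many architectures the new macro is smaller than a full cache line, shrinking the struct). A generic sketch, assuming only the 5.19-era <linux/iio/iio.h> definition of IIO_DMA_MINALIGN:

#include <linux/iio/iio.h>      /* IIO_DMA_MINALIGN */
#include <linux/mutex.h>
#include <linux/types.h>

struct example_state {
        struct mutex lock;      /* CPU-side fields first */
        int scale;

        /*
         * DMA-visible buffers last: the alignment keeps them out of
         * cache lines the CPU may dirty during a transfer.
         */
        u8 tx_buf[4] __aligned(IIO_DMA_MINALIGN);
        u8 rx_buf[4];           /* trails tx_buf inside the safe region */
};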
+diff --git a/drivers/iio/accel/bma220_spi.c b/drivers/iio/accel/bma220_spi.c
+index 74024d7ce5ac2..b6d9ab8e2054e 100644
+--- a/drivers/iio/accel/bma220_spi.c
++++ b/drivers/iio/accel/bma220_spi.c
+@@ -67,7 +67,7 @@ struct bma220_data {
+ /* Ensure timestamp is naturally aligned. */
+ s64 timestamp __aligned(8);
+ } scan;
+- u8 tx_buf[2] ____cacheline_aligned;
++ u8 tx_buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec bma220_channels[] = {
+diff --git a/drivers/iio/accel/bma400.h b/drivers/iio/accel/bma400.h
+index c4c8d74155c2a..907e1a6c0a38a 100644
+--- a/drivers/iio/accel/bma400.h
++++ b/drivers/iio/accel/bma400.h
+@@ -62,6 +62,13 @@
+ #define BMA400_ACC_CONFIG2_REG 0x1b
+ #define BMA400_CMD_REG 0x7e
+
++/* Interrupt registers */
++#define BMA400_INT_CONFIG0_REG 0x1f
++#define BMA400_INT_CONFIG1_REG 0x20
++#define BMA400_INT1_MAP_REG 0x21
++#define BMA400_INT_IO_CTRL_REG 0x24
++#define BMA400_INT_DRDY_MSK BIT(7)
++
+ /* Chip ID of BMA 400 devices found in the chip ID register. */
+ #define BMA400_ID_REG_VAL 0x90
+
+@@ -83,8 +90,27 @@
+ #define BMA400_ACC_ODR_MIN_WHOLE_HZ 25
+ #define BMA400_ACC_ODR_MIN_HZ 12
+
+-#define BMA400_SCALE_MIN 38357
+-#define BMA400_SCALE_MAX 306864
++/*
++ * BMA400_SCALE_MIN is the scale for the +-2g range: the m/s^2 value
++ * of 1 LSB, expressed in micro units.
++ *
++ * For +-2g - 1 LSB = 0.976562 milli g = 0.009576 m/s^2
++ * For +-4g - 1 LSB = 1.953125 milli g = 0.019153 m/s^2
++ * For +-16g - 1 LSB = 7.8125 milli g = 0.076614 m/s^2
++ *
++ * The raw value which is used to select the different ranges is determined
++ * by the first bit set position from the scale value, so BMA400_SCALE_MIN
++ * should be odd.
++ *
++ * Scale values for +-2g, +-4g, +-8g and +-16g are populated into bma400_scales
++ * array by left shifting BMA400_SCALE_MIN.
++ * e.g.:
++ * To select +-2g = 9577 << 0 = raw value to write is 0.
++ * To select +-8g = 9577 << 2 = raw value to write is 2.
++ * To select +-16g = 9577 << 3 = raw value to write is 3.
++ */
++#define BMA400_SCALE_MIN 9577
++#define BMA400_SCALE_MAX 76617
+
+ #define BMA400_NUM_REGULATORS 2
+ #define BMA400_VDD_REGULATOR 0
+@@ -92,8 +118,7 @@
+
+ extern const struct regmap_config bma400_regmap_config;
+
+-int bma400_probe(struct device *dev, struct regmap *regmap, const char *name);
+-
+-void bma400_remove(struct device *dev);
++int bma400_probe(struct device *dev, struct regmap *regmap, int irq,
++ const char *name);
+
+ #endif
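The scale comment above encodes a small trick: because BMA400_SCALE_MIN is odd, each supported scale 9577 << n has its lowest set bit exactly at position n, so the range register value is simply the position of that bit. A hedged sketch (scale_to_range() is illustrative, not the driver's code, and the input must be one of the four table values):

#include <linux/bitops.h>

#define EXAMPLE_SCALE_MIN 9577  /* micro m/s^2 per LSB at +/-2g */

static unsigned int scale_to_range(unsigned long scale)
{
        /* 9577 -> 0 (+/-2g), 19154 -> 1, 38308 -> 2, 76616 -> 3 (+/-16g) */
        return __ffs(scale);    /* undefined for 0; caller validates */
}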
+diff --git a/drivers/iio/accel/bma400_core.c b/drivers/iio/accel/bma400_core.c
+index 043002fe6f633..837f8671e00db 100644
+--- a/drivers/iio/accel/bma400_core.c
++++ b/drivers/iio/accel/bma400_core.c
+@@ -11,16 +11,21 @@
+ * - Create channel for sensor time
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+-#include <linux/iio/iio.h>
+-#include <linux/iio/sysfs.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+
++#include <linux/iio/iio.h>
++#include <linux/iio/buffer.h>
++#include <linux/iio/trigger.h>
++#include <linux/iio/trigger_consumer.h>
++#include <linux/iio/triggered_buffer.h>
++
+ #include "bma400.h"
+
+ /*
+@@ -46,6 +51,13 @@ enum bma400_power_mode {
+ POWER_MODE_INVALID = 0x03,
+ };
+
++enum bma400_scan {
++ BMA400_ACCL_X,
++ BMA400_ACCL_Y,
++ BMA400_ACCL_Z,
++ BMA400_TEMP,
++};
++
+ struct bma400_sample_freq {
+ int hz;
+ int uhz;
+@@ -61,6 +73,14 @@ struct bma400_data {
+ struct bma400_sample_freq sample_freq;
+ int oversampling_ratio;
+ int scale;
++ struct iio_trigger *trig;
++ /* Correct time stamp alignment */
++ struct {
++ __le16 buff[3];
++ u8 temperature;
++ s64 ts __aligned(8);
++ } buffer __aligned(IIO_DMA_MINALIGN);
++ __le16 status;
+ };
+
+ static bool bma400_is_writable_reg(struct device *dev, unsigned int reg)
+@@ -152,7 +172,7 @@ static const struct iio_chan_spec_ext_info bma400_ext_info[] = {
+ { }
+ };
+
+-#define BMA400_ACC_CHANNEL(_axis) { \
++#define BMA400_ACC_CHANNEL(_index, _axis) { \
+ .type = IIO_ACCEL, \
+ .modified = 1, \
+ .channel2 = IIO_MOD_##_axis, \
+@@ -164,17 +184,32 @@ static const struct iio_chan_spec_ext_info bma400_ext_info[] = {
+ BIT(IIO_CHAN_INFO_SCALE) | \
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO), \
+ .ext_info = bma400_ext_info, \
++ .scan_index = _index, \
++ .scan_type = { \
++ .sign = 's', \
++ .realbits = 12, \
++ .storagebits = 16, \
++ .endianness = IIO_LE, \
++ }, \
+ }
+
+ static const struct iio_chan_spec bma400_channels[] = {
+- BMA400_ACC_CHANNEL(X),
+- BMA400_ACC_CHANNEL(Y),
+- BMA400_ACC_CHANNEL(Z),
++ BMA400_ACC_CHANNEL(0, X),
++ BMA400_ACC_CHANNEL(1, Y),
++ BMA400_ACC_CHANNEL(2, Z),
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SAMP_FREQ),
++ .scan_index = 3,
++ .scan_type = {
++ .sign = 's',
++ .realbits = 8,
++ .storagebits = 8,
++ .endianness = IIO_LE,
++ },
+ },
++ IIO_CHAN_SOFT_TIMESTAMP(4),
+ };
+
+ static int bma400_get_temp_reg(struct bma400_data *data, int *val, int *val2)
+@@ -560,6 +595,26 @@ static void bma400_init_tables(void)
+ }
+ }
+
++static void bma400_regulators_disable(void *data_ptr)
++{
++ struct bma400_data *data = data_ptr;
++
++ regulator_bulk_disable(ARRAY_SIZE(data->regulators), data->regulators);
++}
++
++static void bma400_power_disable(void *data_ptr)
++{
++ struct bma400_data *data = data_ptr;
++ int ret;
++
++ mutex_lock(&data->mutex);
++ ret = bma400_set_power_mode(data, POWER_MODE_SLEEP);
++ mutex_unlock(&data->mutex);
++ if (ret)
++ dev_warn(data->dev, "Failed to put device into sleep mode (%pe)\n",
++ ERR_PTR(ret));
++}
++
+ static int bma400_init(struct bma400_data *data)
+ {
+ unsigned int val;
+@@ -569,13 +624,12 @@ static int bma400_init(struct bma400_data *data)
+ ret = regmap_read(data->regmap, BMA400_CHIP_ID_REG, &val);
+ if (ret) {
+ dev_err(data->dev, "Failed to read chip id register\n");
+- goto out;
++ return ret;
+ }
+
+ if (val != BMA400_ID_REG_VAL) {
+ dev_err(data->dev, "Chip ID mismatch\n");
+- ret = -ENODEV;
+- goto out;
++ return -ENODEV;
+ }
+
+ data->regulators[BMA400_VDD_REGULATOR].supply = "vdd";
+@@ -589,27 +643,31 @@ static int bma400_init(struct bma400_data *data)
+ "Failed to get regulators: %d\n",
+ ret);
+
+- goto out;
++ return ret;
+ }
+ ret = regulator_bulk_enable(ARRAY_SIZE(data->regulators),
+ data->regulators);
+ if (ret) {
+ dev_err(data->dev, "Failed to enable regulators: %d\n",
+ ret);
+- goto out;
++ return ret;
+ }
+
++ ret = devm_add_action_or_reset(data->dev, bma400_regulators_disable, data);
++ if (ret)
++ return ret;
++
+ ret = bma400_get_power_mode(data);
+ if (ret) {
+ dev_err(data->dev, "Failed to get the initial power-mode\n");
+- goto err_reg_disable;
++ return ret;
+ }
+
+ if (data->power_mode != POWER_MODE_NORMAL) {
+ ret = bma400_set_power_mode(data, POWER_MODE_NORMAL);
+ if (ret) {
+ dev_err(data->dev, "Failed to wake up the device\n");
+- goto err_reg_disable;
++ return ret;
+ }
+ /*
+ * TODO: The datasheet waits 1500us here in the example, but
+@@ -618,20 +676,28 @@ static int bma400_init(struct bma400_data *data)
+ usleep_range(1500, 2000);
+ }
+
++ ret = devm_add_action_or_reset(data->dev, bma400_power_disable, data);
++ if (ret)
++ return ret;
++
+ bma400_init_tables();
+
+ ret = bma400_get_accel_output_data_rate(data);
+ if (ret)
+- goto err_reg_disable;
++ return ret;
+
+ ret = bma400_get_accel_oversampling_ratio(data);
+ if (ret)
+- goto err_reg_disable;
++ return ret;
+
+ ret = bma400_get_accel_scale(data);
+ if (ret)
+- goto err_reg_disable;
++ return ret;
+
++ /* Configure INT1 pin to open drain */
++ ret = regmap_write(data->regmap, BMA400_INT_IO_CTRL_REG, 0x06);
++ if (ret)
++ return ret;
+ /*
+ * Once the interrupt engine is supported we might use the
+ * data_src_reg, but for now ensure this is set to the
+@@ -639,12 +705,6 @@ static int bma400_init(struct bma400_data *data)
+ * channel.
+ */
+ return regmap_write(data->regmap, BMA400_ACC_CONFIG2_REG, 0x00);
+-
+-err_reg_disable:
+- regulator_bulk_disable(ARRAY_SIZE(data->regulators),
+- data->regulators);
+-out:
+- return ret;
+ }
+
+ static int bma400_read_raw(struct iio_dev *indio_dev,
+@@ -786,6 +846,31 @@ static int bma400_write_raw_get_fmt(struct iio_dev *indio_dev,
+ }
+ }
+
++static int bma400_data_rdy_trigger_set_state(struct iio_trigger *trig,
++ bool state)
++{
++ struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig);
++ struct bma400_data *data = iio_priv(indio_dev);
++ int ret;
++
++ ret = regmap_update_bits(data->regmap, BMA400_INT_CONFIG0_REG,
++ BMA400_INT_DRDY_MSK,
++ FIELD_PREP(BMA400_INT_DRDY_MSK, state));
++ if (ret)
++ return ret;
++
++ return regmap_update_bits(data->regmap, BMA400_INT1_MAP_REG,
++ BMA400_INT_DRDY_MSK,
++ FIELD_PREP(BMA400_INT_DRDY_MSK, state));
++}
++
++static const unsigned long bma400_avail_scan_masks[] = {
++ BIT(BMA400_ACCL_X) | BIT(BMA400_ACCL_Y) | BIT(BMA400_ACCL_Z),
++ BIT(BMA400_ACCL_X) | BIT(BMA400_ACCL_Y) | BIT(BMA400_ACCL_Z)
++ | BIT(BMA400_TEMP),
++ 0
++};
++
+ static const struct iio_info bma400_info = {
+ .read_raw = bma400_read_raw,
+ .read_avail = bma400_read_avail,
+@@ -793,7 +878,78 @@ static const struct iio_info bma400_info = {
+ .write_raw_get_fmt = bma400_write_raw_get_fmt,
+ };
+
+-int bma400_probe(struct device *dev, struct regmap *regmap, const char *name)
++static const struct iio_trigger_ops bma400_trigger_ops = {
++ .set_trigger_state = &bma400_data_rdy_trigger_set_state,
++ .validate_device = &iio_trigger_validate_own_device,
++};
++
++static irqreturn_t bma400_trigger_handler(int irq, void *p)
++{
++ struct iio_poll_func *pf = p;
++ struct iio_dev *indio_dev = pf->indio_dev;
++ struct bma400_data *data = iio_priv(indio_dev);
++ int ret, temp;
++
++ /* Lock to protect the data->buffer */
++ mutex_lock(&data->mutex);
++
++ /* bulk read six registers, with the base being the LSB register */
++ ret = regmap_bulk_read(data->regmap, BMA400_X_AXIS_LSB_REG,
++ &data->buffer.buff, sizeof(data->buffer.buff));
++ if (ret)
++ goto unlock_err;
++
++ if (test_bit(BMA400_TEMP, indio_dev->active_scan_mask)) {
++ ret = regmap_read(data->regmap, BMA400_TEMP_DATA_REG, &temp);
++ if (ret)
++ goto unlock_err;
++
++ data->buffer.temperature = temp;
++ }
++
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->buffer,
++ iio_get_time_ns(indio_dev));
++
++ mutex_unlock(&data->mutex);
++ iio_trigger_notify_done(indio_dev->trig);
++ return IRQ_HANDLED;
++
++unlock_err:
++ mutex_unlock(&data->mutex);
++ return IRQ_NONE;
++}
++
++static irqreturn_t bma400_interrupt(int irq, void *private)
++{
++ struct iio_dev *indio_dev = private;
++ struct bma400_data *data = iio_priv(indio_dev);
++ int ret;
++
++ /* Lock to protect the data->status */
++ mutex_lock(&data->mutex);
++ ret = regmap_bulk_read(data->regmap, BMA400_INT_STAT0_REG,
++ &data->status,
++ sizeof(data->status));
++ /*
++ * If none of the bits is set in the status register then it is a
++ * spurious interrupt.
++ */
++ if (ret || !data->status)
++ goto unlock_err;
++
++ if (FIELD_GET(BMA400_INT_DRDY_MSK, le16_to_cpu(data->status))) {
++ mutex_unlock(&data->mutex);
++ iio_trigger_poll_chained(data->trig);
++ return IRQ_HANDLED;
++ }
++
++unlock_err:
++ mutex_unlock(&data->mutex);
++ return IRQ_NONE;
++}
++
++int bma400_probe(struct device *dev, struct regmap *regmap, int irq,
++ const char *name)
+ {
+ struct iio_dev *indio_dev;
+ struct bma400_data *data;
+@@ -820,33 +976,43 @@ int bma400_probe(struct device *dev, struct regmap *regmap, const char *name)
+ indio_dev->info = &bma400_info;
+ indio_dev->channels = bma400_channels;
+ indio_dev->num_channels = ARRAY_SIZE(bma400_channels);
++ indio_dev->available_scan_masks = bma400_avail_scan_masks;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+
+- dev_set_drvdata(dev, indio_dev);
++ if (irq > 0) {
++ data->trig = devm_iio_trigger_alloc(dev, "%s-dev%d",
++ indio_dev->name,
++ iio_device_id(indio_dev));
++ if (!data->trig)
++ return -ENOMEM;
+
+- return iio_device_register(indio_dev);
+-}
+-EXPORT_SYMBOL_NS(bma400_probe, IIO_BMA400);
++ data->trig->ops = &bma400_trigger_ops;
++ iio_trigger_set_drvdata(data->trig, indio_dev);
+
+-void bma400_remove(struct device *dev)
+-{
+- struct iio_dev *indio_dev = dev_get_drvdata(dev);
+- struct bma400_data *data = iio_priv(indio_dev);
+- int ret;
+-
+- mutex_lock(&data->mutex);
+- ret = bma400_set_power_mode(data, POWER_MODE_SLEEP);
+- mutex_unlock(&data->mutex);
++ ret = devm_iio_trigger_register(data->dev, data->trig);
++ if (ret)
++ return dev_err_probe(data->dev, ret,
++ "iio trigger register fail\n");
++
++ indio_dev->trig = iio_trigger_get(data->trig);
++ ret = devm_request_threaded_irq(dev, irq, NULL,
++ &bma400_interrupt,
++ IRQF_TRIGGER_RISING | IRQF_ONESHOT,
++ indio_dev->name, indio_dev);
++ if (ret)
++ return dev_err_probe(data->dev, ret,
++ "request irq %d failed\n", irq);
++ }
+
++ ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
++ &bma400_trigger_handler, NULL);
+ if (ret)
+- dev_warn(dev, "Failed to put device into sleep mode (%pe)\n", ERR_PTR(ret));
++ return dev_err_probe(data->dev, ret,
++ "iio triggered buffer setup failed\n");
+
+- regulator_bulk_disable(ARRAY_SIZE(data->regulators),
+- data->regulators);
+-
+- iio_device_unregister(indio_dev);
++ return devm_iio_device_register(dev, indio_dev);
+ }
+-EXPORT_SYMBOL_NS(bma400_remove, IIO_BMA400);
++EXPORT_SYMBOL_NS(bma400_probe, IIO_BMA400);
+
+ MODULE_AUTHOR("Dan Robertson <dan@dlrobertson.com>");
+ MODULE_DESCRIPTION("Bosch BMA400 triaxial acceleration sensor core");
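The bma400 probe rework leans entirely on devm teardown ordering: regulator disable and sleep-mode entry are registered with devm_add_action_or_reset() during init, the IIO device is registered with devm_iio_device_register(), and since devm actions run in reverse registration order the device is unregistered before it is powered down, exactly what the deleted bma400_remove() did by hand. The core idiom, sketched:

#include <linux/device.h>
#include <linux/regulator/consumer.h>

static void example_reg_disable(void *p)
{
        regulator_disable(p);   /* runs on driver detach or probe failure */
}

static int example_enable(struct device *dev, struct regulator *reg)
{
        int ret;

        ret = regulator_enable(reg);
        if (ret)
                return ret;

        /* If registering the action fails, it is invoked immediately,
         * so the regulator can never leak. */
        return devm_add_action_or_reset(dev, example_reg_disable, reg);
}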
+diff --git a/drivers/iio/accel/bma400_i2c.c b/drivers/iio/accel/bma400_i2c.c
+index da104ffd3fe07..1ba2a982ea736 100644
+--- a/drivers/iio/accel/bma400_i2c.c
++++ b/drivers/iio/accel/bma400_i2c.c
+@@ -24,14 +24,7 @@ static int bma400_i2c_probe(struct i2c_client *client,
+ return PTR_ERR(regmap);
+ }
+
+- return bma400_probe(&client->dev, regmap, id->name);
+-}
+-
+-static int bma400_i2c_remove(struct i2c_client *client)
+-{
+- bma400_remove(&client->dev);
+-
+- return 0;
++ return bma400_probe(&client->dev, regmap, client->irq, id->name);
+ }
+
+ static const struct i2c_device_id bma400_i2c_ids[] = {
+@@ -52,7 +45,6 @@ static struct i2c_driver bma400_i2c_driver = {
+ .of_match_table = bma400_of_i2c_match,
+ },
+ .probe = bma400_i2c_probe,
+- .remove = bma400_i2c_remove,
+ .id_table = bma400_i2c_ids,
+ };
+
+diff --git a/drivers/iio/accel/bma400_spi.c b/drivers/iio/accel/bma400_spi.c
+index 51f23bdc0ea5f..ec13c044b3047 100644
+--- a/drivers/iio/accel/bma400_spi.c
++++ b/drivers/iio/accel/bma400_spi.c
+@@ -84,12 +84,7 @@ static int bma400_spi_probe(struct spi_device *spi)
+ if (ret)
+ dev_err(&spi->dev, "Failed to read chip id register\n");
+
+- return bma400_probe(&spi->dev, regmap, id->name);
+-}
+-
+-static void bma400_spi_remove(struct spi_device *spi)
+-{
+- bma400_remove(&spi->dev);
++ return bma400_probe(&spi->dev, regmap, spi->irq, id->name);
+ }
+
+ static const struct spi_device_id bma400_spi_ids[] = {
+@@ -110,7 +105,6 @@ static struct spi_driver bma400_spi_driver = {
+ .of_match_table = bma400_of_spi_match,
+ },
+ .probe = bma400_spi_probe,
+- .remove = bma400_spi_remove,
+ .id_table = bma400_spi_ids,
+ };
+
+diff --git a/drivers/iio/accel/cros_ec_accel_legacy.c b/drivers/iio/accel/cros_ec_accel_legacy.c
+index b6f3471b62dcf..3b77fded2dc07 100644
+--- a/drivers/iio/accel/cros_ec_accel_legacy.c
++++ b/drivers/iio/accel/cros_ec_accel_legacy.c
+@@ -215,7 +215,7 @@ static int cros_ec_accel_legacy_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ret = cros_ec_sensors_core_init(pdev, indio_dev, true,
+- cros_ec_sensors_capture, NULL);
++ cros_ec_sensors_capture);
+ if (ret)
+ return ret;
+
+@@ -235,7 +235,7 @@ static int cros_ec_accel_legacy_probe(struct platform_device *pdev)
+ state->sign[CROS_EC_SENSOR_Z] = -1;
+ }
+
+- return devm_iio_device_register(dev, indio_dev);
++ return cros_ec_sensors_core_register(dev, indio_dev, NULL);
+ }
+
+ static struct platform_driver cros_ec_accel_platform_driver = {
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index 29a68a7d34cda..cc0aa1dda611b 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -167,8 +167,8 @@ struct sca3000_state {
+ int mo_det_use_count;
+ struct mutex lock;
+ /* Can these share a cacheline ? */
+- u8 rx[384] ____cacheline_aligned;
+- u8 tx[6] ____cacheline_aligned;
++ u8 rx[384] __aligned(IIO_DMA_MINALIGN);
++ u8 tx[6] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /**
+diff --git a/drivers/iio/accel/sca3300.c b/drivers/iio/accel/sca3300.c
+index f7ef8ecfd34a6..39e0c24364aec 100644
+--- a/drivers/iio/accel/sca3300.c
++++ b/drivers/iio/accel/sca3300.c
+@@ -115,7 +115,7 @@ struct sca3300_data {
+ s16 channels[4];
+ s64 ts __aligned(sizeof(s64));
+ } scan;
+- u8 txbuf[4] ____cacheline_aligned;
++ u8 txbuf[4] __aligned(IIO_DMA_MINALIGN);
+ u8 rxbuf[4];
+ };
+
+diff --git a/drivers/iio/adc/ad7266.c b/drivers/iio/adc/ad7266.c
+index f20d39f0bc01b..468c2656d2be7 100644
+--- a/drivers/iio/adc/ad7266.c
++++ b/drivers/iio/adc/ad7266.c
+@@ -37,7 +37,7 @@ struct ad7266_state {
+ struct gpio_desc *gpios[3];
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * The buffer needs to be large enough to hold two samples (4 bytes) and
+ * the naturally aligned timestamp (8 bytes).
+@@ -45,7 +45,7 @@ struct ad7266_state {
+ struct {
+ __be16 sample[2];
+ s64 timestamp;
+- } data ____cacheline_aligned;
++ } data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad7266_wakeup(struct ad7266_state *st)
+diff --git a/drivers/iio/adc/ad7280a.c b/drivers/iio/adc/ad7280a.c
+index 3bdf3d9422f24..d4a4e15c82447 100644
+--- a/drivers/iio/adc/ad7280a.c
++++ b/drivers/iio/adc/ad7280a.c
+@@ -183,7 +183,7 @@ struct ad7280_state {
+ unsigned char cb_mask[AD7280A_MAX_CHAIN];
+ struct mutex lock; /* protect sensor state */
+
+- __be32 tx ____cacheline_aligned;
++ __be32 tx __aligned(IIO_DMA_MINALIGN);
+ __be32 rx;
+ };
+
+diff --git a/drivers/iio/adc/ad7292.c b/drivers/iio/adc/ad7292.c
+index 3271a31afde1c..92c68d467c505 100644
+--- a/drivers/iio/adc/ad7292.c
++++ b/drivers/iio/adc/ad7292.c
+@@ -80,7 +80,7 @@ struct ad7292_state {
+ struct regulator *reg;
+ unsigned short vref_mv;
+
+- __be16 d16 ____cacheline_aligned;
++ __be16 d16 __aligned(IIO_DMA_MINALIGN);
+ u8 d8[2];
+ };
+
+diff --git a/drivers/iio/adc/ad7298.c b/drivers/iio/adc/ad7298.c
+index 3f4e73f7d35a0..c0430f71f592a 100644
+--- a/drivers/iio/adc/ad7298.c
++++ b/drivers/iio/adc/ad7298.c
+@@ -49,7 +49,7 @@ struct ad7298_state {
+ * DMA (thus cache coherency maintenance) requires the
+ * transfer buffers to live in their own cache lines.
+ */
+- __be16 rx_buf[12] ____cacheline_aligned;
++ __be16 rx_buf[12] __aligned(IIO_DMA_MINALIGN);
+ __be16 tx_buf[2];
+ };
+
+diff --git a/drivers/iio/adc/ad7476.c b/drivers/iio/adc/ad7476.c
+index a1e8b32671cf6..94776f6962907 100644
+--- a/drivers/iio/adc/ad7476.c
++++ b/drivers/iio/adc/ad7476.c
+@@ -44,13 +44,12 @@ struct ad7476_state {
+ struct spi_transfer xfer;
+ struct spi_message msg;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * Make the buffer large enough for one 16 bit sample and one 64 bit
+ * aligned 64 bit timestamp.
+ */
+- unsigned char data[ALIGN(2, sizeof(s64)) + sizeof(s64)]
+- ____cacheline_aligned;
++ unsigned char data[ALIGN(2, sizeof(s64)) + sizeof(s64)] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad7476_supported_device_ids {
+diff --git a/drivers/iio/adc/ad7606.h b/drivers/iio/adc/ad7606.h
+index 4f82d7c9acfde..2dc4f599f9df9 100644
+--- a/drivers/iio/adc/ad7606.h
++++ b/drivers/iio/adc/ad7606.h
+@@ -116,11 +116,11 @@ struct ad7606_state {
+ struct completion completion;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * 16 * 16-bit samples + 64-bit timestamp
+ */
+- unsigned short data[20] ____cacheline_aligned;
++ unsigned short data[20] __aligned(IIO_DMA_MINALIGN);
+ __be16 d16[2];
+ };
+
+diff --git a/drivers/iio/adc/ad7766.c b/drivers/iio/adc/ad7766.c
+index 51ee9482e0df9..3079a0872947e 100644
+--- a/drivers/iio/adc/ad7766.c
++++ b/drivers/iio/adc/ad7766.c
+@@ -45,13 +45,12 @@ struct ad7766 {
+ struct spi_message msg;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * Make the buffer large enough for one 24 bit sample and one 64 bit
+ * aligned 64 bit timestamp.
+ */
+- unsigned char data[ALIGN(3, sizeof(s64)) + sizeof(s64)]
+- ____cacheline_aligned;
++ unsigned char data[ALIGN(3, sizeof(s64)) + sizeof(s64)] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /*
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index aa42ba759fa1a..60f394da46401 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -163,7 +163,7 @@ struct ad7768_state {
+ struct gpio_desc *gpio_sync_in;
+ const char *labels[ARRAY_SIZE(ad7768_channels)];
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+@@ -173,7 +173,7 @@ struct ad7768_state {
+ } scan;
+ __be32 d32;
+ u8 d8[2];
+- } data ____cacheline_aligned;
++ } data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad7768_spi_reg_read(struct ad7768_state *st, unsigned int addr,
+diff --git a/drivers/iio/adc/ad7887.c b/drivers/iio/adc/ad7887.c
+index f64999714a4da..965bdc8aa6961 100644
+--- a/drivers/iio/adc/ad7887.c
++++ b/drivers/iio/adc/ad7887.c
+@@ -66,13 +66,12 @@ struct ad7887_state {
+ unsigned char tx_cmd_buf[4];
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * Buffer needs to be large enough to hold two 16 bit samples and a
+ * 64 bit aligned 64 bit timestamp.
+ */
+- unsigned char data[ALIGN(4, sizeof(s64)) + sizeof(s64)]
+- ____cacheline_aligned;
++ unsigned char data[ALIGN(4, sizeof(s64)) + sizeof(s64)] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad7887_supported_device_ids {
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index 069b561ee7689..edad1f30121dd 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -57,12 +57,12 @@ struct ad7923_state {
+ unsigned int settings;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp
+ * Length = 8 channels + 4 extra for 8 byte timestamp
+ */
+- __be16 rx_buf[12] ____cacheline_aligned;
++ __be16 rx_buf[12] __aligned(IIO_DMA_MINALIGN);
+ __be16 tx_buf[4];
+ };
+
+diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
+index 44bb5fde83de0..ed4c1656ca75d 100644
+--- a/drivers/iio/adc/ad7949.c
++++ b/drivers/iio/adc/ad7949.c
+@@ -86,7 +86,7 @@ struct ad7949_adc_chip {
+ u8 resolution;
+ u16 cfg;
+ unsigned int current_channel;
+- u16 buffer ____cacheline_aligned;
++ u16 buffer __aligned(IIO_DMA_MINALIGN);
+ __be16 buf8b;
+ };
+
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index a9e655e69eaa2..8ffabdaf841ea 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -84,7 +84,8 @@ void *adi_axi_adc_conv_priv(struct adi_axi_adc_conv *conv)
+ {
+ struct adi_axi_adc_client *cl = conv_to_client(conv);
+
+- return (char *)cl + ALIGN(sizeof(struct adi_axi_adc_client), IIO_ALIGN);
++ return (char *)cl + ALIGN(sizeof(struct adi_axi_adc_client),
++ IIO_DMA_MINALIGN);
+ }
+ EXPORT_SYMBOL_GPL(adi_axi_adc_conv_priv);
+
+@@ -169,9 +170,9 @@ static struct adi_axi_adc_conv *adi_axi_adc_conv_register(struct device *dev,
+ struct adi_axi_adc_client *cl;
+ size_t alloc_size;
+
+- alloc_size = ALIGN(sizeof(struct adi_axi_adc_client), IIO_ALIGN);
++ alloc_size = ALIGN(sizeof(struct adi_axi_adc_client), IIO_DMA_MINALIGN);
+ if (sizeof_priv)
+- alloc_size += ALIGN(sizeof_priv, IIO_ALIGN);
++ alloc_size += ALIGN(sizeof_priv, IIO_DMA_MINALIGN);
+
+ cl = kzalloc(alloc_size, GFP_KERNEL);
+ if (!cl)
+diff --git a/drivers/iio/adc/hi8435.c b/drivers/iio/adc/hi8435.c
+index 8eb0140df133a..771fa12bdc026 100644
+--- a/drivers/iio/adc/hi8435.c
++++ b/drivers/iio/adc/hi8435.c
+@@ -49,7 +49,7 @@ struct hi8435_priv {
+
+ unsigned threshold_lo[2]; /* GND-Open and Supply-Open thresholds */
+ unsigned threshold_hi[2]; /* GND-Open and Supply-Open thresholds */
+- u8 reg_buffer[3] ____cacheline_aligned;
++ u8 reg_buffer[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int hi8435_readb(struct hi8435_priv *priv, u8 reg, u8 *val)
+diff --git a/drivers/iio/adc/ltc2496.c b/drivers/iio/adc/ltc2496.c
+index 5a55f79f25749..dfb3bb5997e57 100644
+--- a/drivers/iio/adc/ltc2496.c
++++ b/drivers/iio/adc/ltc2496.c
+@@ -24,10 +24,10 @@ struct ltc2496_driverdata {
+ struct spi_device *spi;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- unsigned char rxbuf[3] ____cacheline_aligned;
++ unsigned char rxbuf[3] __aligned(IIO_DMA_MINALIGN);
+ unsigned char txbuf[3];
+ };
+
+diff --git a/drivers/iio/adc/ltc2497.c b/drivers/iio/adc/ltc2497.c
+index 1adddf5a88a94..f7c786f37ceb1 100644
+--- a/drivers/iio/adc/ltc2497.c
++++ b/drivers/iio/adc/ltc2497.c
+@@ -20,10 +20,10 @@ struct ltc2497_driverdata {
+ struct ltc2497core_driverdata common_ddata;
+ struct i2c_client *client;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- __be32 buf ____cacheline_aligned;
++ __be32 buf __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ltc2497_result_and_measure(struct ltc2497core_driverdata *ddata,
+diff --git a/drivers/iio/adc/max1027.c b/drivers/iio/adc/max1027.c
+index 4daf1d576c4ee..136fcf753837c 100644
+--- a/drivers/iio/adc/max1027.c
++++ b/drivers/iio/adc/max1027.c
+@@ -272,7 +272,7 @@ struct max1027_state {
+ struct mutex lock;
+ struct completion complete;
+
+- u8 reg ____cacheline_aligned;
++ u8 reg __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int max1027_wait_eoc(struct iio_dev *indio_dev)
+@@ -349,8 +349,7 @@ static int max1027_read_single_value(struct iio_dev *indio_dev,
+ if (ret < 0) {
+ dev_err(&indio_dev->dev,
+ "Failed to configure conversion register\n");
+- iio_device_release_direct_mode(indio_dev);
+- return ret;
++ goto release;
+ }
+
+ /*
+@@ -360,11 +359,12 @@ static int max1027_read_single_value(struct iio_dev *indio_dev,
+ */
+ ret = max1027_wait_eoc(indio_dev);
+ if (ret)
+- return ret;
++ goto release;
+
+ /* Read result */
+ ret = spi_read(st->spi, st->buffer, (chan->type == IIO_TEMP) ? 4 : 2);
+
++release:
+ iio_device_release_direct_mode(indio_dev);
+
+ if (ret < 0)
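
Beyond the alignment change, the max1027 hunks fix leaked claims on the error paths: iio_device_claim_direct_mode() is taken earlier in this function, but the EOC-wait failure path returned without the matching release. Funnelling every exit through a single label is the usual kernel idiom; a condensed sketch of the shape the function converges on (the conversion helper is illustrative):

    ret = iio_device_claim_direct_mode(indio_dev);
    if (ret)
            return ret;

    ret = do_single_conversion(indio_dev);  /* any failure falls through */

    /* single exit: the release now runs on success and failure alike */
    iio_device_release_direct_mode(indio_dev);
    return ret;
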
+diff --git a/drivers/iio/adc/max11100.c b/drivers/iio/adc/max11100.c
+index eb1ce6a0315c5..49e38dca8fe2c 100644
+--- a/drivers/iio/adc/max11100.c
++++ b/drivers/iio/adc/max11100.c
+@@ -33,10 +33,10 @@ struct max11100_state {
+ struct spi_device *spi;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- u8 buffer[3] ____cacheline_aligned;
++ u8 buffer[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec max11100_channels[] = {
+diff --git a/drivers/iio/adc/max1118.c b/drivers/iio/adc/max1118.c
+index a41bc570be210..75ab57d9aef74 100644
+--- a/drivers/iio/adc/max1118.c
++++ b/drivers/iio/adc/max1118.c
+@@ -42,7 +42,7 @@ struct max1118 {
+ s64 ts __aligned(8);
+ } scan;
+
+- u8 data ____cacheline_aligned;
++ u8 data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define MAX1118_CHANNEL(ch) \
+diff --git a/drivers/iio/adc/max1241.c b/drivers/iio/adc/max1241.c
+index a5afd84af58b9..a815ad1f6913b 100644
+--- a/drivers/iio/adc/max1241.c
++++ b/drivers/iio/adc/max1241.c
+@@ -26,7 +26,7 @@ struct max1241 {
+ struct regulator *vref;
+ struct gpio_desc *shutdown;
+
+- __be16 data ____cacheline_aligned;
++ __be16 data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec max1241_channels[] = {
+diff --git a/drivers/iio/adc/mcp320x.c b/drivers/iio/adc/mcp320x.c
+index b4c69acb33e34..f3b81798b3c93 100644
+--- a/drivers/iio/adc/mcp320x.c
++++ b/drivers/iio/adc/mcp320x.c
+@@ -92,7 +92,7 @@ struct mcp320x {
+ struct mutex lock;
+ const struct mcp320x_chip_info *chip_info;
+
+- u8 tx_buf ____cacheline_aligned;
++ u8 tx_buf __aligned(IIO_DMA_MINALIGN);
+ u8 rx_buf[4];
+ };
+
+diff --git a/drivers/iio/adc/ti-adc0832.c b/drivers/iio/adc/ti-adc0832.c
+index fb5e72600b968..b11ce555ba3b9 100644
+--- a/drivers/iio/adc/ti-adc0832.c
++++ b/drivers/iio/adc/ti-adc0832.c
+@@ -36,7 +36,7 @@ struct adc0832 {
+ */
+ u8 data[24] __aligned(8);
+
+- u8 tx_buf[2] ____cacheline_aligned;
++ u8 tx_buf[2] __aligned(IIO_DMA_MINALIGN);
+ u8 rx_buf[2];
+ };
+
+diff --git a/drivers/iio/adc/ti-adc084s021.c b/drivers/iio/adc/ti-adc084s021.c
+index c9b5d9aec3dc4..1f6e53832e062 100644
+--- a/drivers/iio/adc/ti-adc084s021.c
++++ b/drivers/iio/adc/ti-adc084s021.c
+@@ -32,10 +32,10 @@ struct adc084s021 {
+ s64 ts __aligned(8);
+ } scan;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache line.
+ */
+- u16 tx_buf[4] ____cacheline_aligned;
++ u16 tx_buf[4] __aligned(IIO_DMA_MINALIGN);
+ __be16 rx_buf[5]; /* First 16-bits are trash */
+ };
+
+diff --git a/drivers/iio/adc/ti-adc108s102.c b/drivers/iio/adc/ti-adc108s102.c
+index c8e48881c37f9..c82a161630e1d 100644
+--- a/drivers/iio/adc/ti-adc108s102.c
++++ b/drivers/iio/adc/ti-adc108s102.c
+@@ -77,8 +77,8 @@ struct adc108s102_state {
+ * tx_buf: 8 channel read commands, plus 1 dummy command
+ * rx_buf: 1 dummy response, 8 channel responses
+ */
+- __be16 rx_buf[9] ____cacheline_aligned;
+- __be16 tx_buf[9] ____cacheline_aligned;
++ __be16 rx_buf[9] __aligned(IIO_DMA_MINALIGN);
++ __be16 tx_buf[9] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define ADC108S102_V_CHAN(index) \
+diff --git a/drivers/iio/adc/ti-adc12138.c b/drivers/iio/adc/ti-adc12138.c
+index 59d75d09604f3..c0a72d72f3a99 100644
+--- a/drivers/iio/adc/ti-adc12138.c
++++ b/drivers/iio/adc/ti-adc12138.c
+@@ -55,7 +55,7 @@ struct adc12138 {
+ */
+ __be16 data[20] __aligned(8);
+
+- u8 tx_buf[2] ____cacheline_aligned;
++ u8 tx_buf[2] __aligned(IIO_DMA_MINALIGN);
+ u8 rx_buf[2];
+ };
+
+diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c
+index 8e7adec877555..622fd384983c7 100644
+--- a/drivers/iio/adc/ti-adc128s052.c
++++ b/drivers/iio/adc/ti-adc128s052.c
+@@ -29,7 +29,7 @@ struct adc128 {
+ struct regulator *reg;
+ struct mutex lock;
+
+- u8 buffer[2] ____cacheline_aligned;
++ u8 buffer[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int adc128_adc_conversion(struct adc128 *adc, u8 channel)
+diff --git a/drivers/iio/adc/ti-adc161s626.c b/drivers/iio/adc/ti-adc161s626.c
+index 75ca7f1c87264..b789891dcf490 100644
+--- a/drivers/iio/adc/ti-adc161s626.c
++++ b/drivers/iio/adc/ti-adc161s626.c
+@@ -71,7 +71,7 @@ struct ti_adc_data {
+ u8 read_size;
+ u8 shift;
+
+- u8 buffer[16] ____cacheline_aligned;
++ u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ti_adc_read_measurement(struct ti_adc_data *data,
+diff --git a/drivers/iio/adc/ti-ads124s08.c b/drivers/iio/adc/ti-ads124s08.c
+index 767b3b6348092..64833156c1998 100644
+--- a/drivers/iio/adc/ti-ads124s08.c
++++ b/drivers/iio/adc/ti-ads124s08.c
+@@ -106,7 +106,7 @@ struct ads124s_private {
+ * timestamp is maintained.
+ */
+ u32 buffer[ADS124S08_MAX_CHANNELS + sizeof(s64)/sizeof(u32)] __aligned(8);
+- u8 data[5] ____cacheline_aligned;
++ u8 data[5] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define ADS124S08_CHAN(index) \
+diff --git a/drivers/iio/adc/ti-ads131e08.c b/drivers/iio/adc/ti-ads131e08.c
+index 80a09817c1194..32237cacc9a37 100644
+--- a/drivers/iio/adc/ti-ads131e08.c
++++ b/drivers/iio/adc/ti-ads131e08.c
+@@ -105,7 +105,7 @@ struct ads131e08_state {
+ s64 ts __aligned(8);
+ } tmp_buf;
+
+- u8 tx_buf[3] ____cacheline_aligned;
++ u8 tx_buf[3] __aligned(IIO_DMA_MINALIGN);
+ /*
+ * Add extra one padding byte to be able to access the last channel
+ * value using u32 pointer
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index e3658b969c5bf..2cc9a9bd9db60 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -102,11 +102,11 @@ struct ti_ads7950_state {
+ unsigned int gpio_cmd_settings_bitmask;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ u16 rx_buf[TI_ADS7950_MAX_CHAN + 2 + TI_ADS7950_TIMESTAMP_SIZE]
+- ____cacheline_aligned;
++ __aligned(IIO_DMA_MINALIGN);
+ u16 tx_buf[TI_ADS7950_MAX_CHAN + 2];
+ u16 single_tx;
+ u16 single_rx;
+diff --git a/drivers/iio/adc/ti-ads8344.c b/drivers/iio/adc/ti-ads8344.c
+index c96d2a9ba9247..bbd85cb47f816 100644
+--- a/drivers/iio/adc/ti-ads8344.c
++++ b/drivers/iio/adc/ti-ads8344.c
+@@ -28,7 +28,7 @@ struct ads8344 {
+ */
+ struct mutex lock;
+
+- u8 tx_buf ____cacheline_aligned;
++ u8 tx_buf __aligned(IIO_DMA_MINALIGN);
+ u8 rx_buf[3];
+ };
+
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 708cca0a63be1..ef06a897421ac 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -71,7 +71,7 @@ struct ads8688_state {
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ads8688_id {
+diff --git a/drivers/iio/adc/ti-tlc4541.c b/drivers/iio/adc/ti-tlc4541.c
+index 2406eda9dfc6a..30f629a553a14 100644
+--- a/drivers/iio/adc/ti-tlc4541.c
++++ b/drivers/iio/adc/ti-tlc4541.c
+@@ -37,12 +37,12 @@ struct tlc4541_state {
+ struct spi_message scan_single_msg;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * 2 bytes data + 6 bytes padding + 8 bytes timestamp when
+ * call iio_push_to_buffers_with_timestamp.
+ */
+- __be16 rx_buf[8] ____cacheline_aligned;
++ __be16 rx_buf[8] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ struct tlc4541_chip_info {
+diff --git a/drivers/iio/addac/ad74413r.c b/drivers/iio/addac/ad74413r.c
+index acd230a6af35a..6a66d7a65db79 100644
+--- a/drivers/iio/addac/ad74413r.c
++++ b/drivers/iio/addac/ad74413r.c
+@@ -77,13 +77,13 @@ struct ad74413r_state {
+ struct spi_transfer adc_samples_xfer[AD74413R_CHANNEL_MAX + 1];
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ struct {
+ u8 rx_buf[AD74413R_FRAME_SIZE * AD74413R_CHANNEL_MAX];
+ s64 timestamp;
+- } adc_samples_buf ____cacheline_aligned;
++ } adc_samples_buf __aligned(IIO_DMA_MINALIGN);
+
+ u8 adc_samples_tx_buf[AD74413R_FRAME_SIZE * AD74413R_CHANNEL_MAX];
+ u8 reg_tx_buf[AD74413R_FRAME_SIZE];
+diff --git a/drivers/iio/amplifiers/ad8366.c b/drivers/iio/amplifiers/ad8366.c
+index 1134ae12e5319..f2c2ea79a07f3 100644
+--- a/drivers/iio/amplifiers/ad8366.c
++++ b/drivers/iio/amplifiers/ad8366.c
+@@ -45,10 +45,10 @@ struct ad8366_state {
+ enum ad8366_type type;
+ struct ad8366_info *info;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- unsigned char data[2] ____cacheline_aligned;
++ unsigned char data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static struct ad8366_info ad8366_infos[] = {
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_lid_angle.c b/drivers/iio/common/cros_ec_sensors/cros_ec_lid_angle.c
+index af801e203623e..02d3cf36acb0c 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_lid_angle.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_lid_angle.c
+@@ -97,7 +97,7 @@ static int cros_ec_lid_angle_probe(struct platform_device *pdev)
+ if (!indio_dev)
+ return -ENOMEM;
+
+- ret = cros_ec_sensors_core_init(pdev, indio_dev, false, NULL, NULL);
++ ret = cros_ec_sensors_core_init(pdev, indio_dev, false, NULL);
+ if (ret)
+ return ret;
+
+@@ -113,7 +113,7 @@ static int cros_ec_lid_angle_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- return devm_iio_device_register(dev, indio_dev);
++ return cros_ec_sensors_core_register(dev, indio_dev, NULL);
+ }
+
+ static const struct platform_device_id cros_ec_lid_angle_ids[] = {
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
+index 376a5b30010ae..5cce34fdff022 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
+@@ -235,8 +235,7 @@ static int cros_ec_sensors_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ret = cros_ec_sensors_core_init(pdev, indio_dev, true,
+- cros_ec_sensors_capture,
+- cros_ec_sensors_push_data);
++ cros_ec_sensors_capture);
+ if (ret)
+ return ret;
+
+@@ -297,7 +296,8 @@ static int cros_ec_sensors_probe(struct platform_device *pdev)
+ else
+ state->core.read_ec_sensors_data = cros_ec_sensors_read_cmd;
+
+- return devm_iio_device_register(dev, indio_dev);
++ return cros_ec_sensors_core_register(dev, indio_dev,
++ cros_ec_sensors_push_data);
+ }
+
+ static const struct platform_device_id cros_ec_sensors_ids[] = {
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+index 5976aca48e3bd..310d1511f3762 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+@@ -234,21 +234,18 @@ static void cros_ec_sensors_core_clean(void *arg)
+
+ /**
+ * cros_ec_sensors_core_init() - basic initialization of the core structure
+- * @pdev: platform device created for the sensors
++ * @pdev: platform device created for the sensor
+ * @indio_dev: iio device structure of the device
+ * @physical_device: true if the device refers to a physical device
+ * @trigger_capture: function pointer to call buffer is triggered,
+ * for backward compatibility.
+- * @push_data: function to call when cros_ec_sensorhub receives
+- * a sample for that sensor.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+ int cros_ec_sensors_core_init(struct platform_device *pdev,
+ struct iio_dev *indio_dev,
+ bool physical_device,
+- cros_ec_sensors_capture_t trigger_capture,
+- cros_ec_sensorhub_push_data_cb_t push_data)
++ cros_ec_sensors_capture_t trigger_capture)
+ {
+ struct device *dev = &pdev->dev;
+ struct cros_ec_sensors_core_state *state = iio_priv(indio_dev);
+@@ -338,17 +335,6 @@ int cros_ec_sensors_core_init(struct platform_device *pdev,
+ if (ret)
+ return ret;
+
+- ret = cros_ec_sensorhub_register_push_data(
+- sensor_hub, sensor_platform->sensor_num,
+- indio_dev, push_data);
+- if (ret)
+- return ret;
+-
+- ret = devm_add_action_or_reset(
+- dev, cros_ec_sensors_core_clean, pdev);
+- if (ret)
+- return ret;
+-
+ /* Timestamp coming from FIFO are in ns since boot. */
+ ret = iio_device_set_clock(indio_dev, CLOCK_BOOTTIME);
+ if (ret)
+@@ -370,6 +356,46 @@ int cros_ec_sensors_core_init(struct platform_device *pdev,
+ }
+ EXPORT_SYMBOL_GPL(cros_ec_sensors_core_init);
+
++/**
++ * cros_ec_sensors_core_register() - Register callback to FIFO and IIO when
++ * sensor is ready.
++ * It must be called at the end of the sensor probe routine.
++ * @dev: device created for the sensor
++ * @indio_dev: iio device structure of the device
++ * @push_data: function to call when cros_ec_sensorhub receives
++ * a sample for that sensor.
++ *
++ * Return: 0 on success, -errno on failure.
++ */
++int cros_ec_sensors_core_register(struct device *dev,
++ struct iio_dev *indio_dev,
++ cros_ec_sensorhub_push_data_cb_t push_data)
++{
++ struct cros_ec_sensor_platform *sensor_platform = dev_get_platdata(dev);
++ struct cros_ec_sensorhub *sensor_hub = dev_get_drvdata(dev->parent);
++ struct platform_device *pdev = to_platform_device(dev);
++ struct cros_ec_dev *ec = sensor_hub->ec;
++ int ret;
++
++ ret = devm_iio_device_register(dev, indio_dev);
++ if (ret)
++ return ret;
++
++ if (!push_data ||
++ !cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO))
++ return 0;
++
++ ret = cros_ec_sensorhub_register_push_data(
++ sensor_hub, sensor_platform->sensor_num,
++ indio_dev, push_data);
++ if (ret)
++ return ret;
++
++ return devm_add_action_or_reset(
++ dev, cros_ec_sensors_core_clean, pdev);
++}
++EXPORT_SYMBOL_GPL(cros_ec_sensors_core_register);
++
+ /**
+ * cros_ec_motion_send_host_cmd() - send motion sense host command
+ * @state: pointer to state information for device
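
The cros_ec refactor above splits what used to be one call into an init/register pair: cros_ec_sensors_core_init() no longer wires up the sensorhub push_data callback, and the new cros_ec_sensors_core_register() both registers the IIO device and, only when the EC advertises EC_FEATURE_MOTION_SENSE_FIFO, installs the callback, so a sample can never be pushed at an IIO device that is not yet registered. A sketch of a sensor probe under the new API; only the two core_* calls and their arguments are taken from this patch, the rest is illustrative:

    static int example_cros_ec_probe(struct platform_device *pdev)
    {
            struct iio_dev *indio_dev;
            int ret;

            indio_dev = devm_iio_device_alloc(&pdev->dev,
                            sizeof(struct cros_ec_sensors_core_state));
            if (!indio_dev)
                    return -ENOMEM;

            ret = cros_ec_sensors_core_init(pdev, indio_dev, true,
                                            cros_ec_sensors_capture);
            if (ret)
                    return ret;

            /* ... channel table, info callbacks, buffer setup ... */

            /* must be the final step of probe */
            return cros_ec_sensors_core_register(&pdev->dev, indio_dev,
                                                 cros_ec_sensors_push_data);
    }
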
+diff --git a/drivers/iio/common/ssp_sensors/ssp.h b/drivers/iio/common/ssp_sensors/ssp.h
+index abb8327956194..f649cdecc2774 100644
+--- a/drivers/iio/common/ssp_sensors/ssp.h
++++ b/drivers/iio/common/ssp_sensors/ssp.h
+@@ -221,8 +221,7 @@ struct ssp_data {
+ struct iio_dev *sensor_devs[SSP_SENSOR_MAX];
+ atomic_t enable_refcount;
+
+- __le16 header_buffer[SSP_HEADER_BUFFER_SIZE / sizeof(__le16)]
+- ____cacheline_aligned;
++ __le16 header_buffer[SSP_HEADER_BUFFER_SIZE / sizeof(__le16)] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ void ssp_clean_pending_list(struct ssp_data *data);
+diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
+index d87cf14daabe4..4447b88118270 100644
+--- a/drivers/iio/dac/ad5064.c
++++ b/drivers/iio/dac/ad5064.c
+@@ -115,13 +115,13 @@ struct ad5064_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+ u8 i2c[3];
+ __be32 spi;
+- } data ____cacheline_aligned;
++ } data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5064_type {
+diff --git a/drivers/iio/dac/ad5360.c b/drivers/iio/dac/ad5360.c
+index 22b000a408286..e0b7f658d6119 100644
+--- a/drivers/iio/dac/ad5360.c
++++ b/drivers/iio/dac/ad5360.c
+@@ -79,13 +79,13 @@ struct ad5360_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5360_type {
+diff --git a/drivers/iio/dac/ad5421.c b/drivers/iio/dac/ad5421.c
+index eedf661d32b2d..7644acfd879e0 100644
+--- a/drivers/iio/dac/ad5421.c
++++ b/drivers/iio/dac/ad5421.c
+@@ -72,13 +72,13 @@ struct ad5421_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_event_spec ad5421_current_event[] = {
+diff --git a/drivers/iio/dac/ad5449.c b/drivers/iio/dac/ad5449.c
+index bad9bdaafa94d..4572d6f49275f 100644
+--- a/drivers/iio/dac/ad5449.c
++++ b/drivers/iio/dac/ad5449.c
+@@ -68,10 +68,10 @@ struct ad5449 {
+ uint16_t dac_cache[AD5449_MAX_CHANNELS];
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- __be16 data[2] ____cacheline_aligned;
++ __be16 data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5449_type {
+diff --git a/drivers/iio/dac/ad5504.c b/drivers/iio/dac/ad5504.c
+index a0817e799cc07..e6c5be728bb21 100644
+--- a/drivers/iio/dac/ad5504.c
++++ b/drivers/iio/dac/ad5504.c
+@@ -54,7 +54,7 @@ struct ad5504_state {
+ unsigned pwr_down_mask;
+ unsigned pwr_down_mode;
+
+- __be16 data[2] ____cacheline_aligned;
++ __be16 data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /*
+diff --git a/drivers/iio/dac/ad5592r-base.h b/drivers/iio/dac/ad5592r-base.h
+index 2a22ef6919965..cc7be426cbc88 100644
+--- a/drivers/iio/dac/ad5592r-base.h
++++ b/drivers/iio/dac/ad5592r-base.h
+@@ -14,6 +14,8 @@
+ #include <linux/mutex.h>
+ #include <linux/gpio/driver.h>
+
++#include <linux/iio/iio.h>
++
+ struct device;
+ struct ad5592r_state;
+
+@@ -65,7 +67,7 @@ struct ad5592r_state {
+ u8 gpio_in;
+ u8 gpio_val;
+
+- __be16 spi_msg ____cacheline_aligned;
++ __be16 spi_msg __aligned(IIO_DMA_MINALIGN);
+ __be16 spi_msg_nop;
+ };
+
+diff --git a/drivers/iio/dac/ad5686.h b/drivers/iio/dac/ad5686.h
+index cd5fff9e9d537..b7ade3a6b9b6c 100644
+--- a/drivers/iio/dac/ad5686.h
++++ b/drivers/iio/dac/ad5686.h
+@@ -13,6 +13,8 @@
+ #include <linux/mutex.h>
+ #include <linux/kernel.h>
+
++#include <linux/iio/iio.h>
++
+ #define AD5310_CMD(x) ((x) << 12)
+
+ #define AD5683_DATA(x) ((x) << 4)
+@@ -137,7 +139,7 @@ struct ad5686_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+
+@@ -145,7 +147,7 @@ struct ad5686_state {
+ __be32 d32;
+ __be16 d16;
+ u8 d8[4];
+- } data[3] ____cacheline_aligned;
++ } data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+
+diff --git a/drivers/iio/dac/ad5755.c b/drivers/iio/dac/ad5755.c
+index 1a63b8456725f..beadfa938d2da 100644
+--- a/drivers/iio/dac/ad5755.c
++++ b/drivers/iio/dac/ad5755.c
+@@ -189,14 +189,14 @@ struct ad5755_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5755_type {
+diff --git a/drivers/iio/dac/ad5761.c b/drivers/iio/dac/ad5761.c
+index 4cb8471db81e0..6aa1a068adb06 100644
+--- a/drivers/iio/dac/ad5761.c
++++ b/drivers/iio/dac/ad5761.c
+@@ -70,13 +70,13 @@ struct ad5761_state {
+ enum ad5761_voltage_range range;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[3] ____cacheline_aligned;
++ } data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct ad5761_range_params ad5761_range_params[] = {
+diff --git a/drivers/iio/dac/ad5764.c b/drivers/iio/dac/ad5764.c
+index d235a8047ba0c..26c049d5b73a5 100644
+--- a/drivers/iio/dac/ad5764.c
++++ b/drivers/iio/dac/ad5764.c
+@@ -56,13 +56,13 @@ struct ad5764_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5764_type {
+diff --git a/drivers/iio/dac/ad5766.c b/drivers/iio/dac/ad5766.c
+index 43189af2fb1f3..899894523752f 100644
+--- a/drivers/iio/dac/ad5766.c
++++ b/drivers/iio/dac/ad5766.c
+@@ -123,7 +123,7 @@ struct ad5766_state {
+ u32 d32;
+ u16 w16[2];
+ u8 b8[4];
+- } data[3] ____cacheline_aligned;
++ } data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ struct ad5766_span_tbl {
+diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c
+index 7e2fd32e993a6..f66d67402e436 100644
+--- a/drivers/iio/dac/ad5770r.c
++++ b/drivers/iio/dac/ad5770r.c
+@@ -140,7 +140,7 @@ struct ad5770r_state {
+ bool ch_pwr_down[AD5770R_MAX_CHANNELS];
+ bool internal_ref;
+ bool external_res;
+- u8 transf_buf[2] ____cacheline_aligned;
++ u8 transf_buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct regmap_config ad5770r_spi_regmap_config = {
+diff --git a/drivers/iio/dac/ad5791.c b/drivers/iio/dac/ad5791.c
+index 339564fe47d12..a4167454da818 100644
+--- a/drivers/iio/dac/ad5791.c
++++ b/drivers/iio/dac/ad5791.c
+@@ -95,7 +95,7 @@ struct ad5791_state {
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[3] ____cacheline_aligned;
++ } data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum ad5791_supported_device_ids {
+diff --git a/drivers/iio/dac/ad7293.c b/drivers/iio/dac/ad7293.c
+index 59a38ca4c3c77..06f05750d9216 100644
+--- a/drivers/iio/dac/ad7293.c
++++ b/drivers/iio/dac/ad7293.c
+@@ -144,7 +144,7 @@ struct ad7293_state {
+ struct regulator *reg_avdd;
+ struct regulator *reg_vdrive;
+ u8 page_select;
+- u8 data[3] ____cacheline_aligned;
++ u8 data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad7293_page_select(struct ad7293_state *st, unsigned int reg)
+diff --git a/drivers/iio/dac/ad7303.c b/drivers/iio/dac/ad7303.c
+index 03edf046dec6f..bff6bf697d9c1 100644
+--- a/drivers/iio/dac/ad7303.c
++++ b/drivers/iio/dac/ad7303.c
+@@ -44,10 +44,10 @@ struct ad7303_state {
+
+ struct mutex lock;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- __be16 data ____cacheline_aligned;
++ __be16 data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad7303_write(struct ad7303_state *st, unsigned int chan,
+diff --git a/drivers/iio/dac/ad8801.c b/drivers/iio/dac/ad8801.c
+index 6be35c92d435a..919e8c8806973 100644
+--- a/drivers/iio/dac/ad8801.c
++++ b/drivers/iio/dac/ad8801.c
+@@ -26,7 +26,7 @@ struct ad8801_state {
+ struct regulator *vrefh_reg;
+ struct regulator *vrefl_reg;
+
+- __be16 data ____cacheline_aligned;
++ __be16 data __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad8801_spi_write(struct ad8801_state *state,
+diff --git a/drivers/iio/dac/ltc2688.c b/drivers/iio/dac/ltc2688.c
+index 937b0d25a11cc..28bdde2d30889 100644
+--- a/drivers/iio/dac/ltc2688.c
++++ b/drivers/iio/dac/ltc2688.c
+@@ -91,10 +91,10 @@ struct ltc2688_state {
+ struct mutex lock;
+ int vref;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- u8 tx_data[6] ____cacheline_aligned;
++ u8 tx_data[6] __aligned(IIO_DMA_MINALIGN);
+ u8 rx_data[3];
+ };
+
+diff --git a/drivers/iio/dac/mcp4922.c b/drivers/iio/dac/mcp4922.c
+index cb9e60e71b915..6c0e31032c570 100644
+--- a/drivers/iio/dac/mcp4922.c
++++ b/drivers/iio/dac/mcp4922.c
+@@ -29,7 +29,7 @@ struct mcp4922_state {
+ unsigned int value[MCP4922_NUM_CHANNELS];
+ unsigned int vref_mv;
+ struct regulator *vref_reg;
+- u8 mosi[2] ____cacheline_aligned;
++ u8 mosi[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define MCP4922_CHAN(chan, bits) { \
+diff --git a/drivers/iio/dac/ti-dac082s085.c b/drivers/iio/dac/ti-dac082s085.c
+index 106ce35464195..8e1590e3cc8b2 100644
+--- a/drivers/iio/dac/ti-dac082s085.c
++++ b/drivers/iio/dac/ti-dac082s085.c
+@@ -55,7 +55,7 @@ struct ti_dac_chip {
+ bool powerdown;
+ u8 powerdown_mode;
+ u8 resolution;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define WRITE_NOT_UPDATE(chan) (0x00 | (chan) << 6)
+diff --git a/drivers/iio/dac/ti-dac5571.c b/drivers/iio/dac/ti-dac5571.c
+index 4b6b04038e941..c8fbacb275159 100644
+--- a/drivers/iio/dac/ti-dac5571.c
++++ b/drivers/iio/dac/ti-dac5571.c
+@@ -52,7 +52,7 @@ struct dac5571_data {
+ struct dac5571_spec const *spec;
+ int (*dac5571_cmd)(struct dac5571_data *data, int channel, u16 val);
+ int (*dac5571_pwrdwn)(struct dac5571_data *data, int channel, u8 pwrdwn);
+- u8 buf[3] ____cacheline_aligned;
++ u8 buf[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define DAC5571_POWERDOWN(mode) ((mode) + 1)
+diff --git a/drivers/iio/dac/ti-dac7311.c b/drivers/iio/dac/ti-dac7311.c
+index 4afc411725d9d..7f89d2a52f49c 100644
+--- a/drivers/iio/dac/ti-dac7311.c
++++ b/drivers/iio/dac/ti-dac7311.c
+@@ -52,7 +52,7 @@ struct ti_dac_chip {
+ bool powerdown;
+ u8 powerdown_mode;
+ u8 resolution;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static u8 ti_dac_get_power(struct ti_dac_chip *ti_dac, bool powerdown)
+diff --git a/drivers/iio/dac/ti-dac7612.c b/drivers/iio/dac/ti-dac7612.c
+index 4c0f4b5e9ff44..8195815de26fe 100644
+--- a/drivers/iio/dac/ti-dac7612.c
++++ b/drivers/iio/dac/ti-dac7612.c
+@@ -31,10 +31,10 @@ struct dac7612 {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- uint8_t data[2] ____cacheline_aligned;
++ uint8_t data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int dac7612_cmd_single(struct dac7612 *priv, int channel, u16 val)
+diff --git a/drivers/iio/frequency/ad9523.c b/drivers/iio/frequency/ad9523.c
+index 942870539268d..97662ca1ca966 100644
+--- a/drivers/iio/frequency/ad9523.c
++++ b/drivers/iio/frequency/ad9523.c
+@@ -287,13 +287,13 @@ struct ad9523_state {
+ struct mutex lock;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
+- * transfer buffers to live in their own cache lines.
++ * DMA (thus cache coherency maintenance) may require that
++ * transfer buffers live in their own cache lines.
+ */
+ union {
+ __be32 d32;
+ u8 d8[4];
+- } data[2] ____cacheline_aligned;
++ } data[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad9523_read(struct iio_dev *indio_dev, unsigned int addr)
+diff --git a/drivers/iio/frequency/adf4350.c b/drivers/iio/frequency/adf4350.c
+index be1218d862919..85e289700c3c5 100644
+--- a/drivers/iio/frequency/adf4350.c
++++ b/drivers/iio/frequency/adf4350.c
+@@ -56,10 +56,10 @@ struct adf4350_state {
+ */
+ struct mutex lock;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
+- * transfer buffers to live in their own cache lines.
++ * DMA (thus cache coherency maintenance) may require that
++ * transfer buffers live in their own cache lines.
+ */
+- __be32 val ____cacheline_aligned;
++ __be32 val __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static struct adf4350_platform_data default_pdata = {
+diff --git a/drivers/iio/frequency/adf4371.c b/drivers/iio/frequency/adf4371.c
+index ecd5e18995adc..135c8cedc33dc 100644
+--- a/drivers/iio/frequency/adf4371.c
++++ b/drivers/iio/frequency/adf4371.c
+@@ -175,7 +175,7 @@ struct adf4371_state {
+ unsigned int mod2;
+ unsigned int rf_div_sel;
+ unsigned int ref_div_factor;
+- u8 buf[10] ____cacheline_aligned;
++ u8 buf[10] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static unsigned long long adf4371_pll_fract_n_get_rate(struct adf4371_state *st,
+diff --git a/drivers/iio/frequency/admv1013.c b/drivers/iio/frequency/admv1013.c
+index b0e1f6571afba..ed81672713586 100644
+--- a/drivers/iio/frequency/admv1013.c
++++ b/drivers/iio/frequency/admv1013.c
+@@ -100,7 +100,7 @@ struct admv1013_state {
+ unsigned int input_mode;
+ unsigned int quad_se_mode;
+ bool det_en;
+- u8 data[3] ____cacheline_aligned;
++ u8 data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int __admv1013_spi_read(struct admv1013_state *st, unsigned int reg,
+diff --git a/drivers/iio/frequency/admv1014.c b/drivers/iio/frequency/admv1014.c
+index 1aac5665b5de3..865addd10db44 100644
+--- a/drivers/iio/frequency/admv1014.c
++++ b/drivers/iio/frequency/admv1014.c
+@@ -127,7 +127,7 @@ struct admv1014_state {
+ unsigned int quad_se_mode;
+ unsigned int p1db_comp;
+ bool det_en;
+- u8 data[3] ____cacheline_aligned;
++ u8 data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const int mixer_vgate_table[] = {106, 107, 108, 110, 111, 112, 113, 114,
+diff --git a/drivers/iio/frequency/admv4420.c b/drivers/iio/frequency/admv4420.c
+index 51134aee85109..863ba8e98c95e 100644
+--- a/drivers/iio/frequency/admv4420.c
++++ b/drivers/iio/frequency/admv4420.c
+@@ -113,7 +113,7 @@ struct admv4420_state {
+ struct admv4420_n_counter n_counter;
+ enum admv4420_mux_sel mux_sel;
+ struct mutex lock;
+- u8 transf_buf[4] ____cacheline_aligned;
++ u8 transf_buf[4] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct regmap_config admv4420_regmap_config = {
+diff --git a/drivers/iio/frequency/adrf6780.c b/drivers/iio/frequency/adrf6780.c
+index 8255ffd174f6a..21878bad09097 100644
+--- a/drivers/iio/frequency/adrf6780.c
++++ b/drivers/iio/frequency/adrf6780.c
+@@ -86,7 +86,7 @@ struct adrf6780_state {
+ bool uc_bias_en;
+ bool lo_sideband;
+ bool vdet_out_en;
+- u8 data[3] ____cacheline_aligned;
++ u8 data[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int __adrf6780_spi_read(struct adrf6780_state *st, unsigned int reg,
+diff --git a/drivers/iio/gyro/adis16080.c b/drivers/iio/gyro/adis16080.c
+index acef59d822b10..14b3abf6dce95 100644
+--- a/drivers/iio/gyro/adis16080.c
++++ b/drivers/iio/gyro/adis16080.c
+@@ -45,7 +45,7 @@ struct adis16080_state {
+ const struct adis16080_chip_info *info;
+ struct mutex lock;
+
+- __be16 buf ____cacheline_aligned;
++ __be16 buf __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int adis16080_read_sample(struct iio_dev *indio_dev,
+diff --git a/drivers/iio/gyro/adis16130.c b/drivers/iio/gyro/adis16130.c
+index b9c952e65b553..33cde9e6fca59 100644
+--- a/drivers/iio/gyro/adis16130.c
++++ b/drivers/iio/gyro/adis16130.c
+@@ -41,7 +41,7 @@
+ struct adis16130_state {
+ struct spi_device *us;
+ struct mutex buf_lock;
+- u8 buf[4] ____cacheline_aligned;
++ u8 buf[4] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int adis16130_spi_read(struct iio_dev *indio_dev, u8 reg_addr, u32 *val)
+diff --git a/drivers/iio/gyro/adxrs450.c b/drivers/iio/gyro/adxrs450.c
+index 04f3500252152..f84438e0c42c5 100644
+--- a/drivers/iio/gyro/adxrs450.c
++++ b/drivers/iio/gyro/adxrs450.c
+@@ -73,7 +73,7 @@ enum {
+ struct adxrs450_state {
+ struct spi_device *us;
+ struct mutex buf_lock;
+- __be32 tx ____cacheline_aligned;
++ __be32 tx __aligned(IIO_DMA_MINALIGN);
+ __be32 rx;
+
+ };
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index 0923fd793492b..a36d71d9e3ea9 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -150,10 +150,10 @@ struct fxas21002c_data {
+ struct regulator *vddio;
+
+ /*
+- * DMA (thus cache coherency maintenance) requires the
+- * transfer buffers to live in their own cache lines.
++ * DMA (thus cache coherency maintenance) may require the
++ * transfer buffers live in their own cache lines.
+ */
+- s16 buffer[8] ____cacheline_aligned;
++ s16 buffer[8] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ enum fxas21002c_channel_index {
+diff --git a/drivers/iio/imu/fxos8700_core.c b/drivers/iio/imu/fxos8700_core.c
+index ab288186f36e4..423cfe526f2a1 100644
+--- a/drivers/iio/imu/fxos8700_core.c
++++ b/drivers/iio/imu/fxos8700_core.c
+@@ -167,7 +167,7 @@
+ struct fxos8700_data {
+ struct regmap *regmap;
+ struct iio_trigger *trig;
+- __be16 buf[FXOS8700_DATA_BUF_SIZE] ____cacheline_aligned;
++ __be16 buf[FXOS8700_DATA_BUF_SIZE] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /* Regmap info */
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+index 995a9dc06521d..3d91469beccbb 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+@@ -141,7 +141,7 @@ struct inv_icm42600_state {
+ struct inv_icm42600_suspended suspended;
+ struct iio_dev *indio_gyro;
+ struct iio_dev *indio_accel;
+- uint8_t buffer[2] ____cacheline_aligned;
++ uint8_t buffer[2] __aligned(IIO_DMA_MINALIGN);
+ struct inv_icm42600_fifo fifo;
+ struct {
+ int64_t gyro;
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
+index de2a3949dcc7d..8b85ee333bf8f 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
+@@ -39,7 +39,7 @@ struct inv_icm42600_fifo {
+ size_t accel;
+ size_t total;
+ } nb;
+- uint8_t data[2080] ____cacheline_aligned;
++ uint8_t data[2080] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /* FIFO data packet */
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index 8e14f20b13145..94b54c501ec0a 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -204,7 +204,7 @@ struct inv_mpu6050_state {
+ s32 magn_raw_to_gauss[3];
+ struct iio_mount_matrix magn_orient;
+ unsigned int suspended_sensors;
+- u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] ____cacheline_aligned;
++ u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ /*register and associated bit definition*/
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index adf054c7a75e1..ed36851d646ba 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -835,7 +835,23 @@ static ssize_t iio_format_avail_list(char *buf, const int *vals,
+
+ static ssize_t iio_format_avail_range(char *buf, const int *vals, int type)
+ {
+- return iio_format_list(buf, vals, type, 3, "[", "]");
++ int length;
++
++ /*
++ * length refers to the array size , not the number of elements.
++ * The purpose is to print the range [min , step ,max] so length should
++ * be 3 in case of int, and 6 for other types.
++ */
++ switch (type) {
++ case IIO_VAL_INT:
++ length = 3;
++ break;
++ default:
++ length = 6;
++ break;
++ }
++
++ return iio_format_list(buf, vals, type, length, "[", "]");
+ }
+
+ static ssize_t iio_read_channel_info_avail(struct device *dev,
+@@ -1653,7 +1669,7 @@ struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv)
+
+ alloc_size = sizeof(struct iio_dev_opaque);
+ if (sizeof_priv) {
+- alloc_size = ALIGN(alloc_size, IIO_ALIGN);
++ alloc_size = ALIGN(alloc_size, IIO_DMA_MINALIGN);
+ alloc_size += sizeof_priv;
+ }
+
+@@ -1663,7 +1679,7 @@ struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv)
+
+ indio_dev = &iio_dev_opaque->indio_dev;
+ indio_dev->priv = (char *)iio_dev_opaque +
+- ALIGN(sizeof(struct iio_dev_opaque), IIO_ALIGN);
++ ALIGN(sizeof(struct iio_dev_opaque), IIO_DMA_MINALIGN);
+
+ indio_dev->dev.parent = parent;
+ indio_dev->dev.type = &iio_device_type;
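
For the iio_format_avail_range() change above: the [min step max] range is three values, but only IIO_VAL_INT encodes each value as a single integer; the fractional types carry a val/val2 pair per value, so the backing array holds six integers and a hard-coded length of 3 truncated the range for those types. The layout the function now assumes, per the usual IIO val/val2 convention:

    /*
     * vals[] layout consumed by iio_format_list():
     *   IIO_VAL_INT:            { min, step, max }               -> length 3
     *   other (e.g. IIO_VAL_INT_PLUS_MICRO, IIO_VAL_FRACTIONAL):
     *                           { min.val,  min.val2,
     *                             step.val, step.val2,
     *                             max.val,  max.val2 }           -> length 6
     */
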
+diff --git a/drivers/iio/light/cros_ec_light_prox.c b/drivers/iio/light/cros_ec_light_prox.c
+index de472f23d1cba..16b893bae3881 100644
+--- a/drivers/iio/light/cros_ec_light_prox.c
++++ b/drivers/iio/light/cros_ec_light_prox.c
+@@ -181,8 +181,7 @@ static int cros_ec_light_prox_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ret = cros_ec_sensors_core_init(pdev, indio_dev, true,
+- cros_ec_sensors_capture,
+- cros_ec_sensors_push_data);
++ cros_ec_sensors_capture);
+ if (ret)
+ return ret;
+
+@@ -240,7 +239,8 @@ static int cros_ec_light_prox_probe(struct platform_device *pdev)
+
+ state->core.read_ec_sensors_data = cros_ec_sensors_read_cmd;
+
+- return devm_iio_device_register(dev, indio_dev);
++ return cros_ec_sensors_core_register(dev, indio_dev,
++ cros_ec_sensors_push_data);
+ }
+
+ static const struct platform_device_id cros_ec_light_prox_ids[] = {
+diff --git a/drivers/iio/light/isl29028.c b/drivers/iio/light/isl29028.c
+index 9de3262aa6883..a62787f5d5e7b 100644
+--- a/drivers/iio/light/isl29028.c
++++ b/drivers/iio/light/isl29028.c
+@@ -625,7 +625,7 @@ static int isl29028_probe(struct i2c_client *client,
+ ISL29028_POWER_OFF_DELAY_MS);
+ pm_runtime_use_autosuspend(&client->dev);
+
+- ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev);
++ ret = iio_device_register(indio_dev);
+ if (ret < 0) {
+ dev_err(&client->dev,
+ "%s(): iio registration failed with error %d\n",
+diff --git a/drivers/iio/potentiometer/ad5110.c b/drivers/iio/potentiometer/ad5110.c
+index d4eeedae56e5a..8fbcce4829898 100644
+--- a/drivers/iio/potentiometer/ad5110.c
++++ b/drivers/iio/potentiometer/ad5110.c
+@@ -63,10 +63,10 @@ struct ad5110_data {
+ struct mutex lock;
+ const struct ad5110_cfg *cfg;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ */
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec ad5110_channels[] = {
+diff --git a/drivers/iio/potentiometer/ad5272.c b/drivers/iio/potentiometer/ad5272.c
+index d8cbd170262f8..ed5fc0b50fe97 100644
+--- a/drivers/iio/potentiometer/ad5272.c
++++ b/drivers/iio/potentiometer/ad5272.c
+@@ -50,7 +50,7 @@ struct ad5272_data {
+ struct i2c_client *client;
+ struct mutex lock;
+ const struct ad5272_cfg *cfg;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec ad5272_channel = {
+diff --git a/drivers/iio/potentiometer/max5481.c b/drivers/iio/potentiometer/max5481.c
+index 098d144a8fddc..b40e5ac218d73 100644
+--- a/drivers/iio/potentiometer/max5481.c
++++ b/drivers/iio/potentiometer/max5481.c
+@@ -44,7 +44,7 @@ static const struct max5481_cfg max5481_cfg[] = {
+ struct max5481_data {
+ struct spi_device *spi;
+ const struct max5481_cfg *cfg;
+- u8 msg[3] ____cacheline_aligned;
++ u8 msg[3] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define MAX5481_CHANNEL { \
+diff --git a/drivers/iio/potentiometer/mcp41010.c b/drivers/iio/potentiometer/mcp41010.c
+index 30a4594d4e115..2b73c75402094 100644
+--- a/drivers/iio/potentiometer/mcp41010.c
++++ b/drivers/iio/potentiometer/mcp41010.c
+@@ -60,7 +60,7 @@ struct mcp41010_data {
+ const struct mcp41010_cfg *cfg;
+ struct mutex lock; /* Protect write sequences */
+ unsigned int value[MCP41010_MAX_WIPERS]; /* Cache wiper values */
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define MCP41010_CHANNEL(ch) { \
+diff --git a/drivers/iio/potentiometer/mcp4131.c b/drivers/iio/potentiometer/mcp4131.c
+index 7c8c18ab87649..7890c0993ec48 100644
+--- a/drivers/iio/potentiometer/mcp4131.c
++++ b/drivers/iio/potentiometer/mcp4131.c
+@@ -129,7 +129,7 @@ struct mcp4131_data {
+ struct spi_device *spi;
+ const struct mcp4131_cfg *cfg;
+ struct mutex lock;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ #define MCP4131_CHANNEL(ch) { \
+diff --git a/drivers/iio/pressure/cros_ec_baro.c b/drivers/iio/pressure/cros_ec_baro.c
+index 2f882e1094232..0511edbf868d7 100644
+--- a/drivers/iio/pressure/cros_ec_baro.c
++++ b/drivers/iio/pressure/cros_ec_baro.c
+@@ -138,8 +138,7 @@ static int cros_ec_baro_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ret = cros_ec_sensors_core_init(pdev, indio_dev, true,
+- cros_ec_sensors_capture,
+- cros_ec_sensors_push_data);
++ cros_ec_sensors_capture);
+ if (ret)
+ return ret;
+
+@@ -186,7 +185,8 @@ static int cros_ec_baro_probe(struct platform_device *pdev)
+
+ state->core.read_ec_sensors_data = cros_ec_sensors_read_cmd;
+
+- return devm_iio_device_register(dev, indio_dev);
++ return cros_ec_sensors_core_register(dev, indio_dev,
++ cros_ec_sensors_push_data);
+ }
+
+ static const struct platform_device_id cros_ec_baro_ids[] = {
+diff --git a/drivers/iio/proximity/as3935.c b/drivers/iio/proximity/as3935.c
+index 67891ce2bd095..ebc95cf8f5f42 100644
+--- a/drivers/iio/proximity/as3935.c
++++ b/drivers/iio/proximity/as3935.c
+@@ -65,7 +65,7 @@ struct as3935_state {
+ u8 chan;
+ s64 timestamp __aligned(8);
+ } scan;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static const struct iio_chan_spec as3935_channels[] = {
+diff --git a/drivers/iio/proximity/sx9324.c b/drivers/iio/proximity/sx9324.c
+index 63fbcaa4cac81..a30ac8007a3d3 100644
+--- a/drivers/iio/proximity/sx9324.c
++++ b/drivers/iio/proximity/sx9324.c
+@@ -93,7 +93,7 @@
+ #define SX9324_REG_PROX_CTRL4_AVGNEGFILT_MASK GENMASK(5, 3)
+ #define SX9324_REG_PROX_CTRL4_AVGNEG_FILT_2 0x08
+ #define SX9324_REG_PROX_CTRL4_AVGPOSFILT_MASK GENMASK(2, 0)
+-#define SX9324_REG_PROX_CTRL3_AVGPOS_FILT_256 0x04
++#define SX9324_REG_PROX_CTRL4_AVGPOS_FILT_256 0x04
+ #define SX9324_REG_PROX_CTRL5 0x35
+ #define SX9324_REG_PROX_CTRL5_HYST_MASK GENMASK(5, 4)
+ #define SX9324_REG_PROX_CTRL5_CLOSE_DEBOUNCE_MASK GENMASK(3, 2)
+@@ -810,7 +810,7 @@ static const struct sx_common_reg_default sx9324_default_regs[] = {
+ { SX9324_REG_PROX_CTRL3, SX9324_REG_PROX_CTRL3_AVGDEB_2SAMPLES |
+ SX9324_REG_PROX_CTRL3_AVGPOS_THRESH_16K },
+ { SX9324_REG_PROX_CTRL4, SX9324_REG_PROX_CTRL4_AVGNEG_FILT_2 |
+- SX9324_REG_PROX_CTRL3_AVGPOS_FILT_256 },
++ SX9324_REG_PROX_CTRL4_AVGPOS_FILT_256 },
+ { SX9324_REG_PROX_CTRL5, 0x00 },
+ { SX9324_REG_PROX_CTRL6, SX9324_REG_PROX_CTRL6_PROXTHRESH_32 },
+ { SX9324_REG_PROX_CTRL7, SX9324_REG_PROX_CTRL6_PROXTHRESH_32 },
+diff --git a/drivers/iio/resolver/ad2s1200.c b/drivers/iio/resolver/ad2s1200.c
+index 9746bd9356285..9d95241bdf8f2 100644
+--- a/drivers/iio/resolver/ad2s1200.c
++++ b/drivers/iio/resolver/ad2s1200.c
+@@ -41,7 +41,7 @@ struct ad2s1200_state {
+ struct spi_device *sdev;
+ struct gpio_desc *sample;
+ struct gpio_desc *rdvel;
+- __be16 rx ____cacheline_aligned;
++ __be16 rx __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad2s1200_read_raw(struct iio_dev *indio_dev,
+diff --git a/drivers/iio/resolver/ad2s90.c b/drivers/iio/resolver/ad2s90.c
+index d6a91f137e134..be6836e55376f 100644
+--- a/drivers/iio/resolver/ad2s90.c
++++ b/drivers/iio/resolver/ad2s90.c
+@@ -24,7 +24,7 @@
+ struct ad2s90_state {
+ struct mutex lock; /* lock to protect rx buffer */
+ struct spi_device *sdev;
+- u8 rx[2] ____cacheline_aligned;
++ u8 rx[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int ad2s90_read_raw(struct iio_dev *indio_dev,
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 4fc654275155d..4b7f2b8a97586 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -204,11 +204,11 @@ struct ltc2983_data {
+ u8 num_channels;
+ u8 iio_channels;
+ /*
+- * DMA (thus cache coherency maintenance) requires the
++ * DMA (thus cache coherency maintenance) may require the
+ * transfer buffers to live in their own cache lines.
+ * Holds the converted temperature
+ */
+- __be32 temp ____cacheline_aligned;
++ __be32 temp __aligned(IIO_DMA_MINALIGN);
+ };
+
+ struct ltc2983_sensor {
+diff --git a/drivers/iio/temperature/max31865.c b/drivers/iio/temperature/max31865.c
+index e3bb78184c6e2..29e23652ba5a1 100644
+--- a/drivers/iio/temperature/max31865.c
++++ b/drivers/iio/temperature/max31865.c
+@@ -55,7 +55,7 @@ struct max31865_data {
+ struct mutex lock;
+ bool filter_50hz;
+ bool three_wire;
+- u8 buf[2] ____cacheline_aligned;
++ u8 buf[2] __aligned(IIO_DMA_MINALIGN);
+ };
+
+ static int max31865_read(struct max31865_data *data, u8 reg,
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index 98c41cddc6f00..c28a7a6dea5f1 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -122,7 +122,7 @@ struct maxim_thermocouple_data {
+ struct spi_device *spi;
+ const struct maxim_thermocouple_chip *chip;
+
+- u8 buffer[16] ____cacheline_aligned;
++ u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
+ char tc_type;
+ };
+
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index 2e4cf2b116534..629beff053add 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -1179,8 +1179,10 @@ static int setup_base_ctxt(struct hfi1_filedata *fd,
+ goto done;
+
+ ret = init_user_ctxt(fd, uctxt);
+- if (ret)
++ if (ret) {
++ hfi1_free_ctxt_rcv_groups(uctxt);
+ goto done;
++ }
+
+ user_init(uctxt);
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index ba3c742258efe..b354caeaa9b29 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -6000,8 +6000,8 @@ static irqreturn_t hns_roce_v2_msix_interrupt_abn(int irq, void *dev_id)
+
+ dev_err(dev, "AEQ overflow!\n");
+
+- int_st |= 1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S;
+- roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG, int_st);
++ roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
++ 1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
+
+ /* Set reset level for reset_event() */
+ if (ops->set_default_reset_request)
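
The hns_roce change above is subtle: ROCEE_VF_ABN_INT_ST_REG is a write-1-to-clear status register (an assumption based on the behavior this fix implies), so reading it, OR-ing in the AEQ-overflow bit and writing the result back would acknowledge every status bit that happened to be set, not just the one being handled. Writing only the bit in question is the safe W1C pattern:

    /* W1C: acknowledge exactly one event, leave the others pending */
    roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
               1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
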
+diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
+index 646fa86774909..7b086fe63a245 100644
+--- a/drivers/infiniband/hw/irdma/cm.c
++++ b/drivers/infiniband/hw/irdma/cm.c
+@@ -1477,12 +1477,13 @@ irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, u16 dst_port,
+ list_for_each_entry (listen_node, &cm_core->listen_list, list) {
+ memcpy(listen_addr, listen_node->loc_addr, sizeof(listen_addr));
+ listen_port = listen_node->loc_port;
++ if (listen_port != dst_port ||
++ !(listener_state & listen_node->listener_state))
++ continue;
+ /* compare node pair, return node handle if a match */
+- if ((!memcmp(listen_addr, dst_addr, sizeof(listen_addr)) ||
+- !memcmp(listen_addr, ip_zero, sizeof(listen_addr))) &&
+- listen_port == dst_port &&
+- vlan_id == listen_node->vlan_id &&
+- (listener_state & listen_node->listener_state)) {
++ if (!memcmp(listen_addr, ip_zero, sizeof(listen_addr)) ||
++ (!memcmp(listen_addr, dst_addr, sizeof(listen_addr)) &&
++ vlan_id == listen_node->vlan_id)) {
+ refcount_inc(&listen_node->refcnt);
+ spin_unlock_irqrestore(&cm_core->listen_list_lock,
+ flags);
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index dd3943d22dc61..6bba1335993a1 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -257,10 +257,6 @@ static void irdma_process_aeq(struct irdma_pci_f *rf)
+ iwqp->last_aeq = info->ae_id;
+ spin_unlock_irqrestore(&iwqp->lock, flags);
+ ctx_info = &iwqp->ctx_info;
+- if (rdma_protocol_roce(&iwqp->iwdev->ibdev, 1))
+- ctx_info->roce_info->err_rq_idx_valid = true;
+- else
+- ctx_info->iwarp_info->err_rq_idx_valid = true;
+ } else {
+ if (info->ae_id != IRDMA_AE_CQ_OPERATION_ERROR)
+ continue;
+@@ -370,16 +366,12 @@ static void irdma_process_aeq(struct irdma_pci_f *rf)
+ case IRDMA_AE_LCE_FUNCTION_CATASTROPHIC:
+ case IRDMA_AE_LCE_CQ_CATASTROPHIC:
+ case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG:
+- if (rdma_protocol_roce(&iwdev->ibdev, 1))
+- ctx_info->roce_info->err_rq_idx_valid = false;
+- else
+- ctx_info->iwarp_info->err_rq_idx_valid = false;
+- fallthrough;
+ default:
+ ibdev_err(&iwdev->ibdev, "abnormal ae_id = 0x%x bool qp=%d qp_id = %d\n",
+ info->ae_id, info->qp, info->qp_cq_id);
+ if (rdma_protocol_roce(&iwdev->ibdev, 1)) {
+- if (!info->sq && ctx_info->roce_info->err_rq_idx_valid) {
++ ctx_info->roce_info->err_rq_idx_valid = info->rq;
++ if (info->rq) {
+ ctx_info->roce_info->err_rq_idx = info->wqe_idx;
+ irdma_sc_qp_setctx_roce(&iwqp->sc_qp, iwqp->host_ctx.va,
+ ctx_info);
+@@ -388,7 +380,8 @@ static void irdma_process_aeq(struct irdma_pci_f *rf)
+ irdma_cm_disconn(iwqp);
+ break;
+ }
+- if (!info->sq && ctx_info->iwarp_info->err_rq_idx_valid) {
++ ctx_info->iwarp_info->err_rq_idx_valid = info->rq;
++ if (info->rq) {
+ ctx_info->iwarp_info->err_rq_idx = info->wqe_idx;
+ ctx_info->tcp_info_valid = false;
+ ctx_info->iwarp_info_valid = true;
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 96135a228f26c..227a799385d1d 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -1776,11 +1776,11 @@ static int irdma_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ spin_unlock_irqrestore(&iwcq->lock, flags);
+
+ irdma_cq_wq_destroy(iwdev->rf, cq);
+- irdma_cq_free_rsrc(iwdev->rf, iwcq);
+
+ spin_lock_irqsave(&iwceq->ce_lock, flags);
+ irdma_sc_cleanup_ceqes(cq, ceq);
+ spin_unlock_irqrestore(&iwceq->ce_lock, flags);
++ irdma_cq_free_rsrc(iwdev->rf, iwcq);
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 39ffb363ba0c7..531aa35ba67c7 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -2050,12 +2050,10 @@ static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs,
+ if (err)
+ return err;
+
+- if (flags) {
+- mlx5_ib_ft_type_to_namespace(
++ if (flags)
++ return mlx5_ib_ft_type_to_namespace(
+ MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX,
+ &obj->ns_type);
+- return 0;
+- }
+ }
+
+ obj->ns_type = MLX5_FLOW_NAMESPACE_BYPASS;
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 03ed7c0fae505..d745ce9dc88aa 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -3084,7 +3084,7 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ else
+ DP_ERR(dev, "roce alloc tid returned error %d\n", rc);
+
+- goto err0;
++ goto err1;
+ }
+
+ /* Index only, 18 bit long, lkey = itid << 8 | key */
+@@ -3108,7 +3108,7 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ rc = dev->ops->rdma_register_tid(dev->rdma_ctx, &mr->hw_mr);
+ if (rc) {
+ DP_ERR(dev, "roce register tid returned an error %d\n", rc);
+- goto err1;
++ goto err2;
+ }
+
+ mr->ibmr.lkey = mr->hw_mr.itid << 8 | mr->hw_mr.key;
+@@ -3117,8 +3117,10 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd,
+ DP_DEBUG(dev, QEDR_MSG_MR, "alloc frmr: %x\n", mr->ibmr.lkey);
+ return mr;
+
+-err1:
++err2:
+ dev->ops->rdma_free_tid(dev->rdma_ctx, mr->hw_mr.itid);
++err1:
++ qedr_free_pbl(dev, &mr->info.pbl_info, mr->info.pbl_table);
+ err0:
+ kfree(mr);
+ return ERR_PTR(rc);
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index da3a398053b8e..4fc31bb7eee6d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -114,6 +114,8 @@ void retransmit_timer(struct timer_list *t)
+ {
+ struct rxe_qp *qp = from_timer(qp, t, retrans_timer);
+
++ pr_debug("%s: fired for qp#%d\n", __func__, qp->elem.index);
++
+ if (qp->valid) {
+ qp->comp.timeout = 1;
+ rxe_run_task(&qp->comp.task, 1);
+@@ -730,11 +732,15 @@ int rxe_completer(void *arg)
+ break;
+
+ case COMPST_RNR_RETRY:
++ /* we come here if we received an RNR NAK */
+ if (qp->comp.rnr_retry > 0) {
+ if (qp->comp.rnr_retry != 7)
+ qp->comp.rnr_retry--;
+
+- qp->req.need_retry = 1;
++ /* don't start a retry flow until the
++ * rnr timer has fired
++ */
++ qp->req.wait_for_rnr_timer = 1;
+ pr_debug("qp#%d set rnr nak timer\n",
+ qp_num(qp));
+ mod_timer(&qp->rnr_nak_timer,
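
This rxe_comp.c hunk, together with the rnr_nak_timer() change further down in rxe_req.c, closes a race in RNR NAK handling: the completer used to set need_retry directly, letting the requester retransmit before the peer's advertised RNR backoff had elapsed. The corrected flow, using the field names from this patch:

    /* completer, on COMPST_RNR_RETRY:
     *         qp->req.wait_for_rnr_timer = 1;
     *         mod_timer(&qp->rnr_nak_timer, jiffies + backoff);
     * timer handler, rnr_nak_timer():
     *         qp->req.need_retry = 1;
     *         qp->req.wait_for_rnr_timer = 0;
     *         rxe_run_task(&qp->req.task, 1);
     * so the send-queue retry only starts once the timer has fired.
     */
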
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 0e022ae1b8a55..37484a559d209 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -77,7 +77,7 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+ enum rxe_mr_lookup_type type);
+ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
+ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+-int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
++int rxe_invalidate_mr(struct rxe_qp *qp, u32 key);
+ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
+ int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
+ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index fc3942e04a1fd..3add521290064 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -576,22 +576,22 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+ return mr;
+ }
+
+-int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey)
++int rxe_invalidate_mr(struct rxe_qp *qp, u32 key)
+ {
+ struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ struct rxe_mr *mr;
+ int ret;
+
+- mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);
++ mr = rxe_pool_get_index(&rxe->mr_pool, key >> 8);
+ if (!mr) {
+- pr_err("%s: No MR for rkey %#x\n", __func__, rkey);
++ pr_err("%s: No MR for key %#x\n", __func__, key);
+ ret = -EINVAL;
+ goto err;
+ }
+
+- if (rkey != mr->rkey) {
+- pr_err("%s: rkey (%#x) doesn't match mr->rkey (%#x)\n",
+- __func__, rkey, mr->rkey);
++ if (mr->rkey ? (key != mr->rkey) : (key != mr->lkey)) {
++ pr_err("%s: wr key (%#x) doesn't match mr key (%#x)\n",
++ __func__, key, (mr->rkey ? mr->rkey : mr->lkey));
+ ret = -EINVAL;
+ goto err_drop_ref;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
+index 2e1fa844fabfd..824739008d5b6 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mw.c
++++ b/drivers/infiniband/sw/rxe/rxe_mw.c
+@@ -48,8 +48,6 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
+ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ struct rxe_mw *mw, struct rxe_mr *mr)
+ {
+- u32 key = wqe->wr.wr.mw.rkey & 0xff;
+-
+ if (mw->ibmw.type == IB_MW_TYPE_1) {
+ if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
+ pr_err_once(
+@@ -87,11 +85,6 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ }
+ }
+
+- if (unlikely(key == (mw->rkey & 0xff))) {
+- pr_err_once("attempt to bind MW with same key\n");
+- return -EINVAL;
+- }
+-
+ /* remaining checks only apply to a nonzero MR */
+ if (!mr)
+ return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
+index 19b14826385b6..e9f3bbd8d6052 100644
+--- a/drivers/infiniband/sw/rxe/rxe_pool.c
++++ b/drivers/infiniband/sw/rxe/rxe_pool.c
+@@ -139,7 +139,7 @@ void *rxe_alloc(struct rxe_pool *pool)
+
+ err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_KERNEL);
+- if (err)
++ if (err < 0)
+ goto err_free;
+
+ return obj;
+@@ -167,7 +167,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
+
+ err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+ &pool->next, GFP_KERNEL);
+- if (err)
++ if (err < 0)
+ goto err_cnt;
+
+ return 0;
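The two `err < 0` conversions above matter because xa_alloc_cyclic() is a tri-state API: it returns a negative errno on failure, 0 on success, and 1 on success when the cyclic cursor wraps around, so `if (err)` was rejecting allocations that had actually succeeded. A minimal userspace sketch of why only the sign may be tested (an illustrative model, not the XArray API):

#include <stdio.h>

#define LIMIT 4

static int next_id;

/* returns <0 on error, 0 on success, 1 on success after wrap-around */
static int alloc_cyclic(int *id)
{
	*id = next_id;
	if (++next_id >= LIMIT) {
		next_id = 0;
		return 1;	/* success, but the cursor wrapped */
	}
	return 0;
}

int main(void)
{
	for (int i = 0; i < 6; i++) {
		int id;
		int err = alloc_cyclic(&id);

		if (err < 0) {	/* correct: only negative values are failures */
			fprintf(stderr, "alloc failed: %d\n", err);
			return 1;
		}
		printf("got id %d%s\n", id, err == 1 ? " (wrapped)" : "");
	}
	return 0;
}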
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 22e9b85344c35..fd706dc3009de 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -174,6 +174,14 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
+
+ spin_lock_init(&qp->state_lock);
+
++ spin_lock_init(&qp->req.task.state_lock);
++ spin_lock_init(&qp->resp.task.state_lock);
++ spin_lock_init(&qp->comp.task.state_lock);
++
++ spin_lock_init(&qp->sq.sq_lock);
++ spin_lock_init(&qp->rq.producer_lock);
++ spin_lock_init(&qp->rq.consumer_lock);
++
+ atomic_set(&qp->ssn, 0);
+ atomic_set(&qp->skb_out, 0);
+ }
+@@ -233,7 +241,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ qp->req.opcode = -1;
+ qp->comp.opcode = -1;
+
+- spin_lock_init(&qp->sq.sq_lock);
+ skb_queue_head_init(&qp->req_pkts);
+
+ rxe_init_task(rxe, &qp->req.task, qp,
+@@ -284,9 +291,6 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ }
+ }
+
+- spin_lock_init(&qp->rq.producer_lock);
+- spin_lock_init(&qp->rq.consumer_lock);
+-
+ skb_queue_head_init(&qp->resp_pkts);
+
+ rxe_init_task(rxe, &qp->resp.task, qp,
+@@ -507,6 +511,7 @@ static void rxe_qp_reset(struct rxe_qp *qp)
+ atomic_set(&qp->ssn, 0);
+ qp->req.opcode = -1;
+ qp->req.need_retry = 0;
++ qp->req.wait_for_rnr_timer = 0;
+ qp->req.noack_pkts = 0;
+ qp->resp.msn = 0;
+ qp->resp.opcode = -1;
+@@ -804,13 +809,15 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ if (qp->rq.queue)
+ rxe_queue_cleanup(qp->rq.queue);
+
+- atomic_dec(&qp->scq->num_wq);
+- if (qp->scq)
++ if (qp->scq) {
++ atomic_dec(&qp->scq->num_wq);
+ rxe_put(qp->scq);
++ }
+
+- atomic_dec(&qp->rcq->num_wq);
+- if (qp->rcq)
++ if (qp->rcq) {
++ atomic_dec(&qp->rcq->num_wq);
+ rxe_put(qp->rcq);
++ }
+
+ if (qp->pd)
+ rxe_put(qp->pd);
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 9d98237389cf2..9f8e3db179ccd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -101,7 +101,11 @@ void rnr_nak_timer(struct timer_list *t)
+ {
+ struct rxe_qp *qp = from_timer(qp, t, rnr_nak_timer);
+
+- pr_debug("qp#%d rnr nak timer fired\n", qp_num(qp));
++ pr_debug("%s: fired for qp#%d\n", __func__, qp_num(qp));
++
++ /* request a send queue retry */
++ qp->req.need_retry = 1;
++ qp->req.wait_for_rnr_timer = 0;
+ rxe_run_task(&qp->req.task, 1);
+ }
+
+@@ -581,9 +585,11 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ wqe->status = IB_WC_SUCCESS;
+ qp->req.wqe_index = queue_next_index(qp->sq.queue, qp->req.wqe_index);
+
+- if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
+- qp->sq_sig_type == IB_SIGNAL_ALL_WR)
+- rxe_run_task(&qp->comp.task, 1);
++ /* There is no ack coming for local work requests
++ * which can lead to a deadlock. So go ahead and complete
++ * it now.
++ */
++ rxe_run_task(&qp->comp.task, 1);
+
+ return 0;
+ }
+@@ -620,10 +626,17 @@ next_wqe:
+ qp->req.need_rd_atomic = 0;
+ qp->req.wait_psn = 0;
+ qp->req.need_retry = 0;
++ qp->req.wait_for_rnr_timer = 0;
+ goto exit;
+ }
+
+- if (unlikely(qp->req.need_retry)) {
++ /* we come here if the retransmit timer has fired
++ * or if the rnr timer has fired. If the retransmit
++ * timer fires while we are processing an RNR NAK, wait
++ * until the rnr timer has fired before starting the
++ * retry flow
++ */
++ if (unlikely(qp->req.need_retry && !qp->req.wait_for_rnr_timer)) {
+ req_retry(qp);
+ qp->req.need_retry = 0;
+ }
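Taken together, the rxe changes above split "a retry was requested" from "the retry may start": the completer arms the RNR NAK timer and sets wait_for_rnr_timer, the timer callback converts that into need_retry, and the requester only enters the retry flow once the timer gate has cleared. A compact userspace model of that handshake (illustrative only, not the rxe code):

#include <stdbool.h>
#include <stdio.h>

struct req_state {
	bool need_retry;		/* a retry has been requested */
	bool wait_for_rnr_timer;	/* ...but must wait for the RNR timer */
};

/* completer path: an RNR NAK arms the timer but must not retry yet */
static void on_rnr_nak(struct req_state *req)
{
	req->wait_for_rnr_timer = true;
}

/* timer callback: only now may the retry flow actually start */
static void on_rnr_timer_fired(struct req_state *req)
{
	req->need_retry = true;
	req->wait_for_rnr_timer = false;
}

/* requester loop: retry only once the timer gate is open */
static void requester(struct req_state *req)
{
	if (req->need_retry && !req->wait_for_rnr_timer) {
		printf("starting retry flow\n");
		req->need_retry = false;
	} else {
		printf("retry deferred\n");
	}
}

int main(void)
{
	struct req_state req = { false, false };

	on_rnr_nak(&req);
	requester(&req);		/* deferred */
	on_rnr_timer_fired(&req);
	requester(&req);		/* starting retry flow */
	return 0;
}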
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index f4f6ee5d81fe4..e38bf958ab485 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -21,6 +21,7 @@ enum resp_states {
+ RESPST_CHK_RKEY,
+ RESPST_EXECUTE,
+ RESPST_READ_REPLY,
++ RESPST_ATOMIC_REPLY,
+ RESPST_COMPLETE,
+ RESPST_ACKNOWLEDGE,
+ RESPST_CLEANUP,
+@@ -55,6 +56,7 @@ static char *resp_state_name[] = {
+ [RESPST_CHK_RKEY] = "CHK_RKEY",
+ [RESPST_EXECUTE] = "EXECUTE",
+ [RESPST_READ_REPLY] = "READ_REPLY",
++ [RESPST_ATOMIC_REPLY] = "ATOMIC_REPLY",
+ [RESPST_COMPLETE] = "COMPLETE",
+ [RESPST_ACKNOWLEDGE] = "ACKNOWLEDGE",
+ [RESPST_CLEANUP] = "CLEANUP",
+@@ -552,8 +554,8 @@ out:
+ /* Guarantee atomicity of atomic operations at the machine level. */
+ static DEFINE_SPINLOCK(atomic_ops_lock);
+
+-static enum resp_states process_atomic(struct rxe_qp *qp,
+- struct rxe_pkt_info *pkt)
++static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
++ struct rxe_pkt_info *pkt)
+ {
+ u64 *vaddr;
+ enum resp_states ret;
+@@ -585,7 +587,16 @@ static enum resp_states process_atomic(struct rxe_qp *qp,
+
+ spin_unlock_bh(&atomic_ops_lock);
+
+- ret = RESPST_NONE;
++ qp->resp.msn++;
++
++ /* next expected psn, read handles this separately */
++ qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
++ qp->resp.ack_psn = qp->resp.psn;
++
++ qp->resp.opcode = pkt->opcode;
++ qp->resp.status = IB_WC_SUCCESS;
++
++ ret = RESPST_ACKNOWLEDGE;
+ out:
+ return ret;
+ }
+@@ -858,9 +869,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ qp->resp.msn++;
+ return RESPST_READ_REPLY;
+ } else if (pkt->mask & RXE_ATOMIC_MASK) {
+- err = process_atomic(qp, pkt);
+- if (err)
+- return err;
++ return RESPST_ATOMIC_REPLY;
+ } else {
+ /* Unreachable */
+ WARN_ON_ONCE(1);
+@@ -1316,6 +1325,9 @@ int rxe_responder(void *arg)
+ case RESPST_READ_REPLY:
+ state = read_reply(qp, pkt);
+ break;
++ case RESPST_ATOMIC_REPLY:
++ state = rxe_atomic_reply(qp, pkt);
++ break;
+ case RESPST_ACKNOWLEDGE:
+ state = acknowledge(qp, pkt);
+ break;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index ac464e68c9230..9bdf333465114 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -124,6 +124,7 @@ struct rxe_req_info {
+ int need_rd_atomic;
+ int wait_psn;
+ int need_retry;
++ int wait_for_rnr_timer;
+ int noack_pkts;
+ struct rxe_task task;
+ };
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 17f34d584cd9e..f88d2971c2c63 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -725,11 +725,11 @@ static int siw_proc_mpareply(struct siw_cep *cep)
+ enum mpa_v2_ctrl mpa_p2p_mode = MPA_V2_RDMA_NO_RTR;
+
+ rv = siw_recv_mpa_rr(cep);
+- if (rv != -EAGAIN)
+- siw_cancel_mpatimer(cep);
+ if (rv)
+ goto out_err;
+
++ siw_cancel_mpatimer(cep);
++
+ rep = &cep->mpa.hdr;
+
+ if (__mpa_rr_revision(rep->params.bits) > MPA_REVISION_2) {
+@@ -895,7 +895,8 @@ static int siw_proc_mpareply(struct siw_cep *cep)
+ }
+
+ out_err:
+- siw_cm_upcall(cep, IW_CM_EVENT_CONNECT_REPLY, -EINVAL);
++ if (rv != -EAGAIN)
++ siw_cm_upcall(cep, IW_CM_EVENT_CONNECT_REPLY, -EINVAL);
+
+ return rv;
+ }
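The reordering above makes -EAGAIN (an incomplete MPA reply; more data is still expected) a true no-op: the MPA timer stays armed and no failed-connect upcall is issued, while only a complete reply or a hard error cancels the timer. A small model of the resulting control flow (a simplified sketch, not the siw code):

#include <errno.h>
#include <stdio.h>

/* rv is the result of reading the MPA reply from the socket */
static int process_reply(int rv)
{
	if (rv) {
		if (rv != -EAGAIN)	/* hard error: report the failed connect */
			printf("upcall: connect failed (%d)\n", rv);
		return rv;	/* -EAGAIN: timer stays armed, wait for more data */
	}
	printf("cancel mpa timer, process complete reply\n");
	return 0;
}

int main(void)
{
	process_reply(-EAGAIN);	/* no upcall, no timer cancel */
	process_reply(-EINVAL);	/* failure reported */
	process_reply(0);	/* reply complete */
	return 0;
}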
+diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
+index 321949a570ed6..620ae5b2d80dc 100644
+--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
++++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
+@@ -568,7 +568,7 @@ static void iscsi_iser_session_destroy(struct iscsi_cls_session *cls_session)
+ struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+
+ iscsi_session_teardown(cls_session);
+- iscsi_host_remove(shost);
++ iscsi_host_remove(shost, false);
+ iscsi_host_free(shost);
+ }
+
+@@ -685,7 +685,7 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
+ return cls_session;
+
+ remove_host:
+- iscsi_host_remove(shost);
++ iscsi_host_remove(shost, false);
+ free_host:
+ iscsi_host_free(shost);
+ return NULL;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 9809c38839798..525f083fcaeb4 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -740,25 +740,25 @@ struct path_it {
+ struct rtrs_clt_path *(*next_path)(struct path_it *it);
+ };
+
+-/**
+- * list_next_or_null_rr_rcu - get next list element in round-robin fashion.
++/*
++ * rtrs_clt_get_next_path_or_null - get clt path from the list or return NULL
+ * @head: the head for the list.
+- * @ptr: the list head to take the next element from.
+- * @type: the type of the struct this is embedded in.
+- * @memb: the name of the list_head within the struct.
++ * @clt_path: The element to take the next clt_path from.
+ *
+- * Next element returned in round-robin fashion, i.e. head will be skipped,
++ * Next clt path returned in round-robin fashion, i.e. head will be skipped,
+ * but if list is observed as empty, NULL will be returned.
+ *
+- * This primitive may safely run concurrently with the _rcu list-mutation
++ * This function may safely run concurrently with the _rcu list-mutation
+ * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
+ */
+-#define list_next_or_null_rr_rcu(head, ptr, type, memb) \
+-({ \
+- list_next_or_null_rcu(head, ptr, type, memb) ?: \
+- list_next_or_null_rcu(head, READ_ONCE((ptr)->next), \
+- type, memb); \
+-})
++static inline struct rtrs_clt_path *
++rtrs_clt_get_next_path_or_null(struct list_head *head, struct rtrs_clt_path *clt_path)
++{
++ return list_next_or_null_rcu(head, &clt_path->s.entry, typeof(*clt_path), s.entry) ?:
++ list_next_or_null_rcu(head,
++ READ_ONCE((&clt_path->s.entry)->next),
++ typeof(*clt_path), s.entry);
++}
+
+ /**
+ * get_next_path_rr() - Returns path in round-robin fashion.
+@@ -789,10 +789,8 @@ static struct rtrs_clt_path *get_next_path_rr(struct path_it *it)
+ path = list_first_or_null_rcu(&clt->paths_list,
+ typeof(*path), s.entry);
+ else
+- path = list_next_or_null_rr_rcu(&clt->paths_list,
+- &path->s.entry,
+- typeof(*path),
+- s.entry);
++ path = rtrs_clt_get_next_path_or_null(&clt->paths_list, path);
++
+ rcu_assign_pointer(*ppcpu_path, path);
+
+ return path;
+@@ -2277,8 +2275,7 @@ static void rtrs_clt_remove_path_from_arr(struct rtrs_clt_path *clt_path)
+ * removed. If @sess is the last element, then @next is NULL.
+ */
+ rcu_read_lock();
+- next = list_next_or_null_rr_rcu(&clt->paths_list, &clt_path->s.entry,
+- typeof(*next), s.entry);
++ next = rtrs_clt_get_next_path_or_null(&clt->paths_list, clt_path);
+ rcu_read_unlock();
+
+ /*
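The macro-to-function conversion above turns the open-coded round-robin lookup into a typed inline helper with the semantics its comment describes: return the next element after @clt_path, skipping the list head, or NULL when the list is observed as empty. A plain (non-RCU) userspace sketch of that next-or-null walk, with hypothetical names:

#include <stddef.h>
#include <stdio.h>

struct node { struct node *next; int id; };
struct list { struct node head; };	/* head is a sentinel, never returned */

static struct node *next_or_null_rr(struct list *l, struct node *cur)
{
	struct node *n = cur->next;

	if (n == &l->head)	/* wrapped onto the sentinel: skip it */
		n = n->next;
	return n == &l->head ? NULL : n;	/* still the sentinel: empty list */
}

int main(void)
{
	struct list l;
	struct node a = { NULL, 1 }, b = { NULL, 2 };
	struct node *p = &a;

	/* circular list: head -> a -> b -> head */
	l.head.next = &a;
	a.next = &b;
	b.next = &l.head;

	for (int i = 0; i < 4; i++) {
		p = next_or_null_rr(&l, p);
		printf("next: %d\n", p ? p->id : -1);
	}
	return 0;
}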
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 9a1e5c2ae55c0..ac0df734eba8c 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -23,6 +23,17 @@
+ #define RTRS_PROTO_VER_STRING __stringify(RTRS_PROTO_VER_MAJOR) "." \
+ __stringify(RTRS_PROTO_VER_MINOR)
+
++/*
++ * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
++ * and the minimum chunk size is 4096 (2^12).
++ * So the maximum sess_queue_depth is 65536 (2^16) in theory.
++ * But mempool_create, create_qp and ib_post_send fail with a
++ * "cannot allocate memory" error if sess_queue_depth is too big.
++ * Therefore the practical max value of sess_queue_depth is
++ * somewhere between 1 and 65534, and it depends on the system.
++ */
++#define MAX_SESS_QUEUE_DEPTH 65535
++
+ enum rtrs_imm_const {
+ MAX_IMM_TYPE_BITS = 4,
+ MAX_IMM_TYPE_MASK = ((1 << MAX_IMM_TYPE_BITS) - 1),
+@@ -46,16 +57,6 @@ enum {
+
+ MAX_PATHS_NUM = 128,
+
+- /*
+- * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
+- * and the minimum chunk size is 4096 (2^12).
+- * So the maximum sess_queue_depth is 65536 (2^16) in theory.
+- * But mempool_create, create_qp and ib_post_send fail with
+- * "cannot allocate memory" error if sess_queue_depth is too big.
+- * Therefore the pratical max value of sess_queue_depth is
+- * somewhere between 1 and 65534 and it depends on the system.
+- */
+- MAX_SESS_QUEUE_DEPTH = 65535,
+ MIN_CHUNK_SIZE = 8192,
+
+ RTRS_HB_INTERVAL_MS = 5000,
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index f86ee1c4b970a..c3036aeac89ed 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -565,12 +565,9 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ if (ret)
+ return ret;
+
+- sport->port_guid_id.wwn.priv = sport;
+- srpt_format_guid(sport->port_guid_id.name,
+- sizeof(sport->port_guid_id.name),
++ srpt_format_guid(sport->guid_name, ARRAY_SIZE(sport->guid_name),
+ &sport->gid.global.interface_id);
+- sport->port_gid_id.wwn.priv = sport;
+- snprintf(sport->port_gid_id.name, sizeof(sport->port_gid_id.name),
++ snprintf(sport->gid_name, ARRAY_SIZE(sport->gid_name),
+ "0x%016llx%016llx",
+ be64_to_cpu(sport->gid.global.subnet_prefix),
+ be64_to_cpu(sport->gid.global.interface_id));
+@@ -2314,31 +2311,35 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ tag_num = ch->rq_size;
+ tag_size = 1; /* ib_srpt does not use se_sess->sess_cmd_map */
+
+- mutex_lock(&sport->port_guid_id.mutex);
+- list_for_each_entry(stpg, &sport->port_guid_id.tpg_list, entry) {
+- if (!IS_ERR_OR_NULL(ch->sess))
+- break;
+- ch->sess = target_setup_session(&stpg->tpg, tag_num,
++ if (sport->guid_id) {
++ mutex_lock(&sport->guid_id->mutex);
++ list_for_each_entry(stpg, &sport->guid_id->tpg_list, entry) {
++ if (!IS_ERR_OR_NULL(ch->sess))
++ break;
++ ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ tag_size, TARGET_PROT_NORMAL,
+ ch->sess_name, ch, NULL);
++ }
++ mutex_unlock(&sport->guid_id->mutex);
+ }
+- mutex_unlock(&sport->port_guid_id.mutex);
+
+- mutex_lock(&sport->port_gid_id.mutex);
+- list_for_each_entry(stpg, &sport->port_gid_id.tpg_list, entry) {
+- if (!IS_ERR_OR_NULL(ch->sess))
+- break;
+- ch->sess = target_setup_session(&stpg->tpg, tag_num,
++ if (sport->gid_id) {
++ mutex_lock(&sport->gid_id->mutex);
++ list_for_each_entry(stpg, &sport->gid_id->tpg_list, entry) {
++ if (!IS_ERR_OR_NULL(ch->sess))
++ break;
++ ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ tag_size, TARGET_PROT_NORMAL, i_port_id,
+ ch, NULL);
+- if (!IS_ERR_OR_NULL(ch->sess))
+- break;
+- /* Retry without leading "0x" */
+- ch->sess = target_setup_session(&stpg->tpg, tag_num,
++ if (!IS_ERR_OR_NULL(ch->sess))
++ break;
++ /* Retry without leading "0x" */
++ ch->sess = target_setup_session(&stpg->tpg, tag_num,
+ tag_size, TARGET_PROT_NORMAL,
+ i_port_id + 2, ch, NULL);
++ }
++ mutex_unlock(&sport->gid_id->mutex);
+ }
+- mutex_unlock(&sport->port_gid_id.mutex);
+
+ if (IS_ERR_OR_NULL(ch->sess)) {
+ WARN_ON_ONCE(ch->sess == NULL);
+@@ -2983,7 +2984,12 @@ static int srpt_release_sport(struct srpt_port *sport)
+ return 0;
+ }
+
+-static struct se_wwn *__srpt_lookup_wwn(const char *name)
++struct port_and_port_id {
++ struct srpt_port *sport;
++ struct srpt_port_id **port_id;
++};
++
++static struct port_and_port_id __srpt_lookup_port(const char *name)
+ {
+ struct ib_device *dev;
+ struct srpt_device *sdev;
+@@ -2998,25 +3004,38 @@ static struct se_wwn *__srpt_lookup_wwn(const char *name)
+ for (i = 0; i < dev->phys_port_cnt; i++) {
+ sport = &sdev->port[i];
+
+- if (strcmp(sport->port_guid_id.name, name) == 0)
+- return &sport->port_guid_id.wwn;
+- if (strcmp(sport->port_gid_id.name, name) == 0)
+- return &sport->port_gid_id.wwn;
++ if (strcmp(sport->guid_name, name) == 0) {
++ kref_get(&sdev->refcnt);
++ return (struct port_and_port_id){
++ sport, &sport->guid_id};
++ }
++ if (strcmp(sport->gid_name, name) == 0) {
++ kref_get(&sdev->refcnt);
++ return (struct port_and_port_id){
++ sport, &sport->gid_id};
++ }
+ }
+ }
+
+- return NULL;
++ return (struct port_and_port_id){};
+ }
+
+-static struct se_wwn *srpt_lookup_wwn(const char *name)
++/**
++ * srpt_lookup_port() - Look up an RDMA port by name
++ * @name: ASCII port name
++ *
++ * Increments the RDMA port reference count if an RDMA port pointer is returned.
++ * The caller must drop that reference count by calling srpt_port_put_ref().
++ */
++static struct port_and_port_id srpt_lookup_port(const char *name)
+ {
+- struct se_wwn *wwn;
++ struct port_and_port_id papi;
+
+ spin_lock(&srpt_dev_lock);
+- wwn = __srpt_lookup_wwn(name);
++ papi = __srpt_lookup_port(name);
+ spin_unlock(&srpt_dev_lock);
+
+- return wwn;
++ return papi;
+ }
+
+ static void srpt_free_srq(struct srpt_device *sdev)
+@@ -3101,6 +3120,18 @@ static int srpt_use_srq(struct srpt_device *sdev, bool use_srq)
+ return ret;
+ }
+
++static void srpt_free_sdev(struct kref *refcnt)
++{
++ struct srpt_device *sdev = container_of(refcnt, typeof(*sdev), refcnt);
++
++ kfree(sdev);
++}
++
++static void srpt_sdev_put(struct srpt_device *sdev)
++{
++ kref_put(&sdev->refcnt, srpt_free_sdev);
++}
++
+ /**
+ * srpt_add_one - InfiniBand device addition callback function
+ * @device: Describes a HCA.
+@@ -3119,6 +3150,7 @@ static int srpt_add_one(struct ib_device *device)
+ if (!sdev)
+ return -ENOMEM;
+
++ kref_init(&sdev->refcnt);
+ sdev->device = device;
+ mutex_init(&sdev->sdev_mutex);
+
+@@ -3182,10 +3214,6 @@ static int srpt_add_one(struct ib_device *device)
+ sport->port_attrib.srp_sq_size = DEF_SRPT_SQ_SIZE;
+ sport->port_attrib.use_srq = false;
+ INIT_WORK(&sport->work, srpt_refresh_port_work);
+- mutex_init(&sport->port_guid_id.mutex);
+- INIT_LIST_HEAD(&sport->port_guid_id.tpg_list);
+- mutex_init(&sport->port_gid_id.mutex);
+- INIT_LIST_HEAD(&sport->port_gid_id.tpg_list);
+
+ ret = srpt_refresh_port(sport);
+ if (ret) {
+@@ -3214,7 +3242,7 @@ err_ring:
+ srpt_free_srq(sdev);
+ ib_dealloc_pd(sdev->pd);
+ free_dev:
+- kfree(sdev);
++ srpt_sdev_put(sdev);
+ pr_info("%s(%s) failed.\n", __func__, dev_name(&device->dev));
+ return ret;
+ }
+@@ -3258,7 +3286,7 @@ static void srpt_remove_one(struct ib_device *device, void *client_data)
+
+ ib_dealloc_pd(sdev->pd);
+
+- kfree(sdev);
++ srpt_sdev_put(sdev);
+ }
+
+ static struct ib_client srpt_client = {
+@@ -3286,10 +3314,10 @@ static struct srpt_port_id *srpt_wwn_to_sport_id(struct se_wwn *wwn)
+ {
+ struct srpt_port *sport = wwn->priv;
+
+- if (wwn == &sport->port_guid_id.wwn)
+- return &sport->port_guid_id;
+- if (wwn == &sport->port_gid_id.wwn)
+- return &sport->port_gid_id;
++ if (sport->guid_id && &sport->guid_id->wwn == wwn)
++ return sport->guid_id;
++ if (sport->gid_id && &sport->gid_id->wwn == wwn)
++ return sport->gid_id;
+ WARN_ON_ONCE(true);
+ return NULL;
+ }
+@@ -3774,7 +3802,31 @@ static struct se_wwn *srpt_make_tport(struct target_fabric_configfs *tf,
+ struct config_group *group,
+ const char *name)
+ {
+- return srpt_lookup_wwn(name) ? : ERR_PTR(-EINVAL);
++ struct port_and_port_id papi = srpt_lookup_port(name);
++ struct srpt_port *sport = papi.sport;
++ struct srpt_port_id *port_id;
++
++ if (!papi.port_id)
++ return ERR_PTR(-EINVAL);
++ if (*papi.port_id) {
++ /* Attempt to create a directory that already exists. */
++ WARN_ON_ONCE(true);
++ return &(*papi.port_id)->wwn;
++ }
++ port_id = kzalloc(sizeof(*port_id), GFP_KERNEL);
++ if (!port_id) {
++ srpt_sdev_put(sport->sdev);
++ return ERR_PTR(-ENOMEM);
++ }
++ mutex_init(&port_id->mutex);
++ INIT_LIST_HEAD(&port_id->tpg_list);
++ port_id->wwn.priv = sport;
++ memcpy(port_id->name, port_id == sport->guid_id ? sport->guid_name :
++ sport->gid_name, ARRAY_SIZE(port_id->name));
++
++ *papi.port_id = port_id;
++
++ return &port_id->wwn;
+ }
+
+ /**
+@@ -3783,6 +3835,18 @@ static struct se_wwn *srpt_make_tport(struct target_fabric_configfs *tf,
+ */
+ static void srpt_drop_tport(struct se_wwn *wwn)
+ {
++ struct srpt_port_id *port_id = container_of(wwn, typeof(*port_id), wwn);
++ struct srpt_port *sport = wwn->priv;
++
++ if (sport->guid_id == port_id)
++ sport->guid_id = NULL;
++ else if (sport->gid_id == port_id)
++ sport->gid_id = NULL;
++ else
++ WARN_ON_ONCE(true);
++
++ srpt_sdev_put(sport->sdev);
++ kfree(port_id);
+ }
+
+ static ssize_t srpt_wwn_version_show(struct config_item *item, char *buf)
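The srpt changes above hang the device lifetime on a kref: srpt_lookup_port() takes a reference for each configfs WWN directory that gets created, srpt_drop_tport() drops it, and srpt_remove_one() drops only the initial reference, so the sdev stays allocated for as long as configfs still points at it. A userspace model of that ownership scheme (a sketch, not the kernel kref API, and not thread-safe like the real thing):

#include <stdio.h>
#include <stdlib.h>

struct sdev {
	int refcnt;
	/* ... HCA state ... */
};

static struct sdev *sdev_new(void)
{
	struct sdev *s = calloc(1, sizeof(*s));

	s->refcnt = 1;	/* kref_init(): the IB core's reference */
	return s;
}

static void sdev_get(struct sdev *s)
{
	s->refcnt++;
}

static void sdev_put(struct sdev *s)
{
	if (--s->refcnt == 0) {	/* release callback runs on the last put */
		printf("freeing sdev\n");
		free(s);
	}
}

int main(void)
{
	struct sdev *s = sdev_new();

	sdev_get(s);	/* srpt_make_tport(): a configfs WWN dir appears */
	sdev_put(s);	/* srpt_remove_one(): HCA gone, but sdev survives */
	sdev_put(s);	/* srpt_drop_tport(): last user gone, now freed */
	return 0;
}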
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
+index 76e66f630c17a..4c46b301eea18 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
+@@ -376,7 +376,7 @@ struct srpt_tpg {
+ };
+
+ /**
+- * struct srpt_port_id - information about an RDMA port name
++ * struct srpt_port_id - LIO RDMA port information
+ * @mutex: Protects @tpg_list changes.
+ * @tpg_list: TPGs associated with the RDMA port name.
+ * @wwn: WWN associated with the RDMA port name.
+@@ -393,7 +393,7 @@ struct srpt_port_id {
+ };
+
+ /**
+- * struct srpt_port - information associated by SRPT with a single IB port
++ * struct srpt_port - SRPT RDMA port information
+ * @sdev: backpointer to the HCA information.
+ * @mad_agent: per-port management datagram processing information.
+ * @enabled: Whether or not this target port is enabled.
+@@ -402,8 +402,10 @@ struct srpt_port_id {
+ * @lid: cached value of the port's lid.
+ * @gid: cached value of the port's gid.
+ * @work: work structure for refreshing the aforementioned cached values.
+- * @port_guid_id: target port GUID
+- * @port_gid_id: target port GID
++ * @guid_name: port name in GUID format.
++ * @guid_id: LIO target port information for the port name in GUID format.
++ * @gid_name: port name in GID format.
++ * @gid_id: LIO target port information for the port name in GID format.
+ * @port_attrib: Port attributes that can be accessed through configfs.
+ * @refcount: Number of objects associated with this port.
+ * @freed_channels: Completion that will be signaled once @refcount becomes 0.
+@@ -419,8 +421,10 @@ struct srpt_port {
+ u32 lid;
+ union ib_gid gid;
+ struct work_struct work;
+- struct srpt_port_id port_guid_id;
+- struct srpt_port_id port_gid_id;
++ char guid_name[64];
++ struct srpt_port_id *guid_id;
++ char gid_name[64];
++ struct srpt_port_id *gid_id;
+ struct srpt_port_attrib port_attrib;
+ atomic_t refcount;
+ struct completion *freed_channels;
+@@ -430,6 +434,7 @@ struct srpt_port {
+
+ /**
+ * struct srpt_device - information associated by SRPT with a single HCA
++ * @refcnt: Reference count for this device.
+ * @device: Backpointer to the struct ib_device managed by the IB core.
+ * @pd: IB protection domain.
+ * @lkey: L_Key (local key) with write access to all local memory.
+@@ -445,6 +450,7 @@ struct srpt_port {
+ * @port: Information about the ports owned by this HCA.
+ */
+ struct srpt_device {
++ struct kref refcnt;
+ struct ib_device *device;
+ struct ib_pd *pd;
+ u32 lkey;
+diff --git a/drivers/input/serio/gscps2.c b/drivers/input/serio/gscps2.c
+index a9065c6ab5508..da2c67cb86422 100644
+--- a/drivers/input/serio/gscps2.c
++++ b/drivers/input/serio/gscps2.c
+@@ -350,6 +350,10 @@ static int __init gscps2_probe(struct parisc_device *dev)
+ ps2port->port = serio;
+ ps2port->padev = dev;
+ ps2port->addr = ioremap(hpa, GSC_STATUS + 4);
++ if (!ps2port->addr) {
++ ret = -ENOMEM;
++ goto fail_nomem;
++ }
+ spin_lock_init(&ps2port->lock);
+
+ gscps2_reset(ps2port);
+diff --git a/drivers/interconnect/imx/imx.c b/drivers/interconnect/imx/imx.c
+index 249ca25d1d556..4406ec45fa90f 100644
+--- a/drivers/interconnect/imx/imx.c
++++ b/drivers/interconnect/imx/imx.c
+@@ -234,16 +234,16 @@ int imx_icc_register(struct platform_device *pdev,
+ struct device *dev = &pdev->dev;
+ struct icc_onecell_data *data;
+ struct icc_provider *provider;
+- int max_node_id;
++ int num_nodes;
+ int ret;
+
+ /* icc_onecell_data is indexed by node_id, unlike nodes param */
+- max_node_id = get_max_node_id(nodes, nodes_count);
+- data = devm_kzalloc(dev, struct_size(data, nodes, max_node_id),
++ num_nodes = get_max_node_id(nodes, nodes_count) + 1;
++ data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
+ GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+- data->num_nodes = max_node_id;
++ data->num_nodes = num_nodes;
+
+ provider = devm_kzalloc(dev, sizeof(*provider), GFP_KERNEL);
+ if (!provider)
+diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+index 4c077c38fbd64..c11d2c2cbb620 100644
+--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
++++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+@@ -750,9 +750,12 @@ static bool qcom_iommu_has_secure_context(struct qcom_iommu_dev *qcom_iommu)
+ {
+ struct device_node *child;
+
+- for_each_child_of_node(qcom_iommu->dev->of_node, child)
+- if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec"))
++ for_each_child_of_node(qcom_iommu->dev->of_node, child) {
++ if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec")) {
++ of_node_put(child);
+ return true;
++ }
++ }
+
+ return false;
+ }
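The of_node_put() added above matters because for_each_child_of_node() holds a reference on the current child during each iteration and releases it when advancing; returning out of the loop early leaks that reference unless it is dropped explicitly. A generic userspace model of the leak-free early exit (illustrative names throughout):

#include <stdbool.h>
#include <stdio.h>

struct node { int refs; bool secure; };

static void node_get(struct node *n) { n->refs++; }
static void node_put(struct node *n) { n->refs--; }

/* the iterator holds a reference on the current element; an early
 * return must drop it explicitly, as the qcom_iommu fix does */
static bool has_secure_child(struct node *v, int count)
{
	for (int i = 0; i < count; i++) {
		node_get(&v[i]);
		if (v[i].secure) {
			node_put(&v[i]);	/* the added of_node_put() */
			return true;
		}
		node_put(&v[i]);	/* normal advance drops it */
	}
	return false;
}

int main(void)
{
	struct node v[2] = { { 0, false }, { 0, true } };

	printf("found: %d\n", has_secure_child(v, 2));
	printf("leaked refs: %d %d\n", v[0].refs, v[1].refs);	/* 0 0 */
	return 0;
}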
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index 71f2018e23fe9..cd4b889d55379 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -630,7 +630,7 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
+
+ ret = iommu_device_register(&data->iommu, &exynos_iommu_ops, dev);
+ if (ret)
+- return ret;
++ goto err_iommu_register;
+
+ platform_set_drvdata(pdev, data);
+
+@@ -657,6 +657,10 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+
+ return 0;
++
++err_iommu_register:
++ iommu_device_sysfs_remove(&data->iommu);
++ return ret;
+ }
+
+ static int __maybe_unused exynos_sysmmu_suspend(struct device *dev)
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 9699ca101c624..64b14ac4c7b02 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -494,7 +494,7 @@ static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+ if (drhd->reg_base_addr == rhsa->base_address) {
+ int node = pxm_to_node(rhsa->proximity_domain);
+
+- if (!node_online(node))
++ if (node != NUMA_NO_NODE && !node_online(node))
+ node = NUMA_NO_NODE;
+ drhd->iommu->node = node;
+ return 0;
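The extra guard above is needed because pxm_to_node() can return NUMA_NO_NODE (-1), and feeding -1 into node_online() indexes the node-state bitmap out of range. A tiny model of the sanitizing check (illustrative, with a stubbed node_online()):

#include <stdio.h>

#define NUMA_NO_NODE (-1)

static unsigned int online_mask = 0x1;	/* only node 0 online in this model */

static int node_online(int node)
{
	return (online_mask >> node) & 1;	/* undefined for node < 0: the bug */
}

static int sanitize_node(int node)
{
	/* the fix: never feed the -1 sentinel into node_online() */
	if (node != NUMA_NO_NODE && !node_online(node))
		node = NUMA_NO_NODE;
	return node;
}

int main(void)
{
	printf("%d\n", sanitize_node(2));		/* offline -> -1 */
	printf("%d\n", sanitize_node(NUMA_NO_NODE));	/* stays -1, no bad shift */
	return 0;
}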
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index bbb11cb8b0f73..6b287dc025a9b 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -177,7 +177,7 @@ config MADERA_IRQ
+ config IRQ_MIPS_CPU
+ bool
+ select GENERIC_IRQ_CHIP
+- select GENERIC_IRQ_IPI if SYS_SUPPORTS_MULTITHREADING
++ select GENERIC_IRQ_IPI if SMP && SYS_SUPPORTS_MULTITHREADING
+ select IRQ_DOMAIN
+ select GENERIC_IRQ_EFFECTIVE_AFF_MASK
+
+@@ -322,7 +322,8 @@ config KEYSTONE_IRQ
+
+ config MIPS_GIC
+ bool
+- select GENERIC_IRQ_IPI
++ select GENERIC_IRQ_IPI if SMP
++ select IRQ_DOMAIN_HIERARCHY
+ select MIPS_CM
+
+ config INGENIC_IRQ
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index ff89b36267dd4..1ba0f1555c805 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -52,13 +52,15 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);
+
+ static DEFINE_SPINLOCK(gic_lock);
+ static struct irq_domain *gic_irq_domain;
+-static struct irq_domain *gic_ipi_domain;
+ static int gic_shared_intrs;
+ static unsigned int gic_cpu_pin;
+ static unsigned int timer_cpu_pin;
+ static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller;
++
++#ifdef CONFIG_GENERIC_IRQ_IPI
+ static DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS);
+ static DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS);
++#endif /* CONFIG_GENERIC_IRQ_IPI */
+
+ static struct gic_all_vpes_chip_data {
+ u32 map;
+@@ -472,9 +474,11 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ u32 map;
+
+ if (hwirq >= GIC_SHARED_HWIRQ_BASE) {
++#ifdef CONFIG_GENERIC_IRQ_IPI
+ /* verify that shared irqs don't conflict with an IPI irq */
+ if (test_bit(GIC_HWIRQ_TO_SHARED(hwirq), ipi_resrv))
+ return -EBUSY;
++#endif /* CONFIG_GENERIC_IRQ_IPI */
+
+ err = irq_domain_set_hwirq_and_chip(d, virq, hwirq,
+ &gic_level_irq_controller,
+@@ -567,6 +571,8 @@ static const struct irq_domain_ops gic_irq_domain_ops = {
+ .map = gic_irq_domain_map,
+ };
+
++#ifdef CONFIG_GENERIC_IRQ_IPI
++
+ static int gic_ipi_domain_xlate(struct irq_domain *d, struct device_node *ctrlr,
+ const u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq,
+@@ -670,6 +676,48 @@ static const struct irq_domain_ops gic_ipi_domain_ops = {
+ .match = gic_ipi_domain_match,
+ };
+
++static int gic_register_ipi_domain(struct device_node *node)
++{
++ struct irq_domain *gic_ipi_domain;
++ unsigned int v[2], num_ipis;
++
++ gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain,
++ IRQ_DOMAIN_FLAG_IPI_PER_CPU,
++ GIC_NUM_LOCAL_INTRS + gic_shared_intrs,
++ node, &gic_ipi_domain_ops, NULL);
++ if (!gic_ipi_domain) {
++ pr_err("Failed to add IPI domain");
++ return -ENXIO;
++ }
++
++ irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI);
++
++ if (node &&
++ !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
++ bitmap_set(ipi_resrv, v[0], v[1]);
++ } else {
++ /*
++ * Reserve 2 interrupts per possible CPU/VP for use as IPIs,
++ * meeting the requirements of arch/mips SMP.
++ */
++ num_ipis = 2 * num_possible_cpus();
++ bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
++ }
++
++ bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
++
++ return 0;
++}
++
++#else /* !CONFIG_GENERIC_IRQ_IPI */
++
++static inline int gic_register_ipi_domain(struct device_node *node)
++{
++ return 0;
++}
++
++#endif /* !CONFIG_GENERIC_IRQ_IPI */
++
+ static int gic_cpu_startup(unsigned int cpu)
+ {
+ /* Enable or disable EIC */
+@@ -688,11 +736,12 @@ static int gic_cpu_startup(unsigned int cpu)
+ static int __init gic_of_init(struct device_node *node,
+ struct device_node *parent)
+ {
+- unsigned int cpu_vec, i, gicconfig, v[2], num_ipis;
++ unsigned int cpu_vec, i, gicconfig;
+ unsigned long reserved;
+ phys_addr_t gic_base;
+ struct resource res;
+ size_t gic_len;
++ int ret;
+
+ /* Find the first available CPU vector. */
+ i = 0;
+@@ -734,6 +783,10 @@ static int __init gic_of_init(struct device_node *node,
+ }
+
+ mips_gic_base = ioremap(gic_base, gic_len);
++ if (!mips_gic_base) {
++ pr_err("Failed to ioremap gic_base\n");
++ return -ENOMEM;
++ }
+
+ gicconfig = read_gic_config();
+ gic_shared_intrs = FIELD_GET(GIC_CONFIG_NUMINTERRUPTS, gicconfig);
+@@ -780,30 +833,9 @@ static int __init gic_of_init(struct device_node *node,
+ return -ENXIO;
+ }
+
+- gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain,
+- IRQ_DOMAIN_FLAG_IPI_PER_CPU,
+- GIC_NUM_LOCAL_INTRS + gic_shared_intrs,
+- node, &gic_ipi_domain_ops, NULL);
+- if (!gic_ipi_domain) {
+- pr_err("Failed to add IPI domain");
+- return -ENXIO;
+- }
+-
+- irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI);
+-
+- if (node &&
+- !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
+- bitmap_set(ipi_resrv, v[0], v[1]);
+- } else {
+- /*
+- * Reserve 2 interrupts per possible CPU/VP for use as IPIs,
+- * meeting the requirements of arch/mips SMP.
+- */
+- num_ipis = 2 * num_possible_cpus();
+- bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
+- }
+-
+- bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
++ ret = gic_register_ipi_domain(node);
++ if (ret)
++ return ret;
+
+ board_bind_eic_interrupt = &gic_bind_eic_interrupt;
+
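The GIC refactor above moves all IPI domain setup behind gic_register_ipi_domain(), using the usual Kconfig-stub idiom: the real implementation is compiled under CONFIG_GENERIC_IRQ_IPI and an inline no-op stands in otherwise, so gic_of_init() needs no #ifdefs. A minimal standalone illustration of the idiom (names are placeholders):

#include <stdio.h>

#define CONFIG_GENERIC_IRQ_IPI 1	/* flip to 0 to model IPIs compiled out */

#if CONFIG_GENERIC_IRQ_IPI
static int register_ipi_domain(void)
{
	printf("IPI domain registered\n");
	return 0;
}
#else
static inline int register_ipi_domain(void)
{
	return 0;	/* stub: nothing to do, success */
}
#endif

int main(void)
{
	return register_ipi_domain();	/* the caller is #ifdef-free either way */
}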
+diff --git a/drivers/leds/rgb/leds-pwm-multicolor.c b/drivers/leds/rgb/leds-pwm-multicolor.c
+index 45e38708ecb17..eb67b89d28e92 100644
+--- a/drivers/leds/rgb/leds-pwm-multicolor.c
++++ b/drivers/leds/rgb/leds-pwm-multicolor.c
+@@ -72,8 +72,7 @@ static int iterate_subleds(struct device *dev, struct pwm_mc_led *priv,
+ pwmled = &priv->leds[priv->mc_cdev.num_colors];
+ pwmled->pwm = devm_fwnode_pwm_get(dev, fwnode, NULL);
+ if (IS_ERR(pwmled->pwm)) {
+- ret = PTR_ERR(pwmled->pwm);
+- dev_err(dev, "unable to request PWM: %d\n", ret);
++ ret = dev_err_probe(dev, PTR_ERR(pwmled->pwm), "unable to request PWM\n");
+ goto release_fwnode;
+ }
+ pwm_init_state(pwmled->pwm, &pwmled->state);
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 80c9f7134e9b9..ba3638d1d0468 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3097,6 +3097,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ INIT_WORK(&rs->md.event_work, do_table_event);
+ ti->private = rs;
+ ti->num_flush_bios = 1;
++ ti->needs_bio_set_dev = true;
+
+ /* Restore any requested new layout for conversion decision */
+ rs_config_restore(rs, &rs_layout);
+@@ -3509,7 +3510,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
+ {
+ struct raid_set *rs = ti->private;
+ struct mddev *mddev = &rs->md;
+- struct r5conf *conf = mddev->private;
++ struct r5conf *conf = rs_is_raid456(rs) ? mddev->private : NULL;
+ int i, max_nr_stripes = conf ? conf->max_nr_stripes : 0;
+ unsigned long recovery;
+ unsigned int raid_param_cnt = 1; /* at least 1 for chunksize */
+@@ -3819,7 +3820,7 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
+
+ memset(cleared_failed_devices, 0, sizeof(cleared_failed_devices));
+
+- for (i = 0; i < mddev->raid_disks; i++) {
++ for (i = 0; i < rs->raid_disks; i++) {
+ r = &rs->dev[i].rdev;
+ /* HM FIXME: enhance journal device recovery processing */
+ if (test_bit(Journal, &r->flags))
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 2db7030aba00b..a27395c8621ff 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -2045,10 +2045,13 @@ int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
+ dm_sm_threshold_fn fn,
+ void *context)
+ {
+- int r;
++ int r = -EINVAL;
+
+ pmd_write_lock_in_core(pmd);
+- r = dm_sm_register_threshold_callback(pmd->metadata_sm, threshold, fn, context);
++ if (!pmd->fail_io) {
++ r = dm_sm_register_threshold_callback(pmd->metadata_sm,
++ threshold, fn, context);
++ }
+ pmd_write_unlock(pmd);
+
+ return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 84c083f766736..e76c96c760a9b 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3375,8 +3375,10 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ calc_metadata_threshold(pt),
+ metadata_low_callback,
+ pool);
+- if (r)
++ if (r) {
++ ti->error = "Error registering metadata threshold";
+ goto out_flags_changed;
++ }
+
+ dm_pool_register_pre_commit_callback(pool->pmd,
+ metadata_pre_commit_callback, pool);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index d74c5a7a0ab49..ead008ea38f2f 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -22,7 +22,7 @@
+
+ #define HIGH_WATERMARK 50
+ #define LOW_WATERMARK 45
+-#define MAX_WRITEBACK_JOBS 0
++#define MAX_WRITEBACK_JOBS min(0x10000000 / PAGE_SIZE, totalram_pages() / 16)
+ #define ENDIO_LATENCY 16
+ #define WRITEBACK_LATENCY 64
+ #define AUTOCOMMIT_BLOCKS_SSD 65536
+@@ -1329,8 +1329,8 @@ enum wc_map_op {
+ WC_MAP_ERROR,
+ };
+
+-static enum wc_map_op writecache_map_remap_origin(struct dm_writecache *wc, struct bio *bio,
+- struct wc_entry *e)
++static void writecache_map_remap_origin(struct dm_writecache *wc, struct bio *bio,
++ struct wc_entry *e)
+ {
+ if (e) {
+ sector_t next_boundary =
+@@ -1338,8 +1338,6 @@ static enum wc_map_op writecache_map_remap_origin(struct dm_writecache *wc, stru
+ if (next_boundary < bio->bi_iter.bi_size >> SECTOR_SHIFT)
+ dm_accept_partial_bio(bio, next_boundary);
+ }
+-
+- return WC_MAP_REMAP_ORIGIN;
+ }
+
+ static enum wc_map_op writecache_map_read(struct dm_writecache *wc, struct bio *bio)
+@@ -1366,14 +1364,16 @@ read_next_block:
+ map_op = WC_MAP_REMAP;
+ }
+ } else {
+- map_op = writecache_map_remap_origin(wc, bio, e);
++ writecache_map_remap_origin(wc, bio, e);
++ wc->stats.reads += (bio->bi_iter.bi_size - wc->block_size) >> wc->block_size_bits;
++ map_op = WC_MAP_REMAP_ORIGIN;
+ }
+
+ return map_op;
+ }
+
+-static enum wc_map_op writecache_bio_copy_ssd(struct dm_writecache *wc, struct bio *bio,
+- struct wc_entry *e, bool search_used)
++static void writecache_bio_copy_ssd(struct dm_writecache *wc, struct bio *bio,
++ struct wc_entry *e, bool search_used)
+ {
+ unsigned bio_size = wc->block_size;
+ sector_t start_cache_sec = cache_sector(wc, e);
+@@ -1413,14 +1413,15 @@ static enum wc_map_op writecache_bio_copy_ssd(struct dm_writecache *wc, struct b
+ bio->bi_iter.bi_sector = start_cache_sec;
+ dm_accept_partial_bio(bio, bio_size >> SECTOR_SHIFT);
+
++ wc->stats.writes += bio->bi_iter.bi_size >> wc->block_size_bits;
++ wc->stats.writes_allocate += (bio->bi_iter.bi_size - wc->block_size) >> wc->block_size_bits;
++
+ if (unlikely(wc->uncommitted_blocks >= wc->autocommit_blocks)) {
+ wc->uncommitted_blocks = 0;
+ queue_work(wc->writeback_wq, &wc->flush_work);
+ } else {
+ writecache_schedule_autocommit(wc);
+ }
+-
+- return WC_MAP_REMAP;
+ }
+
+ static enum wc_map_op writecache_map_write(struct dm_writecache *wc, struct bio *bio)
+@@ -1430,9 +1431,10 @@ static enum wc_map_op writecache_map_write(struct dm_writecache *wc, struct bio
+ do {
+ bool found_entry = false;
+ bool search_used = false;
+- wc->stats.writes++;
+- if (writecache_has_error(wc))
++ if (writecache_has_error(wc)) {
++ wc->stats.writes += bio->bi_iter.bi_size >> wc->block_size_bits;
+ return WC_MAP_ERROR;
++ }
+ e = writecache_find_entry(wc, bio->bi_iter.bi_sector, 0);
+ if (e) {
+ if (!writecache_entry_is_committed(wc, e)) {
+@@ -1456,9 +1458,11 @@ static enum wc_map_op writecache_map_write(struct dm_writecache *wc, struct bio
+ if (unlikely(!e)) {
+ if (!WC_MODE_PMEM(wc) && !found_entry) {
+ direct_write:
+- wc->stats.writes_around++;
+ e = writecache_find_entry(wc, bio->bi_iter.bi_sector, WFE_RETURN_FOLLOWING);
+- return writecache_map_remap_origin(wc, bio, e);
++ writecache_map_remap_origin(wc, bio, e);
++ wc->stats.writes_around += bio->bi_iter.bi_size >> wc->block_size_bits;
++ wc->stats.writes += bio->bi_iter.bi_size >> wc->block_size_bits;
++ return WC_MAP_REMAP_ORIGIN;
+ }
+ wc->stats.writes_blocked_on_freelist++;
+ writecache_wait_on_freelist(wc);
+@@ -1469,10 +1473,13 @@ direct_write:
+ wc->uncommitted_blocks++;
+ wc->stats.writes_allocate++;
+ bio_copy:
+- if (WC_MODE_PMEM(wc))
++ if (WC_MODE_PMEM(wc)) {
+ bio_copy_block(wc, bio, memory_data(wc, e));
+- else
+- return writecache_bio_copy_ssd(wc, bio, e, search_used);
++ wc->stats.writes++;
++ } else {
++ writecache_bio_copy_ssd(wc, bio, e, search_used);
++ return WC_MAP_REMAP;
++ }
+ } while (bio->bi_iter.bi_size);
+
+ if (unlikely(bio->bi_opf & REQ_FUA || wc->uncommitted_blocks >= wc->autocommit_blocks))
+@@ -1507,7 +1514,7 @@ static enum wc_map_op writecache_map_flush(struct dm_writecache *wc, struct bio
+
+ static enum wc_map_op writecache_map_discard(struct dm_writecache *wc, struct bio *bio)
+ {
+- wc->stats.discards++;
++ wc->stats.discards += bio->bi_iter.bi_size >> wc->block_size_bits;
+
+ if (writecache_has_error(wc))
+ return WC_MAP_ERROR;
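The writecache statistics changes above switch the counters from "one per bio" to "one per cache block", so a bio covering N blocks adds N, computed as bi_size >> block_size_bits. A one-line model of the accounting (illustrative sizes):

#include <stdio.h>

int main(void)
{
	unsigned int block_size_bits = 12;	/* 4 KiB cache blocks */
	unsigned int bio_size = 256 * 1024;	/* one 256 KiB write bio */
	unsigned long writes = 0;

	writes += bio_size >> block_size_bits;	/* += 64, not += 1 */
	printf("writes counted: %lu\n", writes);
	return 0;
}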
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 2b75f1ef7386b..c30bb0cba32a2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -578,9 +578,6 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+ struct bio *clone;
+
+ clone = bio_alloc_clone(NULL, bio, GFP_NOIO, &md->mempools->io_bs);
+- /* Set default bdev, but target must bio_set_dev() before issuing IO */
+- clone->bi_bdev = md->disk->part0;
+-
+ tio = clone_to_tio(clone);
+ tio->flags = 0;
+ dm_tio_set_flag(tio, DM_TIO_INSIDE_DM_IO);
+@@ -614,6 +611,7 @@ static void free_io(struct dm_io *io)
+ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
+ unsigned target_bio_nr, unsigned *len, gfp_t gfp_mask)
+ {
++ struct mapped_device *md = ci->io->md;
+ struct dm_target_io *tio;
+ struct bio *clone;
+
+@@ -623,14 +621,10 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
+ /* alloc_io() already initialized embedded clone */
+ clone = &tio->clone;
+ } else {
+- struct mapped_device *md = ci->io->md;
+-
+ clone = bio_alloc_clone(NULL, ci->bio, gfp_mask,
+ &md->mempools->bs);
+ if (!clone)
+ return NULL;
+- /* Set default bdev, but target must bio_set_dev() before issuing IO */
+- clone->bi_bdev = md->disk->part0;
+
+ /* REQ_DM_POLL_LIST shouldn't be inherited */
+ clone->bi_opf &= ~REQ_DM_POLL_LIST;
+@@ -646,6 +640,11 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
+ tio->len_ptr = len;
+ tio->old_sector = 0;
+
++ /* Set default bdev, but target must bio_set_dev() before issuing IO */
++ clone->bi_bdev = md->disk->part0;
++ if (unlikely(ti->needs_bio_set_dev))
++ bio_set_dev(clone, md->disk->part0);
++
+ if (len) {
+ clone->bi_iter.bi_size = to_bytes(*len);
+ if (bio_integrity(clone))
+@@ -3066,6 +3065,11 @@ static int dm_call_pr(struct block_device *bdev, iterate_devices_callout_fn fn,
+ goto out;
+ ti = dm_table_get_target(table, 0);
+
++ if (dm_suspended_md(md)) {
++ ret = -EAGAIN;
++ goto out;
++ }
++
+ ret = -EINVAL;
+ if (!ti->type->iterate_devices)
+ goto out;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index c7ecb0bffda0d..660c52d48256d 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6244,11 +6244,11 @@ static void mddev_detach(struct mddev *mddev)
+ static void __md_stop(struct mddev *mddev)
+ {
+ struct md_personality *pers = mddev->pers;
+- md_bitmap_destroy(mddev);
+ mddev_detach(mddev);
+ /* Ensure ->event_work is done */
+ if (mddev->event_work.func)
+ flush_workqueue(md_misc_wq);
++ md_bitmap_destroy(mddev);
+ spin_lock(&mddev->lock);
+ mddev->pers = NULL;
+ spin_unlock(&mddev->lock);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index d589f823feb11..f1908fe616771 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2167,9 +2167,12 @@ static int raid10_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ int err = 0;
+ int number = rdev->raid_disk;
+ struct md_rdev **rdevp;
+- struct raid10_info *p = conf->mirrors + number;
++ struct raid10_info *p;
+
+ print_conf(conf);
++ if (unlikely(number >= mddev->raid_disks))
++ return 0;
++ p = conf->mirrors + number;
+ if (rdev == p->rdev)
+ rdevp = &p->rdev;
+ else if (rdev == p->replacement)
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 2b20aa6c37b1b..c926e5d43820c 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -1178,6 +1178,7 @@ config VIDEO_ISL7998X
+ depends on OF_GPIO
+ select MEDIA_CONTROLLER
+ select VIDEO_V4L2_SUBDEV_API
++ select V4L2_FWNODE
+ help
+ Support for Intersil ISL7998x analog to MIPI-CSI2 or
+ BT.656 decoder.
+diff --git a/drivers/media/i2c/ov7251.c b/drivers/media/i2c/ov7251.c
+index 0e7be15bc20a7..ad9689820eccc 100644
+--- a/drivers/media/i2c/ov7251.c
++++ b/drivers/media/i2c/ov7251.c
+@@ -934,6 +934,8 @@ static int ov7251_set_power_on(struct device *dev)
+ ARRAY_SIZE(ov7251_global_init_setting));
+ if (ret < 0) {
+ dev_err(ov7251->dev, "error during global init\n");
++ gpiod_set_value_cansleep(ov7251->enable_gpio, 0);
++ clk_disable_unprepare(ov7251->xclk);
+ ov7251_regulators_disable(ov7251);
+ return ret;
+ }
+diff --git a/drivers/media/pci/sta2x11/Kconfig b/drivers/media/pci/sta2x11/Kconfig
+index a96e170ab04ef..118b922c08c35 100644
+--- a/drivers/media/pci/sta2x11/Kconfig
++++ b/drivers/media/pci/sta2x11/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config STA2X11_VIP
+ tristate "STA2X11 VIP Video For Linux"
+- depends on PCI && VIDEO_DEV && VIRT_TO_BUS && I2C
++ depends on PCI && VIDEO_DEV && I2C
+ depends on STA2X11 || COMPILE_TEST
+ select GPIOLIB if MEDIA_SUBDRV_AUTOSELECT
+ select VIDEO_ADV7180 if MEDIA_SUBDRV_AUTOSELECT
+diff --git a/drivers/media/pci/tw686x/tw686x-core.c b/drivers/media/pci/tw686x/tw686x-core.c
+index 6676e069b515d..384d38754a4b1 100644
+--- a/drivers/media/pci/tw686x/tw686x-core.c
++++ b/drivers/media/pci/tw686x/tw686x-core.c
+@@ -315,13 +315,6 @@ static int tw686x_probe(struct pci_dev *pci_dev,
+
+ spin_lock_init(&dev->lock);
+
+- err = request_irq(pci_dev->irq, tw686x_irq, IRQF_SHARED,
+- dev->name, dev);
+- if (err < 0) {
+- dev_err(&pci_dev->dev, "unable to request interrupt\n");
+- goto iounmap;
+- }
+-
+ timer_setup(&dev->dma_delay_timer, tw686x_dma_delay, 0);
+
+ /*
+@@ -333,18 +326,23 @@ static int tw686x_probe(struct pci_dev *pci_dev,
+ err = tw686x_video_init(dev);
+ if (err) {
+ dev_err(&pci_dev->dev, "can't register video\n");
+- goto free_irq;
++ goto iounmap;
+ }
+
+ err = tw686x_audio_init(dev);
+ if (err)
+ dev_warn(&pci_dev->dev, "can't register audio\n");
+
++ err = request_irq(pci_dev->irq, tw686x_irq, IRQF_SHARED,
++ dev->name, dev);
++ if (err < 0) {
++ dev_err(&pci_dev->dev, "unable to request interrupt\n");
++ goto iounmap;
++ }
++
+ pci_set_drvdata(pci_dev, dev);
+ return 0;
+
+-free_irq:
+- free_irq(pci_dev->irq, dev);
+ iounmap:
+ pci_iounmap(pci_dev, dev->mmio);
+ free_region:
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index 6344a479119fe..3ebf7a2c95f03 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1280,8 +1280,10 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ video_set_drvdata(vdev, vc);
+
+ err = video_register_device(vdev, VFL_TYPE_VIDEO, -1);
+- if (err < 0)
++ if (err < 0) {
++ video_device_release(vdev);
+ goto error;
++ }
+ vc->num = vdev->num;
+ }
+
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 3c02aa2a54aa6..44dbca0fe17f1 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -63,6 +63,7 @@ struct vdec_t {
+ bool is_source_changed;
+ u32 source_change;
+ u32 drain;
++ bool aborting;
+ };
+
+ static const struct vpu_format vdec_formats[] = {
+@@ -104,7 +105,6 @@ static const struct vpu_format vdec_formats[] = {
+ .pixfmt = V4L2_PIX_FMT_VC1_ANNEX_L,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+- .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_MPEG2,
+@@ -178,16 +178,6 @@ static int vdec_ctrl_init(struct vpu_inst *inst)
+ return 0;
+ }
+
+-static void vdec_set_last_buffer_dequeued(struct vpu_inst *inst)
+-{
+- struct vdec_t *vdec = inst->priv;
+-
+- if (vdec->eos_received) {
+- if (!vpu_set_last_buffer_dequeued(inst))
+- vdec->eos_received--;
+- }
+-}
+-
+ static void vdec_handle_resolution_change(struct vpu_inst *inst)
+ {
+ struct vdec_t *vdec = inst->priv;
+@@ -234,6 +224,21 @@ static int vdec_update_state(struct vpu_inst *inst, enum vpu_codec_state state,
+ return 0;
+ }
+
++static void vdec_set_last_buffer_dequeued(struct vpu_inst *inst)
++{
++ struct vdec_t *vdec = inst->priv;
++
++ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
++ return;
++
++ if (vdec->eos_received) {
++ if (!vpu_set_last_buffer_dequeued(inst)) {
++ vdec->eos_received--;
++ vdec_update_state(inst, VPU_CODEC_STATE_DRAIN, 0);
++ }
++ }
++}
++
+ static int vdec_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
+ {
+ strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver));
+@@ -493,6 +498,8 @@ static int vdec_drain(struct vpu_inst *inst)
+
+ static int vdec_cmd_start(struct vpu_inst *inst)
+ {
++ struct vdec_t *vdec = inst->priv;
++
+ switch (inst->state) {
+ case VPU_CODEC_STATE_STARTED:
+ case VPU_CODEC_STATE_DRAIN:
+@@ -503,6 +510,8 @@ static int vdec_cmd_start(struct vpu_inst *inst)
+ break;
+ }
+ vpu_process_capture_buffer(inst);
++ if (vdec->eos_received)
++ vdec_set_last_buffer_dequeued(inst);
+ return 0;
+ }
+
+@@ -731,6 +740,7 @@ static void vdec_stop_done(struct vpu_inst *inst)
+ vdec->eos_received = 0;
+ vdec->is_source_changed = false;
+ vdec->source_change = 0;
++ inst->total_input_count = 0;
+ vpu_inst_unlock(inst);
+ }
+
+@@ -939,6 +949,9 @@ static int vdec_response_frame(struct vpu_inst *inst, struct vb2_v4l2_buffer *vb
+ if (inst->state != VPU_CODEC_STATE_ACTIVE)
+ return -EINVAL;
+
++ if (vdec->aborting)
++ return -EINVAL;
++
+ if (!vdec->req_frame_count)
+ return -EINVAL;
+
+@@ -1048,6 +1061,8 @@ static void vdec_clear_slots(struct vpu_inst *inst)
+ vpu_buf = vdec->slots[i];
+ vbuf = &vpu_buf->m2m_buf.vb;
+
++ vpu_trace(inst->dev, "clear slot %d\n", i);
++ vdec_response_fs_release(inst, i, vpu_buf->tag);
+ vdec_recycle_buffer(inst, vbuf);
+ vdec->slots[i]->state = VPU_BUF_STATE_IDLE;
+ vdec->slots[i] = NULL;
+@@ -1203,7 +1218,6 @@ static void vdec_event_eos(struct vpu_inst *inst)
+ vdec->eos_received++;
+ vdec->fixed_fmt = false;
+ inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
+- vdec_update_state(inst, VPU_CODEC_STATE_DRAIN, 0);
+ vdec_set_last_buffer_dequeued(inst);
+ vpu_inst_unlock(inst);
+ }
+@@ -1310,6 +1324,8 @@ static void vdec_abort(struct vpu_inst *inst)
+ int ret;
+
+ vpu_trace(inst->dev, "[%d] state = %d\n", inst->id, inst->state);
++
++ vdec->aborting = true;
+ vpu_iface_add_scode(inst, SCODE_PADDING_ABORT);
+ vdec->params.end_flag = 1;
+ vpu_iface_set_decode_params(inst, &vdec->params, 1);
+@@ -1333,6 +1349,7 @@ static void vdec_abort(struct vpu_inst *inst)
+ vdec->decoded_frame_count = 0;
+ vdec->display_frame_count = 0;
+ vdec->sequence = 0;
++ vdec->aborting = false;
+ }
+
+ static void vdec_stop(struct vpu_inst *inst, bool free)
+@@ -1480,10 +1497,10 @@ static int vdec_stop_session(struct vpu_inst *inst, u32 type)
+ vdec_update_state(inst, VPU_CODEC_STATE_SEEK, 0);
+ vdec->drain = 0;
+ } else {
+- if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
++ if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE) {
+ vdec_abort(inst);
+-
+- vdec->eos_received = 0;
++ vdec->eos_received = 0;
++ }
+ vdec_clear_slots(inst);
+ }
+
+diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
+index e56b96a7e5d3f..f914de6ed81e9 100644
+--- a/drivers/media/platform/amphion/vpu.h
++++ b/drivers/media/platform/amphion/vpu.h
+@@ -258,6 +258,7 @@ struct vpu_inst {
+ struct vpu_format cap_format;
+ u32 min_buffer_cap;
+ u32 min_buffer_out;
++ u32 total_input_count;
+
+ struct v4l2_rect crop;
+ u32 colorspace;
+diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
+index 68ad183925fdb..51a764713159a 100644
+--- a/drivers/media/platform/amphion/vpu_core.c
++++ b/drivers/media/platform/amphion/vpu_core.c
+@@ -455,8 +455,13 @@ int vpu_inst_unregister(struct vpu_inst *inst)
+ }
+ vpu_core_check_hang(core);
+ if (core->state == VPU_CORE_HANG && !core->instance_mask) {
++ int err;
++
+ dev_info(core->dev, "reset hang core\n");
+- if (!vpu_core_sw_reset(core)) {
++ mutex_unlock(&core->lock);
++ err = vpu_core_sw_reset(core);
++ mutex_lock(&core->lock);
++ if (!err) {
+ core->state = VPU_CORE_ACTIVE;
+ core->hang_mask = 0;
+ }
+diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
+index f29c223eefced..542bbe361bd87 100644
+--- a/drivers/media/platform/amphion/vpu_malone.c
++++ b/drivers/media/platform/amphion/vpu_malone.c
+@@ -610,6 +610,8 @@ static int vpu_malone_set_params(struct vpu_shared_addr *shared,
+ enum vpu_malone_format malone_format;
+
+ malone_format = vpu_malone_format_remap(params->codec_format);
++ if (WARN_ON(malone_format == MALONE_FMT_NULL))
++ return -EINVAL;
+ iface->udata_buffer[instance].base = params->udata.base;
+ iface->udata_buffer[instance].slot_size = params->udata.size;
+
+@@ -1296,6 +1298,8 @@ static int vpu_malone_insert_scode_vc1_l_seq(struct malone_scode_t *scode)
+ int size = 0;
+ u8 rcv_seqhdr[MALONE_VC1_RCV_SEQ_HEADER_LEN];
+
++ if (scode->inst->total_input_count)
++ return 0;
+ scode->need_data = 0;
+
+ ret = vpu_malone_insert_scode_seq(scode, MALONE_CODEC_ID_VC1_SIMPLE, sizeof(rcv_seqhdr));
+diff --git a/drivers/media/platform/amphion/vpu_msgs.c b/drivers/media/platform/amphion/vpu_msgs.c
+index d5850df8f1d5c..d8247f36d84ba 100644
+--- a/drivers/media/platform/amphion/vpu_msgs.c
++++ b/drivers/media/platform/amphion/vpu_msgs.c
+@@ -150,7 +150,12 @@ static void vpu_session_handle_eos(struct vpu_inst *inst, struct vpu_rpc_event *
+
+ static void vpu_session_handle_error(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+- dev_err(inst->dev, "unsupported stream\n");
++ char *str = (char *)pkt->data;
++
++ if (strlen(str))
++ dev_err(inst->dev, "instance %d firmware error : %s\n", inst->id, str);
++ else
++ dev_err(inst->dev, "instance %d is unsupported stream\n", inst->id);
+ call_void_vop(inst, event_notify, VPU_MSG_ID_UNSUPPORTED, NULL);
+ vpu_v4l2_set_error(inst);
+ }
+diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h
+index 25119e5e807e1..7eb6f01e6ab5d 100644
+--- a/drivers/media/platform/amphion/vpu_rpc.h
++++ b/drivers/media/platform/amphion/vpu_rpc.h
+@@ -312,11 +312,16 @@ static inline int vpu_iface_input_frame(struct vpu_inst *inst,
+ struct vb2_buffer *vb)
+ {
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
++ int ret;
+
+ if (!ops || !ops->input_frame)
+ return -EINVAL;
+
+- return ops->input_frame(inst->core->iface, inst, vb);
++ ret = ops->input_frame(inst->core->iface, inst, vb);
++ if (ret < 0)
++ return ret;
++ inst->total_input_count++;
++ return ret;
+ }
+
+ static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 446f07d09d0bb..8a3eed957ae6e 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -500,10 +500,12 @@ static int vpu_vb2_start_streaming(struct vb2_queue *q, unsigned int count)
+ fmt->sizeimage[1], fmt->bytesperline[1],
+ fmt->sizeimage[2], fmt->bytesperline[2],
+ q->num_buffers);
+- call_void_vop(inst, start, q->type);
+ vb2_clear_last_buffer_dequeued(q);
++ ret = call_vop(inst, start, q->type);
++ if (ret)
++ vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_QUEUED);
+
+- return 0;
++ return ret;
+ }
+
+ static void vpu_vb2_stop_streaming(struct vb2_queue *q)
+diff --git a/drivers/media/platform/atmel/atmel-sama7g5-isc.c b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+index 83b175070c067..8b11aa8340d7e 100644
+--- a/drivers/media/platform/atmel/atmel-sama7g5-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+@@ -591,11 +591,13 @@ static const struct dev_pm_ops microchip_xisc_dev_pm_ops = {
+ SET_RUNTIME_PM_OPS(xisc_runtime_suspend, xisc_runtime_resume, NULL)
+ };
+
++#if IS_ENABLED(CONFIG_OF)
+ static const struct of_device_id microchip_xisc_of_match[] = {
+ { .compatible = "microchip,sama7g5-isc" },
+ { }
+ };
+ MODULE_DEVICE_TABLE(of, microchip_xisc_of_match);
++#endif
+
+ static struct platform_driver microchip_xisc_driver = {
+ .probe = microchip_xisc_probe,
+diff --git a/drivers/media/platform/mediatek/mdp/mtk_mdp_ipi.h b/drivers/media/platform/mediatek/mdp/mtk_mdp_ipi.h
+index 2cb8cecb30771..b810c96695c83 100644
+--- a/drivers/media/platform/mediatek/mdp/mtk_mdp_ipi.h
++++ b/drivers/media/platform/mediatek/mdp/mtk_mdp_ipi.h
+@@ -40,12 +40,14 @@ struct mdp_ipi_init {
+ * @ipi_id : IPI_MDP
+ * @ap_inst : AP mtk_mdp_vpu address
+ * @vpu_inst_addr : VPU MDP instance address
++ * @padding : Alignment padding
+ */
+ struct mdp_ipi_comm {
+ uint32_t msg_id;
+ uint32_t ipi_id;
+ uint64_t ap_inst;
+ uint32_t vpu_inst_addr;
++ uint32_t padding;
+ };
+
+ /**
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+index 52e5d36aa912c..af3cd2e364510 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+@@ -112,8 +112,6 @@ void mtk_vcodec_dec_set_default_params(struct mtk_vcodec_ctx *ctx)
+ {
+ struct mtk_q_data *q_data;
+
+- ctx->dev->vdec_pdata->init_vdec_params(ctx);
+-
+ ctx->m2m_ctx->q_lock = &ctx->dev->dev_mutex;
+ ctx->fh.m2m_ctx = ctx->m2m_ctx;
+ ctx->fh.ctrl_handler = &ctx->ctrl_hdl;
+@@ -141,15 +139,6 @@ void mtk_vcodec_dec_set_default_params(struct mtk_vcodec_ctx *ctx)
+ q_data->coded_height = DFT_CFG_HEIGHT;
+ q_data->fmt = ctx->dev->vdec_pdata->default_cap_fmt;
+ q_data->field = V4L2_FIELD_NONE;
+- ctx->max_width = MTK_VDEC_MAX_W;
+- ctx->max_height = MTK_VDEC_MAX_H;
+-
+- v4l_bound_align_image(&q_data->coded_width,
+- MTK_VDEC_MIN_W,
+- ctx->max_width, 4,
+- &q_data->coded_height,
+- MTK_VDEC_MIN_H,
+- ctx->max_height, 5, 6);
+
+ q_data->sizeimage[0] = q_data->coded_width * q_data->coded_height;
+ q_data->bytesperline[0] = q_data->coded_width;
+@@ -198,6 +187,11 @@ static int vidioc_vdec_querycap(struct file *file, void *priv,
+ static int vidioc_vdec_subscribe_evt(struct v4l2_fh *fh,
+ const struct v4l2_event_subscription *sub)
+ {
++ struct mtk_vcodec_ctx *ctx = fh_to_ctx(fh);
++
++ if (ctx->dev->vdec_pdata->uses_stateless_api)
++ return v4l2_ctrl_subscribe_event(fh, sub);
++
+ switch (sub->type) {
+ case V4L2_EVENT_EOS:
+ return v4l2_event_subscribe(fh, sub, 2, NULL);
+@@ -208,17 +202,44 @@ static int vidioc_vdec_subscribe_evt(struct v4l2_fh *fh,
+ }
+ }
+
++static const struct v4l2_frmsize_stepwise *mtk_vdec_get_frmsize(struct mtk_vcodec_ctx *ctx,
++ u32 pixfmt)
++{
++ const struct mtk_vcodec_dec_pdata *dec_pdata = ctx->dev->vdec_pdata;
++ int i;
++
++ for (i = 0; i < *dec_pdata->num_framesizes; ++i)
++ if (pixfmt == dec_pdata->vdec_framesizes[i].fourcc)
++ return &dec_pdata->vdec_framesizes[i].stepwise;
++
++ /*
++ * This should never happen since vidioc_try_fmt_vid_out_mplane()
++ * always passes through a valid format for the output side, and
++ * for the capture side, a valid output format should already have
++ * been set.
++ */
++ WARN_ONCE(1, "Unsupported format requested.\n");
++ return &dec_pdata->vdec_framesizes[0].stepwise;
++}
++
+ static int vidioc_try_fmt(struct mtk_vcodec_ctx *ctx, struct v4l2_format *f,
+ const struct mtk_video_fmt *fmt)
+ {
+ struct v4l2_pix_format_mplane *pix_fmt_mp = &f->fmt.pix_mp;
++ const struct v4l2_frmsize_stepwise *frmsize;
++ u32 fourcc;
+
+ pix_fmt_mp->field = V4L2_FIELD_NONE;
+
+- pix_fmt_mp->width =
+- clamp(pix_fmt_mp->width, MTK_VDEC_MIN_W, ctx->max_width);
+- pix_fmt_mp->height =
+- clamp(pix_fmt_mp->height, MTK_VDEC_MIN_H, ctx->max_height);
++ /* Always apply frame size constraints from the coded side */
++ if (V4L2_TYPE_IS_OUTPUT(f->type))
++ fourcc = f->fmt.pix_mp.pixelformat;
++ else
++ fourcc = ctx->q_data[MTK_Q_DATA_SRC].fmt->fourcc;
++
++ frmsize = mtk_vdec_get_frmsize(ctx, fourcc);
++ pix_fmt_mp->width = clamp(pix_fmt_mp->width, MTK_VDEC_MIN_W, frmsize->max_width);
++ pix_fmt_mp->height = clamp(pix_fmt_mp->height, MTK_VDEC_MIN_H, frmsize->max_height);
+
+ if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ pix_fmt_mp->num_planes = 1;
+@@ -234,18 +255,15 @@ static int vidioc_try_fmt(struct mtk_vcodec_ctx *ctx, struct v4l2_format *f,
+ */
+ tmp_w = pix_fmt_mp->width;
+ tmp_h = pix_fmt_mp->height;
+- v4l_bound_align_image(&pix_fmt_mp->width,
+- MTK_VDEC_MIN_W,
+- ctx->max_width, 6,
+- &pix_fmt_mp->height,
+- MTK_VDEC_MIN_H,
+- ctx->max_height, 6, 9);
++ v4l_bound_align_image(&pix_fmt_mp->width, MTK_VDEC_MIN_W, frmsize->max_width, 6,
++ &pix_fmt_mp->height, MTK_VDEC_MIN_H, frmsize->max_height, 6,
++ 9);
+
+ if (pix_fmt_mp->width < tmp_w &&
+- (pix_fmt_mp->width + 64) <= ctx->max_width)
++ (pix_fmt_mp->width + 64) <= frmsize->max_width)
+ pix_fmt_mp->width += 64;
+ if (pix_fmt_mp->height < tmp_h &&
+- (pix_fmt_mp->height + 64) <= ctx->max_height)
++ (pix_fmt_mp->height + 64) <= frmsize->max_height)
+ pix_fmt_mp->height += 64;
+
+ mtk_v4l2_debug(0,
+@@ -435,13 +453,6 @@ static int vidioc_vdec_s_fmt(struct file *file, void *priv,
+ if (fmt == NULL)
+ return -EINVAL;
+
+- if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED) &&
+- fmt->fourcc != V4L2_PIX_FMT_VP8_FRAME) {
+- mtk_v4l2_debug(3, "4K is enabled");
+- ctx->max_width = VCODEC_DEC_4K_CODED_WIDTH;
+- ctx->max_height = VCODEC_DEC_4K_CODED_HEIGHT;
+- }
+-
+ q_data->fmt = fmt;
+ vidioc_try_fmt(ctx, f, q_data->fmt);
+ if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+@@ -533,8 +544,6 @@ static int vidioc_enum_framesizes(struct file *file, void *priv,
+ fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+ fsize->stepwise = dec_pdata->vdec_framesizes[i].stepwise;
+
+- fsize->stepwise.max_width = ctx->max_width;
+- fsize->stepwise.max_height = ctx->max_height;
+ mtk_v4l2_debug(1, "%x, %d %d %d %d %d %d",
+ ctx->dev->dec_capability,
+ fsize->stepwise.min_width,
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+index 995e6e2fb1ab2..eed11a62febfa 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+@@ -208,9 +208,12 @@ static int fops_vcodec_open(struct file *file)
+
+ dev->dec_capability =
+ mtk_vcodec_fw_get_vdec_capa(dev->fw_handler);
++
+ mtk_v4l2_debug(0, "decoder capability %x", dev->dec_capability);
+ }
+
++ ctx->dev->vdec_pdata->init_vdec_params(ctx);
++
+ list_add(&ctx->list, &dev->ctx_list);
+
+ mutex_unlock(&dev->dev_mutex);
+@@ -386,6 +389,8 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ mtk_v4l2_err("Main device of_platform_populate failed.");
+ goto err_reg_cont;
+ }
++ } else {
++ set_bit(MTK_VDEC_CORE, dev->subdev_bitmap);
+ }
+
+ ret = video_register_device(vfd_dec, VFL_TYPE_VIDEO, -1);
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
+index 16d55785d84ba..9a4d3e3658aaa 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
+@@ -360,6 +360,13 @@ static void mtk_vcodec_add_formats(unsigned int fourcc,
+
+ mtk_vdec_framesizes[count_framesizes].fourcc = fourcc;
+ mtk_vdec_framesizes[count_framesizes].stepwise = stepwise_fhd;
++ if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED) &&
++ fourcc != V4L2_PIX_FMT_VP8_FRAME) {
++ mtk_vdec_framesizes[count_framesizes].stepwise.max_width =
++ VCODEC_DEC_4K_CODED_WIDTH;
++ mtk_vdec_framesizes[count_framesizes].stepwise.max_height =
++ VCODEC_DEC_4K_CODED_HEIGHT;
++ }
+ num_framesizes++;
+ break;
+ case V4L2_PIX_FMT_MM21:
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_drv.h b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_drv.h
+index a29041a0b7e00..16e91d9568e9c 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_drv.h
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_drv.h
+@@ -285,8 +285,6 @@ struct vdec_pic_info {
+ * mtk_video_dec_buf.
+ * @hw_id: hardware index used to identify different hardware.
+ *
+- * @max_width: hardware supported max width
+- * @max_height: hardware supported max height
+ * @msg_queue: msg queue used to store lat buffer information.
+ */
+ struct mtk_vcodec_ctx {
+@@ -333,8 +331,6 @@ struct mtk_vcodec_ctx {
+ struct mutex lock;
+ int hw_id;
+
+- unsigned int max_width;
+- unsigned int max_height;
+ struct vdec_msg_queue msg_queue;
+ };
+
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
+index 29c604b1b1790..718b7b08f93e0 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
+@@ -79,6 +79,11 @@ void mxc_jpeg_enable_irq(void __iomem *reg, int slot)
+ writel(0xFFFFFFFF, reg + MXC_SLOT_OFFSET(slot, SLOT_IRQ_EN));
+ }
+
++void mxc_jpeg_disable_irq(void __iomem *reg, int slot)
++{
++ writel(0x0, reg + MXC_SLOT_OFFSET(slot, SLOT_IRQ_EN));
++}
++
+ void mxc_jpeg_sw_reset(void __iomem *reg)
+ {
+ /*
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
+index d838e875616c3..645a24fe8bc16 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.h
+@@ -53,10 +53,10 @@
+ #define CAST_REC_REGS_SEL CAST_STATUS4
+ #define CAST_LUMTH CAST_STATUS5
+ #define CAST_CHRTH CAST_STATUS6
+-#define CAST_NOMFRSIZE_LO CAST_STATUS7
+-#define CAST_NOMFRSIZE_HI CAST_STATUS8
+-#define CAST_OFBSIZE_LO CAST_STATUS9
+-#define CAST_OFBSIZE_HI CAST_STATUS10
++#define CAST_NOMFRSIZE_LO CAST_STATUS16
++#define CAST_NOMFRSIZE_HI CAST_STATUS17
++#define CAST_OFBSIZE_LO CAST_STATUS18
++#define CAST_OFBSIZE_HI CAST_STATUS19
+
+ #define MXC_MAX_SLOTS 1 /* TODO use all 4 slots*/
+ /* JPEG-Decoder Wrapper Slot Registers 0..3 */
+@@ -125,6 +125,7 @@ u32 mxc_jpeg_get_offset(void __iomem *reg, int slot);
+ void mxc_jpeg_enable_slot(void __iomem *reg, int slot);
+ void mxc_jpeg_set_l_endian(void __iomem *reg, int le);
+ void mxc_jpeg_enable_irq(void __iomem *reg, int slot);
++void mxc_jpeg_disable_irq(void __iomem *reg, int slot);
+ int mxc_jpeg_set_input(void __iomem *reg, u32 in_buf, u32 bufsize);
+ int mxc_jpeg_set_output(void __iomem *reg, u16 out_pitch, u32 out_buf,
+ u16 w, u16 h);
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index f36b512bae51f..b2ea57b450283 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -520,6 +520,7 @@ static bool mxc_jpeg_alloc_slot_data(struct mxc_jpeg_dev *jpeg,
+ GFP_ATOMIC);
+ if (!cfg_stm)
+ goto err;
++ memset(cfg_stm, 0, MXC_JPEG_MAX_CFG_STREAM);
+ jpeg->slot_data[slot].cfg_stream_vaddr = cfg_stm;
+
+ skip_alloc:
+@@ -558,6 +559,18 @@ static void mxc_jpeg_free_slot_data(struct mxc_jpeg_dev *jpeg,
+ jpeg->slot_data[slot].used = false;
+ }
+
++static void mxc_jpeg_check_and_set_last_buffer(struct mxc_jpeg_ctx *ctx,
++ struct vb2_v4l2_buffer *src_buf,
++ struct vb2_v4l2_buffer *dst_buf)
++{
++ if (v4l2_m2m_is_last_draining_src_buf(ctx->fh.m2m_ctx, src_buf)) {
++ dst_buf->flags |= V4L2_BUF_FLAG_LAST;
++ v4l2_m2m_mark_stopped(ctx->fh.m2m_ctx);
++ notify_eos(ctx);
++ ctx->header_parsed = false;
++ }
++}
++
+ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ {
+ struct mxc_jpeg_dev *jpeg = priv;
+@@ -580,15 +593,8 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ dev_dbg(dev, "Irq %d on slot %d.\n", irq, slot);
+
+ ctx = v4l2_m2m_get_curr_priv(jpeg->m2m_dev);
+- if (!ctx) {
+- dev_err(dev,
+- "Instance released before the end of transaction.\n");
+- /* soft reset only resets internal state, not registers */
+- mxc_jpeg_sw_reset(reg);
+- /* clear all interrupts */
+- writel(0xFFFFFFFF, reg + MXC_SLOT_OFFSET(slot, SLOT_STATUS));
++ if (WARN_ON(!ctx))
+ goto job_unlock;
+- }
+
+ if (slot != ctx->slot) {
+ /* TODO investigate when adding multi-instance support */
+@@ -632,6 +638,7 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ dev_dbg(dev, "Decoder DHT cfg finished. Start decoding...\n");
+ goto job_unlock;
+ }
++
+ if (jpeg->mode == MXC_JPEG_ENCODE) {
+ payload = readl(reg + MXC_SLOT_OFFSET(slot, SLOT_BUF_PTR));
+ vb2_set_plane_payload(&dst_buf->vb2_buf, 0, payload);
+@@ -659,7 +666,9 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ buf_state = VB2_BUF_STATE_DONE;
+
+ buffers_done:
++ mxc_jpeg_disable_irq(reg, ctx->slot);
+ jpeg->slot_data[slot].used = false; /* unused, but don't free */
++ mxc_jpeg_check_and_set_last_buffer(ctx, src_buf, dst_buf);
+ v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_buf_done(src_buf, buf_state);
+@@ -755,7 +764,13 @@ static unsigned int mxc_jpeg_setup_cfg_stream(void *cfg_stream_vaddr,
+ u32 fourcc,
+ u16 w, u16 h)
+ {
+- unsigned int offset = 0;
++ /*
++ * There is a hardware issue that first 128 bytes of configuration data
++ * can't be loaded correctly.
++ * To avoid this issue, we need to write the configuration from
++ * an offset which should be no less than 0x80 (128 bytes).
++ */
++ unsigned int offset = 0x80;
+ u8 *cfg = (u8 *)cfg_stream_vaddr;
+ struct mxc_jpeg_sof *sof;
+ struct mxc_jpeg_sos *sos;
+@@ -887,8 +902,8 @@ static void mxc_jpeg_config_enc_desc(struct vb2_buffer *out_buf,
+ jpeg->slot_data[slot].cfg_stream_size =
+ mxc_jpeg_setup_cfg_stream(cfg_stream_vaddr,
+ q_data->fmt->fourcc,
+- q_data->w_adjusted,
+- q_data->h_adjusted);
++ q_data->w,
++ q_data->h);
+
+ /* chain the config descriptor with the encoding descriptor */
+ cfg_desc->next_descpt_ptr = desc_handle | MXC_NXT_DESCPT_EN;
+@@ -970,7 +985,7 @@ static bool mxc_jpeg_source_change(struct mxc_jpeg_ctx *ctx,
+ &q_data_cap->h_adjusted,
+ q_data_cap->h_adjusted, /* adjust up */
+ MXC_JPEG_MAX_HEIGHT,
+- q_data_cap->fmt->v_align,
++ 0,
+ 0);
+
+ /* setup bytesperline/sizeimage for capture queue */
+@@ -1027,6 +1042,7 @@ static void mxc_jpeg_device_run(void *priv)
+ jpeg_src_buf->jpeg_parse_error = true;
+ }
+ if (jpeg_src_buf->jpeg_parse_error) {
++ mxc_jpeg_check_and_set_last_buffer(ctx, src_buf, dst_buf);
+ v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_ERROR);
+@@ -1077,45 +1093,33 @@ end:
+ spin_unlock_irqrestore(&ctx->mxc_jpeg->hw_lock, flags);
+ }
+
+-static void mxc_jpeg_set_last_buffer_dequeued(struct mxc_jpeg_ctx *ctx)
+-{
+- struct vb2_queue *q;
+-
+- ctx->stopped = 1;
+- q = v4l2_m2m_get_dst_vq(ctx->fh.m2m_ctx);
+- if (!list_empty(&q->done_list))
+- return;
+-
+- q->last_buffer_dequeued = true;
+- wake_up(&q->done_wq);
+- ctx->stopped = 0;
+- ctx->header_parsed = false;
+-}
+-
+ static int mxc_jpeg_decoder_cmd(struct file *file, void *priv,
+ struct v4l2_decoder_cmd *cmd)
+ {
+ struct v4l2_fh *fh = file->private_data;
+ struct mxc_jpeg_ctx *ctx = mxc_jpeg_fh_to_ctx(fh);
+- struct device *dev = ctx->mxc_jpeg->dev;
+ int ret;
+
+ ret = v4l2_m2m_ioctl_try_decoder_cmd(file, fh, cmd);
+ if (ret < 0)
+ return ret;
+
+- if (cmd->cmd == V4L2_DEC_CMD_STOP) {
+- dev_dbg(dev, "Received V4L2_DEC_CMD_STOP");
+- if (v4l2_m2m_num_src_bufs_ready(fh->m2m_ctx) == 0) {
+- /* No more src bufs, notify app EOS */
+- notify_eos(ctx);
+- mxc_jpeg_set_last_buffer_dequeued(ctx);
+- } else {
+- /* will send EOS later*/
+- ctx->stopping = 1;
+- }
++ if (!vb2_is_streaming(v4l2_m2m_get_src_vq(fh->m2m_ctx)))
++ return 0;
++
++ ret = v4l2_m2m_ioctl_decoder_cmd(file, priv, cmd);
++ if (ret < 0)
++ return ret;
++
++ if (cmd->cmd == V4L2_DEC_CMD_STOP &&
++ v4l2_m2m_has_stopped(fh->m2m_ctx)) {
++ notify_eos(ctx);
++ ctx->header_parsed = false;
+ }
+
++ if (cmd->cmd == V4L2_DEC_CMD_START &&
++ v4l2_m2m_has_stopped(fh->m2m_ctx))
++ vb2_clear_last_buffer_dequeued(&fh->m2m_ctx->cap_q_ctx.q);
+ return 0;
+ }
+
+@@ -1124,24 +1128,27 @@ static int mxc_jpeg_encoder_cmd(struct file *file, void *priv,
+ {
+ struct v4l2_fh *fh = file->private_data;
+ struct mxc_jpeg_ctx *ctx = mxc_jpeg_fh_to_ctx(fh);
+- struct device *dev = ctx->mxc_jpeg->dev;
+ int ret;
+
+ ret = v4l2_m2m_ioctl_try_encoder_cmd(file, fh, cmd);
+ if (ret < 0)
+ return ret;
+
+- if (cmd->cmd == V4L2_ENC_CMD_STOP) {
+- dev_dbg(dev, "Received V4L2_ENC_CMD_STOP");
+- if (v4l2_m2m_num_src_bufs_ready(fh->m2m_ctx) == 0) {
+- /* No more src bufs, notify app EOS */
+- notify_eos(ctx);
+- mxc_jpeg_set_last_buffer_dequeued(ctx);
+- } else {
+- /* will send EOS later*/
+- ctx->stopping = 1;
+- }
+- }
++ if (!vb2_is_streaming(v4l2_m2m_get_src_vq(fh->m2m_ctx)) ||
++ !vb2_is_streaming(v4l2_m2m_get_dst_vq(fh->m2m_ctx)))
++ return 0;
++
++ ret = v4l2_m2m_ioctl_encoder_cmd(file, fh, cmd);
++ if (ret < 0)
++ return 0;
++
++ if (cmd->cmd == V4L2_ENC_CMD_STOP &&
++ v4l2_m2m_has_stopped(fh->m2m_ctx))
++ notify_eos(ctx);
++
++ if (cmd->cmd == V4L2_ENC_CMD_START &&
++ v4l2_m2m_has_stopped(fh->m2m_ctx))
++ vb2_clear_last_buffer_dequeued(&fh->m2m_ctx->cap_q_ctx.q);
+
+ return 0;
+ }
+@@ -1154,18 +1161,30 @@ static int mxc_jpeg_queue_setup(struct vb2_queue *q,
+ {
+ struct mxc_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+ struct mxc_jpeg_q_data *q_data = NULL;
++ struct mxc_jpeg_q_data tmp_q;
+ int i;
+
+ q_data = mxc_jpeg_get_q_data(ctx, q->type);
+ if (!q_data)
+ return -EINVAL;
+
++ tmp_q.fmt = q_data->fmt;
++ tmp_q.w = q_data->w_adjusted;
++ tmp_q.h = q_data->h_adjusted;
++ for (i = 0; i < MXC_JPEG_MAX_PLANES; i++) {
++ tmp_q.bytesperline[i] = q_data->bytesperline[i];
++ tmp_q.sizeimage[i] = q_data->sizeimage[i];
++ }
++ mxc_jpeg_sizeimage(&tmp_q);
++ for (i = 0; i < MXC_JPEG_MAX_PLANES; i++)
++ tmp_q.sizeimage[i] = max(tmp_q.sizeimage[i], q_data->sizeimage[i]);
++
+ /* Handle CREATE_BUFS situation - *nplanes != 0 */
+ if (*nplanes) {
+ if (*nplanes != q_data->fmt->colplanes)
+ return -EINVAL;
+ for (i = 0; i < *nplanes; i++) {
+- if (sizes[i] < q_data->sizeimage[i])
++ if (sizes[i] < tmp_q.sizeimage[i])
+ return -EINVAL;
+ }
+ return 0;
+@@ -1174,7 +1193,7 @@ static int mxc_jpeg_queue_setup(struct vb2_queue *q,
+ /* Handle REQBUFS situation */
+ *nplanes = q_data->fmt->colplanes;
+ for (i = 0; i < *nplanes; i++)
+- sizes[i] = q_data->sizeimage[i];
++ sizes[i] = tmp_q.sizeimage[i];
+
+ return 0;
+ }
+@@ -1185,6 +1204,8 @@ static int mxc_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ struct mxc_jpeg_q_data *q_data = mxc_jpeg_get_q_data(ctx, q->type);
+ int ret;
+
++ v4l2_m2m_update_start_streaming_state(ctx->fh.m2m_ctx, q);
++
+ if (ctx->mxc_jpeg->mode == MXC_JPEG_DECODE && V4L2_TYPE_IS_CAPTURE(q->type))
+ ctx->source_change = 0;
+ dev_dbg(ctx->mxc_jpeg->dev, "Start streaming ctx=%p", ctx);
+@@ -1216,11 +1237,15 @@ static void mxc_jpeg_stop_streaming(struct vb2_queue *q)
+ break;
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
+ }
+- pm_runtime_put_sync(&ctx->mxc_jpeg->pdev->dev);
+- if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+- ctx->stopping = 0;
+- ctx->stopped = 0;
++
++ v4l2_m2m_update_stop_streaming_state(ctx->fh.m2m_ctx, q);
++ if (V4L2_TYPE_IS_OUTPUT(q->type) &&
++ v4l2_m2m_has_stopped(ctx->fh.m2m_ctx)) {
++ notify_eos(ctx);
++ ctx->header_parsed = false;
+ }
++
++ pm_runtime_put_sync(&ctx->mxc_jpeg->pdev->dev);
+ }
+
+ static int mxc_jpeg_valid_comp_id(struct device *dev,
+@@ -1374,11 +1399,6 @@ static int mxc_jpeg_parse(struct mxc_jpeg_ctx *ctx, struct vb2_buffer *vb)
+ }
+ q_data_out->w = header.frame.width;
+ q_data_out->h = header.frame.height;
+- if (header.frame.width % 8 != 0 || header.frame.height % 8 != 0) {
+- dev_err(dev, "JPEG width or height not multiple of 8: %dx%d\n",
+- header.frame.width, header.frame.height);
+- return -EINVAL;
+- }
+ if (header.frame.width > MXC_JPEG_MAX_WIDTH ||
+ header.frame.height > MXC_JPEG_MAX_HEIGHT) {
+ dev_err(dev, "JPEG width or height should be <= 8192: %dx%d\n",
+@@ -1424,6 +1444,20 @@ static void mxc_jpeg_buf_queue(struct vb2_buffer *vb)
+ struct mxc_jpeg_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct mxc_jpeg_src_buf *jpeg_src_buf;
+
++ if (V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type) &&
++ vb2_is_streaming(vb->vb2_queue) &&
++ v4l2_m2m_dst_buf_is_last(ctx->fh.m2m_ctx)) {
++ struct mxc_jpeg_q_data *q_data;
++
++ q_data = mxc_jpeg_get_q_data(ctx, vb->vb2_queue->type);
++ vbuf->field = V4L2_FIELD_NONE;
++ vbuf->sequence = q_data->sequence++;
++ v4l2_m2m_last_buffer_done(ctx->fh.m2m_ctx, vbuf);
++ notify_eos(ctx);
++ ctx->header_parsed = false;
++ return;
++ }
++
+ if (vb->vb2_queue->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ goto end;
+
+@@ -1472,24 +1506,11 @@ static int mxc_jpeg_buf_prepare(struct vb2_buffer *vb)
+ return -EINVAL;
+ }
+ }
+- return 0;
+-}
+-
+-static void mxc_jpeg_buf_finish(struct vb2_buffer *vb)
+-{
+- struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+- struct mxc_jpeg_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+- struct vb2_queue *q = vb->vb2_queue;
+-
+- if (V4L2_TYPE_IS_OUTPUT(vb->type))
+- return;
+- if (!ctx->stopped)
+- return;
+- if (list_empty(&q->done_list)) {
+- vbuf->flags |= V4L2_BUF_FLAG_LAST;
+- ctx->stopped = 0;
+- ctx->header_parsed = false;
++ if (V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type)) {
++ vb2_set_plane_payload(vb, 0, 0);
++ vb2_set_plane_payload(vb, 1, 0);
+ }
++ return 0;
+ }
+
+ static const struct vb2_ops mxc_jpeg_qops = {
+@@ -1498,7 +1519,6 @@ static const struct vb2_ops mxc_jpeg_qops = {
+ .wait_finish = vb2_ops_wait_finish,
+ .buf_out_validate = mxc_jpeg_buf_out_validate,
+ .buf_prepare = mxc_jpeg_buf_prepare,
+- .buf_finish = mxc_jpeg_buf_finish,
+ .start_streaming = mxc_jpeg_start_streaming,
+ .stop_streaming = mxc_jpeg_stop_streaming,
+ .buf_queue = mxc_jpeg_buf_queue,
+@@ -1684,22 +1704,17 @@ static int mxc_jpeg_try_fmt(struct v4l2_format *f, const struct mxc_jpeg_fmt *fm
+ pix_mp->num_planes = fmt->colplanes;
+ pix_mp->pixelformat = fmt->fourcc;
+
+- /*
+- * use MXC_JPEG_H_ALIGN instead of fmt->v_align, for vertical
+- * alignment, to loosen up the alignment to multiple of 8,
+- * otherwise NV12-1080p fails as 1080 is not a multiple of 16
+- */
++ pix_mp->width = w;
++ pix_mp->height = h;
+ v4l_bound_align_image(&w,
+- MXC_JPEG_MIN_WIDTH,
+- w, /* adjust downwards*/
++ w, /* adjust upwards*/
++ MXC_JPEG_MAX_WIDTH,
+ fmt->h_align,
+ &h,
+- MXC_JPEG_MIN_HEIGHT,
+- h, /* adjust downwards*/
+- MXC_JPEG_H_ALIGN,
++ h, /* adjust upwards*/
++ MXC_JPEG_MAX_HEIGHT,
++ 0,
+ 0);
+- pix_mp->width = w; /* negotiate the width */
+- pix_mp->height = h; /* negotiate the height */
+
+ /* get user input into the tmp_q */
+ tmp_q.w = w;
+@@ -1825,35 +1840,19 @@ static int mxc_jpeg_s_fmt(struct mxc_jpeg_ctx *ctx,
+
+ q_data->w_adjusted = q_data->w;
+ q_data->h_adjusted = q_data->h;
+- if (jpeg->mode == MXC_JPEG_DECODE) {
+- /*
+- * align up the resolution for CAST IP,
+- * but leave the buffer resolution unchanged
+- */
+- v4l_bound_align_image(&q_data->w_adjusted,
+- q_data->w_adjusted, /* adjust upwards */
+- MXC_JPEG_MAX_WIDTH,
+- q_data->fmt->h_align,
+- &q_data->h_adjusted,
+- q_data->h_adjusted, /* adjust upwards */
+- MXC_JPEG_MAX_HEIGHT,
+- q_data->fmt->v_align,
+- 0);
+- } else {
+- /*
+- * align down the resolution for CAST IP,
+- * but leave the buffer resolution unchanged
+- */
+- v4l_bound_align_image(&q_data->w_adjusted,
+- MXC_JPEG_MIN_WIDTH,
+- q_data->w_adjusted, /* adjust downwards*/
+- q_data->fmt->h_align,
+- &q_data->h_adjusted,
+- MXC_JPEG_MIN_HEIGHT,
+- q_data->h_adjusted, /* adjust downwards*/
+- q_data->fmt->v_align,
+- 0);
+- }
++ /*
++ * align up the resolution for CAST IP,
++ * but leave the buffer resolution unchanged
++ */
++ v4l_bound_align_image(&q_data->w_adjusted,
++ q_data->w_adjusted, /* adjust upwards */
++ MXC_JPEG_MAX_WIDTH,
++ q_data->fmt->h_align,
++ &q_data->h_adjusted,
++ q_data->h_adjusted, /* adjust upwards */
++ MXC_JPEG_MAX_HEIGHT,
++ q_data->fmt->v_align,
++ 0);
+
+ for (i = 0; i < pix_mp->num_planes; i++) {
+ q_data->bytesperline[i] = pix_mp->plane_fmt[i].bytesperline;
+@@ -1963,27 +1962,6 @@ static int mxc_jpeg_subscribe_event(struct v4l2_fh *fh,
+ }
+ }
+
+-static int mxc_jpeg_dqbuf(struct file *file, void *priv,
+- struct v4l2_buffer *buf)
+-{
+- struct v4l2_fh *fh = file->private_data;
+- struct mxc_jpeg_ctx *ctx = mxc_jpeg_fh_to_ctx(priv);
+- struct device *dev = ctx->mxc_jpeg->dev;
+- int num_src_ready = v4l2_m2m_num_src_bufs_ready(fh->m2m_ctx);
+- int ret;
+-
+- dev_dbg(dev, "DQBUF type=%d, index=%d", buf->type, buf->index);
+- if (ctx->stopping == 1 && num_src_ready == 0) {
+- /* No more src bufs, notify app EOS */
+- notify_eos(ctx);
+- ctx->stopping = 0;
+- mxc_jpeg_set_last_buffer_dequeued(ctx);
+- }
+-
+- ret = v4l2_m2m_dqbuf(file, fh->m2m_ctx, buf);
+- return ret;
+-}
+-
+ static const struct v4l2_ioctl_ops mxc_jpeg_ioctl_ops = {
+ .vidioc_querycap = mxc_jpeg_querycap,
+ .vidioc_enum_fmt_vid_cap = mxc_jpeg_enum_fmt_vid_cap,
+@@ -2007,7 +1985,7 @@ static const struct v4l2_ioctl_ops mxc_jpeg_ioctl_ops = {
+ .vidioc_encoder_cmd = mxc_jpeg_encoder_cmd,
+
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+- .vidioc_dqbuf = mxc_jpeg_dqbuf,
++ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+
+ .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+ .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+@@ -2167,12 +2145,14 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ jpeg->clk_ipg = devm_clk_get(dev, "ipg");
+ if (IS_ERR(jpeg->clk_ipg)) {
+ dev_err(dev, "failed to get clock: ipg\n");
++ ret = PTR_ERR(jpeg->clk_ipg);
+ goto err_clk;
+ }
+
+ jpeg->clk_per = devm_clk_get(dev, "per");
+ if (IS_ERR(jpeg->clk_per)) {
+ dev_err(dev, "failed to get clock: per\n");
++ ret = PTR_ERR(jpeg->clk_per);
+ goto err_clk;
+ }
+
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+index 760eaf5387a1c..1d41cb8ffb6c4 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+@@ -92,8 +92,6 @@ struct mxc_jpeg_ctx {
+ struct mxc_jpeg_q_data cap_q;
+ struct v4l2_fh fh;
+ enum mxc_jpeg_enc_state enc_state;
+- unsigned int stopping;
+- unsigned int stopped;
+ unsigned int slot;
+ unsigned int source_change;
+ bool header_parsed;
+diff --git a/drivers/media/platform/qcom/camss/camss-csid.c b/drivers/media/platform/qcom/camss/camss-csid.c
+index f993f349b66bf..80628801cf09f 100644
+--- a/drivers/media/platform/qcom/camss/camss-csid.c
++++ b/drivers/media/platform/qcom/camss/camss-csid.c
+@@ -666,7 +666,7 @@ int msm_csid_subdev_init(struct camss *camss, struct csid_device *csid,
+ if (csid->num_supplies) {
+ csid->supplies = devm_kmalloc_array(camss->dev,
+ csid->num_supplies,
+- sizeof(csid->supplies),
++ sizeof(*csid->supplies),
+ GFP_KERNEL);
+ if (!csid->supplies)
+ return -ENOMEM;
+diff --git a/drivers/media/platform/renesas/rcar-vin/rcar-core.c b/drivers/media/platform/renesas/rcar-vin/rcar-core.c
+index 49bdcfba010b2..4b7a9743554af 100644
+--- a/drivers/media/platform/renesas/rcar-vin/rcar-core.c
++++ b/drivers/media/platform/renesas/rcar-vin/rcar-core.c
+@@ -1261,7 +1261,7 @@ static const struct rvin_info rcar_info_r8a77980 = {
+ };
+
+ static const struct rvin_group_route rcar_info_r8a77990_routes[] = {
+- { .master = 0, .csi = RVIN_CSI40, .chsel = 0x03 },
++ { .master = 4, .csi = RVIN_CSI40, .chsel = 0x03 },
+ { /* Sentinel */ }
+ };
+
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index 60e57e0f19272..fd7d2a9d0449a 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -409,7 +409,7 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count,
+ struct hdpvr_device *dev = video_drvdata(file);
+ struct hdpvr_buffer *buf = NULL;
+ struct urb *urb;
+- unsigned int ret = 0;
++ int ret = 0;
+ int rem, cnt;
+
+ if (*pos)
+diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
+index c6995718237a4..b16f3ce8e5ef1 100644
+--- a/drivers/media/v4l2-core/v4l2-async.c
++++ b/drivers/media/v4l2-core/v4l2-async.c
+@@ -66,8 +66,10 @@ static bool match_i2c(struct v4l2_async_notifier *notifier,
+ #endif
+ }
+
+-static bool match_fwnode(struct v4l2_async_notifier *notifier,
+- struct v4l2_subdev *sd, struct v4l2_async_subdev *asd)
++static bool
++match_fwnode_one(struct v4l2_async_notifier *notifier,
++ struct v4l2_subdev *sd, struct fwnode_handle *sd_fwnode,
++ struct v4l2_async_subdev *asd)
+ {
+ struct fwnode_handle *other_fwnode;
+ struct fwnode_handle *dev_fwnode;
+@@ -80,15 +82,7 @@ static bool match_fwnode(struct v4l2_async_notifier *notifier,
+ * fwnode or a device fwnode. Start with the simple case of direct
+ * fwnode matching.
+ */
+- if (sd->fwnode == asd->match.fwnode)
+- return true;
+-
+- /*
+- * Check the same situation for any possible secondary assigned to the
+- * subdev's fwnode
+- */
+- if (!IS_ERR_OR_NULL(sd->fwnode->secondary) &&
+- sd->fwnode->secondary == asd->match.fwnode)
++ if (sd_fwnode == asd->match.fwnode)
+ return true;
+
+ /*
+@@ -99,7 +93,7 @@ static bool match_fwnode(struct v4l2_async_notifier *notifier,
+ * ACPI. This won't make a difference, as drivers should not try to
+ * match unconnected endpoints.
+ */
+- sd_fwnode_is_ep = fwnode_graph_is_endpoint(sd->fwnode);
++ sd_fwnode_is_ep = fwnode_graph_is_endpoint(sd_fwnode);
+ asd_fwnode_is_ep = fwnode_graph_is_endpoint(asd->match.fwnode);
+
+ if (sd_fwnode_is_ep == asd_fwnode_is_ep)
+@@ -110,11 +104,11 @@ static bool match_fwnode(struct v4l2_async_notifier *notifier,
+ * parent of the endpoint fwnode, and compare it with the other fwnode.
+ */
+ if (sd_fwnode_is_ep) {
+- dev_fwnode = fwnode_graph_get_port_parent(sd->fwnode);
++ dev_fwnode = fwnode_graph_get_port_parent(sd_fwnode);
+ other_fwnode = asd->match.fwnode;
+ } else {
+ dev_fwnode = fwnode_graph_get_port_parent(asd->match.fwnode);
+- other_fwnode = sd->fwnode;
++ other_fwnode = sd_fwnode;
+ }
+
+ fwnode_handle_put(dev_fwnode);
+@@ -143,6 +137,19 @@ static bool match_fwnode(struct v4l2_async_notifier *notifier,
+ return true;
+ }
+
++static bool match_fwnode(struct v4l2_async_notifier *notifier,
++ struct v4l2_subdev *sd, struct v4l2_async_subdev *asd)
++{
++ if (match_fwnode_one(notifier, sd, sd->fwnode, asd))
++ return true;
++
++ /* Also check the secondary fwnode. */
++ if (IS_ERR_OR_NULL(sd->fwnode->secondary))
++ return false;
++
++ return match_fwnode_one(notifier, sd, sd->fwnode->secondary, asd);
++}
++
+ static LIST_HEAD(subdev_list);
+ static LIST_HEAD(notifier_list);
+ static DEFINE_MUTEX(list_lock);
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index 6469f9a25a4e2..837e1855f94bf 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -925,7 +925,7 @@ static __poll_t v4l2_m2m_poll_for_data(struct file *file,
+ if ((!src_q->streaming || src_q->error ||
+ list_empty(&src_q->queued_list)) &&
+ (!dst_q->streaming || dst_q->error ||
+- list_empty(&dst_q->queued_list)))
++ (list_empty(&dst_q->queued_list) && !dst_q->last_buffer_dequeued)))
+ return EPOLLERR;
+
+ spin_lock_irqsave(&src_q->done_lock, flags);
+diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
+index 3993bdd4b519c..f8fdf88fb240c 100644
+--- a/drivers/memstick/core/ms_block.c
++++ b/drivers/memstick/core/ms_block.c
+@@ -1341,17 +1341,17 @@ static int msb_ftl_initialize(struct msb_data *msb)
+ msb->zone_count = msb->block_count / MS_BLOCKS_IN_ZONE;
+ msb->logical_block_count = msb->zone_count * 496 - 2;
+
+- msb->used_blocks_bitmap = kzalloc(msb->block_count / 8, GFP_KERNEL);
+- msb->erased_blocks_bitmap = kzalloc(msb->block_count / 8, GFP_KERNEL);
++ msb->used_blocks_bitmap = bitmap_zalloc(msb->block_count, GFP_KERNEL);
++ msb->erased_blocks_bitmap = bitmap_zalloc(msb->block_count, GFP_KERNEL);
+ msb->lba_to_pba_table =
+ kmalloc_array(msb->logical_block_count, sizeof(u16),
+ GFP_KERNEL);
+
+ if (!msb->used_blocks_bitmap || !msb->lba_to_pba_table ||
+ !msb->erased_blocks_bitmap) {
+- kfree(msb->used_blocks_bitmap);
++ bitmap_free(msb->used_blocks_bitmap);
++ bitmap_free(msb->erased_blocks_bitmap);
+ kfree(msb->lba_to_pba_table);
+- kfree(msb->erased_blocks_bitmap);
+ return -ENOMEM;
+ }
+
+@@ -1946,7 +1946,8 @@ static DEFINE_MUTEX(msb_disk_lock); /* protects against races in open/release */
+ static void msb_data_clear(struct msb_data *msb)
+ {
+ kfree(msb->boot_page);
+- kfree(msb->used_blocks_bitmap);
++ bitmap_free(msb->used_blocks_bitmap);
++ bitmap_free(msb->erased_blocks_bitmap);
+ kfree(msb->lba_to_pba_table);
+ kfree(msb->cache);
+ msb->card = NULL;
+diff --git a/drivers/mfd/max77620.c b/drivers/mfd/max77620.c
+index fec2096474ad1..a6661e07035ba 100644
+--- a/drivers/mfd/max77620.c
++++ b/drivers/mfd/max77620.c
+@@ -419,9 +419,11 @@ static int max77620_initialise_fps(struct max77620_chip *chip)
+ ret = max77620_config_fps(chip, fps_child);
+ if (ret < 0) {
+ of_node_put(fps_child);
++ of_node_put(fps_np);
+ return ret;
+ }
+ }
++ of_node_put(fps_np);
+
+ config = chip->enable_global_lpm ? MAX77620_ONOFFCNFG2_SLP_LPM_MSK : 0;
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+diff --git a/drivers/mfd/t7l66xb.c b/drivers/mfd/t7l66xb.c
+index 5369c67e3280d..663ffd4b85706 100644
+--- a/drivers/mfd/t7l66xb.c
++++ b/drivers/mfd/t7l66xb.c
+@@ -397,11 +397,8 @@ err_noirq:
+
+ static int t7l66xb_remove(struct platform_device *dev)
+ {
+- struct t7l66xb_platform_data *pdata = dev_get_platdata(&dev->dev);
+ struct t7l66xb *t7l66xb = platform_get_drvdata(dev);
+- int ret;
+
+- ret = pdata->disable(dev);
+ clk_disable_unprepare(t7l66xb->clk48m);
+ clk_put(t7l66xb->clk48m);
+ clk_disable_unprepare(t7l66xb->clk32k);
+@@ -412,8 +409,7 @@ static int t7l66xb_remove(struct platform_device *dev)
+ mfd_remove_devices(&dev->dev);
+ kfree(t7l66xb);
+
+- return ret;
+-
++ return 0;
+ }
+
+ static struct platform_driver t7l66xb_platform_driver = {
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index 2a2619e3c72cc..f001d99bf366b 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -1507,7 +1507,7 @@ static int rtsx_pci_probe(struct pci_dev *pcidev,
+ pcr->remap_addr = ioremap(base, len);
+ if (!pcr->remap_addr) {
+ ret = -ENOMEM;
+- goto free_handle;
++ goto free_idr;
+ }
+
+ pcr->rtsx_resv_buf = dma_alloc_coherent(&(pcidev->dev),
+@@ -1570,6 +1570,10 @@ disable_msi:
+ pcr->rtsx_resv_buf, pcr->rtsx_resv_buf_addr);
+ unmap:
+ iounmap(pcr->remap_addr);
++free_idr:
++ spin_lock(&rtsx_pci_lock);
++ idr_remove(&rtsx_pci_idr, pcr->id);
++ spin_unlock(&rtsx_pci_lock);
+ free_handle:
+ kfree(handle);
+ free_pcr:
+diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
+index b0cff4b152da8..7f430742ce2b8 100644
+--- a/drivers/misc/eeprom/idt_89hpesx.c
++++ b/drivers/misc/eeprom/idt_89hpesx.c
+@@ -909,14 +909,18 @@ static ssize_t idt_dbgfs_csr_write(struct file *filep, const char __user *ubuf,
+ u32 csraddr, csrval;
+ char *buf;
+
++ if (*offp)
++ return 0;
++
+ /* Copy data from User-space */
+ buf = kmalloc(count + 1, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+- ret = simple_write_to_buffer(buf, count, offp, ubuf, count);
+- if (ret < 0)
++ if (copy_from_user(buf, ubuf, count)) {
++ ret = -EFAULT;
+ goto free_buf;
++ }
+ buf[count] = 0;
+
+ /* Find position of colon in the buffer */
+diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
+index 663dd7e589d45..d5e6500f8a1f1 100644
+--- a/drivers/misc/habanalabs/common/memory.c
++++ b/drivers/misc/habanalabs/common/memory.c
+@@ -1245,16 +1245,16 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, u64 *device
+ rc = map_phys_pg_pack(ctx, ret_vaddr, phys_pg_pack);
+ if (rc) {
+ dev_err(hdev->dev, "mapping page pack failed for handle %u\n", handle);
++ mutex_unlock(&ctx->mmu_lock);
+ goto map_err;
+ }
+
+ rc = hl_mmu_invalidate_cache_range(hdev, false, *vm_type | MMU_OP_SKIP_LOW_CACHE_INV,
+ ctx->asid, ret_vaddr, phys_pg_pack->total_size);
++ mutex_unlock(&ctx->mmu_lock);
+ if (rc)
+ goto map_err;
+
+- mutex_unlock(&ctx->mmu_lock);
+-
+ /*
+ * prefetch is done upon user's request. it is performed in WQ as and so can
+ * be outside the MMU lock. the operation itself is already protected by the mmu lock
+@@ -1283,8 +1283,6 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, u64 *device
+ return rc;
+
+ map_err:
+- mutex_unlock(&ctx->mmu_lock);
+-
+ if (add_va_block(hdev, va_range, ret_vaddr,
+ ret_vaddr + phys_pg_pack->total_size - 1))
+ dev_warn(hdev->dev,
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index f4a1281658db0..912a398a9a764 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -176,7 +176,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
+ unsigned int part_type);
+ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ struct mmc_card *card,
+- int disable_multi,
++ int recovery_mode,
+ struct mmc_queue *mq);
+ static void mmc_blk_hsq_req_done(struct mmc_request *mrq);
+
+@@ -1302,7 +1302,7 @@ static void mmc_blk_eval_resp_error(struct mmc_blk_request *brq)
+ }
+
+ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+- int disable_multi, bool *do_rel_wr_p,
++ int recovery_mode, bool *do_rel_wr_p,
+ bool *do_data_tag_p)
+ {
+ struct mmc_blk_data *md = mq->blkdata;
+@@ -1368,12 +1368,12 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+ brq->data.blocks--;
+
+ /*
+- * After a read error, we redo the request one sector
++ * After a read error, we redo the request one (native) sector
+ * at a time in order to accurately determine which
+ * sectors can be read successfully.
+ */
+- if (disable_multi)
+- brq->data.blocks = 1;
++ if (recovery_mode)
++ brq->data.blocks = queue_physical_block_size(mq->queue) >> 9;
+
+ /*
+ * Some controllers have HW issues while operating
+@@ -1590,7 +1590,7 @@ static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+
+ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ struct mmc_card *card,
+- int disable_multi,
++ int recovery_mode,
+ struct mmc_queue *mq)
+ {
+ u32 readcmd, writecmd;
+@@ -1599,7 +1599,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ struct mmc_blk_data *md = mq->blkdata;
+ bool do_rel_wr, do_data_tag;
+
+- mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
++ mmc_blk_data_prep(mq, mqrq, recovery_mode, &do_rel_wr, &do_data_tag);
+
+ brq->mrq.cmd = &brq->cmd;
+
+@@ -1690,7 +1690,7 @@ static int mmc_blk_fix_state(struct mmc_card *card, struct request *req)
+
+ #define MMC_READ_SINGLE_RETRIES 2
+
+-/* Single sector read during recovery */
++/* Single (native) sector read during recovery */
+ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ {
+ struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+@@ -1698,6 +1698,7 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ struct mmc_card *card = mq->card;
+ struct mmc_host *host = card->host;
+ blk_status_t error = BLK_STS_OK;
++ size_t bytes_per_read = queue_physical_block_size(mq->queue);
+
+ do {
+ u32 status;
+@@ -1732,13 +1733,13 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ else
+ error = BLK_STS_OK;
+
+- } while (blk_update_request(req, error, 512));
++ } while (blk_update_request(req, error, bytes_per_read));
+
+ return;
+
+ error_exit:
+ mrq->data->bytes_xfered = 0;
+- blk_update_request(req, BLK_STS_IOERR, 512);
++ blk_update_request(req, BLK_STS_IOERR, bytes_per_read);
+ /* Let it try the remaining request again */
+ if (mqrq->retries > MMC_MAX_RETRIES - 1)
+ mqrq->retries = MMC_MAX_RETRIES - 1;
+@@ -1879,10 +1880,9 @@ static void mmc_blk_mq_rw_recovery(struct mmc_queue *mq, struct request *req)
+ return;
+ }
+
+- /* FIXME: Missing single sector read for large sector size */
+- if (!mmc_large_sector(card) && rq_data_dir(req) == READ &&
+- brq->data.blocks > 1) {
+- /* Read one sector at a time */
++ if (rq_data_dir(req) == READ && brq->data.blocks >
++ queue_physical_block_size(mq->queue) >> 9) {
++ /* Read one (native) sector at a time */
+ mmc_blk_read_single(mq, req);
+ return;
+ }
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index f879dc63d9364..be43939880868 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -163,8 +163,10 @@ static inline bool mmc_fixup_of_compatible_match(struct mmc_card *card,
+ struct device_node *np;
+
+ for_each_child_of_node(mmc_dev(card->host)->of_node, np) {
+- if (of_device_is_compatible(np, compatible))
++ if (of_device_is_compatible(np, compatible)) {
++ of_node_put(np);
+ return true;
++ }
+ }
+
+ return false;
+diff --git a/drivers/mmc/host/cavium-octeon.c b/drivers/mmc/host/cavium-octeon.c
+index 2c4b2df52adb1..12dca91a8ef61 100644
+--- a/drivers/mmc/host/cavium-octeon.c
++++ b/drivers/mmc/host/cavium-octeon.c
+@@ -277,6 +277,7 @@ static int octeon_mmc_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "Error populating slots\n");
+ octeon_mmc_set_shared_power(host, 0);
++ of_node_put(cn);
+ goto error;
+ }
+ i++;
+diff --git a/drivers/mmc/host/cavium-thunderx.c b/drivers/mmc/host/cavium-thunderx.c
+index 76013bbbcff30..202b1d6da678c 100644
+--- a/drivers/mmc/host/cavium-thunderx.c
++++ b/drivers/mmc/host/cavium-thunderx.c
+@@ -142,8 +142,10 @@ static int thunder_mmc_probe(struct pci_dev *pdev,
+ continue;
+
+ ret = cvm_mmc_of_slot_probe(&host->slot_pdev[i]->dev, host);
+- if (ret)
++ if (ret) {
++ of_node_put(child_node);
+ goto error;
++ }
+ }
+ i++;
+ }
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index de04b5afef2e8..613f13306433e 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -1025,7 +1025,7 @@ static int mxcmci_probe(struct platform_device *pdev)
+ mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count;
+ mmc->max_seg_size = mmc->max_req_size;
+
+- host->devtype = (enum mxcmci_type)of_device_get_match_data(&pdev->dev);
++ host->devtype = (uintptr_t)of_device_get_match_data(&pdev->dev);
+
+ /* adjust max_segs after devtype detection */
+ if (!is_mpc512x_mmc(host))
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 4404ca1f98d80..0d258b6e1a436 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -938,6 +938,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ if (IS_ERR(priv->clk_cd))
+ return dev_err_probe(&pdev->dev, PTR_ERR(priv->clk_cd), "cannot get cd clock");
+
++ priv->rstc = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
++ if (IS_ERR(priv->rstc))
++ return PTR_ERR(priv->rstc);
++
+ priv->pinctrl = devm_pinctrl_get(&pdev->dev);
+ if (!IS_ERR(priv->pinctrl)) {
+ priv->pins_default = pinctrl_lookup_state(priv->pinctrl,
+@@ -1030,10 +1034,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ if (ret)
+ goto efree;
+
+- priv->rstc = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
+- if (IS_ERR(priv->rstc))
+- return PTR_ERR(priv->rstc);
+-
+ ver = sd_ctrl_read16(host, CTL_VERSION);
+ /* GEN2_SDR104 is first known SDHI to use 32bit block count */
+ if (ver < SDHI_VER_GEN2_SDR104 && mmc_data->max_blk_count > U16_MAX)
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index 10fb4cb2c731e..cd0134580a901 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -100,8 +100,13 @@ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ static void sdhci_at91_set_uhs_signaling(struct sdhci_host *host,
+ unsigned int timing)
+ {
+- if (timing == MMC_TIMING_MMC_DDR52)
+- sdhci_writeb(host, SDMMC_MC1R_DDR, SDMMC_MC1R);
++ u8 mc1r;
++
++ if (timing == MMC_TIMING_MMC_DDR52) {
++ mc1r = sdhci_readb(host, SDMMC_MC1R);
++ mc1r |= SDMMC_MC1R_DDR;
++ sdhci_writeb(host, mc1r, SDMMC_MC1R);
++ }
+ sdhci_set_uhs_signaling(host, timing);
+ }
+
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index d9dc41143bb35..8b3d8119f3880 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -904,6 +904,7 @@ static int esdhc_signal_voltage_switch(struct mmc_host *mmc,
+ scfg_node = of_find_matching_node(NULL, scfg_device_ids);
+ if (scfg_node)
+ scfg_base = of_iomap(scfg_node, 0);
++ of_node_put(scfg_node);
+ if (scfg_base) {
+ sdhciovselcr = SDHCIOVSELCR_TGLEN |
+ SDHCIOVSELCR_VSELVAL;
+diff --git a/drivers/mtd/devices/mtd_dataflash.c b/drivers/mtd/devices/mtd_dataflash.c
+index 134e273285974..25bad43183052 100644
+--- a/drivers/mtd/devices/mtd_dataflash.c
++++ b/drivers/mtd/devices/mtd_dataflash.c
+@@ -112,6 +112,13 @@ static const struct of_device_id dataflash_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, dataflash_dt_ids);
+ #endif
+
++static const struct spi_device_id dataflash_spi_ids[] = {
++ { .name = "at45", },
++ { .name = "dataflash", },
++ { /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(spi, dataflash_spi_ids);
++
+ /* ......................................................................... */
+
+ /*
+@@ -936,6 +943,7 @@ static struct spi_driver dataflash_driver = {
+
+ .probe = dataflash_probe,
+ .remove = dataflash_remove,
++ .id_table = dataflash_spi_ids,
+
+ /* FIXME: investigate suspend and resume... */
+ };
+diff --git a/drivers/mtd/devices/spear_smi.c b/drivers/mtd/devices/spear_smi.c
+index 24073518587fe..f58742486d3de 100644
+--- a/drivers/mtd/devices/spear_smi.c
++++ b/drivers/mtd/devices/spear_smi.c
+@@ -1045,13 +1045,9 @@ static int spear_smi_remove(struct platform_device *pdev)
+ {
+ struct spear_smi *dev;
+ struct spear_snor_flash *flash;
+- int ret, i;
++ int i;
+
+ dev = platform_get_drvdata(pdev);
+- if (!dev) {
+- dev_err(&pdev->dev, "dev is null\n");
+- return -ENODEV;
+- }
+
+ /* clean up for all nor flash */
+ for (i = 0; i < dev->num_flashes; i++) {
+@@ -1060,9 +1056,7 @@ static int spear_smi_remove(struct platform_device *pdev)
+ continue;
+
+ /* clean up mtd stuff */
+- ret = mtd_device_unregister(&flash->mtd);
+- if (ret)
+- dev_err(&pdev->dev, "error removing mtd\n");
++ WARN_ON(mtd_device_unregister(&flash->mtd));
+ }
+
+ clk_disable_unprepare(dev->clk);
+diff --git a/drivers/mtd/devices/st_spi_fsm.c b/drivers/mtd/devices/st_spi_fsm.c
+index d3377b10fc0f6..9f6d4dd8bade3 100644
+--- a/drivers/mtd/devices/st_spi_fsm.c
++++ b/drivers/mtd/devices/st_spi_fsm.c
+@@ -2115,10 +2115,12 @@ static int stfsm_probe(struct platform_device *pdev)
+ (long long)fsm->mtd.size, (long long)(fsm->mtd.size >> 20),
+ fsm->mtd.erasesize, (fsm->mtd.erasesize >> 10));
+
+- return mtd_device_register(&fsm->mtd, NULL, 0);
+-
++ ret = mtd_device_register(&fsm->mtd, NULL, 0);
++ if (ret) {
+ err_clk_unprepare:
+- clk_disable_unprepare(fsm->clk);
++ clk_disable_unprepare(fsm->clk);
++ }
++
+ return ret;
+ }
+
+@@ -2126,9 +2128,11 @@ static int stfsm_remove(struct platform_device *pdev)
+ {
+ struct stfsm *fsm = platform_get_drvdata(pdev);
+
++ WARN_ON(mtd_device_unregister(&fsm->mtd));
++
+ clk_disable_unprepare(fsm->clk);
+
+- return mtd_device_unregister(&fsm->mtd);
++ return 0;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index 6e08ec1d4f098..b70d259e48a7c 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -134,7 +134,7 @@ static int rpcif_hb_probe(struct platform_device *pdev)
+
+ error = rpcif_hw_init(&hyperbus->rpc, true);
+ if (error)
+- return error;
++ goto out_disable_rpm;
+
+ hyperbus->hbdev.map.size = hyperbus->rpc.size;
+ hyperbus->hbdev.map.virt = hyperbus->rpc.dirmap;
+@@ -145,8 +145,12 @@ static int rpcif_hb_probe(struct platform_device *pdev)
+ hyperbus->hbdev.np = of_get_next_child(pdev->dev.parent->of_node, NULL);
+ error = hyperbus_register_device(&hyperbus->hbdev);
+ if (error)
+- rpcif_disable_rpm(&hyperbus->rpc);
++ goto out_disable_rpm;
++
++ return 0;
+
++out_disable_rpm:
++ rpcif_disable_rpm(&hyperbus->rpc);
+ return error;
+ }
+
+diff --git a/drivers/mtd/maps/physmap-versatile.c b/drivers/mtd/maps/physmap-versatile.c
+index ad7cd9cfaee04..a1b8b7b25f88b 100644
+--- a/drivers/mtd/maps/physmap-versatile.c
++++ b/drivers/mtd/maps/physmap-versatile.c
+@@ -93,6 +93,7 @@ static int ap_flash_init(struct platform_device *pdev)
+ return -ENODEV;
+ }
+ ebi_base = of_iomap(ebi, 0);
++ of_node_put(ebi);
+ if (!ebi_base)
+ return -ENODEV;
+
+@@ -207,6 +208,7 @@ int of_flash_probe_versatile(struct platform_device *pdev,
+
+ versatile_flashprot = (enum versatile_flashprot)devid->data;
+ rmap = syscon_node_to_regmap(sysnp);
++ of_node_put(sysnp);
+ if (IS_ERR(rmap))
+ return PTR_ERR(rmap);
+
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 53bd10738418b..296fb16c8dc3c 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -347,17 +347,17 @@ static int anfc_select_target(struct nand_chip *chip, int target)
+
+ /* Update clock frequency */
+ if (nfc->cur_clk != anand->clk) {
+- clk_disable_unprepare(nfc->controller_clk);
+- ret = clk_set_rate(nfc->controller_clk, anand->clk);
++ clk_disable_unprepare(nfc->bus_clk);
++ ret = clk_set_rate(nfc->bus_clk, anand->clk);
+ if (ret) {
+ dev_err(nfc->dev, "Failed to change clock rate\n");
+ return ret;
+ }
+
+- ret = clk_prepare_enable(nfc->controller_clk);
++ ret = clk_prepare_enable(nfc->bus_clk);
+ if (ret) {
+ dev_err(nfc->dev,
+- "Failed to re-enable the controller clock\n");
++ "Failed to re-enable the bus clock\n");
+ return ret;
+ }
+
+@@ -1043,7 +1043,13 @@ static int anfc_setup_interface(struct nand_chip *chip, int target,
+ DQS_BUFF_SEL_OUT(dqs_mode);
+ }
+
+- anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
++ if (nand_interface_is_sdr(conf)) {
++ anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
++ } else {
++ /* ONFI timings are defined in picoseconds */
++ anand->clk = div_u64((u64)NSEC_PER_SEC * 1000,
++ conf->timings.nvddr.tCK_min);
++ }
+
+ /*
+ * Due to a hardware bug in the ZynqMP SoC, SDR timing modes 0-1 work
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index ac3be92872d06..0321801833393 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -1307,7 +1307,6 @@ static int meson_nfc_nand_chip_cleanup(struct meson_nfc *nfc)
+ if (ret)
+ return ret;
+
+- meson_nfc_free_buffer(&meson_chip->nand);
+ nand_cleanup(&meson_chip->nand);
+ list_del(&meson_chip->node);
+ }
+diff --git a/drivers/mtd/parsers/ofpart_bcm4908.c b/drivers/mtd/parsers/ofpart_bcm4908.c
+index 0eddef4c198ec..bb072a0940e48 100644
+--- a/drivers/mtd/parsers/ofpart_bcm4908.c
++++ b/drivers/mtd/parsers/ofpart_bcm4908.c
+@@ -35,12 +35,15 @@ static long long bcm4908_partitions_fw_offset(void)
+ err = kstrtoul(s + len + 1, 0, &offset);
+ if (err) {
+ pr_err("failed to parse %s\n", s + len + 1);
++ of_node_put(root);
+ return err;
+ }
+
++ of_node_put(root);
+ return offset << 10;
+ }
+
++ of_node_put(root);
+ return -ENOENT;
+ }
+
+diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
+index feb44a573d447..a16b42a885816 100644
+--- a/drivers/mtd/parsers/redboot.c
++++ b/drivers/mtd/parsers/redboot.c
+@@ -58,6 +58,7 @@ static void parse_redboot_of(struct mtd_info *master)
+ return;
+
+ ret = of_property_read_u32(npart, "fis-index-block", &dirblock);
++ of_node_put(npart);
+ if (ret)
+ return;
+
+diff --git a/drivers/mtd/sm_ftl.c b/drivers/mtd/sm_ftl.c
+index 0cff2cda1b5a0..7f955fade8383 100644
+--- a/drivers/mtd/sm_ftl.c
++++ b/drivers/mtd/sm_ftl.c
+@@ -1111,9 +1111,9 @@ static void sm_release(struct mtd_blktrans_dev *dev)
+ {
+ struct sm_ftl *ftl = dev->priv;
+
+- mutex_lock(&ftl->mutex);
+ del_timer_sync(&ftl->timer);
+ cancel_work_sync(&ftl->flush_work);
++ mutex_lock(&ftl->mutex);
+ sm_cache_flush(ftl);
+ mutex_unlock(&ftl->mutex);
+ }
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 502967c76c5f3..e758ebfe1a9f6 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -177,7 +177,7 @@ int spi_nor_controller_ops_write_reg(struct spi_nor *nor, u8 opcode,
+
+ static int spi_nor_controller_ops_erase(struct spi_nor *nor, loff_t offs)
+ {
+- if (spi_nor_protocol_is_dtr(nor->write_proto))
++ if (spi_nor_protocol_is_dtr(nor->reg_proto))
+ return -EOPNOTSUPP;
+
+ return nor->controller_ops->erase(nor, offs);
+@@ -972,7 +972,7 @@ static int spi_nor_erase_chip(struct spi_nor *nor)
+ if (nor->spimem) {
+ struct spi_mem_op op = SPI_NOR_CHIP_ERASE_OP;
+
+- spi_nor_spimem_setup_op(nor, &op, nor->write_proto);
++ spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
+ ret = spi_mem_exec_op(nor->spimem, &op);
+ } else {
+@@ -1115,7 +1115,7 @@ int spi_nor_erase_sector(struct spi_nor *nor, u32 addr)
+ SPI_NOR_SECTOR_ERASE_OP(nor->erase_opcode,
+ nor->addr_width, addr);
+
+- spi_nor_spimem_setup_op(nor, &op, nor->write_proto);
++ spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
+ return spi_mem_exec_op(nor->spimem, &op);
+ } else if (nor->controller_ops->erase) {
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index 7633d98e39121..037824011266e 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -176,7 +176,8 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ * directly via do_set_bitrate(). Bail out if neither
+ * is given.
+ */
+- if (!priv->bittiming_const && !priv->do_set_bittiming)
++ if (!priv->bittiming_const && !priv->do_set_bittiming &&
++ !priv->bitrate_const)
+ return -EOPNOTSUPP;
+
+ memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+@@ -278,7 +279,8 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ * directly via do_set_bitrate(). Bail out if neither
+ * is given.
+ */
+- if (!priv->data_bittiming_const && !priv->do_set_data_bittiming)
++ if (!priv->data_bittiming_const && !priv->do_set_data_bittiming &&
++ !priv->data_bitrate_const)
+ return -EOPNOTSUPP;
+
+ memcpy(&dbt, nla_data(data[IFLA_CAN_DATA_BITTIMING]),
+diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c
+index fde3ac516d264..f1afab4f8a273 100644
+--- a/drivers/net/can/pch_can.c
++++ b/drivers/net/can/pch_can.c
+@@ -489,6 +489,7 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ if (!skb)
+ return;
+
++ errc = ioread32(&priv->regs->errc);
+ if (status & PCH_BUS_OFF) {
+ pch_can_set_tx_all(priv, 0);
+ pch_can_set_rx_all(priv, 0);
+@@ -496,9 +497,11 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ cf->can_id |= CAN_ERR_BUSOFF;
+ priv->can.can_stats.bus_off++;
+ can_bus_off(ndev);
++ } else {
++ cf->data[6] = errc & PCH_TEC;
++ cf->data[7] = (errc & PCH_REC) >> 8;
+ }
+
+- errc = ioread32(&priv->regs->errc);
+ /* Warning interrupt. */
+ if (status & PCH_EWARN) {
+ state = CAN_STATE_ERROR_WARNING;
+@@ -556,9 +559,6 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ break;
+ }
+
+- cf->data[6] = errc & PCH_TEC;
+- cf->data[7] = (errc & PCH_REC) >> 8;
+-
+ priv->can.state = state;
+ netif_receive_skb(skb);
+ }
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index d45762f1cf6bc..24d7a71def6a0 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -232,11 +232,8 @@ static void rcar_can_error(struct net_device *ndev)
+ if (eifr & (RCAR_CAN_EIFR_EWIF | RCAR_CAN_EIFR_EPIF)) {
+ txerr = readb(&priv->regs->tecr);
+ rxerr = readb(&priv->regs->recr);
+- if (skb) {
++ if (skb)
+ cf->can_id |= CAN_ERR_CRTL;
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+- }
+ }
+ if (eifr & RCAR_CAN_EIFR_BEIF) {
+ int rx_errors = 0, tx_errors = 0;
+@@ -336,6 +333,9 @@ static void rcar_can_error(struct net_device *ndev)
+ can_bus_off(ndev);
+ if (skb)
+ cf->can_id |= CAN_ERR_BUSOFF;
++ } else if (skb) {
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
+ }
+ if (eifr & RCAR_CAN_EIFR_ORIF) {
+ netdev_dbg(priv->ndev, "Receive overrun error interrupt\n");
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index 2e7638f98cf1b..84adf8b5945e7 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -402,9 +402,6 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ txerr = priv->read_reg(priv, SJA1000_TXERR);
+ rxerr = priv->read_reg(priv, SJA1000_RXERR);
+
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+-
+ if (isrc & IRQ_DOI) {
+ /* data overrun interrupt */
+ netdev_dbg(dev, "data overrun interrupt\n");
+@@ -426,6 +423,10 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ else
+ state = CAN_STATE_ERROR_ACTIVE;
+ }
++ if (state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
++ }
+ if (isrc & IRQ_BEI) {
+ /* bus error interrupt */
+ priv->can.can_stats.bus_error++;
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index ebc4ebb44c980..bfb7c4bb5bc32 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -667,8 +667,6 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+
+ txerr = hi3110_read(spi, HI3110_READ_TEC);
+ rxerr = hi3110_read(spi, HI3110_READ_REC);
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+ tx_state = txerr >= rxerr ? new_state : 0;
+ rx_state = txerr <= rxerr ? new_state : 0;
+ can_change_state(net, cf, tx_state, rx_state);
+@@ -681,6 +679,9 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ hi3110_hw_sleep(spi);
+ break;
+ }
++ } else {
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
+ }
+ }
+
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 155b90f6c767c..afe9b541f0376 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -535,11 +535,6 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ rxerr = (errc >> 16) & 0xFF;
+ txerr = errc & 0xFF;
+
+- if (skb) {
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+- }
+-
+ if (isrc & SUN4I_INT_DATA_OR) {
+ /* data overrun interrupt */
+ netdev_dbg(dev, "data overrun interrupt\n");
+@@ -570,6 +565,10 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ else
+ state = CAN_STATE_ERROR_ACTIVE;
+ }
++ if (skb && state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
++ }
+ if (isrc & SUN4I_INT_BUS_ERR) {
+ /* bus error interrupt */
+ netdev_dbg(dev, "bus error interrupt\n");
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 5d70844ac0300..404093468b2f1 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -917,8 +917,10 @@ static void kvaser_usb_hydra_update_state(struct kvaser_usb_net_priv *priv,
+ new_state < CAN_STATE_BUS_OFF)
+ priv->can.can_stats.restarts++;
+
+- cf->data[6] = bec->txerr;
+- cf->data[7] = bec->rxerr;
++ if (new_state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = bec->txerr;
++ cf->data[7] = bec->rxerr;
++ }
+
+ netif_rx(skb);
+ }
+@@ -1069,8 +1071,10 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
+ shhwtstamps->hwtstamp = hwtstamp;
+
+ cf->can_id |= CAN_ERR_BUSERROR;
+- cf->data[6] = bec.txerr;
+- cf->data[7] = bec.rxerr;
++ if (new_state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = bec.txerr;
++ cf->data[7] = bec.rxerr;
++ }
+
+ netif_rx(skb);
+
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index cc809ecd1e622..f551fde16a709 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -853,8 +853,10 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ break;
+ }
+
+- cf->data[6] = es->txerr;
+- cf->data[7] = es->rxerr;
++ if (new_state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = es->txerr;
++ cf->data[7] = es->rxerr;
++ }
+
+ netif_rx(skb);
+ }
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index f3363575bf32c..4d38dc90472a8 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -438,9 +438,10 @@ static void usb_8dev_rx_err_msg(struct usb_8dev_priv *priv,
+
+ if (rx_errors)
+ stats->rx_errors++;
+-
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
++ if (priv->can.state != CAN_STATE_BUS_OFF) {
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
++ }
+
+ priv->bec.txerr = txerr;
+ priv->bec.rxerr = rxerr;
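The seven CAN driver hunks above (rcar_can, sja1000, hi311x, sun4i_can, both kvaser_usb backends, and usb_8dev) apply one and the same rule: once the controller has gone bus-off, its TX/RX error counter registers no longer carry meaningful values, so the generated error frame must not report them in data[6]/data[7]. A minimal sketch of the shared pattern (the helper name is illustrative, not from any of these drivers):

#include <linux/can/dev.h>

/* Report the raw bus error counters in an error frame only while the
 * controller is still on the bus; after bus-off they are meaningless. */
static void can_frame_set_berr_counters(struct can_frame *cf,
					enum can_state state,
					u8 txerr, u8 rxerr)
{
	if (!cf || state == CAN_STATE_BUS_OFF)
		return;

	cf->data[6] = txerr;	/* TX error counter */
	cf->data[7] = rxerr;	/* RX error counter */
}

The usb_8dev variant checks priv->can.state instead of a local state variable, but the effect is the same.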
+diff --git a/drivers/net/dsa/ocelot/Kconfig b/drivers/net/dsa/ocelot/Kconfig
+index 220b0b027b555..08db9cf768180 100644
+--- a/drivers/net/dsa/ocelot/Kconfig
++++ b/drivers/net/dsa/ocelot/Kconfig
+@@ -6,6 +6,7 @@ config NET_DSA_MSCC_FELIX
+ depends on NET_VENDOR_FREESCALE
+ depends on HAS_IOMEM
+ depends on PTP_1588_CLOCK_OPTIONAL
++ depends on NET_SCH_TAPRIO || NET_SCH_TAPRIO=n
+ select MSCC_OCELOT_SWITCH_LIB
+ select NET_DSA_TAG_OCELOT_8021Q
+ select NET_DSA_TAG_OCELOT
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 3e07dc39007a5..859196898a7d0 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -1553,9 +1553,18 @@ static void felix_txtstamp(struct dsa_switch *ds, int port,
+ static int felix_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ {
+ struct ocelot *ocelot = ds->priv;
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ struct felix *felix = ocelot_to_felix(ocelot);
+
+ ocelot_port_set_maxlen(ocelot, port, new_mtu);
+
++ mutex_lock(&ocelot->tas_lock);
++
++ if (ocelot_port->taprio && felix->info->tas_guard_bands_update)
++ felix->info->tas_guard_bands_update(ocelot, port);
++
++ mutex_unlock(&ocelot->tas_lock);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/dsa/ocelot/felix.h b/drivers/net/dsa/ocelot/felix.h
+index 9e07eb7ee28de..deb8dde1fc19d 100644
+--- a/drivers/net/dsa/ocelot/felix.h
++++ b/drivers/net/dsa/ocelot/felix.h
+@@ -53,6 +53,7 @@ struct felix_info {
+ struct phylink_link_state *state);
+ int (*port_setup_tc)(struct dsa_switch *ds, int port,
+ enum tc_setup_type type, void *type_data);
++ void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
+ void (*port_sched_speed_set)(struct ocelot *ocelot, int port,
+ u32 speed);
+ struct regmap *(*init_regmap)(struct ocelot *ocelot,
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 9c27b9b0128db..d0920f5a8f04f 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1127,9 +1127,212 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ mdiobus_free(felix->imdio);
+ }
+
++/* Extract shortest continuous gate open intervals in ns for each traffic class
++ * of a cyclic tc-taprio schedule. If a gate is always open, the duration is
++ * considered U64_MAX. If the gate is always closed, it is considered 0.
++ */
++static void vsc9959_tas_min_gate_lengths(struct tc_taprio_qopt_offload *taprio,
++ u64 min_gate_len[OCELOT_NUM_TC])
++{
++ struct tc_taprio_sched_entry *entry;
++ u64 gate_len[OCELOT_NUM_TC];
++ u8 gates_ever_opened = 0;
++ int tc, i, n;
++
++ /* Initialize arrays */
++ for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
++ min_gate_len[tc] = U64_MAX;
++ gate_len[tc] = 0;
++ }
++
++ /* If we don't have taprio, consider all gates as permanently open */
++ if (!taprio)
++ return;
++
++ n = taprio->num_entries;
++
++ /* Walk through the gate list twice to determine the length
++ * of consecutively open gates for a traffic class, including
++ * open gates that wrap around. We are just interested in the
++ * minimum window size, and this doesn't change what the
++ * minimum is (if the gate never closes, min_gate_len will
++ * remain U64_MAX).
++ */
++ for (i = 0; i < 2 * n; i++) {
++ entry = &taprio->entries[i % n];
++
++ for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
++ if (entry->gate_mask & BIT(tc)) {
++ gate_len[tc] += entry->interval;
++ gates_ever_opened |= BIT(tc);
++ } else {
++ /* Gate closes now, record a potential new
++ * minimum and reinitialize length
++ */
++ if (min_gate_len[tc] > gate_len[tc] &&
++ gate_len[tc])
++ min_gate_len[tc] = gate_len[tc];
++ gate_len[tc] = 0;
++ }
++ }
++ }
++
++ /* min_gate_len[tc] actually tracks minimum *open* gate time, so for
++ * permanently closed gates, min_gate_len[tc] will still be U64_MAX.
++ * Therefore they are currently indistinguishable from permanently
++ * open gates. Overwrite the gate len with 0 when we know they're
++ * actually permanently closed, i.e. after the loop above.
++ */
++ for (tc = 0; tc < OCELOT_NUM_TC; tc++)
++ if (!(gates_ever_opened & BIT(tc)))
++ min_gate_len[tc] = 0;
++}
++
++/* Update QSYS_PORT_MAX_SDU to make sure the static guard bands added by the
++ * switch (see the ALWAYS_GUARD_BAND_SCH_Q comment) are correct at all MTU
++ * values (the default value is 1518). Also, for traffic class windows smaller
++ * than one MTU sized frame, update QSYS_QMAXSDU_CFG to enable oversized frame
++ * dropping, such that these won't hang the port, as they will never be sent.
++ */
++static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
++{
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ u64 min_gate_len[OCELOT_NUM_TC];
++ int speed, picos_per_byte;
++ u64 needed_bit_time_ps;
++ u32 val, maxlen;
++ u8 tas_speed;
++ int tc;
++
++ lockdep_assert_held(&ocelot->tas_lock);
++
++ val = ocelot_read_rix(ocelot, QSYS_TAG_CONFIG, port);
++ tas_speed = QSYS_TAG_CONFIG_LINK_SPEED_X(val);
++
++ switch (tas_speed) {
++ case OCELOT_SPEED_10:
++ speed = SPEED_10;
++ break;
++ case OCELOT_SPEED_100:
++ speed = SPEED_100;
++ break;
++ case OCELOT_SPEED_1000:
++ speed = SPEED_1000;
++ break;
++ case OCELOT_SPEED_2500:
++ speed = SPEED_2500;
++ break;
++ default:
++ return;
++ }
++
++ picos_per_byte = (USEC_PER_SEC * 8) / speed;
++
++ val = ocelot_port_readl(ocelot_port, DEV_MAC_MAXLEN_CFG);
++ /* MAXLEN_CFG accounts automatically for VLAN. We need to include it
++ * manually in the bit time calculation, plus the preamble and SFD.
++ */
++ maxlen = val + 2 * VLAN_HLEN;
++ /* Consider the standard Ethernet overhead of 8 octets preamble+SFD,
++ * 4 octets FCS, 12 octets IFG.
++ */
++ needed_bit_time_ps = (maxlen + 24) * picos_per_byte;
++
++ dev_dbg(ocelot->dev,
++ "port %d: max frame size %d needs %llu ps at speed %d\n",
++ port, maxlen, needed_bit_time_ps, speed);
++
++ vsc9959_tas_min_gate_lengths(ocelot_port->taprio, min_gate_len);
++
++ for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
++ u32 max_sdu;
++
++ if (min_gate_len[tc] == U64_MAX /* Gate always open */ ||
++ min_gate_len[tc] * 1000 > needed_bit_time_ps) {
++ /* Setting QMAXSDU_CFG to 0 disables oversized frame
++ * dropping.
++ */
++ max_sdu = 0;
++ dev_dbg(ocelot->dev,
++ "port %d tc %d min gate len %llu"
++ ", sending all frames\n",
++ port, tc, min_gate_len[tc]);
++ } else {
++ /* If traffic class doesn't support a full MTU sized
++ * frame, make sure to enable oversize frame dropping
++ * for frames larger than the smallest that would fit.
++ */
++ max_sdu = div_u64(min_gate_len[tc] * 1000,
++ picos_per_byte);
++ /* A TC gate may be completely closed, which is a
++ * special case where all packets are oversized.
++			 * Any limit smaller than 64 octets accomplishes this.
++ */
++ if (!max_sdu)
++ max_sdu = 1;
++ /* Take L1 overhead into account, but just don't allow
++ * max_sdu to go negative or to 0. Here we use 20
++ * because QSYS_MAXSDU_CFG_* already counts the 4 FCS
++ * octets as part of packet size.
++ */
++ if (max_sdu > 20)
++ max_sdu -= 20;
++ dev_info(ocelot->dev,
++ "port %d tc %d min gate length %llu"
++ " ns not enough for max frame size %d at %d"
++ " Mbps, dropping frames over %d"
++ " octets including FCS\n",
++ port, tc, min_gate_len[tc], maxlen, speed,
++ max_sdu);
++ }
++
++ /* ocelot_write_rix is a macro that concatenates
++ * QSYS_MAXSDU_CFG_* with _RSZ, so we need to spell out
++ * the writes to each traffic class
++ */
++ switch (tc) {
++ case 0:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0,
++ port);
++ break;
++ case 1:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1,
++ port);
++ break;
++ case 2:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2,
++ port);
++ break;
++ case 3:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3,
++ port);
++ break;
++ case 4:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4,
++ port);
++ break;
++ case 5:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5,
++ port);
++ break;
++ case 6:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6,
++ port);
++ break;
++ case 7:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7,
++ port);
++ break;
++ }
++ }
++
++ ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
++}
++
+ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+ u32 speed)
+ {
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ u8 tas_speed;
+
+ switch (speed) {
+@@ -1154,6 +1357,13 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+ QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
+ QSYS_TAG_CONFIG_LINK_SPEED_M,
+ QSYS_TAG_CONFIG, port);
++
++ mutex_lock(&ocelot->tas_lock);
++
++ if (ocelot_port->taprio)
++ vsc9959_tas_guard_bands_update(ocelot, port);
++
++ mutex_unlock(&ocelot->tas_lock);
+ }
+
+ static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time,
+@@ -1196,10 +1406,13 @@ static void vsc9959_tas_gcl_set(struct ocelot *ocelot, const u32 gcl_ix,
+ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
+ struct tc_taprio_qopt_offload *taprio)
+ {
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ struct timespec64 base_ts;
+ int ret, i;
+ u32 val;
+
++ mutex_lock(&ocelot->tas_lock);
++
+ if (!taprio->enable) {
+ ocelot_rmw_rix(ocelot,
+ QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF),
+@@ -1207,15 +1420,25 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
+ QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
+ QSYS_TAG_CONFIG, port);
+
++ taprio_offload_free(ocelot_port->taprio);
++ ocelot_port->taprio = NULL;
++
++ vsc9959_tas_guard_bands_update(ocelot, port);
++
++ mutex_unlock(&ocelot->tas_lock);
+ return 0;
+ }
+
+ if (taprio->cycle_time > NSEC_PER_SEC ||
+- taprio->cycle_time_extension >= NSEC_PER_SEC)
+- return -EINVAL;
++ taprio->cycle_time_extension >= NSEC_PER_SEC) {
++ ret = -EINVAL;
++ goto err;
++ }
+
+- if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
+- return -ERANGE;
++ if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX) {
++ ret = -ERANGE;
++ goto err;
++ }
+
+ /* Enable guard band. The switch will schedule frames without taking
+ * their length into account. Thus we'll always need to enable the
+@@ -1236,8 +1459,10 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
+ * config is pending, need reset the TAS module
+ */
+ val = ocelot_read(ocelot, QSYS_PARAM_STATUS_REG_8);
+- if (val & QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING)
+- return -EBUSY;
++ if (val & QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING) {
++ ret = -EBUSY;
++ goto err;
++ }
+
+ ocelot_rmw_rix(ocelot,
+ QSYS_TAG_CONFIG_ENABLE |
+@@ -1270,10 +1495,71 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
+ ret = readx_poll_timeout(vsc9959_tas_read_cfg_status, ocelot, val,
+ !(val & QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE),
+ 10, 100000);
++ if (ret)
++ goto err;
++
++ ocelot_port->taprio = taprio_offload_get(taprio);
++ vsc9959_tas_guard_bands_update(ocelot, port);
++
++err:
++ mutex_unlock(&ocelot->tas_lock);
+
+ return ret;
+ }
+
++static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
++{
++ struct tc_taprio_qopt_offload *taprio;
++ struct ocelot_port *ocelot_port;
++ struct timespec64 base_ts;
++ int port;
++ u32 val;
++
++ mutex_lock(&ocelot->tas_lock);
++
++ for (port = 0; port < ocelot->num_phys_ports; port++) {
++ ocelot_port = ocelot->ports[port];
++ taprio = ocelot_port->taprio;
++ if (!taprio)
++ continue;
++
++ ocelot_rmw(ocelot,
++ QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
++ QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M,
++ QSYS_TAS_PARAM_CFG_CTRL);
++
++ ocelot_rmw_rix(ocelot,
++ QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF),
++ QSYS_TAG_CONFIG_ENABLE |
++ QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
++ QSYS_TAG_CONFIG, port);
++
++ vsc9959_new_base_time(ocelot, taprio->base_time,
++ taprio->cycle_time, &base_ts);
++
++ ocelot_write(ocelot, base_ts.tv_nsec, QSYS_PARAM_CFG_REG_1);
++ ocelot_write(ocelot, lower_32_bits(base_ts.tv_sec),
++ QSYS_PARAM_CFG_REG_2);
++ val = upper_32_bits(base_ts.tv_sec);
++ ocelot_rmw(ocelot,
++ QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB(val),
++ QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M,
++ QSYS_PARAM_CFG_REG_3);
++
++ ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
++ QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
++ QSYS_TAS_PARAM_CFG_CTRL);
++
++ ocelot_rmw_rix(ocelot,
++ QSYS_TAG_CONFIG_INIT_GATE_STATE(0xFF) |
++ QSYS_TAG_CONFIG_ENABLE,
++ QSYS_TAG_CONFIG_ENABLE |
++ QSYS_TAG_CONFIG_INIT_GATE_STATE_M,
++ QSYS_TAG_CONFIG, port);
++ }
++ mutex_unlock(&ocelot->tas_lock);
++}
++
+ static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port,
+ struct tc_cbs_qopt_offload *cbs_qopt)
+ {
+@@ -2214,6 +2500,7 @@ static const struct ocelot_ops vsc9959_ops = {
+ .psfp_filter_del = vsc9959_psfp_filter_del,
+ .psfp_stats_get = vsc9959_psfp_stats_get,
+ .cut_through_fwd = vsc9959_cut_through_fwd,
++ .tas_clock_adjust = vsc9959_tas_clock_adjust,
+ };
+
+ static const struct felix_info felix_info_vsc9959 = {
+@@ -2240,6 +2527,7 @@ static const struct felix_info felix_info_vsc9959 = {
+ .port_modes = vsc9959_port_modes,
+ .port_setup_tc = vsc9959_port_setup_tc,
+ .port_sched_speed_set = vsc9959_sched_speed_set,
++ .tas_guard_bands_update = vsc9959_tas_guard_bands_update,
+ .init_regmap = ocelot_regmap_init,
+ };
+
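The guard-band arithmetic in vsc9959_tas_guard_bands_update above is easy to check by hand: at S Mbps a byte occupies (USEC_PER_SEC * 8) / S picoseconds on the wire, and a gate open for G nanoseconds therefore fits G * 1000 / picos_per_byte octets, from which 20 octets of L1 overhead are subtracted (24 in total, minus the 4 FCS octets QSYS_QMAXSDU_CFG_* already counts). A self-contained userspace sketch with example numbers (1000 Mbps link, 10 us gate):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t speed_mbps = 1000;
	uint64_t picos_per_byte = (1000000ULL * 8) / speed_mbps; /* 8000 */
	uint64_t min_gate_len_ns = 10000;			 /* 10 us */
	uint64_t max_sdu = (min_gate_len_ns * 1000) / picos_per_byte;

	if (max_sdu > 20)
		max_sdu -= 20;	/* L1 overhead minus the FCS octets the
				 * hardware already counts */

	/* 10'000'000 ps / 8000 ps per byte = 1250, minus 20 -> 1230 */
	printf("max SDU: %" PRIu64 " octets\n", max_sdu);
	return 0;
}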
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index cac509708e9df..1c6ea6766aa19 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -946,7 +946,7 @@ static unsigned int ag71xx_max_frame_len(unsigned int mtu)
+ return ETH_HLEN + VLAN_HLEN + mtu + ETH_FCS_LEN;
+ }
+
+-static void ag71xx_hw_set_macaddr(struct ag71xx *ag, unsigned char *mac)
++static void ag71xx_hw_set_macaddr(struct ag71xx *ag, const unsigned char *mac)
+ {
+ u32 t;
+
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+index fb3e89141a0d9..a4fbf44f944cd 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_dev.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+@@ -95,9 +95,6 @@ struct hinic_dev {
+ u16 sq_depth;
+ u16 rq_depth;
+
+- struct hinic_txq_stats tx_stats;
+- struct hinic_rxq_stats rx_stats;
+-
+ u8 rss_tmpl_idx;
+ u8 rss_hash_engine;
+ u16 num_rss;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 05329292d940f..c23ee2ddbce3e 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -62,8 +62,6 @@ MODULE_PARM_DESC(rx_weight, "Number Rx packets for NAPI budget (default=64)");
+
+ #define HINIC_LRO_RX_TIMER_DEFAULT 16
+
+-#define VLAN_BITMAP_SIZE(nic_dev) (ALIGN(VLAN_N_VID, 8) / 8)
+-
+ #define work_to_rx_mode_work(work) \
+ container_of(work, struct hinic_rx_mode_work, work)
+
+@@ -82,56 +80,44 @@ static int set_features(struct hinic_dev *nic_dev,
+ netdev_features_t pre_features,
+ netdev_features_t features, bool force_change);
+
+-static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
++static void gather_rx_stats(struct hinic_rxq_stats *nic_rx_stats, struct hinic_rxq *rxq)
+ {
+- struct hinic_rxq_stats *nic_rx_stats = &nic_dev->rx_stats;
+ struct hinic_rxq_stats rx_stats;
+
+- u64_stats_init(&rx_stats.syncp);
+-
+ hinic_rxq_get_stats(rxq, &rx_stats);
+
+- u64_stats_update_begin(&nic_rx_stats->syncp);
+ nic_rx_stats->bytes += rx_stats.bytes;
+ nic_rx_stats->pkts += rx_stats.pkts;
+ nic_rx_stats->errors += rx_stats.errors;
+ nic_rx_stats->csum_errors += rx_stats.csum_errors;
+ nic_rx_stats->other_errors += rx_stats.other_errors;
+- u64_stats_update_end(&nic_rx_stats->syncp);
+-
+- hinic_rxq_clean_stats(rxq);
+ }
+
+-static void update_tx_stats(struct hinic_dev *nic_dev, struct hinic_txq *txq)
++static void gather_tx_stats(struct hinic_txq_stats *nic_tx_stats, struct hinic_txq *txq)
+ {
+- struct hinic_txq_stats *nic_tx_stats = &nic_dev->tx_stats;
+ struct hinic_txq_stats tx_stats;
+
+- u64_stats_init(&tx_stats.syncp);
+-
+ hinic_txq_get_stats(txq, &tx_stats);
+
+- u64_stats_update_begin(&nic_tx_stats->syncp);
+ nic_tx_stats->bytes += tx_stats.bytes;
+ nic_tx_stats->pkts += tx_stats.pkts;
+ nic_tx_stats->tx_busy += tx_stats.tx_busy;
+ nic_tx_stats->tx_wake += tx_stats.tx_wake;
+ nic_tx_stats->tx_dropped += tx_stats.tx_dropped;
+ nic_tx_stats->big_frags_pkts += tx_stats.big_frags_pkts;
+- u64_stats_update_end(&nic_tx_stats->syncp);
+-
+- hinic_txq_clean_stats(txq);
+ }
+
+-static void update_nic_stats(struct hinic_dev *nic_dev)
++static void gather_nic_stats(struct hinic_dev *nic_dev,
++ struct hinic_rxq_stats *nic_rx_stats,
++ struct hinic_txq_stats *nic_tx_stats)
+ {
+ int i, num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
+
+ for (i = 0; i < num_qps; i++)
+- update_rx_stats(nic_dev, &nic_dev->rxqs[i]);
++ gather_rx_stats(nic_rx_stats, &nic_dev->rxqs[i]);
+
+ for (i = 0; i < num_qps; i++)
+- update_tx_stats(nic_dev, &nic_dev->txqs[i]);
++ gather_tx_stats(nic_tx_stats, &nic_dev->txqs[i]);
+ }
+
+ /**
+@@ -560,8 +546,6 @@ int hinic_close(struct net_device *netdev)
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+- update_nic_stats(nic_dev);
+-
+ up(&nic_dev->mgmt_lock);
+
+ if (!HINIC_IS_VF(nic_dev->hwdev->hwif))
+@@ -855,26 +839,19 @@ static void hinic_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+ {
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+- struct hinic_rxq_stats *nic_rx_stats;
+- struct hinic_txq_stats *nic_tx_stats;
+-
+- nic_rx_stats = &nic_dev->rx_stats;
+- nic_tx_stats = &nic_dev->tx_stats;
+-
+- down(&nic_dev->mgmt_lock);
++ struct hinic_rxq_stats nic_rx_stats = {};
++ struct hinic_txq_stats nic_tx_stats = {};
+
+ if (nic_dev->flags & HINIC_INTF_UP)
+- update_nic_stats(nic_dev);
+-
+- up(&nic_dev->mgmt_lock);
++ gather_nic_stats(nic_dev, &nic_rx_stats, &nic_tx_stats);
+
+- stats->rx_bytes = nic_rx_stats->bytes;
+- stats->rx_packets = nic_rx_stats->pkts;
+- stats->rx_errors = nic_rx_stats->errors;
++ stats->rx_bytes = nic_rx_stats.bytes;
++ stats->rx_packets = nic_rx_stats.pkts;
++ stats->rx_errors = nic_rx_stats.errors;
+
+- stats->tx_bytes = nic_tx_stats->bytes;
+- stats->tx_packets = nic_tx_stats->pkts;
+- stats->tx_errors = nic_tx_stats->tx_dropped;
++ stats->tx_bytes = nic_tx_stats.bytes;
++ stats->tx_packets = nic_tx_stats.pkts;
++ stats->tx_errors = nic_tx_stats.tx_dropped;
+ }
+
+ static int hinic_set_features(struct net_device *netdev,
+@@ -1173,8 +1150,6 @@ static void hinic_free_intr_coalesce(struct hinic_dev *nic_dev)
+ static int nic_dev_init(struct pci_dev *pdev)
+ {
+ struct hinic_rx_mode_work *rx_mode_work;
+- struct hinic_txq_stats *tx_stats;
+- struct hinic_rxq_stats *rx_stats;
+ struct hinic_dev *nic_dev;
+ struct net_device *netdev;
+ struct hinic_hwdev *hwdev;
+@@ -1236,15 +1211,8 @@ static int nic_dev_init(struct pci_dev *pdev)
+
+ sema_init(&nic_dev->mgmt_lock, 1);
+
+- tx_stats = &nic_dev->tx_stats;
+- rx_stats = &nic_dev->rx_stats;
+-
+- u64_stats_init(&tx_stats->syncp);
+- u64_stats_init(&rx_stats->syncp);
+-
+- nic_dev->vlan_bitmap = devm_kzalloc(&pdev->dev,
+- VLAN_BITMAP_SIZE(nic_dev),
+- GFP_KERNEL);
++ nic_dev->vlan_bitmap = devm_bitmap_zalloc(&pdev->dev, VLAN_N_VID,
++ GFP_KERNEL);
+ if (!nic_dev->vlan_bitmap) {
+ err = -ENOMEM;
+ goto err_vlan_bitmap;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+index 24b7b819dbfba..a866bea651103 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+@@ -73,7 +73,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
+ unsigned int start;
+
+- u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ stats->pkts = rxq_stats->pkts;
+@@ -83,7 +82,6 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+- u64_stats_update_end(&stats->syncp);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 87408e7bb8097..5051cdff2384b 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -98,7 +98,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ struct hinic_txq_stats *txq_stats = &txq->txq_stats;
+ unsigned int start;
+
+- u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ stats->pkts = txq_stats->pkts;
+@@ -108,7 +107,6 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ stats->tx_dropped = txq_stats->tx_dropped;
+ stats->big_frags_pkts = txq_stats->big_frags_pkts;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+- u64_stats_update_end(&stats->syncp);
+ }
+
+ /**
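The hinic refactor above works because each queue already protects its counters with its own u64_stats_sync: a reader simply retries until it sees a consistent snapshot, so the aggregated totals can live on the caller's stack, and the device-level syncp, the mgmt_lock round-trip, and the destructive hinic_*_clean_stats() calls all become unnecessary. A sketch of that reader pattern (the struct is illustrative):

#include <linux/u64_stats_sync.h>

struct queue_stats {
	struct u64_stats_sync syncp;
	u64 pkts;
	u64 bytes;
};

/* Take a consistent snapshot of one queue's counters without locking
 * out the writer; retry if the writer was mid-update. */
static void queue_stats_snapshot(const struct queue_stats *qs,
				 u64 *pkts, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin(&qs->syncp);
		*pkts = qs->pkts;
		*bytes = qs->bytes;
	} while (u64_stats_fetch_retry(&qs->syncp, start));
}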
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 0ea0361cd86b1..a988c08e906f1 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -92,6 +92,7 @@ struct iavf_vsi {
+ #define IAVF_HKEY_ARRAY_SIZE ((IAVF_VFQF_HKEY_MAX_INDEX + 1) * 4)
+ #define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
+ #define IAVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
++#define IAVF_MBPS_QUANTA 50
+
+ #define IAVF_VIRTCHNL_VF_RESOURCE_SIZE (sizeof(struct virtchnl_vf_resource) + \
+ (IAVF_MAX_VF_VSI * \
+@@ -430,6 +431,11 @@ struct iavf_adapter {
+ /* lock to protect access to the cloud filter list */
+ spinlock_t cloud_filter_list_lock;
+ u16 num_cloud_filters;
++	/* Snapshot of "num_active_queues" taken before setup_tc for a
++	 * qdisc add is invoked. It is used during the qdisc del flow to
++	 * restore the correct number of queues.
++	 */
++ int orig_num_active_queues;
+
+ #define IAVF_MAX_FDIR_FILTERS 128 /* max allowed Flow Director filters */
+ u16 fdir_active_fltr;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 2e2c153ce46a3..3dbfaead2ac74 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3322,6 +3322,7 @@ static int iavf_validate_ch_config(struct iavf_adapter *adapter,
+ struct tc_mqprio_qopt_offload *mqprio_qopt)
+ {
+ u64 total_max_rate = 0;
++ u32 tx_rate_rem = 0;
+ int i, num_qps = 0;
+ u64 tx_rate = 0;
+ int ret = 0;
+@@ -3336,12 +3337,32 @@ static int iavf_validate_ch_config(struct iavf_adapter *adapter,
+ return -EINVAL;
+ if (mqprio_qopt->min_rate[i]) {
+ dev_err(&adapter->pdev->dev,
+- "Invalid min tx rate (greater than 0) specified\n");
++ "Invalid min tx rate (greater than 0) specified for TC%d\n",
++ i);
+ return -EINVAL;
+ }
+- /*convert to Mbps */
++
++ /* convert to Mbps */
+ tx_rate = div_u64(mqprio_qopt->max_rate[i],
+ IAVF_MBPS_DIVISOR);
++
++ if (mqprio_qopt->max_rate[i] &&
++ tx_rate < IAVF_MBPS_QUANTA) {
++ dev_err(&adapter->pdev->dev,
++ "Invalid max tx rate for TC%d, minimum %dMbps\n",
++ i, IAVF_MBPS_QUANTA);
++ return -EINVAL;
++ }
++
++ (void)div_u64_rem(tx_rate, IAVF_MBPS_QUANTA, &tx_rate_rem);
++
++ if (tx_rate_rem != 0) {
++ dev_err(&adapter->pdev->dev,
++ "Invalid max tx rate for TC%d, not divisible by %d\n",
++ i, IAVF_MBPS_QUANTA);
++ return -EINVAL;
++ }
++
+ total_max_rate += tx_rate;
+ num_qps += mqprio_qopt->qopt.count[i];
+ }
+@@ -3408,6 +3429,7 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
+ netif_tx_disable(netdev);
+ iavf_del_all_cloud_filters(adapter);
+ adapter->aq_required = IAVF_FLAG_AQ_DISABLE_CHANNELS;
++ total_qps = adapter->orig_num_active_queues;
+ goto exit;
+ } else {
+ return -EINVAL;
+@@ -3451,7 +3473,21 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
+ adapter->ch_config.ch_info[i].offset = 0;
+ }
+ }
++
++	/* Take a snapshot of the original config, such as
++	 * "num_active_queues". It is used later, when the delete ADQ
++	 * flow is exercised, so that once that flow completes the VF
++	 * goes back to its original queue configuration.
++	 */
++
++ adapter->orig_num_active_queues = adapter->num_active_queues;
++
++	/* Store the queue info based on TC, so that the VF gets
++	 * configured with the correct number of queues when it
++	 * completes the ADQ config flow.
++	 */
+ adapter->ch_config.total_qps = total_qps;
++
+ netif_tx_stop_all_queues(netdev);
+ netif_tx_disable(netdev);
+ adapter->aq_required |= IAVF_FLAG_AQ_ENABLE_CHANNELS;
+@@ -3468,6 +3504,12 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
+ }
+ }
+ exit:
++ if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
++ return 0;
++
++ netif_set_real_num_rx_queues(netdev, total_qps);
++ netif_set_real_num_tx_queues(netdev, total_qps);
++
+ return ret;
+ }
+
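The iavf validation above converts each per-TC cap from bytes per second to Mbps (1 Mbps = 125000 bytes/s) and then requires any non-zero cap to be at least one IAVF_MBPS_QUANTA (50 Mbps) and an exact multiple of it. A userspace sketch of the combined check (helper name illustrative):

#include <stdbool.h>
#include <stdint.h>

#define MBPS_DIVISOR	125000ULL	/* bytes/s in one Mbps */
#define MBPS_QUANTA	50ULL

/* A zero rate means "no cap for this TC" and is accepted as-is. */
static bool tc_max_rate_valid(uint64_t max_rate_bytes_per_s)
{
	uint64_t mbps = max_rate_bytes_per_s / MBPS_DIVISOR;

	if (!max_rate_bytes_per_s)
		return true;
	return mbps >= MBPS_QUANTA && (mbps % MBPS_QUANTA) == 0;
}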
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 9f02b60459f10..bc68dc5c6927d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -433,7 +433,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi)
+ IFF_PROMISC;
+ goto out_promisc;
+ }
+- if (vsi->current_netdev_flags &
++ if (vsi->netdev->features &
+ NETIF_F_HW_VLAN_CTAG_FILTER)
+ vlan_ops->ena_rx_filtering(vsi);
+ }
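The one-line ice fix above corrects a bit-space mix-up: NETIF_F_HW_VLAN_CTAG_FILTER is a netdev feature bit, so it must be tested against netdev->features; current_netdev_flags mirrors the IFF_* interface flags, where the same numeric bit means something entirely unrelated. The corrected test, in isolation:

#include <linux/netdevice.h>

/* Feature bits live in netdev->features; IFF_* flags live in
 * netdev->flags. Testing a NETIF_F_* constant against a flags word
 * silently checks the wrong bit. */
static bool rx_vlan_filtering_requested(const struct net_device *netdev)
{
	return netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER;
}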
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 8d8f3eec79eeb..9b2872e891518 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -4934,7 +4934,7 @@ ice_find_free_recp_res_idx(struct ice_hw *hw, const unsigned long *profiles,
+ bitmap_zero(recipes, ICE_MAX_NUM_RECIPES);
+ bitmap_zero(used_idx, ICE_MAX_FV_WORDS);
+
+- bitmap_set(possible_idx, 0, ICE_MAX_FV_WORDS);
++ bitmap_fill(possible_idx, ICE_MAX_FV_WORDS);
+
+ /* For each profile we are going to associate the recipe with, add the
+ * recipes that are associated with that profile. This will give us
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index b6c15efe92ad4..29b10ef787b90 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -109,7 +109,7 @@ struct page_pool;
+ #define MLX5E_REQUIRED_WQE_MTTS (MLX5_ALIGN_MTTS(MLX5_MPWRQ_PAGES_PER_WQE + 1))
+ #define MLX5E_REQUIRED_MTTS(wqes) (wqes * MLX5E_REQUIRED_WQE_MTTS)
+ #define MLX5E_MAX_RQ_NUM_MTTS \
+- ((1 << 16) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
++ (ALIGN_DOWN(U16_MAX, 4) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
+ #define MLX5E_ORDER2_MAX_PACKET_MTU (order_base_2(10 * 1024))
+ #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW \
+ (ilog2(MLX5E_MAX_RQ_NUM_MTTS / MLX5E_REQUIRED_WQE_MTTS))
+@@ -174,8 +174,8 @@ struct page_pool;
+ ALIGN_DOWN(MLX5E_KLM_MAX_ENTRIES_PER_WQE(wqe_size), MLX5_UMR_KLM_ALIGNMENT)
+
+ #define MLX5E_MAX_KLM_PER_WQE(mdev) \
+- MLX5E_KLM_ENTRIES_PER_WQE(mlx5e_get_sw_max_sq_mpw_wqebbs(mlx5e_get_max_sq_wqebbs(mdev)) \
+- << MLX5_MKEY_BSF_OCTO_SIZE)
++ MLX5E_KLM_ENTRIES_PER_WQE(MLX5_SEND_WQE_BB * \
++ mlx5e_get_sw_max_sq_mpw_wqebbs(mlx5e_get_max_sq_wqebbs(mdev)))
+
+ #define MLX5E_MSG_LEVEL NETIF_MSG_LINK
+
+@@ -233,7 +233,7 @@ static inline u16 mlx5e_get_max_sq_wqebbs(struct mlx5_core_dev *mdev)
+ MLX5_CAP_GEN(mdev, max_wqe_sz_sq) / MLX5_SEND_WQE_BB);
+ }
+
+-static inline u16 mlx5e_get_sw_max_sq_mpw_wqebbs(u16 max_sq_wqebbs)
++static inline u8 mlx5e_get_sw_max_sq_mpw_wqebbs(u8 max_sq_wqebbs)
+ {
+ /* The return value will be multiplied by MLX5_SEND_WQEBB_NUM_DS.
+ * Since max_sq_wqebbs may be up to MLX5_SEND_WQE_MAX_WQEBBS == 16,
+@@ -242,11 +242,12 @@ static inline u16 mlx5e_get_sw_max_sq_mpw_wqebbs(u16 max_sq_wqebbs)
+ * than MLX5_SEND_WQE_MAX_WQEBBS to let a full-session WQE be
+ * cache-aligned.
+ */
+-#if L1_CACHE_BYTES < 128
+- return min_t(u16, max_sq_wqebbs, MLX5_SEND_WQE_MAX_WQEBBS - 1);
+-#else
+- return min_t(u16, max_sq_wqebbs, MLX5_SEND_WQE_MAX_WQEBBS - 2);
++ u8 wqebbs = min_t(u8, max_sq_wqebbs, MLX5_SEND_WQE_MAX_WQEBBS - 1);
++
++#if L1_CACHE_BYTES >= 128
++ wqebbs = ALIGN_DOWN(wqebbs, 2);
+ #endif
++ return wqebbs;
+ }
+
+ struct mlx5e_tx_wqe {
+@@ -455,7 +456,7 @@ struct mlx5e_txqsq {
+ struct netdev_queue *txq;
+ u32 sqn;
+ u16 stop_room;
+- u16 max_sq_mpw_wqebbs;
++ u8 max_sq_mpw_wqebbs;
+ u8 min_inline_mode;
+ struct device *pdev;
+ __be32 mkey_be;
+@@ -570,7 +571,7 @@ struct mlx5e_xdpsq {
+ struct device *pdev;
+ __be32 mkey_be;
+ u16 stop_room;
+- u16 max_sq_mpw_wqebbs;
++ u8 max_sq_mpw_wqebbs;
+ u8 min_inline_mode;
+ unsigned long state;
+ unsigned int hw_mtu;
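The mlx5e_get_sw_max_sq_mpw_wqebbs rework above keeps the values of the old #if ladder (15 WQEBBs on 64-byte cachelines, 14 on 128-byte ones, given MLX5_SEND_WQE_MAX_WQEBBS == 16) while expressing them as a clamp followed by an even-alignment step, all in u8 arithmetic to match the narrowed struct fields. A standalone sketch of the same computation:

#include <stdint.h>

/* Assumes MLX5_SEND_WQE_MAX_WQEBBS == 16, as in the driver. */
static uint8_t sw_max_sq_mpw_wqebbs(uint8_t max_sq_wqebbs,
				    unsigned int l1_cache_bytes)
{
	uint8_t wqebbs = max_sq_wqebbs < 15 ? max_sq_wqebbs : 15;

	if (l1_cache_bytes >= 128)
		wqebbs &= ~1u;	/* ALIGN_DOWN(wqebbs, 2) */

	return wqebbs;
}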
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index 3c1edfa33aa79..e025040350bab 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -790,8 +790,20 @@ static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5_core_dev *mdev,
+ return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
+
+ wqebbs = MLX5E_UMR_WQEBBS * BIT(mlx5e_get_rq_log_wq_sz(rqp->rqc));
++
++ /* If XDP program is attached, XSK may be turned on at any time without
++ * restarting the channel. ICOSQ must be big enough to fit UMR WQEs of
++ * both regular RQ and XSK RQ.
++ * Although mlx5e_mpwqe_get_log_rq_size accepts mlx5e_xsk_param, it
++ * doesn't affect its return value, as long as params->xdp_prog != NULL,
++ * so we can just multiply by 2.
++ */
++ if (params->xdp_prog)
++ wqebbs *= 2;
++
+ if (params->packet_merge.type == MLX5E_PACKET_MERGE_SHAMPO)
+ wqebbs += mlx5e_shampo_icosq_sz(mdev, params, rqp);
++
+ return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE, order_base_2(wqebbs));
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+index dea137dd744b4..2b64dd557b5d1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+@@ -128,6 +128,7 @@ mlx5e_tc_post_act_add(struct mlx5e_post_act *post_act, struct mlx5_flow_attr *at
+ post_attr->inner_match_level = MLX5_MATCH_NONE;
+ post_attr->outer_match_level = MLX5_MATCH_NONE;
+ post_attr->action &= ~MLX5_FLOW_CONTEXT_ACTION_DECAP;
++ post_attr->flags |= MLX5_ATTR_FLAG_NO_IN_PORT;
+
+ handle->ns_type = post_act->ns_type;
+ /* Splits were handled before post action */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+index a8cfab4a393c3..cc18d97d8ee06 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+@@ -7,6 +7,8 @@
+ #include "en.h"
+ #include <net/xdp_sock_drv.h>
+
++#define MLX5E_MTT_PTAG_MASK 0xfffffffffffffff8ULL
++
+ /* RX data path */
+
+ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+@@ -21,6 +23,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
+ static inline int mlx5e_xsk_page_alloc_pool(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info)
+ {
++retry:
+ dma_info->xsk = xsk_buff_alloc(rq->xsk_pool);
+ if (!dma_info->xsk)
+ return -ENOMEM;
+@@ -32,6 +35,17 @@ static inline int mlx5e_xsk_page_alloc_pool(struct mlx5e_rq *rq,
+ */
+ dma_info->addr = xsk_buff_xdp_get_frame_dma(dma_info->xsk);
+
++ /* MTT page mapping has alignment requirements. If they are not
++ * satisfied, leak the descriptor so that it won't come again, and try
++ * to allocate a new one.
++ */
++ if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
++ if (unlikely(dma_info->addr & ~MLX5E_MTT_PTAG_MASK)) {
++ xsk_buff_discard(dma_info->xsk);
++ goto retry;
++ }
++ }
++
+ return 0;
+ }
+
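The XSK hunk above enforces an alignment rule of the striding-RQ MTT page mapping: the low three bits of a frame's DMA address are reserved, so only 8-byte-aligned buffers may be posted, and a misaligned one is discarded in favor of a fresh allocation. The check itself, isolated into a userspace sketch:

#include <stdbool.h>
#include <stdint.h>

#define MTT_PTAG_MASK	0xfffffffffffffff8ULL	/* low 3 bits reserved */

static bool mtt_frame_addr_ok(uint64_t dma_addr)
{
	return (dma_addr & ~MTT_PTAG_MASK) == 0;
}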
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+index 814f2a56f633a..30a70d1390468 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+@@ -54,7 +54,7 @@ static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int err;
+
+- if (WARN_ON(!mlx5e_ktls_type_check(mdev, crypto_info)))
++ if (!mlx5e_ktls_type_check(mdev, crypto_info))
+ return -EOPNOTSUPP;
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 2ce3728576d1a..eb79810199d3e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -230,10 +230,8 @@ esw_setup_ft_dest(struct mlx5_flow_destination *dest,
+ }
+
+ static void
+-esw_setup_slow_path_dest(struct mlx5_flow_destination *dest,
+- struct mlx5_flow_act *flow_act,
+- struct mlx5_fs_chains *chains,
+- int i)
++esw_setup_accept_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *flow_act,
++ struct mlx5_fs_chains *chains, int i)
+ {
+ if (mlx5_chains_ignore_flow_level_supported(chains))
+ flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
+@@ -241,6 +239,16 @@ esw_setup_slow_path_dest(struct mlx5_flow_destination *dest,
+ dest[i].ft = mlx5_chains_get_tc_end_ft(chains);
+ }
+
++static void
++esw_setup_slow_path_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *flow_act,
++ struct mlx5_eswitch *esw, int i)
++{
++ if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level))
++ flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
++ dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
++ dest[i].ft = esw->fdb_table.offloads.slow_fdb;
++}
++
+ static int
+ esw_setup_chain_dest(struct mlx5_flow_destination *dest,
+ struct mlx5_flow_act *flow_act,
+@@ -475,8 +483,11 @@ esw_setup_dests(struct mlx5_flow_destination *dest,
+ } else if (attr->dest_ft) {
+ esw_setup_ft_dest(dest, flow_act, esw, attr, spec, *i);
+ (*i)++;
+- } else if (mlx5e_tc_attr_flags_skip(attr->flags)) {
+- esw_setup_slow_path_dest(dest, flow_act, chains, *i);
++ } else if (attr->flags & MLX5_ATTR_FLAG_SLOW_PATH) {
++ esw_setup_slow_path_dest(dest, flow_act, esw, *i);
++ (*i)++;
++ } else if (attr->flags & MLX5_ATTR_FLAG_ACCEPT) {
++ esw_setup_accept_dest(dest, flow_act, chains, *i);
+ (*i)++;
+ } else if (attr->dest_chain) {
+ err = esw_setup_chain_dest(dest, flow_act, chains, attr->dest_chain,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+index d758848d34d0c..696e45e2bd06d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+@@ -32,20 +32,17 @@ static void tout_set(struct mlx5_core_dev *dev, u64 val, enum mlx5_timeouts_type
+ dev->timeouts->to[type] = val;
+ }
+
+-void mlx5_tout_set_def_val(struct mlx5_core_dev *dev)
++int mlx5_tout_init(struct mlx5_core_dev *dev)
+ {
+ int i;
+
+- for (i = 0; i < MAX_TIMEOUT_TYPES; i++)
+- tout_set(dev, tout_def_sw_val[i], i);
+-}
+-
+-int mlx5_tout_init(struct mlx5_core_dev *dev)
+-{
+ dev->timeouts = kmalloc(sizeof(*dev->timeouts), GFP_KERNEL);
+ if (!dev->timeouts)
+ return -ENOMEM;
+
++ for (i = 0; i < MAX_TIMEOUT_TYPES; i++)
++ tout_set(dev, tout_def_sw_val[i], i);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+index 257c03eeab365..bc9e9aeda8478 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+@@ -35,7 +35,6 @@ int mlx5_tout_init(struct mlx5_core_dev *dev);
+ void mlx5_tout_cleanup(struct mlx5_core_dev *dev);
+ void mlx5_tout_query_iseg(struct mlx5_core_dev *dev);
+ int mlx5_tout_query_dtor(struct mlx5_core_dev *dev);
+-void mlx5_tout_set_def_val(struct mlx5_core_dev *dev);
+ u64 _mlx5_tout_ms(struct mlx5_core_dev *dev, enum mlx5_timeouts_types type);
+
+ #define mlx5_tout_ms(dev, type) _mlx5_tout_ms(dev, MLX5_TO_##type##_MS)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index c9b4e50a593ed..ba2e5232b90be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -524,7 +524,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+
+ /* Check log_max_qp from HCA caps to set in current profile */
+ if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) {
+- prof->log_max_qp = min_t(u8, 17, MLX5_CAP_GEN_MAX(dev, log_max_qp));
++ prof->log_max_qp = min_t(u8, 18, MLX5_CAP_GEN_MAX(dev, log_max_qp));
+ } else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
+ mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n",
+ prof->log_max_qp,
+@@ -1023,8 +1023,6 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, u64 timeout)
+ if (mlx5_core_is_pf(dev))
+ pcie_print_link_status(dev->pdev);
+
+- mlx5_tout_set_def_val(dev);
+-
+ /* wait for firmware to accept initialization segments configurations
+ */
+ err = wait_fw_init(dev, timeout,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
+index d5998ef59be47..7adcf0eec13be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
+@@ -21,10 +21,11 @@ enum dr_dump_rec_type {
+ DR_DUMP_REC_TYPE_TABLE_TX = 3102,
+
+ DR_DUMP_REC_TYPE_MATCHER = 3200,
+- DR_DUMP_REC_TYPE_MATCHER_MASK = 3201,
++ DR_DUMP_REC_TYPE_MATCHER_MASK_DEPRECATED = 3201,
+ DR_DUMP_REC_TYPE_MATCHER_RX = 3202,
+ DR_DUMP_REC_TYPE_MATCHER_TX = 3203,
+ DR_DUMP_REC_TYPE_MATCHER_BUILDER = 3204,
++ DR_DUMP_REC_TYPE_MATCHER_MASK = 3205,
+
+ DR_DUMP_REC_TYPE_RULE = 3300,
+ DR_DUMP_REC_TYPE_RULE_RX_ENTRY_V0 = 3301,
+@@ -114,13 +115,15 @@ dr_dump_rule_action_mem(struct seq_file *file, const u64 rule_id,
+ break;
+ case DR_ACTION_TYP_FT:
+ if (action->dest_tbl->is_fw_tbl)
+- seq_printf(file, "%d,0x%llx,0x%llx,0x%x\n",
++ seq_printf(file, "%d,0x%llx,0x%llx,0x%x,0x%x\n",
+ DR_DUMP_REC_TYPE_ACTION_FT, action_id,
+- rule_id, action->dest_tbl->fw_tbl.id);
++ rule_id, action->dest_tbl->fw_tbl.id,
++ -1);
+ else
+- seq_printf(file, "%d,0x%llx,0x%llx,0x%x\n",
++ seq_printf(file, "%d,0x%llx,0x%llx,0x%x,0x%llx\n",
+ DR_DUMP_REC_TYPE_ACTION_FT, action_id,
+- rule_id, action->dest_tbl->tbl->table_id);
++ rule_id, action->dest_tbl->tbl->table_id,
++ DR_DBG_PTR_TO_ID(action->dest_tbl->tbl));
+
+ break;
+ case DR_ACTION_TYP_CTR:
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 8da7e25a47c96..d4649e4ee0e7f 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -3367,6 +3367,7 @@ int ocelot_init(struct ocelot *ocelot)
+ mutex_init(&ocelot->ptp_lock);
+ mutex_init(&ocelot->mact_lock);
+ mutex_init(&ocelot->fwd_domain_lock);
++ mutex_init(&ocelot->tas_lock);
+ spin_lock_init(&ocelot->ptp_clock_lock);
+ spin_lock_init(&ocelot->ts_id_lock);
+ snprintf(queue_name, sizeof(queue_name), "%s-stats",
+diff --git a/drivers/net/ethernet/mscc/ocelot_ptp.c b/drivers/net/ethernet/mscc/ocelot_ptp.c
+index 87ad2137ba065..09c703efe946c 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ptp.c
++++ b/drivers/net/ethernet/mscc/ocelot_ptp.c
+@@ -72,6 +72,10 @@ int ocelot_ptp_settime64(struct ptp_clock_info *ptp,
+ ocelot_write_rix(ocelot, val, PTP_PIN_CFG, TOD_ACC_PIN);
+
+ spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
++
++ if (ocelot->ops->tas_clock_adjust)
++ ocelot->ops->tas_clock_adjust(ocelot);
++
+ return 0;
+ }
+ EXPORT_SYMBOL(ocelot_ptp_settime64);
+@@ -105,6 +109,9 @@ int ocelot_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+ ocelot_write_rix(ocelot, val, PTP_PIN_CFG, TOD_ACC_PIN);
+
+ spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
++
++ if (ocelot->ops->tas_clock_adjust)
++ ocelot->ops->tas_clock_adjust(ocelot);
+ } else {
+ /* Fall back using ocelot_ptp_settime64 which is not exact. */
+ struct timespec64 ts;
+@@ -117,6 +124,7 @@ int ocelot_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+
+ ocelot_ptp_settime64(ptp, &ts);
+ }
++
+ return 0;
+ }
+ EXPORT_SYMBOL(ocelot_ptp_adjtime);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index f3568901eb916..1443f788ee37c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1437,7 +1437,7 @@ static int ionic_set_nic_features(struct ionic_lif *lif,
+ if ((old_hw_features ^ lif->hw_features) & IONIC_ETH_HW_RX_HASH)
+ ionic_lif_rss_config(lif, lif->rss_types, NULL, NULL);
+
+- if ((vlan_flags & features) &&
++ if ((vlan_flags & le64_to_cpu(ctx.cmd.lif_setattr.features)) &&
+ !(vlan_flags & le64_to_cpu(ctx.comp.lif_setattr.features)))
+ dev_info_once(lif->ionic->dev, "NIC is not supporting vlan offload, likely in SmartNIC mode\n");
+
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 2495a5719e1c1..018d365f9debf 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -815,6 +815,7 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ fl4->saddr = info->key.u.ipv4.src;
+ fl4->fl4_dport = dport;
+ fl4->fl4_sport = sport;
++ fl4->flowi4_flags = info->key.flow_flags;
+
+ tos = info->key.tos;
+ if ((tos == 1) && !geneve->cfg.collect_md) {
+diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
+index a438202129323..50854265864d1 100644
+--- a/drivers/net/netdevsim/bpf.c
++++ b/drivers/net/netdevsim/bpf.c
+@@ -351,10 +351,12 @@ nsim_map_alloc_elem(struct bpf_offloaded_map *offmap, unsigned int idx)
+ {
+ struct nsim_bpf_bound_map *nmap = offmap->dev_priv;
+
+- nmap->entry[idx].key = kmalloc(offmap->map.key_size, GFP_USER);
++ nmap->entry[idx].key = kmalloc(offmap->map.key_size,
++ GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ if (!nmap->entry[idx].key)
+ return -ENOMEM;
+- nmap->entry[idx].value = kmalloc(offmap->map.value_size, GFP_USER);
++ nmap->entry[idx].value = kmalloc(offmap->map.value_size,
++ GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ if (!nmap->entry[idx].value) {
+ kfree(nmap->entry[idx].key);
+ nmap->entry[idx].key = NULL;
+@@ -496,7 +498,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
+ if (offmap->map.map_flags)
+ return -EINVAL;
+
+- nmap = kzalloc(sizeof(*nmap), GFP_USER);
++ nmap = kzalloc(sizeof(*nmap), GFP_KERNEL_ACCOUNT);
+ if (!nmap)
+ return -ENOMEM;
+
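The netdevsim allocation changes swap GFP_USER for GFP_KERNEL_ACCOUNT | __GFP_NOWARN: offloaded-map entries are allocated on behalf of, and in user-controlled quantity by, user space, so they should be charged to the caller's memory cgroup, and a failed allocation there is an expected, user-triggerable event that should not warn. A sketch of the convention (helper name illustrative):

#include <linux/slab.h>

/* User-growable kernel allocations: charge the caller's memcg and fail
 * quietly instead of spamming the log. */
static void *user_growable_alloc(size_t size)
{
	return kmalloc(size, GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
}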
+diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
+index c8f398f5bc5b8..57371c697d5cf 100644
+--- a/drivers/net/netdevsim/fib.c
++++ b/drivers/net/netdevsim/fib.c
+@@ -54,6 +54,7 @@ struct nsim_fib_data {
+ struct rhashtable nexthop_ht;
+ struct devlink *devlink;
+ struct work_struct fib_event_work;
++ struct work_struct fib_flush_work;
+ struct list_head fib_event_queue;
+ spinlock_t fib_event_queue_lock; /* Protects fib event queue list */
+ struct mutex nh_lock; /* Protects NH HT */
+@@ -978,7 +979,7 @@ static int nsim_fib_event_schedule_work(struct nsim_fib_data *data,
+
+ fib_event = kzalloc(sizeof(*fib_event), GFP_ATOMIC);
+ if (!fib_event)
+- return NOTIFY_BAD;
++ goto err_fib_event_alloc;
+
+ fib_event->data = data;
+ fib_event->event = event;
+@@ -1006,6 +1007,9 @@ static int nsim_fib_event_schedule_work(struct nsim_fib_data *data,
+
+ err_fib_prepare_event:
+ kfree(fib_event);
++err_fib_event_alloc:
++ if (event == FIB_EVENT_ENTRY_DEL)
++ schedule_work(&data->fib_flush_work);
+ return NOTIFY_BAD;
+ }
+
+@@ -1483,6 +1487,24 @@ static void nsim_fib_event_work(struct work_struct *work)
+ mutex_unlock(&data->fib_lock);
+ }
+
++static void nsim_fib_flush_work(struct work_struct *work)
++{
++ struct nsim_fib_data *data = container_of(work, struct nsim_fib_data,
++ fib_flush_work);
++ struct nsim_fib_rt *fib_rt, *fib_rt_tmp;
++
++ /* Process pending work. */
++ flush_work(&data->fib_event_work);
++
++ mutex_lock(&data->fib_lock);
++ list_for_each_entry_safe(fib_rt, fib_rt_tmp, &data->fib_rt_list, list) {
++ rhashtable_remove_fast(&data->fib_rt_ht, &fib_rt->ht_node,
++ nsim_fib_rt_ht_params);
++ nsim_fib_rt_free(fib_rt, data);
++ }
++ mutex_unlock(&data->fib_lock);
++}
++
+ static int
+ nsim_fib_debugfs_init(struct nsim_fib_data *data, struct nsim_dev *nsim_dev)
+ {
+@@ -1541,6 +1563,7 @@ struct nsim_fib_data *nsim_fib_create(struct devlink *devlink,
+ goto err_rhashtable_nexthop_destroy;
+
+ INIT_WORK(&data->fib_event_work, nsim_fib_event_work);
++ INIT_WORK(&data->fib_flush_work, nsim_fib_flush_work);
+ INIT_LIST_HEAD(&data->fib_event_queue);
+ spin_lock_init(&data->fib_event_queue_lock);
+
+@@ -1587,6 +1610,7 @@ struct nsim_fib_data *nsim_fib_create(struct devlink *devlink,
+ err_nexthop_nb_unregister:
+ unregister_nexthop_notifier(devlink_net(devlink), &data->nexthop_nb);
+ err_rhashtable_fib_destroy:
++ cancel_work_sync(&data->fib_flush_work);
+ flush_work(&data->fib_event_work);
+ rhashtable_free_and_destroy(&data->fib_rt_ht, nsim_fib_rt_free,
+ data);
+@@ -1616,6 +1640,7 @@ void nsim_fib_destroy(struct devlink *devlink, struct nsim_fib_data *data)
+ NSIM_RESOURCE_IPV4_FIB);
+ unregister_fib_notifier(devlink_net(devlink), &data->fib_nb);
+ unregister_nexthop_notifier(devlink_net(devlink), &data->nexthop_nb);
++ cancel_work_sync(&data->fib_flush_work);
+ flush_work(&data->fib_event_work);
+ rhashtable_free_and_destroy(&data->fib_rt_ht, nsim_fib_rt_free,
+ data);
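The netdevsim FIB change covers a corner the old code could not: if allocating a FIB_EVENT_ENTRY_DEL event fails, the driver's mirrored route table would silently keep a stale entry forever. The new fib_flush_work first drains the pending event queue and then drops the entire mirror, trading accuracy for consistency. The shape of the fallback, sketched against the work item from the hunk above:

#include <linux/notifier.h>
#include <linux/workqueue.h>
#include <net/fib_notifier.h>

/* Illustrative: on allocation failure of a deletion event, fall back to
 * a full flush rather than leaving a stale mirrored route behind. */
static int fib_event_alloc_failed(struct work_struct *fib_flush_work,
				  unsigned long event)
{
	if (event == FIB_EVENT_ENTRY_DEL)
		schedule_work(fib_flush_work);

	return NOTIFY_BAD;
}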
+diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
+index e62fc4f2aee0d..76659c1c525a2 100644
+--- a/drivers/net/usb/Kconfig
++++ b/drivers/net/usb/Kconfig
+@@ -637,8 +637,9 @@ config USB_NET_AQC111
+ * Aquantia AQtion USB to 5GbE
+
+ config USB_RTL8153_ECM
+- tristate "RTL8153 ECM support"
++ tristate
+ depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n)
++ default y
+ help
+ This option supports ECM mode for RTL8153 ethernet adapter, when
+ CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index ac2d400d1d6cd..3e890699632b5 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1801,7 +1801,7 @@ static const struct driver_info ax88179_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1814,7 +1814,7 @@ static const struct driver_info ax88178a_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1827,7 +1827,7 @@ static const struct driver_info cypress_GX3_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1840,7 +1840,7 @@ static const struct driver_info dlink_dub1312_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1853,7 +1853,7 @@ static const struct driver_info sitecom_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1866,7 +1866,7 @@ static const struct driver_info samsung_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1879,7 +1879,7 @@ static const struct driver_info lenovo_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1892,7 +1892,7 @@ static const struct driver_info belkin_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1905,7 +1905,7 @@ static const struct driver_info toshiba_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1918,7 +1918,7 @@ static const struct driver_info mct_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1931,7 +1931,7 @@ static const struct driver_info at_umc2000_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1944,7 +1944,7 @@ static const struct driver_info at_umc200_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+@@ -1957,7 +1957,7 @@ static const struct driver_info at_umc2000sp_info = {
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+- .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+ };
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index bd03e16f98a18..4dc43929e370f 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -71,6 +71,7 @@ struct smsc95xx_priv {
+ struct fwnode_handle *irqfwnode;
+ struct mii_bus *mdiobus;
+ struct phy_device *phydev;
++ struct task_struct *pm_task;
+ };
+
+ static bool turbo_mode = true;
+@@ -80,13 +81,14 @@ MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
+ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ u32 *data, int in_pm)
+ {
++ struct smsc95xx_priv *pdata = dev->driver_priv;
+ u32 buf;
+ int ret;
+ int (*fn)(struct usbnet *, u8, u8, u16, u16, void *, u16);
+
+ BUG_ON(!dev);
+
+- if (!in_pm)
++ if (current != pdata->pm_task)
+ fn = usbnet_read_cmd;
+ else
+ fn = usbnet_read_cmd_nopm;
+@@ -110,13 +112,14 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
+ u32 data, int in_pm)
+ {
++ struct smsc95xx_priv *pdata = dev->driver_priv;
+ u32 buf;
+ int ret;
+ int (*fn)(struct usbnet *, u8, u8, u16, u16, const void *, u16);
+
+ BUG_ON(!dev);
+
+- if (!in_pm)
++ if (current != pdata->pm_task)
+ fn = usbnet_write_cmd;
+ else
+ fn = usbnet_write_cmd_nopm;
+@@ -1490,9 +1493,12 @@ static int smsc95xx_suspend(struct usb_interface *intf, pm_message_t message)
+ u32 val, link_up;
+ int ret;
+
++ pdata->pm_task = current;
++
+ ret = usbnet_suspend(intf, message);
+ if (ret < 0) {
+ netdev_warn(dev->net, "usbnet_suspend error\n");
++ pdata->pm_task = NULL;
+ return ret;
+ }
+
+@@ -1732,6 +1738,7 @@ done:
+ if (ret && PMSG_IS_AUTO(message))
+ usbnet_resume(intf);
+
++ pdata->pm_task = NULL;
+ return ret;
+ }
+
+@@ -1752,29 +1759,31 @@ static int smsc95xx_resume(struct usb_interface *intf)
+ /* do this first to ensure it's cleared even in error case */
+ pdata->suspend_flags = 0;
+
++ pdata->pm_task = current;
++
+ if (suspend_flags & SUSPEND_ALLMODES) {
+ /* clear wake-up sources */
+ ret = smsc95xx_read_reg_nopm(dev, WUCSR, &val);
+ if (ret < 0)
+- return ret;
++ goto done;
+
+ val &= ~(WUCSR_WAKE_EN_ | WUCSR_MPEN_);
+
+ ret = smsc95xx_write_reg_nopm(dev, WUCSR, val);
+ if (ret < 0)
+- return ret;
++ goto done;
+
+ /* clear wake-up status */
+ ret = smsc95xx_read_reg_nopm(dev, PM_CTRL, &val);
+ if (ret < 0)
+- return ret;
++ goto done;
+
+ val &= ~PM_CTL_WOL_EN_;
+ val |= PM_CTL_WUPS_;
+
+ ret = smsc95xx_write_reg_nopm(dev, PM_CTRL, val);
+ if (ret < 0)
+- return ret;
++ goto done;
+ }
+
+ phy_init_hw(pdata->phydev);
+@@ -1783,15 +1792,20 @@ static int smsc95xx_resume(struct usb_interface *intf)
+ if (ret < 0)
+ netdev_warn(dev->net, "usbnet_resume error\n");
+
++done:
++ pdata->pm_task = NULL;
+ return ret;
+ }
+
+ static int smsc95xx_reset_resume(struct usb_interface *intf)
+ {
+ struct usbnet *dev = usb_get_intfdata(intf);
++ struct smsc95xx_priv *pdata = dev->driver_priv;
+ int ret;
+
++ pdata->pm_task = current;
+ ret = smsc95xx_reset(dev);
++ pdata->pm_task = NULL;
+ if (ret < 0)
+ return ret;
+
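The smsc95xx hunks replace the in_pm parameter threading with a recorded pm_task: suspend, resume, and reset_resume set pdata->pm_task = current around their work, and the low-level register accessors compare against current to pick between the usbnet_*_cmd and usbnet_*_cmd_nopm helpers. Every function shared between the runtime and PM paths then automatically selects the right accessor. The core of the idea, sketched with an illustrative struct:

#include <linux/sched.h>

struct pm_marker {
	struct task_struct *pm_task;	/* task in suspend/resume, else NULL */
};

/* True exactly when the caller is the task currently performing
 * suspend/resume, i.e. when the nopm accessors must be used. */
static bool in_pm_path(const struct pm_marker *m)
{
	return m->pm_task == current;
}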
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 78a92751ce4c2..0ed09bb91c442 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -849,13 +849,11 @@ int usbnet_stop (struct net_device *net)
+
+ mpn = !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags);
+
+- /* deferred work (task, timer, softirq) must also stop.
+- * can't flush_scheduled_work() until we drop rtnl (later),
+- * else workers could deadlock; so make workers a NOP.
+- */
++ /* deferred work (timer, softirq, task) must also stop */
+ dev->flags = 0;
+ del_timer_sync (&dev->delay);
+ tasklet_kill (&dev->bh);
++ cancel_work_sync(&dev->kevent);
+ if (!pm)
+ usb_autopm_put_interface(dev->intf);
+
+@@ -1619,8 +1617,6 @@ void usbnet_disconnect (struct usb_interface *intf)
+ net = dev->net;
+ unregister_netdev (net);
+
+- cancel_work_sync(&dev->kevent);
+-
+ usb_scuttle_anchored_urbs(&dev->deferred);
+
+ if (dev->driver_info->unbind)
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 265d4a0245e7f..6991bf7c1cf03 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -2243,7 +2243,7 @@ static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device
+ struct vxlan_sock *sock4,
+ struct sk_buff *skb, int oif, u8 tos,
+ __be32 daddr, __be32 *saddr, __be16 dport, __be16 sport,
+- struct dst_cache *dst_cache,
++ __u8 flow_flags, struct dst_cache *dst_cache,
+ const struct ip_tunnel_info *info)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+@@ -2270,6 +2270,7 @@ static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device
+ fl4.saddr = *saddr;
+ fl4.fl4_dport = dport;
+ fl4.fl4_sport = sport;
++ fl4.flowi4_flags = flow_flags;
+
+ rt = ip_route_output_key(vxlan->net, &fl4);
+ if (!IS_ERR(rt)) {
+@@ -2459,7 +2460,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ unsigned int pkt_len = skb->len;
+ __be16 src_port = 0, dst_port;
+ struct dst_entry *ndst = NULL;
+- __u8 tos, ttl;
++ __u8 tos, ttl, flow_flags = 0;
+ int ifindex;
+ int err;
+ u32 flags = vxlan->cfg.flags;
+@@ -2525,6 +2526,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ }
+ dst = &remote_ip;
+ dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
++ flow_flags = info->key.flow_flags;
+ vni = tunnel_id_to_key32(info->key.tun_id);
+ ifindex = 0;
+ dst_cache = &info->dst_cache;
+@@ -2555,7 +2557,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos,
+ dst->sin.sin_addr.s_addr,
+ &local_ip.sin.sin_addr.s_addr,
+- dst_port, src_port,
++ dst_port, src_port, flow_flags,
+ dst_cache, info);
+ if (IS_ERR(rt)) {
+ err = PTR_ERR(rt);
+@@ -3061,7 +3063,8 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ rt = vxlan_get_route(vxlan, dev, sock4, skb, 0, info->key.tos,
+ info->key.u.ipv4.dst,
+ &info->key.u.ipv4.src, dport, sport,
+- &info->dst_cache, info);
++ info->key.flow_flags, &info->dst_cache,
++ info);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+ ip_rt_put(rt);
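[Note on the vxlan hunks above] The change threads the tunnel key's flow_flags through to the IPv4 flow descriptor, so route lookups see flags (for example FLOWI_FLAG_ANYSRC) that the caller attached to the tunnel info instead of silently dropping them. A reduced sketch of the lookup, assuming kernel context:

#include <linux/string.h>
#include <net/route.h>

/* Simplified tunnel route lookup honouring caller-supplied flow flags. */
static struct rtable *tunnel_route(struct net *net, __be32 daddr,
				   __be32 saddr, __u8 flow_flags)
{
	struct flowi4 fl4;

	memset(&fl4, 0, sizeof(fl4));
	fl4.daddr = daddr;
	fl4.saddr = saddr;
	fl4.flowi4_flags = flow_flags;	/* previously never populated */

	return ip_route_output_key(net, &fl4);
}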
+diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
+index 9a4c8ff32d9dd..5bf7822c53f18 100644
+--- a/drivers/net/wireguard/allowedips.c
++++ b/drivers/net/wireguard/allowedips.c
+@@ -6,6 +6,8 @@
+ #include "allowedips.h"
+ #include "peer.h"
+
++enum { MAX_ALLOWEDIPS_BITS = 128 };
++
+ static struct kmem_cache *node_cache;
+
+ static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+@@ -40,7 +42,8 @@ static void push_rcu(struct allowedips_node **stack,
+ struct allowedips_node __rcu *p, unsigned int *len)
+ {
+ if (rcu_access_pointer(p)) {
+- WARN_ON(IS_ENABLED(DEBUG) && *len >= 128);
++ if (WARN_ON(IS_ENABLED(DEBUG) && *len >= MAX_ALLOWEDIPS_BITS))
++ return;
+ stack[(*len)++] = rcu_dereference_raw(p);
+ }
+ }
+@@ -52,7 +55,7 @@ static void node_free_rcu(struct rcu_head *rcu)
+
+ static void root_free_rcu(struct rcu_head *rcu)
+ {
+- struct allowedips_node *node, *stack[128] = {
++ struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = {
+ container_of(rcu, struct allowedips_node, rcu) };
+ unsigned int len = 1;
+
+@@ -65,7 +68,7 @@ static void root_free_rcu(struct rcu_head *rcu)
+
+ static void root_remove_peer_lists(struct allowedips_node *root)
+ {
+- struct allowedips_node *node, *stack[128] = { root };
++ struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_BITS] = { root };
+ unsigned int len = 1;
+
+ while (len > 0 && (node = stack[--len])) {
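[Note on the wireguard allowedips hunks above] They name the trie-walk stack bound (one slot per address bit, so 128 for IPv6) and, more importantly, make the debug warning in push_rcu actually bail out instead of warning and then writing past the end of the stack anyway. The guarded push, sketched with an opaque node type (the driver additionally gates the warning on IS_ENABLED(DEBUG)):

#include <linux/bug.h>

enum { MAX_DEPTH = 128 };	/* stand-in for MAX_ALLOWEDIPS_BITS */

struct node;			/* opaque for the sketch */

static void push(struct node *stack[MAX_DEPTH], struct node *n,
		 unsigned int *len)
{
	if (!n)
		return;
	/* Before the fix the WARN fired but the store still ran,
	 * clobbering one slot past the array. Warn, then refuse. */
	if (WARN_ON(*len >= MAX_DEPTH))
		return;
	stack[(*len)++] = n;
}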
+diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
+index e173204ae7d78..41db10f9be498 100644
+--- a/drivers/net/wireguard/selftest/allowedips.c
++++ b/drivers/net/wireguard/selftest/allowedips.c
+@@ -593,10 +593,10 @@ bool __init wg_allowedips_selftest(void)
+ wg_allowedips_remove_by_peer(&t, a, &mutex);
+ test_negative(4, a, 192, 168, 0, 1);
+
+- /* These will hit the WARN_ON(len >= 128) in free_node if something
+- * goes wrong.
++ /* These will hit the WARN_ON(len >= MAX_ALLOWEDIPS_BITS) in free_node
++ * if something goes wrong.
+ */
+- for (i = 0; i < 128; ++i) {
++ for (i = 0; i < MAX_ALLOWEDIPS_BITS; ++i) {
+ part = cpu_to_be64(~(1LLU << (i % 64)));
+ memset(&ip, 0xff, 16);
+ memcpy((u8 *)&ip + (i < 64) * 8, &part, 8);
+diff --git a/drivers/net/wireguard/selftest/ratelimiter.c b/drivers/net/wireguard/selftest/ratelimiter.c
+index 007cd4457c5f6..ba87d294604fe 100644
+--- a/drivers/net/wireguard/selftest/ratelimiter.c
++++ b/drivers/net/wireguard/selftest/ratelimiter.c
+@@ -6,28 +6,29 @@
+ #ifdef DEBUG
+
+ #include <linux/jiffies.h>
++#include <linux/hrtimer.h>
+
+ static const struct {
+ bool result;
+- unsigned int msec_to_sleep_before;
++ u64 nsec_to_sleep_before;
+ } expected_results[] __initconst = {
+ [0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
+ [PACKETS_BURSTABLE] = { false, 0 },
+- [PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
++ [PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
+ [PACKETS_BURSTABLE + 2] = { false, 0 },
+- [PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
++ [PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
+ [PACKETS_BURSTABLE + 4] = { true, 0 },
+ [PACKETS_BURSTABLE + 5] = { false, 0 }
+ };
+
+ static __init unsigned int maximum_jiffies_at_index(int index)
+ {
+- unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
++ u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
+ int i;
+
+ for (i = 0; i <= index; ++i)
+- total_msecs += expected_results[i].msec_to_sleep_before;
+- return msecs_to_jiffies(total_msecs);
++ total_nsecs += expected_results[i].nsec_to_sleep_before;
++ return nsecs_to_jiffies(total_nsecs);
+ }
+
+ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+@@ -42,8 +43,12 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+ loop_start_time = jiffies;
+
+ for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
+- if (expected_results[i].msec_to_sleep_before)
+- msleep(expected_results[i].msec_to_sleep_before);
++ if (expected_results[i].nsec_to_sleep_before) {
++ ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
++ ns_to_ktime(expected_results[i].nsec_to_sleep_before));
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
++ }
+
+ if (time_is_before_jiffies(loop_start_time +
+ maximum_jiffies_at_index(i)))
+@@ -127,7 +132,7 @@ bool __init wg_ratelimiter_selftest(void)
+ if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
+ return true;
+
+- BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
++ BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
+
+ if (wg_ratelimiter_init())
+ goto out;
+@@ -176,7 +181,6 @@ bool __init wg_ratelimiter_selftest(void)
+ test += test_count;
+ goto err;
+ }
+- msleep(500);
+ continue;
+ } else if (ret < 0) {
+ test += test_count;
+@@ -195,7 +199,6 @@ bool __init wg_ratelimiter_selftest(void)
+ test += test_count;
+ goto err;
+ }
+- msleep(50);
+ continue;
+ }
+ test += test_count;
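[Note on the ratelimiter selftest hunks above] The conversion swaps millisecond bookkeeping and msleep() for nanosecond arithmetic plus an absolute-deadline sleep on CLOCK_BOOTTIME; the tighter, jitter-resistant timing is what allows the two settle msleep() calls to be removed outright. The core sleeping idiom, assuming kernel context:

#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/sched.h>

/* Sleep until an absolute BOOTTIME deadline roughly ns from now. */
static void sleep_ns_boottime(u64 ns)
{
	ktime_t deadline = ktime_add_ns(ktime_get_coarse_boottime(), ns);

	set_current_state(TASK_UNINTERRUPTIBLE);
	schedule_hrtimeout_range_clock(&deadline, 0, HRTIMER_MODE_ABS,
				       CLOCK_BOOTTIME);
}

The selftest above additionally pads the deadline by TICK_NSEC * 4 / 3 to absorb the granularity of the coarse clock.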
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index 771252dd6d4ea..fe34fcc00af02 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -3840,7 +3840,7 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar,
+ switch (txrate.flags) {
+ case WMI_RATE_PREAMBLE_OFDM:
+ if (arsta->arvif && arsta->arvif->vif)
+- conf = rcu_dereference(arsta->arvif->vif->chanctx_conf);
++ conf = rcu_dereference(arsta->arvif->vif->bss_conf.chanctx_conf);
+ if (conf && conf->def.chan->band == NL80211_BAND_5GHZ)
+ arsta->tx_info.status.rates[0].idx = rate_idx - 4;
+ break;
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 3570a5895ea8c..6407f509e91b8 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -659,7 +659,7 @@ int ath10k_mac_vif_chan(struct ieee80211_vif *vif,
+ struct ieee80211_chanctx_conf *conf;
+
+ rcu_read_lock();
+- conf = rcu_dereference(vif->chanctx_conf);
++ conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (!conf) {
+ rcu_read_unlock();
+ return -ENOENT;
+@@ -2028,7 +2028,7 @@ static void ath10k_mac_vif_ap_csa_count_down(struct ath10k_vif *arvif)
+ if (arvif->vdev_type != WMI_VDEV_TYPE_AP)
+ return;
+
+- if (!vif->csa_active)
++ if (!vif->bss_conf.csa_active)
+ return;
+
+ if (!arvif->is_up)
+@@ -8798,7 +8798,7 @@ ath10k_mac_change_chanctx_cnt_iter(void *data, u8 *mac,
+ {
+ struct ath10k_mac_change_chanctx_arg *arg = data;
+
+- if (rcu_access_pointer(vif->chanctx_conf) != arg->ctx)
++ if (rcu_access_pointer(vif->bss_conf.chanctx_conf) != arg->ctx)
+ return;
+
+ arg->n_vifs++;
+@@ -8811,7 +8811,7 @@ ath10k_mac_change_chanctx_fill_iter(void *data, u8 *mac,
+ struct ath10k_mac_change_chanctx_arg *arg = data;
+ struct ieee80211_chanctx_conf *ctx;
+
+- ctx = rcu_access_pointer(vif->chanctx_conf);
++ ctx = rcu_access_pointer(vif->bss_conf.chanctx_conf);
+ if (ctx != arg->ctx)
+ return;
+
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 607e8164bf984..5576ad9fd1161 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1249,13 +1249,12 @@ static void ath10k_snoc_init_napi(struct ath10k *ar)
+ static int ath10k_snoc_request_irq(struct ath10k *ar)
+ {
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+- int irqflags = IRQF_TRIGGER_RISING;
+ int ret, id;
+
+ for (id = 0; id < CE_COUNT_MAX; id++) {
+ ret = request_irq(ar_snoc->ce_irqs[id].irq_line,
+- ath10k_snoc_per_engine_handler,
+- irqflags, ce_name[id], ar);
++ ath10k_snoc_per_engine_handler, 0,
++ ce_name[id], ar);
+ if (ret) {
+ ath10k_err(ar,
+ "failed to register IRQ handler for CE %d: %d\n",
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 7efbe03fbca82..876410a47d1d2 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -205,7 +205,7 @@ static int ath10k_wmi_tlv_event_bcn_tx_status(struct ath10k *ar,
+ }
+
+ arvif = ath10k_get_arvif(ar, vdev_id);
+- if (arvif && arvif->is_up && arvif->vif->csa_active)
++ if (arvif && arvif->is_up && arvif->vif->bss_conf.csa_active)
+ ieee80211_queue_work(ar->hw, &arvif->ap_csa_work);
+
+ kfree(tb);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index cd438f76f284b..af19cab24c76d 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -3882,7 +3882,7 @@ void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
+ * Once CSA counter is completed stop sending beacons until
+ * actual channel switch is done
+ */
+- if (arvif->vif->csa_active &&
++ if (arvif->vif->bss_conf.csa_active &&
+ ieee80211_beacon_cntdwn_is_complete(arvif->vif)) {
+ ieee80211_csa_finish(arvif->vif);
+ continue;
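[Note on the vif field renames] All the vif->chanctx_conf, vif->csa_active and (further down) vif->mu_mimo_owner / vif->color_change_active renames in this series track a mac80211 restructuring for multi-link operation: per-link state now lives in struct ieee80211_bss_conf, so drivers reach it via vif->bss_conf. The access pattern itself is unchanged, as in this sketch:

#include <net/mac80211.h>

/* Read the band of a vif's current channel context, MLO-era field path. */
static enum nl80211_band vif_band(struct ieee80211_vif *vif)
{
	struct ieee80211_chanctx_conf *conf;
	enum nl80211_band band = NUM_NL80211_BANDS;

	rcu_read_lock();
	conf = rcu_dereference(vif->bss_conf.chanctx_conf);
	if (conf)
		band = conf->def.chan->band;
	rcu_read_unlock();

	return band;
}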
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index fa11807f48a94..c474147101382 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -140,8 +140,53 @@ ath11k_ahb_get_msi_irq_wcn6750(struct ath11k_base *ab, unsigned int vector)
+ return ab->pci.msi.irqs[vector];
+ }
+
++static inline u32
++ath11k_ahb_get_window_start_wcn6750(struct ath11k_base *ab, u32 offset)
++{
++ u32 window_start = 0;
++
++ /* If offset lies within DP register range, use 1st window */
++ if ((offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < ATH11K_PCI_WINDOW_RANGE_MASK)
++ window_start = ATH11K_PCI_WINDOW_START;
++ /* If offset lies within CE register range, use 2nd window */
++ else if ((offset ^ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab)) <
++ ATH11K_PCI_WINDOW_RANGE_MASK)
++ window_start = 2 * ATH11K_PCI_WINDOW_START;
++
++ return window_start;
++}
++
++static void
++ath11k_ahb_window_write32_wcn6750(struct ath11k_base *ab, u32 offset, u32 value)
++{
++ u32 window_start;
++
++ /* WCN6750 uses static window based register access*/
++ window_start = ath11k_ahb_get_window_start_wcn6750(ab, offset);
++
++ iowrite32(value, ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++}
++
++static u32 ath11k_ahb_window_read32_wcn6750(struct ath11k_base *ab, u32 offset)
++{
++ u32 window_start;
++ u32 val;
++
++ /* WCN6750 uses static window based register access */
++ window_start = ath11k_ahb_get_window_start_wcn6750(ab, offset);
++
++ val = ioread32(ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++ return val;
++}
++
+ static const struct ath11k_pci_ops ath11k_ahb_pci_ops_wcn6750 = {
++ .wakeup = NULL,
++ .release = NULL,
+ .get_msi_irq = ath11k_ahb_get_msi_irq_wcn6750,
++ .window_write32 = ath11k_ahb_window_write32_wcn6750,
++ .window_read32 = ath11k_ahb_window_read32_wcn6750,
+ };
+
+ static inline u32 ath11k_ahb_read32(struct ath11k_base *ab, u32 offset)
+@@ -971,19 +1016,24 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
+ }
+
+ ab->hif.ops = hif_ops;
+- ab->pci.ops = pci_ops;
+ ab->pdev = pdev;
+ ab->hw_rev = hw_rev;
+ platform_set_drvdata(pdev, ab);
+
+- ret = ath11k_ahb_setup_resources(ab);
+- if (ret)
++ ret = ath11k_pcic_register_pci_ops(ab, pci_ops);
++ if (ret) {
++ ath11k_err(ab, "failed to register PCI ops: %d\n", ret);
+ goto err_core_free;
++ }
+
+ ret = ath11k_core_pre_init(ab);
+ if (ret)
+ goto err_core_free;
+
++ ret = ath11k_ahb_setup_resources(ab);
++ if (ret)
++ goto err_core_free;
++
+ ret = ath11k_ahb_fw_resources_init(ab);
+ if (ret)
+ goto err_core_free;
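[Note on the WCN6750 accessors above] They select a static register window with an XOR range test: (offset ^ BASE) < MASK holds when offset and BASE agree in every bit above the window mask, i.e. when offset falls within BASE's window, using one XOR and one compare instead of a pair of bounds checks. A standalone illustration (both constants are made up for the example):

#include <stdint.h>
#include <stdio.h>

#define WINDOW_MASK 0x7ffffu		/* illustrative: 512 KiB window */
#define DP_BASE     0x01d80000u		/* illustrative window base */

static int in_window(uint32_t offset, uint32_t base)
{
	/* XOR clears bits the two values share; the result stays below
	 * the mask only when they differ in the low window bits alone. */
	return (offset ^ base) < WINDOW_MASK;
}

int main(void)
{
	printf("%d\n", in_window(DP_BASE + 0x10, DP_BASE));		/* 1 */
	printf("%d\n", in_window(DP_BASE + WINDOW_MASK + 1, DP_BASE));	/* 0 */
	return 0;
}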
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 1e98ff9ff2888..6ddc698f4a2dc 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -107,8 +107,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = true,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 0,
+- .ce_window_idx = 0,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = false,
+ },
+@@ -183,8 +181,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = true,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 0,
+- .ce_window_idx = 0,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = false,
+ },
+@@ -258,8 +254,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = false,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 0,
+- .ce_window_idx = 0,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = true,
+ },
+@@ -333,8 +327,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = false,
+ .static_window_map = true,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 3,
+- .ce_window_idx = 2,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = false,
+ },
+@@ -408,8 +400,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = false,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 0,
+- .ce_window_idx = 0,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = true,
+ },
+@@ -482,8 +472,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = false,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+- .dp_window_idx = 0,
+- .ce_window_idx = 0,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = true,
+ },
+@@ -556,8 +544,6 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ .fixed_mem_region = false,
+ .static_window_map = true,
+ .hybrid_bus_type = true,
+- .dp_window_idx = 1,
+- .ce_window_idx = 2,
+ .fixed_fw_mem = true,
+ .support_off_channel_tx = false,
+ },
+@@ -1225,23 +1211,23 @@ static int ath11k_core_pdev_create(struct ath11k_base *ab)
+ return ret;
+ }
+
+- ret = ath11k_mac_register(ab);
++ ret = ath11k_dp_pdev_alloc(ab);
+ if (ret) {
+- ath11k_err(ab, "failed register the radio with mac80211: %d\n", ret);
++ ath11k_err(ab, "failed to attach DP pdev: %d\n", ret);
+ goto err_pdev_debug;
+ }
+
+- ret = ath11k_dp_pdev_alloc(ab);
++ ret = ath11k_mac_register(ab);
+ if (ret) {
+- ath11k_err(ab, "failed to attach DP pdev: %d\n", ret);
+- goto err_mac_unregister;
++ ath11k_err(ab, "failed register the radio with mac80211: %d\n", ret);
++ goto err_dp_pdev_free;
+ }
+
+ ret = ath11k_thermal_register(ab);
+ if (ret) {
+ ath11k_err(ab, "could not register thermal device: %d\n",
+ ret);
+- goto err_dp_pdev_free;
++ goto err_mac_unregister;
+ }
+
+ ret = ath11k_spectral_init(ab);
+@@ -1254,10 +1240,10 @@ static int ath11k_core_pdev_create(struct ath11k_base *ab)
+
+ err_thermal_unregister:
+ ath11k_thermal_unregister(ab);
+-err_dp_pdev_free:
+- ath11k_dp_pdev_free(ab);
+ err_mac_unregister:
+ ath11k_mac_unregister(ab);
++err_dp_pdev_free:
++ ath11k_dp_pdev_free(ab);
+ err_pdev_debug:
+ ath11k_debugfs_pdev_destroy(ab);
+
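[Note on the core.c reorder above] It registers the DP pdev before mac80211 and swaps the two unwind labels to match, preserving the invariant that error labels undo completed steps in exactly the reverse order of initialisation. The skeleton of the idiom, with hypothetical helper names:

struct ab;				/* opaque for the sketch */
int dp_pdev_alloc(struct ab *ab);	/* hypothetical helpers */
int mac_register(struct ab *ab);
int thermal_register(struct ab *ab);
void dp_pdev_free(struct ab *ab);
void mac_unregister(struct ab *ab);

static int pdev_create(struct ab *ab)
{
	int ret;

	ret = dp_pdev_alloc(ab);
	if (ret)
		return ret;

	ret = mac_register(ab);
	if (ret)
		goto err_dp_free;	/* undo only what succeeded so far */

	ret = thermal_register(ab);
	if (ret)
		goto err_mac_unregister;

	return 0;

err_mac_unregister:
	mac_unregister(ab);		/* reverse order of the calls above */
err_dp_free:
	dp_pdev_free(ab);
	return ret;
}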
+diff --git a/drivers/net/wireless/ath/ath11k/debug.h b/drivers/net/wireless/ath/ath11k/debug.h
+index fbbd5fe02aa83..91545640c47b2 100644
+--- a/drivers/net/wireless/ath/ath11k/debug.h
++++ b/drivers/net/wireless/ath/ath11k/debug.h
+@@ -23,8 +23,8 @@ enum ath11k_debug_mask {
+ ATH11K_DBG_TESTMODE = 0x00000400,
+ ATH11k_DBG_HAL = 0x00000800,
+ ATH11K_DBG_PCI = 0x00001000,
+- ATH11K_DBG_DP_TX = 0x00001000,
+- ATH11K_DBG_DP_RX = 0x00002000,
++ ATH11K_DBG_DP_TX = 0x00002000,
++ ATH11K_DBG_DP_RX = 0x00004000,
+ ATH11K_DBG_ANY = 0xffffffff,
+ };
+
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 049774cc158cc..b3e133add1ce5 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -835,8 +835,9 @@ void ath11k_peer_rx_tid_delete(struct ath11k *ar,
+ HAL_REO_CMD_UPDATE_RX_QUEUE, &cmd,
+ ath11k_dp_rx_tid_del_func);
+ if (ret) {
+- ath11k_err(ar->ab, "failed to send HAL_REO_CMD_UPDATE_RX_QUEUE cmd, tid %d (%d)\n",
+- tid, ret);
++ if (ret != -ESHUTDOWN)
++ ath11k_err(ar->ab, "failed to send HAL_REO_CMD_UPDATE_RX_QUEUE cmd, tid %d (%d)\n",
++ tid, ret);
+ dma_unmap_single(ar->ab->dev, rx_tid->paddr, rx_tid->size,
+ DMA_BIDIRECTIONAL);
+ kfree(rx_tid->vaddr);
+diff --git a/drivers/net/wireless/ath/ath11k/htc.c b/drivers/net/wireless/ath/ath11k/htc.c
+index 069c29a4fac70..ca3aedc0252d5 100644
+--- a/drivers/net/wireless/ath/ath11k/htc.c
++++ b/drivers/net/wireless/ath/ath11k/htc.c
+@@ -258,8 +258,10 @@ void ath11k_htc_tx_completion_handler(struct ath11k_base *ab,
+ u8 eid;
+
+ eid = ATH11K_SKB_CB(skb)->eid;
+- if (eid >= ATH11K_HTC_EP_COUNT)
++ if (eid >= ATH11K_HTC_EP_COUNT) {
++ dev_kfree_skb_any(skb);
+ return;
++ }
+
+ ep = &htc->endpoint[eid];
+ spin_lock_bh(&htc->tx_lock);
+diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h
+index 77dc5c851c9b7..84c284fab5db2 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.h
++++ b/drivers/net/wireless/ath/ath11k/hw.h
+@@ -201,8 +201,6 @@ struct ath11k_hw_params {
+ bool fixed_mem_region;
+ bool static_window_map;
+ bool hybrid_bus_type;
+- u8 dp_window_idx;
+- u8 ce_window_idx;
+ bool fixed_fw_mem;
+ bool support_off_channel_tx;
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index ee1590b16eff7..06b86dcc3826b 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -505,7 +505,7 @@ static int ath11k_mac_vif_chan(struct ieee80211_vif *vif,
+ struct ieee80211_chanctx_conf *conf;
+
+ rcu_read_lock();
+- conf = rcu_dereference(vif->chanctx_conf);
++ conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (!conf) {
+ rcu_read_unlock();
+ return -ENOENT;
+@@ -1398,10 +1398,10 @@ void ath11k_mac_bcn_tx_event(struct ath11k_vif *arvif)
+ {
+ struct ieee80211_vif *vif = arvif->vif;
+
+- if (!vif->color_change_active && !arvif->bcca_zero_sent)
++ if (!vif->bss_conf.color_change_active && !arvif->bcca_zero_sent)
+ return;
+
+- if (vif->color_change_active && ieee80211_beacon_cntdwn_is_complete(vif)) {
++ if (vif->bss_conf.color_change_active && ieee80211_beacon_cntdwn_is_complete(vif)) {
+ arvif->bcca_zero_sent = true;
+ ieee80211_color_change_finish(vif);
+ return;
+@@ -1409,7 +1409,7 @@ void ath11k_mac_bcn_tx_event(struct ath11k_vif *arvif)
+
+ arvif->bcca_zero_sent = false;
+
+- if (vif->color_change_active)
++ if (vif->bss_conf.color_change_active)
+ ieee80211_beacon_update_cntdwn(vif);
+ ath11k_mac_setup_bcn_tmpl(arvif);
+ }
+@@ -6848,7 +6848,7 @@ ath11k_mac_change_chanctx_cnt_iter(void *data, u8 *mac,
+ {
+ struct ath11k_mac_change_chanctx_arg *arg = data;
+
+- if (rcu_access_pointer(vif->chanctx_conf) != arg->ctx)
++ if (rcu_access_pointer(vif->bss_conf.chanctx_conf) != arg->ctx)
+ return;
+
+ arg->n_vifs++;
+@@ -6861,7 +6861,7 @@ ath11k_mac_change_chanctx_fill_iter(void *data, u8 *mac,
+ struct ath11k_mac_change_chanctx_arg *arg = data;
+ struct ieee80211_chanctx_conf *ctx;
+
+- ctx = rcu_access_pointer(vif->chanctx_conf);
++ ctx = rcu_access_pointer(vif->bss_conf.chanctx_conf);
+ if (ctx != arg->ctx)
+ return;
+
+@@ -8297,11 +8297,15 @@ static int ath11k_mac_op_set_bios_sar_specs(struct ieee80211_hw *hw,
+ const struct cfg80211_sar_specs *sar)
+ {
+ struct ath11k *ar = hw->priv;
+- const struct cfg80211_sar_sub_specs *sspec = sar->sub_specs;
++ const struct cfg80211_sar_sub_specs *sspec;
+ int ret, index;
+ u8 *sar_tbl;
+ u32 i;
+
++ if (!sar || sar->type != NL80211_SAR_TYPE_POWER ||
++ sar->num_sub_specs == 0)
++ return -EINVAL;
++
+ mutex_lock(&ar->conf_mutex);
+
+ if (!test_bit(WMI_TLV_SERVICE_BIOS_SAR_SUPPORT, ar->ab->wmi_ab.svc_map) ||
+@@ -8310,12 +8314,6 @@ static int ath11k_mac_op_set_bios_sar_specs(struct ieee80211_hw *hw,
+ goto exit;
+ }
+
+- if (!sar || sar->type != NL80211_SAR_TYPE_POWER ||
+- sar->num_sub_specs == 0) {
+- ret = -EINVAL;
+- goto exit;
+- }
+-
+ ret = ath11k_wmi_pdev_set_bios_geo_table_param(ar);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to set geo table: %d\n", ret);
+@@ -8328,6 +8326,7 @@ static int ath11k_mac_op_set_bios_sar_specs(struct ieee80211_hw *hw,
+ goto exit;
+ }
+
++ sspec = sar->sub_specs;
+ for (i = 0; i < sar->num_sub_specs; i++) {
+ if (sspec->freq_range_index >= (BIOS_SAR_TABLE_LEN >> 1)) {
+ ath11k_warn(ar->ab, "Ignore bad frequency index %u, max allowed %u\n",
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index dedf1b88ddf6b..5bd34a6273d99 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -50,6 +50,22 @@ static void ath11k_pci_bus_release(struct ath11k_base *ab)
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ }
+
++static u32 ath11k_pci_get_window_start(struct ath11k_base *ab, u32 offset)
++{
++ if (!ab->hw_params.static_window_map)
++ return ATH11K_PCI_WINDOW_START;
++
++ if ((offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < ATH11K_PCI_WINDOW_RANGE_MASK)
++ /* if offset lies within DP register range, use 3rd window */
++ return 3 * ATH11K_PCI_WINDOW_START;
++ else if ((offset ^ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab)) <
++ ATH11K_PCI_WINDOW_RANGE_MASK)
++ /* if offset lies within CE register range, use 2nd window */
++ return 2 * ATH11K_PCI_WINDOW_START;
++ else
++ return ATH11K_PCI_WINDOW_START;
++}
++
+ static inline void ath11k_pci_select_window(struct ath11k_pci *ab_pci, u32 offset)
+ {
+ struct ath11k_base *ab = ab_pci->ab;
+@@ -70,26 +86,39 @@ static void
+ ath11k_pci_window_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ {
+ struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+- u32 window_start = ATH11K_PCI_WINDOW_START;
++ u32 window_start;
++
++ window_start = ath11k_pci_get_window_start(ab, offset);
+
+- spin_lock_bh(&ab_pci->window_lock);
+- ath11k_pci_select_window(ab_pci, offset);
+- iowrite32(value, ab->mem + window_start +
+- (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
+- spin_unlock_bh(&ab_pci->window_lock);
++ if (window_start == ATH11K_PCI_WINDOW_START) {
++ spin_lock_bh(&ab_pci->window_lock);
++ ath11k_pci_select_window(ab_pci, offset);
++ iowrite32(value, ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++ spin_unlock_bh(&ab_pci->window_lock);
++ } else {
++ iowrite32(value, ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++ }
+ }
+
+ static u32 ath11k_pci_window_read32(struct ath11k_base *ab, u32 offset)
+ {
+ struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+- u32 window_start = ATH11K_PCI_WINDOW_START;
+- u32 val;
++ u32 window_start, val;
+
+- spin_lock_bh(&ab_pci->window_lock);
+- ath11k_pci_select_window(ab_pci, offset);
+- val = ioread32(ab->mem + window_start +
+- (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
+- spin_unlock_bh(&ab_pci->window_lock);
++ window_start = ath11k_pci_get_window_start(ab, offset);
++
++ if (window_start == ATH11K_PCI_WINDOW_START) {
++ spin_lock_bh(&ab_pci->window_lock);
++ ath11k_pci_select_window(ab_pci, offset);
++ val = ioread32(ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++ spin_unlock_bh(&ab_pci->window_lock);
++ } else {
++ val = ioread32(ab->mem + window_start +
++ (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
++ }
+
+ return val;
+ }
+@@ -110,6 +139,8 @@ static const struct ath11k_pci_ops ath11k_pci_ops_qca6390 = {
+ };
+
+ static const struct ath11k_pci_ops ath11k_pci_ops_qcn9074 = {
++ .wakeup = NULL,
++ .release = NULL,
+ .get_msi_irq = ath11k_pci_get_msi_irq,
+ .window_write32 = ath11k_pci_window_write32,
+ .window_read32 = ath11k_pci_window_read32,
+@@ -697,6 +728,7 @@ static int ath11k_pci_probe(struct pci_dev *pdev,
+ struct ath11k_base *ab;
+ struct ath11k_pci *ab_pci;
+ u32 soc_hw_version_major, soc_hw_version_minor, addr;
++ const struct ath11k_pci_ops *pci_ops;
+ int ret;
+
+ ab = ath11k_core_alloc(&pdev->dev, sizeof(*ab_pci), ATH11K_BUS_PCI);
+@@ -754,10 +786,10 @@ static int ath11k_pci_probe(struct pci_dev *pdev,
+ goto err_pci_free_region;
+ }
+
+- ab->pci.ops = &ath11k_pci_ops_qca6390;
++ pci_ops = &ath11k_pci_ops_qca6390;
+ break;
+ case QCN9074_DEVICE_ID:
+- ab->pci.ops = &ath11k_pci_ops_qcn9074;
++ pci_ops = &ath11k_pci_ops_qcn9074;
+ ab->hw_rev = ATH11K_HW_QCN9074_HW10;
+ break;
+ case WCN6855_DEVICE_ID:
+@@ -787,7 +819,7 @@ unsupported_wcn6855_soc:
+ goto err_pci_free_region;
+ }
+
+- ab->pci.ops = &ath11k_pci_ops_qca6390;
++ pci_ops = &ath11k_pci_ops_qca6390;
+ break;
+ default:
+ dev_err(&pdev->dev, "Unknown PCI device found: 0x%x\n",
+@@ -796,6 +828,12 @@ unsupported_wcn6855_soc:
+ goto err_pci_free_region;
+ }
+
++ ret = ath11k_pcic_register_pci_ops(ab, pci_ops);
++ if (ret) {
++ ath11k_err(ab, "failed to register PCI ops: %d\n", ret);
++ goto err_pci_free_region;
++ }
++
+ ret = ath11k_pcic_init_msi_config(ab);
+ if (ret) {
+ ath11k_err(ab, "failed to init msi config: %d\n", ret);
+@@ -920,7 +958,9 @@ qmi_fail:
+ static void ath11k_pci_shutdown(struct pci_dev *pdev)
+ {
+ struct ath11k_base *ab = pci_get_drvdata(pdev);
++ struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+
++ ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ ath11k_pci_power_down(ab);
+ }
+
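[Note on the pci.c window hunks above] With a static window map the read/write paths no longer need to reprogram the sliding window, so they skip window_lock entirely; only the default dynamic window still takes the lock around the select-then-access pair. Reduced sketch, kernel context assumed and names illustrative:

#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define DYN_WINDOW_START 0x80000u	/* illustrative */
#define WINDOW_MASK (DYN_WINDOW_START - 1)

struct my_pci {
	void __iomem *mem;
	spinlock_t window_lock;
};

u32 get_window_start(struct my_pci *p, u32 offset);	/* hypothetical */
void select_window(struct my_pci *p, u32 offset);	/* hypothetical */

static u32 window_read32(struct my_pci *p, u32 offset)
{
	u32 start = get_window_start(p, offset);
	u32 val;

	if (start == DYN_WINDOW_START) {
		/* dynamic window: serialise the select + access pair */
		spin_lock_bh(&p->window_lock);
		select_window(p, offset);
		val = ioread32(p->mem + start + (offset & WINDOW_MASK));
		spin_unlock_bh(&p->window_lock);
	} else {
		/* statically mapped window: direct, lockless access */
		val = ioread32(p->mem + start + (offset & WINDOW_MASK));
	}
	return val;
}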
+diff --git a/drivers/net/wireless/ath/ath11k/pcic.c b/drivers/net/wireless/ath/ath11k/pcic.c
+index cf12b98c480d6..1adf20ebef27c 100644
+--- a/drivers/net/wireless/ath/ath11k/pcic.c
++++ b/drivers/net/wireless/ath/ath11k/pcic.c
+@@ -140,23 +140,8 @@ int ath11k_pcic_init_msi_config(struct ath11k_base *ab)
+ }
+ EXPORT_SYMBOL(ath11k_pcic_init_msi_config);
+
+-static inline u32 ath11k_pcic_get_window_start(struct ath11k_base *ab,
+- u32 offset)
+-{
+- u32 window_start = 0;
+-
+- if ((offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < ATH11K_PCI_WINDOW_RANGE_MASK)
+- window_start = ab->hw_params.dp_window_idx * ATH11K_PCI_WINDOW_START;
+- else if ((offset ^ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab)) <
+- ATH11K_PCI_WINDOW_RANGE_MASK)
+- window_start = ab->hw_params.ce_window_idx * ATH11K_PCI_WINDOW_START;
+-
+- return window_start;
+-}
+-
+ void ath11k_pcic_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ {
+- u32 window_start;
+ int ret = 0;
+
+ /* for offset beyond BAR + 4K - 32, may
+@@ -166,15 +151,10 @@ void ath11k_pcic_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->wakeup)
+ ret = ab->pci.ops->wakeup(ab);
+
+- if (offset < ATH11K_PCI_WINDOW_START) {
++ if (offset < ATH11K_PCI_WINDOW_START)
+ iowrite32(value, ab->mem + offset);
+- } else if (ab->hw_params.static_window_map) {
+- window_start = ath11k_pcic_get_window_start(ab, offset);
+- iowrite32(value, ab->mem + window_start +
+- (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
+- } else if (ab->pci.ops->window_write32) {
++ else
+ ab->pci.ops->window_write32(ab, offset, value);
+- }
+
+ if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
+ offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->release &&
+@@ -185,9 +165,8 @@ EXPORT_SYMBOL(ath11k_pcic_write32);
+
+ u32 ath11k_pcic_read32(struct ath11k_base *ab, u32 offset)
+ {
+- u32 val = 0;
+- u32 window_start;
+ int ret = 0;
++ u32 val;
+
+ /* for offset beyond BAR + 4K - 32, may
+ * need to wakeup the device to access.
+@@ -196,15 +175,10 @@ u32 ath11k_pcic_read32(struct ath11k_base *ab, u32 offset)
+ offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->wakeup)
+ ret = ab->pci.ops->wakeup(ab);
+
+- if (offset < ATH11K_PCI_WINDOW_START) {
++ if (offset < ATH11K_PCI_WINDOW_START)
+ val = ioread32(ab->mem + offset);
+- } else if (ab->hw_params.static_window_map) {
+- window_start = ath11k_pcic_get_window_start(ab, offset);
+- val = ioread32(ab->mem + window_start +
+- (offset & ATH11K_PCI_WINDOW_RANGE_MASK));
+- } else if (ab->pci.ops->window_read32) {
++ else
+ val = ab->pci.ops->window_read32(ab, offset);
+- }
+
+ if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
+ offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->release &&
+@@ -516,11 +490,6 @@ static irqreturn_t ath11k_pcic_ext_interrupt_handler(int irq, void *arg)
+ static int
+ ath11k_pcic_get_msi_irq(struct ath11k_base *ab, unsigned int vector)
+ {
+- if (!ab->pci.ops->get_msi_irq) {
+- WARN_ONCE(1, "get_msi_irq pci op not defined");
+- return -EOPNOTSUPP;
+- }
+-
+ return ab->pci.ops->get_msi_irq(ab, vector);
+ }
+
+@@ -746,3 +715,19 @@ int ath11k_pcic_map_service_to_pipe(struct ath11k_base *ab, u16 service_id,
+ return 0;
+ }
+ EXPORT_SYMBOL(ath11k_pcic_map_service_to_pipe);
++
++int ath11k_pcic_register_pci_ops(struct ath11k_base *ab,
++ const struct ath11k_pci_ops *pci_ops)
++{
++ if (!pci_ops)
++ return 0;
++
++ /* Return error if mandatory pci_ops callbacks are missing */
++ if (!pci_ops->get_msi_irq || !pci_ops->window_write32 ||
++ !pci_ops->window_read32)
++ return -EINVAL;
++
++ ab->pci.ops = pci_ops;
++ return 0;
++}
++EXPORT_SYMBOL(ath11k_pcic_register_pci_ops);
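[Note on ath11k_pcic_register_pci_ops() above] It centralises the ops assignment and rejects tables missing mandatory callbacks up front; that one-time validation is what lets the hot paths above (the pcic read/write helpers and get_msi_irq) drop their per-call NULL checks. The general pattern, with illustrative types:

#include <linux/errno.h>

struct dev_ops {
	int (*mandatory_read)(void *ctx);	/* required */
	int (*optional_wakeup)(void *ctx);	/* may stay NULL */
};

struct dev_ctx {
	const struct dev_ops *ops;
};

/* Validate once at registration so fast paths can call unconditionally. */
static int register_ops(struct dev_ctx *dev, const struct dev_ops *ops)
{
	if (!ops)
		return 0;		/* "no ops" stays legal, as above */
	if (!ops->mandatory_read)
		return -EINVAL;
	dev->ops = ops;
	return 0;
}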
+diff --git a/drivers/net/wireless/ath/ath11k/pcic.h b/drivers/net/wireless/ath/ath11k/pcic.h
+index c53d86289a8eb..0afbb34510dbb 100644
+--- a/drivers/net/wireless/ath/ath11k/pcic.h
++++ b/drivers/net/wireless/ath/ath11k/pcic.h
+@@ -43,4 +43,6 @@ int ath11k_pcic_map_service_to_pipe(struct ath11k_base *ab, u16 service_id,
+ void ath11k_pcic_ce_irqs_enable(struct ath11k_base *ab);
+ void ath11k_pcic_ce_irq_disable_sync(struct ath11k_base *ab);
+ int ath11k_pcic_init_msi_config(struct ath11k_base *ab);
++int ath11k_pcic_register_pci_ops(struct ath11k_base *ab,
++ const struct ath11k_pci_ops *pci_ops);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 7b1dc19c565ef..cc84bd53ddae9 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -1700,7 +1700,7 @@ int ath11k_wmi_bcn_tmpl(struct ath11k *ar, u32 vdev_id,
+ cmd->vdev_id = vdev_id;
+ cmd->tim_ie_offset = offs->tim_offset;
+
+- if (vif->csa_active) {
++ if (vif->bss_conf.csa_active) {
+ cmd->csa_switch_count_offset = offs->cntdwn_counter_offs[0];
+ cmd->ext_csa_switch_count_offset = offs->cntdwn_counter_offs[1];
+ }
+@@ -7476,7 +7476,7 @@ ath11k_wmi_process_csa_switch_count_event(struct ath11k_base *ab,
+ continue;
+ }
+
+- if (arvif->is_up && arvif->vif->csa_active)
++ if (arvif->is_up && arvif->vif->bss_conf.csa_active)
+ ieee80211_csa_finish(arvif->vif);
+ }
+ rcu_read_unlock();
+diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c
+index bd1183830e911..33ed54738d470 100644
+--- a/drivers/net/wireless/ath/ath6kl/cfg80211.c
++++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c
+@@ -1119,7 +1119,7 @@ void ath6kl_cfg80211_ch_switch_notify(struct ath6kl_vif *vif, int freq,
+ NL80211_CHAN_HT20 : NL80211_CHAN_NO_HT);
+
+ mutex_lock(&vif->wdev.mtx);
+- cfg80211_ch_switch_notify(vif->ndev, &chandef);
++ cfg80211_ch_switch_notify(vif->ndev, &chandef, 0);
+ mutex_unlock(&vif->wdev.mtx);
+ }
+
+@@ -2967,7 +2967,8 @@ static int ath6kl_change_beacon(struct wiphy *wiphy, struct net_device *dev,
+ return ath6kl_set_ies(vif, beacon);
+ }
+
+-static int ath6kl_stop_ap(struct wiphy *wiphy, struct net_device *dev)
++static int ath6kl_stop_ap(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id)
+ {
+ struct ath6kl *ar = ath6kl_priv(dev);
+ struct ath6kl_vif *vif = netdev_priv(dev);
+@@ -3368,6 +3369,7 @@ static int ath6kl_cfg80211_sscan_stop(struct wiphy *wiphy,
+
+ static int ath6kl_cfg80211_set_bitrate(struct wiphy *wiphy,
+ struct net_device *dev,
++ unsigned int link_id,
+ const u8 *addr,
+ const struct cfg80211_bitrate_mask *mask)
+ {
+diff --git a/drivers/net/wireless/ath/ath9k/beacon.c b/drivers/net/wireless/ath/ath9k/beacon.c
+index 72e2e71aac0e6..8b1b966bcef10 100644
+--- a/drivers/net/wireless/ath/ath9k/beacon.c
++++ b/drivers/net/wireless/ath/ath9k/beacon.c
+@@ -362,7 +362,7 @@ static void ath9k_set_tsfadjust(struct ath_softc *sc,
+
+ bool ath9k_csa_is_finished(struct ath_softc *sc, struct ieee80211_vif *vif)
+ {
+- if (!vif || !vif->csa_active)
++ if (!vif || !vif->bss_conf.csa_active)
+ return false;
+
+ if (!ieee80211_beacon_cntdwn_is_complete(vif))
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 6b45e63fae4ba..e3d546ef71ddc 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -327,11 +327,11 @@ static inline struct ath9k_htc_tx_ctl *HTC_SKB_CB(struct sk_buff *skb)
+ }
+
+ #ifdef CONFIG_ATH9K_HTC_DEBUGFS
+-
+-#define TX_STAT_INC(c) (hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
+-#define TX_STAT_ADD(c, a) (hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
+-#define RX_STAT_INC(c) (hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
+-#define RX_STAT_ADD(c, a) (hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
++#define __STAT_SAFE(expr) (hif_dev->htc_handle->drv_priv ? (expr) : 0)
++#define TX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
++#define TX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
++#define RX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
++#define RX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
+ #define CAB_STAT_INC priv->debug.tx_stats.cab_queued++
+
+ #define TX_QSTAT_INC(q) (priv->debug.tx_stats.queue_stats[q]++)
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+index c745897aa3d6c..468bc934d8485 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+@@ -511,7 +511,7 @@ bool ath9k_htc_csa_is_finished(struct ath9k_htc_priv *priv)
+ struct ieee80211_vif *vif;
+
+ vif = priv->csa_vif;
+- if (!vif || !vif->csa_active)
++ if (!vif || !vif->bss_conf.csa_active)
+ return false;
+
+ if (!ieee80211_beacon_cntdwn_is_complete(vif))
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index ff61ae34ecdf0..07ac88fb1c577 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -944,7 +944,6 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ priv->hw = hw;
+ priv->htc = htc_handle;
+ priv->dev = dev;
+- htc_handle->drv_priv = priv;
+ SET_IEEE80211_DEV(hw, priv->dev);
+
+ ret = ath9k_htc_wait_for_target(priv);
+@@ -965,6 +964,8 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ if (ret)
+ goto err_init;
+
++ htc_handle->drv_priv = priv;
++
+ return 0;
+
+ err_init:
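[Note on the ath9k htc hunks] The __STAT_SAFE() macros and this probe reorder attack the same race from both sides: htc_handle->drv_priv is now published only once the target is fully initialised, and the debugfs stats macros tolerate a NULL drv_priv for completions that arrive before that point. The publish-last half, sketched with illustrative names:

struct priv;			/* opaque for the sketch */
struct htc_target {
	struct priv *drv_priv;
};
int init_device(struct priv *priv);	/* hypothetical; may trigger URBs */

static int probe(struct htc_target *htc, struct priv *priv)
{
	int ret;

	ret = init_device(priv);
	if (ret)
		return ret;	/* drv_priv never became visible */

	/* publish last: anyone dereferencing drv_priv now sees a
	 * fully constructed object */
	htc->drv_priv = priv;
	return 0;
}

Strictly ordered publication like this can also want a store-release when readers are truly concurrent; here the __STAT_SAFE() NULL check covers the benign statistics case.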
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 8f2638f5b87bb..f93bdffa4d1dd 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -2098,8 +2098,8 @@ static int wil_cfg80211_change_beacon(struct wiphy *wiphy,
+ bcon->tail_len))
+ privacy = 1;
+
+- memcpy(vif->ssid, wdev->ssid, wdev->ssid_len);
+- vif->ssid_len = wdev->ssid_len;
++ memcpy(vif->ssid, wdev->u.ap.ssid, wdev->u.ap.ssid_len);
++ vif->ssid_len = wdev->u.ap.ssid_len;
+
+ /* in case privacy has changed, need to restart the AP */
+ if (vif->privacy != privacy) {
+@@ -2108,7 +2108,7 @@ static int wil_cfg80211_change_beacon(struct wiphy *wiphy,
+
+ rc = _wil_cfg80211_start_ap(wiphy, ndev, vif->ssid,
+ vif->ssid_len, privacy,
+- wdev->beacon_interval,
++ wdev->links[0].ap.beacon_interval,
+ vif->channel,
+ vif->wmi_edmg_channel, bcon,
+ vif->hidden_ssid,
+@@ -2186,7 +2186,8 @@ static int wil_cfg80211_start_ap(struct wiphy *wiphy,
+ }
+
+ static int wil_cfg80211_stop_ap(struct wiphy *wiphy,
+- struct net_device *ndev)
++ struct net_device *ndev,
++ unsigned int link_id)
+ {
+ struct wil6210_priv *wil = wiphy_to_wil(wiphy);
+ struct wil6210_vif *vif = ndev_to_vif(ndev);
+diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c
+index 64d6c98174c8b..04d1aa0e2d357 100644
+--- a/drivers/net/wireless/ath/wil6210/debugfs.c
++++ b/drivers/net/wireless/ath/wil6210/debugfs.c
+@@ -1010,20 +1010,14 @@ static ssize_t wil_write_file_wmi(struct file *file, const char __user *buf,
+ void *cmd;
+ int cmdlen = len - sizeof(struct wmi_cmd_hdr);
+ u16 cmdid;
+- int rc, rc1;
++ int rc1;
+
+- if (cmdlen < 0)
++ if (cmdlen < 0 || *ppos != 0)
+ return -EINVAL;
+
+- wmi = kmalloc(len, GFP_KERNEL);
+- if (!wmi)
+- return -ENOMEM;
+-
+- rc = simple_write_to_buffer(wmi, len, ppos, buf, len);
+- if (rc < 0) {
+- kfree(wmi);
+- return rc;
+- }
++ wmi = memdup_user(buf, len);
++ if (IS_ERR(wmi))
++ return PTR_ERR(wmi);
+
+ cmd = (cmdlen > 0) ? &wmi[1] : NULL;
+ cmdid = le16_to_cpu(wmi->command_id);
+@@ -1033,7 +1027,7 @@ static ssize_t wil_write_file_wmi(struct file *file, const char __user *buf,
+
+ wil_info(wil, "0x%04x[%d] -> %d\n", cmdid, cmdlen, rc1);
+
+- return rc;
++ return len;
+ }
+
+ static const struct file_operations fops_wmi = {
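[Note on the wil6210 debugfs rewrite above] It is the stock conversion from kmalloc() + simple_write_to_buffer() to memdup_user(): one call allocates and copies, failures come back as an ERR_PTR, and a handler that only accepts whole-buffer writes can reject *ppos != 0 and return len on success. The canonical shape:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/string.h>

static ssize_t my_write(struct file *file, const char __user *buf,
			size_t len, loff_t *ppos)
{
	void *kbuf;

	if (*ppos != 0)		/* whole-buffer interface, no partial writes */
		return -EINVAL;

	kbuf = memdup_user(buf, len);
	if (IS_ERR(kbuf))
		return PTR_ERR(kbuf);

	/* ... parse and act on kbuf ... */

	kfree(kbuf);
	return len;		/* everything consumed */
}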
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 605206abe4246..11e1f07f83e0a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -4965,7 +4965,8 @@ exit:
+ return err;
+ }
+
+-static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev)
++static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev,
++ unsigned int link_id)
+ {
+ struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
+ struct brcmf_if *ifp = netdev_priv(ndev);
+@@ -5302,6 +5303,7 @@ exit:
+
+ static int brcmf_cfg80211_get_channel(struct wiphy *wiphy,
+ struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+index 9dd2d890e35fe..c62f299b9e0a8 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+@@ -2403,7 +2403,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ /* Repeat initial/next rate.
+ * For legacy IL_NUMBER_TRY == 1, this loop will not execute.
+ * For HT IL_HT_NUMBER_TRY == 3, this executes twice. */
+- while (repeat_rate > 0 && idx < LINK_QUAL_MAX_RETRY_NUM) {
++ while (repeat_rate > 0) {
+ if (is_legacy(tbl_type.lq_type)) {
+ if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ ant_toggle_cnt++;
+@@ -2422,6 +2422,8 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ cpu_to_le32(new_rate);
+ repeat_rate--;
+ idx++;
++ if (idx >= LINK_QUAL_MAX_RETRY_NUM)
++ goto out;
+ }
+
+ il4965_rs_get_tbl_info_from_mcs(new_rate, lq_sta->band,
+@@ -2466,6 +2468,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ repeat_rate--;
+ }
+
++out:
+ lq_cmd->agg_params.agg_frame_cnt_limit = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+ lq_cmd->agg_params.agg_dis_start_th = LINK_QUAL_AGG_DISABLE_START_DEF;
+
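[Note on the iwlegacy fix above] It moves the LINK_QUAL_MAX_RETRY_NUM bound from the first loop's header to a check immediately after the index increment, bailing to a shared label; a bound that lives in only one loop header leaves any later writers of the same table unguarded. The shape of the fix, reduced to standalone C:

#define MAX_ENTRIES 16		/* stand-in for LINK_QUAL_MAX_RETRY_NUM */

/* Check the index where it moves, so every writer of table[] is covered. */
static unsigned int fill(unsigned int table[MAX_ENTRIES], int repeat)
{
	unsigned int idx = 0;

	while (repeat > 0) {
		table[idx] = 0xdeadbeefu;	/* illustrative payload */
		repeat--;
		idx++;
		if (idx >= MAX_ENTRIES)
			goto out;	/* single bail-out for all the loops */
	}
	/* ... further loops appending at idx get the same guard ... */
out:
	return idx;
}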
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+index 9b194cb8d65ed..8760f2c733696 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2013-2014, 2018-2020 Intel Corporation
++ * Copyright (C) 2013-2014, 2018-2020, 2022 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ */
+ #include <linux/ieee80211.h>
+@@ -106,7 +106,7 @@ iwl_get_coex_type(struct iwl_mvm *mvm, const struct ieee80211_vif *vif)
+
+ rcu_read_lock();
+
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+
+ if (!chanctx_conf ||
+ chanctx_conf->def.chan->band != NL80211_BAND_2GHZ) {
+@@ -283,7 +283,7 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
+ return;
+ }
+
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+
+ /* If channel context is invalid or not on 2.4GHz .. */
+ if ((!chanctx_conf ||
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 61f9136a333d6..8edc8646a23a0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -731,7 +731,7 @@ static int iwl_mvm_d3_reprogram(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ return -EINVAL;
+
+ rcu_read_lock();
+- ctx = rcu_dereference(vif->chanctx_conf);
++ ctx = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (WARN_ON(!ctx)) {
+ rcu_read_unlock();
+ return -EINVAL;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+index 7d9faeffd154a..78d8b37eb71ad 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+@@ -234,7 +234,7 @@ static ssize_t iwl_dbgfs_mac_params_read(struct file *file,
+ }
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (chanctx_conf)
+ pos += scnprintf(buf+pos, bufsz-pos,
+ "idle rx chains %d, active rx chains: %d\n",
+@@ -597,7 +597,7 @@ static ssize_t iwl_dbgfs_rx_phyinfo_write(struct ieee80211_vif *vif, char *buf,
+ mutex_lock(&mvm->mutex);
+ rcu_read_lock();
+
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ /* make sure the channel context is assigned */
+ if (!chanctx_conf) {
+ rcu_read_unlock();
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
+index 9729680476fdd..e862d1b43f217 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+ * Copyright (C) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+ */
+ #include <net/cfg80211.h>
+ #include <linux/etherdevice.h>
+@@ -398,7 +398,7 @@ int iwl_mvm_ftm_start_responder(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ }
+
+ rcu_read_lock();
+- pctx = rcu_dereference(vif->chanctx_conf);
++ pctx = rcu_dereference(vif->bss_conf.chanctx_conf);
+ /* Copy the ctx to unlock the rcu and send the phy ctxt. We don't care
+ * about changes in the ctx after releasing the lock because the driver
+ * is still protected by the mutex. */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index 56fa20596f168..7756ac0faf3fc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -481,7 +481,7 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm,
+ eth_broadcast_addr(cmd->bssid_addr);
+
+ rcu_read_lock();
+- chanctx = rcu_dereference(vif->chanctx_conf);
++ chanctx = rcu_dereference(vif->bss_conf.chanctx_conf);
+ iwl_mvm_ack_rates(mvm, vif, chanctx ? chanctx->def.chan->band
+ : NL80211_BAND_2GHZ,
+ &cck_ack_rates, &ofdm_ack_rates);
+@@ -934,7 +934,7 @@ static int iwl_mvm_mac_ctxt_send_beacon_v9(struct iwl_mvm *mvm,
+
+ /* Enable FILS on PSC channels only */
+ rcu_read_lock();
+- ctx = rcu_dereference(vif->chanctx_conf);
++ ctx = rcu_dereference(vif->bss_conf.chanctx_conf);
+ channel = ieee80211_frequency_to_channel(ctx->def.chan->center_freq);
+ WARN_ON(channel == 0);
+ if (cfg80211_channel_is_psc(ctx->def.chan) &&
+@@ -1335,7 +1335,7 @@ void iwl_mvm_rx_beacon_notif(struct iwl_mvm *mvm,
+
+ csa_vif = rcu_dereference_protected(mvm->csa_vif,
+ lockdep_is_held(&mvm->mutex));
+- if (unlikely(csa_vif && csa_vif->csa_active))
++ if (unlikely(csa_vif && csa_vif->bss_conf.csa_active))
+ iwl_mvm_csa_count_down(mvm, csa_vif, mvm->ap_last_beacon_gp2,
+ (status == TX_STATUS_SUCCESS));
+
+@@ -1558,7 +1558,7 @@ void iwl_mvm_channel_switch_start_notif(struct iwl_mvm *mvm,
+ switch (vif->type) {
+ case NL80211_IFTYPE_AP:
+ csa_vif = rcu_dereference(mvm->csa_vif);
+- if (WARN_ON(!csa_vif || !csa_vif->csa_active ||
++ if (WARN_ON(!csa_vif || !csa_vif->bss_conf.csa_active ||
+ csa_vif != vif))
+ goto out_unlock;
+
+@@ -1587,7 +1587,7 @@ void iwl_mvm_channel_switch_start_notif(struct iwl_mvm *mvm,
+ */
+ if (iwl_fw_lookup_notif_ver(mvm->fw, MAC_CONF_GROUP,
+ CHANNEL_SWITCH_ERROR_NOTIF,
+- 0) && !vif->csa_active) {
++ 0) && !vif->bss_conf.csa_active) {
+ IWL_DEBUG_INFO(mvm, "Channel Switch was canceled\n");
+ iwl_mvm_cancel_channel_switch(mvm, vif, mac_id);
+ break;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index bb9bd21653555..c5626ff838058 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1768,7 +1768,7 @@ static int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm,
+ static void iwl_mvm_mu_mimo_iface_iterator(void *_data, u8 *mac,
+ struct ieee80211_vif *vif)
+ {
+- if (vif->mu_mimo_owner) {
++ if (vif->bss_conf.mu_mimo_owner) {
+ struct iwl_mu_group_mgmt_notif *notif = _data;
+
+ /*
+@@ -1965,7 +1965,7 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
+
+ rcu_read_lock();
+
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return;
+@@ -2337,7 +2337,7 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
+ * However, on HW restart we should restore this data.
+ */
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+- (changes & BSS_CHANGED_MU_GROUPS) && vif->mu_mimo_owner) {
++ (changes & BSS_CHANGED_MU_GROUPS) && vif->bss_conf.mu_mimo_owner) {
+ ret = iwl_mvm_update_mu_groups(mvm, vif);
+ if (ret)
+ IWL_ERR(mvm,
+@@ -4004,7 +4004,7 @@ static void iwl_mvm_ftm_responder_chanctx_iter(void *_data, u8 *mac,
+ {
+ struct iwl_mvm_ftm_responder_iter_data *data = _data;
+
+- if (rcu_access_pointer(vif->chanctx_conf) == data->ctx &&
++ if (rcu_access_pointer(vif->bss_conf.chanctx_conf) == data->ctx &&
+ vif->type == NL80211_IFTYPE_AP && vif->bss_conf.ftmr_params)
+ data->responder = true;
+ }
+@@ -4631,7 +4631,7 @@ static int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
+ csa_vif =
+ rcu_dereference_protected(mvm->csa_vif,
+ lockdep_is_held(&mvm->mutex));
+- if (WARN_ONCE(csa_vif && csa_vif->csa_active,
++ if (WARN_ONCE(csa_vif && csa_vif->bss_conf.csa_active,
+ "Another CSA is already in progress")) {
+ ret = -EBUSY;
+ goto out_unlock;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+index b9bd81242b216..afdf3bb523e93 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+@@ -283,7 +283,7 @@ static bool iwl_mvm_power_is_radar(struct ieee80211_vif *vif)
+ bool radar_detect = false;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ WARN_ON(!chanctx_conf);
+ if (chanctx_conf) {
+ chan = chanctx_conf->def.chan;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 974eeecc91537..303975f9e2b58 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1980,7 +1980,7 @@ static bool rs_tpc_perform(struct iwl_mvm *mvm,
+ #endif
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf))
+ band = NUM_NL80211_BANDS;
+ else
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index bbb1522e7280a..ae23950d566f0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1861,6 +1861,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm,
+ iwl_mvm_txq_from_mac80211(sta->txq[i]);
+
+ mvmtxq->txq_id = IWL_MVM_INVALID_QUEUE;
++ list_del_init(&mvmtxq->list);
+ }
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
+index bf04326e35ff0..674dd137fb9fe 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
+@@ -2,7 +2,7 @@
+ /*
+ * Copyright (C) 2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2020, 2022 Intel Corporation
+ */
+ #include <linux/etherdevice.h>
+ #include "mvm.h"
+@@ -380,7 +380,7 @@ iwl_mvm_tdls_config_channel_switch(struct iwl_mvm *mvm,
+ type == TDLS_MOVE_CH) {
+ /* we need to return to base channel */
+ struct ieee80211_chanctx_conf *chanctx =
+- rcu_dereference(vif->chanctx_conf);
++ rcu_dereference(vif->bss_conf.chanctx_conf);
+
+ if (WARN_ON_ONCE(!chanctx)) {
+ rcu_read_unlock();
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 6edf2b79db43a..4f0794a45bf58 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2017 Intel Deutschland GmbH
+ */
+@@ -123,7 +123,7 @@ static void iwl_mvm_csa_noa_start(struct iwl_mvm *mvm)
+ rcu_read_lock();
+
+ csa_vif = rcu_dereference(mvm->csa_vif);
+- if (!csa_vif || !csa_vif->csa_active)
++ if (!csa_vif || !csa_vif->bss_conf.csa_active)
+ goto out_unlock;
+
+ IWL_DEBUG_TE(mvm, "CSA NOA started\n");
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 8125bb76f59e8..f9e08b339e0c4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1959,7 +1959,7 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid,
+
+ if (mvmsta->vif)
+ chanctx_conf =
+- rcu_dereference(mvmsta->vif->chanctx_conf);
++ rcu_dereference(mvmsta->vif->bss_conf.chanctx_conf);
+
+ if (WARN_ON_ONCE(!chanctx_conf))
+ goto out;
+diff --git a/drivers/net/wireless/intersil/p54/main.c b/drivers/net/wireless/intersil/p54/main.c
+index a3ca6620dc0c6..8fa3ec71603e3 100644
+--- a/drivers/net/wireless/intersil/p54/main.c
++++ b/drivers/net/wireless/intersil/p54/main.c
+@@ -682,7 +682,7 @@ static void p54_flush(struct ieee80211_hw *dev, struct ieee80211_vif *vif,
+ * queues have already been stopped and no new frames can sneak
+ * up from behind.
+ */
+- while ((total = p54_flush_count(priv) && i--)) {
++ while ((total = p54_flush_count(priv)) && i--) {
+ /* waste time */
+ msleep(20);
+ }
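[Note on the p54 one-liner above] It is a textbook precedence bug: && binds tighter than =, so "total = p54_flush_count(priv) && i--" assigned the boolean result of the whole conjunction to total rather than the queue count, leaving total as 0 or 1. A runnable demonstration:

#include <stdio.h>

int main(void)
{
	int count = 5, i = 2, total;

	total = count && i--;		/* parsed as total = (count && i--) */
	printf("%d\n", total);		/* prints 1, not 5 */

	i = 2;
	total = 0;
	if ((total = count) && i--)	/* the fix: parenthesise the store */
		;
	printf("%d\n", total);		/* prints 5 */
	return 0;
}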
+diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
+index f99b7ba69fc3d..19152fd449ba7 100644
+--- a/drivers/net/wireless/intersil/p54/p54spi.c
++++ b/drivers/net/wireless/intersil/p54/p54spi.c
+@@ -164,7 +164,7 @@ static int p54spi_request_firmware(struct ieee80211_hw *dev)
+
+ ret = p54_parse_firmware(dev, priv->firmware);
+ if (ret) {
+- release_firmware(priv->firmware);
++ /* the firmware is released by the caller */
+ return ret;
+ }
+
+@@ -659,6 +659,7 @@ static int p54spi_probe(struct spi_device *spi)
+ return 0;
+
+ err_free_common:
++ release_firmware(priv->firmware);
+ free_irq(gpio_to_irq(p54spi_gpio_irq), spi);
+ err_free_gpio_irq:
+ gpio_free(p54spi_gpio_irq);
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 6f83af849f2e0..b511e705a46e4 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -680,7 +680,7 @@ struct mac80211_hwsim_data {
+ bool ps_poll_pending;
+ struct dentry *debugfs;
+
+- uintptr_t pending_cookie;
++ atomic_t pending_cookie;
+ struct sk_buff_head pending; /* packets pending */
+ /*
+ * Only radios in the same group can communicate together (the
+@@ -889,7 +889,7 @@ static void hwsim_send_ps_poll(void *dat, u8 *mac, struct ieee80211_vif *vif)
+
+ rcu_read_lock();
+ mac80211_hwsim_tx_frame(data->hw, skb,
+- rcu_dereference(vif->chanctx_conf)->def.chan);
++ rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan);
+ rcu_read_unlock();
+ }
+
+@@ -922,7 +922,7 @@ static void hwsim_send_nullfunc(struct mac80211_hwsim_data *data, u8 *mac,
+
+ rcu_read_lock();
+ mac80211_hwsim_tx_frame(data->hw, skb,
+- rcu_dereference(vif->chanctx_conf)->def.chan);
++ rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan);
+ rcu_read_unlock();
+ }
+
+@@ -1416,8 +1416,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ goto nla_put_failure;
+
+ /* We create a cookie to identify this skb */
+- data->pending_cookie++;
+- cookie = data->pending_cookie;
++ cookie = atomic_inc_return(&data->pending_cookie);
+ info->rate_driver_data[0] = (void *)cookie;
+ if (nla_put_u64_64bit(skb, HWSIM_ATTR_COOKIE, cookie, HWSIM_ATTR_PAD))
+ goto nla_put_failure;
+@@ -1465,11 +1464,11 @@ static void mac80211_hwsim_tx_iter(void *_data, u8 *addr,
+ {
+ struct tx_iter_data *data = _data;
+
+- if (!vif->chanctx_conf)
++ if (!vif->bss_conf.chanctx_conf)
+ return;
+
+ if (!hwsim_chans_compat(data->channel,
+- rcu_dereference(vif->chanctx_conf)->def.chan))
++ rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan))
+ return;
+
+ data->receive = true;
+@@ -1687,7 +1686,11 @@ static void mac80211_hwsim_tx(struct ieee80211_hw *hw,
+ } else if (txi->hw_queue == 4) {
+ channel = data->tmp_chan;
+ } else {
+- chanctx_conf = rcu_dereference(txi->control.vif->chanctx_conf);
++ struct ieee80211_bss_conf *bss_conf;
++
++ bss_conf = &txi->control.vif->bss_conf;
++
++ chanctx_conf = rcu_dereference(bss_conf->chanctx_conf);
+ if (chanctx_conf) {
+ channel = chanctx_conf->def.chan;
+ confbw = chanctx_conf->def.width;
+@@ -1936,14 +1939,14 @@ static void mac80211_hwsim_beacon_tx(void *arg, u8 *mac,
+ }
+
+ mac80211_hwsim_tx_frame(hw, skb,
+- rcu_dereference(vif->chanctx_conf)->def.chan);
++ rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan);
+
+ while ((skb = ieee80211_get_buffered_bc(hw, vif)) != NULL) {
+ mac80211_hwsim_tx_frame(hw, skb,
+- rcu_dereference(vif->chanctx_conf)->def.chan);
++ rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan);
+ }
+
+- if (vif->csa_active && ieee80211_beacon_cntdwn_is_complete(vif))
++ if (vif->bss_conf.csa_active && ieee80211_beacon_cntdwn_is_complete(vif))
+ ieee80211_csa_finish(vif);
+ }
+
+@@ -2205,7 +2208,7 @@ mac80211_hwsim_sta_rc_update(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *chanctx_conf;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+
+ if (!WARN_ON(!chanctx_conf))
+ confbw = chanctx_conf->def.width;
+@@ -4080,6 +4083,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ const u8 *src;
+ unsigned int hwsim_flags;
+ int i;
++ unsigned long flags;
+ bool found = false;
+
+ if (!info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER] ||
+@@ -4107,18 +4111,20 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ }
+
+ /* look for the skb matching the cookie passed back from user */
++ spin_lock_irqsave(&data2->pending.lock, flags);
+ skb_queue_walk_safe(&data2->pending, skb, tmp) {
+- u64 skb_cookie;
++ uintptr_t skb_cookie;
+
+ txi = IEEE80211_SKB_CB(skb);
+- skb_cookie = (u64)(uintptr_t)txi->rate_driver_data[0];
++ skb_cookie = (uintptr_t)txi->rate_driver_data[0];
+
+ if (skb_cookie == ret_skb_cookie) {
+- skb_unlink(skb, &data2->pending);
++ __skb_unlink(skb, &data2->pending);
+ found = true;
+ break;
+ }
+ }
++ spin_unlock_irqrestore(&data2->pending.lock, flags);
+
+ /* not found */
+ if (!found)
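[Note] The two hwsim hunks above close a race on the TX cookie path: pending_cookie becomes an atomic_t so concurrent transmitters can no longer hand two skbs the same cookie, and the lookup now walks data2->pending under the queue's own spinlock, pairing the lock with __skb_unlink(), the variant that assumes the lock is already held. A sketch of that locked-walk pattern, using a hypothetical helper rather than the hwsim code itself:

#include <linux/skbuff.h>
#include <net/mac80211.h>

/* Sketch only: unlink one skb by cookie with the queue lock held. */
static struct sk_buff *dequeue_by_cookie(struct sk_buff_head *q,
					 uintptr_t cookie)
{
	struct sk_buff *skb, *tmp, *found = NULL;
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	skb_queue_walk_safe(q, skb, tmp) {
		struct ieee80211_tx_info *txi = IEEE80211_SKB_CB(skb);

		if ((uintptr_t)txi->rate_driver_data[0] == cookie) {
			__skb_unlink(skb, q);	/* lock already held */
			found = skb;
			break;
		}
	}
	spin_unlock_irqrestore(&q->lock, flags);

	return found;
}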
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 5d6dc1dd050d4..32fdc4150b605 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -287,6 +287,7 @@ static int if_usb_probe(struct usb_interface *intf,
+ return 0;
+
+ err_get_fw:
++ usb_put_dev(udev);
+ lbs_remove_card(priv);
+ err_add_card:
+ if_usb_reset_device(cardp);
+diff --git a/drivers/net/wireless/marvell/libertas/mesh.c b/drivers/net/wireless/marvell/libertas/mesh.c
+index a58c1e141f2ca..90ffe8d1e0e81 100644
+--- a/drivers/net/wireless/marvell/libertas/mesh.c
++++ b/drivers/net/wireless/marvell/libertas/mesh.c
+@@ -109,9 +109,9 @@ static int lbs_mesh_config(struct lbs_private *priv, uint16_t action,
+
+ if (priv->mesh_dev) {
+ mesh_wdev = priv->mesh_dev->ieee80211_ptr;
+- ie->val.mesh_id_len = mesh_wdev->mesh_id_up_len;
+- memcpy(ie->val.mesh_id, mesh_wdev->ssid,
+- mesh_wdev->mesh_id_up_len);
++ ie->val.mesh_id_len = mesh_wdev->u.mesh.id_up_len;
++ memcpy(ie->val.mesh_id, mesh_wdev->u.mesh.id,
++ mesh_wdev->u.mesh.id_up_len);
+ }
+
+ ie->len = sizeof(struct mrvl_meshie_val) -
+@@ -986,8 +986,8 @@ static int lbs_add_mesh(struct lbs_private *priv)
+ mesh_wdev->wiphy = priv->wdev->wiphy;
+
+ if (priv->mesh_tlv) {
+- sprintf(mesh_wdev->ssid, "mesh");
+- mesh_wdev->mesh_id_up_len = 4;
++ sprintf(mesh_wdev->u.mesh.id, "mesh");
++ mesh_wdev->u.mesh.id_up_len = 4;
+ }
+
+ mesh_wdev->netdev = mesh_dev;
+diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
+index 3fa25cd64cda0..4ca8d01357081 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11h.c
++++ b/drivers/net/wireless/marvell/mwifiex/11h.c
+@@ -304,6 +304,6 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
+ mwifiex_dbg(priv->adapter, MSG,
+ "indicating channel switch completion to kernel\n");
+ mutex_lock(&priv->wdev.mtx);
+- cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef);
++ cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef, 0);
+ mutex_unlock(&priv->wdev.mtx);
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 6f23ec34e2e2f..d68c40e0e1228 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -1753,10 +1753,12 @@ mwifiex_mgmt_stypes[NUM_NL80211_IFTYPES] = {
+ * Function configures data rates to firmware using bitrate mask
+ * provided by cfg80211.
+ */
+-static int mwifiex_cfg80211_set_bitrate_mask(struct wiphy *wiphy,
+- struct net_device *dev,
+- const u8 *peer,
+- const struct cfg80211_bitrate_mask *mask)
++static int
++mwifiex_cfg80211_set_bitrate_mask(struct wiphy *wiphy,
++ struct net_device *dev,
++ unsigned int link_id,
++ const u8 *peer,
++ const struct cfg80211_bitrate_mask *mask)
+ {
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+ u16 bitmap_rates[MAX_BITMAP_RATES_SIZE];
+@@ -1998,7 +2000,8 @@ mwifiex_cfg80211_get_antenna(struct wiphy *wiphy, u32 *tx_ant, u32 *rx_ant)
+ /* cfg80211 operation handler for stop ap.
+ * Function stops BSS running at uAP interface.
+ */
+-static int mwifiex_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *dev)
++static int mwifiex_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id)
+ {
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+
+@@ -2421,7 +2424,7 @@ mwifiex_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev,
+ return -EINVAL;
+ }
+
+- if (priv->wdev.current_bss) {
++ if (priv->wdev.connected) {
+ mwifiex_dbg(adapter, ERROR,
+ "%s: already connected\n", dev->name);
+ return -EALREADY;
+@@ -2649,7 +2652,7 @@ mwifiex_cfg80211_scan(struct wiphy *wiphy,
+ return -EBUSY;
+ }
+
+- if (!priv->wdev.current_bss && priv->scan_block)
++ if (!priv->wdev.connected && priv->scan_block)
+ priv->scan_block = false;
+
+ if (!mwifiex_stop_bg_scan(priv))
+@@ -4025,6 +4028,7 @@ mwifiex_cfg80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+
+ static int mwifiex_cfg80211_get_channel(struct wiphy *wiphy,
+ struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(wdev->netdev);
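[Note] The mwifiex changes above are another 5.19 cfg80211/MLO adjustment: several ops (set_bitrate_mask, stop_ap, get_channel) gained an unsigned int link_id parameter, and checks on wdev->current_bss were replaced by the new wdev->connected flag. A single-link driver can accept the argument and ignore it; for non-MLO interfaces the kernel passes 0. Sketch of the updated signature, with an illustrative function name:

/* Sketch only: the 5.19 form of the stop_ap op. */
static int example_stop_ap(struct wiphy *wiphy, struct net_device *dev,
			   unsigned int link_id)
{
	/* single-link driver: link_id is 0 here; teardown elided */
	return 0;
}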
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index a499861918fa3..9bc8758573fcc 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -162,10 +162,13 @@ mt76_find_power_limits_node(struct mt76_dev *dev)
+ }
+
+ if (mt76_string_prop_find(country, dev->alpha2) ||
+- mt76_string_prop_find(regd, region_name))
++ mt76_string_prop_find(regd, region_name)) {
++ of_node_put(np);
+ return cur;
++ }
+ }
+
++ of_node_put(np);
+ return fallback;
+ }
+
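[Note] This eeprom hunk, like the mt76_led_init() hunk in the next file, fixes a device-tree reference leak: the of_* lookup helpers return a node with its refcount elevated, and every exit path must drop it with of_node_put(). A short sketch of the pattern; the "power-limits" child name is illustrative, not taken from the patch:

#include <linux/of.h>

/* Sketch only: pair every successful of_* lookup with of_node_put(). */
static bool child_has_prop(struct device_node *parent, const char *prop)
{
	struct device_node *np = of_get_child_by_name(parent, "power-limits");
	bool ret;

	if (!np)
		return false;

	ret = of_property_read_bool(np, prop);
	of_node_put(np);	/* drop the reference taken by the lookup */
	return ret;
}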
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 18b5de55334c8..a520f9ac27996 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -210,6 +210,7 @@ static int mt76_led_init(struct mt76_dev *dev)
+ if (!of_property_read_u32(np, "led-sources", &led_pin))
+ dev->led_pin = led_pin;
+ dev->led_al = of_property_read_bool(np, "led-active-low");
++ of_node_put(np);
+ }
+
+ return led_classdev_register(dev->dev, &dev->led_cdev);
+@@ -1459,7 +1460,7 @@ EXPORT_SYMBOL_GPL(mt76_get_sar_power);
+ static void
+ __mt76_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+- if (vif->csa_active && ieee80211_beacon_cntdwn_is_complete(vif))
++ if (vif->bss_conf.csa_active && ieee80211_beacon_cntdwn_is_complete(vif))
+ ieee80211_csa_finish(vif);
+ }
+
+@@ -1481,7 +1482,7 @@ __mt76_csa_check(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+ struct mt76_dev *dev = priv;
+
+- if (!vif->csa_active)
++ if (!vif->bss_conf.csa_active)
+ return;
+
+ dev->csa_complete |= ieee80211_beacon_cntdwn_is_complete(vif);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index bd687f7de6289..9e832b27170fe 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -2282,6 +2282,7 @@ mt7615_dfs_init_radar_specs(struct mt7615_phy *phy)
+
+ int mt7615_dfs_init_radar_detector(struct mt7615_phy *phy)
+ {
++ struct cfg80211_chan_def *chandef = &phy->mt76->chandef;
+ struct mt7615_dev *dev = phy->dev;
+ bool ext_phy = phy != &dev->phy;
+ enum mt76_dfs_state dfs_state, prev_state;
+@@ -2292,13 +2293,13 @@ int mt7615_dfs_init_radar_detector(struct mt7615_phy *phy)
+
+ prev_state = phy->mt76->dfs_state;
+ dfs_state = mt76_phy_dfs_state(phy->mt76);
++ if ((chandef->chan->flags & IEEE80211_CHAN_RADAR) &&
++ dfs_state < MT_DFS_STATE_CAC)
++ dfs_state = MT_DFS_STATE_ACTIVE;
+
+ if (prev_state == dfs_state)
+ return 0;
+
+- if (prev_state == MT_DFS_STATE_UNKNOWN)
+- mt7615_dfs_stop_radar_detector(phy);
+-
+ if (dfs_state == MT_DFS_STATE_DISABLED)
+ goto stop;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index a9c9b97d173e0..d722c3c177bee 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -282,26 +282,6 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
+ mt76_packet_id_flush(&dev->mt76, &mvif->sta.wcid);
+ }
+
+-static void mt7615_init_dfs_state(struct mt7615_phy *phy)
+-{
+- struct mt76_phy *mphy = phy->mt76;
+- struct ieee80211_hw *hw = mphy->hw;
+- struct cfg80211_chan_def *chandef = &hw->conf.chandef;
+-
+- if (hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
+- return;
+-
+- if (!(chandef->chan->flags & IEEE80211_CHAN_RADAR) &&
+- !(mphy->chandef.chan->flags & IEEE80211_CHAN_RADAR))
+- return;
+-
+- if (mphy->chandef.chan->center_freq == chandef->chan->center_freq &&
+- mphy->chandef.width == chandef->width)
+- return;
+-
+- phy->dfs_state = -1;
+-}
+-
+ int mt7615_set_channel(struct mt7615_phy *phy)
+ {
+ struct mt7615_dev *dev = phy->dev;
+@@ -314,7 +294,6 @@ int mt7615_set_channel(struct mt7615_phy *phy)
+
+ set_bit(MT76_RESET, &phy->mt76->state);
+
+- mt7615_init_dfs_state(phy);
+ mt76_set_channel(phy->mt76);
+
+ if (is_mt7615(&dev->mt76) && dev->flash_eeprom) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 97e2a85cb7284..8a7bc78da9547 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -350,10 +350,11 @@ static int mt7615_mcu_fw_pmctrl(struct mt7615_dev *dev)
+ }
+
+ mt7622_trigger_hif_int(dev, false);
+-
+- pm->stats.last_doze_event = jiffies;
+- pm->stats.awake_time += pm->stats.last_doze_event -
+- pm->stats.last_wake_event;
++ if (!err) {
++ pm->stats.last_doze_event = jiffies;
++ pm->stats.awake_time += pm->stats.last_doze_event -
++ pm->stats.last_wake_event;
++ }
+ out:
+ mutex_unlock(&pm->mutex);
+
+@@ -363,7 +364,7 @@ out:
+ static void
+ mt7615_mcu_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+- if (vif->csa_active)
++ if (vif->bss_conf.csa_active)
+ ieee80211_csa_finish(vif);
+ }
+
+@@ -402,6 +403,9 @@ mt7615_mcu_rx_radar_detected(struct mt7615_dev *dev, struct sk_buff *skb)
+ if (r->band_idx && dev->mt76.phy2)
+ mphy = dev->mt76.phy2;
+
++ if (mt76_phy_dfs_state(mphy) < MT_DFS_STATE_CAC)
++ return;
++
+ ieee80211_radar_detected(mphy->hw);
+ dev->hw_pattern++;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+index 2e91f6a27d0ff..082c73b571ae7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+@@ -177,7 +177,6 @@ struct mt7615_phy {
+
+ u8 chfreq;
+ u8 rdd_state;
+- int dfs_state;
+
+ u32 rx_ampdu_ts;
+ u32 ampdu_ref;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+index 400ba514460e1..a9d7a269fcf3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+@@ -12,6 +12,8 @@
+ #define MT76_CONNAC_MAX_SCHED_SCAN_SSID 10
+ #define MT76_CONNAC_MAX_SCAN_MATCH 16
+
++#define MT76_CONNAC_MAX_WMM_SETS 4
++
+ #define MT76_CONNAC_COREDUMP_TIMEOUT (HZ / 20)
+ #define MT76_CONNAC_COREDUMP_SZ (1300 * 1024)
+
+@@ -244,5 +246,9 @@ void mt76_connac_pm_queue_skb(struct ieee80211_hw *hw,
+ struct sk_buff *skb);
+ void mt76_connac_pm_dequeue_skbs(struct mt76_phy *phy,
+ struct mt76_connac_pm *pm);
++void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
++ struct sk_buff *skb, struct mt76_wcid *wcid,
++ struct ieee80211_key_conf *key, int pid,
++ u32 changed);
+
+ #endif /* __MT76_CONNAC_H */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h
+new file mode 100644
+index 0000000000000..c9d9c8475a388
+--- /dev/null
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac2_mac.h
+@@ -0,0 +1,167 @@
++/* SPDX-License-Identifier: ISC */
++/* Copyright (C) 2022 MediaTek Inc. */
++
++#ifndef __MT76_CONNAC2_MAC_H
++#define __MT76_CONNAC2_MAC_H
++
++enum tx_header_format {
++ MT_HDR_FORMAT_802_3,
++ MT_HDR_FORMAT_CMD,
++ MT_HDR_FORMAT_802_11,
++ MT_HDR_FORMAT_802_11_EXT,
++};
++
++enum tx_pkt_type {
++ MT_TX_TYPE_CT,
++ MT_TX_TYPE_SF,
++ MT_TX_TYPE_CMD,
++ MT_TX_TYPE_FW,
++};
++
++enum {
++ MT_CTX0,
++ MT_HIF0 = 0x0,
++
++ MT_LMAC_AC00 = 0x0,
++ MT_LMAC_AC01,
++ MT_LMAC_AC02,
++ MT_LMAC_AC03,
++ MT_LMAC_ALTX0 = 0x10,
++ MT_LMAC_BMC0,
++ MT_LMAC_BCN0,
++ MT_LMAC_PSMP0,
++};
++
++#define MT_TXD_SIZE (8 * 4)
++#define MT_SDIO_TXD_SIZE (MT_TXD_SIZE + 8 * 4)
++#define MT_SDIO_TAIL_SIZE 8
++#define MT_SDIO_HDR_SIZE 4
++#define MT_USB_TAIL_SIZE 4
++
++#define MT_TXD0_Q_IDX GENMASK(31, 25)
++#define MT_TXD0_PKT_FMT GENMASK(24, 23)
++#define MT_TXD0_ETH_TYPE_OFFSET GENMASK(22, 16)
++#define MT_TXD0_TX_BYTES GENMASK(15, 0)
++
++#define MT_TXD1_LONG_FORMAT BIT(31)
++#define MT_TXD1_TGID BIT(30)
++#define MT_TXD1_OWN_MAC GENMASK(29, 24)
++#define MT_TXD1_AMSDU BIT(23)
++#define MT_TXD1_TID GENMASK(22, 20)
++#define MT_TXD1_HDR_PAD GENMASK(19, 18)
++#define MT_TXD1_HDR_FORMAT GENMASK(17, 16)
++#define MT_TXD1_HDR_INFO GENMASK(15, 11)
++#define MT_TXD1_ETH_802_3 BIT(15)
++#define MT_TXD1_VTA BIT(10)
++#define MT_TXD1_WLAN_IDX GENMASK(9, 0)
++
++#define MT_TXD2_FIX_RATE BIT(31)
++#define MT_TXD2_FIXED_RATE BIT(30)
++#define MT_TXD2_POWER_OFFSET GENMASK(29, 24)
++#define MT_TXD2_MAX_TX_TIME GENMASK(23, 16)
++#define MT_TXD2_FRAG GENMASK(15, 14)
++#define MT_TXD2_HTC_VLD BIT(13)
++#define MT_TXD2_DURATION BIT(12)
++#define MT_TXD2_BIP BIT(11)
++#define MT_TXD2_MULTICAST BIT(10)
++#define MT_TXD2_RTS BIT(9)
++#define MT_TXD2_SOUNDING BIT(8)
++#define MT_TXD2_NDPA BIT(7)
++#define MT_TXD2_NDP BIT(6)
++#define MT_TXD2_FRAME_TYPE GENMASK(5, 4)
++#define MT_TXD2_SUB_TYPE GENMASK(3, 0)
++
++#define MT_TXD3_SN_VALID BIT(31)
++#define MT_TXD3_PN_VALID BIT(30)
++#define MT_TXD3_SW_POWER_MGMT BIT(29)
++#define MT_TXD3_BA_DISABLE BIT(28)
++#define MT_TXD3_SEQ GENMASK(27, 16)
++#define MT_TXD3_REM_TX_COUNT GENMASK(15, 11)
++#define MT_TXD3_TX_COUNT GENMASK(10, 6)
++#define MT_TXD3_TIMING_MEASURE BIT(5)
++#define MT_TXD3_DAS BIT(4)
++#define MT_TXD3_EEOSP BIT(3)
++#define MT_TXD3_EMRD BIT(2)
++#define MT_TXD3_PROTECT_FRAME BIT(1)
++#define MT_TXD3_NO_ACK BIT(0)
++
++#define MT_TXD4_PN_LOW GENMASK(31, 0)
++
++#define MT_TXD5_PN_HIGH GENMASK(31, 16)
++#define MT_TXD5_MD BIT(15)
++#define MT_TXD5_ADD_BA BIT(14)
++#define MT_TXD5_TX_STATUS_HOST BIT(10)
++#define MT_TXD5_TX_STATUS_MCU BIT(9)
++#define MT_TXD5_TX_STATUS_FMT BIT(8)
++#define MT_TXD5_PID GENMASK(7, 0)
++
++#define MT_TXD6_TX_IBF BIT(31)
++#define MT_TXD6_TX_EBF BIT(30)
++#define MT_TXD6_TX_RATE GENMASK(29, 16)
++#define MT_TXD6_SGI GENMASK(15, 14)
++#define MT_TXD6_HELTF GENMASK(13, 12)
++#define MT_TXD6_LDPC BIT(11)
++#define MT_TXD6_SPE_ID_IDX BIT(10)
++#define MT_TXD6_ANT_ID GENMASK(7, 4)
++#define MT_TXD6_DYN_BW BIT(3)
++#define MT_TXD6_FIXED_BW BIT(2)
++#define MT_TXD6_BW GENMASK(1, 0)
++
++#define MT_TXD7_TXD_LEN GENMASK(31, 30)
++#define MT_TXD7_UDP_TCP_SUM BIT(29)
++#define MT_TXD7_IP_SUM BIT(28)
++#define MT_TXD7_TYPE GENMASK(21, 20)
++#define MT_TXD7_SUB_TYPE GENMASK(19, 16)
++
++#define MT_TXD7_PSE_FID GENMASK(27, 16)
++#define MT_TXD7_SPE_IDX GENMASK(15, 11)
++#define MT_TXD7_HW_AMSDU BIT(10)
++#define MT_TXD7_TX_TIME GENMASK(9, 0)
++
++#define MT_TXD8_L_TYPE GENMASK(5, 4)
++#define MT_TXD8_L_SUB_TYPE GENMASK(3, 0)
++
++#define MT_TX_RATE_STBC BIT(13)
++#define MT_TX_RATE_NSS GENMASK(12, 10)
++#define MT_TX_RATE_MODE GENMASK(9, 6)
++#define MT_TX_RATE_SU_EXT_TONE BIT(5)
++#define MT_TX_RATE_DCM BIT(4)
++/* VHT/HE only use bits 0-3 */
++#define MT_TX_RATE_IDX GENMASK(5, 0)
++
++#define MT_TXS0_FIXED_RATE BIT(31)
++#define MT_TXS0_BW GENMASK(30, 29)
++#define MT_TXS0_TID GENMASK(28, 26)
++#define MT_TXS0_AMPDU BIT(25)
++#define MT_TXS0_TXS_FORMAT GENMASK(24, 23)
++#define MT_TXS0_BA_ERROR BIT(22)
++#define MT_TXS0_PS_FLAG BIT(21)
++#define MT_TXS0_TXOP_TIMEOUT BIT(20)
++#define MT_TXS0_BIP_ERROR BIT(19)
++
++#define MT_TXS0_QUEUE_TIMEOUT BIT(18)
++#define MT_TXS0_RTS_TIMEOUT BIT(17)
++#define MT_TXS0_ACK_TIMEOUT BIT(16)
++#define MT_TXS0_ACK_ERROR_MASK GENMASK(18, 16)
++
++#define MT_TXS0_TX_STATUS_HOST BIT(15)
++#define MT_TXS0_TX_STATUS_MCU BIT(14)
++#define MT_TXS0_TX_RATE GENMASK(13, 0)
++
++#define MT_TXS1_SEQNO GENMASK(31, 20)
++#define MT_TXS1_RESP_RATE GENMASK(19, 16)
++#define MT_TXS1_RXV_SEQNO GENMASK(15, 8)
++#define MT_TXS1_TX_POWER_DBM GENMASK(7, 0)
++
++#define MT_TXS2_BF_STATUS GENMASK(31, 30)
++#define MT_TXS2_LAST_TX_RATE GENMASK(29, 27)
++#define MT_TXS2_SHARED_ANTENNA BIT(26)
++#define MT_TXS2_WCID GENMASK(25, 16)
++#define MT_TXS2_TX_DELAY GENMASK(15, 0)
++
++#define MT_TXS3_PID GENMASK(31, 24)
++#define MT_TXS3_ANT_ID GENMASK(23, 0)
++
++#define MT_TXS4_TIMESTAMP GENMASK(31, 0)
++
++#endif /* __MT76_CONNAC2_MAC_H */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index 306e9eaea9177..0ea795565c88b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -2,6 +2,7 @@
+ /* Copyright (C) 2020 MediaTek Inc. */
+
+ #include "mt76_connac.h"
++#include "mt76_connac2_mac.h"
+
+ int mt76_connac_pm_wake(struct mt76_phy *phy, struct mt76_connac_pm *pm)
+ {
+@@ -115,3 +116,286 @@ void mt76_connac_pm_dequeue_skbs(struct mt76_phy *phy,
+ mt76_worker_schedule(&phy->dev->tx_worker);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_pm_dequeue_skbs);
++
++static u16
++mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
++ bool beacon, bool mcast)
++{
++ u8 mode = 0, band = mphy->chandef.chan->band;
++ int rateidx = 0, mcast_rate;
++
++ if (!vif)
++ goto legacy;
++
++ if (is_mt7921(mphy->dev)) {
++ rateidx = ffs(vif->bss_conf.basic_rates) - 1;
++ goto legacy;
++ }
++
++ if (beacon) {
++ struct cfg80211_bitrate_mask *mask;
++
++ mask = &vif->bss_conf.beacon_tx_rate;
++ if (hweight16(mask->control[band].he_mcs[0]) == 1) {
++ rateidx = ffs(mask->control[band].he_mcs[0]) - 1;
++ mode = MT_PHY_TYPE_HE_SU;
++ goto out;
++ } else if (hweight16(mask->control[band].vht_mcs[0]) == 1) {
++ rateidx = ffs(mask->control[band].vht_mcs[0]) - 1;
++ mode = MT_PHY_TYPE_VHT;
++ goto out;
++ } else if (hweight8(mask->control[band].ht_mcs[0]) == 1) {
++ rateidx = ffs(mask->control[band].ht_mcs[0]) - 1;
++ mode = MT_PHY_TYPE_HT;
++ goto out;
++ } else if (hweight32(mask->control[band].legacy) == 1) {
++ rateidx = ffs(mask->control[band].legacy) - 1;
++ goto legacy;
++ }
++ }
++
++ mcast_rate = vif->bss_conf.mcast_rate[band];
++ if (mcast && mcast_rate > 0)
++ rateidx = mcast_rate - 1;
++ else
++ rateidx = ffs(vif->bss_conf.basic_rates) - 1;
++
++legacy:
++ rateidx = mt76_calculate_default_rate(mphy, rateidx);
++ mode = rateidx >> 8;
++ rateidx &= GENMASK(7, 0);
++
++out:
++ return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
++ FIELD_PREP(MT_TX_RATE_MODE, mode);
++}
++
++static void
++mt76_connac2_mac_write_txwi_8023(__le32 *txwi, struct sk_buff *skb,
++ struct mt76_wcid *wcid)
++{
++ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
++ u8 fc_type, fc_stype;
++ u16 ethertype;
++ bool wmm = false;
++ u32 val;
++
++ if (wcid->sta) {
++ struct ieee80211_sta *sta;
++
++ sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
++ wmm = sta->wme;
++ }
++
++ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
++ FIELD_PREP(MT_TXD1_TID, tid);
++
++ ethertype = get_unaligned_be16(&skb->data[12]);
++ if (ethertype >= ETH_P_802_3_MIN)
++ val |= MT_TXD1_ETH_802_3;
++
++ txwi[1] |= cpu_to_le32(val);
++
++ fc_type = IEEE80211_FTYPE_DATA >> 2;
++ fc_stype = wmm ? IEEE80211_STYPE_QOS_DATA >> 4 : 0;
++
++ val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype);
++
++ txwi[2] |= cpu_to_le32(val);
++
++ val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
++
++ txwi[7] |= cpu_to_le32(val);
++}
++
++static void
++mt76_connac2_mac_write_txwi_80211(struct mt76_dev *dev, __le32 *txwi,
++ struct sk_buff *skb,
++ struct ieee80211_key_conf *key)
++{
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
++ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++ bool multicast = is_multicast_ether_addr(hdr->addr1);
++ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
++ __le16 fc = hdr->frame_control;
++ u8 fc_type, fc_stype;
++ u32 val;
++
++ if (ieee80211_is_action(fc) &&
++ mgmt->u.action.category == WLAN_CATEGORY_BACK &&
++ mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) {
++ u16 capab = le16_to_cpu(mgmt->u.action.u.addba_req.capab);
++
++ txwi[5] |= cpu_to_le32(MT_TXD5_ADD_BA);
++ tid = (capab >> 2) & IEEE80211_QOS_CTL_TID_MASK;
++ } else if (ieee80211_is_back_req(hdr->frame_control)) {
++ struct ieee80211_bar *bar = (struct ieee80211_bar *)hdr;
++ u16 control = le16_to_cpu(bar->control);
++
++ tid = FIELD_GET(IEEE80211_BAR_CTRL_TID_INFO_MASK, control);
++ }
++
++ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_11) |
++ FIELD_PREP(MT_TXD1_HDR_INFO,
++ ieee80211_get_hdrlen_from_skb(skb) / 2) |
++ FIELD_PREP(MT_TXD1_TID, tid);
++
++ txwi[1] |= cpu_to_le32(val);
++
++ fc_type = (le16_to_cpu(fc) & IEEE80211_FCTL_FTYPE) >> 2;
++ fc_stype = (le16_to_cpu(fc) & IEEE80211_FCTL_STYPE) >> 4;
++
++ val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype) |
++ FIELD_PREP(MT_TXD2_MULTICAST, multicast);
++
++ if (key && multicast && ieee80211_is_robust_mgmt_frame(skb) &&
++ key->cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
++ val |= MT_TXD2_BIP;
++ txwi[3] &= ~cpu_to_le32(MT_TXD3_PROTECT_FRAME);
++ }
++
++ if (!ieee80211_is_data(fc) || multicast ||
++ info->flags & IEEE80211_TX_CTL_USE_MINRATE)
++ val |= MT_TXD2_FIX_RATE;
++
++ txwi[2] |= cpu_to_le32(val);
++
++ if (ieee80211_is_beacon(fc)) {
++ txwi[3] &= ~cpu_to_le32(MT_TXD3_SW_POWER_MGMT);
++ txwi[3] |= cpu_to_le32(MT_TXD3_REM_TX_COUNT);
++ if (!is_mt7921(dev))
++ txwi[7] |= cpu_to_le32(FIELD_PREP(MT_TXD7_SPE_IDX,
++ 0x18));
++ }
++
++ if (info->flags & IEEE80211_TX_CTL_INJECTED) {
++ u16 seqno = le16_to_cpu(hdr->seq_ctrl);
++
++ if (ieee80211_is_back_req(hdr->frame_control)) {
++ struct ieee80211_bar *bar;
++
++ bar = (struct ieee80211_bar *)skb->data;
++ seqno = le16_to_cpu(bar->start_seq_num);
++ }
++
++ val = MT_TXD3_SN_VALID |
++ FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
++ txwi[3] |= cpu_to_le32(val);
++ txwi[7] &= ~cpu_to_le32(MT_TXD7_HW_AMSDU);
++ }
++
++ if (mt76_is_mmio(dev)) {
++ val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
++ txwi[7] |= cpu_to_le32(val);
++ } else {
++ val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++ txwi[8] |= cpu_to_le32(val);
++ }
++}
++
++void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
++ struct sk_buff *skb, struct mt76_wcid *wcid,
++ struct ieee80211_key_conf *key, int pid,
++ u32 changed)
++{
++ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++ bool ext_phy = info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY;
++ struct ieee80211_vif *vif = info->control.vif;
++ struct mt76_phy *mphy = &dev->phy;
++ u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0, band_idx = 0;
++ u32 val, sz_txd = mt76_is_mmio(dev) ? MT_TXD_SIZE : MT_SDIO_TXD_SIZE;
++ bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
++ bool beacon = !!(changed & (BSS_CHANGED_BEACON |
++ BSS_CHANGED_BEACON_ENABLED));
++ bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
++ BSS_CHANGED_FILS_DISCOVERY));
++
++ if (vif) {
++ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
++
++ omac_idx = mvif->omac_idx;
++ wmm_idx = mvif->wmm_idx;
++ band_idx = mvif->band_idx;
++ }
++
++ if (ext_phy && dev->phy2)
++ mphy = dev->phy2;
++
++ if (inband_disc) {
++ p_fmt = MT_TX_TYPE_FW;
++ q_idx = MT_LMAC_ALTX0;
++ } else if (beacon) {
++ p_fmt = MT_TX_TYPE_FW;
++ q_idx = MT_LMAC_BCN0;
++ } else if (skb_get_queue_mapping(skb) >= MT_TXQ_PSD) {
++ p_fmt = mt76_is_mmio(dev) ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
++ q_idx = MT_LMAC_ALTX0;
++ } else {
++ p_fmt = mt76_is_mmio(dev) ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
++ q_idx = wmm_idx * MT76_CONNAC_MAX_WMM_SETS +
++ mt76_connac_lmac_mapping(skb_get_queue_mapping(skb));
++ }
++
++ val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + sz_txd) |
++ FIELD_PREP(MT_TXD0_PKT_FMT, p_fmt) |
++ FIELD_PREP(MT_TXD0_Q_IDX, q_idx);
++ txwi[0] = cpu_to_le32(val);
++
++ val = MT_TXD1_LONG_FORMAT |
++ FIELD_PREP(MT_TXD1_WLAN_IDX, wcid->idx) |
++ FIELD_PREP(MT_TXD1_OWN_MAC, omac_idx);
++ if (!is_mt7921(dev))
++ val |= MT_TXD1_VTA;
++ if (ext_phy || band_idx)
++ val |= MT_TXD1_TGID;
++
++ txwi[1] = cpu_to_le32(val);
++ txwi[2] = 0;
++
++ val = FIELD_PREP(MT_TXD3_REM_TX_COUNT, 15);
++ if (!is_mt7921(dev))
++ val |= MT_TXD3_SW_POWER_MGMT;
++ if (key)
++ val |= MT_TXD3_PROTECT_FRAME;
++ if (info->flags & IEEE80211_TX_CTL_NO_ACK)
++ val |= MT_TXD3_NO_ACK;
++
++ txwi[3] = cpu_to_le32(val);
++ txwi[4] = 0;
++
++ val = FIELD_PREP(MT_TXD5_PID, pid);
++ if (pid >= MT_PACKET_ID_FIRST)
++ val |= MT_TXD5_TX_STATUS_HOST;
++
++ txwi[5] = cpu_to_le32(val);
++ txwi[6] = 0;
++ txwi[7] = wcid->amsdu ? cpu_to_le32(MT_TXD7_HW_AMSDU) : 0;
++
++ if (is_8023)
++ mt76_connac2_mac_write_txwi_8023(txwi, skb, wcid);
++ else
++ mt76_connac2_mac_write_txwi_80211(dev, txwi, skb, key);
++
++ if (txwi[2] & cpu_to_le32(MT_TXD2_FIX_RATE)) {
++ /* Fixed rate is available just for 802.11 txd */
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++ bool multicast = is_multicast_ether_addr(hdr->addr1);
++ u16 rate = mt76_connac2_mac_tx_rate_val(mphy, vif, beacon,
++ multicast);
++ u32 val = MT_TXD6_FIXED_BW;
++
++ /* hardware won't add HTC for mgmt/ctrl frame */
++ txwi[2] |= cpu_to_le32(MT_TXD2_HTC_VLD);
++
++ val |= FIELD_PREP(MT_TXD6_TX_RATE, rate);
++ txwi[6] |= cpu_to_le32(val);
++ txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
++ }
++}
++EXPORT_SYMBOL_GPL(mt76_connac2_mac_write_txwi);
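[Note] The new mt76_connac2_mac.h and mt76_connac2_mac_write_txwi() consolidate the TX descriptor (txwi) writer that mt7915 and mt7921 previously carried as near-identical private copies; the hunks further down delete those copies and call the shared helper instead. Each 32-bit descriptor word is composed with FIELD_PREP() against the GENMASK() field definitions above, roughly as in this sketch (build_txd0() is illustrative, not driver code):

#include <linux/bitfield.h>

/* Sketch only: packing TXD word 0 from the field masks above. */
static __le32 build_txd0(u32 tx_bytes, u8 pkt_fmt, u8 q_idx)
{
	u32 val = FIELD_PREP(MT_TXD0_TX_BYTES, tx_bytes) |	/* bits 15:0 */
		  FIELD_PREP(MT_TXD0_PKT_FMT, pkt_fmt) |	/* bits 24:23 */
		  FIELD_PREP(MT_TXD0_Q_IDX, q_idx);		/* bits 31:25 */

	return cpu_to_le32(val);
}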
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+index 2953df7d8388d..c6c16fe8ee859 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+@@ -108,7 +108,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ ret = mt76u_bulk_msg(dev, skb->data, skb->len, NULL, 500,
+ MT_EP_OUT_INBAND_CMD);
+ if (ret)
+- return ret;
++ goto out;
+
+ if (wait_resp)
+ ret = mt76x02u_mcu_wait_resp(dev, seq);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index cab6e02e1f8cc..fd76db8f5269c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -976,7 +976,7 @@ mt7915_rf_regval_get(void *data, u64 *val)
+ if (ret)
+ return ret;
+
+- *val = le32_to_cpu(regval);
++ *val = regval;
+
+ return 0;
+ }
+@@ -985,8 +985,9 @@ static int
+ mt7915_rf_regval_set(void *data, u64 val)
+ {
+ struct mt7915_dev *dev = data;
++ u32 val32 = val;
+
+- return mt7915_mcu_rf_regval(dev, dev->mt76.debugfs_reg, (u32 *)&val, true);
++ return mt7915_mcu_rf_regval(dev, dev->mt76.debugfs_reg, &val32, true);
+ }
+
+ DEFINE_DEBUGFS_ATTRIBUTE(fops_rf_regval, mt7915_rf_regval_get,
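[Note] Both rf_regval fixes above address word-size and byte-order bugs: the write path stops punning a u64 through a (u32 *) cast, which on a big-endian kernel would address the high half of the value, and the read path drops an le32_to_cpu() swap on a value already in CPU byte order. A small illustration of why the cast was unsafe:

/* Sketch only: why (u32 *)&val is endian-unsafe. */
static u32 low32_wrong_on_be(u64 val)
{
	return *(u32 *)&val;	/* big-endian: reads the HIGH half, i.e. 0 */
}

static u32 low32_portable(u64 val)
{
	return (u32)val;	/* truncation is well-defined everywhere */
}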
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 086244d9be766..89f10bf885ba8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1009,266 +1009,18 @@ mt7915_mac_write_txwi_tm(struct mt7915_phy *phy, __le32 *txwi,
+ #endif
+ }
+
+-static void
+-mt7915_mac_write_txwi_8023(struct mt7915_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct mt76_wcid *wcid)
+-{
+-
+- u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+- u8 fc_type, fc_stype;
+- u16 ethertype;
+- bool wmm = false;
+- u32 val;
+-
+- if (wcid->sta) {
+- struct ieee80211_sta *sta;
+-
+- sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
+- wmm = sta->wme;
+- }
+-
+- val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+- FIELD_PREP(MT_TXD1_TID, tid);
+-
+- ethertype = get_unaligned_be16(&skb->data[12]);
+- if (ethertype >= ETH_P_802_3_MIN)
+- val |= MT_TXD1_ETH_802_3;
+-
+- txwi[1] |= cpu_to_le32(val);
+-
+- fc_type = IEEE80211_FTYPE_DATA >> 2;
+- fc_stype = wmm ? IEEE80211_STYPE_QOS_DATA >> 4 : 0;
+-
+- val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype);
+-
+- txwi[2] |= cpu_to_le32(val);
+-
+- val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+- txwi[7] |= cpu_to_le32(val);
+-}
+-
+-static void
+-mt7915_mac_write_txwi_80211(struct mt7915_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct ieee80211_key_conf *key,
+- bool *mcast)
+-{
+- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+- struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+- struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+- u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+- __le16 fc = hdr->frame_control;
+- u8 fc_type, fc_stype;
+- u32 val;
+-
+- *mcast = is_multicast_ether_addr(hdr->addr1);
+-
+- if (ieee80211_is_action(fc) &&
+- mgmt->u.action.category == WLAN_CATEGORY_BACK &&
+- mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) {
+- u16 capab = le16_to_cpu(mgmt->u.action.u.addba_req.capab);
+-
+- txwi[5] |= cpu_to_le32(MT_TXD5_ADD_BA);
+- tid = (capab >> 2) & IEEE80211_QOS_CTL_TID_MASK;
+- } else if (ieee80211_is_back_req(hdr->frame_control)) {
+- struct ieee80211_bar *bar = (struct ieee80211_bar *)hdr;
+- u16 control = le16_to_cpu(bar->control);
+-
+- tid = FIELD_GET(IEEE80211_BAR_CTRL_TID_INFO_MASK, control);
+- }
+-
+- val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_11) |
+- FIELD_PREP(MT_TXD1_HDR_INFO,
+- ieee80211_get_hdrlen_from_skb(skb) / 2) |
+- FIELD_PREP(MT_TXD1_TID, tid);
+- txwi[1] |= cpu_to_le32(val);
+-
+- fc_type = (le16_to_cpu(fc) & IEEE80211_FCTL_FTYPE) >> 2;
+- fc_stype = (le16_to_cpu(fc) & IEEE80211_FCTL_STYPE) >> 4;
+-
+- val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype) |
+- FIELD_PREP(MT_TXD2_MULTICAST, *mcast);
+-
+- if (key && *mcast && ieee80211_is_robust_mgmt_frame(skb) &&
+- key->cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
+- val |= MT_TXD2_BIP;
+- txwi[3] &= ~cpu_to_le32(MT_TXD3_PROTECT_FRAME);
+- }
+-
+- if (!ieee80211_is_data(fc) || *mcast ||
+- info->flags & IEEE80211_TX_CTL_USE_MINRATE)
+- val |= MT_TXD2_FIX_RATE;
+-
+- txwi[2] |= cpu_to_le32(val);
+-
+- if (ieee80211_is_beacon(fc)) {
+- txwi[3] &= ~cpu_to_le32(MT_TXD3_SW_POWER_MGMT);
+- txwi[3] |= cpu_to_le32(MT_TXD3_REM_TX_COUNT);
+- txwi[7] |= cpu_to_le32(FIELD_PREP(MT_TXD7_SPE_IDX, 0x18));
+- }
+-
+- if (info->flags & IEEE80211_TX_CTL_INJECTED) {
+- u16 seqno = le16_to_cpu(hdr->seq_ctrl);
+-
+- if (ieee80211_is_back_req(hdr->frame_control)) {
+- struct ieee80211_bar *bar;
+-
+- bar = (struct ieee80211_bar *)skb->data;
+- seqno = le16_to_cpu(bar->start_seq_num);
+- }
+-
+- val = MT_TXD3_SN_VALID |
+- FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
+- txwi[3] |= cpu_to_le32(val);
+- txwi[7] &= ~cpu_to_le32(MT_TXD7_HW_AMSDU);
+- }
+-
+- val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+- txwi[7] |= cpu_to_le32(val);
+-}
+-
+-static u16
+-mt7915_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+- bool beacon, bool mcast)
+-{
+- u8 mode = 0, band = mphy->chandef.chan->band;
+- int rateidx = 0, mcast_rate;
+-
+- if (beacon) {
+- struct cfg80211_bitrate_mask *mask;
+-
+- mask = &vif->bss_conf.beacon_tx_rate;
+- if (hweight16(mask->control[band].he_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].he_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_HE_SU;
+- goto out;
+- } else if (hweight16(mask->control[band].vht_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].vht_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_VHT;
+- goto out;
+- } else if (hweight8(mask->control[band].ht_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].ht_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_HT;
+- goto out;
+- } else if (hweight32(mask->control[band].legacy) == 1) {
+- rateidx = ffs(mask->control[band].legacy) - 1;
+- goto legacy;
+- }
+- }
+-
+- mcast_rate = vif->bss_conf.mcast_rate[band];
+- if (mcast && mcast_rate > 0)
+- rateidx = mcast_rate - 1;
+- else
+- rateidx = ffs(vif->bss_conf.basic_rates) - 1;
+-
+-legacy:
+- rateidx = mt76_calculate_default_rate(mphy, rateidx);
+- mode = rateidx >> 8;
+- rateidx &= GENMASK(7, 0);
+-
+-out:
+- return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
+- FIELD_PREP(MT_TX_RATE_MODE, mode);
+-}
+-
+-void mt7915_mac_write_txwi(struct mt7915_dev *dev, __le32 *txwi,
++void mt7915_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
+ struct ieee80211_key_conf *key, u32 changed)
+ {
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+- struct ieee80211_vif *vif = info->control.vif;
+- struct mt76_phy *mphy = &dev->mphy;
+- bool ext_phy = info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY;
+- u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0, band_idx = 0;
+- bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
+- bool mcast = false;
+- u16 tx_count = 15;
+- u32 val;
+- bool beacon = !!(changed & (BSS_CHANGED_BEACON |
+- BSS_CHANGED_BEACON_ENABLED));
+- bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+- BSS_CHANGED_FILS_DISCOVERY));
+-
+- if (vif) {
+- struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+-
+- omac_idx = mvif->mt76.omac_idx;
+- wmm_idx = mvif->mt76.wmm_idx;
+- band_idx = mvif->mt76.band_idx;
+- }
+-
+- if (ext_phy && dev->mt76.phy2)
+- mphy = dev->mt76.phy2;
+-
+- if (inband_disc) {
+- p_fmt = MT_TX_TYPE_FW;
+- q_idx = MT_LMAC_ALTX0;
+- } else if (beacon) {
+- p_fmt = MT_TX_TYPE_FW;
+- q_idx = MT_LMAC_BCN0;
+- } else if (skb_get_queue_mapping(skb) >= MT_TXQ_PSD) {
+- p_fmt = MT_TX_TYPE_CT;
+- q_idx = MT_LMAC_ALTX0;
+- } else {
+- p_fmt = MT_TX_TYPE_CT;
+- q_idx = wmm_idx * MT7915_MAX_WMM_SETS +
+- mt76_connac_lmac_mapping(skb_get_queue_mapping(skb));
+- }
++ struct mt76_phy *mphy = &dev->phy;
+
+- val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + MT_TXD_SIZE) |
+- FIELD_PREP(MT_TXD0_PKT_FMT, p_fmt) |
+- FIELD_PREP(MT_TXD0_Q_IDX, q_idx);
+- txwi[0] = cpu_to_le32(val);
+-
+- val = MT_TXD1_LONG_FORMAT | MT_TXD1_VTA |
+- FIELD_PREP(MT_TXD1_WLAN_IDX, wcid->idx) |
+- FIELD_PREP(MT_TXD1_OWN_MAC, omac_idx);
+-
+- if (ext_phy || band_idx)
+- val |= MT_TXD1_TGID;
+-
+- txwi[1] = cpu_to_le32(val);
++ if ((info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY) && dev->phy2)
++ mphy = dev->phy2;
+
+- txwi[2] = 0;
++ mt76_connac2_mac_write_txwi(dev, txwi, skb, wcid, key, pid, changed);
+
+- val = MT_TXD3_SW_POWER_MGMT |
+- FIELD_PREP(MT_TXD3_REM_TX_COUNT, tx_count);
+- if (key)
+- val |= MT_TXD3_PROTECT_FRAME;
+- if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+- val |= MT_TXD3_NO_ACK;
+-
+- txwi[3] = cpu_to_le32(val);
+- txwi[4] = 0;
+-
+- val = FIELD_PREP(MT_TXD5_PID, pid);
+- if (pid >= MT_PACKET_ID_FIRST)
+- val |= MT_TXD5_TX_STATUS_HOST;
+- txwi[5] = cpu_to_le32(val);
+-
+- txwi[6] = 0;
+- txwi[7] = wcid->amsdu ? cpu_to_le32(MT_TXD7_HW_AMSDU) : 0;
+-
+- if (is_8023)
+- mt7915_mac_write_txwi_8023(dev, txwi, skb, wcid);
+- else
+- mt7915_mac_write_txwi_80211(dev, txwi, skb, key, &mcast);
+-
+- if (txwi[2] & cpu_to_le32(MT_TXD2_FIX_RATE)) {
+- u16 rate = mt7915_mac_tx_rate_val(mphy, vif, beacon, mcast);
+-
+- /* hardware won't add HTC for mgmt/ctrl frame */
+- txwi[2] |= cpu_to_le32(MT_TXD2_HTC_VLD);
+-
+- val = MT_TXD6_FIXED_BW |
+- FIELD_PREP(MT_TXD6_TX_RATE, rate);
+- txwi[6] |= cpu_to_le32(val);
+- txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
+- }
+
+ if (mt76_testmode_enabled(mphy))
+ mt7915_mac_write_txwi_tm(mphy->priv, txwi, skb);
+@@ -1315,7 +1067,7 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ return id;
+
+ pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+- mt7915_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, pid, key, 0);
++ mt7915_mac_write_txwi(mdev, txwi_ptr, tx_info->skb, wcid, pid, key, 0);
+
+ txp = (struct mt7915_txp *)(txwi + MT_TXD_SIZE);
+ for (i = 0; i < nbuf; i++) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.h b/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
+index c5fd1a618ae7c..f581ae27375bb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
+@@ -4,6 +4,8 @@
+ #ifndef __MT7915_MAC_H
+ #define __MT7915_MAC_H
+
++#include "../mt76_connac2_mac.h"
++
+ #define MT_CT_PARSE_LEN 72
+ #define MT_CT_DMA_BUF_NUM 2
+
+@@ -166,20 +168,6 @@ enum rx_pkt_type {
+ #define MT_CRXV_FOE_HI GENMASK(6, 0)
+ #define MT_CRXV_FOE_SHIFT 13
+
+-enum tx_header_format {
+- MT_HDR_FORMAT_802_3,
+- MT_HDR_FORMAT_CMD,
+- MT_HDR_FORMAT_802_11,
+- MT_HDR_FORMAT_802_11_EXT,
+-};
+-
+-enum tx_pkt_type {
+- MT_TX_TYPE_CT,
+- MT_TX_TYPE_SF,
+- MT_TX_TYPE_CMD,
+- MT_TX_TYPE_FW,
+-};
+-
+ enum tx_port_idx {
+ MT_TX_PORT_IDX_LMAC,
+ MT_TX_PORT_IDX_MCU
+@@ -200,97 +188,6 @@ enum tx_mcu_port_q_idx {
+ #define MT_CT_INFO_HSR2_TX BIT(4)
+ #define MT_CT_INFO_FROM_HOST BIT(7)
+
+-#define MT_TXD_SIZE (8 * 4)
+-
+-#define MT_TXD0_Q_IDX GENMASK(31, 25)
+-#define MT_TXD0_PKT_FMT GENMASK(24, 23)
+-#define MT_TXD0_ETH_TYPE_OFFSET GENMASK(22, 16)
+-#define MT_TXD0_TX_BYTES GENMASK(15, 0)
+-
+-#define MT_TXD1_LONG_FORMAT BIT(31)
+-#define MT_TXD1_TGID BIT(30)
+-#define MT_TXD1_OWN_MAC GENMASK(29, 24)
+-#define MT_TXD1_AMSDU BIT(23)
+-#define MT_TXD1_TID GENMASK(22, 20)
+-#define MT_TXD1_HDR_PAD GENMASK(19, 18)
+-#define MT_TXD1_HDR_FORMAT GENMASK(17, 16)
+-#define MT_TXD1_HDR_INFO GENMASK(15, 11)
+-#define MT_TXD1_ETH_802_3 BIT(15)
+-#define MT_TXD1_VTA BIT(10)
+-#define MT_TXD1_WLAN_IDX GENMASK(9, 0)
+-
+-#define MT_TXD2_FIX_RATE BIT(31)
+-#define MT_TXD2_FIXED_RATE BIT(30)
+-#define MT_TXD2_POWER_OFFSET GENMASK(29, 24)
+-#define MT_TXD2_MAX_TX_TIME GENMASK(23, 16)
+-#define MT_TXD2_FRAG GENMASK(15, 14)
+-#define MT_TXD2_HTC_VLD BIT(13)
+-#define MT_TXD2_DURATION BIT(12)
+-#define MT_TXD2_BIP BIT(11)
+-#define MT_TXD2_MULTICAST BIT(10)
+-#define MT_TXD2_RTS BIT(9)
+-#define MT_TXD2_SOUNDING BIT(8)
+-#define MT_TXD2_NDPA BIT(7)
+-#define MT_TXD2_NDP BIT(6)
+-#define MT_TXD2_FRAME_TYPE GENMASK(5, 4)
+-#define MT_TXD2_SUB_TYPE GENMASK(3, 0)
+-
+-#define MT_TXD3_SN_VALID BIT(31)
+-#define MT_TXD3_PN_VALID BIT(30)
+-#define MT_TXD3_SW_POWER_MGMT BIT(29)
+-#define MT_TXD3_BA_DISABLE BIT(28)
+-#define MT_TXD3_SEQ GENMASK(27, 16)
+-#define MT_TXD3_REM_TX_COUNT GENMASK(15, 11)
+-#define MT_TXD3_TX_COUNT GENMASK(10, 6)
+-#define MT_TXD3_TIMING_MEASURE BIT(5)
+-#define MT_TXD3_DAS BIT(4)
+-#define MT_TXD3_EEOSP BIT(3)
+-#define MT_TXD3_EMRD BIT(2)
+-#define MT_TXD3_PROTECT_FRAME BIT(1)
+-#define MT_TXD3_NO_ACK BIT(0)
+-
+-#define MT_TXD4_PN_LOW GENMASK(31, 0)
+-
+-#define MT_TXD5_PN_HIGH GENMASK(31, 16)
+-#define MT_TXD5_MD BIT(15)
+-#define MT_TXD5_ADD_BA BIT(14)
+-#define MT_TXD5_TX_STATUS_HOST BIT(10)
+-#define MT_TXD5_TX_STATUS_MCU BIT(9)
+-#define MT_TXD5_TX_STATUS_FMT BIT(8)
+-#define MT_TXD5_PID GENMASK(7, 0)
+-
+-#define MT_TXD6_TX_IBF BIT(31)
+-#define MT_TXD6_TX_EBF BIT(30)
+-#define MT_TXD6_TX_RATE GENMASK(29, 16)
+-#define MT_TXD6_SGI GENMASK(15, 14)
+-#define MT_TXD6_HELTF GENMASK(13, 12)
+-#define MT_TXD6_LDPC BIT(11)
+-#define MT_TXD6_SPE_ID_IDX BIT(10)
+-#define MT_TXD6_ANT_ID GENMASK(7, 4)
+-#define MT_TXD6_DYN_BW BIT(3)
+-#define MT_TXD6_FIXED_BW BIT(2)
+-#define MT_TXD6_BW GENMASK(1, 0)
+-
+-#define MT_TXD7_TXD_LEN GENMASK(31, 30)
+-#define MT_TXD7_UDP_TCP_SUM BIT(29)
+-#define MT_TXD7_IP_SUM BIT(28)
+-
+-#define MT_TXD7_TYPE GENMASK(21, 20)
+-#define MT_TXD7_SUB_TYPE GENMASK(19, 16)
+-
+-#define MT_TXD7_PSE_FID GENMASK(27, 16)
+-#define MT_TXD7_SPE_IDX GENMASK(15, 11)
+-#define MT_TXD7_HW_AMSDU BIT(10)
+-#define MT_TXD7_TX_TIME GENMASK(9, 0)
+-
+-#define MT_TX_RATE_STBC BIT(13)
+-#define MT_TX_RATE_NSS GENMASK(12, 10)
+-#define MT_TX_RATE_MODE GENMASK(9, 6)
+-#define MT_TX_RATE_SU_EXT_TONE BIT(5)
+-#define MT_TX_RATE_DCM BIT(4)
+-/* VHT/HE only use bits 0-3 */
+-#define MT_TX_RATE_IDX GENMASK(5, 0)
+-
+ #define MT_TXP_MAX_BUF_NUM 6
+
+ struct mt7915_txp {
+@@ -324,41 +221,6 @@ struct mt7915_tx_free {
+ /* will support this field in further revision */
+ #define MT_TX_FREE_RATE GENMASK(13, 0)
+
+-#define MT_TXS0_FIXED_RATE BIT(31)
+-#define MT_TXS0_BW GENMASK(30, 29)
+-#define MT_TXS0_TID GENMASK(28, 26)
+-#define MT_TXS0_AMPDU BIT(25)
+-#define MT_TXS0_TXS_FORMAT GENMASK(24, 23)
+-#define MT_TXS0_BA_ERROR BIT(22)
+-#define MT_TXS0_PS_FLAG BIT(21)
+-#define MT_TXS0_TXOP_TIMEOUT BIT(20)
+-#define MT_TXS0_BIP_ERROR BIT(19)
+-
+-#define MT_TXS0_QUEUE_TIMEOUT BIT(18)
+-#define MT_TXS0_RTS_TIMEOUT BIT(17)
+-#define MT_TXS0_ACK_TIMEOUT BIT(16)
+-#define MT_TXS0_ACK_ERROR_MASK GENMASK(18, 16)
+-
+-#define MT_TXS0_TX_STATUS_HOST BIT(15)
+-#define MT_TXS0_TX_STATUS_MCU BIT(14)
+-#define MT_TXS0_TX_RATE GENMASK(13, 0)
+-
+-#define MT_TXS1_SEQNO GENMASK(31, 20)
+-#define MT_TXS1_RESP_RATE GENMASK(19, 16)
+-#define MT_TXS1_RXV_SEQNO GENMASK(15, 8)
+-#define MT_TXS1_TX_POWER_DBM GENMASK(7, 0)
+-
+-#define MT_TXS2_BF_STATUS GENMASK(31, 30)
+-#define MT_TXS2_LAST_TX_RATE GENMASK(29, 27)
+-#define MT_TXS2_SHARED_ANTENNA BIT(26)
+-#define MT_TXS2_WCID GENMASK(25, 16)
+-#define MT_TXS2_TX_DELAY GENMASK(15, 0)
+-
+-#define MT_TXS3_PID GENMASK(31, 24)
+-#define MT_TXS3_ANT_ID GENMASK(23, 0)
+-
+-#define MT_TXS4_TIMESTAMP GENMASK(31, 0)
+-
+ #define MT_TXS5_F0_FINAL_MPDU BIT(31)
+ #define MT_TXS5_F0_QOS BIT(30)
+ #define MT_TXS5_F0_TX_COUNT GENMASK(29, 25)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index b7e2b365356c7..17fa2acc0d070 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -322,7 +322,7 @@ int mt7915_mcu_wa_cmd(struct mt7915_dev *dev, int cmd, u32 a1, u32 a2, u32 a3)
+ static void
+ mt7915_mcu_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+- if (vif->csa_active)
++ if (vif->bss_conf.csa_active)
+ ieee80211_csa_finish(vif);
+ }
+
+@@ -409,7 +409,7 @@ mt7915_mcu_rx_log_message(struct mt7915_dev *dev, struct sk_buff *skb)
+ static void
+ mt7915_mcu_cca_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+- if (!vif->color_change_active)
++ if (!vif->bss_conf.color_change_active)
+ return;
+
+ ieee80211_color_change_finish(vif);
+@@ -1818,7 +1818,7 @@ mt7915_mcu_beacon_cntdwn(struct ieee80211_vif *vif, struct sk_buff *rskb,
+ if (!offs->cntdwn_counter_offs[0])
+ return;
+
+- sub_tag = vif->csa_active ? BSS_INFO_BCN_CSA : BSS_INFO_BCN_BCC;
++ sub_tag = vif->bss_conf.csa_active ? BSS_INFO_BCN_CSA : BSS_INFO_BCN_BCC;
+ tlv = mt7915_mcu_add_nested_subtlv(rskb, sub_tag, sizeof(*info),
+ &bcn->sub_ntlv, &bcn->len);
+ info = (struct bss_info_bcn_cntdwn *)tlv;
+@@ -1903,14 +1903,14 @@ mt7915_mcu_beacon_cont(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ if (offs->cntdwn_counter_offs[0]) {
+ u16 offset = offs->cntdwn_counter_offs[0];
+
+- if (vif->csa_active)
++ if (vif->bss_conf.csa_active)
+ cont->csa_ofs = cpu_to_le16(offset - 4);
+- if (vif->color_change_active)
++ if (vif->bss_conf.color_change_active)
+ cont->bcc_ofs = cpu_to_le16(offset - 3);
+ }
+
+ buf = (u8 *)tlv + sizeof(*cont);
+- mt7915_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, 0, NULL,
++ mt7915_mac_write_txwi(&dev->mt76, (__le32 *)buf, skb, wcid, 0, NULL,
+ BSS_CHANGED_BEACON);
+ memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
+ }
+@@ -2049,7 +2049,7 @@ mt7915_mcu_beacon_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vi
+
+ buf = (u8 *)tlv + sizeof(*discov);
+
+- mt7915_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, 0, NULL,
++ mt7915_mac_write_txwi(&dev->mt76, (__le32 *)buf, skb, wcid, 0, NULL,
+ changed);
+ memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
+
+@@ -2685,7 +2685,7 @@ int mt7915_mcu_set_tx(struct mt7915_dev *dev, struct ieee80211_vif *vif)
+ struct edca *e = &req.edca[ac];
+
+ e->set = WMM_PARAM_SET;
+- e->queue = ac + mvif->mt76.wmm_idx * MT7915_MAX_WMM_SETS;
++ e->queue = ac + mvif->mt76.wmm_idx * MT76_CONNAC_MAX_WMM_SETS;
+ e->aifs = q->aifs;
+ e->txop = cpu_to_le16(q->txop);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 4dcae69916694..2c1248ca0ed09 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -10,7 +10,6 @@
+ #include "regs.h"
+
+ #define MT7915_MAX_INTERFACES 19
+-#define MT7915_MAX_WMM_SETS 4
+ #define MT7915_WTBL_SIZE 288
+ #define MT7916_WTBL_SIZE 544
+ #define MT7915_WTBL_RESERVED (mt7915_wtbl_size(dev) - 1)
+@@ -341,20 +340,6 @@ enum {
+ __MT_WFDMA_MAX,
+ };
+
+-enum {
+- MT_CTX0,
+- MT_HIF0 = 0x0,
+-
+- MT_LMAC_AC00 = 0x0,
+- MT_LMAC_AC01,
+- MT_LMAC_AC02,
+- MT_LMAC_AC03,
+- MT_LMAC_ALTX0 = 0x10,
+- MT_LMAC_BMC0,
+- MT_LMAC_BCN0,
+- MT_LMAC_PSMP0,
+-};
+-
+ enum {
+ MT_RX_SEL0,
+ MT_RX_SEL1,
+@@ -557,7 +542,7 @@ bool mt7915_mac_wtbl_update(struct mt7915_dev *dev, int idx, u32 mask);
+ void mt7915_mac_reset_counters(struct mt7915_phy *phy);
+ void mt7915_mac_cca_stats_reset(struct mt7915_phy *phy);
+ void mt7915_mac_enable_nf(struct mt7915_dev *dev, bool ext_phy);
+-void mt7915_mac_write_txwi(struct mt7915_dev *dev, __le32 *txwi,
++void mt7915_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
+ struct ieee80211_key_conf *key, u32 changed);
+ void mt7915_mac_set_timing(struct mt7915_phy *phy);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
+index 20f63644e9295..0f5c1e5bffe1d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
+@@ -168,13 +168,14 @@ mt7915_tm_set_tam_arb(struct mt7915_phy *phy, bool enable, bool mu)
+ }
+
+ static int
+-mt7915_tm_set_wmm_qid(struct mt7915_dev *dev, u8 qid, u8 aifs, u8 cw_min,
++mt7915_tm_set_wmm_qid(struct mt7915_phy *phy, u8 qid, u8 aifs, u8 cw_min,
+ u16 cw_max, u16 txop)
+ {
++ struct mt7915_vif *mvif = (struct mt7915_vif *)phy->monitor_vif->drv_priv;
+ struct mt7915_mcu_tx req = { .total = 1 };
+ struct edca *e = &req.edca[0];
+
+- e->queue = qid;
++ e->queue = qid + mvif->mt76.wmm_idx * MT76_CONNAC_MAX_WMM_SETS;
+ e->set = WMM_PARAM_SET;
+
+ e->aifs = aifs;
+@@ -182,7 +183,7 @@ mt7915_tm_set_wmm_qid(struct mt7915_dev *dev, u8 qid, u8 aifs, u8 cw_min,
+ e->cw_max = cpu_to_le16(cw_max);
+ e->txop = cpu_to_le16(txop);
+
+- return mt7915_mcu_update_edca(dev, &req);
++ return mt7915_mcu_update_edca(phy->dev, &req);
+ }
+
+ static int
+@@ -244,7 +245,7 @@ done:
+
+ mt7915_tm_set_slot_time(phy, slot_time, sifs);
+
+- return mt7915_tm_set_wmm_qid(dev,
++ return mt7915_tm_set_wmm_qid(phy,
+ mt76_connac_lmac_mapping(IEEE80211_AC_BE),
+ aifsn, cw, cw, 0);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index 4a8675634f803..8ff1a0f2f076c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -53,8 +53,8 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
+ struct wiphy *wiphy = hw->wiphy;
+
+ hw->queues = 4;
+- hw->max_rx_aggregation_subframes = 64;
+- hw->max_tx_aggregation_subframes = 128;
++ hw->max_rx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
++ hw->max_tx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
+ hw->netdev_features = NETIF_F_RXCSUM;
+
+ hw->radiotap_timestamp.units_pos =
+@@ -304,7 +304,7 @@ int mt7921_register_device(struct mt7921_dev *dev)
+ IEEE80211_HT_CAP_LDPC_CODING |
+ IEEE80211_HT_CAP_MAX_AMSDU;
+ dev->mphy.sband_5g.sband.vht_cap.cap |=
+- IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 |
++ IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK |
+ IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+ IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index a630ddbf19e54..2a2ea7b9977a4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -808,217 +808,6 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ return 0;
+ }
+
+-static void
+-mt7921_mac_write_txwi_8023(struct mt7921_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct mt76_wcid *wcid)
+-{
+- u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+- u8 fc_type, fc_stype;
+- u16 ethertype;
+- bool wmm = false;
+- u32 val;
+-
+- if (wcid->sta) {
+- struct ieee80211_sta *sta;
+-
+- sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
+- wmm = sta->wme;
+- }
+-
+- val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+- FIELD_PREP(MT_TXD1_TID, tid);
+-
+- ethertype = get_unaligned_be16(&skb->data[12]);
+- if (ethertype >= ETH_P_802_3_MIN)
+- val |= MT_TXD1_ETH_802_3;
+-
+- txwi[1] |= cpu_to_le32(val);
+-
+- fc_type = IEEE80211_FTYPE_DATA >> 2;
+- fc_stype = wmm ? IEEE80211_STYPE_QOS_DATA >> 4 : 0;
+-
+- val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype);
+-
+- txwi[2] |= cpu_to_le32(val);
+-
+- val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+- txwi[7] |= cpu_to_le32(val);
+-}
+-
+-static void
+-mt7921_mac_write_txwi_80211(struct mt7921_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct ieee80211_key_conf *key)
+-{
+- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+- struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+- struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+- bool multicast = is_multicast_ether_addr(hdr->addr1);
+- u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+- __le16 fc = hdr->frame_control;
+- u8 fc_type, fc_stype;
+- u32 val;
+-
+- if (ieee80211_is_action(fc) &&
+- mgmt->u.action.category == WLAN_CATEGORY_BACK &&
+- mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) {
+- u16 capab = le16_to_cpu(mgmt->u.action.u.addba_req.capab);
+-
+- txwi[5] |= cpu_to_le32(MT_TXD5_ADD_BA);
+- tid = (capab >> 2) & IEEE80211_QOS_CTL_TID_MASK;
+- } else if (ieee80211_is_back_req(hdr->frame_control)) {
+- struct ieee80211_bar *bar = (struct ieee80211_bar *)hdr;
+- u16 control = le16_to_cpu(bar->control);
+-
+- tid = FIELD_GET(IEEE80211_BAR_CTRL_TID_INFO_MASK, control);
+- }
+-
+- val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_11) |
+- FIELD_PREP(MT_TXD1_HDR_INFO,
+- ieee80211_get_hdrlen_from_skb(skb) / 2) |
+- FIELD_PREP(MT_TXD1_TID, tid);
+- txwi[1] |= cpu_to_le32(val);
+-
+- fc_type = (le16_to_cpu(fc) & IEEE80211_FCTL_FTYPE) >> 2;
+- fc_stype = (le16_to_cpu(fc) & IEEE80211_FCTL_STYPE) >> 4;
+-
+- val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype) |
+- FIELD_PREP(MT_TXD2_MULTICAST, multicast);
+-
+- if (key && multicast && ieee80211_is_robust_mgmt_frame(skb) &&
+- key->cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
+- val |= MT_TXD2_BIP;
+- txwi[3] &= ~cpu_to_le32(MT_TXD3_PROTECT_FRAME);
+- }
+-
+- if (!ieee80211_is_data(fc) || multicast ||
+- info->flags & IEEE80211_TX_CTL_USE_MINRATE)
+- val |= MT_TXD2_FIX_RATE;
+-
+- txwi[2] |= cpu_to_le32(val);
+-
+- if (ieee80211_is_beacon(fc)) {
+- txwi[3] &= ~cpu_to_le32(MT_TXD3_SW_POWER_MGMT);
+- txwi[3] |= cpu_to_le32(MT_TXD3_REM_TX_COUNT);
+- }
+-
+- if (info->flags & IEEE80211_TX_CTL_INJECTED) {
+- u16 seqno = le16_to_cpu(hdr->seq_ctrl);
+-
+- if (ieee80211_is_back_req(hdr->frame_control)) {
+- struct ieee80211_bar *bar;
+-
+- bar = (struct ieee80211_bar *)skb->data;
+- seqno = le16_to_cpu(bar->start_seq_num);
+- }
+-
+- val = MT_TXD3_SN_VALID |
+- FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
+- txwi[3] |= cpu_to_le32(val);
+- txwi[7] &= ~cpu_to_le32(MT_TXD7_HW_AMSDU);
+- }
+-
+- if (mt76_is_mmio(&dev->mt76)) {
+- val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+- txwi[7] |= cpu_to_le32(val);
+- } else {
+- val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
+- txwi[8] |= cpu_to_le32(val);
+- }
+-}
+-
+-void mt7921_mac_write_txwi(struct mt7921_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct mt76_wcid *wcid,
+- struct ieee80211_key_conf *key, int pid,
+- bool beacon)
+-{
+- struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+- struct ieee80211_vif *vif = info->control.vif;
+- struct mt76_phy *mphy = &dev->mphy;
+- u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0;
+- bool is_mmio = mt76_is_mmio(&dev->mt76);
+- u32 sz_txd = is_mmio ? MT_TXD_SIZE : MT_SDIO_TXD_SIZE;
+- bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
+- u16 tx_count = 15;
+- u32 val;
+-
+- if (vif) {
+- struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+-
+- omac_idx = mvif->omac_idx;
+- wmm_idx = mvif->wmm_idx;
+- }
+-
+- if (beacon) {
+- p_fmt = MT_TX_TYPE_FW;
+- q_idx = MT_LMAC_BCN0;
+- } else if (skb_get_queue_mapping(skb) >= MT_TXQ_PSD) {
+- p_fmt = is_mmio ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
+- q_idx = MT_LMAC_ALTX0;
+- } else {
+- p_fmt = is_mmio ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
+- q_idx = wmm_idx * MT7921_MAX_WMM_SETS +
+- mt76_connac_lmac_mapping(skb_get_queue_mapping(skb));
+- }
+-
+- val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + sz_txd) |
+- FIELD_PREP(MT_TXD0_PKT_FMT, p_fmt) |
+- FIELD_PREP(MT_TXD0_Q_IDX, q_idx);
+- txwi[0] = cpu_to_le32(val);
+-
+- val = MT_TXD1_LONG_FORMAT |
+- FIELD_PREP(MT_TXD1_WLAN_IDX, wcid->idx) |
+- FIELD_PREP(MT_TXD1_OWN_MAC, omac_idx);
+-
+- txwi[1] = cpu_to_le32(val);
+- txwi[2] = 0;
+-
+- val = FIELD_PREP(MT_TXD3_REM_TX_COUNT, tx_count);
+- if (key)
+- val |= MT_TXD3_PROTECT_FRAME;
+- if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+- val |= MT_TXD3_NO_ACK;
+-
+- txwi[3] = cpu_to_le32(val);
+- txwi[4] = 0;
+-
+- val = FIELD_PREP(MT_TXD5_PID, pid);
+- if (pid >= MT_PACKET_ID_FIRST)
+- val |= MT_TXD5_TX_STATUS_HOST;
+- txwi[5] = cpu_to_le32(val);
+-
+- txwi[6] = 0;
+- txwi[7] = wcid->amsdu ? cpu_to_le32(MT_TXD7_HW_AMSDU) : 0;
+-
+- if (is_8023)
+- mt7921_mac_write_txwi_8023(dev, txwi, skb, wcid);
+- else
+- mt7921_mac_write_txwi_80211(dev, txwi, skb, key);
+-
+- if (txwi[2] & cpu_to_le32(MT_TXD2_FIX_RATE)) {
+- int rateidx = vif ? ffs(vif->bss_conf.basic_rates) - 1 : 0;
+- u16 rate, mode;
+-
+- /* hardware won't add HTC for mgmt/ctrl frame */
+- txwi[2] |= cpu_to_le32(MT_TXD2_HTC_VLD);
+-
+- rate = mt76_calculate_default_rate(mphy, rateidx);
+- mode = rate >> 8;
+- rate &= GENMASK(7, 0);
+- rate |= FIELD_PREP(MT_TX_RATE_MODE, mode);
+-
+- val = MT_TXD6_FIXED_BW |
+- FIELD_PREP(MT_TXD6_TX_RATE, rate);
+- txwi[6] |= cpu_to_le32(val);
+- txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
+- }
+-}
+-EXPORT_SYMBOL_GPL(mt7921_mac_write_txwi);
+-
+ void mt7921_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi)
+ {
+ struct mt7921_sta *msta;
+@@ -1646,7 +1435,7 @@ mt7921_usb_sdio_write_txwi(struct mt7921_dev *dev, struct mt76_wcid *wcid,
+ __le32 *txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE);
+
+ memset(txwi, 0, MT_SDIO_TXD_SIZE);
+- mt7921_mac_write_txwi(dev, txwi, skb, wcid, key, pid, false);
++ mt76_connac2_mac_write_txwi(&dev->mt76, txwi, skb, wcid, key, pid, 0);
+ skb_push(skb, MT_SDIO_TXD_SIZE);
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+index 79447e2d0143b..556e687bd235d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+@@ -4,6 +4,8 @@
+ #ifndef __MT7921_MAC_H
+ #define __MT7921_MAC_H
+
++#include "../mt76_connac2_mac.h"
++
+ #define MT_CT_PARSE_LEN 72
+ #define MT_CT_DMA_BUF_NUM 2
+
+@@ -163,20 +165,6 @@ enum rx_pkt_type {
+ #define MT_CRXV_FOE_HI GENMASK(6, 0)
+ #define MT_CRXV_FOE_SHIFT 13
+
+-enum tx_header_format {
+- MT_HDR_FORMAT_802_3,
+- MT_HDR_FORMAT_CMD,
+- MT_HDR_FORMAT_802_11,
+- MT_HDR_FORMAT_802_11_EXT,
+-};
+-
+-enum tx_pkt_type {
+- MT_TX_TYPE_CT,
+- MT_TX_TYPE_SF,
+- MT_TX_TYPE_CMD,
+- MT_TX_TYPE_FW,
+-};
+-
+ enum tx_port_idx {
+ MT_TX_PORT_IDX_LMAC,
+ MT_TX_PORT_IDX_MCU
+@@ -197,104 +185,6 @@ enum tx_mcu_port_q_idx {
+ #define MT_CT_INFO_HSR2_TX BIT(4)
+ #define MT_CT_INFO_FROM_HOST BIT(7)
+
+-#define MT_TXD_SIZE (8 * 4)
+-
+-#define MT_SDIO_TXD_SIZE (MT_TXD_SIZE + 8 * 4)
+-#define MT_SDIO_TAIL_SIZE 8
+-#define MT_SDIO_HDR_SIZE 4
+-#define MT_USB_TAIL_SIZE 4
+-
+-#define MT_TXD0_Q_IDX GENMASK(31, 25)
+-#define MT_TXD0_PKT_FMT GENMASK(24, 23)
+-#define MT_TXD0_ETH_TYPE_OFFSET GENMASK(22, 16)
+-#define MT_TXD0_TX_BYTES GENMASK(15, 0)
+-
+-#define MT_TXD1_LONG_FORMAT BIT(31)
+-#define MT_TXD1_TGID BIT(30)
+-#define MT_TXD1_OWN_MAC GENMASK(29, 24)
+-#define MT_TXD1_AMSDU BIT(23)
+-#define MT_TXD1_TID GENMASK(22, 20)
+-#define MT_TXD1_HDR_PAD GENMASK(19, 18)
+-#define MT_TXD1_HDR_FORMAT GENMASK(17, 16)
+-#define MT_TXD1_HDR_INFO GENMASK(15, 11)
+-#define MT_TXD1_ETH_802_3 BIT(15)
+-#define MT_TXD1_VTA BIT(10)
+-#define MT_TXD1_WLAN_IDX GENMASK(9, 0)
+-
+-#define MT_TXD2_FIX_RATE BIT(31)
+-#define MT_TXD2_FIXED_RATE BIT(30)
+-#define MT_TXD2_POWER_OFFSET GENMASK(29, 24)
+-#define MT_TXD2_MAX_TX_TIME GENMASK(23, 16)
+-#define MT_TXD2_FRAG GENMASK(15, 14)
+-#define MT_TXD2_HTC_VLD BIT(13)
+-#define MT_TXD2_DURATION BIT(12)
+-#define MT_TXD2_BIP BIT(11)
+-#define MT_TXD2_MULTICAST BIT(10)
+-#define MT_TXD2_RTS BIT(9)
+-#define MT_TXD2_SOUNDING BIT(8)
+-#define MT_TXD2_NDPA BIT(7)
+-#define MT_TXD2_NDP BIT(6)
+-#define MT_TXD2_FRAME_TYPE GENMASK(5, 4)
+-#define MT_TXD2_SUB_TYPE GENMASK(3, 0)
+-
+-#define MT_TXD3_SN_VALID BIT(31)
+-#define MT_TXD3_PN_VALID BIT(30)
+-#define MT_TXD3_SW_POWER_MGMT BIT(29)
+-#define MT_TXD3_BA_DISABLE BIT(28)
+-#define MT_TXD3_SEQ GENMASK(27, 16)
+-#define MT_TXD3_REM_TX_COUNT GENMASK(15, 11)
+-#define MT_TXD3_TX_COUNT GENMASK(10, 6)
+-#define MT_TXD3_TIMING_MEASURE BIT(5)
+-#define MT_TXD3_DAS BIT(4)
+-#define MT_TXD3_EEOSP BIT(3)
+-#define MT_TXD3_EMRD BIT(2)
+-#define MT_TXD3_PROTECT_FRAME BIT(1)
+-#define MT_TXD3_NO_ACK BIT(0)
+-
+-#define MT_TXD4_PN_LOW GENMASK(31, 0)
+-
+-#define MT_TXD5_PN_HIGH GENMASK(31, 16)
+-#define MT_TXD5_MD BIT(15)
+-#define MT_TXD5_ADD_BA BIT(14)
+-#define MT_TXD5_TX_STATUS_HOST BIT(10)
+-#define MT_TXD5_TX_STATUS_MCU BIT(9)
+-#define MT_TXD5_TX_STATUS_FMT BIT(8)
+-#define MT_TXD5_PID GENMASK(7, 0)
+-
+-#define MT_TXD6_TX_IBF BIT(31)
+-#define MT_TXD6_TX_EBF BIT(30)
+-#define MT_TXD6_TX_RATE GENMASK(29, 16)
+-#define MT_TXD6_SGI GENMASK(15, 14)
+-#define MT_TXD6_HELTF GENMASK(13, 12)
+-#define MT_TXD6_LDPC BIT(11)
+-#define MT_TXD6_SPE_ID_IDX BIT(10)
+-#define MT_TXD6_ANT_ID GENMASK(7, 4)
+-#define MT_TXD6_DYN_BW BIT(3)
+-#define MT_TXD6_FIXED_BW BIT(2)
+-#define MT_TXD6_BW GENMASK(1, 0)
+-
+-#define MT_TXD7_TXD_LEN GENMASK(31, 30)
+-#define MT_TXD7_UDP_TCP_SUM BIT(29)
+-#define MT_TXD7_IP_SUM BIT(28)
+-
+-#define MT_TXD7_TYPE GENMASK(21, 20)
+-#define MT_TXD7_SUB_TYPE GENMASK(19, 16)
+-
+-#define MT_TXD7_PSE_FID GENMASK(27, 16)
+-#define MT_TXD7_SPE_IDX GENMASK(15, 11)
+-#define MT_TXD7_HW_AMSDU BIT(10)
+-#define MT_TXD7_TX_TIME GENMASK(9, 0)
+-
+-#define MT_TXD8_L_TYPE GENMASK(5, 4)
+-#define MT_TXD8_L_SUB_TYPE GENMASK(3, 0)
+-
+-#define MT_TX_RATE_STBC BIT(13)
+-#define MT_TX_RATE_NSS GENMASK(12, 10)
+-#define MT_TX_RATE_MODE GENMASK(9, 6)
+-#define MT_TX_RATE_SU_EXT_TONE BIT(5)
+-#define MT_TX_RATE_DCM BIT(4)
+-#define MT_TX_RATE_IDX GENMASK(3, 0)
+-
+ #define MT_TXP_MAX_BUF_NUM 6
+
+ struct mt7921_txp {
+@@ -325,15 +215,6 @@ struct mt7921_tx_free {
+ /* will support this field in further revision */
+ #define MT_TX_FREE_RATE GENMASK(13, 0)
+
+-#define MT_TXS0_BW GENMASK(30, 29)
+-#define MT_TXS0_TXS_FORMAT GENMASK(24, 23)
+-#define MT_TXS0_ACK_ERROR_MASK GENMASK(18, 16)
+-#define MT_TXS0_TX_RATE GENMASK(13, 0)
+-
+-#define MT_TXS2_WCID GENMASK(25, 16)
+-
+-#define MT_TXS3_PID GENMASK(31, 24)
+-
+ static inline struct mt7921_txp_common *
+ mt7921_txwi_to_txp(struct mt76_dev *dev, struct mt76_txwi_cache *t)
+ {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 80279f342109a..e86fe9ee4623e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -322,7 +322,7 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ mvif->mt76.omac_idx = mvif->mt76.idx;
+ mvif->phy = phy;
+ mvif->mt76.band_idx = 0;
+- mvif->mt76.wmm_idx = mvif->mt76.idx % MT7921_MAX_WMM_SETS;
++ mvif->mt76.wmm_idx = mvif->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid,
+ true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 12bab18c41719..613a94be8ea44 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -582,13 +582,6 @@ static int mt7921_load_patch(struct mt7921_dev *dev)
+ if (ret)
+ dev_err(dev->mt76.dev, "Failed to start patch\n");
+
+- if (mt76_is_sdio(&dev->mt76)) {
+- /* activate again */
+- ret = __mt7921_mcu_fw_pmctrl(dev);
+- if (!ret)
+- ret = __mt7921_mcu_drv_pmctrl(dev);
+- }
+-
+ out:
+ sem = mt76_connac_mcu_patch_sem_ctrl(&dev->mt76, false);
+ switch (sem) {
+@@ -599,6 +592,14 @@ out:
+ dev_err(dev->mt76.dev, "Failed to release patch semaphore\n");
+ break;
+ }
++
++ if (!ret && mt76_is_sdio(&dev->mt76)) {
++ /* activate again */
++ ret = __mt7921_mcu_fw_pmctrl(dev);
++ if (!ret)
++ ret = __mt7921_mcu_drv_pmctrl(dev);
++ }
++
+ release_firmware(fw);
+
+ return ret;
+@@ -1255,8 +1256,11 @@ mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
+ };
+ struct sk_buff *skb;
+
++	/* Support the enable/update path only; the disable flow
++	 * is handled automatically in the bss stop handler.
++ */
+ if (!enable)
+- goto out;
++ return -EOPNOTSUPP;
+
+ skb = ieee80211_beacon_get_template(mt76_hw(dev), vif, &offs);
+ if (!skb)
+@@ -1268,8 +1272,8 @@ mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
+ return -EINVAL;
+ }
+
+- mt7921_mac_write_txwi(dev, (__le32 *)(req.beacon_tlv.pkt), skb,
+- wcid, NULL, 0, true);
++ mt76_connac2_mac_write_txwi(&dev->mt76, (__le32 *)(req.beacon_tlv.pkt),
++ skb, wcid, NULL, 0, BSS_CHANGED_BEACON);
+ memcpy(req.beacon_tlv.pkt + MT_TXD_SIZE, skb->data, skb->len);
+ req.beacon_tlv.pkt_len = cpu_to_le16(MT_TXD_SIZE + skb->len);
+ req.beacon_tlv.tim_ie_pos = cpu_to_le16(MT_TXD_SIZE + offs.tim_offset);
+@@ -1282,7 +1286,6 @@ mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
+ }
+ dev_kfree_skb(skb);
+
+-out:
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &req, sizeof(req), true);
+ }
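
Two behavioural points in the mcu.c hunks above: the SDIO "activate again" round trip now runs after the patch semaphore is released, and only if everything before it succeeded; and the beacon-offload helper now refuses the disable case outright instead of sending an empty update. A compilable toy of the reordered load flow, with every function a stand-in rather than a real mt76 call:

#include <stdio.h>

static int load_firmware(void) { return 0; }  /* stand-in */
static int release_sem(void)   { return 0; }  /* stand-in */
static int fw_own(void)        { return 0; }  /* stand-in */
static int drv_own(void)       { return 0; }  /* stand-in */

static int load_patch(int is_sdio)
{
	int ret = load_firmware();

	if (release_sem())
		fprintf(stderr, "failed to release patch semaphore\n");

	/* re-own only on SDIO, and only if nothing failed so far */
	if (!ret && is_sdio) {
		ret = fw_own();
		if (!ret)
			ret = drv_own();
	}
	return ret;
}

int main(void) { return load_patch(1); }
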
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 5ca584bb2fc65..66054123bcc47 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -10,7 +10,6 @@
+ #include "regs.h"
+
+ #define MT7921_MAX_INTERFACES 4
+-#define MT7921_MAX_WMM_SETS 4
+ #define MT7921_WTBL_SIZE 20
+ #define MT7921_WTBL_RESERVED (MT7921_WTBL_SIZE - 1)
+ #define MT7921_WTBL_STA (MT7921_WTBL_RESERVED - \
+@@ -247,16 +246,6 @@ struct mt7921_txpwr {
+ } data[TXPWR_MAX_NUM];
+ };
+
+-enum {
+- MT_LMAC_AC00,
+- MT_LMAC_AC01,
+- MT_LMAC_AC02,
+- MT_LMAC_AC03,
+- MT_LMAC_ALTX0 = 0x10,
+- MT_LMAC_BMC0,
+- MT_LMAC_BCN0,
+-};
+-
+ static inline struct mt7921_phy *
+ mt7921_hw_phy(struct ieee80211_hw *hw)
+ {
+@@ -424,10 +413,6 @@ int mt7921_testmode_cmd(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ void *data, int len);
+ int mt7921_testmode_dump(struct ieee80211_hw *hw, struct sk_buff *msg,
+ struct netlink_callback *cb, void *data, int len);
+-void mt7921_mac_write_txwi(struct mt7921_dev *dev, __le32 *txwi,
+- struct sk_buff *skb, struct mt76_wcid *wcid,
+- struct ieee80211_key_conf *key, int pid,
+- bool beacon);
+ void mt7921_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi);
+ void mt7921_mac_sta_poll(struct mt7921_dev *dev);
+ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+index 5ca14dbbdd265..b0f58bcf70cb0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+@@ -72,8 +72,8 @@ int mt7921e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ }
+
+ pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+- mt7921_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, key,
+- pid, false);
++ mt76_connac2_mac_write_txwi(mdev, txwi_ptr, tx_info->skb, wcid, key,
++ pid, 0);
+
+ txp = (struct mt7921_txp_common *)(txwi + MT_TXD_SIZE);
+ memset(txp, 0, sizeof(struct mt7921_txp_common));
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index 36669e5aeef39..a1ab5f878f81a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -102,7 +102,7 @@ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev)
+ {
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct mt76_connac_pm *pm = &dev->pm;
+- int i, err = 0;
++ int i;
+
+ for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+ mt76_wr(dev, MT_CONN_ON_LPCTL, PCIE_LPCR_HOST_SET_OWN);
+@@ -114,12 +114,12 @@ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev)
+ if (i == MT7921_DRV_OWN_RETRY_COUNT) {
+ dev_err(dev->mt76.dev, "firmware own failed\n");
+ clear_bit(MT76_STATE_PM, &mphy->state);
+- err = -EIO;
++ return -EIO;
+ }
+
+ pm->stats.last_doze_event = jiffies;
+ pm->stats.awake_time += pm->stats.last_doze_event -
+ pm->stats.last_wake_event;
+
+- return err;
++ return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+index 54a5c712a3c3e..c572a3107b8b7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+@@ -136,8 +136,8 @@ int mt7921s_mcu_fw_pmctrl(struct mt7921_dev *dev)
+ struct sdio_func *func = dev->mt76.sdio.func;
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct mt76_connac_pm *pm = &dev->pm;
+- int err = 0;
+ u32 status;
++ int err;
+
+ sdio_claim_host(func);
+
+@@ -148,7 +148,7 @@ int mt7921s_mcu_fw_pmctrl(struct mt7921_dev *dev)
+ 2000, 1000000);
+ if (err < 0) {
+ dev_err(dev->mt76.dev, "mailbox ACK not cleared\n");
+- goto err;
++ goto out;
+ }
+ }
+
+@@ -156,18 +156,18 @@ int mt7921s_mcu_fw_pmctrl(struct mt7921_dev *dev)
+
+ err = readx_poll_timeout(mt76s_read_pcr, &dev->mt76, status,
+ !(status & WHLPCR_IS_DRIVER_OWN), 2000, 1000000);
++out:
+ sdio_release_host(func);
+
+-err:
+ if (err < 0) {
+ dev_err(dev->mt76.dev, "firmware own failed\n");
+ clear_bit(MT76_STATE_PM, &mphy->state);
+- err = -EIO;
++ return -EIO;
+ }
+
+ pm->stats.last_doze_event = jiffies;
+ pm->stats.awake_time += pm->stats.last_doze_event -
+ pm->stats.last_wake_event;
+
+- return err;
++ return 0;
+ }
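
Both pmctrl hunks converge on the same shape: release the host on a single path, then map any poll failure to a fixed -EIO instead of threading an err variable to the end of the function. A compilable toy of that shape (names are placeholders, not mt76 APIs):

#include <errno.h>
#include <stdio.h>

static void claim(void)    { }            /* sdio_claim_host() stand-in */
static void release(void)  { }            /* sdio_release_host() stand-in */
static int  poll_own(void) { return 0; }  /* 0 = ownership handed over */

static int fw_pmctrl(void)
{
	int err;

	claim();
	err = poll_own();
	release();          /* single unlock site, runs on every path */

	if (err < 0) {
		fprintf(stderr, "firmware own failed\n");
		return -EIO;
	}
	return 0;
}

int main(void) { return fw_pmctrl(); }
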
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index 8d8378bafd9b0..269748b9a1c40 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -1378,7 +1378,8 @@ static int change_beacon(struct wiphy *wiphy, struct net_device *dev,
+ return wilc_add_beacon(vif, 0, 0, beacon);
+ }
+
+-static int stop_ap(struct wiphy *wiphy, struct net_device *dev)
++static int stop_ap(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id)
+ {
+ int ret;
+ struct wilc_vif *vif = netdev_priv(dev);
+diff --git a/drivers/net/wireless/microchip/wilc1000/spi.c b/drivers/net/wireless/microchip/wilc1000/spi.c
+index 18420e954402f..2ae8dd3411aca 100644
+--- a/drivers/net/wireless/microchip/wilc1000/spi.c
++++ b/drivers/net/wireless/microchip/wilc1000/spi.c
+@@ -191,11 +191,11 @@ static void wilc_wlan_power(struct wilc *wilc, bool on)
+ /* assert ENABLE: */
+ gpiod_set_value(gpios->enable, 1);
+ mdelay(5);
+- /* deassert RESET: */
+- gpiod_set_value(gpios->reset, 0);
+- } else {
+ /* assert RESET: */
+ gpiod_set_value(gpios->reset, 1);
++ } else {
++ /* deassert RESET: */
++ gpiod_set_value(gpios->reset, 0);
+ /* deassert ENABLE: */
+ gpiod_set_value(gpios->enable, 0);
+ }
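
The wilc1000 hunk swaps which logical level the power paths drive on RESET. The detail that makes the new version correct is that gpiod_set_value() takes the logical level: assuming the reset line is flagged active-low in the device tree, writing 1 asserts it (drives the pin physically low). A sketch of the corrected sequence under that DT-polarity assumption:

#include <linux/gpio/consumer.h>
#include <linux/delay.h>
#include <linux/types.h>

static void chip_power(struct gpio_desc *enable, struct gpio_desc *reset,
		       bool on)
{
	if (on) {
		gpiod_set_value(enable, 1); /* assert ENABLE */
		mdelay(5);
		gpiod_set_value(reset, 1);  /* assert RESET: logical 1 */
	} else {
		gpiod_set_value(reset, 0);  /* deassert RESET */
		gpiod_set_value(enable, 0); /* deassert ENABLE */
	}
}
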
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+index 84b15a655eab1..1593e810b3ca4 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+@@ -352,7 +352,8 @@ static int qtnf_start_ap(struct wiphy *wiphy, struct net_device *dev,
+ return ret;
+ }
+
+-static int qtnf_stop_ap(struct wiphy *wiphy, struct net_device *dev)
++static int qtnf_stop_ap(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id)
+ {
+ struct qtnf_vif *vif = qtnf_netdev_get_priv(dev);
+ int ret;
+@@ -500,7 +501,7 @@ qtnf_dump_station(struct wiphy *wiphy, struct net_device *dev,
+
+ switch (vif->wdev.iftype) {
+ case NL80211_IFTYPE_STATION:
+- if (idx != 0 || !vif->wdev.current_bss)
++ if (idx != 0 || !vif->wdev.connected)
+ return -ENOENT;
+
+ ether_addr_copy(mac, vif->bssid);
+@@ -729,7 +730,7 @@ qtnf_disconnect(struct wiphy *wiphy, struct net_device *dev,
+ pr_err("VIF%u.%u: failed to disconnect\n",
+ mac->macid, vif->vifid);
+
+- if (vif->wdev.current_bss) {
++ if (vif->wdev.connected) {
+ netif_carrier_off(vif->netdev);
+ cfg80211_disconnected(vif->netdev, reason_code,
+ NULL, 0, true, GFP_KERNEL);
+@@ -745,10 +746,11 @@ qtnf_dump_survey(struct wiphy *wiphy, struct net_device *dev,
+ struct qtnf_wmac *mac = wiphy_priv(wiphy);
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct ieee80211_supported_band *sband;
+- const struct cfg80211_chan_def *chandef = &wdev->chandef;
++ const struct cfg80211_chan_def *chandef = wdev_chandef(wdev, 0);
+ struct ieee80211_channel *chan;
+ int ret;
+
++
+ sband = wiphy->bands[NL80211_BAND_2GHZ];
+ if (sband && idx >= sband->n_channels) {
+ idx -= sband->n_channels;
+@@ -765,7 +767,7 @@ qtnf_dump_survey(struct wiphy *wiphy, struct net_device *dev,
+ survey->channel = chan;
+ survey->filled = 0x0;
+
+- if (chan == chandef->chan)
++ if (chandef && chan == chandef->chan)
+ survey->filled = SURVEY_INFO_IN_USE;
+
+ ret = qtnf_cmd_get_chan_stats(mac, chan->center_freq, survey);
+@@ -778,7 +780,7 @@ qtnf_dump_survey(struct wiphy *wiphy, struct net_device *dev,
+
+ static int
+ qtnf_get_channel(struct wiphy *wiphy, struct wireless_dev *wdev,
+- struct cfg80211_chan_def *chandef)
++ unsigned int link_id, struct cfg80211_chan_def *chandef)
+ {
+ struct net_device *ndev = wdev->netdev;
+ struct qtnf_vif *vif;
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/commands.c b/drivers/net/wireless/quantenna/qtnfmac/commands.c
+index c68563c830981..3d734a7a5ba8e 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/commands.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/commands.c
+@@ -2005,7 +2005,7 @@ int qtnf_cmd_send_scan(struct qtnf_wmac *mac)
+ dwell_active = scan_req->duration;
+ dwell_passive = scan_req->duration;
+ } else if (wdev->iftype == NL80211_IFTYPE_STATION &&
+- wdev->current_bss) {
++ wdev->connected) {
+ /* let device select dwell based on traffic conditions */
+ dwell_active = QTNF_SCAN_TIME_AUTO;
+ dwell_passive = QTNF_SCAN_TIME_AUTO;
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
+index 8dc80574d08d9..4fafe370101a2 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
+@@ -189,7 +189,7 @@ qtnf_event_handle_bss_join(struct qtnf_vif *vif,
+ vif->mac->macid, vif->vifid,
+ join_info->bssid, chandef.chan->hw_value);
+
+- if (!vif->wdev.ssid_len) {
++ if (!vif->wdev.u.client.ssid_len) {
+ pr_warn("VIF%u.%u: SSID unknown for BSS:%pM\n",
+ vif->mac->macid, vif->vifid,
+ join_info->bssid);
+@@ -197,7 +197,7 @@ qtnf_event_handle_bss_join(struct qtnf_vif *vif,
+ goto done;
+ }
+
+- ie = kzalloc(2 + vif->wdev.ssid_len, GFP_KERNEL);
++ ie = kzalloc(2 + vif->wdev.u.client.ssid_len, GFP_KERNEL);
+ if (!ie) {
+ pr_warn("VIF%u.%u: IE alloc failed for BSS:%pM\n",
+ vif->mac->macid, vif->vifid,
+@@ -207,14 +207,15 @@ qtnf_event_handle_bss_join(struct qtnf_vif *vif,
+ }
+
+ ie[0] = WLAN_EID_SSID;
+- ie[1] = vif->wdev.ssid_len;
+- memcpy(ie + 2, vif->wdev.ssid, vif->wdev.ssid_len);
++ ie[1] = vif->wdev.u.client.ssid_len;
++ memcpy(ie + 2, vif->wdev.u.client.ssid,
++ vif->wdev.u.client.ssid_len);
+
+ bss = cfg80211_inform_bss(wiphy, chandef.chan,
+ CFG80211_BSS_FTYPE_UNKNOWN,
+ join_info->bssid, 0,
+ WLAN_CAPABILITY_ESS, 100,
+- ie, 2 + vif->wdev.ssid_len,
++ ie, 2 + vif->wdev.u.client.ssid_len,
+ 0, GFP_KERNEL);
+ if (!bss) {
+ pr_warn("VIF%u.%u: can't connect to unknown BSS: %pM\n",
+@@ -470,14 +471,14 @@ qtnf_event_handle_freq_change(struct qtnf_wmac *mac,
+ continue;
+
+ if (vif->wdev.iftype == NL80211_IFTYPE_STATION &&
+- !vif->wdev.current_bss)
++ !vif->wdev.connected)
+ continue;
+
+ if (!vif->netdev)
+ continue;
+
+ mutex_lock(&vif->wdev.mtx);
+- cfg80211_ch_switch_notify(vif->netdev, &chandef);
++ cfg80211_ch_switch_notify(vif->netdev, &chandef, 0);
+ mutex_unlock(&vif->wdev.mtx);
+ }
+
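
All of the qtnfmac churn above tracks one cfg80211 rework in 5.19: per-interface state moved into the wdev->u union (wdev->current_bss becomes the wdev->connected flag, wdev->ssid becomes wdev->u.client.ssid), and several ops and notifiers grew an unsigned int link_id for multi-link operation. A minimal sketch of the new call shapes as a single-link driver would see them; my_stop_ap is a hypothetical callback, not a qtnfmac symbol:

#include <net/cfg80211.h>

/* hypothetical single-link driver callback */
static int my_stop_ap(struct wiphy *wiphy, struct net_device *dev,
		      unsigned int link_id)
{
	/* single-link hardware only ever sees link_id == 0 */
	return 0;
}

static bool sta_is_connected(struct wireless_dev *wdev)
{
	return wdev->connected;   /* formerly: wdev->current_bss != NULL */
}
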
+diff --git a/drivers/net/wireless/realtek/rtlwifi/debug.c b/drivers/net/wireless/realtek/rtlwifi/debug.c
+index 901cdfe3723cf..0b1bc04cb6adb 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/debug.c
++++ b/drivers/net/wireless/realtek/rtlwifi/debug.c
+@@ -329,8 +329,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
+
+ tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
+
+- if (!buffer || copy_from_user(tmp, buffer, tmp_len))
+- return count;
++ if (copy_from_user(tmp, buffer, tmp_len))
++ return -EFAULT;
+
+ tmp[tmp_len] = '\0';
+
+@@ -340,8 +340,8 @@ static ssize_t rtl_debugfs_set_write_h2c(struct file *filp,
+ &h2c_data[4], &h2c_data[5],
+ &h2c_data[6], &h2c_data[7]);
+
+- if (h2c_len <= 0)
+- return count;
++ if (h2c_len == 0)
++ return -EINVAL;
+
+ for (i = 0; i < h2c_len; i++)
+ h2c_data_packed[i] = (u8)h2c_data[i];
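
The rtlwifi hunks fix a classic debugfs write bug: returning count on a failed copy_from_user() or parse makes the write appear to succeed, so userspace never learns anything went wrong. A sketch of the corrected error contract for a hypothetical handler (not the rtlwifi one):

#include <linux/fs.h>
#include <linux/minmax.h>
#include <linux/uaccess.h>

static ssize_t demo_write(struct file *filp, const char __user *buf,
			  size_t count, loff_t *ppos)
{
	char tmp[32];
	size_t len = min_t(size_t, count, sizeof(tmp) - 1);

	if (copy_from_user(tmp, buf, len))
		return -EFAULT;   /* was: return count */
	tmp[len] = '\0';

	if (!len)
		return -EINVAL;   /* nothing to parse: also an error */

	return count;             /* success: whole buffer consumed */
}
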
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index efabd5b1bf5b6..645ef1d018953 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1984,6 +1984,10 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ timer_setup(&rtwdev->tx_report.purge_timer,
+ rtw_tx_report_purge_timer, 0);
+ rtwdev->tx_wq = alloc_workqueue("rtw_tx_wq", WQ_UNBOUND | WQ_HIGHPRI, 0);
++ if (!rtwdev->tx_wq) {
++ rtw_warn(rtwdev, "alloc_workqueue rtw_tx_wq failed\n");
++ return -ENOMEM;
++ }
+
+ INIT_DELAYED_WORK(&rtwdev->watch_dog_work, rtw_watch_dog_work);
+ INIT_DELAYED_WORK(&coex->bt_relink_work, rtw_coex_bt_relink_work);
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
+index e3c2fce326516..3d60feb783121 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
+@@ -2330,8 +2330,8 @@ static u8 _dpk_pas_read(struct rtw89_dev *rtwdev, bool is_check)
+ val2_q = abs(sign_extend32(val2_q, 11));
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK, "[DPK] PAS_delta = 0x%x\n",
+- (val1_i * val1_i + val1_q * val1_q) /
+- (val2_i * val2_i + val2_q * val2_q));
++ phy_div(val1_i * val1_i + val1_q * val1_q,
++ val2_i * val2_i + val2_q * val2_q));
+
+ } else {
+ for (i = 0; i < 32; i++) {
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 6959efa4bfa9a..21a9e3b0cbac0 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -4675,7 +4675,7 @@ static void wlcore_op_change_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif = wl12xx_wlvif_to_vif(wlvif);
+
+ rcu_read_lock();
+- if (rcu_access_pointer(vif->chanctx_conf) != ctx) {
++ if (rcu_access_pointer(vif->bss_conf.chanctx_conf) != ctx) {
+ rcu_read_unlock();
+ continue;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6a12a906a11e4..2f965356f3453 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1927,8 +1927,10 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+
+ if (ns->head->ids.csi == NVME_CSI_ZNS) {
+ ret = nvme_update_zone_info(ns, lbaf);
+- if (ret)
+- goto out_unfreeze;
++ if (ret) {
++ blk_mq_unfreeze_queue(ns->disk->queue);
++ goto out;
++ }
+ }
+
+ set_disk_ro(ns->disk, (id->nsattr & NVME_NS_ATTR_RO) ||
+@@ -1939,7 +1941,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ if (blk_queue_is_zoned(ns->queue)) {
+ ret = nvme_revalidate_zones(ns);
+ if (ret && !nvme_first_scan(ns->disk))
+- return ret;
++ goto out;
+ }
+
+ if (nvme_ns_head_multipath(ns->head)) {
+@@ -1954,9 +1956,9 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ disk_update_readahead(ns->head->disk);
+ blk_mq_unfreeze_queue(ns->head->disk->queue);
+ }
+- return 0;
+
+-out_unfreeze:
++ ret = 0;
++out:
+ /*
+ * If probing fails due an unsupported feature, hide the block device,
+ * but still allow other access.
+@@ -1966,7 +1968,6 @@ out_unfreeze:
+ set_bit(NVME_NS_READY, &ns->flags);
+ ret = 0;
+ }
+- blk_mq_unfreeze_queue(ns->disk->queue);
+ return ret;
+ }
+
+@@ -2123,6 +2124,7 @@ static int nvme_report_zones(struct gendisk *disk, sector_t sector,
+ static const struct block_device_operations nvme_bdev_ops = {
+ .owner = THIS_MODULE,
+ .ioctl = nvme_ioctl,
++ .compat_ioctl = blkdev_compat_ptr_ioctl,
+ .open = nvme_open,
+ .release = nvme_release,
+ .getgeo = nvme_getgeo,
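
The nvme_update_ns_info() hunks are a pure error-path consolidation: every failure now funnels through one out label carrying the "unsupported feature: hide the disk but report success" policy, while the queue unfreeze moves to the exact sites where the queue was frozen. A compilable toy of the resulting control flow:

#include <errno.h>

static int update_info(int unsupported)
{
	int ret = 0;

	/* ... work done while the queue is frozen; each failure site
	 * unfreezes for itself before jumping here ... */
	if (unsupported) {
		ret = -ENODEV;
		goto out;
	}
	/* ... remaining setup ... */
out:
	if (ret == -ENODEV) {
		/* hide the block device, but still allow other access */
		ret = 0;
	}
	return ret;
}

int main(void) { return update_info(1); }
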
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index d3e2440d8abb0..432ea9793a849 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -408,6 +408,7 @@ const struct block_device_operations nvme_ns_head_ops = {
+ .open = nvme_ns_head_open,
+ .release = nvme_ns_head_release,
+ .ioctl = nvme_ns_head_ioctl,
++ .compat_ioctl = blkdev_compat_ptr_ioctl,
+ .getgeo = nvme_getgeo,
+ .report_zones = nvme_ns_head_report_zones,
+ .pr_ops = &nvme_pr_ops,
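
Both nvme hunks wire up .compat_ioctl so 32-bit userspace on a 64-bit kernel stops getting -ENOTTY. blkdev_compat_ptr_ioctl() is the stock block-layer helper: it converts the compat pointer argument and forwards to the regular .ioctl handler. A sketch for a hypothetical driver (demo_ioctl is a stand-in):

#include <linux/blkdev.h>
#include <linux/module.h>

static int demo_ioctl(struct block_device *bdev, fmode_t mode,
		      unsigned int cmd, unsigned long arg)
{
	return -ENOTTY;   /* stand-in .ioctl handler */
}

static const struct block_device_operations demo_bdev_ops = {
	.owner        = THIS_MODULE,
	.ioctl        = demo_ioctl,
	.compat_ioctl = blkdev_compat_ptr_ioctl, /* 32-bit-on-64-bit path */
};
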
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index 37c7f4c89f92e..6f0eaf6a15282 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -98,7 +98,7 @@ TRACE_EVENT(nvme_complete_rq,
+ TP_fast_assign(
+ __entry->ctrl_id = nvme_req(req)->ctrl->instance;
+ __entry->qid = nvme_req_qid(req);
+- __entry->cid = req->tag;
++ __entry->cid = nvme_req(req)->cmd->common.command_id;
+ __entry->result = le64_to_cpu(nvme_req(req)->result.u64);
+ __entry->retries = nvme_req(req)->retries;
+ __entry->flags = nvme_req(req)->flags;
+diff --git a/drivers/of/device.c b/drivers/of/device.c
+index 874f031442dc7..75b6cbffa7558 100644
+--- a/drivers/of/device.c
++++ b/drivers/of/device.c
+@@ -81,8 +81,11 @@ of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+ * restricted-dma-pool region is allowed.
+ */
+ if (of_device_is_compatible(node, "restricted-dma-pool") &&
+- of_device_is_available(node))
++ of_device_is_available(node)) {
++ of_node_put(node);
+ break;
++ }
++ of_node_put(node);
+ }
+
+ /*
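
The of/device.c fix is the standard OF refcount rule: of_parse_phandle() returns a node with its refcount raised, so both the early-break path and the keep-looping path must drop it. A sketch of the pattern, with the property and compatible strings taken from the hunk but the helper itself illustrative:

#include <linux/of.h>

static bool has_restricted_pool(struct device_node *np, int count)
{
	struct device_node *node;
	int i;

	for (i = 0; i < count; i++) {
		node = of_parse_phandle(np, "memory-region", i);
		if (!node)
			continue;
		if (of_device_is_compatible(node, "restricted-dma-pool") &&
		    of_device_is_available(node)) {
			of_node_put(node);   /* drop ref before leaving early */
			return true;
		}
		of_node_put(node);           /* drop ref every iteration */
	}
	return false;
}

The same leak class shows up again below in pcie-mediatek-gen3.c, pcie-microchip-host.c and riscv_pmu_sbi.c, each fixed the same way.
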
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index a8f5b65321657..520ed965bb7a4 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -246,7 +246,7 @@ static int populate_node(const void *blob,
+ }
+
+ *pnp = np;
+- return true;
++ return 0;
+ }
+
+ static void reverse_nodes(struct device_node *parent)
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index 8d374cc552be5..91b04b04eec45 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -126,6 +126,7 @@ int ima_get_kexec_buffer(void **addr, size_t *size)
+ {
+ int ret, len;
+ unsigned long tmp_addr;
++ unsigned long start_pfn, end_pfn;
+ size_t tmp_size;
+ const void *prop;
+
+@@ -140,6 +141,22 @@ int ima_get_kexec_buffer(void **addr, size_t *size)
+ if (ret)
+ return ret;
+
++	/* Do a basic sanity check on the returned size of the ima-kexec buffer */
++ if (!tmp_size)
++ return -ENOENT;
++
++ /*
++ * Calculate the PFNs for the buffer and ensure
++	 * they are within addressable memory.
++ */
++ start_pfn = PHYS_PFN(tmp_addr);
++ end_pfn = PHYS_PFN(tmp_addr + tmp_size - 1);
++ if (!page_is_ram(start_pfn) || !page_is_ram(end_pfn)) {
++ pr_warn("IMA buffer at 0x%lx, size = 0x%zx beyond memory\n",
++ tmp_addr, tmp_size);
++ return -EINVAL;
++ }
++
+ *addr = __va(tmp_addr);
+ *size = tmp_size;
+
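
The new checks in ima_get_kexec_buffer() reject a zero-sized buffer and verify that the first and last byte of the handed-over region fall in RAM before the address is trusted to __va(). A compilable toy of the bounds test; PAGE_SHIFT and the page_is_ram() stub are fakes for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PHYS_PFN(x) ((x) >> PAGE_SHIFT)

/* fake: pretend only the first 4 GiB are RAM */
static bool page_is_ram(uint64_t pfn) { return pfn < (1ULL << 20); }

static bool ima_buf_sane(uint64_t addr, uint64_t size)
{
	if (!size)
		return false;
	/* first and last byte must both be backed by RAM */
	return page_is_ram(PHYS_PFN(addr)) &&
	       page_is_ram(PHYS_PFN(addr + size - 1));
}

int main(void)
{
	/* 0: the tail of this region falls past the fake RAM limit */
	printf("%d\n", ima_buf_sane(0xfffff000ULL, 0x2000));
	return 0;
}
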
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 84063eaebb91d..ff0364733dcbf 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -2528,8 +2528,8 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
+ }
+
+ virt_dev = dev_pm_domain_attach_by_name(dev, *name);
+- if (IS_ERR(virt_dev)) {
+- ret = PTR_ERR(virt_dev);
++ if (IS_ERR_OR_NULL(virt_dev)) {
++ ret = PTR_ERR(virt_dev) ? : -ENODEV;
+ dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
+ goto err;
+ }
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 30394929d700e..eb89c9a759859 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -1443,12 +1443,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node);
+ * It provides the power used by @dev at @kHz if it is the frequency of an
+ * existing OPP, or at the frequency of the first OPP above @kHz otherwise
+ * (see dev_pm_opp_find_freq_ceil()). This function updates @kHz to the ceiled
+- * frequency and @mW to the associated power.
++ * frequency and @uW to the associated power.
+ *
+ * Returns 0 on success or a proper -EINVAL value in case of error.
+ */
+ static int __maybe_unused
+-_get_dt_power(struct device *dev, unsigned long *mW, unsigned long *kHz)
++_get_dt_power(struct device *dev, unsigned long *uW, unsigned long *kHz)
+ {
+ struct dev_pm_opp *opp;
+ unsigned long opp_freq, opp_power;
+@@ -1465,7 +1465,7 @@ _get_dt_power(struct device *dev, unsigned long *mW, unsigned long *kHz)
+ return -EINVAL;
+
+ *kHz = opp_freq / 1000;
+- *mW = opp_power / 1000;
++ *uW = opp_power;
+
+ return 0;
+ }
+@@ -1475,14 +1475,14 @@ _get_dt_power(struct device *dev, unsigned long *mW, unsigned long *kHz)
+ * This computes the power estimated by @dev at @kHz if it is the frequency
+ * of an existing OPP, or at the frequency of the first OPP above @kHz otherwise
+ * (see dev_pm_opp_find_freq_ceil()). This function updates @kHz to the ceiled
+- * frequency and @mW to the associated power. The power is estimated as
++ * frequency and @uW to the associated power. The power is estimated as
+ * P = C * V^2 * f with C being the device's capacitance and V and f
+ * respectively the voltage and frequency of the OPP.
+ *
+ * Returns -EINVAL if the power calculation failed because of missing
+ * parameters, 0 otherwise.
+ */
+-static int __maybe_unused _get_power(struct device *dev, unsigned long *mW,
++static int __maybe_unused _get_power(struct device *dev, unsigned long *uW,
+ unsigned long *kHz)
+ {
+ struct dev_pm_opp *opp;
+@@ -1512,9 +1512,10 @@ static int __maybe_unused _get_power(struct device *dev, unsigned long *mW,
+ return -EINVAL;
+
+ tmp = (u64)cap * mV * mV * (Hz / 1000000);
+- do_div(tmp, 1000000000);
++ /* Provide power in micro-Watts */
++ do_div(tmp, 1000000);
+
+- *mW = (unsigned long)tmp;
++ *uW = (unsigned long)tmp;
+ *kHz = Hz / 1000;
+
+ return 0;
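
The opp/of.c hunks retarget the simple energy model from milli-watts to micro-watts. With the DT dynamic-power-coefficient C in uW/MHz/V^2, the product C * mV * mV * MHz overshoots micro-watts by exactly 10^6 (since mV^2 = 10^6 * V^2), hence the divisor change from 10^9 to 10^6. A worked example with made-up SoC numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t cap = 500;            /* uW/MHz/V^2, made-up coefficient */
	uint64_t mV = 900, MHz = 1200; /* made-up OPP */

	uint64_t uW = cap * mV * mV * MHz / 1000000;

	printf("%llu uW (= %llu mW)\n", (unsigned long long)uW,
	       (unsigned long long)(uW / 1000)); /* 486000 uW = 486 mW */
	return 0;
}
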
+diff --git a/drivers/parisc/lba_pci.c b/drivers/parisc/lba_pci.c
+index 732b516c7bf84..afc6e66ddc31c 100644
+--- a/drivers/parisc/lba_pci.c
++++ b/drivers/parisc/lba_pci.c
+@@ -1476,9 +1476,13 @@ lba_driver_probe(struct parisc_device *dev)
+ u32 func_class;
+ void *tmp_obj;
+ char *version;
+- void __iomem *addr = ioremap(dev->hpa.start, 4096);
++ void __iomem *addr;
+ int max;
+
++ addr = ioremap(dev->hpa.start, 4096);
++ if (addr == NULL)
++ return -ENOMEM;
++
+ /* Read HW Rev First */
+ func_class = READ_REG32(addr + LBA_FCLASS);
+
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 0eda8236c125a..13c2e73f0eaf8 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -780,8 +780,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+ ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
+ epc->mem->window.page_size);
+ if (!ep->msi_mem) {
++ ret = -ENOMEM;
+ dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
+- return -ENOMEM;
++ goto err_exit_epc_mem;
+ }
+
+ if (ep->ops->get_features) {
+@@ -790,6 +791,19 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+ return 0;
+ }
+
+- return dw_pcie_ep_init_complete(ep);
++ ret = dw_pcie_ep_init_complete(ep);
++ if (ret)
++ goto err_free_epc_mem;
++
++ return 0;
++
++err_free_epc_mem:
++ pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
++ epc->mem->window.page_size);
++
++err_exit_epc_mem:
++ pci_epc_mem_exit(epc);
++
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
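
dw_pcie_ep_init() previously leaked endpoint address-space allocations on late failures; the hunk adds the canonical goto unwind, where each label releases exactly what had been acquired, in reverse order. The skeleton as a compilable toy (all names are stand-ins):

#include <errno.h>

static int  mem_init(void)  { return 0; }
static void mem_exit(void)  { }
static int  msi_alloc(void) { return 0; }
static void msi_free(void)  { }
static int  complete(void)  { return -EIO; }  /* force the unwind */

static int ep_init(void)
{
	int ret = mem_init();
	if (ret)
		return ret;

	ret = msi_alloc();
	if (ret)
		goto err_exit_mem;

	ret = complete();
	if (ret)
		goto err_free_msi;
	return 0;

err_free_msi:
	msi_free();
err_exit_mem:
	mem_exit();
	return ret;
}

int main(void) { return ep_init() ? 1 : 0; }
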
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 9979302532b72..d0d768f22ac30 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -421,8 +421,14 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ bridge->sysdata = pp;
+
+ ret = pci_host_probe(bridge);
+- if (!ret)
+- return 0;
++ if (ret)
++ goto err_stop_link;
++
++ return 0;
++
++err_stop_link:
++ if (pci->ops && pci->ops->stop_link)
++ pci->ops->stop_link(pci);
+
+ err_free_msi:
+ if (pp->has_msi_ctrl)
+@@ -433,8 +439,14 @@ EXPORT_SYMBOL_GPL(dw_pcie_host_init);
+
+ void dw_pcie_host_deinit(struct pcie_port *pp)
+ {
++ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
++
+ pci_stop_root_bus(pp->bridge->bus);
+ pci_remove_root_bus(pp->bridge->bus);
++
++ if (pci->ops && pci->ops->stop_link)
++ pci->ops->stop_link(pci);
++
+ if (pp->has_msi_ctrl)
+ dw_pcie_free_msi(pp);
+ }
+@@ -531,7 +543,6 @@ static struct pci_ops dw_pcie_ops = {
+
+ void dw_pcie_setup_rc(struct pcie_port *pp)
+ {
+- int i;
+ u32 val, ctrl, num_ctrls;
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+@@ -582,19 +593,22 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
+ PCI_COMMAND_MASTER | PCI_COMMAND_SERR;
+ dw_pcie_writel_dbi(pci, PCI_COMMAND, val);
+
+- /* Ensure all outbound windows are disabled so there are multiple matches */
+- for (i = 0; i < pci->num_ob_windows; i++)
+- dw_pcie_disable_atu(pci, i, DW_PCIE_REGION_OUTBOUND);
+-
+ /*
+ * If the platform provides its own child bus config accesses, it means
+ * the platform uses its own address translation component rather than
+ * ATU, so we should not program the ATU here.
+ */
+ if (pp->bridge->child_ops == &dw_child_pcie_ops) {
+- int atu_idx = 0;
++ int i, atu_idx = 0;
+ struct resource_entry *entry;
+
++ /*
++ * Disable all outbound windows to make sure a transaction
++ * can't match multiple windows.
++ */
++ for (i = 0; i < pci->num_ob_windows; i++)
++ dw_pcie_disable_atu(pci, i, DW_PCIE_REGION_OUTBOUND);
++
+ /* Get last memory resource entry */
+ resource_list_for_each_entry(entry, &pp->bridge->windows) {
+ if (resource_type(entry->res) != IORESOURCE_MEM)
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index d92c8a25094fa..5848cc520b52e 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -287,8 +287,8 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
+ dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
+ upper_32_bits(pci_addr));
+ val = type | PCIE_ATU_FUNC_NUM(func_no);
+- val = upper_32_bits(size - 1) ?
+- val | PCIE_ATU_INCREASE_REGION_SIZE : val;
++ if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr))
++ val |= PCIE_ATU_INCREASE_REGION_SIZE;
+ if (pci->version == 0x490A)
+ val = dw_pcie_enable_ecrc(val);
+ dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val);
+@@ -315,6 +315,7 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
+ u64 pci_addr, u64 size)
+ {
+ u32 retries, val;
++ u64 limit_addr;
+
+ if (pci->ops && pci->ops->cpu_addr_fixup)
+ cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
+@@ -325,6 +326,8 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
+ return;
+ }
+
++ limit_addr = cpu_addr + size - 1;
++
+ dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT,
+ PCIE_ATU_REGION_OUTBOUND | index);
+ dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_BASE,
+@@ -332,17 +335,18 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
+ dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_BASE,
+ upper_32_bits(cpu_addr));
+ dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
+- lower_32_bits(cpu_addr + size - 1));
++ lower_32_bits(limit_addr));
+ if (pci->version >= 0x460A)
+ dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_LIMIT,
+- upper_32_bits(cpu_addr + size - 1));
++ upper_32_bits(limit_addr));
+ dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
+ lower_32_bits(pci_addr));
+ dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
+ upper_32_bits(pci_addr));
+ val = type | PCIE_ATU_FUNC_NUM(func_no);
+- val = ((upper_32_bits(size - 1)) && (pci->version >= 0x460A)) ?
+- val | PCIE_ATU_INCREASE_REGION_SIZE : val;
++ if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr) &&
++ pci->version >= 0x460A)
++ val |= PCIE_ATU_INCREASE_REGION_SIZE;
+ if (pci->version == 0x490A)
+ val = dw_pcie_enable_ecrc(val);
+ dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, val);
+@@ -491,7 +495,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+ void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
+ enum dw_pcie_region_type type)
+ {
+- int region;
++ u32 region;
+
+ switch (type) {
+ case DW_PCIE_REGION_INBOUND:
+@@ -504,8 +508,18 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
+ return;
+ }
+
+- dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
+- dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE);
++ if (pci->iatu_unroll_enabled) {
++ if (region == PCIE_ATU_REGION_INBOUND) {
++ dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
++ ~(u32)PCIE_ATU_ENABLE);
++ } else {
++ dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
++ ~(u32)PCIE_ATU_ENABLE);
++ }
++ } else {
++ dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
++ dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE);
++ }
+ }
+
+ int dw_pcie_wait_for_link(struct dw_pcie *pci)
+@@ -726,6 +740,13 @@ void dw_pcie_setup(struct dw_pcie *pci)
+ val |= PORT_LINK_DLL_LINK_EN;
+ dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
+
++ if (of_property_read_bool(np, "snps,enable-cdm-check")) {
++ val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
++ val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
++ PCIE_PL_CHK_REG_CHK_REG_START;
++ dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
++ }
++
+ of_property_read_u32(np, "num-lanes", &pci->num_lanes);
+ if (!pci->num_lanes) {
+ dev_dbg(pci->dev, "Using h/w default number of lanes\n");
+@@ -772,11 +793,4 @@ void dw_pcie_setup(struct dw_pcie *pci)
+ break;
+ }
+ dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+-
+- if (of_property_read_bool(np, "snps,enable-cdm-check")) {
+- val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+- val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
+- PCIE_PL_CHK_REG_CHK_REG_START;
+- dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+- }
+ }
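
The interesting change in pcie-designware.c is the INCREASE_REGION_SIZE condition. The ATU limit register only holds the low 32 bits of the window end, so the bit must be set whenever base and limit differ in their upper 32 bits, i.e. whenever the window crosses a 4 GiB boundary; testing upper_32_bits(size - 1), as before, misses a sub-4 GiB window that straddles the boundary. A worked example:

#include <stdint.h>
#include <stdio.h>

#define upper_32_bits(n) ((uint32_t)((n) >> 32))

int main(void)
{
	uint64_t cpu_addr = 0xF0000000ULL;     /* made-up window base */
	uint64_t size = 0x20000000ULL;         /* 512 MiB, well under 4 GiB */
	uint64_t limit = cpu_addr + size - 1;  /* 0x10FFFFFFF: crosses 4 GiB */

	int old_cond = upper_32_bits(size - 1) != 0;                   /* 0 */
	int new_cond = upper_32_bits(limit) > upper_32_bits(cpu_addr); /* 1 */

	printf("old=%d new=%d\n", old_cond, new_cond);
	return 0;
}

Here the old test leaves the bit clear and the window silently wraps inside the first 4 GiB; the new test sets it.
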
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 2ea13750b4924..7c0877068347f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -337,8 +337,6 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ reset_control_assert(res->ext_reset);
+ reset_control_assert(res->phy_reset);
+
+- writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
+-
+ ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies);
+ if (ret < 0) {
+ dev_err(dev, "cannot enable regulators\n");
+@@ -381,15 +379,15 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ goto err_deassert_axi;
+ }
+
+- ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
+- if (ret)
+- goto err_clks;
+-
+ /* enable PCIe clocks and resets */
+ val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+ val &= ~BIT(0);
+ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
++ ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
++ if (ret)
++ goto err_clks;
++
+ if (of_device_is_compatible(node, "qcom,pcie-ipq8064") ||
+ of_device_is_compatible(node, "qcom,pcie-ipq8064-v2")) {
+ writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) |
+@@ -1038,9 +1036,7 @@ static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
+ struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
+ struct dw_pcie *pci = pcie->pci;
+ struct device *dev = pci->dev;
+- u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+ int i, ret;
+- u32 val;
+
+ for (i = 0; i < ARRAY_SIZE(res->rst); i++) {
+ ret = reset_control_assert(res->rst[i]);
+@@ -1097,6 +1093,33 @@ static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
+ goto err_clk_aux;
+ }
+
++ return 0;
++
++err_clk_aux:
++ clk_disable_unprepare(res->ahb_clk);
++err_clk_ahb:
++ clk_disable_unprepare(res->axi_s_clk);
++err_clk_axi_s:
++ clk_disable_unprepare(res->axi_m_clk);
++err_clk_axi_m:
++ clk_disable_unprepare(res->iface);
++err_clk_iface:
++ /*
++ * Not checking for failure, will anyway return
++ * the original failure in 'ret'.
++ */
++ for (i = 0; i < ARRAY_SIZE(res->rst); i++)
++ reset_control_assert(res->rst[i]);
++
++ return ret;
++}
++
++static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
++{
++ struct dw_pcie *pci = pcie->pci;
++ u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++ u32 val;
++
+ writel(SLV_ADDR_SPACE_SZ,
+ pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE);
+
+@@ -1124,24 +1147,6 @@ static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
+ PCI_EXP_DEVCTL2);
+
+ return 0;
+-
+-err_clk_aux:
+- clk_disable_unprepare(res->ahb_clk);
+-err_clk_ahb:
+- clk_disable_unprepare(res->axi_s_clk);
+-err_clk_axi_s:
+- clk_disable_unprepare(res->axi_m_clk);
+-err_clk_axi_m:
+- clk_disable_unprepare(res->iface);
+-err_clk_iface:
+- /*
+- * Not checking for failure, will anyway return
+- * the original failure in 'ret'.
+- */
+- for (i = 0; i < ARRAY_SIZE(res->rst); i++)
+- reset_control_assert(res->rst[i]);
+-
+- return ret;
+ }
+
+ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
+@@ -1467,6 +1472,7 @@ static const struct qcom_pcie_ops ops_2_4_0 = {
+ static const struct qcom_pcie_ops ops_2_3_3 = {
+ .get_resources = qcom_pcie_get_resources_2_3_3,
+ .init = qcom_pcie_init_2_3_3,
++ .post_init = qcom_pcie_post_init_2_3_3,
+ .deinit = qcom_pcie_deinit_2_3_3,
+ .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
+ };
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index cc26784901627..67e8372a32433 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -350,15 +350,14 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ struct tegra194_pcie *pcie = arg;
+ struct dw_pcie *pci = &pcie->pci;
+ struct pcie_port *pp = &pci->pp;
+- u32 val, tmp;
++ u32 val, status_l0, status_l1;
+ u16 val_w;
+
+- val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+- if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+- val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+- if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
+- appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
+-
++ status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
++ if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
++ status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
++ appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
++ if (status_l1 & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
+ /* SBR & Surprise Link Down WAR */
+ val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
+ val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
+@@ -374,15 +373,15 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ }
+ }
+
+- if (val & APPL_INTR_STATUS_L0_INT_INT) {
+- val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
+- if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
++ if (status_l0 & APPL_INTR_STATUS_L0_INT_INT) {
++ status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
++ if (status_l1 & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
+ appl_writel(pcie,
+ APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
+ APPL_INTR_STATUS_L1_8_0);
+ apply_bad_link_workaround(pp);
+ }
+- if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
++ if (status_l1 & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
+ appl_writel(pcie,
+ APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
+ APPL_INTR_STATUS_L1_8_0);
+@@ -394,25 +393,24 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
+ }
+ }
+
+- val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+- if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
+- val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
+- tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+- if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
++ if (status_l0 & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
++ status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
++ val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
++ if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
+ dev_info(pci->dev, "CDM check complete\n");
+- tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
++ val |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
+ }
+- if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
++ if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
+ dev_err(pci->dev, "CDM comparison mismatch\n");
+- tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
++ val |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
+ }
+- if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
++ if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
+ dev_err(pci->dev, "CDM Logic error\n");
+- tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
++ val |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
+ }
+- dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
+- tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
+- dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
++ dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
++ val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
++ dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", val);
+ }
+
+ return IRQ_HANDLED;
+@@ -978,7 +976,7 @@ retry_link:
+ offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
+ val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
+ val &= ~PCI_DLF_EXCHANGE_ENABLE;
+- dw_pcie_writel_dbi(pci, offset, val);
++ dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val);
+
+ tegra194_pcie_host_init(pp);
+ dw_pcie_setup_rc(pp);
+@@ -1949,6 +1947,7 @@ static int tegra_pcie_config_ep(struct tegra194_pcie *pcie,
+ if (ret) {
+ dev_err(dev, "Failed to initialize DWC Endpoint subsystem: %d\n",
+ ret);
++ pm_runtime_disable(dev);
+ return ret;
+ }
+
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index 5d9fd36b02d18..a02c466a597cd 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -600,7 +600,8 @@ static int mtk_pcie_init_irq_domains(struct mtk_gen3_pcie *pcie)
+ &intx_domain_ops, pcie);
+ if (!pcie->intx_domain) {
+ dev_err(dev, "failed to create INTx IRQ domain\n");
+- return -ENODEV;
++ ret = -ENODEV;
++ goto out_put_node;
+ }
+
+ /* Setup MSI */
+@@ -623,13 +624,15 @@ static int mtk_pcie_init_irq_domains(struct mtk_gen3_pcie *pcie)
+ goto err_msi_domain;
+ }
+
++ of_node_put(intc_node);
+ return 0;
+
+ err_msi_domain:
+ irq_domain_remove(pcie->msi_bottom_domain);
+ err_msi_bottom_domain:
+ irq_domain_remove(pcie->intx_domain);
+-
++out_put_node:
++ of_node_put(intc_node);
+ return ret;
+ }
+
+diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c
+index dd5dba4190476..7263d175b5adb 100644
+--- a/drivers/pci/controller/pcie-microchip-host.c
++++ b/drivers/pci/controller/pcie-microchip-host.c
+@@ -904,6 +904,7 @@ static int mc_pcie_init_irq_domains(struct mc_pcie *port)
+ &event_domain_ops, port);
+ if (!port->event_domain) {
+ dev_err(dev, "failed to get event domain\n");
++ of_node_put(pcie_intc_node);
+ return -ENOMEM;
+ }
+
+@@ -913,6 +914,7 @@ static int mc_pcie_init_irq_domains(struct mc_pcie *port)
+ &intx_domain_ops, port);
+ if (!port->intx_domain) {
+ dev_err(dev, "failed to get an INTx IRQ domain\n");
++ of_node_put(pcie_intc_node);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 5b833f00e9800..a5ed779b0a512 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -627,7 +627,6 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
+
+ cancel_delayed_work(&epf_test->cmd_handler);
+ pci_epf_test_clean_dma_chan(epf_test);
+- pci_epc_stop(epc);
+ for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+ epf_bar = &epf->bar[bar];
+
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 7952e5efd6cf3..a1e38ca93cd96 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -538,7 +538,7 @@ static const char *aer_agent_string[] = {
+ u64 *stats = pdev->aer_stats->stats_array; \
+ size_t len = 0; \
+ \
+- for (i = 0; i < ARRAY_SIZE(strings_array); i++) { \
++ for (i = 0; i < ARRAY_SIZE(pdev->aer_stats->stats_array); i++) {\
+ if (strings_array[i]) \
+ len += sysfs_emit_at(buf, len, "%s %llu\n", \
+ strings_array[i], \
+@@ -1347,6 +1347,11 @@ static int aer_probe(struct pcie_device *dev)
+ struct device *device = &dev->device;
+ struct pci_dev *port = dev->port;
+
++ BUILD_BUG_ON(ARRAY_SIZE(aer_correctable_error_string) <
++ AER_MAX_TYPEOF_COR_ERRS);
++ BUILD_BUG_ON(ARRAY_SIZE(aer_uncorrectable_error_string) <
++ AER_MAX_TYPEOF_UNCOR_ERRS);
++
+ /* Limit to Root Ports or Root Complex Event Collectors */
+ if ((pci_pcie_type(port) != PCI_EXP_TYPE_RC_EC) &&
+ (pci_pcie_type(port) != PCI_EXP_TYPE_ROOT_PORT))
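
The aer.c pair is defensive: the sysfs dump now iterates over the stats array it actually prints from, and the BUILD_BUG_ON()s in aer_probe() turn a too-short name table into a build failure rather than a runtime out-of-bounds read. A sketch of the guard, with the table size a stand-in for the AER constants:

#include <linux/build_bug.h>
#include <linux/kernel.h>
#include <linux/types.h>

#define DEMO_MAX_ERRS 16   /* stand-in for AER_MAX_TYPEOF_COR_ERRS */

static const char *demo_strings[DEMO_MAX_ERRS];
static u64 demo_stats[DEMO_MAX_ERRS];

static void demo_check_tables(void)
{
	/* build fails if the name table can't cover every counter */
	BUILD_BUG_ON(ARRAY_SIZE(demo_strings) < ARRAY_SIZE(demo_stats));
}
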
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 604feeb84ee40..1ac7fec47d6fb 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -222,15 +222,8 @@ static int get_port_device_capability(struct pci_dev *dev)
+
+ #ifdef CONFIG_PCIEAER
+ if (dev->aer_cap && pci_aer_available() &&
+- (pcie_ports_native || host->native_aer)) {
++ (pcie_ports_native || host->native_aer))
+ services |= PCIE_PORT_SERVICE_AER;
+-
+- /*
+- * Disable AER on this port in case it's been enabled by the
+- * BIOS (the AER service driver will enable it when necessary).
+- */
+- pci_disable_pcie_error_reporting(dev);
+- }
+ #endif
+
+ /* Root Ports and Root Complex Event Collectors may generate PMEs */
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index db670b2658971..b65a7d9640e15 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -39,6 +39,24 @@
+ #include <asm/mmu.h>
+ #include <asm/sysreg.h>
+
++/*
++ * Cache if the event is allowed to trace Context information.
++ * This allows us to perform the check, i.e, perfmon_capable(),
++ * in the context of the event owner, once, during the event_init().
++ */
++#define SPE_PMU_HW_FLAGS_CX BIT(0)
++
++static void set_spe_event_has_cx(struct perf_event *event)
++{
++ if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++ event->hw.flags |= SPE_PMU_HW_FLAGS_CX;
++}
++
++static bool get_spe_event_has_cx(struct perf_event *event)
++{
++ return !!(event->hw.flags & SPE_PMU_HW_FLAGS_CX);
++}
++
+ #define ARM_SPE_BUF_PAD_BYTE 0
+
+ struct arm_spe_pmu_buf {
+@@ -272,7 +290,7 @@ static u64 arm_spe_event_to_pmscr(struct perf_event *event)
+ if (!attr->exclude_kernel)
+ reg |= BIT(SYS_PMSCR_EL1_E1SPE_SHIFT);
+
+- if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++ if (get_spe_event_has_cx(event))
+ reg |= BIT(SYS_PMSCR_EL1_CX_SHIFT);
+
+ return reg;
+@@ -709,10 +727,10 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
+ !(spe_pmu->features & SPE_PMU_FEAT_FILT_LAT))
+ return -EOPNOTSUPP;
+
++ set_spe_event_has_cx(event);
+ reg = arm_spe_event_to_pmscr(event);
+ if (!perfmon_capable() &&
+ (reg & (BIT(SYS_PMSCR_EL1_PA_SHIFT) |
+- BIT(SYS_PMSCR_EL1_CX_SHIFT) |
+ BIT(SYS_PMSCR_EL1_PCT_SHIFT))))
+ return -EACCES;
+
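
The arm_spe_pmu change is a capture-once pattern: perfmon_capable() must be judged against the task that opens the event, so it is evaluated during event_init() and the verdict cached in event->hw.flags; the later PMSCR programming only reads the cached flag, which is valid in any context. A sketch of the two halves, with flag bit and helper names mirroring the hunk but illustrative rather than the driver's exact code:

#include <linux/bits.h>
#include <linux/capability.h>
#include <linux/perf_event.h>

#define DEMO_HW_FLAGS_CX BIT(0)   /* mirrors SPE_PMU_HW_FLAGS_CX */

/* called from event_init(), i.e. in the opener's context */
static void demo_cache_cx(struct perf_event *event)
{
	if (perfmon_capable())
		event->hw.flags |= DEMO_HW_FLAGS_CX;
}

/* safe to call later, from IRQ or scheduling context */
static bool demo_has_cx(struct perf_event *event)
{
	return !!(event->hw.flags & DEMO_HW_FLAGS_CX);
}

This is also why the CX bit drops out of the -EACCES check above: the flag can no longer be set for an unprivileged opener in the first place.
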
+diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
+index b2b8d2074ed0d..130b9f1a40e08 100644
+--- a/drivers/perf/riscv_pmu.c
++++ b/drivers/perf/riscv_pmu.c
+@@ -170,7 +170,6 @@ int riscv_pmu_event_set_period(struct perf_event *event)
+ left = (max_period >> 1);
+
+ local64_set(&hwc->prev_count, (u64)-left);
+- perf_event_update_userpage(event);
+
+ return overflow;
+ }
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index dca3537a8dcce..231d86d3949c0 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -274,8 +274,13 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
+ cflags |= SBI_PMU_CFG_FLAG_SET_UINH;
+
+ /* retrieve the available counter index */
++#if defined(CONFIG_32BIT)
++ ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase, cmask,
++ cflags, hwc->event_base, hwc->config, hwc->config >> 32);
++#else
+ ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase, cmask,
+ cflags, hwc->event_base, hwc->config, 0);
++#endif
+ if (ret.error) {
+ pr_debug("Not able to find a counter for event %lx config %llx\n",
+ hwc->event_base, hwc->config);
+@@ -417,8 +422,13 @@ static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)
+ struct hw_perf_event *hwc = &event->hw;
+ unsigned long flag = SBI_PMU_START_FLAG_SET_INIT_VALUE;
+
++#if defined(CONFIG_32BIT)
+ ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, hwc->idx,
+ 1, flag, ival, ival >> 32, 0);
++#else
++ ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, hwc->idx,
++ 1, flag, ival, 0, 0);
++#endif
+ if (ret.error && (ret.error != SBI_ERR_ALREADY_STARTED))
+ pr_err("Starting counter idx %d failed with error %d\n",
+ hwc->idx, sbi_err_map_linux_errno(ret.error));
+@@ -525,8 +535,14 @@ static inline void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
+ hwc = &event->hw;
+ max_period = riscv_pmu_ctr_get_width_mask(event);
+ init_val = local64_read(&hwc->prev_count) & max_period;
++#if defined(CONFIG_32BIT)
++ sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, idx, 1,
++ flag, init_val, init_val >> 32, 0);
++#else
+ sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, idx, 1,
+ flag, init_val, 0, 0);
++#endif
++ perf_event_update_userpage(event);
+ }
+ ctr_ovf_mask = ctr_ovf_mask >> 1;
+ idx++;
+@@ -666,12 +682,15 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
+ child = of_get_compatible_child(cpu, "riscv,cpu-intc");
+ if (!child) {
+ pr_err("Failed to find INTC node\n");
++ of_node_put(cpu);
+ return -ENODEV;
+ }
+ domain = irq_find_host(child);
+ of_node_put(child);
+- if (domain)
++ if (domain) {
++ of_node_put(cpu);
+ break;
++ }
+ }
+ if (!domain) {
+ pr_err("Failed to find INTC IRQ root domain\n");
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.h b/drivers/phy/qualcomm/phy-qcom-qmp.h
+index 06b2556ed93a5..b9a91520439cb 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.h
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.h
+@@ -1116,7 +1116,8 @@
+ #define QSERDES_V5_COM_CORE_CLK_EN 0x174
+ #define QSERDES_V5_COM_CMN_CONFIG 0x17c
+ #define QSERDES_V5_COM_CMN_MISC1 0x19c
+-#define QSERDES_V5_COM_CMN_MODE 0x1a4
++#define QSERDES_V5_COM_CMN_MODE 0x1a0
++#define QSERDES_V5_COM_CMN_MODE_CONTD 0x1a4
+ #define QSERDES_V5_COM_VCO_DC_LEVEL_CTRL 0x1a8
+ #define QSERDES_V5_COM_BIN_VCOCAL_CMP_CODE1_MODE0 0x1ac
+ #define QSERDES_V5_COM_BIN_VCOCAL_CMP_CODE2_MODE0 0x1b0
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+index 6711659f727c1..5223d4c9afdfc 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+@@ -978,7 +978,9 @@ static irqreturn_t rockchip_usb2phy_irq(int irq, void *data)
+
+ switch (rport->port_id) {
+ case USB2PHY_PORT_OTG:
+- ret |= rockchip_usb2phy_otg_mux_irq(irq, rport);
++ if (rport->mode != USB_DR_MODE_HOST &&
++ rport->mode != USB_DR_MODE_UNKNOWN)
++ ret |= rockchip_usb2phy_otg_mux_irq(irq, rport);
+ break;
+ case USB2PHY_PORT_HOST:
+ ret |= rockchip_usb2phy_linestate_irq(irq, rport);
+@@ -1162,6 +1164,12 @@ static int rockchip_usb2phy_otg_port_init(struct rockchip_usb2phy *rphy,
+ EXTCON_USB_HOST, &rport->event_nb);
+ if (ret)
+ dev_err(rphy->dev, "register USB HOST notifier failed\n");
++
++ if (!of_property_read_bool(rphy->dev->of_node, "extcon")) {
++ /* do initial sync of usb state */
++ ret = property_enabled(rphy->grf, &rport->port_cfg->utmi_id);
++ extcon_set_state_sync(rphy->edev, EXTCON_USB_HOST, !ret);
++ }
+ }
+
+ out:
+diff --git a/drivers/phy/samsung/phy-exynosautov9-ufs.c b/drivers/phy/samsung/phy-exynosautov9-ufs.c
+index 36398a15c2db7..d043dfdb598a2 100644
+--- a/drivers/phy/samsung/phy-exynosautov9-ufs.c
++++ b/drivers/phy/samsung/phy-exynosautov9-ufs.c
+@@ -31,22 +31,22 @@ static const struct samsung_ufs_phy_cfg exynosautov9_pre_init_cfg[] = {
+ PHY_COMN_REG_CFG(0x023, 0xc0, PWR_MODE_ANY),
+ PHY_COMN_REG_CFG(0x023, 0x00, PWR_MODE_ANY),
+
+- PHY_TRSV_REG_CFG(0x042, 0x5d, PWR_MODE_ANY),
+- PHY_TRSV_REG_CFG(0x043, 0x80, PWR_MODE_ANY),
++ PHY_TRSV_REG_CFG_AUTOV9(0x042, 0x5d, PWR_MODE_ANY),
++ PHY_TRSV_REG_CFG_AUTOV9(0x043, 0x80, PWR_MODE_ANY),
+
+ END_UFS_PHY_CFG,
+ };
+
+ /* Calibration for HS mode series A/B */
+ static const struct samsung_ufs_phy_cfg exynosautov9_pre_pwr_hs_cfg[] = {
+- PHY_TRSV_REG_CFG(0x032, 0xbc, PWR_MODE_HS_ANY),
+- PHY_TRSV_REG_CFG(0x03c, 0x7f, PWR_MODE_HS_ANY),
+- PHY_TRSV_REG_CFG(0x048, 0xc0, PWR_MODE_HS_ANY),
++ PHY_TRSV_REG_CFG_AUTOV9(0x032, 0xbc, PWR_MODE_HS_ANY),
++ PHY_TRSV_REG_CFG_AUTOV9(0x03c, 0x7f, PWR_MODE_HS_ANY),
++ PHY_TRSV_REG_CFG_AUTOV9(0x048, 0xc0, PWR_MODE_HS_ANY),
+
+- PHY_TRSV_REG_CFG(0x04a, 0x00, PWR_MODE_HS_G3_SER_B),
+- PHY_TRSV_REG_CFG(0x04b, 0x10, PWR_MODE_HS_G1_SER_B |
+- PWR_MODE_HS_G3_SER_B),
+- PHY_TRSV_REG_CFG(0x04d, 0x63, PWR_MODE_HS_G3_SER_B),
++ PHY_TRSV_REG_CFG_AUTOV9(0x04a, 0x00, PWR_MODE_HS_G3_SER_B),
++ PHY_TRSV_REG_CFG_AUTOV9(0x04b, 0x10, PWR_MODE_HS_G1_SER_B |
++ PWR_MODE_HS_G3_SER_B),
++ PHY_TRSV_REG_CFG_AUTOV9(0x04d, 0x63, PWR_MODE_HS_G3_SER_B),
+
+ END_UFS_PHY_CFG,
+ };
+diff --git a/drivers/phy/st/phy-stm32-usbphyc.c b/drivers/phy/st/phy-stm32-usbphyc.c
+index 007a23c78d562..a98c911cc37ae 100644
+--- a/drivers/phy/st/phy-stm32-usbphyc.c
++++ b/drivers/phy/st/phy-stm32-usbphyc.c
+@@ -358,7 +358,9 @@ static int stm32_usbphyc_phy_init(struct phy *phy)
+ return 0;
+
+ pll_disable:
+- return stm32_usbphyc_pll_disable(usbphyc);
++ stm32_usbphyc_pll_disable(usbphyc);
++
++ return ret;
+ }
+
+ static int stm32_usbphyc_phy_exit(struct phy *phy)
+diff --git a/drivers/phy/ti/phy-tusb1210.c b/drivers/phy/ti/phy-tusb1210.c
+index c3ab4b69ea680..669c13d6e402f 100644
+--- a/drivers/phy/ti/phy-tusb1210.c
++++ b/drivers/phy/ti/phy-tusb1210.c
+@@ -105,8 +105,9 @@ static int tusb1210_power_on(struct phy *phy)
+ msleep(TUSB1210_RESET_TIME_MS);
+
+ /* Restore the optional eye diagram optimization value */
+- return tusb1210_ulpi_write(tusb, TUSB1210_VENDOR_SPECIFIC2,
+- tusb->vendor_specific2);
++ tusb1210_ulpi_write(tusb, TUSB1210_VENDOR_SPECIFIC2, tusb->vendor_specific2);
++
++ return 0;
+ }
+
+ static int tusb1210_power_off(struct phy *phy)
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index b3e94cdf7d1af..00381490dd3e3 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -135,16 +135,16 @@ static int cros_ec_sleep_event(struct cros_ec_device *ec_dev, u8 sleep_event)
+ buf.msg.command = EC_CMD_HOST_SLEEP_EVENT;
+
+ ret = cros_ec_cmd_xfer_status(ec_dev, &buf.msg);
+-
+- /* For now, report failure to transition to S0ix with a warning. */
++ /* Report failure to transition to system wide suspend with a warning. */
+ if (ret >= 0 && ec_dev->host_sleep_v1 &&
+- (sleep_event == HOST_SLEEP_EVENT_S0IX_RESUME)) {
++ (sleep_event == HOST_SLEEP_EVENT_S0IX_RESUME ||
++ sleep_event == HOST_SLEEP_EVENT_S3_RESUME)) {
+ ec_dev->last_resume_result =
+ buf.u.resp1.resume_response.sleep_transitions;
+
+ WARN_ONCE(buf.u.resp1.resume_response.sleep_transitions &
+ EC_HOST_RESUME_SLEEP_TIMEOUT,
+- "EC detected sleep transition timeout. Total slp_s0 transitions: %d",
++ "EC detected sleep transition timeout. Total sleep transitions: %d",
+ buf.u.resp1.resume_response.sleep_transitions &
+ EC_HOST_RESUME_SLEEP_TRANSITIONS_MASK);
+ }
+diff --git a/drivers/platform/mellanox/mlxreg-lc.c b/drivers/platform/mellanox/mlxreg-lc.c
+index c897a2f158404..55834ccb4ac7c 100644
+--- a/drivers/platform/mellanox/mlxreg-lc.c
++++ b/drivers/platform/mellanox/mlxreg-lc.c
+@@ -716,8 +716,12 @@ mlxreg_lc_config_init(struct mlxreg_lc *mlxreg_lc, void *regmap,
+ switch (regval) {
+ case MLXREG_LC_SN4800_C16:
+ err = mlxreg_lc_sn4800_c16_config_init(mlxreg_lc, regmap, data);
+- if (err)
++ if (err) {
++ dev_err(dev, "Failed to config client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr,
++ data->hpdev.brdinfo->addr);
+ return err;
++ }
+ break;
+ default:
+ return -ENODEV;
+@@ -730,8 +734,11 @@ mlxreg_lc_config_init(struct mlxreg_lc *mlxreg_lc, void *regmap,
+ mlxreg_lc->mux = platform_device_register_resndata(dev, "i2c-mux-mlxcpld", data->hpdev.nr,
+ NULL, 0, mlxreg_lc->mux_data,
+ sizeof(*mlxreg_lc->mux_data));
+- if (IS_ERR(mlxreg_lc->mux))
++ if (IS_ERR(mlxreg_lc->mux)) {
++ dev_err(dev, "Failed to create mux infra for client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr, data->hpdev.brdinfo->addr);
+ return PTR_ERR(mlxreg_lc->mux);
++ }
+
+ /* Register IO access driver. */
+ if (mlxreg_lc->io_data) {
+@@ -740,6 +747,9 @@ mlxreg_lc_config_init(struct mlxreg_lc *mlxreg_lc, void *regmap,
+ platform_device_register_resndata(dev, "mlxreg-io", data->hpdev.nr, NULL, 0,
+ mlxreg_lc->io_data, sizeof(*mlxreg_lc->io_data));
+ if (IS_ERR(mlxreg_lc->io_regs)) {
++ dev_err(dev, "Failed to create regio for client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr,
++ data->hpdev.brdinfo->addr);
+ err = PTR_ERR(mlxreg_lc->io_regs);
+ goto fail_register_io;
+ }
+@@ -753,6 +763,9 @@ mlxreg_lc_config_init(struct mlxreg_lc *mlxreg_lc, void *regmap,
+ mlxreg_lc->led_data,
+ sizeof(*mlxreg_lc->led_data));
+ if (IS_ERR(mlxreg_lc->led)) {
++ dev_err(dev, "Failed to create LED objects for client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr,
++ data->hpdev.brdinfo->addr);
+ err = PTR_ERR(mlxreg_lc->led);
+ goto fail_register_led;
+ }
+@@ -809,7 +822,8 @@ static int mlxreg_lc_probe(struct platform_device *pdev)
+ if (!data->hpdev.adapter) {
+ dev_err(&pdev->dev, "Failed to get adapter for bus %d\n",
+ data->hpdev.nr);
+- return -EFAULT;
++ err = -EFAULT;
++ goto i2c_get_adapter_fail;
+ }
+
+ /* Create device at the top of line card I2C tree.*/
+@@ -818,32 +832,39 @@ static int mlxreg_lc_probe(struct platform_device *pdev)
+ if (IS_ERR(data->hpdev.client)) {
+ dev_err(&pdev->dev, "Failed to create client %s at bus %d at addr 0x%02x\n",
+ data->hpdev.brdinfo->type, data->hpdev.nr, data->hpdev.brdinfo->addr);
+-
+- i2c_put_adapter(data->hpdev.adapter);
+- data->hpdev.adapter = NULL;
+- return PTR_ERR(data->hpdev.client);
++ err = PTR_ERR(data->hpdev.client);
++ goto i2c_new_device_fail;
+ }
+
+ regmap = devm_regmap_init_i2c(data->hpdev.client,
+ &mlxreg_lc_regmap_conf);
+ if (IS_ERR(regmap)) {
++ dev_err(&pdev->dev, "Failed to create regmap for client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr, data->hpdev.brdinfo->addr);
+ err = PTR_ERR(regmap);
+- goto mlxreg_lc_probe_fail;
++ goto devm_regmap_init_i2c_fail;
+ }
+
+ /* Set default registers. */
+ for (i = 0; i < mlxreg_lc_regmap_conf.num_reg_defaults; i++) {
+ err = regmap_write(regmap, mlxreg_lc_regmap_default[i].reg,
+ mlxreg_lc_regmap_default[i].def);
+- if (err)
+- goto mlxreg_lc_probe_fail;
++ if (err) {
++ dev_err(&pdev->dev, "Failed to set default regmap %d for client %s at bus %d at addr 0x%02x\n",
++ i, data->hpdev.brdinfo->type, data->hpdev.nr,
++ data->hpdev.brdinfo->addr);
++ goto regmap_write_fail;
++ }
+ }
+
+ /* Sync registers with hardware. */
+ regcache_mark_dirty(regmap);
+ err = regcache_sync(regmap);
+- if (err)
+- goto mlxreg_lc_probe_fail;
++ if (err) {
++ dev_err(&pdev->dev, "Failed to sync regmap for client %s at bus %d at addr 0x%02x\n",
++ data->hpdev.brdinfo->type, data->hpdev.nr, data->hpdev.brdinfo->addr);
++ goto regcache_sync_fail;
++ }
+
+ par_pdata = data->hpdev.brdinfo->platform_data;
+ mlxreg_lc->par_regmap = par_pdata->regmap;
+@@ -854,12 +876,27 @@ static int mlxreg_lc_probe(struct platform_device *pdev)
+ /* Configure line card. */
+ err = mlxreg_lc_config_init(mlxreg_lc, regmap, data);
+ if (err)
+- goto mlxreg_lc_probe_fail;
++ goto mlxreg_lc_config_init_fail;
+
+ return err;
+
+-mlxreg_lc_probe_fail:
++mlxreg_lc_config_init_fail:
++regcache_sync_fail:
++regmap_write_fail:
++devm_regmap_init_i2c_fail:
++ if (data->hpdev.client) {
++ i2c_unregister_device(data->hpdev.client);
++ data->hpdev.client = NULL;
++ }
++i2c_new_device_fail:
+ i2c_put_adapter(data->hpdev.adapter);
++ data->hpdev.adapter = NULL;
++i2c_get_adapter_fail:
++ /* Clear event notification callback and handle. */
++ if (data->notifier) {
++ data->notifier->user_handler = NULL;
++ data->notifier->handle = NULL;
++ }
+ return err;
+ }
+
+@@ -868,11 +905,18 @@ static int mlxreg_lc_remove(struct platform_device *pdev)
+ struct mlxreg_core_data *data = dev_get_platdata(&pdev->dev);
+ struct mlxreg_lc *mlxreg_lc = platform_get_drvdata(pdev);
+
+- /* Clear event notification callback. */
+- if (data->notifier) {
+- data->notifier->user_handler = NULL;
+- data->notifier->handle = NULL;
+- }
++ /*
++	 * Probe and remove are invoked by hotplug events raised upon line card insertion and
++	 * removal. If the probe procedure fails, all data is cleared. However, a hotplug event
++	 * will still be raised on line card removal and will trigger the remove procedure. In
++	 * that case there is nothing to remove.
++ */
++ if (!data->notifier || !data->notifier->handle)
++ return 0;
++
++ /* Clear event notification callback and handle. */
++ data->notifier->user_handler = NULL;
++ data->notifier->handle = NULL;
+
+ /* Destroy static I2C device feeding by main power. */
+ mlxreg_lc_destroy_static_devices(mlxreg_lc, mlxreg_lc->main_devs,
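
The mlxreg-lc probe rework above is a textbook instance of the kernel's labeled-unwind style: one goto label per acquisition stage, ordered so a failure at stage N releases exactly stages N-1 down to 1 in reverse, instead of funnelling everything through a single catch-all label. A compact userspace model of the ladder (resource names are illustrative only):

	#include <stdio.h>
	#include <stdlib.h>

	static int probe(void)
	{
		int err = 0;
		void *adapter, *client;

		adapter = malloc(16);		/* stage 1: get the adapter */
		if (!adapter) {
			err = -1;
			goto get_adapter_fail;
		}

		client = malloc(16);		/* stage 2: create the client */
		if (!client) {
			err = -1;
			goto new_device_fail;
		}

		return 0;			/* success: keep both resources */

		/* Unwind in strict reverse order of acquisition. */
	new_device_fail:
		free(adapter);
	get_adapter_fail:
		return err;
	}

	int main(void) { return probe() ? 1 : 0; }

Stacked labels with no code between them (as with regcache_sync_fail/regmap_write_fail above) simply document which stages share the same unwind depth.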
+diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
+index 4ff5c3a12991c..921520475ff68 100644
+--- a/drivers/platform/olpc/olpc-ec.c
++++ b/drivers/platform/olpc/olpc-ec.c
+@@ -264,7 +264,7 @@ static ssize_t ec_dbgfs_cmd_write(struct file *file, const char __user *buf,
+ int i, m;
+ unsigned char ec_cmd[EC_MAX_CMD_ARGS];
+ unsigned int ec_cmd_int[EC_MAX_CMD_ARGS];
+- char cmdbuf[64];
++ char cmdbuf[64] = "";
+ int ec_cmd_bytes;
+
+ mutex_lock(&ec_dbgfs_lock);
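
The olpc-ec one-liner is subtler than it looks: the debugfs write handler copies at most the user-supplied byte count into cmdbuf and then tokenizes it as a C string, so a short write with no trailing NUL would let the parser run into uninitialized stack. Zero-initializing the array guarantees termination for any copy length. A small standalone demo of the guarantee:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char buf[64] = "";	/* all 64 bytes zeroed: always NUL-terminated */
		const char user_input[] = { '1', ' ', '2' };	/* no trailing '\0' */

		/* Models copy_from_user() of a short, unterminated write. */
		memcpy(buf, user_input, sizeof(user_input));

		/* Safe: strlen() stops at the zero fill, never past the copy. */
		printf("parsed %zu bytes: \"%s\"\n", strlen(buf), buf);
		return 0;
	}

An explicit buf[len] = '\0' after the copy would work too; the initializer is the one-line variant.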
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index b8b1ed1406de2..154317e9910d2 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -389,21 +389,16 @@ static const struct dmi_system_id critclk_systems[] = {
+ },
+ },
+ {
+- /* pmc_plt_clk0 - 3 are used for the 4 ethernet controllers */
+- .ident = "Lex 3I380D",
++ /*
++ * Lex System / Lex Computech Co. makes a lot of Bay Trail
++ * based embedded boards which often come with multiple
++ * ethernet controllers using multiple pmc_plt_clks. See:
++ * https://www.lex.com.tw/products/embedded-ipc-board/
++ */
++ .ident = "Lex BayTrail",
+ .callback = dmi_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Lex BayTrail"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "3I380D"),
+- },
+- },
+- {
+- /* pmc_plt_clk* - are used for ethernet controllers */
+- .ident = "Lex 2I385SW",
+- .callback = dmi_callback,
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Lex BayTrail"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "2I385SW"),
+ },
+ },
+ {
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index f5eced0842b36..61c5ff80bd303 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -53,7 +53,7 @@ static u64 set_pd_power_limit(struct dtpm *dtpm, u64 power_limit)
+
+ for (i = 0; i < pd->nr_perf_states; i++) {
+
+- power = pd->table[i].power * MICROWATT_PER_MILLIWATT * nr_cpus;
++ power = pd->table[i].power * nr_cpus;
+
+ if (power > power_limit)
+ break;
+@@ -63,8 +63,7 @@ static u64 set_pd_power_limit(struct dtpm *dtpm, u64 power_limit)
+
+ freq_qos_update_request(&dtpm_cpu->qos_req, freq);
+
+- power_limit = pd->table[i - 1].power *
+- MICROWATT_PER_MILLIWATT * nr_cpus;
++ power_limit = pd->table[i - 1].power * nr_cpus;
+
+ return power_limit;
+ }
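
On the dtpm_cpu hunk: the energy-model tables this code consumes were moved to micro-Watt precision elsewhere in this kernel, so keeping the MICROWATT_PER_MILLIWATT factor here converted twice and inflated every computed limit by a factor of 1000. Worked numbers, with an assumed table entry for illustration:

	#include <stdio.h>

	#define MICROWATT_PER_MILLIWATT 1000ULL

	int main(void)
	{
		unsigned long long table_power_uw = 150000;	/* 150 mW, already stored in uW */
		unsigned long long nr_cpus = 4;

		/* Old code scaled again: 1000x too large. */
		printf("old: %llu uW\n",
		       table_power_uw * MICROWATT_PER_MILLIWATT * nr_cpus);
		/* Fixed code scales by the CPU count only. */
		printf("new: %llu uW\n", table_power_uw * nr_cpus);
		return 0;
	}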
+diff --git a/drivers/pwm/pwm-lpc18xx-sct.c b/drivers/pwm/pwm-lpc18xx-sct.c
+index 272e0b5d01b89..0de5757477828 100644
+--- a/drivers/pwm/pwm-lpc18xx-sct.c
++++ b/drivers/pwm/pwm-lpc18xx-sct.c
+@@ -98,7 +98,7 @@ struct lpc18xx_pwm_chip {
+ unsigned long clk_rate;
+ unsigned int period_ns;
+ unsigned int min_period_ns;
+- unsigned int max_period_ns;
++ u64 max_period_ns;
+ unsigned int period_event;
+ unsigned long event_map;
+ struct mutex res_lock;
+@@ -145,40 +145,48 @@ static void lpc18xx_pwm_set_conflict_res(struct lpc18xx_pwm_chip *lpc18xx_pwm,
+ mutex_unlock(&lpc18xx_pwm->res_lock);
+ }
+
+-static void lpc18xx_pwm_config_period(struct pwm_chip *chip, int period_ns)
++static void lpc18xx_pwm_config_period(struct pwm_chip *chip, u64 period_ns)
+ {
+ struct lpc18xx_pwm_chip *lpc18xx_pwm = to_lpc18xx_pwm_chip(chip);
+- u64 val;
++ u32 val;
+
+- val = (u64)period_ns * lpc18xx_pwm->clk_rate;
+- do_div(val, NSEC_PER_SEC);
++ /*
++ * With clk_rate < NSEC_PER_SEC this cannot overflow.
++	 * With period_ns < max_period_ns this also fits into a u32.
++	 * As period_ns >= min_period_ns = DIV_ROUND_UP(NSEC_PER_SEC, lpc18xx_pwm->clk_rate),
++ * we have val >= 1.
++ */
++ val = mul_u64_u64_div_u64(period_ns, lpc18xx_pwm->clk_rate, NSEC_PER_SEC);
+
+ lpc18xx_pwm_writel(lpc18xx_pwm,
+ LPC18XX_PWM_MATCH(lpc18xx_pwm->period_event),
+- (u32)val - 1);
++ val - 1);
+
+ lpc18xx_pwm_writel(lpc18xx_pwm,
+ LPC18XX_PWM_MATCHREL(lpc18xx_pwm->period_event),
+- (u32)val - 1);
++ val - 1);
+ }
+
+ static void lpc18xx_pwm_config_duty(struct pwm_chip *chip,
+- struct pwm_device *pwm, int duty_ns)
++ struct pwm_device *pwm, u64 duty_ns)
+ {
+ struct lpc18xx_pwm_chip *lpc18xx_pwm = to_lpc18xx_pwm_chip(chip);
+ struct lpc18xx_pwm_data *lpc18xx_data = &lpc18xx_pwm->channeldata[pwm->hwpwm];
+- u64 val;
++ u32 val;
+
+- val = (u64)duty_ns * lpc18xx_pwm->clk_rate;
+- do_div(val, NSEC_PER_SEC);
++ /*
++ * With clk_rate < NSEC_PER_SEC this cannot overflow.
++	 * With duty_ns <= period_ns < max_period_ns this also fits into a u32.
++ */
++ val = mul_u64_u64_div_u64(duty_ns, lpc18xx_pwm->clk_rate, NSEC_PER_SEC);
+
+ lpc18xx_pwm_writel(lpc18xx_pwm,
+ LPC18XX_PWM_MATCH(lpc18xx_data->duty_event),
+- (u32)val);
++ val);
+
+ lpc18xx_pwm_writel(lpc18xx_pwm,
+ LPC18XX_PWM_MATCHREL(lpc18xx_data->duty_event),
+- (u32)val);
++ val);
+ }
+
+ static int lpc18xx_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+@@ -377,12 +385,19 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ goto disable_pwmclk;
+ }
+
++ /*
++ * If clkrate is too fast, the calculations in .apply() might overflow.
++ */
++ if (lpc18xx_pwm->clk_rate > NSEC_PER_SEC) {
++		ret = dev_err_probe(&pdev->dev, -EINVAL, "pwm clock too fast\n");
++ goto disable_pwmclk;
++ }
++
+ mutex_init(&lpc18xx_pwm->res_lock);
+ mutex_init(&lpc18xx_pwm->period_lock);
+
+- val = (u64)NSEC_PER_SEC * LPC18XX_PWM_TIMER_MAX;
+- do_div(val, lpc18xx_pwm->clk_rate);
+- lpc18xx_pwm->max_period_ns = val;
++ lpc18xx_pwm->max_period_ns =
++ mul_u64_u64_div_u64(NSEC_PER_SEC, LPC18XX_PWM_TIMER_MAX, lpc18xx_pwm->clk_rate);
+
+ lpc18xx_pwm->min_period_ns = DIV_ROUND_UP(NSEC_PER_SEC,
+ lpc18xx_pwm->clk_rate);
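
The lpc18xx-sct conversion to mul_u64_u64_div_u64() routes the ns-to-ticks math through a 128-bit intermediate product, so it cannot wrap regardless of operand size; the new clk_rate <= NSEC_PER_SEC check and the period bounds then let the in-code comments prove the result fits a u32. A standalone illustration of the wrap the helper rules out, with deliberately oversized demo values and the helper emulated via GCC/Clang's unsigned __int128 (roughly what the kernel does internally):

	#include <stdio.h>
	#include <stdint.h>

	#define NSEC_PER_SEC 1000000000ULL

	/* Userspace stand-in for the kernel's mul_u64_u64_div_u64(). */
	static uint64_t mul_u64_u64_div_u64(uint64_t a, uint64_t b, uint64_t c)
	{
		return (uint64_t)(((unsigned __int128)a * b) / c);
	}

	int main(void)
	{
		uint64_t clk_rate = 204000000;		/* 204 MHz SCT clock */
		uint64_t period_ns = 300000000000ULL;	/* 300 s, oversized on purpose */

		/* The 64-bit product wraps before the divide. */
		uint64_t naive = period_ns * clk_rate / NSEC_PER_SEC;
		uint64_t exact = mul_u64_u64_div_u64(period_ns, clk_rate, NSEC_PER_SEC);

		printf("naive 64-bit: %llu ticks\n", (unsigned long long)naive);
		printf("128-bit safe: %llu ticks\n", (unsigned long long)exact);
		return 0;
	}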
+diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
+index e6d05a3290026..1b61344c7cd10 100644
+--- a/drivers/pwm/pwm-sifive.c
++++ b/drivers/pwm/pwm-sifive.c
+@@ -23,7 +23,7 @@
+ #define PWM_SIFIVE_PWMCFG 0x0
+ #define PWM_SIFIVE_PWMCOUNT 0x8
+ #define PWM_SIFIVE_PWMS 0x10
+-#define PWM_SIFIVE_PWMCMP0 0x20
++#define PWM_SIFIVE_PWMCMP(i) (0x20 + 4 * (i))
+
+ /* PWMCFG fields */
+ #define PWM_SIFIVE_PWMCFG_SCALE GENMASK(3, 0)
+@@ -36,8 +36,6 @@
+ #define PWM_SIFIVE_PWMCFG_GANG BIT(24)
+ #define PWM_SIFIVE_PWMCFG_IP BIT(28)
+
+-/* PWM_SIFIVE_SIZE_PWMCMP is used to calculate offset for pwmcmpX registers */
+-#define PWM_SIFIVE_SIZE_PWMCMP 4
+ #define PWM_SIFIVE_CMPWIDTH 16
+ #define PWM_SIFIVE_DEFAULT_PERIOD 10000000
+
+@@ -112,8 +110,7 @@ static void pwm_sifive_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_sifive_ddata *ddata = pwm_sifive_chip_to_ddata(chip);
+ u32 duty, val;
+
+- duty = readl(ddata->regs + PWM_SIFIVE_PWMCMP0 +
+- pwm->hwpwm * PWM_SIFIVE_SIZE_PWMCMP);
++ duty = readl(ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
+
+ state->enabled = duty > 0;
+
+@@ -193,8 +190,7 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ pwm_sifive_update_clock(ddata, clk_get_rate(ddata->clk));
+ }
+
+- writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP0 +
+- pwm->hwpwm * PWM_SIFIVE_SIZE_PWMCMP);
++ writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
+
+ if (state->enabled != enabled)
+ pwm_sifive_enable(chip, state->enabled);
+@@ -232,6 +228,8 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ struct pwm_sifive_ddata *ddata;
+ struct pwm_chip *chip;
+ int ret;
++ u32 val;
++ unsigned int enabled_pwms = 0, enabled_clks = 1;
+
+ ddata = devm_kzalloc(dev, sizeof(*ddata), GFP_KERNEL);
+ if (!ddata)
+@@ -258,6 +256,33 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ val = readl(ddata->regs + PWM_SIFIVE_PWMCFG);
++ if (val & PWM_SIFIVE_PWMCFG_EN_ALWAYS) {
++ unsigned int i;
++
++ for (i = 0; i < chip->npwm; ++i) {
++ val = readl(ddata->regs + PWM_SIFIVE_PWMCMP(i));
++ if (val > 0)
++ ++enabled_pwms;
++ }
++ }
++
++ /* The clk should be on once for each running PWM. */
++ if (enabled_pwms) {
++ while (enabled_clks < enabled_pwms) {
++ /* This is not expected to fail as the clk is already on */
++ ret = clk_enable(ddata->clk);
++ if (unlikely(ret)) {
++ dev_err_probe(dev, ret, "Failed to enable clk\n");
++ goto disable_clk;
++ }
++ ++enabled_clks;
++ }
++ } else {
++ clk_disable(ddata->clk);
++ enabled_clks = 0;
++ }
++
+ /* Watch for changes to underlying clock frequency */
+ ddata->notifier.notifier_call = pwm_sifive_clock_notifier;
+ ret = clk_notifier_register(ddata->clk, &ddata->notifier);
+@@ -280,7 +305,11 @@ static int pwm_sifive_probe(struct platform_device *pdev)
+ unregister_clk:
+ clk_notifier_unregister(ddata->clk, &ddata->notifier);
+ disable_clk:
+- clk_disable_unprepare(ddata->clk);
++ while (enabled_clks) {
++ clk_disable(ddata->clk);
++ --enabled_clks;
++ }
++ clk_unprepare(ddata->clk);
+
+ return ret;
+ }
+@@ -288,23 +317,19 @@ disable_clk:
+ static int pwm_sifive_remove(struct platform_device *dev)
+ {
+ struct pwm_sifive_ddata *ddata = platform_get_drvdata(dev);
+- bool is_enabled = false;
+ struct pwm_device *pwm;
+ int ch;
+
++ pwmchip_remove(&ddata->chip);
++ clk_notifier_unregister(ddata->clk, &ddata->notifier);
++
+ for (ch = 0; ch < ddata->chip.npwm; ch++) {
+ pwm = &ddata->chip.pwms[ch];
+- if (pwm->state.enabled) {
+- is_enabled = true;
+- break;
+- }
++ if (pwm->state.enabled)
++ clk_disable(ddata->clk);
+ }
+- if (is_enabled)
+- clk_disable(ddata->clk);
+
+- clk_disable_unprepare(ddata->clk);
+- pwmchip_remove(&ddata->chip);
+- clk_notifier_unregister(ddata->clk, &ddata->notifier);
++ clk_unprepare(ddata->clk);
+
+ return 0;
+ }
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index f54d4f176882a..e12b681c72e5e 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -264,8 +264,12 @@ static int of_get_regulation_constraints(struct device *dev,
+ }
+
+ suspend_np = of_get_child_by_name(np, regulator_states[i]);
+- if (!suspend_np || !suspend_state)
++ if (!suspend_np)
+ continue;
++ if (!suspend_state) {
++ of_node_put(suspend_np);
++ continue;
++ }
+
+ if (!of_property_read_u32(suspend_np, "regulator-mode",
+ &pval)) {
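
The of_regulator fix applies the standing device-tree refcount rule: of_get_child_by_name() hands back a node with its refcount raised, and every path that stops using the node, including an early continue, must drop it with of_node_put() or the node leaks. A runnable toy model of the rule:

	#include <stdio.h>

	static int refcount = 1;	/* the node as it sits in the tree */

	static void node_get(void) { refcount++; }	/* models of_get_child_by_name() */
	static void node_put(void) { refcount--; }	/* models of_node_put() */

	static void scan(int interesting)
	{
		node_get();
		if (!interesting) {
			node_put();	/* the fix: drop the ref on the early path too */
			return;
		}
		/* ... use the node ... */
		node_put();		/* normal-path drop */
	}

	int main(void)
	{
		scan(0);
		scan(1);
		printf("refcount back to %d (leak-free)\n", refcount);	/* 1 */
		return 0;
	}

The imx_rproc hunk further down fixes the same class of leak in its memory-region loop.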
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index ef6e47d025cad..60f3513f7038b 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -357,10 +357,10 @@ static const struct regulator_desc pm8941_switch = {
+
+ static const struct regulator_desc pm8916_pldo = {
+ .linear_ranges = (struct linear_range[]) {
+- REGULATOR_LINEAR_RANGE(750000, 0, 208, 12500),
++ REGULATOR_LINEAR_RANGE(1750000, 0, 127, 12500),
+ },
+ .n_linear_ranges = 1,
+- .n_voltages = 209,
++ .n_voltages = 128,
+ .ops = &rpm_smps_ldo_ops,
+ };
+
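
For the pm8916_pldo correction it helps to spell out REGULATOR_LINEAR_RANGE(min_uV, min_sel, max_sel, step_uV): selector sel maps to min_uV + (sel - min_sel) * step_uV. The old descriptor therefore advertised 0.75 V to 3.35 V, while the fixed one yields the pLDO's documented 1.75 V to 3.3375 V across 128 selectors. A quick check:

	#include <stdio.h>

	/* Voltage for a selector in a linear range (min_sel is 0 here). */
	static unsigned int range_uv(unsigned int min_uv, unsigned int sel,
				     unsigned int step_uv)
	{
		return min_uv + sel * step_uv;
	}

	int main(void)
	{
		/* Old, wrong: 750 mV .. 3.35 V over 209 selectors. */
		printf("old max: %u uV\n", range_uv(750000, 208, 12500));
		/* New: 1.75 V .. 3.3375 V over 128 selectors. */
		printf("new min: %u uV, new max: %u uV\n",
		       range_uv(1750000, 0, 12500), range_uv(1750000, 127, 12500));
		return 0;
	}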
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 4a3352821b1da..38383e7de3c1e 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -594,16 +594,17 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+
+ node = of_parse_phandle(np, "memory-region", a);
+ /* Not map vdevbuffer, vdevring region */
+- if (!strncmp(node->name, "vdev", strlen("vdev")))
++ if (!strncmp(node->name, "vdev", strlen("vdev"))) {
++ of_node_put(node);
+ continue;
++ }
+ err = of_address_to_resource(node, 0, &res);
++ of_node_put(node);
+ if (err) {
+ dev_err(dev, "unable to resolve memory region\n");
+ return err;
+ }
+
+- of_node_put(node);
+-
+ if (b >= IMX_RPROC_MEM_MAX)
+ break;
+
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 6ae39c5653b1c..1c170d278b29f 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -87,6 +87,9 @@ static void adsp_minidump(struct rproc *rproc)
+ {
+ struct qcom_adsp *adsp = rproc->priv;
+
++ if (rproc->dump_conf == RPROC_COREDUMP_DISABLED)
++ return;
++
+ qcom_minidump(rproc, adsp->minidump_id);
+ }
+
+diff --git a/drivers/remoteproc/qcom_sysmon.c b/drivers/remoteproc/qcom_sysmon.c
+index 9fca814928635..a9f04dd83ab68 100644
+--- a/drivers/remoteproc/qcom_sysmon.c
++++ b/drivers/remoteproc/qcom_sysmon.c
+@@ -41,6 +41,7 @@ struct qcom_sysmon {
+ struct completion comp;
+ struct completion ind_comp;
+ struct completion shutdown_comp;
++ struct completion ssctl_comp;
+ struct mutex lock;
+
+ bool ssr_ack;
+@@ -445,6 +446,8 @@ static int ssctl_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
+
+ svc->priv = sysmon;
+
++ complete(&sysmon->ssctl_comp);
++
+ return 0;
+ }
+
+@@ -501,6 +504,7 @@ static int sysmon_start(struct rproc_subdev *subdev)
+ .ssr_event = SSCTL_SSR_EVENT_AFTER_POWERUP
+ };
+
++ reinit_completion(&sysmon->ssctl_comp);
+ mutex_lock(&sysmon->state_lock);
+ sysmon->state = SSCTL_SSR_EVENT_AFTER_POWERUP;
+ blocking_notifier_call_chain(&sysmon_notifiers, 0, (void *)&event);
+@@ -545,6 +549,11 @@ static void sysmon_stop(struct rproc_subdev *subdev, bool crashed)
+ if (crashed)
+ return;
+
++ if (sysmon->ssctl_instance) {
++ if (!wait_for_completion_timeout(&sysmon->ssctl_comp, HZ / 2))
++ dev_err(sysmon->dev, "timeout waiting for ssctl service\n");
++ }
++
+ if (sysmon->ssctl_version)
+ sysmon->shutdown_acked = ssctl_request_shutdown(sysmon);
+ else if (sysmon->ept)
+@@ -631,6 +640,7 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
+ init_completion(&sysmon->comp);
+ init_completion(&sysmon->ind_comp);
+ init_completion(&sysmon->shutdown_comp);
++ init_completion(&sysmon->ssctl_comp);
+ mutex_init(&sysmon->lock);
+ mutex_init(&sysmon->state_lock);
+
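
The qcom_sysmon change adds a dedicated completion so the stop path waits, bounded at HZ/2, for the SSCTL QMI service to have been discovered before sending it a shutdown request, with reinit_completion() rearming the gate on each restart. A runnable pthread model of the same handshake (the kernel uses struct completion; these names are stand-ins):

	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
	static int ssctl_ready;			/* models the completion's done count */

	static void *new_server(void *arg)	/* models ssctl_new_server() */
	{
		pthread_mutex_lock(&lock);
		ssctl_ready = 1;		/* models complete(&ssctl_comp) */
		pthread_cond_signal(&cond);
		pthread_mutex_unlock(&lock);
		return arg;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, new_server, NULL);

		/* Models wait_for_completion_timeout(&ssctl_comp, HZ / 2). */
		pthread_mutex_lock(&lock);
		while (!ssctl_ready)
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);

		puts("ssctl service up, safe to request shutdown");
		pthread_join(t, NULL);
		return 0;
	}

(Build with -pthread; the timeout handling is elided for brevity.)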
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index 9a223d394087f..68f37296b1516 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -467,6 +467,7 @@ static int wcnss_request_irq(struct qcom_wcnss *wcnss,
+ irq_handler_t thread_fn)
+ {
+ int ret;
++ int irq_number;
+
+ ret = platform_get_irq_byname(pdev, name);
+ if (ret < 0 && optional) {
+@@ -477,14 +478,19 @@ static int wcnss_request_irq(struct qcom_wcnss *wcnss,
+ return ret;
+ }
+
++ irq_number = ret;
++
+ ret = devm_request_threaded_irq(&pdev->dev, ret,
+ NULL, thread_fn,
+ IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ "wcnss", wcnss);
+- if (ret)
++ if (ret) {
+ dev_err(&pdev->dev, "request %s IRQ failed\n", name);
++ return ret;
++ }
+
+- return ret;
++ /* Return the IRQ number if the IRQ was successfully acquired */
++ return irq_number;
+ }
+
+ static int wcnss_alloc_memory_region(struct qcom_wcnss *wcnss)
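
The wcnss_request_irq() fix hinges on two different success conventions: platform_get_irq_byname() returns the positive Linux IRQ number, while devm_request_threaded_irq() returns 0 on success, so forwarding the latter unchanged loses the number callers rely on afterwards. The shape of the fix as a tiny standalone model (helper names hypothetical):

	#include <stdio.h>

	static int get_irq(void)		{ return 42; }	/* >0: the IRQ number */
	static int request_irq_stub(int irq)	{ (void)irq; return 0; }	/* 0 on success */

	static int wire_up_irq(void)
	{
		int ret = get_irq();
		if (ret < 0)
			return ret;		/* lookup failed */

		int irq_number = ret;		/* save before ret is reused */

		ret = request_irq_stub(irq_number);
		if (ret)
			return ret;		/* request failed */

		return irq_number;		/* success: hand back the number */
	}

	int main(void)
	{
		printf("wire_up_irq() = %d\n", wire_up_irq());	/* 42 */
		return 0;
	}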
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index 4840ad906018e..0481926c69752 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -1655,6 +1655,7 @@ static int k3_r5_cluster_of_init(struct platform_device *pdev)
+ if (!cpdev) {
+ ret = -ENODEV;
+ dev_err(dev, "could not get R5 core platform device\n");
++ of_node_put(child);
+ goto fail;
+ }
+
+@@ -1663,6 +1664,7 @@ static int k3_r5_cluster_of_init(struct platform_device *pdev)
+ dev_err(dev, "k3_r5_core_of_init failed, ret = %d\n",
+ ret);
+ put_device(&cpdev->dev);
++ of_node_put(child);
+ goto fail;
+ }
+
+diff --git a/drivers/rpmsg/mtk_rpmsg.c b/drivers/rpmsg/mtk_rpmsg.c
+index 5b4404b8be4c7..d1213c33da204 100644
+--- a/drivers/rpmsg/mtk_rpmsg.c
++++ b/drivers/rpmsg/mtk_rpmsg.c
+@@ -234,7 +234,9 @@ static void mtk_register_device_work_function(struct work_struct *register_work)
+ if (info->registered)
+ continue;
+
++ mutex_unlock(&subdev->channels_lock);
+ ret = mtk_rpmsg_register_device(subdev, &info->info);
++ mutex_lock(&subdev->channels_lock);
+ if (ret) {
+ dev_err(&pdev->dev, "Can't create rpmsg_device\n");
+ continue;
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 1957b27c4cf37..f7af53891ef92 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1383,6 +1383,7 @@ static int qcom_smd_parse_edge(struct device *dev,
+ }
+
+ edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
++ of_node_put(syscon_np);
+ if (IS_ERR(edge->ipc_regmap)) {
+ ret = PTR_ERR(edge->ipc_regmap);
+ goto put_node;
+diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
+index b6183d4f62a22..4f2189111494a 100644
+--- a/drivers/rpmsg/rpmsg_char.c
++++ b/drivers/rpmsg/rpmsg_char.c
+@@ -120,8 +120,11 @@ static int rpmsg_eptdev_open(struct inode *inode, struct file *filp)
+ struct rpmsg_device *rpdev = eptdev->rpdev;
+ struct device *dev = &eptdev->dev;
+
+- if (eptdev->ept)
++ mutex_lock(&eptdev->ept_lock);
++ if (eptdev->ept) {
++ mutex_unlock(&eptdev->ept_lock);
+ return -EBUSY;
++ }
+
+ get_device(dev);
+
+@@ -137,11 +140,13 @@ static int rpmsg_eptdev_open(struct inode *inode, struct file *filp)
+ if (!ept) {
+ dev_err(dev, "failed to open %s\n", eptdev->chinfo.name);
+ put_device(dev);
++ mutex_unlock(&eptdev->ept_lock);
+ return -EINVAL;
+ }
+
+ eptdev->ept = ept;
+ filp->private_data = eptdev;
++ mutex_unlock(&eptdev->ept_lock);
+
+ return 0;
+ }
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 290c1f02da10a..5a47cad89fdc3 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -618,6 +618,7 @@ int rpmsg_register_device_override(struct rpmsg_device *rpdev,
+ strlen(driver_override));
+ if (ret) {
+ dev_err(dev, "device_set_override failed: %d\n", ret);
++ put_device(dev);
+ return ret;
+ }
+ }
+diff --git a/drivers/rtc/rtc-rx8025.c b/drivers/rtc/rtc-rx8025.c
+index b32117ccd74bd..dde86f3e2a4bd 100644
+--- a/drivers/rtc/rtc-rx8025.c
++++ b/drivers/rtc/rtc-rx8025.c
+@@ -55,6 +55,8 @@
+ #define RX8025_BIT_CTRL2_XST BIT(5)
+ #define RX8025_BIT_CTRL2_VDET BIT(6)
+
++#define RX8035_BIT_HOUR_1224 BIT(7)
++
+ /* Clock precision adjustment */
+ #define RX8025_ADJ_RESOLUTION 3050 /* in ppb */
+ #define RX8025_ADJ_DATA_MAX 62
+@@ -78,6 +80,7 @@ struct rx8025_data {
+ struct rtc_device *rtc;
+ enum rx_model model;
+ u8 ctrl1;
++ int is_24;
+ };
+
+ static s32 rx8025_read_reg(const struct i2c_client *client, u8 number)
+@@ -226,7 +229,7 @@ static int rx8025_get_time(struct device *dev, struct rtc_time *dt)
+
+ dt->tm_sec = bcd2bin(date[RX8025_REG_SEC] & 0x7f);
+ dt->tm_min = bcd2bin(date[RX8025_REG_MIN] & 0x7f);
+- if (rx8025->ctrl1 & RX8025_BIT_CTRL1_1224)
++ if (rx8025->is_24)
+ dt->tm_hour = bcd2bin(date[RX8025_REG_HOUR] & 0x3f);
+ else
+ dt->tm_hour = bcd2bin(date[RX8025_REG_HOUR] & 0x1f) % 12
+@@ -254,7 +257,7 @@ static int rx8025_set_time(struct device *dev, struct rtc_time *dt)
+ */
+ date[RX8025_REG_SEC] = bin2bcd(dt->tm_sec);
+ date[RX8025_REG_MIN] = bin2bcd(dt->tm_min);
+- if (rx8025->ctrl1 & RX8025_BIT_CTRL1_1224)
++ if (rx8025->is_24)
+ date[RX8025_REG_HOUR] = bin2bcd(dt->tm_hour);
+ else
+ date[RX8025_REG_HOUR] = (dt->tm_hour >= 12 ? 0x20 : 0)
+@@ -279,6 +282,7 @@ static int rx8025_init_client(struct i2c_client *client)
+ struct rx8025_data *rx8025 = i2c_get_clientdata(client);
+ u8 ctrl[2], ctrl2;
+ int need_clear = 0;
++ int hour_reg;
+ int err;
+
+ err = rx8025_read_regs(client, RX8025_REG_CTRL1, 2, ctrl);
+@@ -303,6 +307,16 @@ static int rx8025_init_client(struct i2c_client *client)
+
+ err = rx8025_write_reg(client, RX8025_REG_CTRL2, ctrl2);
+ }
++
++ if (rx8025->model == model_rx_8035) {
++ /* In RX-8035, 12/24 flag is in the hour register */
++ hour_reg = rx8025_read_reg(client, RX8025_REG_HOUR);
++ if (hour_reg < 0)
++ return hour_reg;
++ rx8025->is_24 = (hour_reg & RX8035_BIT_HOUR_1224);
++ } else {
++ rx8025->is_24 = (ctrl[1] & RX8025_BIT_CTRL1_1224);
++ }
+ out:
+ return err;
+ }
+@@ -329,7 +343,7 @@ static int rx8025_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ /* Hardware alarms precision is 1 minute! */
+ t->time.tm_sec = 0;
+ t->time.tm_min = bcd2bin(ald[0] & 0x7f);
+- if (rx8025->ctrl1 & RX8025_BIT_CTRL1_1224)
++ if (rx8025->is_24)
+ t->time.tm_hour = bcd2bin(ald[1] & 0x3f);
+ else
+ t->time.tm_hour = bcd2bin(ald[1] & 0x1f) % 12
+@@ -350,7 +364,7 @@ static int rx8025_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ int err;
+
+ ald[0] = bin2bcd(t->time.tm_min);
+- if (rx8025->ctrl1 & RX8025_BIT_CTRL1_1224)
++ if (rx8025->is_24)
+ ald[1] = bin2bcd(t->time.tm_hour);
+ else
+ ald[1] = (t->time.tm_hour >= 12 ? 0x20 : 0)
+diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c
+index 516783ba950f8..92b32ce645b95 100644
+--- a/drivers/s390/char/zcore.c
++++ b/drivers/s390/char/zcore.c
+@@ -50,6 +50,7 @@ static struct dentry *zcore_reipl_file;
+ static struct dentry *zcore_hsa_file;
+ static struct ipl_parameter_block *zcore_ipl_block;
+
++static DEFINE_MUTEX(hsa_buf_mutex);
+ static char hsa_buf[PAGE_SIZE] __aligned(PAGE_SIZE);
+
+ /*
+@@ -66,19 +67,24 @@ int memcpy_hsa_user(void __user *dest, unsigned long src, size_t count)
+ if (!hsa_available)
+ return -ENODATA;
+
++ mutex_lock(&hsa_buf_mutex);
+ while (count) {
+ if (sclp_sdias_copy(hsa_buf, src / PAGE_SIZE + 2, 1)) {
+ TRACE("sclp_sdias_copy() failed\n");
++ mutex_unlock(&hsa_buf_mutex);
+ return -EIO;
+ }
+ offset = src % PAGE_SIZE;
+ bytes = min(PAGE_SIZE - offset, count);
+- if (copy_to_user(dest, hsa_buf + offset, bytes))
++ if (copy_to_user(dest, hsa_buf + offset, bytes)) {
++ mutex_unlock(&hsa_buf_mutex);
+ return -EFAULT;
++ }
+ src += bytes;
+ dest += bytes;
+ count -= bytes;
+ }
++ mutex_unlock(&hsa_buf_mutex);
+ return 0;
+ }
+
+@@ -96,9 +102,11 @@ int memcpy_hsa_kernel(void *dest, unsigned long src, size_t count)
+ if (!hsa_available)
+ return -ENODATA;
+
++ mutex_lock(&hsa_buf_mutex);
+ while (count) {
+ if (sclp_sdias_copy(hsa_buf, src / PAGE_SIZE + 2, 1)) {
+ TRACE("sclp_sdias_copy() failed\n");
++ mutex_unlock(&hsa_buf_mutex);
+ return -EIO;
+ }
+ offset = src % PAGE_SIZE;
+@@ -108,6 +116,7 @@ int memcpy_hsa_kernel(void *dest, unsigned long src, size_t count)
+ dest += bytes;
+ count -= bytes;
+ }
++ mutex_unlock(&hsa_buf_mutex);
+ return 0;
+ }
+
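
In zcore, hsa_buf is a single static bounce page shared by the user- and kernel-space copy helpers, so two concurrent readers could interleave one caller's sclp_sdias_copy() fill with another's drain; the new hsa_buf_mutex serializes the whole fill-then-drain loop, and note how every early return inside the loop now unlocks first. A runnable model of guarding a shared static buffer:

	#include <pthread.h>
	#include <stdio.h>
	#include <string.h>

	static pthread_mutex_t buf_mutex = PTHREAD_MUTEX_INITIALIZER;
	static char shared_buf[16];		/* models the static hsa_buf page */

	static int copy_via_buffer(char *dst, const char *src, size_t n)
	{
		pthread_mutex_lock(&buf_mutex);
		if (n > sizeof(shared_buf)) {
			pthread_mutex_unlock(&buf_mutex);	/* unlock on error paths too */
			return -1;
		}
		memcpy(shared_buf, src, n);	/* models the sclp_sdias_copy() fill */
		memcpy(dst, shared_buf, n);	/* models the copy_to_user() drain */
		pthread_mutex_unlock(&buf_mutex);
		return 0;
	}

	int main(void)
	{
		char out[16];
		printf("copy ok: %d\n", copy_via_buffer(out, "hsa data", 9));
		return 0;
	}

A goto-based single unlock site would avoid the per-return unlocks at the cost of restructuring the loop; the patch keeps the existing flow.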
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index ee182cfb467d1..279ad2161f179 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -14,7 +14,6 @@
+ #include <linux/init.h>
+ #include <linux/device.h>
+ #include <linux/slab.h>
+-#include <linux/uuid.h>
+ #include <linux/mdev.h>
+
+ #include <asm/isc.h>
+@@ -107,9 +106,10 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ /*
+ * Reset to IDLE only if processing of a channel program
+ * has finished. Do not overwrite a possible processing
+- * state if the final interrupt was for HSCH or CSCH.
++ * state if the interrupt was unsolicited, or if the final
++ * interrupt was for HSCH or CSCH.
+ */
+- if (private->mdev && cp_is_finished)
++ if (cp_is_finished)
+ private->state = VFIO_CCW_STATE_IDLE;
+
+ if (private->io_trigger)
+@@ -301,19 +301,11 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
+ if (work_pending(&sch->todo_work))
+ goto out_unlock;
+
+- if (cio_update_schib(sch)) {
+- vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
+- rc = 0;
+- goto out_unlock;
+- }
+-
+- private = dev_get_drvdata(&sch->dev);
+- if (private->state == VFIO_CCW_STATE_NOT_OPER) {
+- private->state = private->mdev ? VFIO_CCW_STATE_IDLE :
+- VFIO_CCW_STATE_STANDBY;
+- }
+ rc = 0;
+
++ if (cio_update_schib(sch))
++ vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
++
+ out_unlock:
+ spin_unlock_irqrestore(sch->lock, flags);
+
+@@ -358,8 +350,8 @@ static int vfio_ccw_chp_event(struct subchannel *sch,
+ return 0;
+
+ trace_vfio_ccw_chp_event(private->sch->schid, mask, event);
+- VFIO_CCW_MSG_EVENT(2, "%pUl (%x.%x.%04x): mask=0x%x event=%d\n",
+- mdev_uuid(private->mdev), sch->schid.cssid,
++ VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: mask=0x%x event=%d\n",
++ sch->schid.cssid,
+ sch->schid.ssid, sch->schid.sch_no,
+ mask, event);
+
+diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
+index 8483a266051c2..bbcc5b4867496 100644
+--- a/drivers/s390/cio/vfio_ccw_fsm.c
++++ b/drivers/s390/cio/vfio_ccw_fsm.c
+@@ -10,7 +10,6 @@
+ */
+
+ #include <linux/vfio.h>
+-#include <linux/mdev.h>
+
+ #include "ioasm.h"
+ #include "vfio_ccw_private.h"
+@@ -242,7 +241,6 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ union orb *orb;
+ union scsw *scsw = &private->scsw;
+ struct ccw_io_region *io_region = private->io_region;
+- struct mdev_device *mdev = private->mdev;
+ char *errstr = "request";
+ struct subchannel_id schid = get_schid(private);
+
+@@ -256,8 +254,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ if (orb->tm.b) {
+ io_region->ret_code = -EOPNOTSUPP;
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): transport mode\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: transport mode\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no);
+ errstr = "transport mode";
+ goto err_out;
+@@ -265,8 +263,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ io_region->ret_code = cp_init(&private->cp, orb);
+ if (io_region->ret_code) {
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): cp_init=%d\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: cp_init=%d\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no,
+ io_region->ret_code);
+ errstr = "cp init";
+@@ -276,8 +274,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ io_region->ret_code = cp_prefetch(&private->cp);
+ if (io_region->ret_code) {
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): cp_prefetch=%d\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: cp_prefetch=%d\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no,
+ io_region->ret_code);
+ errstr = "cp prefetch";
+@@ -289,8 +287,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ io_region->ret_code = fsm_io_helper(private);
+ if (io_region->ret_code) {
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): fsm_io_helper=%d\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: fsm_io_helper=%d\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no,
+ io_region->ret_code);
+ errstr = "cp fsm_io_helper";
+@@ -300,16 +298,16 @@ static void fsm_io_request(struct vfio_ccw_private *private,
+ return;
+ } else if (scsw->cmd.fctl & SCSW_FCTL_HALT_FUNC) {
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): halt on io_region\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: halt on io_region\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no);
+ /* halt is handled via the async cmd region */
+ io_region->ret_code = -EOPNOTSUPP;
+ goto err_out;
+ } else if (scsw->cmd.fctl & SCSW_FCTL_CLEAR_FUNC) {
+ VFIO_CCW_MSG_EVENT(2,
+- "%pUl (%x.%x.%04x): clear on io_region\n",
+- mdev_uuid(mdev), schid.cssid,
++ "sch %x.%x.%04x: clear on io_region\n",
++ schid.cssid,
+ schid.ssid, schid.sch_no);
+ /* clear is handled via the async cmd region */
+ io_region->ret_code = -EOPNOTSUPP;
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index b49e2e9db2dc6..9a05dadcbb754 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -131,8 +131,8 @@ static int vfio_ccw_mdev_probe(struct mdev_device *mdev)
+ private->mdev = mdev;
+ private->state = VFIO_CCW_STATE_IDLE;
+
+- VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: create\n",
+- mdev_uuid(mdev), private->sch->schid.cssid,
++ VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: create\n",
++ private->sch->schid.cssid,
+ private->sch->schid.ssid,
+ private->sch->schid.sch_no);
+
+@@ -146,7 +146,7 @@ err_atomic:
+ vfio_uninit_group_dev(&private->vdev);
+ atomic_inc(&private->avail);
+ private->mdev = NULL;
+- private->state = VFIO_CCW_STATE_IDLE;
++ private->state = VFIO_CCW_STATE_STANDBY;
+ return ret;
+ }
+
+@@ -154,8 +154,8 @@ static void vfio_ccw_mdev_remove(struct mdev_device *mdev)
+ {
+ struct vfio_ccw_private *private = dev_get_drvdata(mdev->dev.parent);
+
+- VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: remove\n",
+- mdev_uuid(mdev), private->sch->schid.cssid,
++ VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: remove\n",
++ private->sch->schid.cssid,
+ private->sch->schid.ssid,
+ private->sch->schid.sch_no);
+
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index 511bf8e0a436c..b61acbb09be3b 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -145,27 +145,33 @@ void zfcp_fc_enqueue_event(struct zfcp_adapter *adapter,
+
+ static int zfcp_fc_wka_port_get(struct zfcp_fc_wka_port *wka_port)
+ {
++ int ret = -EIO;
++
+ if (mutex_lock_interruptible(&wka_port->mutex))
+ return -ERESTARTSYS;
+
+ if (wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE ||
+ wka_port->status == ZFCP_FC_WKA_PORT_CLOSING) {
+ wka_port->status = ZFCP_FC_WKA_PORT_OPENING;
+- if (zfcp_fsf_open_wka_port(wka_port))
++ if (zfcp_fsf_open_wka_port(wka_port)) {
++ /* could not even send request, nothing to wait for */
+ wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
++ goto out;
++ }
+ }
+
+- mutex_unlock(&wka_port->mutex);
+-
+- wait_event(wka_port->completion_wq,
++ wait_event(wka_port->opened,
+ wka_port->status == ZFCP_FC_WKA_PORT_ONLINE ||
+ wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
+
+ if (wka_port->status == ZFCP_FC_WKA_PORT_ONLINE) {
+ atomic_inc(&wka_port->refcount);
+- return 0;
++ ret = 0;
++ goto out;
+ }
+- return -EIO;
++out:
++ mutex_unlock(&wka_port->mutex);
++ return ret;
+ }
+
+ static void zfcp_fc_wka_port_offline(struct work_struct *work)
+@@ -181,9 +187,12 @@ static void zfcp_fc_wka_port_offline(struct work_struct *work)
+
+ wka_port->status = ZFCP_FC_WKA_PORT_CLOSING;
+ if (zfcp_fsf_close_wka_port(wka_port)) {
++ /* could not even send request, nothing to wait for */
+ wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
+- wake_up(&wka_port->completion_wq);
++ goto out;
+ }
++ wait_event(wka_port->closed,
++ wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
+ out:
+ mutex_unlock(&wka_port->mutex);
+ }
+@@ -193,13 +202,15 @@ static void zfcp_fc_wka_port_put(struct zfcp_fc_wka_port *wka_port)
+ if (atomic_dec_return(&wka_port->refcount) != 0)
+ return;
+ /* wait 10 milliseconds, other reqs might pop in */
+- schedule_delayed_work(&wka_port->work, HZ / 100);
++ queue_delayed_work(wka_port->adapter->work_queue, &wka_port->work,
++ msecs_to_jiffies(10));
+ }
+
+ static void zfcp_fc_wka_port_init(struct zfcp_fc_wka_port *wka_port, u32 d_id,
+ struct zfcp_adapter *adapter)
+ {
+- init_waitqueue_head(&wka_port->completion_wq);
++ init_waitqueue_head(&wka_port->opened);
++ init_waitqueue_head(&wka_port->closed);
+
+ wka_port->adapter = adapter;
+ wka_port->d_id = d_id;
+diff --git a/drivers/s390/scsi/zfcp_fc.h b/drivers/s390/scsi/zfcp_fc.h
+index 8aaf409ce9cba..97755407ce1b5 100644
+--- a/drivers/s390/scsi/zfcp_fc.h
++++ b/drivers/s390/scsi/zfcp_fc.h
+@@ -185,7 +185,8 @@ enum zfcp_fc_wka_status {
+ /**
+ * struct zfcp_fc_wka_port - representation of well-known-address (WKA) FC port
+ * @adapter: Pointer to adapter structure this WKA port belongs to
+- * @completion_wq: Wait for completion of open/close command
++ * @opened: Wait for completion of open command
++ * @closed: Wait for completion of close command
+ * @status: Current status of WKA port
+ * @refcount: Reference count to keep port open as long as it is in use
+ * @d_id: FC destination id or well-known-address
+@@ -195,7 +196,8 @@ enum zfcp_fc_wka_status {
+ */
+ struct zfcp_fc_wka_port {
+ struct zfcp_adapter *adapter;
+- wait_queue_head_t completion_wq;
++ wait_queue_head_t opened;
++ wait_queue_head_t closed;
+ enum zfcp_fc_wka_status status;
+ atomic_t refcount;
+ u32 d_id;
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 4f1e4385ce58a..19223b0755686 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -1907,7 +1907,7 @@ static void zfcp_fsf_open_wka_port_handler(struct zfcp_fsf_req *req)
+ wka_port->status = ZFCP_FC_WKA_PORT_ONLINE;
+ }
+ out:
+- wake_up(&wka_port->completion_wq);
++ wake_up(&wka_port->opened);
+ }
+
+ /**
+@@ -1966,7 +1966,7 @@ static void zfcp_fsf_close_wka_port_handler(struct zfcp_fsf_req *req)
+ }
+
+ wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
+- wake_up(&wka_port->completion_wq);
++ wake_up(&wka_port->closed);
+ }
+
+ /**
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 3bb0adefbe06f..02026476c39c9 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -5745,7 +5745,7 @@ static void beiscsi_remove(struct pci_dev *pcidev)
+ cancel_work_sync(&phba->sess_work);
+
+ beiscsi_iface_destroy_default(phba);
+- iscsi_host_remove(phba->shost);
++ iscsi_host_remove(phba->shost, false);
+ beiscsi_disable_port(phba, 1);
+
+ /* after cancelling boot_work */
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index 15fbd09baa943..a3c800e04a2e8 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -909,7 +909,7 @@ void bnx2i_free_hba(struct bnx2i_hba *hba)
+ {
+ struct Scsi_Host *shost = hba->shost;
+
+- iscsi_host_remove(shost);
++ iscsi_host_remove(shost, false);
+ INIT_LIST_HEAD(&hba->ep_ofld_list);
+ INIT_LIST_HEAD(&hba->ep_active_list);
+ INIT_LIST_HEAD(&hba->ep_destroy_list);
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index 4365d52c6430e..32abdf0fa9aab 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -328,7 +328,7 @@ void cxgbi_hbas_remove(struct cxgbi_device *cdev)
+ chba = cdev->hbas[i];
+ if (chba) {
+ cdev->hbas[i] = NULL;
+- iscsi_host_remove(chba->shost);
++ iscsi_host_remove(chba->shost, false);
+ pci_dev_put(cdev->pdev);
+ iscsi_host_free(chba->shost);
+ }
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 9fee70d6434a8..52c6f70d60ec4 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -898,7 +898,7 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
+ remove_session:
+ iscsi_session_teardown(cls_session);
+ remove_host:
+- iscsi_host_remove(shost);
++ iscsi_host_remove(shost, false);
+ free_host:
+ iscsi_host_free(shost);
+ return NULL;
+@@ -915,7 +915,7 @@ static void iscsi_sw_tcp_session_destroy(struct iscsi_cls_session *cls_session)
+ iscsi_tcp_r2tpool_free(cls_session->dd_data);
+ iscsi_session_teardown(cls_session);
+
+- iscsi_host_remove(shost);
++ iscsi_host_remove(shost, false);
+ iscsi_host_free(shost);
+ }
+
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 797abf4f53995..3ddb701cd29c7 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -2828,11 +2828,12 @@ static void iscsi_notify_host_removed(struct iscsi_cls_session *cls_session)
+ /**
+ * iscsi_host_remove - remove host and sessions
+ * @shost: scsi host
++ * @is_shutdown: true if called from a driver shutdown callout
+ *
+ * If there are any sessions left, this will initiate the removal and wait
+ * for the completion.
+ */
+-void iscsi_host_remove(struct Scsi_Host *shost)
++void iscsi_host_remove(struct Scsi_Host *shost, bool is_shutdown)
+ {
+ struct iscsi_host *ihost = shost_priv(shost);
+ unsigned long flags;
+@@ -2841,7 +2842,11 @@ void iscsi_host_remove(struct Scsi_Host *shost)
+ ihost->state = ISCSI_HOST_REMOVED;
+ spin_unlock_irqrestore(&ihost->lock, flags);
+
+- iscsi_host_for_each_session(shost, iscsi_notify_host_removed);
++ if (!is_shutdown)
++ iscsi_host_for_each_session(shost, iscsi_notify_host_removed);
++ else
++ iscsi_host_for_each_session(shost, iscsi_force_destroy_session);
++
+ wait_event_interruptible(ihost->session_removal_wq,
+ ihost->num_sessions == 0);
+ if (signal_pending(current))
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index da9070cdad91a..212f9b9621878 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -604,7 +604,6 @@ struct lpfc_vport {
+ #define FC_VFI_REGISTERED 0x800000 /* VFI is registered */
+ #define FC_FDISC_COMPLETED 0x1000000/* FDISC completed */
+ #define FC_DISC_DELAYED 0x2000000/* Delay NPort discovery */
+-#define FC_RSCN_MEMENTO 0x4000000/* RSCN cmd processed */
+
+ uint32_t ct_flags;
+ #define FC_CT_RFF_ID 0x1 /* RFF_ID accepted by switch */
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 3fababb7c1818..c904e9486b921 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1886,7 +1886,6 @@ lpfc_end_rscn(struct lpfc_vport *vport)
+ else {
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag &= ~FC_RSCN_MODE;
+- vport->fc_flag |= FC_RSCN_MEMENTO;
+ spin_unlock_irq(shost->host_lock);
+ }
+ }
+@@ -2434,14 +2433,13 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ u32 local_nlp_type, elscmd;
+
+ /*
+- * If discovery was kicked off from RSCN mode,
+- * the FC4 types supported from a
++ * If we are in RSCN mode, the FC4 types supported from a
+ * previous GFT_ID command may not be accurate. So, if we
+ 	 * are an NVME Initiator, always look for the possibility of
+ 	 * the remote NPort being an NVME Target.
+ */
+ if (phba->sli_rev == LPFC_SLI_REV4 &&
+- vport->fc_flag & (FC_RSCN_MODE | FC_RSCN_MEMENTO) &&
++ vport->fc_flag & FC_RSCN_MODE &&
+ vport->nvmei_support)
+ ndlp->nlp_fc4_type |= NLP_FC4_NVME;
+ local_nlp_type = ndlp->nlp_fc4_type;
+@@ -7915,7 +7913,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ if ((rscn_cnt < FC_MAX_HOLD_RSCN) &&
+ !(vport->fc_flag & FC_RSCN_DISCOVERY)) {
+ vport->fc_flag |= FC_RSCN_MODE;
+- vport->fc_flag &= ~FC_RSCN_MEMENTO;
+ spin_unlock_irq(shost->host_lock);
+ if (rscn_cnt) {
+ cmd = vport->fc_rscn_id_list[rscn_cnt-1]->virt;
+@@ -7965,7 +7962,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_RSCN_MODE;
+- vport->fc_flag &= ~FC_RSCN_MEMENTO;
+ spin_unlock_irq(shost->host_lock);
+ vport->fc_rscn_id_list[vport->fc_rscn_id_cnt++] = pcmd;
+ /* Indicate we are done walking fc_rscn_id_list on this vport */
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index fb36f26170e4e..5cd838eac455c 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -1354,8 +1354,7 @@ lpfc_linkup_port(struct lpfc_vport *vport)
+
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI | FC_ABORT_DISCOVERY |
+- FC_RSCN_MEMENTO | FC_RSCN_MODE |
+- FC_NLP_MORE | FC_RSCN_DISCOVERY);
++ FC_RSCN_MODE | FC_NLP_MORE | FC_RSCN_DISCOVERY);
+ vport->fc_flag |= FC_NDISC_ACTIVE;
+ vport->fc_ns_retry = 0;
+ spin_unlock_irq(shost->host_lock);
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index ba5e4016262e2..084c0f9fdc3a6 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -5456,7 +5456,6 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
+ cur_iocbq->cmd_flag |= LPFC_IO_VMID;
+ }
+ }
+- atomic_inc(&ndlp->cmd_pending);
+
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ if (unlikely(phba->hdwqstat_on & LPFC_CHECK_SCSI_IO))
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 83ffba7f51da1..780d975c85b5b 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2414,9 +2414,12 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
+ int rval;
+ u16 retry = 10;
+
+- if (mode == QEDI_MODE_NORMAL || mode == QEDI_MODE_SHUTDOWN) {
+- iscsi_host_remove(qedi->shost);
++ if (mode == QEDI_MODE_NORMAL)
++ iscsi_host_remove(qedi->shost, false);
++ else if (mode == QEDI_MODE_SHUTDOWN)
++ iscsi_host_remove(qedi->shost, true);
+
++ if (mode == QEDI_MODE_NORMAL || mode == QEDI_MODE_SHUTDOWN) {
+ if (qedi->tmf_thread) {
+ destroy_workqueue(qedi->tmf_thread);
+ qedi->tmf_thread = NULL;
+@@ -2791,7 +2794,7 @@ remove_host:
+ #ifdef CONFIG_DEBUG_FS
+ qedi_dbg_host_exit(&qedi->dbg_ctx);
+ #endif
+- iscsi_host_remove(qedi->shost);
++ iscsi_host_remove(qedi->shost, false);
+ stop_iscsi_func:
+ qedi_ops->stop(qedi->cdev);
+ stop_slowpath:
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 3b3e4234f37a0..412ad888bdc17 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2716,17 +2716,23 @@ qla2x00_dev_loss_tmo_callbk(struct fc_rport *rport)
+ if (!fcport)
+ return;
+
+- /* Now that the rport has been deleted, set the fcport state to
+- FCS_DEVICE_DEAD */
+- qla2x00_set_fcport_state(fcport, FCS_DEVICE_DEAD);
++ /*
++ * Now that the rport has been deleted, set the fcport state to
++ * FCS_DEVICE_DEAD, if the fcport is still lost.
++ */
++ if (fcport->scan_state != QLA_FCPORT_FOUND)
++ qla2x00_set_fcport_state(fcport, FCS_DEVICE_DEAD);
+
+ /*
+ * Transport has effectively 'deleted' the rport, clear
+ * all local references.
+ */
+ spin_lock_irqsave(host->host_lock, flags);
+- fcport->rport = fcport->drport = NULL;
+- *((fc_port_t **)rport->dd_data) = NULL;
++ /* Confirm port has not reappeared before clearing pointers. */
++ if (rport->port_state != FC_PORTSTATE_ONLINE) {
++ fcport->rport = fcport->drport = NULL;
++ *((fc_port_t **)rport->dd_data) = NULL;
++ }
+ spin_unlock_irqrestore(host->host_lock, flags);
+
+ if (test_bit(ABORT_ISP_ACTIVE, &fcport->vha->dpc_flags))
+@@ -2759,9 +2766,12 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ /*
+ * At this point all fcport's software-states are cleared. Perform any
+ * final cleanup of firmware resources (PCBs and XCBs).
++ *
++ * Attempt to cleanup only lost devices.
+ */
+ if (fcport->loop_id != FC_NO_LOOP_ID) {
+- if (IS_FWI2_CAPABLE(fcport->vha->hw)) {
++ if (IS_FWI2_CAPABLE(fcport->vha->hw) &&
++ fcport->scan_state != QLA_FCPORT_FOUND) {
+ if (fcport->loop_id != FC_NO_LOOP_ID)
+ fcport->logout_on_delete = 1;
+
+@@ -2771,7 +2781,7 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ __LINE__);
+ qlt_schedule_sess_for_deletion(fcport);
+ }
+- } else {
++ } else if (!IS_FWI2_CAPABLE(fcport->vha->hw)) {
+ qla2x00_port_logout(fcport->vha, fcport);
+ }
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index c2f00f076f799..726af9e405728 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -2975,6 +2975,13 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+
+ ql_log(ql_log_info, vha, 0x708b, "%s CMD timeout. bsg ptr %p.\n",
+ __func__, bsg_job);
++
++ if (qla2x00_isp_reg_stat(ha)) {
++ ql_log(ql_log_info, vha, 0x9007,
++ "PCI/Register disconnect.\n");
++ qla_pci_set_eeh_busy(vha);
++ }
++
+ /* find the bsg job from the active list of commands */
+ spin_lock_irqsave(&ha->hardware_lock, flags);
+ for (que = 0; que < ha->max_req_queues; que++) {
+@@ -2992,7 +2999,8 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ sp->u.bsg_job == bsg_job) {
+ req->outstanding_cmds[cnt] = NULL;
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
+- if (ha->isp_ops->abort_command(sp)) {
++
++ if (!ha->flags.eeh_busy && ha->isp_ops->abort_command(sp)) {
+ ql_log(ql_log_warn, vha, 0x7089,
+ "mbx abort_command failed.\n");
+ bsg_reply->result = -EIO;
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h
+index f1f6c740bdcd8..feeb1666227f1 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.h
++++ b/drivers/scsi/qla2xxx/qla_dbg.h
+@@ -383,5 +383,5 @@ ql_mask_match(uint level)
+ if (ql2xextended_error_logging == 1)
+ ql2xextended_error_logging = QL_DBG_DEFAULT1_MASK;
+
+- return (level & ql2xextended_error_logging) == level;
++ return level && ((level & ql2xextended_error_logging) == level);
+ }
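
The ql_mask_match() tweak is easy to gloss over: a message is emitted when all of its level bits are enabled in ql2xextended_error_logging, but for level == 0 the old expression (0 & mask) == 0 is vacuously true, so level-0 messages always matched. The added level && term closes that hole. A quick check:

	#include <stdio.h>

	static unsigned int mask = 0x7fffffff;	/* example ql2xextended_error_logging */

	static int old_match(unsigned int level)
	{
		return (level & mask) == level;
	}

	static int new_match(unsigned int level)
	{
		return level && ((level & mask) == level);
	}

	int main(void)
	{
		printf("level 0x0000: old=%d new=%d\n", old_match(0), new_match(0));
		printf("level 0x1000: old=%d new=%d\n",
		       old_match(0x1000), new_match(0x1000));
		return 0;
	}

Output: level 0 matches under the old test (1) but not the new one (0); real levels behave identically under both.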
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index e8f69c486be10..01cdd5f8723c7 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -2158,6 +2158,11 @@ typedef struct {
+ #define CS_IOCB_ERROR 0x31 /* Generic error for IOCB request
+ failure */
+ #define CS_REJECT_RECEIVED 0x4E /* Reject received */
++#define CS_EDIF_AUTH_ERROR 0x63 /* decrypt error */
++#define CS_EDIF_PAD_LEN_ERROR 0x65 /* pad > frame size, not 4byte align */
++#define CS_EDIF_INV_REQ 0x66 /* invalid request */
++#define CS_EDIF_SPI_ERROR 0x67 /* rx frame unable to locate sa */
++#define CS_EDIF_HDR_ERROR 0x69 /* data frame != expected len */
+ #define CS_BAD_PAYLOAD 0x80 /* Driver defined */
+ #define CS_UNKNOWN 0x81 /* Driver defined */
+ #define CS_RETRY 0x82 /* Driver defined */
+@@ -2626,7 +2631,6 @@ typedef struct fc_port {
+ struct {
+ uint32_t enable:1; /* device is edif enabled/req'd */
+ uint32_t app_stop:2;
+- uint32_t app_started:1;
+ uint32_t aes_gmac:1;
+ uint32_t app_sess_online:1;
+ uint32_t tx_sa_set:1;
+@@ -2637,6 +2641,7 @@ typedef struct fc_port {
+ uint32_t rx_rekey_cnt;
+ uint64_t tx_bytes;
+ uint64_t rx_bytes;
++ uint8_t sess_down_acked;
+ uint8_t auth_state;
+ uint16_t authok:1;
+ uint16_t rekey_cnt;
+@@ -3204,6 +3209,8 @@ struct ct_sns_rsp {
+ #define GFF_NVME_OFFSET 23 /* type = 28h */
+ struct {
+ uint8_t fc4_features[128];
++#define FC4_FF_TARGET BIT_0
++#define FC4_FF_INITIATOR BIT_1
+ } gff_id;
+ struct {
+ uint8_t reserved;
+@@ -3975,6 +3982,7 @@ struct qla_hw_data {
+ /* SRB cache. */
+ #define SRB_MIN_REQ 128
+ mempool_t *srb_mempool;
++ u8 port_name[WWN_SIZE];
+
+ volatile struct {
+ uint32_t mbox_int :1;
+@@ -4040,6 +4048,9 @@ struct qla_hw_data {
+ uint32_t n2n_fw_acc_sec:1;
+ uint32_t plogi_template_valid:1;
+ uint32_t port_isolated:1;
++ uint32_t eeh_flush:2;
++#define EEH_FLUSH_RDY 1
++#define EEH_FLUSH_DONE 2
+ } flags;
+
+ uint16_t max_exchg;
+@@ -4074,6 +4085,7 @@ struct qla_hw_data {
+ uint32_t rsp_que_len;
+ uint32_t req_que_off;
+ uint32_t rsp_que_off;
++ unsigned long eeh_jif;
+
+ /* Multi queue data structs */
+ device_reg_t *mqiobase;
+@@ -4256,8 +4268,8 @@ struct qla_hw_data {
+ #define IS_OEM_001(ha) ((ha)->device_type & DT_OEM_001)
+ #define HAS_EXTENDED_IDS(ha) ((ha)->device_type & DT_EXTENDED_IDS)
+ #define IS_CT6_SUPPORTED(ha) ((ha)->device_type & DT_CT6_SUPPORTED)
+-#define IS_MQUE_CAPABLE(ha) ((ha)->mqenable || IS_QLA83XX(ha) || \
+- IS_QLA27XX(ha) || IS_QLA28XX(ha))
++#define IS_MQUE_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++ IS_QLA28XX(ha))
+ #define IS_BIDI_CAPABLE(ha) \
+ (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
+ /* Bit 21 of fw_attributes decides the MCTP capabilities */
+diff --git a/drivers/scsi/qla2xxx/qla_edif.c b/drivers/scsi/qla2xxx/qla_edif.c
+index cb8145a9ac09a..ee8931392ce2b 100644
+--- a/drivers/scsi/qla2xxx/qla_edif.c
++++ b/drivers/scsi/qla2xxx/qla_edif.c
+@@ -52,6 +52,31 @@ const char *sc_to_str(uint16_t cmd)
+ return "unknown";
+ }
+
++static struct edb_node *qla_edb_getnext(scsi_qla_host_t *vha)
++{
++ unsigned long flags;
++ struct edb_node *edbnode = NULL;
++
++ spin_lock_irqsave(&vha->e_dbell.db_lock, flags);
++
++ /* db nodes are fifo - no qualifications done */
++ if (!list_empty(&vha->e_dbell.head)) {
++ edbnode = list_first_entry(&vha->e_dbell.head,
++ struct edb_node, list);
++ list_del_init(&edbnode->list);
++ }
++
++ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
++
++ return edbnode;
++}
++
++static void qla_edb_node_free(scsi_qla_host_t *vha, struct edb_node *node)
++{
++ list_del_init(&node->list);
++ kfree(node);
++}
++
+ static struct edif_list_entry *qla_edif_list_find_sa_index(fc_port_t *fcport,
+ uint16_t handle)
+ {
+@@ -257,14 +282,8 @@ qla2x00_find_fcport_by_pid(scsi_qla_host_t *vha, port_id_t *id)
+
+ f = NULL;
+ list_for_each_entry_safe(f, tf, &vha->vp_fcports, list) {
+- if ((f->flags & FCF_FCSP_DEVICE)) {
+- ql_dbg(ql_dbg_edif + ql_dbg_verbose, vha, 0x2058,
+- "Found secure fcport - nn %8phN pn %8phN portid=0x%x, 0x%x.\n",
+- f->node_name, f->port_name,
+- f->d_id.b24, id->b24);
+- if (f->d_id.b24 == id->b24)
+- return f;
+- }
++ if (f->d_id.b24 == id->b24)
++ return f;
+ }
+ return NULL;
+ }
+@@ -280,14 +299,19 @@ qla_edif_app_check(scsi_qla_host_t *vha, struct app_id appid)
+ {
+ 	/* check that the app is allowed/known to the driver */
+
+- if (appid.app_vid == EDIF_APP_ID) {
+- ql_dbg(ql_dbg_edif + ql_dbg_verbose, vha, 0x911d, "%s app id ok\n", __func__);
+- return true;
++ if (appid.app_vid != EDIF_APP_ID) {
++ ql_dbg(ql_dbg_edif, vha, 0x911d, "%s app id not ok (%x)",
++ __func__, appid.app_vid);
++ return false;
++ }
++
++ if (appid.version != EDIF_VERSION1) {
++ ql_dbg(ql_dbg_edif, vha, 0x911d, "%s app version is not ok (%x)",
++ __func__, appid.version);
++ return false;
+ }
+- ql_dbg(ql_dbg_edif, vha, 0x911d, "%s app id not ok (%x)",
+- __func__, appid.app_vid);
+
+- return false;
++ return true;
+ }
+
+ static void
+@@ -486,16 +510,35 @@ qla_edif_app_start(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ /* mark doorbell as active since an app is now present */
+ vha->e_dbell.db_flags |= EDB_ACTIVE;
+ } else {
+- ql_dbg(ql_dbg_edif, vha, 0x911e, "%s doorbell already active\n",
+- __func__);
++ goto out;
+ }
+
+ if (N2N_TOPO(vha->hw)) {
+- if (vha->hw->flags.n2n_fw_acc_sec)
+- set_bit(N2N_LINK_RESET, &vha->dpc_flags);
+- else
++ list_for_each_entry_safe(fcport, tf, &vha->vp_fcports, list)
++ fcport->n2n_link_reset_cnt = 0;
++
++ if (vha->hw->flags.n2n_fw_acc_sec) {
++ list_for_each_entry_safe(fcport, tf, &vha->vp_fcports, list)
++ qla_edif_sa_ctl_init(vha, fcport);
++
++ /*
++ * While authentication app was not running, remote device
++ * could still try to login with this local port. Let's
++ * clear the state and try again.
++ */
++ qla2x00_wait_for_sess_deletion(vha);
++
++ /* bounce the link to get the other guy to relogin */
++ if (!vha->hw->flags.n2n_bigger) {
++ set_bit(N2N_LINK_RESET, &vha->dpc_flags);
++ qla2xxx_wake_dpc(vha);
++ }
++ } else {
++ qla2x00_wait_for_hba_online(vha);
+ set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+- qla2xxx_wake_dpc(vha);
++ qla2xxx_wake_dpc(vha);
++ qla2x00_wait_for_hba_online(vha);
++ }
+ } else {
+ list_for_each_entry_safe(fcport, tf, &vha->vp_fcports, list) {
+ ql_dbg(ql_dbg_edif, vha, 0x2058,
+@@ -517,19 +560,31 @@ qla_edif_app_start(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ if (atomic_read(&vha->loop_state) == LOOP_DOWN)
+ break;
+
+- fcport->edif.app_started = 1;
+ fcport->login_retry = vha->hw->login_retry_count;
+
+- /* no activity */
+ fcport->edif.app_stop = 0;
++ fcport->edif.app_sess_online = 0;
++
++ if (fcport->scan_state != QLA_FCPORT_FOUND)
++ continue;
++
++ if (fcport->port_type == FCT_UNKNOWN &&
++ !fcport->fc4_features)
++ rval = qla24xx_async_gffid(vha, fcport, true);
++
++ if (!rval && !(fcport->fc4_features & FC4_FF_TARGET ||
++ fcport->port_type & (FCT_TARGET|FCT_NVME_TARGET)))
++ continue;
++
++ rval = 0;
+
+ ql_dbg(ql_dbg_edif, vha, 0x911e,
+ "%s wwpn %8phC calling qla_edif_reset_auth_wait\n",
+ __func__, fcport->port_name);
+- fcport->edif.app_sess_online = 0;
+ qlt_schedule_sess_for_deletion(fcport);
+ qla_edif_sa_ctl_init(vha, fcport);
+ }
++ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ }
+
+ if (vha->pur_cinfo.enode_flags != ENODE_ACTIVE) {
+@@ -540,9 +595,11 @@ qla_edif_app_start(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ __func__);
+ }
+
++out:
+ appreply.host_support_edif = vha->hw->flags.edif_enabled;
+ appreply.edif_enode_active = vha->pur_cinfo.enode_flags;
+ appreply.edif_edb_active = vha->e_dbell.db_flags;
++ appreply.version = EDIF_VERSION1;
+
+ bsg_job->reply_len = sizeof(struct fc_bsg_reply);
+
+@@ -610,9 +667,6 @@ qla_edif_app_stop(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+
+ fcport->send_els_logo = 1;
+ qlt_schedule_sess_for_deletion(fcport);
+-
+- /* qla_edif_flush_sa_ctl_lists(fcport); */
+- fcport->edif.app_started = 0;
+ }
+ }
+
+@@ -672,6 +726,7 @@ qla_edif_app_authok(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ portid.b.area = appplogiok.u.d_id.b.area;
+ portid.b.al_pa = appplogiok.u.d_id.b.al_pa;
+
++ appplogireply.version = EDIF_VERSION1;
+ switch (appplogiok.type) {
+ case PL_TYPE_WWPN:
+ fcport = qla2x00_find_fcport_by_wwpn(vha,
+@@ -864,6 +919,8 @@ qla_edif_app_getfcinfo(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ } else {
+ struct fc_port *fcport = NULL, *tf;
+
++ app_reply->version = EDIF_VERSION1;
++
+ list_for_each_entry_safe(fcport, tf, &vha->vp_fcports, list) {
+ if (!(fcport->flags & FCF_FCSP_DEVICE))
+ continue;
+@@ -880,9 +937,25 @@ qla_edif_app_getfcinfo(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ if (tdid.b24 != 0 && tdid.b24 != fcport->d_id.b24)
+ continue;
+
+- app_reply->ports[pcnt].rekey_count =
+- fcport->edif.rekey_cnt;
++ if (!N2N_TOPO(vha->hw)) {
++ if (fcport->scan_state != QLA_FCPORT_FOUND)
++ continue;
++
++ if (fcport->port_type == FCT_UNKNOWN &&
++ !fcport->fc4_features)
++ rval = qla24xx_async_gffid(vha, fcport,
++ true);
++
++ if (!rval &&
++ !(fcport->fc4_features & FC4_FF_TARGET ||
++ fcport->port_type &
++ (FCT_TARGET | FCT_NVME_TARGET)))
++ continue;
++ }
++
++ rval = 0;
+
++ app_reply->ports[pcnt].version = EDIF_VERSION1;
+ app_reply->ports[pcnt].remote_type =
+ VND_CMD_RTYPE_UNKNOWN;
+ if (fcport->port_type & (FCT_NVME_TARGET | FCT_TARGET))
+@@ -979,6 +1052,8 @@ qla_edif_app_getstats(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ } else {
+ struct fc_port *fcport = NULL, *tf;
+
++ app_reply->version = EDIF_VERSION1;
++
+ list_for_each_entry_safe(fcport, tf, &vha->vp_fcports, list) {
+ if (fcport->edif.enable) {
+ if (pcnt > app_req.num_ports)
+@@ -1012,6 +1087,164 @@ qla_edif_app_getstats(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ return rval;
+ }
+
++static int32_t
++qla_edif_ack(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
++{
++ struct fc_port *fcport;
++ struct aen_complete_cmd ack;
++ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++
++ sg_copy_to_buffer(bsg_job->request_payload.sg_list,
++ bsg_job->request_payload.sg_cnt, &ack, sizeof(ack));
++
++ ql_dbg(ql_dbg_edif, vha, 0x70cf,
++ "%s: %06x event_code %x\n",
++ __func__, ack.port_id.b24, ack.event_code);
++
++ fcport = qla2x00_find_fcport_by_pid(vha, &ack.port_id);
++ SET_DID_STATUS(bsg_reply->result, DID_OK);
++
++ if (!fcport) {
++ ql_dbg(ql_dbg_edif, vha, 0x70cf,
++ "%s: unable to find fcport %06x \n",
++ __func__, ack.port_id.b24);
++ return 0;
++ }
++
++ switch (ack.event_code) {
++ case VND_CMD_AUTH_STATE_SESSION_SHUTDOWN:
++ fcport->edif.sess_down_acked = 1;
++ break;
++ default:
++ break;
++ }
++ return 0;
++}
++
++static int qla_edif_consume_dbell(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
++{
++ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++ u32 sg_skip, reply_payload_len;
++ bool keep;
++ struct edb_node *dbnode = NULL;
++ struct edif_app_dbell ap;
++ int dat_size = 0;
++
++ sg_skip = 0;
++ reply_payload_len = bsg_job->reply_payload.payload_len;
++
++ while ((reply_payload_len - sg_skip) >= sizeof(struct edb_node)) {
++ dbnode = qla_edb_getnext(vha);
++ if (dbnode) {
++ keep = true;
++ dat_size = 0;
++ ap.event_code = dbnode->ntype;
++ switch (dbnode->ntype) {
++ case VND_CMD_AUTH_STATE_SESSION_SHUTDOWN:
++ case VND_CMD_AUTH_STATE_NEEDED:
++ ap.port_id = dbnode->u.plogi_did;
++ dat_size += sizeof(ap.port_id);
++ break;
++ case VND_CMD_AUTH_STATE_ELS_RCVD:
++ ap.port_id = dbnode->u.els_sid;
++ dat_size += sizeof(ap.port_id);
++ break;
++ case VND_CMD_AUTH_STATE_SAUPDATE_COMPL:
++ ap.port_id = dbnode->u.sa_aen.port_id;
++ memcpy(&ap.event_data, &dbnode->u,
++ sizeof(struct edif_sa_update_aen));
++ dat_size += sizeof(struct edif_sa_update_aen);
++ break;
++ default:
++ keep = false;
++ ql_log(ql_log_warn, vha, 0x09102,
++ "%s unknown DB type=%d %p\n",
++ __func__, dbnode->ntype, dbnode);
++ break;
++ }
++ ap.event_data_size = dat_size;
++ /* 8 = sizeof(ap.event_code + ap.event_data_size) */
++ dat_size += 8;
++ if (keep)
++ sg_skip += sg_copy_buffer(bsg_job->reply_payload.sg_list,
++ bsg_job->reply_payload.sg_cnt,
++ &ap, dat_size, sg_skip, false);
++
++ ql_dbg(ql_dbg_edif, vha, 0x09102,
++ "%s Doorbell consumed : type=%d %p\n",
++ __func__, dbnode->ntype, dbnode);
++
++ kfree(dbnode);
++ } else {
++ break;
++ }
++ }
++
++ SET_DID_STATUS(bsg_reply->result, DID_OK);
++ bsg_reply->reply_payload_rcv_len = sg_skip;
++ bsg_job->reply_len = sizeof(struct fc_bsg_reply);
++
++ return 0;
++}
++
++static void __qla_edif_dbell_bsg_done(scsi_qla_host_t *vha, struct bsg_job *bsg_job,
++ u32 delay)
++{
++ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++
++ /* small sleep for doorbell events to accumulate */
++ if (delay)
++ msleep(delay);
++
++ qla_edif_consume_dbell(vha, bsg_job);
++
++ bsg_job_done(bsg_job, bsg_reply->result, bsg_reply->reply_payload_rcv_len);
++}
++
++static void qla_edif_dbell_bsg_done(scsi_qla_host_t *vha)
++{
++ unsigned long flags;
++ struct bsg_job *prev_bsg_job = NULL;
++
++ spin_lock_irqsave(&vha->e_dbell.db_lock, flags);
++ if (vha->e_dbell.dbell_bsg_job) {
++ prev_bsg_job = vha->e_dbell.dbell_bsg_job;
++ vha->e_dbell.dbell_bsg_job = NULL;
++ }
++ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
++
++ if (prev_bsg_job)
++ __qla_edif_dbell_bsg_done(vha, prev_bsg_job, 0);
++}
++
++static int
++qla_edif_dbell_bsg(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
++{
++ unsigned long flags;
++ bool return_bsg = false;
++
++ /* flush previous dbell bsg */
++ qla_edif_dbell_bsg_done(vha);
++
++ spin_lock_irqsave(&vha->e_dbell.db_lock, flags);
++ if (list_empty(&vha->e_dbell.head) && DBELL_ACTIVE(vha)) {
++ /*
++ * when the next db event happens, bsg_job will return.
++ * Otherwise, timer will return it.
++ */
++ vha->e_dbell.dbell_bsg_job = bsg_job;
++ vha->e_dbell.bsg_expire = jiffies + 10 * HZ;
++ } else {
++ return_bsg = true;
++ }
++ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
++
++ if (return_bsg)
++ __qla_edif_dbell_bsg_done(vha, bsg_job, 1);
++
++ return 0;
++}
++
+ int32_t
+ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+ {
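
[The three functions added above replace the old completion-based doorbell with a held BSG job: qla_edif_dbell_bsg() parks the job on vha->e_dbell.dbell_bsg_job with a 10-second expiry when no events are queued, qla_edif_dbell_bsg_done() hands it back when an event arrives or the timer fires, and qla_edif_consume_dbell() packs queued events into the reply buffer, advancing sg_skip by each record's size. A minimal userspace sketch of that variable-size packing follows; the types and sizes are illustrative only, not the driver's ABI.]

    /* Sketch only: pack variable-size doorbell records into a flat reply
     * buffer, the way qla_edif_consume_dbell() drives sg_copy_buffer().
     * All names here are illustrative, not the driver's. */
    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    struct dbell_rec {
        uint32_t event_code;
        uint32_t event_data_size;       /* bytes valid in data[] */
        uint8_t  data[64];
    };

    /* Returns bytes written into buf; stops once the next record no longer fits. */
    static size_t pack_events(const struct dbell_rec *evts, size_t nevts,
                              uint8_t *buf, size_t buflen)
    {
        size_t off = 0;

        for (size_t i = 0; i < nevts; i++) {
            /* 8 = the two u32 header fields, as in the driver's comment */
            size_t rec = 8 + evts[i].event_data_size;

            if (buflen - off < rec)
                break;
            memcpy(buf + off, &evts[i], rec);  /* header + payload are contiguous */
            off += rec;
        }
        return off;
    }
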
+@@ -1023,8 +1256,13 @@ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+ bool done = true;
+ int32_t rval = 0;
+ uint32_t vnd_sc = bsg_request->rqst_data.h_vendor.vendor_cmd[1];
++ u32 level = ql_dbg_edif;
++
++ /* doorbell is high traffic */
++ if (vnd_sc == QL_VND_SC_READ_DBELL)
++ level = 0;
+
+- ql_dbg(ql_dbg_edif, vha, 0x911d, "%s vnd subcmd=%x\n",
++ ql_dbg(level, vha, 0x911d, "%s vnd subcmd=%x\n",
+ __func__, vnd_sc);
+
+ sg_copy_to_buffer(bsg_job->request_payload.sg_list,
+@@ -1033,7 +1271,7 @@ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+
+ if (!vha->hw->flags.edif_enabled ||
+ test_bit(VPORT_DELETE, &vha->dpc_flags)) {
+- ql_dbg(ql_dbg_edif, vha, 0x911d,
++ ql_dbg(level, vha, 0x911d,
+ "%s edif not enabled or vp delete. bsg ptr done %p. dpc_flags %lx\n",
+ __func__, bsg_job, vha->dpc_flags);
+
+@@ -1042,7 +1280,7 @@ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+ }
+
+ if (!qla_edif_app_check(vha, appcheck)) {
+- ql_dbg(ql_dbg_edif, vha, 0x911d,
++ ql_dbg(level, vha, 0x911d,
+ "%s app checked failed.\n",
+ __func__);
+
+@@ -1074,6 +1312,13 @@ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+ case QL_VND_SC_GET_STATS:
+ rval = qla_edif_app_getstats(vha, bsg_job);
+ break;
++ case QL_VND_SC_AEN_COMPLETE:
++ rval = qla_edif_ack(vha, bsg_job);
++ break;
++ case QL_VND_SC_READ_DBELL:
++ rval = qla_edif_dbell_bsg(vha, bsg_job);
++ done = false;
++ break;
+ default:
+ ql_dbg(ql_dbg_edif, vha, 0x911d, "%s unknown cmd=%x\n",
+ __func__,
+@@ -1085,7 +1330,7 @@ qla_edif_app_mgmt(struct bsg_job *bsg_job)
+
+ done:
+ if (done) {
+- ql_dbg(ql_dbg_user, vha, 0x7009,
++ ql_dbg(level, vha, 0x7009,
+ "%s: %d bsg ptr done %p\n", __func__, __LINE__, bsg_job);
+ bsg_job_done(bsg_job, bsg_reply->result,
+ bsg_reply->reply_payload_rcv_len);
+@@ -1247,6 +1492,8 @@ qla24xx_check_sadb_avail_slot(struct bsg_job *bsg_job, fc_port_t *fcport,
+
+ #define QLA_SA_UPDATE_FLAGS_RX_KEY 0x0
+ #define QLA_SA_UPDATE_FLAGS_TX_KEY 0x2
++#define EDIF_MSLEEP_INTERVAL 100
++#define EDIF_RETRY_COUNT 50
+
+ int
+ qla24xx_sadb_update(struct bsg_job *bsg_job)
+@@ -1259,7 +1506,7 @@ qla24xx_sadb_update(struct bsg_job *bsg_job)
+ struct edif_list_entry *edif_entry = NULL;
+ int found = 0;
+ int rval = 0;
+- int result = 0;
++ int result = 0, cnt;
+ struct qla_sa_update_frame sa_frame;
+ struct srb_iocb *iocb_cmd;
+ port_id_t portid;
+@@ -1500,11 +1747,23 @@ force_rx_delete:
+ sp->done = qla2x00_bsg_job_done;
+ iocb_cmd = &sp->u.iocb_cmd;
+ iocb_cmd->u.sa_update.sa_frame = sa_frame;
+-
++ cnt = 0;
++retry:
+ rval = qla2x00_start_sp(sp);
+- if (rval != QLA_SUCCESS) {
++ switch (rval) {
++ case QLA_SUCCESS:
++ break;
++ case EAGAIN:
++ msleep(EDIF_MSLEEP_INTERVAL);
++ cnt++;
++ if (cnt < EDIF_RETRY_COUNT)
++ goto retry;
++
++ fallthrough;
++ default:
+ ql_log(ql_dbg_edif, vha, 0x70e3,
+- "qla2x00_start_sp failed=%d.\n", rval);
++ "%s qla2x00_start_sp failed=%d.\n",
++ __func__, rval);
+
+ qla2x00_rel_sp(sp);
+ rval = -EIO;
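
[With the retry block above, qla2x00_start_sp() failures that come back as EAGAIN (firmware temporarily out of resources) are retried up to EDIF_RETRY_COUNT times at EDIF_MSLEEP_INTERVAL (100 ms) spacing, roughly five seconds, before the SA update fails with -EIO; qla_edif_process_els() further down gains the same loop. The shape of that bounded retry, as a self-contained sketch with a toy submit stub:]

    #include <errno.h>
    #include <unistd.h>

    #define RETRY_INTERVAL_MS 100   /* mirrors EDIF_MSLEEP_INTERVAL */
    #define RETRY_COUNT       50    /* mirrors EDIF_RETRY_COUNT */

    /* Toy stand-in for qla2x00_start_sp(): succeeds on the fourth attempt. */
    static int submit(void)
    {
        static int calls;
        return ++calls < 4 ? EAGAIN : 0;
    }

    static int submit_with_retry(void)
    {
        int cnt = 0, rval;

        for (;;) {
            rval = submit();
            if (rval != EAGAIN)
                return rval;            /* success, or a hard failure */
            if (++cnt >= RETRY_COUNT)
                return -EIO;            /* give up, as the driver does */
            usleep(RETRY_INTERVAL_MS * 1000);
        }
    }
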
+@@ -1797,30 +2056,6 @@ qla_edb_init(scsi_qla_host_t *vha)
+ /* initialize lock which protects doorbell & init list */
+ spin_lock_init(&vha->e_dbell.db_lock);
+ INIT_LIST_HEAD(&vha->e_dbell.head);
+-
+- /* create and initialize doorbell */
+- init_completion(&vha->e_dbell.dbell);
+-}
+-
+-static void
+-qla_edb_node_free(scsi_qla_host_t *vha, struct edb_node *node)
+-{
+- /*
+- * releases the space held by this edb node entry
+- * this function does _not_ free the edb node itself
+- * NB: the edb node entry passed should not be on any list
+- *
+- * currently for doorbell there's no additional cleanup
+- * needed, but here as a placeholder for furture use.
+- */
+-
+- if (!node) {
+- ql_dbg(ql_dbg_edif, vha, 0x09122,
+- "%s error - no valid node passed\n", __func__);
+- return;
+- }
+-
+- node->ntype = N_UNDEF;
+ }
+
+ static void qla_edb_clear(scsi_qla_host_t *vha, port_id_t portid)
+@@ -1867,11 +2102,8 @@ static void qla_edb_clear(scsi_qla_host_t *vha, port_id_t portid)
+ }
+ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
+
+- list_for_each_entry_safe(e, tmp, &edb_list, list) {
++ list_for_each_entry_safe(e, tmp, &edb_list, list)
+ qla_edb_node_free(vha, e);
+- list_del_init(&e->list);
+- kfree(e);
+- }
+ }
+
+ /* function called when app is stopping */
+@@ -1899,14 +2131,10 @@ qla_edb_stop(scsi_qla_host_t *vha)
+ "%s freeing edb_node type=%x\n",
+ __func__, node->ntype);
+ qla_edb_node_free(vha, node);
+- list_del(&node->list);
+-
+- kfree(node);
+ }
+ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
+
+- /* wake up doorbell waiters - they'll be dismissed with error code */
+- complete_all(&vha->e_dbell.dbell);
++ qla_edif_dbell_bsg_done(vha);
+ }
+
+ static struct edb_node *
+@@ -1944,9 +2172,6 @@ qla_edb_node_add(scsi_qla_host_t *vha, struct edb_node *ptr)
+ list_add_tail(&ptr->list, &vha->e_dbell.head);
+ spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
+
+- /* ring doorbell for waiters */
+- complete(&vha->e_dbell.dbell);
+-
+ return true;
+ }
+
+@@ -2010,47 +2235,29 @@ qla_edb_eventcreate(scsi_qla_host_t *vha, uint32_t dbtype,
+ edbnode->u.sa_aen.port_id = fcport->d_id;
+ edbnode->u.sa_aen.status = data;
+ edbnode->u.sa_aen.key_type = data2;
++ edbnode->u.sa_aen.version = EDIF_VERSION1;
+ break;
+ default:
+ ql_dbg(ql_dbg_edif, vha, 0x09102,
+ "%s unknown type: %x\n", __func__, dbtype);
+- qla_edb_node_free(vha, edbnode);
+ kfree(edbnode);
+ edbnode = NULL;
+ break;
+ }
+
+- if (edbnode && (!qla_edb_node_add(vha, edbnode))) {
++ if (edbnode) {
++ if (!qla_edb_node_add(vha, edbnode)) {
++ ql_dbg(ql_dbg_edif, vha, 0x09102,
++ "%s unable to add dbnode\n", __func__);
++ kfree(edbnode);
++ return;
++ }
+ ql_dbg(ql_dbg_edif, vha, 0x09102,
+- "%s unable to add dbnode\n", __func__);
+- qla_edb_node_free(vha, edbnode);
+- kfree(edbnode);
+- return;
+- }
+- if (edbnode && fcport)
+- fcport->edif.auth_state = dbtype;
+- ql_dbg(ql_dbg_edif, vha, 0x09102,
+- "%s Doorbell produced : type=%d %p\n", __func__, dbtype, edbnode);
+-}
+-
+-static struct edb_node *
+-qla_edb_getnext(scsi_qla_host_t *vha)
+-{
+- unsigned long flags;
+- struct edb_node *edbnode = NULL;
+-
+- spin_lock_irqsave(&vha->e_dbell.db_lock, flags);
+-
+- /* db nodes are fifo - no qualifications done */
+- if (!list_empty(&vha->e_dbell.head)) {
+- edbnode = list_first_entry(&vha->e_dbell.head,
+- struct edb_node, list);
+- list_del(&edbnode->list);
++ "%s Doorbell produced : type=%d %p\n", __func__, dbtype, edbnode);
++ qla_edif_dbell_bsg_done(vha);
++ if (fcport)
++ fcport->edif.auth_state = dbtype;
+ }
+-
+- spin_unlock_irqrestore(&vha->e_dbell.db_lock, flags);
+-
+- return edbnode;
+ }
+
+ void
+@@ -2078,6 +2285,9 @@ qla_edif_timer(scsi_qla_host_t *vha)
+ ha->edif_post_stop_cnt_down = 60;
+ }
+ }
++
++ if (vha->e_dbell.dbell_bsg_job && time_after_eq(jiffies, vha->e_dbell.bsg_expire))
++ qla_edif_dbell_bsg_done(vha);
+ }
+
+ /*
+@@ -2145,7 +2355,6 @@ edif_doorbell_show(struct device *dev, struct device_attribute *attr,
+ "%s Doorbell consumed : type=%d %p\n",
+ __func__, dbnode->ntype, dbnode);
+ /* we're done with the db node, so free it up */
+- qla_edb_node_free(vha, dbnode);
+ kfree(dbnode);
+ } else {
+ break;
+@@ -2161,6 +2370,7 @@ edif_doorbell_show(struct device *dev, struct device_attribute *attr,
+
+ static void qla_noop_sp_done(srb_t *sp, int res)
+ {
++ sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ /* ref: INIT */
+ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+@@ -2185,7 +2395,8 @@ qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha, struct qla_work_evt *e)
+ if (!sa_ctl) {
+ ql_dbg(ql_dbg_edif, vha, 0x70e6,
+ "sa_ctl allocation failed\n");
+- return -ENOMEM;
++ rval = -ENOMEM;
++ goto done;
+ }
+
+ fcport = sa_ctl->fcport;
+@@ -2195,7 +2406,8 @@ qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha, struct qla_work_evt *e)
+ if (!sp) {
+ ql_dbg(ql_dbg_edif, vha, 0x70e6,
+ "SRB allocation failed\n");
+- return -ENOMEM;
++ rval = -ENOMEM;
++ goto done;
+ }
+
+ fcport->flags |= FCF_ASYNC_SENT;
+@@ -2224,9 +2436,16 @@ qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha, struct qla_work_evt *e)
+
+ rval = qla2x00_start_sp(sp);
+
+- if (rval != QLA_SUCCESS)
+- rval = QLA_FUNCTION_FAILED;
++ if (rval != QLA_SUCCESS) {
++ goto done_free_sp;
++ }
+
++ return rval;
++done_free_sp:
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ fcport->flags &= ~FCF_ASYNC_SENT;
++done:
++ fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ return rval;
+ }
+
+@@ -2446,8 +2665,7 @@ void qla24xx_auth_els(scsi_qla_host_t *vha, void **pkt, struct rsp_que **rsp)
+
+ fcport = qla2x00_find_fcport_by_pid(host, &purex->pur_info.pur_sid);
+
+- if (DBELL_INACTIVE(vha) ||
+- (fcport && EDIF_SESSION_DOWN(fcport))) {
++ if (DBELL_INACTIVE(vha)) {
+ ql_dbg(ql_dbg_edif, host, 0x0910c, "%s e_dbell.db_flags =%x %06x\n",
+ __func__, host->e_dbell.db_flags,
+ fcport ? fcport->d_id.b24 : 0);
+@@ -2457,6 +2675,22 @@ void qla24xx_auth_els(scsi_qla_host_t *vha, void **pkt, struct rsp_que **rsp)
+ return;
+ }
+
++ if (fcport && EDIF_SESSION_DOWN(fcport)) {
++ ql_dbg(ql_dbg_edif, host, 0x13b6,
++ "%s terminate exchange. Send logo to 0x%x\n",
++ __func__, a.did.b24);
++
++ a.tx_byte_count = a.tx_len = 0;
++ a.tx_addr = 0;
++ a.control_flags = EPD_RX_XCHG; /* EPD_RX_XCHG = terminate cmd */
++ qla_els_reject_iocb(host, (*rsp)->qpair, &a);
++ qla_enode_free(host, ptr);
++ /* send logo to let the remote port know to tear down the session */
++ fcport->send_els_logo = 1;
++ qlt_schedule_sess_for_deletion(fcport);
++ return;
++ }
++
+ /* add the local enode to the list */
+ qla_enode_add(host, ptr);
+
+@@ -3349,10 +3583,14 @@ int qla_edif_process_els(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ fc_port_t *fcport = NULL;
+ struct qla_hw_data *ha = vha->hw;
+ srb_t *sp;
+- int rval = (DID_ERROR << 16);
++ int rval = (DID_ERROR << 16), cnt;
+ port_id_t d_id;
+ struct qla_bsg_auth_els_request *p =
+ (struct qla_bsg_auth_els_request *)bsg_job->request;
++ struct qla_bsg_auth_els_reply *rpl =
++ (struct qla_bsg_auth_els_reply *)bsg_job->reply;
++
++ rpl->version = EDIF_VERSION1;
+
+ d_id.b.al_pa = bsg_request->rqst_data.h_els.port_id[2];
+ d_id.b.area = bsg_request->rqst_data.h_els.port_id[1];
+@@ -3371,7 +3609,7 @@ int qla_edif_process_els(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ if (qla_bsg_check(vha, bsg_job, fcport))
+ return 0;
+
+- if (fcport->loop_id == FC_NO_LOOP_ID) {
++ if (EDIF_SESS_DELETE(fcport)) {
+ ql_dbg(ql_dbg_edif, vha, 0x910d,
+ "%s ELS code %x, no loop id.\n", __func__,
+ bsg_request->rqst_data.r_els.els_code);
+@@ -3440,17 +3678,26 @@ int qla_edif_process_els(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ sp->free = qla2x00_bsg_sp_free;
+ sp->done = qla2x00_bsg_job_done;
+
++ cnt = 0;
++retry:
+ rval = qla2x00_start_sp(sp);
+-
+- ql_dbg(ql_dbg_edif, vha, 0x700a,
+- "%s %s %8phN xchg %x ctlflag %x hdl %x reqlen %xh bsg ptr %p\n",
+- __func__, sc_to_str(p->e.sub_cmd), fcport->port_name,
+- p->e.extra_rx_xchg_address, p->e.extra_control_flags,
+- sp->handle, sp->remap.req.len, bsg_job);
+-
+- if (rval != QLA_SUCCESS) {
++ switch (rval) {
++ case QLA_SUCCESS:
++ ql_dbg(ql_dbg_edif, vha, 0x700a,
++ "%s %s %8phN xchg %x ctlflag %x hdl %x reqlen %xh bsg ptr %p\n",
++ __func__, sc_to_str(p->e.sub_cmd), fcport->port_name,
++ p->e.extra_rx_xchg_address, p->e.extra_control_flags,
++ sp->handle, sp->remap.req.len, bsg_job);
++ break;
++ case EAGAIN:
++ msleep(EDIF_MSLEEP_INTERVAL);
++ cnt++;
++ if (cnt < EDIF_RETRY_COUNT)
++ goto retry;
++ fallthrough;
++ default:
+ ql_log(ql_log_warn, vha, 0x700e,
+- "qla2x00_start_sp failed = %d\n", rval);
++ "%s qla2x00_start_sp failed = %d\n", __func__, rval);
+ SET_DID_STATUS(bsg_reply->result, DID_IMM_RETRY);
+ rval = -EIO;
+ goto done_free_remap_rsp;
+@@ -3472,14 +3719,29 @@ done:
+
+ void qla_edif_sess_down(struct scsi_qla_host *vha, struct fc_port *sess)
+ {
++ u16 cnt = 0;
++
+ if (sess->edif.app_sess_online && DBELL_ACTIVE(vha)) {
+ ql_dbg(ql_dbg_disc, vha, 0xf09c,
+ "%s: sess %8phN send port_offline event\n",
+ __func__, sess->port_name);
+ sess->edif.app_sess_online = 0;
++ sess->edif.sess_down_acked = 0;
+ qla_edb_eventcreate(vha, VND_CMD_AUTH_STATE_SESSION_SHUTDOWN,
+ sess->d_id.b24, 0, sess);
+ qla2x00_post_aen_work(vha, FCH_EVT_PORT_OFFLINE, sess->d_id.b24);
++
++ while (!READ_ONCE(sess->edif.sess_down_acked) &&
++ !test_bit(VPORT_DELETE, &vha->dpc_flags)) {
++ msleep(100);
++ cnt++;
++ if (cnt > 100)
++ break;
++ }
++ sess->edif.sess_down_acked = 0;
++ ql_dbg(ql_dbg_disc, vha, 0xf09c,
++ "%s: sess %8phN port_offline event completed\n",
++ __func__, sess->port_name);
+ }
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_edif.h b/drivers/scsi/qla2xxx/qla_edif.h
+index a965ca8e47ce7..7cdb89ccdc6ea 100644
+--- a/drivers/scsi/qla2xxx/qla_edif.h
++++ b/drivers/scsi/qla2xxx/qla_edif.h
+@@ -51,7 +51,8 @@ struct edif_dbell {
+ enum db_flags_t db_flags;
+ spinlock_t db_lock;
+ struct list_head head;
+- struct completion dbell;
++ struct bsg_job *dbell_bsg_job;
++ unsigned long bsg_expire;
+ };
+
+ #define SA_UPDATE_IOCB_TYPE 0x71 /* Security Association Update IOCB entry */
+@@ -140,4 +141,8 @@ struct enode {
+ (DBELL_ACTIVE(_fcport->vha) && \
+ (_fcport->disc_state == DSC_LOGIN_AUTH_PEND))
+
++#define EDIF_SESS_DELETE(_s) \
++ (qla_ini_mode_enabled(_s->vha) && (_s->disc_state == DSC_DELETE_PEND || \
++ _s->disc_state == DSC_DELETED))
++
+ #endif /* __QLA_EDIF_H */
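
[Two header changes worth noting: struct edif_dbell trades its completion for the parked bsg_job pointer plus an expiry jiffy consumed by qla_edif_timer(), and the new EDIF_SESS_DELETE macro lets the ELS passthrough path refuse work for sessions already in teardown instead of only checking for a missing loop ID. The macro's test, unrolled into a plain function with illustrative enum values:]

    #include <stdbool.h>

    enum disc_state { DSC_LOGIN_COMPLETE, DSC_DELETE_PEND, DSC_DELETED };

    struct sess {
        bool initiator_mode;        /* qla_ini_mode_enabled() in the driver */
        enum disc_state disc_state;
    };

    static bool edif_sess_delete(const struct sess *s)
    {
        return s->initiator_mode &&
               (s->disc_state == DSC_DELETE_PEND ||
                s->disc_state == DSC_DELETED);
    }
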
+diff --git a/drivers/scsi/qla2xxx/qla_edif_bsg.h b/drivers/scsi/qla2xxx/qla_edif_bsg.h
+index 5a26c77157da2..0931f4e4e127a 100644
+--- a/drivers/scsi/qla2xxx/qla_edif_bsg.h
++++ b/drivers/scsi/qla2xxx/qla_edif_bsg.h
+@@ -7,13 +7,15 @@
+ #ifndef __QLA_EDIF_BSG_H
+ #define __QLA_EDIF_BSG_H
+
++#define EDIF_VERSION1 1
++
+ /* BSG Vendor specific commands */
+ #define ELS_MAX_PAYLOAD 2112
+ #ifndef WWN_SIZE
+ #define WWN_SIZE 8
+ #endif
+-#define VND_CMD_APP_RESERVED_SIZE 32
+-
++#define VND_CMD_APP_RESERVED_SIZE 28
++#define VND_CMD_PAD_SIZE 3
+ enum auth_els_sub_cmd {
+ SEND_ELS = 0,
+ SEND_ELS_REPLY,
+@@ -28,7 +30,9 @@ struct extra_auth_els {
+ #define BSG_CTL_FLAG_LS_ACC 1
+ #define BSG_CTL_FLAG_LS_RJT 2
+ #define BSG_CTL_FLAG_TRM 3
+- uint8_t extra_rsvd[3];
++ uint8_t version;
++ uint8_t pad[2];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct qla_bsg_auth_els_request {
+@@ -39,51 +43,46 @@ struct qla_bsg_auth_els_request {
+ struct qla_bsg_auth_els_reply {
+ struct fc_bsg_reply r;
+ uint32_t rx_xchg_address;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ };
+
+ struct app_id {
+ int app_vid;
+- uint8_t app_key[32];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct app_start_reply {
+ uint32_t host_support_edif;
+ uint32_t edif_enode_active;
+ uint32_t edif_edb_active;
+- uint32_t reserved[VND_CMD_APP_RESERVED_SIZE];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct app_start {
+ struct app_id app_info;
+- uint32_t prli_to;
+- uint32_t key_shred;
+ uint8_t app_start_flags;
+- uint8_t reserved[VND_CMD_APP_RESERVED_SIZE - 1];
++ uint8_t version;
++ uint8_t pad[2];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct app_stop {
+ struct app_id app_info;
+- char buf[16];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct app_plogi_reply {
+ uint32_t prli_status;
+- uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+-} __packed;
+-
+-#define RECFG_TIME 1
+-#define RECFG_BYTES 2
+-
+-struct app_rekey_cfg {
+- struct app_id app_info;
+- uint8_t rekey_mode;
+- port_id_t d_id;
+- uint8_t force;
+- union {
+- int64_t bytes;
+- int64_t time;
+- } rky_units;
+-
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
+ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+@@ -91,7 +90,9 @@ struct app_pinfo_req {
+ struct app_id app_info;
+ uint8_t num_ports;
+ port_id_t remote_pid;
+- uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ struct app_pinfo {
+@@ -103,11 +104,8 @@ struct app_pinfo {
+ #define VND_CMD_RTYPE_INITIATOR 2
+ uint8_t remote_state;
+ uint8_t auth_state;
+- uint8_t rekey_mode;
+- int64_t rekey_count;
+- int64_t rekey_config_value;
+- int64_t rekey_consumed_value;
+-
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
+ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+@@ -120,6 +118,8 @@ struct app_pinfo {
+
+ struct app_pinfo_reply {
+ uint8_t port_count;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
+ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ struct app_pinfo ports[];
+ } __packed;
+@@ -127,6 +127,8 @@ struct app_pinfo_reply {
+ struct app_sinfo_req {
+ struct app_id app_info;
+ uint8_t num_ports;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
+ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+@@ -140,6 +142,9 @@ struct app_sinfo {
+
+ struct app_stats_reply {
+ uint8_t elem_count;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ struct app_sinfo elem[];
+ } __packed;
+
+@@ -163,9 +168,11 @@ struct qla_sa_update_frame {
+ uint8_t node_name[WWN_SIZE];
+ uint8_t port_name[WWN_SIZE];
+ port_id_t port_id;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved2[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+-// used for edif mgmt bsg interface
+ #define QL_VND_SC_UNDEF 0
+ #define QL_VND_SC_SA_UPDATE 1
+ #define QL_VND_SC_APP_START 2
+@@ -175,6 +182,22 @@ struct qla_sa_update_frame {
+ #define QL_VND_SC_REKEY_CONFIG 6
+ #define QL_VND_SC_GET_FCINFO 7
+ #define QL_VND_SC_GET_STATS 8
++#define QL_VND_SC_AEN_COMPLETE 9
++#define QL_VND_SC_READ_DBELL 10
++
++/*
++ * bsg caller to provide empty buffer for doorbell events.
++ *
++ * sg_io_v4.din_xferp = empty buffer for door bell events
++ * sg_io_v4.dout_xferp = struct edif_read_dbell *buf
++ */
++struct edif_read_dbell {
++ struct app_id app_info;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
++};
++
+
+ /* Application interface data structure for rtn data */
+ #define EXT_DEF_EVENT_DATA_SIZE 64
+@@ -191,7 +214,9 @@ struct edif_sa_update_aen {
+ port_id_t port_id;
+ uint32_t key_type; /* Tx (1) or RX (2) */
+ uint32_t status; /* 0 succes, 1 failed, 2 timeout , 3 error */
+- uint8_t reserved[16];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ #define QL_VND_SA_STAT_SUCCESS 0
+@@ -212,9 +237,22 @@ struct auth_complete_cmd {
+ uint8_t wwpn[WWN_SIZE];
+ port_id_t d_id;
+ } u;
+- uint32_t reserved[VND_CMD_APP_RESERVED_SIZE];
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
++} __packed;
++
++struct aen_complete_cmd {
++ struct app_id app_info;
++ port_id_t port_id;
++ uint32_t event_code;
++ uint8_t version;
++ uint8_t pad[VND_CMD_PAD_SIZE];
++ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+ } __packed;
+
+ #define RX_DELAY_DELETE_TIMEOUT 20
+
++#define FCH_EVT_VENDOR_UNIQUE_VPORT_DOWN 1
++
+ #endif /* QLA_EDIF_BSG_H */
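
[Most of the churn in this header is ABI hygiene rather than new function: every request and reply struct now carries a one-byte version (EDIF_VERSION1), explicit pad bytes so later members stay naturally aligned inside the packed layout, and a fixed reserved tail that future fields can be carved from without shifting existing offsets. The convention, reduced to one hypothetical struct with a compile-time size check (C11 _Static_assert):]

    #include <stdint.h>

    #define PAD_SIZE      3
    #define RESERVED_SIZE 28

    struct app_example_reply {
        uint32_t status;
        uint8_t  version;           /* EDIF_VERSION1 today */
        uint8_t  pad[PAD_SIZE];     /* keeps the total a multiple of 4 */
        uint8_t  reserved[RESERVED_SIZE];
    } __attribute__((packed));

    /* Offsets are frozen by construction; growth comes out of reserved[]. */
    _Static_assert(sizeof(struct app_example_reply) == 4 + 1 + 3 + 28,
                   "ABI struct size must not drift");
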
+diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
+index 0bb1d562f0bfc..361015b5763ef 100644
+--- a/drivers/scsi/qla2xxx/qla_fw.h
++++ b/drivers/scsi/qla2xxx/qla_fw.h
+@@ -807,7 +807,7 @@ struct els_entry_24xx {
+ #define EPD_ELS_COMMAND (0 << 13)
+ #define EPD_ELS_ACC (1 << 13)
+ #define EPD_ELS_RJT (2 << 13)
+-#define EPD_RX_XCHG (3 << 13)
++#define EPD_RX_XCHG (3 << 13) /* terminate exchange */
+ #define ECF_CLR_PASSTHRU_PEND BIT_12
+ #define ECF_INCL_FRAME_HDR BIT_11
+ #define ECF_SEC_LOGIN BIT_3
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index dac27b5ff0ac7..2e5b65072b757 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -335,6 +335,7 @@ extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *);
+ extern int qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha,
+ struct qla_work_evt *e);
+ void qla2x00_sp_release(struct kref *kref);
++void qla2x00_els_dcmd2_iocb_timeout(void *data);
+
+ /*
+ * Global Function Prototypes in qla_mbx.c source file.
+@@ -433,7 +434,8 @@ extern int
+ qla2x00_get_resource_cnts(scsi_qla_host_t *);
+
+ extern int
+-qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map);
++qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map,
++ u8 *num_entries);
+
+ extern int
+ qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, struct link_statistics *,
+@@ -727,7 +729,7 @@ int qla24xx_async_gpsc(scsi_qla_host_t *, fc_port_t *);
+ void qla24xx_handle_gpsc_event(scsi_qla_host_t *, struct event_arg *);
+ int qla2x00_mgmt_svr_login(scsi_qla_host_t *);
+ void qla24xx_handle_gffid_event(scsi_qla_host_t *vha, struct event_arg *ea);
+-int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport);
++int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport, bool);
+ int qla24xx_async_gpnft(scsi_qla_host_t *, u8, srb_t *);
+ void qla24xx_async_gpnft_done(scsi_qla_host_t *, srb_t *);
+ void qla24xx_async_gnnft_done(scsi_qla_host_t *, srb_t *);
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index e811de2f6a25f..7ca7343370005 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1596,7 +1596,6 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ unsigned int callopt)
+ {
+ struct qla_hw_data *ha = vha->hw;
+- struct init_cb_24xx *icb24 = (void *)ha->init_cb;
+ struct new_utsname *p_sysid = utsname();
+ struct ct_fdmi_hba_attr *eiter;
+ uint16_t alen;
+@@ -1758,8 +1757,8 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ /* MAX CT Payload Length */
+ eiter = entries + size;
+ eiter->type = cpu_to_be16(FDMI_HBA_MAXIMUM_CT_PAYLOAD_LENGTH);
+- eiter->a.max_ct_len = cpu_to_be32(le16_to_cpu(IS_FWI2_CAPABLE(ha) ?
+- icb24->frame_payload_size : ha->init_cb->frame_payload_size));
++ eiter->a.max_ct_len = cpu_to_be32(ha->frame_payload_size >> 2);
++
+ alen = sizeof(eiter->a.max_ct_len);
+ alen += FDMI_ATTR_TYPELEN(eiter);
+ eiter->len = cpu_to_be16(alen);
+@@ -1851,7 +1850,6 @@ qla2x00_port_attributes(scsi_qla_host_t *vha, void *entries,
+ unsigned int callopt)
+ {
+ struct qla_hw_data *ha = vha->hw;
+- struct init_cb_24xx *icb24 = (void *)ha->init_cb;
+ struct new_utsname *p_sysid = utsname();
+ char *hostname = p_sysid ?
+ p_sysid->nodename : fc_host_system_hostname(vha->host);
+@@ -1903,8 +1901,7 @@ qla2x00_port_attributes(scsi_qla_host_t *vha, void *entries,
+ /* Max frame size. */
+ eiter = entries + size;
+ eiter->type = cpu_to_be16(FDMI_PORT_MAX_FRAME_SIZE);
+- eiter->a.max_frame_size = cpu_to_be32(le16_to_cpu(IS_FWI2_CAPABLE(ha) ?
+- icb24->frame_payload_size : ha->init_cb->frame_payload_size));
++ eiter->a.max_frame_size = cpu_to_be32(ha->frame_payload_size);
+ alen = sizeof(eiter->a.max_frame_size);
+ alen += FDMI_ATTR_TYPELEN(eiter);
+ eiter->len = cpu_to_be16(alen);
+@@ -3280,19 +3277,12 @@ done:
+ return rval;
+ }
+
+-void qla24xx_handle_gffid_event(scsi_qla_host_t *vha, struct event_arg *ea)
+-{
+- fc_port_t *fcport = ea->fcport;
+-
+- qla24xx_post_gnl_work(vha, fcport);
+-}
+
+ void qla24xx_async_gffid_sp_done(srb_t *sp, int res)
+ {
+ struct scsi_qla_host *vha = sp->vha;
+ fc_port_t *fcport = sp->fcport;
+ struct ct_sns_rsp *ct_rsp;
+- struct event_arg ea;
+ uint8_t fc4_scsi_feat;
+ uint8_t fc4_nvme_feat;
+
+@@ -3300,10 +3290,10 @@ void qla24xx_async_gffid_sp_done(srb_t *sp, int res)
+ "Async done-%s res %x ID %x. %8phC\n",
+ sp->name, res, fcport->d_id.b24, fcport->port_name);
+
+- fcport->flags &= ~FCF_ASYNC_SENT;
+- ct_rsp = &fcport->ct_desc.ct_sns->p.rsp;
++ ct_rsp = sp->u.iocb_cmd.u.ctarg.rsp;
+ fc4_scsi_feat = ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET];
+ fc4_nvme_feat = ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
++ sp->rc = res;
+
+ /*
+ * FC-GS-7, 5.2.3.12 FC-4 Features - format
+@@ -3324,24 +3314,42 @@ void qla24xx_async_gffid_sp_done(srb_t *sp, int res)
+ }
+ }
+
+- memset(&ea, 0, sizeof(ea));
+- ea.sp = sp;
+- ea.fcport = sp->fcport;
+- ea.rc = res;
++ if (sp->flags & SRB_WAKEUP_ON_COMP) {
++ complete(sp->comp);
++ } else {
++ if (sp->u.iocb_cmd.u.ctarg.req) {
++ dma_free_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
++ sp->u.iocb_cmd.u.ctarg.req,
++ sp->u.iocb_cmd.u.ctarg.req_dma);
++ sp->u.iocb_cmd.u.ctarg.req = NULL;
++ }
+
+- qla24xx_handle_gffid_event(vha, &ea);
+- /* ref: INIT */
+- kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ if (sp->u.iocb_cmd.u.ctarg.rsp) {
++ dma_free_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
++ sp->u.iocb_cmd.u.ctarg.rsp,
++ sp->u.iocb_cmd.u.ctarg.rsp_dma);
++ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
++ }
++
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ /* we should not be here */
++ dump_stack();
++ }
+ }
+
+ /* Get FC4 Feature with Nport ID. */
+-int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
++int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport, bool wait)
+ {
+ int rval = QLA_FUNCTION_FAILED;
+ struct ct_sns_req *ct_req;
+ srb_t *sp;
++ DECLARE_COMPLETION_ONSTACK(comp);
+
+- if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
++ /* this routine does not have handling for no wait */
++ if (!vha->flags.online || !wait)
+ return rval;
+
+ /* ref: INIT */
+@@ -3349,43 +3357,86 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (!sp)
+ return rval;
+
+- fcport->flags |= FCF_ASYNC_SENT;
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "gffid";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
+ qla24xx_async_gffid_sp_done);
++ sp->comp = &comp;
++ sp->u.iocb_cmd.timeout = qla2x00_els_dcmd2_iocb_timeout;
++
++ if (wait)
++ sp->flags = SRB_WAKEUP_ON_COMP;
++
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
++ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
++ &sp->u.iocb_cmd.u.ctarg.req_dma,
++ GFP_KERNEL);
++ if (!sp->u.iocb_cmd.u.ctarg.req) {
++ ql_log(ql_log_warn, vha, 0xd041,
++ "%s: Failed to allocate ct_sns request.\n",
++ __func__);
++ goto done_free_sp;
++ }
++
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
++ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
++ &sp->u.iocb_cmd.u.ctarg.rsp_dma,
++ GFP_KERNEL);
++ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
++ ql_log(ql_log_warn, vha, 0xd041,
++ "%s: Failed to allocate ct_sns response.\n",
++ __func__);
++ goto done_free_sp;
++ }
+
+ /* CT_IU preamble */
+- ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFF_ID_CMD,
+- GFF_ID_RSP_SIZE);
++ ct_req = qla2x00_prep_ct_req(sp->u.iocb_cmd.u.ctarg.req, GFF_ID_CMD, GFF_ID_RSP_SIZE);
+
+ ct_req->req.gff_id.port_id[0] = fcport->d_id.b.domain;
+ ct_req->req.gff_id.port_id[1] = fcport->d_id.b.area;
+ ct_req->req.gff_id.port_id[2] = fcport->d_id.b.al_pa;
+
+- sp->u.iocb_cmd.u.ctarg.req = fcport->ct_desc.ct_sns;
+- sp->u.iocb_cmd.u.ctarg.req_dma = fcport->ct_desc.ct_sns_dma;
+- sp->u.iocb_cmd.u.ctarg.rsp = fcport->ct_desc.ct_sns;
+- sp->u.iocb_cmd.u.ctarg.rsp_dma = fcport->ct_desc.ct_sns_dma;
+ sp->u.iocb_cmd.u.ctarg.req_size = GFF_ID_REQ_SIZE;
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GFF_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- ql_dbg(ql_dbg_disc, vha, 0x2132,
+- "Async-%s hdl=%x %8phC.\n", sp->name,
+- sp->handle, fcport->port_name);
+-
+ rval = qla2x00_start_sp(sp);
+- if (rval != QLA_SUCCESS)
++
++ if (rval != QLA_SUCCESS) {
++ rval = QLA_FUNCTION_FAILED;
+ goto done_free_sp;
++ } else {
++ ql_dbg(ql_dbg_disc, vha, 0x3074,
++ "Async-%s hdl=%x portid %06x\n",
++ sp->name, sp->handle, fcport->d_id.b24);
++ }
++
++ wait_for_completion(sp->comp);
++ rval = sp->rc;
+
+- return rval;
+ done_free_sp:
++ if (sp->u.iocb_cmd.u.ctarg.req) {
++ dma_free_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
++ sp->u.iocb_cmd.u.ctarg.req,
++ sp->u.iocb_cmd.u.ctarg.req_dma);
++ sp->u.iocb_cmd.u.ctarg.req = NULL;
++ }
++
++ if (sp->u.iocb_cmd.u.ctarg.rsp) {
++ dma_free_coherent(&vha->hw->pdev->dev,
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
++ sp->u.iocb_cmd.u.ctarg.rsp,
++ sp->u.iocb_cmd.u.ctarg.rsp_dma);
++ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
++ }
++
+ /* ref: INIT */
+ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+- fcport->flags &= ~FCF_ASYNC_SENT;
+ return rval;
+ }
+
+@@ -3578,7 +3629,7 @@ login_logout:
+ do_delete) {
+ if (fcport->loop_id != FC_NO_LOOP_ID) {
+ if (fcport->flags & FCF_FCP2_DEVICE)
+- fcport->logout_on_delete = 0;
++ continue;
+
+ ql_log(ql_log_warn, vha, 0x20f0,
+ "%s %d %8phC post del sess\n",
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 3f3417a3e8911..51503a316b10f 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -47,6 +47,7 @@ qla2x00_sp_timeout(struct timer_list *t)
+ {
+ srb_t *sp = from_timer(sp, t, u.iocb_cmd.timer);
+ struct srb_iocb *iocb;
++ scsi_qla_host_t *vha = sp->vha;
+
+ WARN_ON(irqs_disabled());
+ iocb = &sp->u.iocb_cmd;
+@@ -54,6 +55,12 @@ qla2x00_sp_timeout(struct timer_list *t)
+
+ /* ref: TMR */
+ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++
++ if (vha && qla2x00_isp_reg_stat(vha->hw)) {
++ ql_log(ql_log_info, vha, 0x9008,
++ "PCI/Register disconnect.\n");
++ qla_pci_set_eeh_busy(vha);
++ }
+ }
+
+ void qla2x00_sp_free(srb_t *sp)
+@@ -161,6 +168,7 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ struct srb_iocb *abt_iocb;
+ srb_t *sp;
+ int rval = QLA_FUNCTION_FAILED;
++ uint8_t bail;
+
+ /* ref: INIT for ABTS command */
+ sp = qla2xxx_get_qpair_sp(cmd_sp->vha, cmd_sp->qpair, cmd_sp->fcport,
+@@ -168,6 +176,7 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ if (!sp)
+ return QLA_MEMORY_ALLOC_FAILED;
+
++ QLA_VHA_MARK_BUSY(vha, bail);
+ abt_iocb = &sp->u.iocb_cmd;
+ sp->type = SRB_ABT_CMD;
+ sp->name = "abort";
+@@ -1480,7 +1489,6 @@ static int qla_chk_secure_login(scsi_qla_host_t *vha, fc_port_t *fcport,
+ ql_dbg(ql_dbg_disc, vha, 0x20ef,
+ "%s %d %8phC EDIF: post DB_AUTH: AUTH needed\n",
+ __func__, __LINE__, fcport->port_name);
+- fcport->edif.app_started = 1;
+ fcport->edif.app_sess_online = 1;
+
+ qla_edb_eventcreate(vha, VND_CMD_AUTH_STATE_NEEDED,
+@@ -1763,8 +1771,16 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ break;
+
+ case DSC_LOGIN_PEND:
+- if (fcport->fw_login_state == DSC_LS_PLOGI_COMP)
++ if (vha->hw->flags.edif_enabled)
++ break;
++
++ if (fcport->fw_login_state == DSC_LS_PLOGI_COMP) {
++ ql_dbg(ql_dbg_disc, vha, 0x2118,
++ "%s %d %8phC post %s PRLI\n",
++ __func__, __LINE__, fcport->port_name,
++ NVME_TARGET(vha->hw, fcport) ? "NVME" : "FC");
+ qla24xx_post_prli_work(vha, fcport);
++ }
+ break;
+
+ case DSC_UPD_FCPORT:
+@@ -1818,7 +1834,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ case RSCN_PORT_ADDR:
+ fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+ if (fcport) {
+- if (fcport->flags & FCF_FCP2_DEVICE) {
++ if (fcport->flags & FCF_FCP2_DEVICE &&
++ atomic_read(&fcport->state) == FCS_ONLINE) {
+ ql_dbg(ql_dbg_disc, vha, 0x2115,
+ "Delaying session delete for FCP2 portid=%06x %8phC ",
+ fcport->d_id.b24, fcport->port_name);
+@@ -1850,7 +1867,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ break;
+ case RSCN_AREA_ADDR:
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+- if (fcport->flags & FCF_FCP2_DEVICE)
++ if (fcport->flags & FCF_FCP2_DEVICE &&
++ atomic_read(&fcport->state) == FCS_ONLINE)
+ continue;
+
+ if ((ea->id.b24 & 0xffff00) == (fcport->d_id.b24 & 0xffff00)) {
+@@ -1861,7 +1879,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ break;
+ case RSCN_DOM_ADDR:
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+- if (fcport->flags & FCF_FCP2_DEVICE)
++ if (fcport->flags & FCF_FCP2_DEVICE &&
++ atomic_read(&fcport->state) == FCS_ONLINE)
+ continue;
+
+ if ((ea->id.b24 & 0xff0000) == (fcport->d_id.b24 & 0xff0000)) {
+@@ -1873,7 +1892,8 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ case RSCN_FAB_ADDR:
+ default:
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+- if (fcport->flags & FCF_FCP2_DEVICE)
++ if (fcport->flags & FCF_FCP2_DEVICE &&
++ atomic_read(&fcport->state) == FCS_ONLINE)
+ continue;
+
+ fcport->scan_needed = 1;
+@@ -2000,12 +2020,14 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ struct srb_iocb *tm_iocb;
+ srb_t *sp;
+ int rval = QLA_FUNCTION_FAILED;
++ uint8_t bail;
+
+ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
++ QLA_VHA_MARK_BUSY(vha, bail);
+ sp->type = SRB_TM_CMD;
+ sp->name = "tmf";
+ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha),
+@@ -2124,6 +2146,13 @@ qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ }
+
+ if (N2N_TOPO(vha->hw)) {
++ if (ea->fcport->n2n_link_reset_cnt ==
++ vha->hw->login_retry_count &&
++ ea->fcport->flags & FCF_FCSP_DEVICE) {
++ /* remote authentication app just started */
++ ea->fcport->n2n_link_reset_cnt = 0;
++ }
++
+ if (ea->fcport->n2n_link_reset_cnt <
+ vha->hw->login_retry_count) {
+ ea->fcport->n2n_link_reset_cnt++;
+@@ -4509,6 +4538,8 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
+ BIT_6) != 0;
+ ql_dbg(ql_dbg_init, vha, 0x00bc, "FA-WWPN Support: %s.\n",
+ (ha->flags.fawwpn_enabled) ? "enabled" : "disabled");
++ /* Init_cb will be reused for other command(s). Save a backup copy of port_name */
++ memcpy(ha->port_name, ha->init_cb->port_name, WWN_SIZE);
+ }
+
+ /* ELS pass through payload is limit by frame size. */
+@@ -5273,9 +5304,6 @@ qla2x00_alloc_fcport(scsi_qla_host_t *vha, gfp_t flags)
+ INIT_LIST_HEAD(&fcport->edif.tx_sa_list);
+ INIT_LIST_HEAD(&fcport->edif.rx_sa_list);
+
+- if (vha->e_dbell.db_flags == EDB_ACTIVE)
+- fcport->edif.app_started = 1;
+-
+ spin_lock_init(&fcport->edif.indx_list_lock);
+ INIT_LIST_HEAD(&fcport->edif.edif_indx_list);
+
+@@ -5488,6 +5516,22 @@ static int qla2x00_configure_n2n_loop(scsi_qla_host_t *vha)
+ return QLA_FUNCTION_FAILED;
+ }
+
++static void
++qla_reinitialize_link(scsi_qla_host_t *vha)
++{
++ int rval;
++
++ atomic_set(&vha->loop_state, LOOP_DOWN);
++ atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
++ rval = qla2x00_full_login_lip(vha);
++ if (rval == QLA_SUCCESS) {
++ ql_dbg(ql_dbg_disc, vha, 0xd050, "Link reinitialized\n");
++ } else {
++ ql_dbg(ql_dbg_disc, vha, 0xd051,
++ "Link reinitialization failed (%d)\n", rval);
++ }
++}
++
+ /*
+ * qla2x00_configure_local_loop
+ * Updates Fibre Channel Device Database with local loop devices.
+@@ -5539,6 +5583,19 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ spin_unlock_irqrestore(&vha->work_lock, flags);
+
+ if (vha->scan.scan_retry < MAX_SCAN_RETRIES) {
++ u8 loop_map_entries = 0;
++ int rc;
++
++ rc = qla2x00_get_fcal_position_map(vha, NULL,
++ &loop_map_entries);
++ if (rc == QLA_SUCCESS && loop_map_entries > 1) {
++ /*
++ * There are devices that are still not logged
++ * in. Reinitialize to give them a chance.
++ */
++ qla_reinitialize_link(vha);
++ return QLA_FUNCTION_FAILED;
++ }
+ set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+ set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+ }
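
[The local-loop scan retry now consults the firmware's FC-AL position map first: more than one entry means other ports are physically present on the loop but failed to log in, so the driver forces a full-login LIP via qla_reinitialize_link() and fails the scan to trigger a clean rescan, rather than resyncing a half-formed loop. The decision reduces to a few lines; a toy sketch with stubbed firmware calls:]

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy stand-ins for the mailbox query and the full-login LIP. */
    static int get_fcal_position_map(uint8_t *num_entries)
    {
        *num_entries = 2;           /* pretend two ports are on the loop */
        return 0;
    }
    static void reinitialize_link(void) { }

    /* Returns true when the caller should fail the scan and retry. */
    static bool retry_needs_link_bounce(void)
    {
        uint8_t entries = 0;

        if (get_fcal_position_map(&entries) == 0 && entries > 1) {
            reinitialize_link();    /* bounce the link so devices re-login */
            return true;
        }
        return false;
    }
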
+@@ -5767,8 +5824,6 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (atomic_read(&fcport->state) == FCS_ONLINE)
+ return;
+
+- qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ rport_ids.node_name = wwn_to_u64(fcport->node_name);
+ rport_ids.port_name = wwn_to_u64(fcport->port_name);
+ rport_ids.port_id = fcport->d_id.b.domain << 16 |
+@@ -5869,7 +5924,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ qla2x00_reg_remote_port(vha, fcport);
+ break;
+ case MODE_TARGET:
+- qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+ if (!vha->vha_tgt.qla_tgt->tgt_stop &&
+ !vha->vha_tgt.qla_tgt->tgt_stopped)
+ qlt_fc_port_added(vha, fcport);
+@@ -5887,6 +5941,8 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (NVME_TARGET(vha->hw, fcport))
+ qla_nvme_register_remote(vha, fcport);
+
++ qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ if (IS_IIDMA_CAPABLE(vha->hw) && vha->hw->flags.gpsc_supported) {
+ if (fcport->id_changed) {
+ fcport->id_changed = 0;
+@@ -9657,6 +9713,12 @@ int qla2xxx_disable_port(struct Scsi_Host *host)
+
+ vha->hw->flags.port_isolated = 1;
+
++ if (qla2x00_isp_reg_stat(vha->hw)) {
++ ql_log(ql_log_info, vha, 0x9006,
++ "PCI/Register disconnect, exiting.\n");
++ qla_pci_set_eeh_busy(vha);
++ return FAILED;
++ }
+ if (qla2x00_chip_is_down(vha))
+ return 0;
+
+@@ -9672,6 +9734,13 @@ int qla2xxx_enable_port(struct Scsi_Host *host)
+ {
+ scsi_qla_host_t *vha = shost_priv(host);
+
++ if (qla2x00_isp_reg_stat(vha->hw)) {
++ ql_log(ql_log_info, vha, 0x9001,
++ "PCI/Register disconnect, exiting.\n");
++ qla_pci_set_eeh_busy(vha);
++ return FAILED;
++ }
++
+ vha->hw->flags.port_isolated = 0;
+ /* Set the flag to 1, so that isp_abort can proceed */
+ vha->flags.online = 1;
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index e0fe9ddb4bd2c..42ce4e1fe7441 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2819,7 +2819,7 @@ qla24xx_els_logo_iocb(srb_t *sp, struct els_entry_24xx *els_iocb)
+ sp->vha->qla_stats.control_requests++;
+ }
+
+-static void
++void
+ qla2x00_els_dcmd2_iocb_timeout(void *data)
+ {
+ srb_t *sp = data;
+@@ -2882,6 +2882,9 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ sp->name, res, sp->handle, fcport->d_id.b24, fcport->port_name);
+
+ fcport->flags &= ~(FCF_ASYNC_SENT|FCF_ASYNC_ACTIVE);
++ /* For edif, set logout on delete to ensure any residual key from FW is flushed.*/
++ fcport->logout_on_delete = 1;
++ fcport->chip_reset = vha->hw->base_qpair->chip_reset;
+
+ if (sp->flags & SRB_WAKEUP_ON_COMP)
+ complete(&lio->u.els_plogi.comp);
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 21b31d6359c8a..de348628aa535 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -1354,9 +1354,7 @@ skip_rio:
+ if (!vha->vp_idx) {
+ if (ha->flags.fawwpn_enabled &&
+ (ha->current_topology == ISP_CFG_F)) {
+- void *wwpn = ha->init_cb->port_name;
+-
+- memcpy(vha->port_name, wwpn, WWN_SIZE);
++ memcpy(vha->port_name, ha->port_name, WWN_SIZE);
+ fc_host_port_name(vha->host) =
+ wwn_to_u64(vha->port_name);
+ ql_dbg(ql_dbg_init + ql_dbg_verbose,
+@@ -2639,7 +2637,7 @@ static void qla24xx_nvme_iocb_entry(scsi_qla_host_t *vha, struct req_que *req,
+ }
+
+ if (unlikely(logit))
+- ql_log(ql_dbg_io, fcport->vha, 0x5060,
++ ql_dbg(ql_dbg_io, fcport->vha, 0x5060,
+ "NVME-%s ERR Handling - hdl=%x status(%x) tr_len:%x resid=%x ox_id=%x\n",
+ sp->name, sp->handle, comp_status,
+ fd->transferred_length, le32_to_cpu(sts->residual_len),
+@@ -3426,6 +3424,7 @@ check_scsi_status:
+ case CS_PORT_UNAVAILABLE:
+ case CS_TIMEOUT:
+ case CS_RESET:
++ case CS_EDIF_INV_REQ:
+
+ /*
+ * We are going to have the fc class block the rport
+@@ -3496,7 +3495,7 @@ check_scsi_status:
+
+ out:
+ if (logit)
+- ql_log(ql_dbg_io, fcport->vha, 0x3022,
++ ql_dbg(ql_dbg_io, fcport->vha, 0x3022,
+ "FCP command status: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu portid=%02x%02x%02x oxid=0x%x cdb=%10phN len=0x%x rsp_info=0x%x resid=0x%x fw_resid=0x%x sp=%p cp=%p.\n",
+ comp_status, scsi_status, res, vha->host_no,
+ cp->device->id, cp->device->lun, fcport->d_id.b.domain,
+@@ -4420,16 +4419,12 @@ msix_register_fail:
+ }
+
+ /* Enable MSI-X vector for response queue update for queue 0 */
+- if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+- if (ha->msixbase && ha->mqiobase &&
+- (ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
+- ql2xmqsupport))
+- ha->mqenable = 1;
+- } else
+- if (ha->mqiobase &&
+- (ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
+- ql2xmqsupport))
+- ha->mqenable = 1;
++ if (IS_MQUE_CAPABLE(ha) &&
++ (ha->msixbase && ha->mqiobase && ha->max_qpairs))
++ ha->mqenable = 1;
++ else
++ ha->mqenable = 0;
++
+ ql_dbg(ql_dbg_multiq, vha, 0xc005,
+ "mqiobase=%p, max_rsp_queues=%d, max_req_queues=%d.\n",
+ ha->mqiobase, ha->max_rsp_queues, ha->max_req_queues);
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 892caf2475dff..86d8c455c07ab 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -238,6 +238,8 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ ql_dbg(ql_dbg_mbx, vha, 0x1112,
+ "mbox[%d]<-0x%04x\n", cnt, *iptr);
+ wrt_reg_word(optr, *iptr);
++ } else {
++ wrt_reg_word(optr, 0);
+ }
+
+ mboxes >>= 1;
+@@ -274,6 +276,12 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ atomic_inc(&ha->num_pend_mbx_stage3);
+ if (!wait_for_completion_timeout(&ha->mbx_intr_comp,
+ mcp->tov * HZ)) {
++ ql_dbg(ql_dbg_mbx, vha, 0x117a,
++ "cmd=%x Timeout.\n", command);
++ spin_lock_irqsave(&ha->hardware_lock, flags);
++ clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
++ spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ if (chip_reset != ha->chip_reset) {
+ eeh_delay = ha->flags.eeh_busy ? 1 : 0;
+
+@@ -286,12 +294,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ rval = QLA_ABORTED;
+ goto premature_exit;
+ }
+- ql_dbg(ql_dbg_mbx, vha, 0x117a,
+- "cmd=%x Timeout.\n", command);
+- spin_lock_irqsave(&ha->hardware_lock, flags);
+- clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
+- spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-
+ } else if (ha->flags.purge_mbox ||
+ chip_reset != ha->chip_reset) {
+ eeh_delay = ha->flags.eeh_busy ? 1 : 0;
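
[Two small mailbox fixes land here: outgoing registers whose bit is clear in the mboxes mask are now explicitly zeroed, so a previous command's values cannot leak into the firmware, and on a timeout the MBX_INTR_WAIT bit is cleared before the chip-reset early-exit check, where it could previously be left set. The register-load change in isolation (sketch; fixed register count assumed):]

    #include <stdint.h>

    #define MBOX_REGS 32

    static void load_mailboxes(volatile uint16_t regs[MBOX_REGS],
                               const uint16_t *vals, uint32_t mboxes)
    {
        for (int i = 0; i < MBOX_REGS; i++) {
            if (mboxes & 1)
                regs[i] = vals[i];  /* parameter for this command */
            else
                regs[i] = 0;        /* previously skipped: stale data stayed */
            mboxes >>= 1;
        }
    }
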
+@@ -3066,7 +3068,8 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
+ * Kernel context.
+ */
+ int
+-qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map)
++qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map,
++ u8 *num_entries)
+ {
+ int rval;
+ mbx_cmd_t mc;
+@@ -3106,6 +3109,8 @@ qla2x00_get_fcal_position_map(scsi_qla_host_t *vha, char *pos_map)
+
+ if (pos_map)
+ memcpy(pos_map, pmap, FCAL_MAP_SIZE);
++ if (num_entries)
++ *num_entries = pmap[0];
+ }
+ dma_pool_free(ha->s_dma_pool, pmap, pmap_dma);
+
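[qla2x00_get_fcal_position_map() gains an optional num_entries out-parameter that simply surfaces the first byte of the firmware's loop position map, the entry count, which is what the local-loop retry logic in qla_init.c consumes. Assuming the usual map layout (count byte followed by one AL_PA per entry), a parse would look like:]

    #include <stdint.h>
    #include <stdio.h>

    static void dump_position_map(const uint8_t *map)
    {
        uint8_t n = map[0];     /* entry count: what *num_entries returns */

        for (uint8_t i = 1; i <= n; i++)
            printf("position %u: AL_PA 0x%02x\n", i, map[i]);
    }
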
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 346d47b61c078..16a9f22bb8600 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -166,9 +166,13 @@ qla24xx_disable_vp(scsi_qla_host_t *vha)
+ int ret = QLA_SUCCESS;
+ fc_port_t *fcport;
+
+- if (vha->hw->flags.edif_enabled)
++ if (vha->hw->flags.edif_enabled) {
++ if (DBELL_ACTIVE(vha))
++ qla2x00_post_aen_work(vha, FCH_EVT_VENDOR_UNIQUE,
++ FCH_EVT_VENDOR_UNIQUE_VPORT_DOWN);
+ /* delete sessions and flush sa_indexes */
+ qla2x00_wait_for_sess_deletion(vha);
++ }
+
+ if (vha->hw->flags.fw_started)
+ ret = qla24xx_control_vp(vha, VCE_COMMAND_DISABLE_VPS_LOGO_ALL);
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 87c9404aa4018..7450c3458be7e 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -37,11 +37,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ (fcport->nvme_flag & NVME_FLAG_REGISTERED))
+ return 0;
+
+- if (atomic_read(&fcport->state) == FCS_ONLINE)
+- return 0;
+-
+- qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
+
+ memset(&req, 0, sizeof(struct nvme_fc_port_info));
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 73073fb08369c..1c7fb6484db20 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -333,6 +333,11 @@ MODULE_PARM_DESC(ql2xabts_wait_nvme,
+ "To wait for ABTS response on I/O timeouts for NVMe. (default: 1)");
+
+
++u32 ql2xdelay_before_pci_error_handling = 5;
++module_param(ql2xdelay_before_pci_error_handling, uint, 0644);
++MODULE_PARM_DESC(ql2xdelay_before_pci_error_handling,
++ "Number of seconds delayed before qla begin PCI error self-handling (default: 5).\n");
++
+ static void qla2x00_clear_drv_active(struct qla_hw_data *);
+ static void qla2x00_free_device(scsi_qla_host_t *);
+ static int qla2xxx_map_queues(struct Scsi_Host *shost);
+@@ -1337,21 +1342,20 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ /*
+ * Returns: QLA_SUCCESS or QLA_FUNCTION_FAILED.
+ */
+-int
+-qla2x00_eh_wait_for_pending_commands(scsi_qla_host_t *vha, unsigned int t,
+- uint64_t l, enum nexus_wait_type type)
++static int
++__qla2x00_eh_wait_for_pending_commands(struct qla_qpair *qpair, unsigned int t,
++ uint64_t l, enum nexus_wait_type type)
+ {
+ int cnt, match, status;
+ unsigned long flags;
+- struct qla_hw_data *ha = vha->hw;
+- struct req_que *req;
++ scsi_qla_host_t *vha = qpair->vha;
++ struct req_que *req = qpair->req;
+ srb_t *sp;
+ struct scsi_cmnd *cmd;
+
+ status = QLA_SUCCESS;
+
+- spin_lock_irqsave(&ha->hardware_lock, flags);
+- req = vha->req;
++ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ for (cnt = 1; status == QLA_SUCCESS &&
+ cnt < req->num_outstanding_cmds; cnt++) {
+ sp = req->outstanding_cmds[cnt];
+@@ -1378,15 +1382,35 @@ qla2x00_eh_wait_for_pending_commands(scsi_qla_host_t *vha, unsigned int t,
+ if (!match)
+ continue;
+
+- spin_unlock_irqrestore(&ha->hardware_lock, flags);
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ status = qla2x00_eh_wait_on_command(cmd);
+- spin_lock_irqsave(&ha->hardware_lock, flags);
++ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ }
+- spin_unlock_irqrestore(&ha->hardware_lock, flags);
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
+ return status;
+ }
+
++int
++qla2x00_eh_wait_for_pending_commands(scsi_qla_host_t *vha, unsigned int t,
++ uint64_t l, enum nexus_wait_type type)
++{
++ struct qla_qpair *qpair;
++ struct qla_hw_data *ha = vha->hw;
++ int i, status = QLA_SUCCESS;
++
++ status = __qla2x00_eh_wait_for_pending_commands(ha->base_qpair, t, l,
++ type);
++ for (i = 0; status == QLA_SUCCESS && i < ha->max_qpairs; i++) {
++ qpair = ha->queue_pair_map[i];
++ if (!qpair)
++ continue;
++ status = __qla2x00_eh_wait_for_pending_commands(qpair, t, l,
++ type);
++ }
++ return status;
++}
++
+ static char *reset_errors[] = {
+ "HBA not online",
+ "HBA not ready",
+@@ -1420,7 +1444,7 @@ qla2xxx_eh_device_reset(struct scsi_cmnd *cmd)
+ return err;
+
+ if (fcport->deleted)
+- return SUCCESS;
++ return FAILED;
+
+ ql_log(ql_log_info, vha, 0x8009,
+ "DEVICE RESET ISSUED nexus=%ld:%d:%llu cmd=%p.\n", vha->host_no,
+@@ -1488,7 +1512,7 @@ qla2xxx_eh_target_reset(struct scsi_cmnd *cmd)
+ return err;
+
+ if (fcport->deleted)
+- return SUCCESS;
++ return FAILED;
+
+ ql_log(ql_log_info, vha, 0x8009,
+ "TARGET RESET ISSUED nexus=%ld:%d cmd=%p.\n", vha->host_no,
+@@ -5472,7 +5496,7 @@ qla2x00_do_work(struct scsi_qla_host *vha)
+ e->u.fcport.fcport, false);
+ break;
+ case QLA_EVT_SA_REPLACE:
+- qla24xx_issue_sa_replace_iocb(vha, e);
++ rc = qla24xx_issue_sa_replace_iocb(vha, e);
+ break;
+ }
+
+@@ -7238,6 +7262,44 @@ static void qla_heart_beat(struct scsi_qla_host *vha, u16 dpc_started)
+ }
+ }
+
++static void qla_wind_down_chip(scsi_qla_host_t *vha)
++{
++ struct qla_hw_data *ha = vha->hw;
++
++ if (!ha->flags.eeh_busy)
++ return;
++ if (ha->pci_error_state)
++ /* system is trying to recover */
++ return;
++
++ /*
++ * Current system is not handling PCIE error. At this point, this is
++ * best effort to wind down the adapter.
++ */
++ if (time_after_eq(jiffies, ha->eeh_jif + ql2xdelay_before_pci_error_handling * HZ) &&
++ !ha->flags.eeh_flush) {
++ ql_log(ql_log_info, vha, 0x9009,
++ "PCI Error detected, attempting to reset hardware.\n");
++
++ ha->isp_ops->reset_chip(vha);
++ ha->isp_ops->disable_intrs(ha);
++
++ ha->flags.eeh_flush = EEH_FLUSH_RDY;
++ ha->eeh_jif = jiffies;
++
++ } else if (ha->flags.eeh_flush == EEH_FLUSH_RDY &&
++ time_after_eq(jiffies, ha->eeh_jif + 5 * HZ)) {
++ pci_clear_master(ha->pdev);
++
++ /* flush all command */
++ qla2x00_abort_isp_cleanup(vha);
++ ha->flags.eeh_flush = EEH_FLUSH_DONE;
++
++ ql_log(ql_log_info, vha, 0x900a,
++ "PCI Error handling complete, all IOs aborted.\n");
++ }
++}
++
+ /**************************************************************************
+ * qla2x00_timer
+ *
+@@ -7261,6 +7323,8 @@ qla2x00_timer(struct timer_list *t)
+ fc_port_t *fcport = NULL;
+
+ if (ha->flags.eeh_busy) {
++ qla_wind_down_chip(vha);
++
+ ql_dbg(ql_dbg_timer, vha, 0x6000,
+ "EEH = %d, restarting timer.\n",
+ ha->flags.eeh_busy);
+@@ -7841,6 +7905,9 @@ void qla_pci_set_eeh_busy(struct scsi_qla_host *vha)
+
+ spin_lock_irqsave(&base_vha->work_lock, flags);
+ if (!ha->flags.eeh_busy) {
++ ha->eeh_jif = jiffies;
++ ha->flags.eeh_flush = 0;
++
+ ha->flags.eeh_busy = 1;
+ do_cleanup = true;
+ }
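
[Taken together, the qla_os.c hunks add a self-driven EEH fallback: qla_pci_set_eeh_busy() timestamps the error in ha->eeh_jif, and if the PCI core never starts recovery, the driver's one-second timer calls qla_wind_down_chip(), which resets the chip and disables interrupts after ql2xdelay_before_pci_error_handling seconds, then five seconds later clears bus mastering and aborts all outstanding I/O. The two-stage deadline logic, modeled with plain timestamps (illustrative only):]

    #include <time.h>

    enum flush_state { FLUSH_NONE, FLUSH_RDY, FLUSH_DONE };

    struct eeh_ctx {
        time_t error_ts;        /* stands in for ha->eeh_jif */
        enum flush_state flush; /* stands in for ha->flags.eeh_flush */
        unsigned delay_s;       /* ql2xdelay_before_pci_error_handling */
    };

    /* Called periodically (the driver does this from its 1 Hz timer)
     * while eeh_busy is set and the PCI core has not begun recovery. */
    static void wind_down_tick(struct eeh_ctx *c, time_t now)
    {
        if (c->flush == FLUSH_NONE &&
            now >= c->error_ts + (time_t)c->delay_s) {
            /* stage 1: reset the chip and disable interrupts (omitted) */
            c->flush = FLUSH_RDY;
            c->error_ts = now;  /* re-arm the clock for stage 2 */
        } else if (c->flush == FLUSH_RDY && now >= c->error_ts + 5) {
            /* stage 2: clear bus mastering, abort all outstanding I/O */
            c->flush = FLUSH_DONE;
        }
    }
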
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index cb97f625970d0..2b2f682883752 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -981,22 +981,6 @@ void qlt_free_session_done(struct work_struct *work)
+ sess->send_els_logo);
+
+ if (!IS_SW_RESV_ADDR(sess->d_id)) {
+- if (ha->flags.edif_enabled &&
+- (!own || own->iocb.u.isp24.status_subcode == ELS_PLOGI)) {
+- sess->edif.authok = 0;
+- if (!ha->flags.host_shutting_down) {
+- ql_dbg(ql_dbg_edif, vha, 0x911e,
+- "%s wwpn %8phC calling qla2x00_release_all_sadb\n",
+- __func__, sess->port_name);
+- qla2x00_release_all_sadb(vha, sess);
+- } else {
+- ql_dbg(ql_dbg_edif, vha, 0x911e,
+- "%s bypassing release_all_sadb\n",
+- __func__);
+- }
+- qla_edif_clear_appdata(vha, sess);
+- qla_edif_sess_down(vha, sess);
+- }
+ qla2x00_mark_device_lost(vha, sess, 0);
+
+ if (sess->send_els_logo) {
+@@ -1042,6 +1026,25 @@ void qlt_free_session_done(struct work_struct *work)
+ sess->nvme_flag |= NVME_FLAG_DELETING;
+ qla_nvme_unregister_remote_port(sess);
+ }
++
++ if (ha->flags.edif_enabled &&
++ (!own || (own &&
++ own->iocb.u.isp24.status_subcode == ELS_PLOGI))) {
++ sess->edif.authok = 0;
++ if (!ha->flags.host_shutting_down) {
++ ql_dbg(ql_dbg_edif, vha, 0x911e,
++ "%s wwpn %8phC calling qla2x00_release_all_sadb\n",
++ __func__, sess->port_name);
++ qla2x00_release_all_sadb(vha, sess);
++ } else {
++ ql_dbg(ql_dbg_edif, vha, 0x911e,
++ "%s bypassing release_all_sadb\n",
++ __func__);
++ }
++
++ qla_edif_clear_appdata(vha, sess);
++ qla_edif_sess_down(vha, sess);
++ }
+ }
+
+ /*
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 5d21f07456c6d..2a38cd2d24eff 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2264,16 +2264,8 @@ static void iscsi_if_disconnect_bound_ep(struct iscsi_cls_conn *conn,
+ }
+ }
+
+-static int iscsi_if_stop_conn(struct iscsi_transport *transport,
+- struct iscsi_uevent *ev)
++static int iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
+ {
+- int flag = ev->u.stop_conn.flag;
+- struct iscsi_cls_conn *conn;
+-
+- conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
+- if (!conn)
+- return -EINVAL;
+-
+ ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop.\n");
+ /*
+ * If this is a termination we have to call stop_conn with that flag
+@@ -2349,6 +2341,55 @@ static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
+ ISCSI_DBG_TRANS_CONN(conn, "cleanup done.\n");
+ }
+
++static int iscsi_iter_force_destroy_conn_fn(struct device *dev, void *data)
++{
++ struct iscsi_transport *transport;
++ struct iscsi_cls_conn *conn;
++
++ if (!iscsi_is_conn_dev(dev))
++ return 0;
++
++ conn = iscsi_dev_to_conn(dev);
++ transport = conn->transport;
++
++ if (READ_ONCE(conn->state) != ISCSI_CONN_DOWN)
++ iscsi_if_stop_conn(conn, STOP_CONN_TERM);
++
++ transport->destroy_conn(conn);
++ return 0;
++}
++
++/**
++ * iscsi_force_destroy_session - destroy a session from the kernel
++ * @session: session to destroy
++ *
++ * Force the destruction of a session from the kernel. This should only be
++ * used when userspace is no longer running during system shutdown.
++ */
++void iscsi_force_destroy_session(struct iscsi_cls_session *session)
++{
++ struct iscsi_transport *transport = session->transport;
++ unsigned long flags;
++
++ WARN_ON_ONCE(system_state == SYSTEM_RUNNING);
++
++ spin_lock_irqsave(&sesslock, flags);
++ if (list_empty(&session->sess_list)) {
++ spin_unlock_irqrestore(&sesslock, flags);
++ /*
++ * Conn/ep is already freed. Session is being torn down via
++ * async path. For shutdown we don't care about it so return.
++ */
++ return;
++ }
++ spin_unlock_irqrestore(&sesslock, flags);
++
++ device_for_each_child(&session->dev, NULL,
++ iscsi_iter_force_destroy_conn_fn);
++ transport->destroy_session(session);
++}
++EXPORT_SYMBOL_GPL(iscsi_force_destroy_session);
++
+ void iscsi_free_session(struct iscsi_cls_session *session)
+ {
+ ISCSI_DBG_TRANS_SESSION(session, "Freeing session\n");
+@@ -3720,7 +3761,12 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ case ISCSI_UEVENT_DESTROY_CONN:
+ return iscsi_if_destroy_conn(transport, ev);
+ case ISCSI_UEVENT_STOP_CONN:
+- return iscsi_if_stop_conn(transport, ev);
++ conn = iscsi_conn_lookup(ev->u.stop_conn.sid,
++ ev->u.stop_conn.cid);
++ if (!conn)
++ return -EINVAL;
++
++ return iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
+ }
+
+ /*
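
The new iscsi_force_destroy_session() walks the session's child devices with
device_for_each_child(), which calls the given function for every child and
aborts the walk if it returns non-zero; iscsi_iter_force_destroy_conn_fn()
uses that to stop and destroy each connection before the session itself is
torn down. A minimal sketch of the iterator contract, with hypothetical
names:

#include <linux/device.h>

/* Sketch only: count the children of a device.  The callback receives
 * each child plus the opaque pointer passed as 'data'; returning
 * non-zero would stop the iteration early.
 */
static int demo_count_cb(struct device *dev, void *data)
{
	int *count = data;

	(*count)++;
	return 0;	/* keep iterating */
}

static int demo_count_children(struct device *parent)
{
	int count = 0;

	device_for_each_child(parent, &count, demo_count_cb);
	return count;
}
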
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 118c7b4a8af2c..340b050ad28d1 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -195,7 +195,7 @@ static void sg_link_reserve(Sg_fd * sfp, Sg_request * srp, int size);
+ static void sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp);
+ static Sg_fd *sg_add_sfp(Sg_device * sdp);
+ static void sg_remove_sfp(struct kref *);
+-static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id);
++static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy);
+ static Sg_request *sg_add_request(Sg_fd * sfp);
+ static int sg_remove_request(Sg_fd * sfp, Sg_request * srp);
+ static Sg_device *sg_get_dev(int dev);
+@@ -444,6 +444,7 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
+ Sg_fd *sfp;
+ Sg_request *srp;
+ int req_pack_id = -1;
++ bool busy;
+ sg_io_hdr_t *hp;
+ struct sg_header *old_hdr;
+ int retval;
+@@ -466,20 +467,16 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
+ if (retval)
+ return retval;
+
+- srp = sg_get_rq_mark(sfp, req_pack_id);
++ srp = sg_get_rq_mark(sfp, req_pack_id, &busy);
+ if (!srp) { /* now wait on packet to arrive */
+- if (atomic_read(&sdp->detaching))
+- return -ENODEV;
+ if (filp->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+ retval = wait_event_interruptible(sfp->read_wait,
+- (atomic_read(&sdp->detaching) ||
+- (srp = sg_get_rq_mark(sfp, req_pack_id))));
+- if (atomic_read(&sdp->detaching))
+- return -ENODEV;
+- if (retval)
+- /* -ERESTARTSYS as signal hit process */
+- return retval;
++ ((srp = sg_get_rq_mark(sfp, req_pack_id, &busy)) ||
++ (!busy && atomic_read(&sdp->detaching))));
++ if (!srp)
++ /* signal or detaching */
++ return retval ? retval : -ENODEV;
+ }
+ if (srp->header.interface_id != '\0')
+ return sg_new_read(sfp, buf, count, srp);
+@@ -940,9 +937,7 @@ sg_ioctl_common(struct file *filp, Sg_device *sdp, Sg_fd *sfp,
+ if (result < 0)
+ return result;
+ result = wait_event_interruptible(sfp->read_wait,
+- (srp_done(sfp, srp) || atomic_read(&sdp->detaching)));
+- if (atomic_read(&sdp->detaching))
+- return -ENODEV;
++ srp_done(sfp, srp));
+ write_lock_irq(&sfp->rq_list_lock);
+ if (srp->done) {
+ srp->done = 2;
+@@ -2079,19 +2074,28 @@ sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp)
+ }
+
+ static Sg_request *
+-sg_get_rq_mark(Sg_fd * sfp, int pack_id)
++sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy)
+ {
+ Sg_request *resp;
+ unsigned long iflags;
+
++ *busy = false;
+ write_lock_irqsave(&sfp->rq_list_lock, iflags);
+ list_for_each_entry(resp, &sfp->rq_list, entry) {
+- /* look for requests that are ready + not SG_IO owned */
+- if ((1 == resp->done) && (!resp->sg_io_owned) &&
++ /* look for requests that are not SG_IO owned */
++ if ((!resp->sg_io_owned) &&
+ ((-1 == pack_id) || (resp->header.pack_id == pack_id))) {
+- resp->done = 2; /* guard against other readers */
+- write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+- return resp;
++ switch (resp->done) {
++ case 0: /* request active */
++ *busy = true;
++ break;
++ case 1: /* request done; response ready to return */
++ resp->done = 2; /* guard against other readers */
++ write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
++ return resp;
++ case 2: /* response already being returned */
++ break;
++ }
+ }
+ }
+ write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+@@ -2145,6 +2149,15 @@ sg_remove_request(Sg_fd * sfp, Sg_request * srp)
+ res = 1;
+ }
+ write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
++
++ /*
++ * If the device is detaching, wakeup any readers in case we just
++ * removed the last response, which would leave nothing for them to
++ * return other than -ENODEV.
++ */
++ if (unlikely(atomic_read(&sfp->parentdp->detaching)))
++ wake_up_interruptible_all(&sfp->read_wait);
++
+ return res;
+ }
+
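
The sg changes above hinge on the per-request 'done' tri-state: 0 means the
command is still active, 1 means a response is ready to claim, 2 means
another reader is already returning it. By reporting "busy" whenever state 0
is seen, sg_read() can keep waiting on a detaching device until in-flight
commands complete, and returns -ENODEV only once nothing can ever arrive.
A hedged single-request restatement of that logic:

#include <linux/types.h>

enum rq_state { RQ_ACTIVE = 0, RQ_DONE = 1, RQ_CLAIMED = 2 };

/* Sketch only: returns true iff a completed request was claimed;
 * *busy reports whether a command is still in flight, in which case
 * the caller must not yet give up with -ENODEV on detach.
 */
static bool demo_try_claim(enum rq_state *st, bool *busy)
{
	*busy = false;

	switch (*st) {
	case RQ_ACTIVE:
		*busy = true;
		return false;
	case RQ_DONE:
		*st = RQ_CLAIMED;	/* guard against other readers */
		return true;
	case RQ_CLAIMED:
	default:
		return false;
	}
}
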
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 7c0d069a31583..e1fc6f5b96124 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5484,10 +5484,10 @@ static int pqi_raid_submit_scsi_cmd_with_io_request(
+ }
+
+ switch (scmd->sc_data_direction) {
+- case DMA_TO_DEVICE:
++ case DMA_FROM_DEVICE:
+ request->data_direction = SOP_READ_FLAG;
+ break;
+- case DMA_FROM_DEVICE:
++ case DMA_TO_DEVICE:
+ request->data_direction = SOP_WRITE_FLAG;
+ break;
+ case DMA_NONE:
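
The smartpqi hunk swaps two case labels that had the mapping inverted:
enum dma_data_direction names the transfer from the host's point of view, so
DMA_FROM_DEVICE (the device fills a host buffer, i.e. a read) must select
SOP_READ_FLAG, and DMA_TO_DEVICE must select SOP_WRITE_FLAG. A hedged
restatement, assuming the SOP_* flags from smartpqi.h:

#include <linux/dma-direction.h>
#include "smartpqi.h"	/* assumed: provides SOP_READ_FLAG/SOP_WRITE_FLAG */

/* Sketch only: the real driver also handles DMA_NONE and
 * DMA_BIDIRECTIONAL, as visible in the surrounding switch.
 */
static u8 demo_sop_direction(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_FROM_DEVICE:	/* device -> host buffer: a read */
		return SOP_READ_FLAG;
	case DMA_TO_DEVICE:	/* host buffer -> device: a write */
		return SOP_WRITE_FLAG;
	default:
		return 0;	/* placeholder for the remaining cases */
	}
}
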
+diff --git a/drivers/soc/amlogic/meson-mx-socinfo.c b/drivers/soc/amlogic/meson-mx-socinfo.c
+index 78f0f1aeca578..92125dd65f338 100644
+--- a/drivers/soc/amlogic/meson-mx-socinfo.c
++++ b/drivers/soc/amlogic/meson-mx-socinfo.c
+@@ -126,6 +126,7 @@ static int __init meson_mx_socinfo_init(void)
+ np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids);
+ if (np) {
+ analog_top_regmap = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(analog_top_regmap))
+ return PTR_ERR(analog_top_regmap);
+
+diff --git a/drivers/soc/amlogic/meson-secure-pwrc.c b/drivers/soc/amlogic/meson-secure-pwrc.c
+index a10a417a87db8..e935187635267 100644
+--- a/drivers/soc/amlogic/meson-secure-pwrc.c
++++ b/drivers/soc/amlogic/meson-secure-pwrc.c
+@@ -152,8 +152,10 @@ static int meson_secure_pwrc_probe(struct platform_device *pdev)
+ }
+
+ pwrc = devm_kzalloc(&pdev->dev, sizeof(*pwrc), GFP_KERNEL);
+- if (!pwrc)
++ if (!pwrc) {
++ of_node_put(sm_np);
+ return -ENOMEM;
++ }
+
+ pwrc->fw = meson_sm_get(sm_np);
+ of_node_put(sm_np);
+diff --git a/drivers/soc/fsl/guts.c b/drivers/soc/fsl/guts.c
+index 5ed2fc1c53a0e..be18d46c7b0fb 100644
+--- a/drivers/soc/fsl/guts.c
++++ b/drivers/soc/fsl/guts.c
+@@ -140,7 +140,7 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ struct device_node *root, *np = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
+ const struct fsl_soc_die_attr *soc_die;
+- const char *machine;
++ const char *machine = NULL;
+ u32 svr;
+
+ /* Initialize guts */
+diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
+index e718b87354444..4472fb22ba045 100644
+--- a/drivers/soc/qcom/Kconfig
++++ b/drivers/soc/qcom/Kconfig
+@@ -129,6 +129,7 @@ config QCOM_RPMHPD
+
+ config QCOM_RPMPD
+ tristate "Qualcomm RPM Power domain driver"
++ depends on PM
+ depends on QCOM_SMD_RPM
+ help
+ QCOM RPM Power domain driver to support power-domains with
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index 97fd24c178f8d..c92d26b73e6fc 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -194,14 +194,17 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ devnode = of_parse_phandle(dev->of_node, "sram", 0);
+ if (!devnode || !devnode->parent) {
+ dev_err(dev, "Cannot look up sram phandle\n");
++ of_node_put(devnode);
+ return ERR_PTR(-ENODEV);
+ }
+
+ pdev = of_find_device_by_node(devnode->parent);
+ if (!pdev) {
+ dev_err(dev, "Cannot find device node %s\n", devnode->name);
++ of_node_put(devnode);
+ return ERR_PTR(-EPROBE_DEFER);
+ }
++ of_node_put(devnode);
+
+ ocmem = platform_get_drvdata(pdev);
+ if (!ocmem) {
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index a59bb34e5ebaf..18c856056475c 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -399,8 +399,10 @@ static int qmp_cooling_devices_register(struct qmp *qmp)
+ continue;
+ ret = qmp_cooling_device_add(qmp, &qmp->cooling_devs[count++],
+ child);
+- if (ret)
++ if (ret) {
++ of_node_put(child);
+ goto unroll;
++ }
+ }
+
+ if (!count)
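
Several of the soc/ hunks above (meson-mx-socinfo, meson-secure-pwrc, ocmem,
qcom_aoss) fix the same leak: of_parse_phandle(), of_find_matching_node() and
related lookups return a device_node with an elevated refcount, so every exit
path, including error returns and early loop breaks, must drop it with
of_node_put(). A minimal sketch of the rule, with hypothetical names:

#include <linux/of.h>

/* Sketch only: every path that stops using 'np' must balance the
 * implicit of_node_get() taken by of_parse_phandle().
 */
static int demo_lookup(struct device_node *parent)
{
	struct device_node *np;
	int ret = 0;

	np = of_parse_phandle(parent, "some-prop", 0);	/* "some-prop" is made up */
	if (!np)
		return -ENODEV;

	if (!of_device_is_available(np))
		ret = -ENODEV;		/* error path still reaches the put */

	of_node_put(np);
	return ret;
}
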
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index cee579a267a6b..3af195b8583a1 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -328,7 +328,8 @@ static const struct soc_id soc_id[] = {
+ { 455, "QRB5165" },
+ { 457, "SM8450" },
+ { 459, "SM7225" },
+- { 460, "SA8540P" },
++ { 460, "SA8295P" },
++ { 461, "SA8540P" },
+ { 480, "SM8450" },
+ { 482, "SM8450" },
+ { 487, "SC7280" },
+diff --git a/drivers/soc/renesas/r8a779a0-sysc.c b/drivers/soc/renesas/r8a779a0-sysc.c
+index fdfc857df3349..04f1bc322ae7b 100644
+--- a/drivers/soc/renesas/r8a779a0-sysc.c
++++ b/drivers/soc/renesas/r8a779a0-sysc.c
+@@ -57,11 +57,11 @@ static struct rcar_gen4_sysc_area r8a779a0_areas[] __initdata = {
+ { "a2cv6", R8A779A0_PD_A2CV6, R8A779A0_PD_A3IR },
+ { "a2cn2", R8A779A0_PD_A2CN2, R8A779A0_PD_A3IR },
+ { "a2imp23", R8A779A0_PD_A2IMP23, R8A779A0_PD_A3IR },
+- { "a2dp1", R8A779A0_PD_A2DP0, R8A779A0_PD_A3IR },
+- { "a2cv2", R8A779A0_PD_A2CV0, R8A779A0_PD_A3IR },
+- { "a2cv3", R8A779A0_PD_A2CV1, R8A779A0_PD_A3IR },
+- { "a2cv5", R8A779A0_PD_A2CV4, R8A779A0_PD_A3IR },
+- { "a2cv7", R8A779A0_PD_A2CV6, R8A779A0_PD_A3IR },
++ { "a2dp1", R8A779A0_PD_A2DP1, R8A779A0_PD_A3IR },
++ { "a2cv2", R8A779A0_PD_A2CV2, R8A779A0_PD_A3IR },
++ { "a2cv3", R8A779A0_PD_A2CV3, R8A779A0_PD_A3IR },
++ { "a2cv5", R8A779A0_PD_A2CV5, R8A779A0_PD_A3IR },
++ { "a2cv7", R8A779A0_PD_A2CV7, R8A779A0_PD_A3IR },
+ { "a2cn1", R8A779A0_PD_A2CN1, R8A779A0_PD_A3IR },
+ { "a1cnn0", R8A779A0_PD_A1CNN0, R8A779A0_PD_A2CN0 },
+ { "a1cnn2", R8A779A0_PD_A1CNN2, R8A779A0_PD_A2CN2 },
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index a2bfb0434a675..8d4000664fa34 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -7,6 +7,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/soundwire/sdw_registers.h>
+ #include <linux/soundwire/sdw.h>
++#include <linux/soundwire/sdw_type.h>
+ #include "bus.h"
+ #include "sysfs_local.h"
+
+@@ -842,15 +843,21 @@ static int sdw_slave_clk_stop_callback(struct sdw_slave *slave,
+ enum sdw_clk_stop_mode mode,
+ enum sdw_clk_stop_type type)
+ {
+- int ret;
++ int ret = 0;
+
+- if (slave->ops && slave->ops->clk_stop) {
+- ret = slave->ops->clk_stop(slave, mode, type);
+- if (ret < 0)
+- return ret;
++ mutex_lock(&slave->sdw_dev_lock);
++
++ if (slave->probed) {
++ struct device *dev = &slave->dev;
++ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
++
++ if (drv->ops && drv->ops->clk_stop)
++ ret = drv->ops->clk_stop(slave, mode, type);
+ }
+
+- return 0;
++ mutex_unlock(&slave->sdw_dev_lock);
++
++ return ret;
+ }
+
+ static int sdw_slave_clk_stop_prepare(struct sdw_slave *slave,
+@@ -1611,14 +1618,24 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ }
+
+ /* Update the Slave driver */
+- if (slave_notify && slave->ops &&
+- slave->ops->interrupt_callback) {
+- slave_intr.sdca_cascade = sdca_cascade;
+- slave_intr.control_port = clear;
+- memcpy(slave_intr.port, &port_status,
+- sizeof(slave_intr.port));
+-
+- slave->ops->interrupt_callback(slave, &slave_intr);
++ if (slave_notify) {
++ mutex_lock(&slave->sdw_dev_lock);
++
++ if (slave->probed) {
++ struct device *dev = &slave->dev;
++ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
++
++ if (drv->ops && drv->ops->interrupt_callback) {
++ slave_intr.sdca_cascade = sdca_cascade;
++ slave_intr.control_port = clear;
++ memcpy(slave_intr.port, &port_status,
++ sizeof(slave_intr.port));
++
++ drv->ops->interrupt_callback(slave, &slave_intr);
++ }
++ }
++
++ mutex_unlock(&slave->sdw_dev_lock);
+ }
+
+ /* Ack interrupt */
+@@ -1692,29 +1709,21 @@ io_err:
+ static int sdw_update_slave_status(struct sdw_slave *slave,
+ enum sdw_slave_status status)
+ {
+- unsigned long time;
++ int ret = 0;
+
+- if (!slave->probed) {
+- /*
+- * the slave status update is typically handled in an
+- * interrupt thread, which can race with the driver
+- * probe, e.g. when a module needs to be loaded.
+- *
+- * make sure the probe is complete before updating
+- * status.
+- */
+- time = wait_for_completion_timeout(&slave->probe_complete,
+- msecs_to_jiffies(DEFAULT_PROBE_TIMEOUT));
+- if (!time) {
+- dev_err(&slave->dev, "Probe not complete, timed out\n");
+- return -ETIMEDOUT;
+- }
++ mutex_lock(&slave->sdw_dev_lock);
++
++ if (slave->probed) {
++ struct device *dev = &slave->dev;
++ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
++
++ if (drv->ops && drv->ops->update_status)
++ ret = drv->ops->update_status(slave, status);
+ }
+
+- if (!slave->ops || !slave->ops->update_status)
+- return 0;
++ mutex_unlock(&slave->sdw_dev_lock);
+
+- return slave->ops->update_status(slave, status);
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c
+index 893296f3fe395..04b3529f89293 100644
+--- a/drivers/soundwire/bus_type.c
++++ b/drivers/soundwire/bus_type.c
+@@ -98,8 +98,6 @@ static int sdw_drv_probe(struct device *dev)
+ if (!id)
+ return -ENODEV;
+
+- slave->ops = drv->ops;
+-
+ /*
+ * attach to power domain but don't turn on (last arg)
+ */
+@@ -107,19 +105,23 @@ static int sdw_drv_probe(struct device *dev)
+ if (ret)
+ return ret;
+
++ mutex_lock(&slave->sdw_dev_lock);
++
+ ret = drv->probe(slave, id);
+ if (ret) {
+ name = drv->name;
+ if (!name)
+ name = drv->driver.name;
++ mutex_unlock(&slave->sdw_dev_lock);
++
+ dev_err(dev, "Probe of %s failed: %d\n", name, ret);
+ dev_pm_domain_detach(dev, false);
+ return ret;
+ }
+
+ /* device is probed so let's read the properties now */
+- if (slave->ops && slave->ops->read_prop)
+- slave->ops->read_prop(slave);
++ if (drv->ops && drv->ops->read_prop)
++ drv->ops->read_prop(slave);
+
+ /* init the sysfs as we have properties now */
+ ret = sdw_slave_sysfs_init(slave);
+@@ -139,7 +141,19 @@ static int sdw_drv_probe(struct device *dev)
+ slave->prop.clk_stop_timeout);
+
+ slave->probed = true;
+- complete(&slave->probe_complete);
++
++ /*
++	 * If the probe happened after the bus was started, notify the codec
++	 * driver of the current hardware status so it can start initializing.
++	 * Errors are only logged as warnings to avoid failing the probe.
++ */
++ if (drv->ops && drv->ops->update_status) {
++ ret = drv->ops->update_status(slave, slave->status);
++ if (ret < 0)
++ dev_warn(dev, "%s: update_status failed with status %d\n", __func__, ret);
++ }
++
++ mutex_unlock(&slave->sdw_dev_lock);
+
+ dev_dbg(dev, "probe complete\n");
+
+@@ -152,9 +166,15 @@ static int sdw_drv_remove(struct device *dev)
+ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
+ int ret = 0;
+
++ mutex_lock(&slave->sdw_dev_lock);
++
++ slave->probed = false;
++
+ if (drv->remove)
+ ret = drv->remove(slave);
+
++ mutex_unlock(&slave->sdw_dev_lock);
++
+ dev_pm_domain_detach(dev, false);
+
+ return ret;
+@@ -193,12 +213,8 @@ int __sdw_register_driver(struct sdw_driver *drv, struct module *owner)
+
+ drv->driver.owner = owner;
+ drv->driver.probe = sdw_drv_probe;
+-
+- if (drv->remove)
+- drv->driver.remove = sdw_drv_remove;
+-
+- if (drv->shutdown)
+- drv->driver.shutdown = sdw_drv_shutdown;
++ drv->driver.remove = sdw_drv_remove;
++ drv->driver.shutdown = sdw_drv_shutdown;
+
+ return driver_register(&drv->driver);
+ }
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 22b706350ead3..b5ec7726592c8 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -471,6 +471,10 @@ static int qcom_swrm_enumerate(struct sdw_bus *bus)
+ char *buf1 = (char *)&val1, *buf2 = (char *)&val2;
+
+ for (i = 1; i <= SDW_MAX_DEVICES; i++) {
++		/* skip devices whose status is Not Present */
++ if (!ctrl->status[i])
++ continue;
++
+ /*SCP_Devid5 - Devid 4*/
+ ctrl->reg_read(ctrl, SWRM_ENUMERATOR_SLAVE_DEV_ID_1(i), &val1);
+
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index 669d7573320b7..25e76b5d4a1a3 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -12,6 +12,7 @@ static void sdw_slave_release(struct device *dev)
+ {
+ struct sdw_slave *slave = dev_to_sdw_dev(dev);
+
++ mutex_destroy(&slave->sdw_dev_lock);
+ kfree(slave);
+ }
+
+@@ -58,9 +59,9 @@ int sdw_slave_add(struct sdw_bus *bus,
+ init_completion(&slave->enumeration_complete);
+ init_completion(&slave->initialization_complete);
+ slave->dev_num = 0;
+- init_completion(&slave->probe_complete);
+ slave->probed = false;
+ slave->first_interrupt_done = false;
++ mutex_init(&slave->sdw_dev_lock);
+
+ for (i = 0; i < SDW_MAX_PORTS; i++)
+ init_completion(&slave->port_ready[i]);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index d34150559142f..bd502368339e5 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -13,6 +13,7 @@
+ #include <linux/slab.h>
+ #include <linux/soundwire/sdw_registers.h>
+ #include <linux/soundwire/sdw.h>
++#include <linux/soundwire/sdw_type.h>
+ #include <sound/soc.h>
+ #include "bus.h"
+
+@@ -401,20 +402,26 @@ static int sdw_do_port_prep(struct sdw_slave_runtime *s_rt,
+ struct sdw_prepare_ch prep_ch,
+ enum sdw_port_prep_ops cmd)
+ {
+- const struct sdw_slave_ops *ops = s_rt->slave->ops;
+- int ret;
++ int ret = 0;
++ struct sdw_slave *slave = s_rt->slave;
+
+- if (ops->port_prep) {
+- ret = ops->port_prep(s_rt->slave, &prep_ch, cmd);
+- if (ret < 0) {
+- dev_err(&s_rt->slave->dev,
+- "Slave Port Prep cmd %d failed: %d\n",
+- cmd, ret);
+- return ret;
++ mutex_lock(&slave->sdw_dev_lock);
++
++ if (slave->probed) {
++ struct device *dev = &slave->dev;
++ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
++
++ if (drv->ops && drv->ops->port_prep) {
++ ret = drv->ops->port_prep(slave, &prep_ch, cmd);
++ if (ret < 0)
++ dev_err(dev, "Slave Port Prep cmd %d failed: %d\n",
++ cmd, ret);
+ }
+ }
+
+- return 0;
++ mutex_unlock(&slave->sdw_dev_lock);
++
++ return ret;
+ }
+
+ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
+@@ -578,7 +585,7 @@ static int sdw_notify_config(struct sdw_master_runtime *m_rt)
+ struct sdw_slave_runtime *s_rt;
+ struct sdw_bus *bus = m_rt->bus;
+ struct sdw_slave *slave;
+- int ret = 0;
++ int ret;
+
+ if (bus->ops->set_bus_conf) {
+ ret = bus->ops->set_bus_conf(bus, &bus->params);
+@@ -589,17 +596,27 @@ static int sdw_notify_config(struct sdw_master_runtime *m_rt)
+ list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) {
+ slave = s_rt->slave;
+
+- if (slave->ops->bus_config) {
+- ret = slave->ops->bus_config(slave, &bus->params);
+- if (ret < 0) {
+- dev_err(bus->dev, "Notify Slave: %d failed\n",
+- slave->dev_num);
+- return ret;
++ mutex_lock(&slave->sdw_dev_lock);
++
++ if (slave->probed) {
++ struct device *dev = &slave->dev;
++ struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
++
++ if (drv->ops && drv->ops->bus_config) {
++ ret = drv->ops->bus_config(slave, &bus->params);
++ if (ret < 0) {
++ dev_err(dev, "Notify Slave: %d failed\n",
++ slave->dev_num);
++ mutex_unlock(&slave->sdw_dev_lock);
++ return ret;
++ }
+ }
+ }
++
++ mutex_unlock(&slave->sdw_dev_lock);
+ }
+
+- return ret;
++ return 0;
+ }
+
+ /**
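
The soundwire hunks above share one idea: the callbacks formerly cached in
slave->ops are now reached through the bound sdw_driver, and every bus-side
invocation takes slave->sdw_dev_lock and checks slave->probed first. Because
probe sets the flag (and remove clears it) under the same mutex, a callback
can no longer race with driver probe or removal. A minimal sketch of that
guard, with hypothetical names:

#include <linux/mutex.h>

/* Sketch only: 'probed' changes only under 'lock', so holding the
 * lock here pins the driver (and its callback) in place.
 */
struct demo_slave {
	struct mutex lock;
	bool probed;
	int (*cb)(struct demo_slave *s);
};

static int demo_notify(struct demo_slave *s)
{
	int ret = 0;

	mutex_lock(&s->lock);
	if (s->probed && s->cb)
		ret = s->cb(s);
	mutex_unlock(&s->lock);

	return ret;
}
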
+diff --git a/drivers/spi/spi-altera-dfl.c b/drivers/spi/spi-altera-dfl.c
+index ca40923258af3..596e181ae1368 100644
+--- a/drivers/spi/spi-altera-dfl.c
++++ b/drivers/spi/spi-altera-dfl.c
+@@ -128,9 +128,9 @@ static int dfl_spi_altera_probe(struct dfl_device *dfl_dev)
+ struct spi_master *master;
+ struct altera_spi *hw;
+ void __iomem *base;
+- int err = -ENODEV;
++ int err;
+
+- master = spi_alloc_master(dev, sizeof(struct altera_spi));
++ master = devm_spi_alloc_master(dev, sizeof(struct altera_spi));
+ if (!master)
+ return -ENOMEM;
+
+@@ -159,10 +159,9 @@ static int dfl_spi_altera_probe(struct dfl_device *dfl_dev)
+ altera_spi_init_master(master);
+
+ err = devm_spi_register_master(dev, master);
+- if (err) {
+- dev_err(dev, "%s failed to register spi master %d\n", __func__, err);
+- goto exit;
+- }
++ if (err)
++ return dev_err_probe(dev, err, "%s failed to register spi master\n",
++ __func__);
+
+ if (dfl_dev->revision == FME_FEATURE_REV_MAX10_SPI_N5010)
+ strscpy(board_info.modalias, "m10-n5010", SPI_NAME_SIZE);
+@@ -179,9 +178,6 @@ static int dfl_spi_altera_probe(struct dfl_device *dfl_dev)
+ }
+
+ return 0;
+-exit:
+- spi_master_put(master);
+- return err;
+ }
+
+ static const struct dfl_device_id dfl_spi_altera_ids[] = {
+diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
+index d5ee5130601e1..79d853f6d1920 100644
+--- a/drivers/spi/spi-dw.h
++++ b/drivers/spi/spi-dw.h
+@@ -23,7 +23,7 @@
+ ((_dws)->ip == DW_ ## _ip ## _ID)
+
+ #define __dw_spi_ver_cmp(_dws, _ip, _ver, _op) \
+- (dw_spi_ip_is(_dws, _ip) && (_dws)->ver _op DW_ ## _ip ## _ver)
++ (dw_spi_ip_is(_dws, _ip) && (_dws)->ver _op DW_ ## _ip ## _ ## _ver)
+
+ #define dw_spi_ver_is(_dws, _ip, _ver) __dw_spi_ver_cmp(_dws, _ip, _ver, ==)
+
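
The spi-dw.h change fixes a token-pasting bug: ## concatenates without any
separator, so DW_ ## _ip ## _ver glued the version straight onto the IP name
(producing e.g. DW_PSSI103A) instead of the intended DW_PSSI_103A; the added
## _ ## supplies the missing underscore. A standalone illustration with a
made-up constant:

#include <stdio.h>

#define DW_PSSI_103A	0x103a	/* hypothetical value, for the demo only */

#define VER_BROKEN(_ip, _ver)	DW_ ## _ip ## _ver	/* -> DW_PSSI103A */
#define VER_FIXED(_ip, _ver)	DW_ ## _ip ## _ ## _ver	/* -> DW_PSSI_103A */

int main(void)
{
	printf("%#x\n", VER_FIXED(PSSI, 103A));
	/* VER_BROKEN(PSSI, 103A) would reference the undefined DW_PSSI103A */
	return 0;
}
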
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index c26440e9058d7..8fa21afc6a35b 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -1413,7 +1413,7 @@ static const struct s3c64xx_spi_port_config exynos5433_spi_port_config = {
+ .quirks = S3C64XX_SPI_QUIRK_CS_AUTO,
+ };
+
+-static struct s3c64xx_spi_port_config fsd_spi_port_config = {
++static const struct s3c64xx_spi_port_config fsd_spi_port_config = {
+ .fifo_lvl_mask = { 0x7f, 0x7f, 0x7f, 0x7f, 0x7f},
+ .rx_lvl_offset = 15,
+ .tx_st_done = 25,
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index ea706d9629cb1..47cbe73137c23 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -783,6 +783,7 @@ static int __maybe_unused synquacer_spi_resume(struct device *dev)
+
+ ret = synquacer_spi_enable(master);
+ if (ret) {
++ clk_disable_unprepare(sspi->clk);
+ dev_err(dev, "failed to enable spi (%d)\n", ret);
+ return ret;
+ }
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 38360434d6e9e..148043d0c2b84 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1136,7 +1136,7 @@ exit_free_master:
+
+ static int tegra_slink_remove(struct platform_device *pdev)
+ {
+- struct spi_master *master = platform_get_drvdata(pdev);
++ struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct tegra_slink_data *tspi = spi_master_get_devdata(master);
+
+ spi_unregister_master(master);
+@@ -1151,6 +1151,7 @@ static int tegra_slink_remove(struct platform_device *pdev)
+ if (tspi->rx_dma_chan)
+ tegra_slink_deinit_dma_param(tspi, true);
+
++ spi_master_put(master);
+ return 0;
+ }
+
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index ea09d1b42bf63..2c616024f7c02 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2398,7 +2398,7 @@ static int acpi_spi_add_resource(struct acpi_resource *ares, void *data)
+
+ ctlr = acpi_spi_find_controller_by_adev(adev);
+ if (!ctlr)
+- return -ENODEV;
++ return -EPROBE_DEFER;
+
+ lookup->ctlr = ctlr;
+ }
+@@ -3050,9 +3050,9 @@ free_bus_id:
+ }
+ EXPORT_SYMBOL_GPL(spi_register_controller);
+
+-static void devm_spi_unregister(void *ctlr)
++static void devm_spi_unregister(struct device *dev, void *res)
+ {
+- spi_unregister_controller(ctlr);
++ spi_unregister_controller(*(struct spi_controller **)res);
+ }
+
+ /**
+@@ -3071,13 +3071,22 @@ static void devm_spi_unregister(void *ctlr)
+ int devm_spi_register_controller(struct device *dev,
+ struct spi_controller *ctlr)
+ {
++ struct spi_controller **ptr;
+ int ret;
+
++ ptr = devres_alloc(devm_spi_unregister, sizeof(*ptr), GFP_KERNEL);
++ if (!ptr)
++ return -ENOMEM;
++
+ ret = spi_register_controller(ctlr);
+- if (ret)
+- return ret;
++ if (!ret) {
++ *ptr = ctlr;
++ devres_add(dev, ptr);
++ } else {
++ devres_free(ptr);
++ }
+
+- return devm_add_action_or_reset(dev, devm_spi_unregister, ctlr);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(devm_spi_register_controller);
+
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index 60b2278d8b160..ebf4e8ce4de99 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -655,7 +655,6 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ fbdefio->delay = HZ / fps;
+ fbdefio->sort_pagereflist = true;
+ fbdefio->deferred_io = fbtft_deferred_io;
+- fb_deferred_io_init(info);
+
+ snprintf(info->fix.id, sizeof(info->fix.id), "%s", dev->driver->name);
+ info->fix.type = FB_TYPE_PACKED_PIXELS;
+@@ -666,6 +665,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ info->fix.line_length = width * bpp / 8;
+ info->fix.accel = FB_ACCEL_NONE;
+ info->fix.smem_len = vmem_size;
++ fb_deferred_io_init(info);
+
+ info->var.rotate = pdata->rotate;
+ info->var.xres = width;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+index 97d5a528969b8..0da0b69a46375 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+@@ -901,9 +901,9 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
+ int err;
+ unsigned long irqflags;
+ struct ia_css_frame *frame = NULL;
+- struct atomisp_s3a_buf *s3a_buf = NULL, *_s3a_buf_tmp;
+- struct atomisp_dis_buf *dis_buf = NULL, *_dis_buf_tmp;
+- struct atomisp_metadata_buf *md_buf = NULL, *_md_buf_tmp;
++ struct atomisp_s3a_buf *s3a_buf = NULL, *_s3a_buf_tmp, *s3a_iter;
++ struct atomisp_dis_buf *dis_buf = NULL, *_dis_buf_tmp, *dis_iter;
++ struct atomisp_metadata_buf *md_buf = NULL, *_md_buf_tmp, *md_iter;
+ enum atomisp_metadata_type md_type;
+ struct atomisp_device *isp = asd->isp;
+ struct v4l2_control ctrl;
+@@ -942,60 +942,75 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
+
+ switch (buf_type) {
+ case IA_CSS_BUFFER_TYPE_3A_STATISTICS:
+- list_for_each_entry_safe(s3a_buf, _s3a_buf_tmp,
++ list_for_each_entry_safe(s3a_iter, _s3a_buf_tmp,
+ &asd->s3a_stats_in_css, list) {
+- if (s3a_buf->s3a_data ==
++ if (s3a_iter->s3a_data ==
+ buffer.css_buffer.data.stats_3a) {
+- list_del_init(&s3a_buf->list);
+- list_add_tail(&s3a_buf->list,
++ list_del_init(&s3a_iter->list);
++ list_add_tail(&s3a_iter->list,
+ &asd->s3a_stats_ready);
++ s3a_buf = s3a_iter;
+ break;
+ }
+ }
+
+ asd->s3a_bufs_in_css[css_pipe_id]--;
+ atomisp_3a_stats_ready_event(asd, buffer.css_buffer.exp_id);
+- dev_dbg(isp->dev, "%s: s3a stat with exp_id %d is ready\n",
+- __func__, s3a_buf->s3a_data->exp_id);
++ if (s3a_buf)
++ dev_dbg(isp->dev, "%s: s3a stat with exp_id %d is ready\n",
++ __func__, s3a_buf->s3a_data->exp_id);
++ else
++ dev_dbg(isp->dev, "%s: s3a stat is ready with no exp_id found\n",
++ __func__);
+ break;
+ case IA_CSS_BUFFER_TYPE_METADATA:
+ if (error)
+ break;
+
+ md_type = atomisp_get_metadata_type(asd, css_pipe_id);
+- list_for_each_entry_safe(md_buf, _md_buf_tmp,
++ list_for_each_entry_safe(md_iter, _md_buf_tmp,
+ &asd->metadata_in_css[md_type], list) {
+- if (md_buf->metadata ==
++ if (md_iter->metadata ==
+ buffer.css_buffer.data.metadata) {
+- list_del_init(&md_buf->list);
+- list_add_tail(&md_buf->list,
++ list_del_init(&md_iter->list);
++ list_add_tail(&md_iter->list,
+ &asd->metadata_ready[md_type]);
++ md_buf = md_iter;
+ break;
+ }
+ }
+ asd->metadata_bufs_in_css[stream_id][css_pipe_id]--;
+ atomisp_metadata_ready_event(asd, md_type);
+- dev_dbg(isp->dev, "%s: metadata with exp_id %d is ready\n",
+- __func__, md_buf->metadata->exp_id);
++ if (md_buf)
++ dev_dbg(isp->dev, "%s: metadata with exp_id %d is ready\n",
++ __func__, md_buf->metadata->exp_id);
++ else
++ dev_dbg(isp->dev, "%s: metadata is ready with no exp_id found\n",
++ __func__);
+ break;
+ case IA_CSS_BUFFER_TYPE_DIS_STATISTICS:
+- list_for_each_entry_safe(dis_buf, _dis_buf_tmp,
++ list_for_each_entry_safe(dis_iter, _dis_buf_tmp,
+ &asd->dis_stats_in_css, list) {
+- if (dis_buf->dis_data ==
++ if (dis_iter->dis_data ==
+ buffer.css_buffer.data.stats_dvs) {
+ spin_lock_irqsave(&asd->dis_stats_lock,
+ irqflags);
+- list_del_init(&dis_buf->list);
+- list_add(&dis_buf->list, &asd->dis_stats);
++ list_del_init(&dis_iter->list);
++ list_add(&dis_iter->list, &asd->dis_stats);
+ asd->params.dis_proj_data_valid = true;
+ spin_unlock_irqrestore(&asd->dis_stats_lock,
+ irqflags);
++ dis_buf = dis_iter;
+ break;
+ }
+ }
+ asd->dis_bufs_in_css--;
+- dev_dbg(isp->dev, "%s: dis stat with exp_id %d is ready\n",
+- __func__, dis_buf->dis_data->exp_id);
++ if (dis_buf)
++ dev_dbg(isp->dev, "%s: dis stat with exp_id %d is ready\n",
++ __func__, dis_buf->dis_data->exp_id);
++ else
++ dev_dbg(isp->dev, "%s: dis stat is ready with no exp_id found\n",
++ __func__);
+ break;
+ case IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME:
+ case IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME:
+diff --git a/drivers/staging/media/atomisp/pci/runtime/rmgr/src/rmgr_vbuf.c b/drivers/staging/media/atomisp/pci/runtime/rmgr/src/rmgr_vbuf.c
+index 39604752785bd..d96aaa4bc75d6 100644
+--- a/drivers/staging/media/atomisp/pci/runtime/rmgr/src/rmgr_vbuf.c
++++ b/drivers/staging/media/atomisp/pci/runtime/rmgr/src/rmgr_vbuf.c
+@@ -254,7 +254,7 @@ void rmgr_pop_handle(struct ia_css_rmgr_vbuf_pool *pool,
+ void ia_css_rmgr_acq_vbuf(struct ia_css_rmgr_vbuf_pool *pool,
+ struct ia_css_rmgr_vbuf_handle **handle)
+ {
+- struct ia_css_rmgr_vbuf_handle h = { 0 };
++ struct ia_css_rmgr_vbuf_handle h;
+
+ if ((!pool) || (!handle) || (!*handle)) {
+ IA_CSS_LOG("Invalid inputs");
+@@ -272,7 +272,7 @@ void ia_css_rmgr_acq_vbuf(struct ia_css_rmgr_vbuf_pool *pool,
+ h.size = (*handle)->size;
+ /* release ref to current buffer */
+ ia_css_rmgr_refcount_release_vbuf(handle);
+- **handle = h;
++ *handle = &h;
+ }
+ /* get new buffer for needed size */
+ if ((*handle)->vptr == 0x0) {
+diff --git a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+index 5df6f08e26f52..d28653d04d20f 100644
+--- a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
++++ b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+@@ -390,11 +390,10 @@ static int set_ref(struct hantro_ctx *ctx)
+ !!(pps->flags & V4L2_HEVC_PPS_FLAG_LOOP_FILTER_ACROSS_TILES_ENABLED));
+
+ /*
+- * Write POC count diff from current pic. For frame decoding only compute
+- * pic_order_cnt[0] and ignore pic_order_cnt[1] used in field-coding.
++ * Write POC count diff from current pic.
+ */
+ for (i = 0; i < decode_params->num_active_dpb_entries && i < ARRAY_SIZE(cur_poc); i++) {
+- char poc_diff = decode_params->pic_order_cnt_val - dpb[i].pic_order_cnt[0];
++ char poc_diff = decode_params->pic_order_cnt_val - dpb[i].pic_order_cnt_val;
+
+ hantro_reg_write(vpu, &cur_poc[i], poc_diff);
+ }
+@@ -421,7 +420,7 @@ static int set_ref(struct hantro_ctx *ctx)
+ dpb_longterm_e = 0;
+ for (i = 0; i < decode_params->num_active_dpb_entries &&
+ i < (V4L2_HEVC_DPB_ENTRIES_NUM_MAX - 1); i++) {
+- luma_addr = hantro_hevc_get_ref_buf(ctx, dpb[i].pic_order_cnt[0]);
++ luma_addr = hantro_hevc_get_ref_buf(ctx, dpb[i].pic_order_cnt_val);
+ if (!luma_addr)
+ return -ENOMEM;
+
+diff --git a/drivers/staging/media/hantro/hantro_g2_regs.h b/drivers/staging/media/hantro/hantro_g2_regs.h
+index 877d663a81813..82606783591a2 100644
+--- a/drivers/staging/media/hantro/hantro_g2_regs.h
++++ b/drivers/staging/media/hantro/hantro_g2_regs.h
+@@ -107,7 +107,7 @@
+
+ #define g2_start_code_e G2_DEC_REG(10, 31, 0x1)
+ #define g2_init_qp_old G2_DEC_REG(10, 25, 0x3f)
+-#define g2_init_qp G2_DEC_REG(10, 24, 0x3f)
++#define g2_init_qp G2_DEC_REG(10, 24, 0x7f)
+ #define g2_num_tile_cols_old G2_DEC_REG(10, 20, 0x1f)
+ #define g2_num_tile_cols G2_DEC_REG(10, 19, 0x1f)
+ #define g2_num_tile_rows_old G2_DEC_REG(10, 15, 0x1f)
+diff --git a/drivers/staging/media/hantro/hantro_hevc.c b/drivers/staging/media/hantro/hantro_hevc.c
+index f86c98e191776..df1f81952bba1 100644
+--- a/drivers/staging/media/hantro/hantro_hevc.c
++++ b/drivers/staging/media/hantro/hantro_hevc.c
+@@ -33,7 +33,7 @@ void hantro_hevc_ref_init(struct hantro_ctx *ctx)
+ }
+
+ dma_addr_t hantro_hevc_get_ref_buf(struct hantro_ctx *ctx,
+- int poc)
++ s32 poc)
+ {
+ struct hantro_hevc_dec_hw_ctx *hevc_dec = &ctx->hevc_dec;
+ int i;
+@@ -154,6 +154,25 @@ err_free_tile_buffers:
+ return -ENOMEM;
+ }
+
++static int hantro_hevc_validate_sps(struct hantro_ctx *ctx, const struct v4l2_ctrl_hevc_sps *sps)
++{
++ /*
++	 * For the tile pixel format, check whether the width and height
++	 * match the hardware constraints.
++ */
++ if (ctx->vpu_dst_fmt->fourcc == V4L2_PIX_FMT_NV12_4L4) {
++ if (ctx->dst_fmt.width !=
++ ALIGN(sps->pic_width_in_luma_samples, ctx->vpu_dst_fmt->frmsize.step_width))
++ return -EINVAL;
++
++ if (ctx->dst_fmt.height !=
++ ALIGN(sps->pic_height_in_luma_samples, ctx->vpu_dst_fmt->frmsize.step_height))
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
+ int hantro_hevc_dec_prepare_run(struct hantro_ctx *ctx)
+ {
+ struct hantro_hevc_dec_hw_ctx *hevc_ctx = &ctx->hevc_dec;
+@@ -177,6 +196,10 @@ int hantro_hevc_dec_prepare_run(struct hantro_ctx *ctx)
+ if (WARN_ON(!ctrls->sps))
+ return -EINVAL;
+
++ ret = hantro_hevc_validate_sps(ctx, ctrls->sps);
++ if (ret)
++ return ret;
++
+ ctrls->pps =
+ hantro_get_ctrl(ctx, V4L2_CID_MPEG_VIDEO_HEVC_PPS);
+ if (WARN_ON(!ctrls->pps))
+diff --git a/drivers/staging/media/hantro/hantro_hw.h b/drivers/staging/media/hantro/hantro_hw.h
+index 52a960f6fa4a6..77769d2bb38e5 100644
+--- a/drivers/staging/media/hantro/hantro_hw.h
++++ b/drivers/staging/media/hantro/hantro_hw.h
+@@ -18,9 +18,21 @@
+ #define DEC_8190_ALIGN_MASK 0x07U
+
+ #define MB_DIM 16
++#define TILE_MB_DIM 4
+ #define MB_WIDTH(w) DIV_ROUND_UP(w, MB_DIM)
+ #define MB_HEIGHT(h) DIV_ROUND_UP(h, MB_DIM)
+
++#define FMT_MIN_WIDTH 48
++#define FMT_MIN_HEIGHT 48
++#define FMT_HD_WIDTH 1280
++#define FMT_HD_HEIGHT 720
++#define FMT_FHD_WIDTH 1920
++#define FMT_FHD_HEIGHT 1088
++#define FMT_UHD_WIDTH 3840
++#define FMT_UHD_HEIGHT 2160
++#define FMT_4K_WIDTH 4096
++#define FMT_4K_HEIGHT 2304
++
+ #define NUM_REF_PICTURES (V4L2_HEVC_DPB_ENTRIES_NUM_MAX + 1)
+
+ struct hantro_dev;
+@@ -133,7 +145,7 @@ struct hantro_hevc_dec_hw_ctx {
+ struct hantro_aux_buf tile_bsd;
+ struct hantro_aux_buf ref_bufs[NUM_REF_PICTURES];
+ struct hantro_aux_buf scaling_lists;
+- int ref_bufs_poc[NUM_REF_PICTURES];
++ s32 ref_bufs_poc[NUM_REF_PICTURES];
+ u32 ref_bufs_used;
+ struct hantro_hevc_dec_ctrls ctrls;
+ unsigned int num_tile_cols_allocated;
+@@ -345,9 +357,10 @@ void hantro_hevc_dec_exit(struct hantro_ctx *ctx);
+ int hantro_g2_hevc_dec_run(struct hantro_ctx *ctx);
+ int hantro_hevc_dec_prepare_run(struct hantro_ctx *ctx);
+ void hantro_hevc_ref_init(struct hantro_ctx *ctx);
+-dma_addr_t hantro_hevc_get_ref_buf(struct hantro_ctx *ctx, int poc);
++dma_addr_t hantro_hevc_get_ref_buf(struct hantro_ctx *ctx, s32 poc);
+ int hantro_hevc_add_ref_buf(struct hantro_ctx *ctx, int poc, dma_addr_t addr);
+
++
+ static inline unsigned short hantro_vp9_num_sbs(unsigned short dimension)
+ {
+ return (dimension + 63) / 64;
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 22ad182ee972c..29cc61d53b71a 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -259,7 +259,7 @@ static int hantro_try_fmt(const struct hantro_ctx *ctx,
+ } else if (ctx->is_encoder) {
+ vpu_fmt = ctx->vpu_dst_fmt;
+ } else {
+- vpu_fmt = ctx->vpu_src_fmt;
++ vpu_fmt = fmt;
+ /*
+ * Width/height on the CAPTURE end of a decoder are ignored and
+ * replaced by the OUTPUT ones.
+diff --git a/drivers/staging/media/hantro/imx8m_vpu_hw.c b/drivers/staging/media/hantro/imx8m_vpu_hw.c
+index 9802508bade27..77f574fdfa77b 100644
+--- a/drivers/staging/media/hantro/imx8m_vpu_hw.c
++++ b/drivers/staging/media/hantro/imx8m_vpu_hw.c
+@@ -83,6 +83,14 @@ static const struct hantro_fmt imx8m_vpu_postproc_fmts[] = {
+ .fourcc = V4L2_PIX_FMT_YUYV,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ };
+
+@@ -90,17 +98,25 @@ static const struct hantro_fmt imx8m_vpu_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_MPEG2_SLICE,
+ .codec_mode = HANTRO_MODE_MPEG2_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -109,11 +125,11 @@ static const struct hantro_fmt imx8m_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP8_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -122,11 +138,11 @@ static const struct hantro_fmt imx8m_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_H264_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -137,6 +153,14 @@ static const struct hantro_fmt imx8m_vpu_g2_postproc_fmts[] = {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ };
+
+@@ -144,18 +168,26 @@ static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12_4L4,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = TILE_MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = TILE_MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_HEVC_SLICE,
+ .codec_mode = HANTRO_MODE_HEVC_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
+- .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
+- .step_height = MB_DIM,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = TILE_MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = TILE_MB_DIM,
+ },
+ },
+ {
+@@ -163,12 +195,12 @@ static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP9_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
+- .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
+- .step_height = MB_DIM,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = TILE_MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = TILE_MB_DIM,
+ },
+ },
+ };
+diff --git a/drivers/staging/media/hantro/rockchip_vpu_hw.c b/drivers/staging/media/hantro/rockchip_vpu_hw.c
+index fc96501f3bc87..26e16b5a6a703 100644
+--- a/drivers/staging/media/hantro/rockchip_vpu_hw.c
++++ b/drivers/staging/media/hantro/rockchip_vpu_hw.c
+@@ -63,6 +63,14 @@ static const struct hantro_fmt rockchip_vpu1_postproc_fmts[] = {
+ .fourcc = V4L2_PIX_FMT_YUYV,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ };
+
+@@ -70,17 +78,25 @@ static const struct hantro_fmt rk3066_vpu_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_H264_SLICE,
+ .codec_mode = HANTRO_MODE_H264_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -89,11 +105,11 @@ static const struct hantro_fmt rk3066_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_MPEG2_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -102,11 +118,11 @@ static const struct hantro_fmt rk3066_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP8_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -116,17 +132,25 @@ static const struct hantro_fmt rk3288_vpu_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_4K_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_4K_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_H264_SLICE,
+ .codec_mode = HANTRO_MODE_H264_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 4096,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_4K_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2304,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_4K_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -135,11 +159,11 @@ static const struct hantro_fmt rk3288_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_MPEG2_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -148,31 +172,80 @@ static const struct hantro_fmt rk3288_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP8_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+ };
+
+-static const struct hantro_fmt rk3399_vpu_dec_fmts[] = {
++static const struct hantro_fmt rockchip_vdpu2_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_H264_SLICE,
+ .codec_mode = HANTRO_MODE_H264_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
++ },
++ {
++ .fourcc = V4L2_PIX_FMT_MPEG2_SLICE,
++ .codec_mode = HANTRO_MODE_MPEG2_DEC,
++ .max_depth = 2,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
++ },
++ {
++ .fourcc = V4L2_PIX_FMT_VP8_FRAME,
++ .codec_mode = HANTRO_MODE_VP8_DEC,
++ .max_depth = 2,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = MB_DIM,
++ },
++ },
++};
++
++static const struct hantro_fmt rk3399_vpu_dec_fmts[] = {
++ {
++ .fourcc = V4L2_PIX_FMT_NV12,
++ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -181,11 +254,11 @@ static const struct hantro_fmt rk3399_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_MPEG2_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1920,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_FHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 1088,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_FHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -194,11 +267,11 @@ static const struct hantro_fmt rk3399_vpu_dec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP8_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 2160,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -516,8 +589,8 @@ const struct hantro_variant rk3288_vpu_variant = {
+
+ const struct hantro_variant rk3328_vpu_variant = {
+ .dec_offset = 0x400,
+- .dec_fmts = rk3399_vpu_dec_fmts,
+- .num_dec_fmts = ARRAY_SIZE(rk3399_vpu_dec_fmts),
++ .dec_fmts = rockchip_vdpu2_dec_fmts,
++ .num_dec_fmts = ARRAY_SIZE(rockchip_vdpu2_dec_fmts),
+ .codec = HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER |
+ HANTRO_H264_DECODER,
+ .codec_ops = rk3399_vpu_codec_ops,
+@@ -528,6 +601,11 @@ const struct hantro_variant rk3328_vpu_variant = {
+ .num_clocks = ARRAY_SIZE(rockchip_vpu_clk_names),
+ };
+
++/*
++ * H.264 decoding is explicitly disabled on the RK3399.
++ * This ensures userspace applications use the Rockchip VDEC core,
++ * which has better performance.
++ */
+ const struct hantro_variant rk3399_vpu_variant = {
+ .enc_offset = 0x0,
+ .enc_fmts = rockchip_vpu_enc_fmts,
+@@ -547,8 +625,8 @@ const struct hantro_variant rk3399_vpu_variant = {
+
+ const struct hantro_variant rk3568_vpu_variant = {
+ .dec_offset = 0x400,
+- .dec_fmts = rk3399_vpu_dec_fmts,
+- .num_dec_fmts = ARRAY_SIZE(rk3399_vpu_dec_fmts),
++ .dec_fmts = rockchip_vdpu2_dec_fmts,
++ .num_dec_fmts = ARRAY_SIZE(rockchip_vdpu2_dec_fmts),
+ .codec = HANTRO_MPEG2_DECODER |
+ HANTRO_VP8_DECODER | HANTRO_H264_DECODER,
+ .codec_ops = rk3399_vpu_codec_ops,
+@@ -564,8 +642,8 @@ const struct hantro_variant px30_vpu_variant = {
+ .enc_fmts = rockchip_vpu_enc_fmts,
+ .num_enc_fmts = ARRAY_SIZE(rockchip_vpu_enc_fmts),
+ .dec_offset = 0x400,
+- .dec_fmts = rk3399_vpu_dec_fmts,
+- .num_dec_fmts = ARRAY_SIZE(rk3399_vpu_dec_fmts),
++ .dec_fmts = rockchip_vdpu2_dec_fmts,
++ .num_dec_fmts = ARRAY_SIZE(rockchip_vdpu2_dec_fmts),
+ .codec = HANTRO_JPEG_ENCODER | HANTRO_MPEG2_DECODER |
+ HANTRO_VP8_DECODER | HANTRO_H264_DECODER,
+ .codec_ops = rk3399_vpu_codec_ops,
+diff --git a/drivers/staging/media/hantro/sama5d4_vdec_hw.c b/drivers/staging/media/hantro/sama5d4_vdec_hw.c
+index b2fc1c5613e19..b205e2db5b04d 100644
+--- a/drivers/staging/media/hantro/sama5d4_vdec_hw.c
++++ b/drivers/staging/media/hantro/sama5d4_vdec_hw.c
+@@ -16,6 +16,14 @@ static const struct hantro_fmt sama5d4_vdec_postproc_fmts[] = {
+ .fourcc = V4L2_PIX_FMT_YUYV,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_HD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_HD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ };
+
+@@ -23,17 +31,25 @@ static const struct hantro_fmt sama5d4_vdec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_HD_WIDTH,
++ .step_width = MB_DIM,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_HD_HEIGHT,
++ .step_height = MB_DIM,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_MPEG2_SLICE,
+ .codec_mode = HANTRO_MODE_MPEG2_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1280,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_HD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 720,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_HD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -42,11 +58,11 @@ static const struct hantro_fmt sama5d4_vdec_fmts[] = {
+ .codec_mode = HANTRO_MODE_VP8_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1280,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_HD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 720,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_HD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+@@ -55,11 +71,11 @@ static const struct hantro_fmt sama5d4_vdec_fmts[] = {
+ .codec_mode = HANTRO_MODE_H264_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 1280,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_HD_WIDTH,
+ .step_width = MB_DIM,
+- .min_height = 48,
+- .max_height = 720,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_HD_HEIGHT,
+ .step_height = MB_DIM,
+ },
+ },
+diff --git a/drivers/staging/media/hantro/sunxi_vpu_hw.c b/drivers/staging/media/hantro/sunxi_vpu_hw.c
+index c0edd5856a0c8..fbeac81e59e13 100644
+--- a/drivers/staging/media/hantro/sunxi_vpu_hw.c
++++ b/drivers/staging/media/hantro/sunxi_vpu_hw.c
+@@ -14,6 +14,14 @@ static const struct hantro_fmt sunxi_vpu_postproc_fmts[] = {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .codec_mode = HANTRO_MODE_NONE,
+ .postprocessed = true,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = 32,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = 32,
++ },
+ },
+ };
+
+@@ -21,17 +29,25 @@ static const struct hantro_fmt sunxi_vpu_dec_fmts[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12_4L4,
+ .codec_mode = HANTRO_MODE_NONE,
++ .frmsize = {
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
++ .step_width = 32,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
++ .step_height = 32,
++ },
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_VP9_FRAME,
+ .codec_mode = HANTRO_MODE_VP9_DEC,
+ .max_depth = 2,
+ .frmsize = {
+- .min_width = 48,
+- .max_width = 3840,
++ .min_width = FMT_MIN_WIDTH,
++ .max_width = FMT_UHD_WIDTH,
+ .step_width = 32,
+- .min_height = 48,
+- .max_height = 2160,
++ .min_height = FMT_MIN_HEIGHT,
++ .max_height = FMT_UHD_HEIGHT,
+ .step_height = 32,
+ },
+ },
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 44f385be9f6c6..04419381ea56b 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -143,10 +143,13 @@ static void cedrus_h265_frame_info_write_dpb(struct cedrus_ctx *ctx,
+ for (i = 0; i < num_active_dpb_entries; i++) {
+ int buffer_index = vb2_find_timestamp(vq, dpb[i].timestamp, 0);
+ u32 pic_order_cnt[2] = {
+- dpb[i].pic_order_cnt[0],
+- dpb[i].pic_order_cnt[1]
++ dpb[i].pic_order_cnt_val,
++ dpb[i].pic_order_cnt_val
+ };
+
++ if (buffer_index < 0)
++ continue;
++
+ cedrus_h265_frame_info_write_single(ctx, i, dpb[i].field_pic,
+ pic_order_cnt,
+ buffer_index);
+@@ -301,6 +304,31 @@ static void cedrus_h265_write_scaling_list(struct cedrus_ctx *ctx,
+ }
+ }
+
++static int cedrus_h265_is_low_delay(struct cedrus_run *run)
++{
++ const struct v4l2_ctrl_hevc_slice_params *slice_params;
++ const struct v4l2_hevc_dpb_entry *dpb;
++ s32 poc;
++ int i;
++
++ slice_params = run->h265.slice_params;
++ poc = run->h265.decode_params->pic_order_cnt_val;
++ dpb = run->h265.decode_params->dpb;
++
++ for (i = 0; i < slice_params->num_ref_idx_l0_active_minus1 + 1; i++)
++ if (dpb[slice_params->ref_idx_l0[i]].pic_order_cnt_val > poc)
++ return 1;
++
++ if (slice_params->slice_type != V4L2_HEVC_SLICE_TYPE_B)
++ return 0;
++
++ for (i = 0; i < slice_params->num_ref_idx_l1_active_minus1 + 1; i++)
++ if (dpb[slice_params->ref_idx_l1[i]].pic_order_cnt_val > poc)
++ return 1;
++
++ return 0;
++}
++
+ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
+ struct cedrus_run *run)
+ {
+@@ -559,7 +587,6 @@ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
+
+ reg = VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_TC_OFFSET_DIV2(slice_params->slice_tc_offset_div2) |
+ VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_BETA_OFFSET_DIV2(slice_params->slice_beta_offset_div2) |
+- VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_POC_BIGEST_IN_RPS_ST(decode_params->num_poc_st_curr_after == 0) |
+ VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_CR_QP_OFFSET(slice_params->slice_cr_qp_offset) |
+ VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_CB_QP_OFFSET(slice_params->slice_cb_qp_offset) |
+ VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_QP_DELTA(slice_params->slice_qp_delta);
+@@ -572,6 +599,9 @@ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
+ V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED,
+ slice_params->flags);
+
++ if (slice_params->slice_type != V4L2_HEVC_SLICE_TYPE_I && !cedrus_h265_is_low_delay(run))
++ reg |= VE_DEC_H265_DEC_SLICE_HDR_INFO1_FLAG_SLICE_NOT_LOW_DELAY;
++
+ cedrus_write(dev, VE_DEC_H265_DEC_SLICE_HDR_INFO1, reg);
+
+ chroma_log2_weight_denom = pred_weight_table->luma_log2_weight_denom +
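
cedrus_h265_is_low_delay() above scans both active reference lists and
returns 1 as soon as any entry's picture order count exceeds the current
picture's; the setup code then sets the NOT_LOW_DELAY hardware flag for
non-I slices when the scan returns 0. A minimal restatement of the scan
itself, with hypothetical types rather than the driver's DPB structures:

#include <linux/types.h>

/* Sketch only: does any reference POC lie after the current POC? */
static bool demo_any_ref_after(const s32 *ref_poc, unsigned int n, s32 cur_poc)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (ref_poc[i] > cur_poc)
			return true;

	return false;
}
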
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+index bdb062ad86823..d81f7513ade0d 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+@@ -377,13 +377,12 @@
+
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_FLAG_SLICE_DEBLOCKING_FILTER_DISABLED BIT(23)
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED BIT(22)
++#define VE_DEC_H265_DEC_SLICE_HDR_INFO1_FLAG_SLICE_NOT_LOW_DELAY BIT(21)
+
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_TC_OFFSET_DIV2(v) \
+ SHIFT_AND_MASK_BITS(v, 31, 28)
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_BETA_OFFSET_DIV2(v) \
+ SHIFT_AND_MASK_BITS(v, 27, 24)
+-#define VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_POC_BIGEST_IN_RPS_ST(v) \
+- ((v) ? BIT(21) : 0)
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_CR_QP_OFFSET(v) \
+ SHIFT_AND_MASK_BITS(v, 20, 16)
+ #define VE_DEC_H265_DEC_SLICE_HDR_INFO1_SLICE_CB_QP_OFFSET(v) \
+diff --git a/drivers/staging/rtl8192u/r8192U.h b/drivers/staging/rtl8192u/r8192U.h
+index 14ca00a2789b0..1942cb8493748 100644
+--- a/drivers/staging/rtl8192u/r8192U.h
++++ b/drivers/staging/rtl8192u/r8192U.h
+@@ -1013,7 +1013,7 @@ typedef struct r8192_priv {
+ bool bis_any_nonbepkts;
+ bool bcurrent_turbo_EDCA;
+ bool bis_cur_rdlstate;
+- struct timer_list fsync_timer;
++ struct delayed_work fsync_work;
+ bool bfsync_processing; /* 500ms Fsync timer is active or not */
+ u32 rate_record;
+ u32 rateCountDiffRecord;
+diff --git a/drivers/staging/rtl8192u/r8192U_dm.c b/drivers/staging/rtl8192u/r8192U_dm.c
+index 725bf5ca9e34d..0fcfcaa6500bf 100644
+--- a/drivers/staging/rtl8192u/r8192U_dm.c
++++ b/drivers/staging/rtl8192u/r8192U_dm.c
+@@ -2578,19 +2578,20 @@ static void dm_init_fsync(struct net_device *dev)
+ priv->ieee80211->fsync_seconddiff_ratethreshold = 200;
+ priv->ieee80211->fsync_state = Default_Fsync;
+ priv->framesyncMonitor = 1; /* current default 0xc38 monitor on */
+- timer_setup(&priv->fsync_timer, dm_fsync_timer_callback, 0);
++ INIT_DELAYED_WORK(&priv->fsync_work, dm_fsync_work_callback);
+ }
+
+ static void dm_deInit_fsync(struct net_device *dev)
+ {
+ struct r8192_priv *priv = ieee80211_priv(dev);
+
+- del_timer_sync(&priv->fsync_timer);
++ cancel_delayed_work_sync(&priv->fsync_work);
+ }
+
+-void dm_fsync_timer_callback(struct timer_list *t)
++void dm_fsync_work_callback(struct work_struct *work)
+ {
+- struct r8192_priv *priv = from_timer(priv, t, fsync_timer);
++ struct r8192_priv *priv =
++ container_of(work, struct r8192_priv, fsync_work.work);
+ struct net_device *dev = priv->ieee80211->dev;
+ u32 rate_index, rate_count = 0, rate_count_diff = 0;
+ bool bSwitchFromCountDiff = false;
+@@ -2657,17 +2658,16 @@ void dm_fsync_timer_callback(struct timer_list *t)
+ }
+ }
+ if (bDoubleTimeInterval) {
+- if (timer_pending(&priv->fsync_timer))
+- del_timer_sync(&priv->fsync_timer);
+- priv->fsync_timer.expires = jiffies +
+- msecs_to_jiffies(priv->ieee80211->fsync_time_interval*priv->ieee80211->fsync_multiple_timeinterval);
+- add_timer(&priv->fsync_timer);
++ cancel_delayed_work_sync(&priv->fsync_work);
++ schedule_delayed_work(&priv->fsync_work,
++ msecs_to_jiffies(priv
++ ->ieee80211->fsync_time_interval *
++ priv->ieee80211->fsync_multiple_timeinterval));
+ } else {
+- if (timer_pending(&priv->fsync_timer))
+- del_timer_sync(&priv->fsync_timer);
+- priv->fsync_timer.expires = jiffies +
+- msecs_to_jiffies(priv->ieee80211->fsync_time_interval);
+- add_timer(&priv->fsync_timer);
++ cancel_delayed_work_sync(&priv->fsync_work);
++ schedule_delayed_work(&priv->fsync_work,
++ msecs_to_jiffies(priv
++ ->ieee80211->fsync_time_interval));
+ }
+ } else {
+ /* Let Register return to default value; */
+@@ -2695,7 +2695,7 @@ static void dm_EndSWFsync(struct net_device *dev)
+ struct r8192_priv *priv = ieee80211_priv(dev);
+
+ RT_TRACE(COMP_HALDM, "%s\n", __func__);
+- del_timer_sync(&(priv->fsync_timer));
++ cancel_delayed_work_sync(&priv->fsync_work);
+
+ /* Let Register return to default value; */
+ if (priv->bswitch_fsync) {
+@@ -2736,11 +2736,9 @@ static void dm_StartSWFsync(struct net_device *dev)
+ if (priv->ieee80211->fsync_rate_bitmap & rateBitmap)
+ priv->rate_record += priv->stats.received_rate_histogram[1][rateIndex];
+ }
+- if (timer_pending(&priv->fsync_timer))
+- del_timer_sync(&priv->fsync_timer);
+- priv->fsync_timer.expires = jiffies +
+- msecs_to_jiffies(priv->ieee80211->fsync_time_interval);
+- add_timer(&priv->fsync_timer);
++ cancel_delayed_work_sync(&priv->fsync_work);
++ schedule_delayed_work(&priv->fsync_work,
++ msecs_to_jiffies(priv->ieee80211->fsync_time_interval));
+
+ write_nic_dword(dev, rOFDM0_RxDetector2, 0x465c12cd);
+ }
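The conversion above follows the usual timer_list to delayed_work recipe: the work callback may sleep, re-arms itself with schedule_delayed_work(), and is torn down with cancel_delayed_work_sync(). A skeleton of that pattern, with invented names, not the driver's actual code:

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct demo {
	struct delayed_work work;
	unsigned int interval_ms;
};

static void demo_work_fn(struct work_struct *work)
{
	struct demo *d = container_of(work, struct demo, work.work);

	/* periodic job goes here; unlike a timer callback, it may sleep */
	schedule_delayed_work(&d->work, msecs_to_jiffies(d->interval_ms));
}

static void demo_start(struct demo *d)
{
	d->interval_ms = 500;	/* mirrors the 500ms Fsync period */
	INIT_DELAYED_WORK(&d->work, demo_work_fn);
	schedule_delayed_work(&d->work, msecs_to_jiffies(d->interval_ms));
}

static void demo_stop(struct demo *d)
{
	cancel_delayed_work_sync(&d->work);	/* waits for a running callback */
}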
+diff --git a/drivers/staging/rtl8192u/r8192U_dm.h b/drivers/staging/rtl8192u/r8192U_dm.h
+index 0b2a1c688597c..2159018b4e38f 100644
+--- a/drivers/staging/rtl8192u/r8192U_dm.h
++++ b/drivers/staging/rtl8192u/r8192U_dm.h
+@@ -166,7 +166,7 @@ void dm_force_tx_fw_info(struct net_device *dev,
+ void dm_init_edca_turbo(struct net_device *dev);
+ void dm_rf_operation_test_callback(unsigned long data);
+ void dm_rf_pathcheck_workitemcallback(struct work_struct *work);
+-void dm_fsync_timer_callback(struct timer_list *t);
++void dm_fsync_work_callback(struct work_struct *work);
+ void dm_cck_txpower_adjust(struct net_device *dev, bool binch14);
+ void dm_shadow_init(struct net_device *dev);
+ void dm_initialize_txpower_tracking(struct net_device *dev);
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+index 43b5604c0bcad..349aa3c4b6686 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+@@ -2086,6 +2086,7 @@ static u8 rtw_get_chan_type(struct adapter *adapter)
+ }
+
+ static int cfg80211_rtw_get_channel(struct wiphy *wiphy, struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ struct adapter *adapter = wiphy_to_adapter(wiphy);
+@@ -2446,7 +2447,8 @@ static int cfg80211_rtw_change_beacon(struct wiphy *wiphy, struct net_device *nd
+ return rtw_add_beacon(adapter, info->head, info->head_len, info->tail, info->tail_len);
+ }
+
+-static int cfg80211_rtw_stop_ap(struct wiphy *wiphy, struct net_device *ndev)
++static int cfg80211_rtw_stop_ap(struct wiphy *wiphy, struct net_device *ndev,
++ unsigned int link_id)
+ {
+ return 0;
+ }
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index b8151d95a8068..dc19e7c80751a 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -21,6 +21,7 @@
+ #include <linux/pm_qos.h>
+ #include <linux/slab.h>
+ #include <linux/thermal.h>
++#include <linux/units.h>
+
+ #include <trace/events/thermal.h>
+
+@@ -101,6 +102,7 @@ static unsigned long get_level(struct cpufreq_cooling_device *cpufreq_cdev,
+ static u32 cpu_freq_to_power(struct cpufreq_cooling_device *cpufreq_cdev,
+ u32 freq)
+ {
++ unsigned long power_mw;
+ int i;
+
+ for (i = cpufreq_cdev->max_level - 1; i >= 0; i--) {
+@@ -108,16 +110,23 @@ static u32 cpu_freq_to_power(struct cpufreq_cooling_device *cpufreq_cdev,
+ break;
+ }
+
+- return cpufreq_cdev->em->table[i + 1].power;
++ power_mw = cpufreq_cdev->em->table[i + 1].power;
++ power_mw /= MICROWATT_PER_MILLIWATT;
++
++ return power_mw;
+ }
+
+ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
+ u32 power)
+ {
++ unsigned long em_power_mw;
+ int i;
+
+ for (i = cpufreq_cdev->max_level; i > 0; i--) {
+- if (power >= cpufreq_cdev->em->table[i].power)
++ /* Convert EM power to milli-Watts to make safe comparison */
++ em_power_mw = cpufreq_cdev->em->table[i].power;
++ em_power_mw /= MICROWATT_PER_MILLIWATT;
++ if (power >= em_power_mw)
+ break;
+ }
+
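Both hunks above exist because the energy-model table holds microwatts while the thermal framework reasons in milliwatts, so comparing the two directly would be off by a factor of 1000. Illustrative numbers only:

#include <stdio.h>

#define MICROWATT_PER_MILLIWATT 1000UL

int main(void)
{
	unsigned long em_power_uw = 1250000UL;	/* 1.25 W, as the EM stores it */
	unsigned long power_mw = em_power_uw / MICROWATT_PER_MILLIWATT;

	printf("%lu uW -> %lu mW\n", em_power_uw, power_mw);
	return 0;
}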
+diff --git a/drivers/thermal/devfreq_cooling.c b/drivers/thermal/devfreq_cooling.c
+index 8c76f9655e577..8d1260f65061e 100644
+--- a/drivers/thermal/devfreq_cooling.c
++++ b/drivers/thermal/devfreq_cooling.c
+@@ -200,7 +200,11 @@ static int devfreq_cooling_get_requested_power(struct thermal_cooling_device *cd
+ res = dfc->power_ops->get_real_power(df, power, freq, voltage);
+ if (!res) {
+ state = dfc->capped_state;
++
++ /* Convert EM power into milli-Watts first */
+ dfc->res_util = dfc->em_pd->table[state].power;
++ dfc->res_util /= MICROWATT_PER_MILLIWATT;
++
+ dfc->res_util *= SCALE_ERROR_MITIGATION;
+
+ if (*power > 1)
+@@ -218,8 +222,10 @@ static int devfreq_cooling_get_requested_power(struct thermal_cooling_device *cd
+
+ _normalize_load(&status);
+
+- /* Scale power for utilization */
++ /* Convert EM power into milli-Watts first */
+ *power = dfc->em_pd->table[perf_idx].power;
++ *power /= MICROWATT_PER_MILLIWATT;
++ /* Scale power for utilization */
+ *power *= status.busy_time;
+ *power >>= 10;
+ }
+@@ -244,6 +250,7 @@ static int devfreq_cooling_state2power(struct thermal_cooling_device *cdev,
+
+ perf_idx = dfc->max_state - state;
+ *power = dfc->em_pd->table[perf_idx].power;
++ *power /= MICROWATT_PER_MILLIWATT;
+
+ return 0;
+ }
+@@ -254,7 +261,7 @@ static int devfreq_cooling_power2state(struct thermal_cooling_device *cdev,
+ struct devfreq_cooling_device *dfc = cdev->devdata;
+ struct devfreq *df = dfc->devfreq;
+ struct devfreq_dev_status status;
+- unsigned long freq;
++ unsigned long freq, em_power_mw;
+ s32 est_power;
+ int i;
+
+@@ -279,9 +286,13 @@ static int devfreq_cooling_power2state(struct thermal_cooling_device *cdev,
+ * Find the first cooling state that is within the power
+ * budget. The EM power table is sorted ascending.
+ */
+- for (i = dfc->max_state; i > 0; i--)
+- if (est_power >= dfc->em_pd->table[i].power)
++ for (i = dfc->max_state; i > 0; i--) {
++ /* Convert EM power to milli-Watts to make safe comparison */
++ em_power_mw = dfc->em_pd->table[i].power;
++ em_power_mw /= MICROWATT_PER_MILLIWATT;
++ if (est_power >= em_power_mw)
+ break;
++ }
+
+ *state = dfc->max_state - i;
+ dfc->capped_state = *state;
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 1c4aac8464a70..1e5a78131aba9 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -813,12 +813,13 @@ static const struct attribute_group cooling_device_stats_attr_group = {
+
+ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+ {
++ const struct attribute_group *stats_attr_group = NULL;
+ struct cooling_dev_stats *stats;
+ unsigned long states;
+ int var;
+
+ if (cdev->ops->get_max_state(cdev, &states))
+- return;
++ goto out;
+
+ states++; /* Total number of states is highest state + 1 */
+
+@@ -828,7 +829,7 @@ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+
+ stats = kzalloc(var, GFP_KERNEL);
+ if (!stats)
+- return;
++ goto out;
+
+ stats->time_in_state = (ktime_t *)(stats + 1);
+ stats->trans_table = (unsigned int *)(stats->time_in_state + states);
+@@ -838,9 +839,12 @@ static void cooling_device_stats_setup(struct thermal_cooling_device *cdev)
+
+ spin_lock_init(&stats->lock);
+
++ stats_attr_group = &cooling_device_stats_attr_group;
++
++out:
+ /* Fill the empty slot left in cooling_device_attr_groups */
+ var = ARRAY_SIZE(cooling_device_attr_groups) - 2;
+- cooling_device_attr_groups[var] = &cooling_device_stats_attr_group;
++ cooling_device_attr_groups[var] = stats_attr_group;
+ }
+
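The slot arithmetic relies on attribute group arrays being NULL-terminated: one empty slot sits just before the terminator, and leaving it NULL when stats setup fails simply ends the list early. A sketch of that shape, with placeholder strings standing in for the real attribute groups:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *attr_groups[] = {
	"cooling-device-group",
	NULL,	/* empty slot reserved for the optional stats group */
	NULL,	/* terminator */
};

int main(void)
{
	int stats_ok = 0;	/* pretend get_max_state() or kzalloc() failed */

	attr_groups[ARRAY_SIZE(attr_groups) - 2] =
		stats_ok ? "stats-group" : NULL;

	for (int i = 0; attr_groups[i]; i++)	/* registration stops at NULL */
		printf("registering %s\n", attr_groups[i]);
	return 0;
}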
+ static void cooling_device_stats_destroy(struct thermal_cooling_device *cdev)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fd4d24f61c46b..caa5c14ed57f0 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -5,6 +5,14 @@
+ *
+ * * THIS IS A DEVELOPMENT SNAPSHOT IT IS NOT A FINAL RELEASE *
+ *
++ * Outgoing path:
++ * tty -> DLCI fifo -> scheduler -> GSM MUX data queue ---o-> ldisc
++ * control message -> GSM MUX control queue --´
++ *
++ * Incoming path:
++ * ldisc -> gsm_queue() -o--> tty
++ * `-> gsm_control_response()
++ *
+ * TO DO:
+ * Mostly done: ioctls for setting modes/timing
+ * Partly done: hooks so you can pull off frames to non tty devs
+@@ -210,6 +218,9 @@ struct gsm_mux {
+ /* Events on the GSM channel */
+ wait_queue_head_t event;
+
++ /* ldisc send work */
++ struct work_struct tx_work;
++
+ /* Bits for GSM mode decoding */
+
+ /* Framing Layer */
+@@ -235,14 +246,17 @@ struct gsm_mux {
+ struct gsm_dlci *dlci[NUM_DLCI];
+ int old_c_iflag; /* termios c_iflag value before attach */
+ bool constipated; /* Asked by remote to shut up */
++ bool has_devices; /* Devices were registered */
+
+ spinlock_t tx_lock;
+ unsigned int tx_bytes; /* TX data outstanding */
+ #define TX_THRESH_HI 8192
+ #define TX_THRESH_LO 2048
+- struct list_head tx_list; /* Pending data packets */
++ struct list_head tx_ctrl_list; /* Pending control packets */
++ struct list_head tx_data_list; /* Pending data packets */
+
+ /* Control messages */
++ struct timer_list kick_timer; /* Kick TX queuing on timeout */
+ struct timer_list t2_timer; /* Retransmit timer for commands */
+ int cretries; /* Command retry counter */
+ struct gsm_control *pending_cmd;/* Our current pending command */
+@@ -369,6 +383,11 @@ static const u8 gsm_fcs8[256] = {
+
+ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len);
+ static int gsm_modem_update(struct gsm_dlci *dlci, u8 brk);
++static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
++ u8 ctrl);
++static int gsm_send_packet(struct gsm_mux *gsm, struct gsm_msg *msg);
++static void gsmld_write_trigger(struct gsm_mux *gsm);
++static void gsmld_write_task(struct work_struct *work);
+
+ /**
+ * gsm_fcs_add - update FCS
+@@ -419,6 +438,27 @@ static int gsm_read_ea(unsigned int *val, u8 c)
+ return c & EA;
+ }
+
++/**
++ * gsm_read_ea_val - read a value until EA
++ * @val: variable holding value
++ * @data: buffer of data
++ * @dlen: length of data
++ *
++ * Processes an EA value. Updates the passed variable and
++ * returns the processed data length.
++ */
++static unsigned int gsm_read_ea_val(unsigned int *val, const u8 *data, int dlen)
++{
++ unsigned int len = 0;
++
++ for (; dlen > 0; dlen--) {
++ len++;
++ if (gsm_read_ea(val, *data++))
++ break;
++ }
++ return len;
++}
++
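For reference, the EA encoding consumed here packs seven value bits per octet and flags the final octet with a set EA bit (bit 0). A stand-alone sketch with invented sample bytes:

#include <stdio.h>

#define EA 0x01

static int read_ea(unsigned int *val, unsigned char c)
{
	*val <<= 7;		/* make room for the next 7 bits */
	*val |= c >> 1;
	return c & EA;		/* non-zero on the final octet */
}

int main(void)
{
	unsigned char data[] = { 0x02, 0x81 };	/* sample two-octet value */
	unsigned int val = 0, used = 0, i;

	for (i = 0; i < sizeof(data); i++) {
		used++;
		if (read_ea(&val, data[i]))
			break;
	}
	printf("decoded 0x%x from %u octet(s)\n", val, used);	/* 0xc0, 2 */
	return 0;
}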
+ /**
+ * gsm_encode_modem - encode modem data bits
+ * @dlci: DLCI to encode from
+@@ -463,6 +503,68 @@ static void gsm_hex_dump_bytes(const char *fname, const u8 *data,
+ kfree(prefix);
+ }
+
++/**
++ * gsm_register_devices - register all tty devices for a given mux index
++ *
++ * @driver: the tty driver that describes the tty devices
++ * @index: the mux number is used to calculate the minor numbers of the
++ * ttys for this mux and may differ from the position in the
++ * mux array.
++ */
++static int gsm_register_devices(struct tty_driver *driver, unsigned int index)
++{
++ struct device *dev;
++ int i;
++ unsigned int base;
++
++ if (!driver || index >= MAX_MUX)
++ return -EINVAL;
++
++ base = index * NUM_DLCI; /* first minor for this index */
++ for (i = 1; i < NUM_DLCI; i++) {
++ /* Don't register device 0 - this is the control channel
++ * and not a usable tty interface
++ */
++ dev = tty_register_device(gsm_tty_driver, base + i, NULL);
++ if (IS_ERR(dev)) {
++ if (debug & 8)
++ pr_info("%s failed to register device minor %u",
++ __func__, base + i);
++ for (i--; i >= 1; i--)
++ tty_unregister_device(gsm_tty_driver, base + i);
++ return PTR_ERR(dev);
++ }
++ }
++
++ return 0;
++}
++
++/**
++ * gsm_unregister_devices - unregister all tty devices for a given mux index
++ *
++ * @driver: the tty driver that describes the tty devices
++ * @index: the mux number is used to calculate the minor numbers of the
++ * ttys for this mux and may differ from the position in the
++ * mux array.
++ */
++static void gsm_unregister_devices(struct tty_driver *driver,
++ unsigned int index)
++{
++ int i;
++ unsigned int base;
++
++ if (!driver || index >= MAX_MUX)
++ return;
++
++ base = index * NUM_DLCI; /* first minor for this index */
++ for (i = 1; i < NUM_DLCI; i++) {
++ /* Don't unregister device 0 - this is the control
++ * channel and not a usable tty interface
++ */
++ tty_unregister_device(gsm_tty_driver, base + i);
++ }
++}
++
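A quick check of the minor numbering shared by both helpers, assuming the driver's NUM_DLCI of 64; DLCI 0 is the control channel and never gets a tty device:

#include <stdio.h>

#define NUM_DLCI 64	/* assumed from the driver */

int main(void)
{
	unsigned int index = 2;			/* example mux number */
	unsigned int base = index * NUM_DLCI;	/* first minor for this mux */

	printf("mux %u: tty minors %u..%u\n",
	       index, base + 1, base + NUM_DLCI - 1);
	return 0;
}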
+ /**
+ * gsm_print_packet - display a frame for debug
+ * @hdr: header to print before decode
+@@ -570,57 +672,73 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
+ * @cr: command/response bit seen as initiator
+ * @control: control byte including PF bit
+ *
+- * Format up and transmit a control frame. These do not go via the
+- * queueing logic as they should be transmitted ahead of data when
+- * they are needed.
+- *
+- * FIXME: Lock versus data TX path
++ * Format up and transmit a control frame. These should be transmitted
++ * ahead of data when they are needed.
+ */
+-
+-static void gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
++static int gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+ {
+- int len;
+- u8 cbuf[10];
+- u8 ibuf[3];
++ struct gsm_msg *msg;
++ u8 *dp;
+ int ocr;
++ unsigned long flags;
++
++ msg = gsm_data_alloc(gsm, addr, 0, control);
++ if (!msg)
++ return -ENOMEM;
+
+ /* toggle C/R coding if not initiator */
+ ocr = cr ^ (gsm->initiator ? 0 : 1);
+
+- switch (gsm->encoding) {
+- case 0:
+- cbuf[0] = GSM0_SOF;
+- cbuf[1] = (addr << 2) | (ocr << 1) | EA;
+- cbuf[2] = control;
+- cbuf[3] = EA; /* Length of data = 0 */
+- cbuf[4] = 0xFF - gsm_fcs_add_block(INIT_FCS, cbuf + 1, 3);
+- cbuf[5] = GSM0_SOF;
+- len = 6;
+- break;
+- case 1:
+- case 2:
+- /* Control frame + packing (but not frame stuffing) in mode 1 */
+- ibuf[0] = (addr << 2) | (ocr << 1) | EA;
+- ibuf[1] = control;
+- ibuf[2] = 0xFF - gsm_fcs_add_block(INIT_FCS, ibuf, 2);
+- /* Stuffing may double the size worst case */
+- len = gsm_stuff_frame(ibuf, cbuf + 1, 3);
+- /* Now add the SOF markers */
+- cbuf[0] = GSM1_SOF;
+- cbuf[len + 1] = GSM1_SOF;
+- /* FIXME: we can omit the lead one in many cases */
+- len += 2;
+- break;
+- default:
+- WARN_ON(1);
+- return;
+- }
+- gsmld_output(gsm, cbuf, len);
+- if (!gsm->initiator) {
+- cr = cr & gsm->initiator;
+- control = control & ~PF;
++ msg->data -= 3;
++ dp = msg->data;
++ *dp++ = (addr << 2) | (ocr << 1) | EA;
++ *dp++ = control;
++
++ if (gsm->encoding == 0)
++ *dp++ = EA; /* Length of data = 0 */
++
++ *dp = 0xFF - gsm_fcs_add_block(INIT_FCS, msg->data, dp - msg->data);
++ msg->len = (dp - msg->data) + 1;
++
++ gsm_print_packet("Q->", addr, cr, control, NULL, 0);
++
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ list_add_tail(&msg->list, &gsm->tx_ctrl_list);
++ gsm->tx_bytes += msg->len;
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ gsmld_write_trigger(gsm);
++
++ return 0;
++}
++
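For a basic-option (encoding 0) mux, the frame queued above ends up on the wire as SOF, address, control, length, FCS, SOF. A hedged stand-alone encoder for an SABM on DLCI 0; the bitwise loop is assumed to be equivalent to the driver's table-driven TS 27.010 FCS:

#include <stdio.h>

#define GSM0_SOF 0xf9
#define EA   0x01
#define PF   0x10
#define SABM 0x2f

/* Bitwise CRC-8 step (reflected poly 0xe0), assumed to match gsm_fcs8[] */
static unsigned char fcs_add(unsigned char fcs, unsigned char c)
{
	int i;

	fcs ^= c;
	for (i = 0; i < 8; i++)
		fcs = (fcs & 1) ? (fcs >> 1) ^ 0xe0 : fcs >> 1;
	return fcs;
}

int main(void)
{
	unsigned char frame[6] = { GSM0_SOF };
	unsigned char fcs = 0xff;		/* INIT_FCS */
	int i;

	frame[1] = (0 << 2) | (1 << 1) | EA;	/* DLCI 0, C/R as initiator */
	frame[2] = SABM | PF;
	frame[3] = EA;				/* zero-length data field */
	for (i = 1; i <= 3; i++)
		fcs = fcs_add(fcs, frame[i]);
	frame[4] = 0xff - fcs;			/* transmitted FCS */
	frame[5] = GSM0_SOF;

	for (i = 0; i < 6; i++)
		printf("%02x ", frame[i]);
	printf("\n");
	return 0;
}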
++/**
++ * gsm_dlci_clear_queues - remove outstanding data for a DLCI
++ * @gsm: mux
++ * @dlci: clear for this DLCI
++ *
++ * Clears the data queues for a given DLCI.
++ */
++static void gsm_dlci_clear_queues(struct gsm_mux *gsm, struct gsm_dlci *dlci)
++{
++ struct gsm_msg *msg, *nmsg;
++ int addr = dlci->addr;
++ unsigned long flags;
++
++ /* Clear DLCI write fifo first */
++ spin_lock_irqsave(&dlci->lock, flags);
++ kfifo_reset(&dlci->fifo);
++ spin_unlock_irqrestore(&dlci->lock, flags);
++
++ /* Clear data packets in MUX write queue */
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ list_for_each_entry_safe(msg, nmsg, &gsm->tx_data_list, list) {
++ if (msg->addr != addr)
++ continue;
++ gsm->tx_bytes -= msg->len;
++ list_del(&msg->list);
++ kfree(msg);
+ }
+- gsm_print_packet("-->", addr, cr, control, NULL, 0);
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ }
+
+ /**
+@@ -683,59 +801,151 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+ }
+
+ /**
+- * gsm_data_kick - poke the queue
++ * gsm_send_packet - sends a single packet
+ * @gsm: GSM Mux
+- * @dlci: DLCI sending the data
++ * @msg: packet to send
+ *
+- * The tty device has called us to indicate that room has appeared in
+- * the transmit queue. Ram more data into the pipe if we have any
+- * If we have been flow-stopped by a CMD_FCOFF, then we can only
+- * send messages on DLCI0 until CMD_FCON
++ * The given packet is encoded and sent out. No memory is freed.
++ * The caller must hold the gsm tx lock.
++ */
++static int gsm_send_packet(struct gsm_mux *gsm, struct gsm_msg *msg)
++{
++ int len, ret;
++
++
++ if (gsm->encoding == 0) {
++ gsm->txframe[0] = GSM0_SOF;
++ memcpy(gsm->txframe + 1, msg->data, msg->len);
++ gsm->txframe[msg->len + 1] = GSM0_SOF;
++ len = msg->len + 2;
++ } else {
++ gsm->txframe[0] = GSM1_SOF;
++ len = gsm_stuff_frame(msg->data, gsm->txframe + 1, msg->len);
++ gsm->txframe[len + 1] = GSM1_SOF;
++ len += 2;
++ }
++
++ if (debug & 4)
++ gsm_hex_dump_bytes(__func__, gsm->txframe, len);
++ gsm_print_packet("-->", msg->addr, gsm->initiator, msg->ctrl, msg->data,
++ msg->len);
++
++ ret = gsmld_output(gsm, gsm->txframe, len);
++ if (ret <= 0)
++ return ret;
++ /* FIXME: Can eliminate one SOF in many more cases */
++ gsm->tx_bytes -= msg->len;
++
++ return 0;
++}
++
++/**
++ * gsm_is_flow_ctrl_msg - checks if flow control message
++ * @msg: message to check
+ *
+- * FIXME: lock against link layer control transmissions
++ * Returns true if the given message is a flow control command of the
++ * control channel. False is returned in any other case.
+ */
++static bool gsm_is_flow_ctrl_msg(struct gsm_msg *msg)
++{
++ unsigned int cmd;
++
++ if (msg->addr > 0)
++ return false;
++
++ switch (msg->ctrl & ~PF) {
++ case UI:
++ case UIH:
++ cmd = 0;
++ if (gsm_read_ea_val(&cmd, msg->data + 2, msg->len - 2) < 1)
++ break;
++ switch (cmd & ~PF) {
++ case CMD_FCOFF:
++ case CMD_FCON:
++ return true;
++ }
++ break;
++ }
++
++ return false;
++}
+
+-static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
++/**
++ * gsm_data_kick - poke the queue
++ * @gsm: GSM Mux
++ *
++ * The tty device has called us to indicate that room has appeared in
++ * the transmit queue. Ram more data into the pipe if we have any.
++ * If we have been flow-stopped by a CMD_FCOFF, then we can only
++ * send messages on DLCI0 until CMD_FCON. The caller must hold
++ * the gsm tx lock.
++ */
++static int gsm_data_kick(struct gsm_mux *gsm)
+ {
+ struct gsm_msg *msg, *nmsg;
+- int len;
++ struct gsm_dlci *dlci;
++ int ret;
+
+- list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+- if (gsm->constipated && msg->addr)
+- continue;
+- if (gsm->encoding != 0) {
+- gsm->txframe[0] = GSM1_SOF;
+- len = gsm_stuff_frame(msg->data,
+- gsm->txframe + 1, msg->len);
+- gsm->txframe[len + 1] = GSM1_SOF;
+- len += 2;
+- } else {
+- gsm->txframe[0] = GSM0_SOF;
+- memcpy(gsm->txframe + 1 , msg->data, msg->len);
+- gsm->txframe[msg->len + 1] = GSM0_SOF;
+- len = msg->len + 2;
+- }
++ clear_bit(TTY_DO_WRITE_WAKEUP, &gsm->tty->flags);
+
+- if (debug & 4)
+- gsm_hex_dump_bytes(__func__, gsm->txframe, len);
+- if (gsmld_output(gsm, gsm->txframe, len) <= 0)
++ /* Serialize control messages and control channel messages first */
++ list_for_each_entry_safe(msg, nmsg, &gsm->tx_ctrl_list, list) {
++ if (gsm->constipated && !gsm_is_flow_ctrl_msg(msg))
++ continue;
++ ret = gsm_send_packet(gsm, msg);
++ switch (ret) {
++ case -ENOSPC:
++ return -ENOSPC;
++ case -ENODEV:
++ /* ldisc not open */
++ gsm->tx_bytes -= msg->len;
++ list_del(&msg->list);
++ kfree(msg);
++ continue;
++ default:
++ if (ret >= 0) {
++ list_del(&msg->list);
++ kfree(msg);
++ }
+ break;
+- /* FIXME: Can eliminate one SOF in many more cases */
+- gsm->tx_bytes -= msg->len;
+-
+- list_del(&msg->list);
+- kfree(msg);
++ }
++ }
+
+- if (dlci) {
+- tty_port_tty_wakeup(&dlci->port);
+- } else {
+- int i = 0;
++ if (gsm->constipated)
++ return -EAGAIN;
+
+- for (i = 0; i < NUM_DLCI; i++)
+- if (gsm->dlci[i])
+- tty_port_tty_wakeup(&gsm->dlci[i]->port);
++ /* Serialize other channels */
++ if (list_empty(&gsm->tx_data_list))
++ return 0;
++ list_for_each_entry_safe(msg, nmsg, &gsm->tx_data_list, list) {
++ dlci = gsm->dlci[msg->addr];
++ /* Send only messages for DLCIs with valid state */
++ if (dlci->state != DLCI_OPEN) {
++ gsm->tx_bytes -= msg->len;
++ list_del(&msg->list);
++ kfree(msg);
++ continue;
++ }
++ ret = gsm_send_packet(gsm, msg);
++ switch (ret) {
++ case -ENOSPC:
++ return -ENOSPC;
++ case -ENODEV:
++ /* ldisc not open */
++ gsm->tx_bytes -= msg->len;
++ list_del(&msg->list);
++ kfree(msg);
++ continue;
++ default:
++ if (ret >= 0) {
++ list_del(&msg->list);
++ kfree(msg);
++ }
++ break;
+ }
+ }
++
++ return 1;
+ }
+
+ /**
+@@ -784,9 +994,22 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ msg->data = dp;
+
+ /* Add to the actual output queue */
+- list_add_tail(&msg->list, &gsm->tx_list);
++ switch (msg->ctrl & ~PF) {
++ case UI:
++ case UIH:
++ if (msg->addr > 0) {
++ list_add_tail(&msg->list, &gsm->tx_data_list);
++ break;
++ }
++ fallthrough;
++ default:
++ list_add_tail(&msg->list, &gsm->tx_ctrl_list);
++ break;
++ }
+ gsm->tx_bytes += msg->len;
+- gsm_data_kick(gsm, dlci);
++
++ gsmld_write_trigger(gsm);
++ mod_timer(&gsm->kick_timer, jiffies + 10 * gsm->t1 * HZ / 100);
+ }
+
+ /**
+@@ -823,41 +1046,48 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ {
+ struct gsm_msg *msg;
+ u8 *dp;
+- int len, total_size, size;
+- int h = dlci->adaption - 1;
++ int h, len, size;
+
+- total_size = 0;
+- while (1) {
+- len = kfifo_len(&dlci->fifo);
+- if (len == 0)
+- return total_size;
+-
+- /* MTU/MRU count only the data bits */
+- if (len > gsm->mtu)
+- len = gsm->mtu;
+-
+- size = len + h;
+-
+- msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
+- /* FIXME: need a timer or something to kick this so it can't
+- get stuck with no work outstanding and no buffer free */
+- if (msg == NULL)
+- return -ENOMEM;
+- dp = msg->data;
+- switch (dlci->adaption) {
+- case 1: /* Unstructured */
+- break;
+- case 2: /* Unstructed with modem bits.
+- Always one byte as we never send inline break data */
+- *dp++ = (gsm_encode_modem(dlci) << 1) | EA;
+- break;
+- }
+- WARN_ON(kfifo_out_locked(&dlci->fifo, dp , len, &dlci->lock) != len);
+- __gsm_data_queue(dlci, msg);
+- total_size += size;
++ /* for modem bits without break data */
++ h = ((dlci->adaption == 1) ? 0 : 1);
++
++ len = kfifo_len(&dlci->fifo);
++ if (len == 0)
++ return 0;
++
++ /* MTU/MRU count only the data bits but watch adaption mode */
++ if ((len + h) > gsm->mtu)
++ len = gsm->mtu - h;
++
++ size = len + h;
++
++ msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
++ if (!msg)
++ return -ENOMEM;
++ dp = msg->data;
++ switch (dlci->adaption) {
++ case 1: /* Unstructured */
++ break;
++ case 2: /* Unstructured with modem bits.
++ * Always one byte as we never send inline break data
++ */
++ *dp++ = (gsm_encode_modem(dlci) << 1) | EA;
++ break;
++ default:
++ pr_err("%s: unsupported adaption %d\n", __func__,
++ dlci->adaption);
++ break;
+ }
++
++ WARN_ON(len != kfifo_out_locked(&dlci->fifo, dp, len,
++ &dlci->lock));
++
++ /* Notify upper layer about available send space. */
++ tty_port_tty_wakeup(&dlci->port);
++
++ __gsm_data_queue(dlci, msg);
+ /* Bytes of data we used up */
+- return total_size;
++ return size;
+ }
+
+ /**
+@@ -908,9 +1138,6 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+
+ size = len + overhead;
+ msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
+-
+- /* FIXME: need a timer or something to kick this so it can't
+- get stuck with no work outstanding and no buffer free */
+ if (msg == NULL) {
+ skb_queue_tail(&dlci->skb_list, dlci->skb);
+ dlci->skb = NULL;
+@@ -1006,32 +1233,43 @@ static int gsm_dlci_modem_output(struct gsm_mux *gsm, struct gsm_dlci *dlci,
+ * renegotiate DLCI priorities with optional stuff. Needs optimising.
+ */
+
+-static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
++static int gsm_dlci_data_sweep(struct gsm_mux *gsm)
+ {
+- int len;
+ /* Priority ordering: We should do priority with RR of the groups */
+- int i = 1;
+-
+- while (i < NUM_DLCI) {
+- struct gsm_dlci *dlci;
++ int i, len, ret = 0;
++ bool sent;
++ struct gsm_dlci *dlci;
+
+- if (gsm->tx_bytes > TX_THRESH_HI)
+- break;
+- dlci = gsm->dlci[i];
+- if (dlci == NULL || dlci->constipated) {
+- i++;
+- continue;
++ while (gsm->tx_bytes < TX_THRESH_HI) {
++ for (sent = false, i = 1; i < NUM_DLCI; i++) {
++ dlci = gsm->dlci[i];
++ /* skip unused or blocked channel */
++ if (!dlci || dlci->constipated)
++ continue;
++ /* skip channels with invalid state */
++ if (dlci->state != DLCI_OPEN)
++ continue;
++ /* count the sent data per adaption */
++ if (dlci->adaption < 3 && !dlci->net)
++ len = gsm_dlci_data_output(gsm, dlci);
++ else
++ len = gsm_dlci_data_output_framed(gsm, dlci);
++ /* on error exit */
++ if (len < 0)
++ return ret;
++ if (len > 0) {
++ ret++;
++ sent = true;
++ /* The lower DLCIs can starve the higher DLCIs! */
++ break;
++ }
++ /* try next */
+ }
+- if (dlci->adaption < 3 && !dlci->net)
+- len = gsm_dlci_data_output(gsm, dlci);
+- else
+- len = gsm_dlci_data_output_framed(gsm, dlci);
+- if (len < 0)
++ if (!sent)
+ break;
+- /* DLCI empty - try the next */
+- if (len == 0)
+- i++;
+- }
++ }
++
++ return ret;
+ }
+
+ /**
+@@ -1277,7 +1515,6 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ const u8 *data, int clen)
+ {
+ u8 buf[1];
+- unsigned long flags;
+
+ switch (command) {
+ case CMD_CLD: {
+@@ -1299,9 +1536,7 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ gsm->constipated = false;
+ gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ /* Kick the link in case it is idling */
+- spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm, NULL);
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ gsmld_write_trigger(gsm);
+ break;
+ case CMD_FCOFF:
+ /* Modem wants us to STFU */
+@@ -1407,7 +1642,7 @@ static void gsm_control_retransmit(struct timer_list *t)
+ spin_lock_irqsave(&gsm->control_lock, flags);
+ ctrl = gsm->pending_cmd;
+ if (ctrl) {
+- if (gsm->cretries == 0) {
++ if (gsm->cretries == 0 || !gsm->dlci[0] || gsm->dlci[0]->dead) {
+ gsm->pending_cmd = NULL;
+ ctrl->error = -ETIMEDOUT;
+ ctrl->done = 1;
+@@ -1504,25 +1739,24 @@ static int gsm_control_wait(struct gsm_mux *gsm, struct gsm_control *control)
+
+ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ {
+- unsigned long flags;
+-
+ del_timer(&dlci->t1);
+ if (debug & 8)
+ pr_debug("DLCI %d goes closed.\n", dlci->addr);
+ dlci->state = DLCI_CLOSED;
++ /* Prevent us from sending data before the link is up again */
++ dlci->constipated = true;
+ if (dlci->addr != 0) {
+ tty_port_tty_hangup(&dlci->port, false);
+- spin_lock_irqsave(&dlci->lock, flags);
+- kfifo_reset(&dlci->fifo);
+- spin_unlock_irqrestore(&dlci->lock, flags);
++ gsm_dlci_clear_queues(dlci->gsm, dlci);
+ /* Ensure that gsmtty_open() can return. */
+ tty_port_set_initialized(&dlci->port, 0);
+ wake_up_interruptible(&dlci->port.open_wait);
+ } else
+ dlci->gsm->dead = true;
+- wake_up(&dlci->gsm->event);
+ /* A DLCI 0 close is a MUX termination so we need to kick that
+ back to userspace somehow */
++ gsm_dlci_data_kick(dlci);
++ wake_up(&dlci->gsm->event);
+ }
+
+ /**
+@@ -1539,11 +1773,13 @@ static void gsm_dlci_open(struct gsm_dlci *dlci)
+ del_timer(&dlci->t1);
+ /* This will let a tty open continue */
+ dlci->state = DLCI_OPEN;
++ dlci->constipated = false;
+ if (debug & 8)
+ pr_debug("DLCI %d goes open.\n", dlci->addr);
+ /* Send current modem state */
+ if (dlci->addr)
+ gsm_modem_update(dlci, 0);
++ gsm_dlci_data_kick(dlci);
+ wake_up(&dlci->gsm->event);
+ }
+
+@@ -1569,8 +1805,8 @@ static void gsm_dlci_t1(struct timer_list *t)
+
+ switch (dlci->state) {
+ case DLCI_OPENING:
+- dlci->retries--;
+ if (dlci->retries) {
++ dlci->retries--;
+ gsm_command(dlci->gsm, dlci->addr, SABM|PF);
+ mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ } else if (!dlci->addr && gsm->control == (DM | PF)) {
+@@ -1585,8 +1821,8 @@ static void gsm_dlci_t1(struct timer_list *t)
+
+ break;
+ case DLCI_CLOSING:
+- dlci->retries--;
+ if (dlci->retries) {
++ dlci->retries--;
+ gsm_command(dlci->gsm, dlci->addr, DISC|PF);
+ mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ } else
+@@ -1619,6 +1855,25 @@ static void gsm_dlci_begin_open(struct gsm_dlci *dlci)
+ mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+ }
+
++/**
++ * gsm_dlci_set_opening - change state to opening
++ * @dlci: DLCI to open
++ *
++ * Change internal state to wait for DLCI open from initiator side.
++ * We set off timers and responses upon reception of an SABM.
++ */
++static void gsm_dlci_set_opening(struct gsm_dlci *dlci)
++{
++ switch (dlci->state) {
++ case DLCI_CLOSED:
++ case DLCI_CLOSING:
++ dlci->state = DLCI_OPENING;
++ break;
++ default:
++ break;
++ }
++}
++
+ /**
+ * gsm_dlci_begin_close - start channel open procedure
+ * @dlci: DLCI to open
+@@ -1728,6 +1983,30 @@ static void gsm_dlci_command(struct gsm_dlci *dlci, const u8 *data, int len)
+ }
+ }
+
++/**
++ * gsm_kick_timer - transmit if possible
++ * @t: timer contained in our gsm object
++ *
++ * Transmit data from DLCIs if the queue is empty. We can't rely on
++ * a tty wakeup except when we filled the pipe so we need to fire off
++ * new data ourselves in other cases.
++ */
++static void gsm_kick_timer(struct timer_list *t)
++{
++ struct gsm_mux *gsm = from_timer(gsm, t, kick_timer);
++ unsigned long flags;
++ int sent = 0;
++
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ /* If we have nothing running then we need to fire up */
++ if (gsm->tx_bytes < TX_THRESH_LO)
++ sent = gsm_dlci_data_sweep(gsm);
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
++
++ if (sent && debug & 4)
++ pr_info("%s TX queue stalled\n", __func__);
++}
++
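The timer is re-armed in __gsm_data_queue() with 10 * t1 * HZ / 100 jiffies, i.e. ten T1 periods; with the driver's default T1 of 10 hundredths of a second that comes to one second. A quick check with an arbitrary example HZ:

#include <stdio.h>

int main(void)
{
	unsigned int hz = 250;	/* example tick rate */
	unsigned int t1 = 10;	/* default T1: 10 x 10ms = 100ms */
	unsigned int delay = 10 * t1 * hz / 100;

	printf("kick after %u jiffies = %u ms\n", delay, delay * 1000 / hz);
	return 0;
}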
+ /*
+ * Allocate/Free DLCI channels
+ */
+@@ -1762,10 +2041,13 @@ static struct gsm_dlci *gsm_dlci_alloc(struct gsm_mux *gsm, int addr)
+ dlci->addr = addr;
+ dlci->adaption = gsm->adaption;
+ dlci->state = DLCI_CLOSED;
+- if (addr)
++ if (addr) {
+ dlci->data = gsm_dlci_data;
+- else
++ /* Prevent us from sending data before the link is up */
++ dlci->constipated = true;
++ } else {
+ dlci->data = gsm_dlci_command;
++ }
+ gsm->dlci[addr] = dlci;
+ return dlci;
+ }
+@@ -1925,7 +2207,7 @@ static void gsm_queue(struct gsm_mux *gsm)
+ case UIH:
+ case UIH|PF:
+ if (dlci == NULL || dlci->state != DLCI_OPEN) {
+- gsm_command(gsm, address, DM|PF);
++ gsm_response(gsm, address, DM|PF);
+ return;
+ }
+ dlci->data(dlci, gsm->buf, gsm->len);
+@@ -2048,7 +2330,7 @@ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ } else if ((c & ISO_IEC_646_MASK) == XOFF) {
+ gsm->constipated = false;
+ /* Kick the link in case it is idling */
+- gsm_data_kick(gsm, NULL);
++ gsmld_write_trigger(gsm);
+ return;
+ }
+ if (c == GSM1_SOF) {
+@@ -2176,18 +2458,29 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ }
+
+ /* Finish outstanding timers, making sure they are done */
++ del_timer_sync(&gsm->kick_timer);
+ del_timer_sync(&gsm->t2_timer);
+
++ /* Finish writing to ldisc */
++ flush_work(&gsm->tx_work);
++
+ /* Free up any link layer users and finally the control channel */
++ if (gsm->has_devices) {
++ gsm_unregister_devices(gsm_tty_driver, gsm->num);
++ gsm->has_devices = false;
++ }
+ for (i = NUM_DLCI - 1; i >= 0; i--)
+ if (gsm->dlci[i])
+ gsm_dlci_release(gsm->dlci[i]);
+ mutex_unlock(&gsm->mutex);
+ /* Now wipe the queues */
+ tty_ldisc_flush(gsm->tty);
+- list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
++ list_for_each_entry_safe(txq, ntxq, &gsm->tx_ctrl_list, list)
++ kfree(txq);
++ INIT_LIST_HEAD(&gsm->tx_ctrl_list);
++ list_for_each_entry_safe(txq, ntxq, &gsm->tx_data_list, list)
+ kfree(txq);
+- INIT_LIST_HEAD(&gsm->tx_list);
++ INIT_LIST_HEAD(&gsm->tx_data_list);
+ }
+
+ /**
+@@ -2202,8 +2495,15 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ static int gsm_activate_mux(struct gsm_mux *gsm)
+ {
+ struct gsm_dlci *dlci;
++ int ret;
++
++ dlci = gsm_dlci_alloc(gsm, 0);
++ if (dlci == NULL)
++ return -ENOMEM;
+
++ timer_setup(&gsm->kick_timer, gsm_kick_timer, 0);
+ timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
++ INIT_WORK(&gsm->tx_work, gsmld_write_task);
+ init_waitqueue_head(&gsm->event);
+ spin_lock_init(&gsm->control_lock);
+ spin_lock_init(&gsm->tx_lock);
+@@ -2213,9 +2513,11 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ else
+ gsm->receive = gsm1_receive;
+
+- dlci = gsm_dlci_alloc(gsm, 0);
+- if (dlci == NULL)
+- return -ENOMEM;
++ ret = gsm_register_devices(gsm_tty_driver, gsm->num);
++ if (ret)
++ return ret;
++
++ gsm->has_devices = true;
+ gsm->dead = false; /* Tty opens are now permissible */
+ return 0;
+ }
+@@ -2308,7 +2610,8 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ spin_lock_init(&gsm->lock);
+ mutex_init(&gsm->mutex);
+ kref_init(&gsm->ref);
+- INIT_LIST_HEAD(&gsm->tx_list);
++ INIT_LIST_HEAD(&gsm->tx_ctrl_list);
++ INIT_LIST_HEAD(&gsm->tx_data_list);
+
+ gsm->t1 = T1;
+ gsm->t2 = T2;
+@@ -2465,6 +2768,47 @@ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len)
+ return gsm->tty->ops->write(gsm->tty, data, len);
+ }
+
++
++/**
++ * gsmld_write_trigger - schedule ldisc write task
++ * @gsm: our mux
++ */
++static void gsmld_write_trigger(struct gsm_mux *gsm)
++{
++ if (!gsm || !gsm->dlci[0] || gsm->dlci[0]->dead)
++ return;
++ schedule_work(&gsm->tx_work);
++}
++
++
++/**
++ * gsmld_write_task - ldisc write task
++ * @work: our tx write work
++ *
++ * Writes out data to the ldisc if possible. We are doing this here to
++ * avoid deadlocks. This returns if no space or data is left for output.
++ */
++static void gsmld_write_task(struct work_struct *work)
++{
++ struct gsm_mux *gsm = container_of(work, struct gsm_mux, tx_work);
++ unsigned long flags;
++ int i, ret;
++
++ /* All outstanding control channel and control messages and one data
++ * frame are sent.
++ */
++ ret = -ENODEV;
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ if (gsm->tty)
++ ret = gsm_data_kick(gsm);
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
++
++ if (ret >= 0)
++ for (i = 0; i < NUM_DLCI; i++)
++ if (gsm->dlci[i])
++ tty_port_tty_wakeup(&gsm->dlci[i]->port);
++}
++
+ /**
+ * gsmld_attach_gsm - mode set up
+ * @tty: our tty structure
+@@ -2475,39 +2819,14 @@ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len)
+ * will need moving to an ioctl path.
+ */
+
+-static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
++static void gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ {
+- unsigned int base;
+- int ret, i;
+-
+ gsm->tty = tty_kref_get(tty);
+ /* Turn off tty XON/XOFF handling to handle it explicitly. */
+ gsm->old_c_iflag = tty->termios.c_iflag;
+ tty->termios.c_iflag &= (IXON | IXOFF);
+- ret = gsm_activate_mux(gsm);
+- if (ret != 0)
+- tty_kref_put(gsm->tty);
+- else {
+- /* Don't register device 0 - this is the control channel and not
+- a usable tty interface */
+- base = mux_num_to_base(gsm); /* Base for this MUX */
+- for (i = 1; i < NUM_DLCI; i++) {
+- struct device *dev;
+-
+- dev = tty_register_device(gsm_tty_driver,
+- base + i, NULL);
+- if (IS_ERR(dev)) {
+- for (i--; i >= 1; i--)
+- tty_unregister_device(gsm_tty_driver,
+- base + i);
+- return PTR_ERR(dev);
+- }
+- }
+- }
+- return ret;
+ }
+
+-
+ /**
+ * gsmld_detach_gsm - stop doing 0710 mux
+ * @tty: tty attached to the mux
+@@ -2518,12 +2837,7 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+
+ static void gsmld_detach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ {
+- unsigned int base = mux_num_to_base(gsm); /* Base for this MUX */
+- int i;
+-
+ WARN_ON(tty != gsm->tty);
+- for (i = 1; i < NUM_DLCI; i++)
+- tty_unregister_device(gsm_tty_driver, base + i);
+ /* Restore tty XON/XOFF handling. */
+ gsm->tty->termios.c_iflag = gsm->old_c_iflag;
+ tty_kref_put(gsm->tty);
+@@ -2615,7 +2929,6 @@ static void gsmld_close(struct tty_struct *tty)
+ static int gsmld_open(struct tty_struct *tty)
+ {
+ struct gsm_mux *gsm;
+- int ret;
+
+ if (tty->ops->write == NULL)
+ return -EINVAL;
+@@ -2631,12 +2944,13 @@ static int gsmld_open(struct tty_struct *tty)
+ /* Attach the initial passive connection */
+ gsm->encoding = 1;
+
+- ret = gsmld_attach_gsm(tty, gsm);
+- if (ret != 0) {
+- gsm_cleanup_mux(gsm, false);
+- mux_put(gsm);
+- }
+- return ret;
++ gsmld_attach_gsm(tty, gsm);
++
++ timer_setup(&gsm->kick_timer, gsm_kick_timer, 0);
++ timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
++ INIT_WORK(&gsm->tx_work, gsmld_write_task);
++
++ return 0;
+ }
+
+ /**
+@@ -2651,16 +2965,9 @@ static int gsmld_open(struct tty_struct *tty)
+ static void gsmld_write_wakeup(struct tty_struct *tty)
+ {
+ struct gsm_mux *gsm = tty->disc_data;
+- unsigned long flags;
+
+ /* Queue poll */
+- clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+- spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm, NULL);
+- if (gsm->tx_bytes < TX_THRESH_LO) {
+- gsm_dlci_data_sweep(gsm);
+- }
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ gsmld_write_trigger(gsm);
+ }
+
+ /**
+@@ -2704,11 +3011,24 @@ static ssize_t gsmld_read(struct tty_struct *tty, struct file *file,
+ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ const unsigned char *buf, size_t nr)
+ {
+- int space = tty_write_room(tty);
++ struct gsm_mux *gsm = tty->disc_data;
++ unsigned long flags;
++ int space;
++ int ret;
++
++ if (!gsm)
++ return -ENODEV;
++
++ ret = -ENOBUFS;
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ space = tty_write_room(tty);
+ if (space >= nr)
+- return tty->ops->write(tty, buf, nr);
+- set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+- return -ENOBUFS;
++ ret = tty->ops->write(tty, buf, nr);
++ else
++ set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
++
++ return ret;
+ }
+
+ /**
+@@ -2733,12 +3053,15 @@ static __poll_t gsmld_poll(struct tty_struct *tty, struct file *file,
+
+ poll_wait(file, &tty->read_wait, wait);
+ poll_wait(file, &tty->write_wait, wait);
++
++ if (gsm->dead)
++ mask |= EPOLLHUP;
+ if (tty_hung_up_p(file))
+ mask |= EPOLLHUP;
++ if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
++ mask |= EPOLLHUP;
+ if (!tty_is_writelocked(tty) && tty_write_room(tty) > 0)
+ mask |= EPOLLOUT | EPOLLWRNORM;
+- if (gsm->dead)
+- mask |= EPOLLHUP;
+ return mask;
+ }
+
+@@ -3174,6 +3497,8 @@ static int gsmtty_open(struct tty_struct *tty, struct file *filp)
+ /* Start sending off SABM messages */
+ if (gsm->initiator)
+ gsm_dlci_begin_open(dlci);
++ else
++ gsm_dlci_set_opening(dlci);
+ /* And wait for virtual carrier */
+ return tty_port_block_til_ready(port, tty, filp);
+ }
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index 696030cfcb092..c89cb881d9b04 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -123,6 +123,26 @@ static inline void serial_out(struct uart_8250_port *up, int offset, int value)
+ up->port.serial_out(&up->port, offset, value);
+ }
+
++/**
++ * serial_lsr_in - Read LSR register and preserve flags across reads
++ * @up: uart 8250 port
++ *
++ * Read LSR register and handle saving non-preserved flags across reads.
++ * The flags that are not preserved across reads are stored into
++ * up->lsr_saved_flags.
++ *
++ * Returns LSR value or'ed with the preserved flags (if any).
++ */
++static inline unsigned int serial_lsr_in(struct uart_8250_port *up)
++{
++ unsigned int lsr = up->lsr_saved_flags;
++
++ lsr |= serial_in(up, UART_LSR);
++ up->lsr_saved_flags = lsr & LSR_SAVE_FLAGS;
++
++ return lsr;
++}
++
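Several LSR bits are cleared by the act of reading the register, so a poller that reads LSR directly can silently discard error flags. A user-space model of the sticky-flag behaviour the helper provides; register values are invented:

#include <stdio.h>

#define UART_LSR_BI	0x10	/* break indicator, cleared by reading LSR */
#define LSR_SAVE_FLAGS	0x1e	/* BI | FE | PE | OE */

static unsigned int hw_lsr = 0x60 | UART_LSR_BI;	/* pretend register */
static unsigned int saved_flags;

static unsigned int lsr_in(void)
{
	unsigned int lsr = saved_flags;

	lsr |= hw_lsr;
	saved_flags = lsr & LSR_SAVE_FLAGS;
	hw_lsr &= ~LSR_SAVE_FLAGS;	/* model: the read clears error bits */
	return lsr;
}

int main(void)
{
	printf("first read  0x%02x\n", lsr_in());	/* 0x70: BI visible */
	printf("second read 0x%02x\n", lsr_in());	/* still 0x70 */
	return 0;
}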
+ /*
+ * For the 16C950
+ */
+diff --git a/drivers/tty/serial/8250/8250_bcm2835aux.c b/drivers/tty/serial/8250/8250_bcm2835aux.c
+index 2a1226a78a0c2..21939bb44613e 100644
+--- a/drivers/tty/serial/8250/8250_bcm2835aux.c
++++ b/drivers/tty/serial/8250/8250_bcm2835aux.c
+@@ -166,8 +166,10 @@ static int bcm2835aux_serial_probe(struct platform_device *pdev)
+ uartclk = clk_get_rate(data->clk);
+ if (!uartclk) {
+ ret = device_property_read_u32(&pdev->dev, "clock-frequency", &uartclk);
+- if (ret)
+- return dev_err_probe(&pdev->dev, ret, "could not get clk rate\n");
++ if (ret) {
++ dev_err_probe(&pdev->dev, ret, "could not get clk rate\n");
++ goto dis_clk;
++ }
+ }
+
+ /* the HW-clock divider for bcm2835aux is 8,
+diff --git a/drivers/tty/serial/8250/8250_bcm7271.c b/drivers/tty/serial/8250/8250_bcm7271.c
+index 9b878d023dac8..8efdc271eb75f 100644
+--- a/drivers/tty/serial/8250/8250_bcm7271.c
++++ b/drivers/tty/serial/8250/8250_bcm7271.c
+@@ -1139,16 +1139,19 @@ static int __maybe_unused brcmuart_suspend(struct device *dev)
+ struct brcmuart_priv *priv = dev_get_drvdata(dev);
+ struct uart_8250_port *up = serial8250_get_port(priv->line);
+ struct uart_port *port = &up->port;
+-
+- serial8250_suspend_port(priv->line);
+- clk_disable_unprepare(priv->baud_mux_clk);
++ unsigned long flags;
+
+ /*
+ * This will prevent resume from enabling RTS before the
+- * baud rate has been resored.
++ * baud rate has been restored.
+ */
++ spin_lock_irqsave(&port->lock, flags);
+ priv->saved_mctrl = port->mctrl;
+- port->mctrl = 0;
++ port->mctrl &= ~TIOCM_RTS;
++ spin_unlock_irqrestore(&port->lock, flags);
++
++ serial8250_suspend_port(priv->line);
++ clk_disable_unprepare(priv->baud_mux_clk);
+
+ return 0;
+ }
+@@ -1158,6 +1161,7 @@ static int __maybe_unused brcmuart_resume(struct device *dev)
+ struct brcmuart_priv *priv = dev_get_drvdata(dev);
+ struct uart_8250_port *up = serial8250_get_port(priv->line);
+ struct uart_port *port = &up->port;
++ unsigned long flags;
+ int ret;
+
+ ret = clk_prepare_enable(priv->baud_mux_clk);
+@@ -1180,7 +1184,15 @@ static int __maybe_unused brcmuart_resume(struct device *dev)
+ start_rx_dma(serial8250_get_port(priv->line));
+ }
+ serial8250_resume_port(priv->line);
+- port->mctrl = priv->saved_mctrl;
++
++ if (priv->saved_mctrl & TIOCM_RTS) {
++ /* Restore RTS */
++ spin_lock_irqsave(&port->lock, flags);
++ port->mctrl |= TIOCM_RTS;
++ port->ops->set_mctrl(port, port->mctrl);
++ spin_unlock_irqrestore(&port->lock, flags);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 3f56dbc9432b3..82726cda60663 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -277,8 +277,7 @@ static void serial8250_backup_timeout(struct timer_list *t)
+ * the "Diva" UART used on the management processor on many HP
+ * ia64 and parisc boxes.
+ */
+- lsr = serial_in(up, UART_LSR);
+- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++ lsr = serial_lsr_in(up);
+ if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
+ (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
+ (lsr & UART_LSR_THRE)) {
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index bb6aca07ab563..d0dfbf1fc9d89 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -122,12 +122,15 @@ static void dw8250_check_lcr(struct uart_port *p, int value)
+ /* Returns once the transmitter is empty or we run out of retries */
+ static void dw8250_tx_wait_empty(struct uart_port *p)
+ {
++ struct uart_8250_port *up = up_to_u8250p(p);
+ unsigned int tries = 20000;
+ unsigned int delay_threshold = tries - 1000;
+ unsigned int lsr;
+
+ while (tries--) {
+ lsr = readb (p->membase + (UART_LSR << p->regshift));
++ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++
+ if (lsr & UART_LSR_TEMT)
+ break;
+
+@@ -253,7 +256,7 @@ static int dw8250_handle_irq(struct uart_port *p)
+ */
+ if (!up->dma && rx_timeout) {
+ spin_lock_irqsave(&p->lock, flags);
+- status = p->serial_in(p, UART_LSR);
++ status = serial_lsr_in(up);
+
+ if (!(status & (UART_LSR_DR | UART_LSR_BI)))
+ (void) p->serial_in(p, UART_RX);
+@@ -263,7 +266,10 @@ static int dw8250_handle_irq(struct uart_port *p)
+
+ /* Manually stop the Rx DMA transfer when acting as flow controller */
+ if (quirks & DW_UART_QUIRK_IS_DMA_FC && up->dma && up->dma->rx_running && rx_timeout) {
+- status = p->serial_in(p, UART_LSR);
++ spin_lock_irqsave(&p->lock, flags);
++ status = serial_lsr_in(up);
++ spin_unlock_irqrestore(&p->lock, flags);
++
+ if (status & (UART_LSR_DR | UART_LSR_BI)) {
+ dw8250_writel_ext(p, RZN1_UART_RDMACR, 0);
+ dw8250_writel_ext(p, DW_UART_DMASA, 1);
+diff --git a/drivers/tty/serial/8250/8250_fsl.c b/drivers/tty/serial/8250/8250_fsl.c
+index 9c01c531349df..71ce436857977 100644
+--- a/drivers/tty/serial/8250/8250_fsl.c
++++ b/drivers/tty/serial/8250/8250_fsl.c
+@@ -77,7 +77,7 @@ int fsl8250_handle_irq(struct uart_port *port)
+ if ((lsr & UART_LSR_THRE) && (up->ier & UART_IER_THRI))
+ serial8250_tx_chars(up);
+
+- up->lsr_saved_flags = orig_lsr;
++ up->lsr_saved_flags |= orig_lsr & UART_LSR_BI;
+
+ uart_unlock_and_check_sysrq_irqrestore(&up->port, flags);
+
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index a17619db79393..f6732c1ed2385 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -5076,6 +5076,115 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_b2_4_115200 },
++ /*
++ * Brainboxes PX-101
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4005,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x4019,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_2_15625000 },
++ /*
++ * Brainboxes PX-235/246
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4004,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_1_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x4016,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes PX-203/PX-257
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4006,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x4015,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_4_15625000 },
++ /*
++ * Brainboxes PX-260/PX-701
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x400A,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_4_15625000 },
++ /*
++ * Brainboxes PX-310
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x400E,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_2_15625000 },
++ /*
++ * Brainboxes PX-313
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x400C,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_2_15625000 },
++ /*
++ * Brainboxes PX-320/324/PX-376/PX-387
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x400B,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes PX-335/346
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x400F,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_4_15625000 },
++ /*
++ * Brainboxes PX-368
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4010,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_4_15625000 },
++ /*
++ * Brainboxes PX-420
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4000,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x4011,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_4_15625000 },
++ /*
++ * Brainboxes PX-803
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4009,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_1_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x401E,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes PX-846
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4008,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b0_1_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x4017,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++
+ /*
+ * Perle PCI-RAS cards
+ */
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3c36a06a20b04..2b86c55ed374e 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1514,11 +1514,9 @@ static inline void __stop_tx(struct uart_8250_port *p)
+ struct uart_8250_em485 *em485 = p->em485;
+
+ if (em485) {
+- unsigned char lsr = serial_in(p, UART_LSR);
++ unsigned char lsr = serial_lsr_in(p);
+ u64 stop_delay = 0;
+
+- p->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+-
+ if (!(lsr & UART_LSR_THRE))
+ return;
+ /*
+@@ -1573,10 +1571,8 @@ static inline void __start_tx(struct uart_port *port)
+
+ if (serial8250_set_THRI(up)) {
+ if (up->bugs & UART_BUG_TXEN) {
+- unsigned char lsr;
++ unsigned char lsr = serial_lsr_in(up);
+
+- lsr = serial_in(up, UART_LSR);
+- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ if (lsr & UART_LSR_THRE)
+ serial8250_tx_chars(up);
+ }
+@@ -1926,7 +1922,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+
+ spin_lock_irqsave(&port->lock, flags);
+
+- status = serial_port_in(port, UART_LSR);
++ status = serial_lsr_in(up);
+
+ /*
+ * If port is stopped and there are no error conditions in the
+@@ -2007,8 +2003,7 @@ static unsigned int serial8250_tx_empty(struct uart_port *port)
+ serial8250_rpm_get(up);
+
+ spin_lock_irqsave(&port->lock, flags);
+- lsr = serial_port_in(port, UART_LSR);
+- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
++ lsr = serial_lsr_in(up);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ serial8250_rpm_put(up);
+@@ -2084,9 +2079,7 @@ static void wait_for_lsr(struct uart_8250_port *up, int bits)
+
+ /* Wait up to 10ms for the character(s) to be sent. */
+ for (;;) {
+- status = serial_in(up, UART_LSR);
+-
+- up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
++ status = serial_lsr_in(up);
+
+ if ((status & bits) == bits)
+ break;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 0d6e62f6bb075..561d6d0b7c945 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -990,12 +990,12 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+
+ if (sr & (UARTSTAT_PE | UARTSTAT_OR | UARTSTAT_FE)) {
+ if (sr & UARTSTAT_PE) {
++ sport->port.icount.parity++;
++ } else if (sr & UARTSTAT_FE) {
+ if (is_break)
+ sport->port.icount.brk++;
+ else
+- sport->port.icount.parity++;
+- } else if (sr & UARTSTAT_FE) {
+- sport->port.icount.frame++;
++ sport->port.icount.frame++;
+ }
+
+ if (sr & UARTSTAT_OR)
+@@ -1010,12 +1010,12 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+ sr &= sport->port.read_status_mask;
+
+ if (sr & UARTSTAT_PE) {
++ flg = TTY_PARITY;
++ } else if (sr & UARTSTAT_FE) {
+ if (is_break)
+ flg = TTY_BREAK;
+ else
+- flg = TTY_PARITY;
+- } else if (sr & UARTSTAT_FE) {
+- flg = TTY_FRAME;
++ flg = TTY_FRAME;
+ }
+
+ if (sr & UARTSTAT_OR)
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 93489fe334d0f..65eaecd10b7ca 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -265,6 +265,7 @@ static void mvebu_uart_rx_chars(struct uart_port *port, unsigned int status)
+ struct tty_port *tport = &port->state->port;
+ unsigned char ch = 0;
+ char flag = 0;
++ int ret;
+
+ do {
+ if (status & STAT_RX_RDY(port)) {
+@@ -277,6 +278,16 @@ static void mvebu_uart_rx_chars(struct uart_port *port, unsigned int status)
+ port->icount.parity++;
+ }
+
++ /*
++ * For UART2, error bits are not cleared on buffer read.
++ * This causes an interrupt loop and a system hang.
++ */
++ if (IS_EXTENDED(port) && (status & STAT_BRK_ERR)) {
++ ret = readl(port->membase + UART_STAT);
++ ret |= STAT_BRK_ERR;
++ writel(ret, port->membase + UART_STAT);
++ }
++
+ if (status & STAT_BRK_DET) {
+ port->icount.brk++;
+ status &= ~(STAT_FRM_ERR | STAT_PAR_ERR);
+diff --git a/drivers/tty/serial/pic32_uart.c b/drivers/tty/serial/pic32_uart.c
+index b399aac530fe6..f418f1de66b35 100644
+--- a/drivers/tty/serial/pic32_uart.c
++++ b/drivers/tty/serial/pic32_uart.c
+@@ -503,7 +503,7 @@ static int pic32_uart_startup(struct uart_port *port)
+ if (!sport->irq_fault_name) {
+ dev_err(port->dev, "%s: kasprintf err!", __func__);
+ ret = -ENOMEM;
+- goto out_done;
++ goto out_disable_clk;
+ }
+ irq_set_status_flags(sport->irq_fault, IRQ_NOAUTOEN);
+ ret = request_irq(sport->irq_fault, pic32_uart_fault_interrupt,
+@@ -579,6 +579,8 @@ out_r:
+ out_f:
+ free_irq(sport->irq_fault, port);
+ kfree(sport->irq_fault_name);
++out_disable_clk:
++ clk_disable_unprepare(sport->clk);
+ out_done:
+ return ret;
+ }
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index f8f950641ad9f..f7c1f18070403 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -940,52 +940,63 @@ static int qcom_geni_serial_startup(struct uart_port *uport)
+ return 0;
+ }
+
+-static unsigned long get_clk_div_rate(struct clk *clk, unsigned int baud,
+- unsigned int sampling_rate, unsigned int *clk_div)
++static unsigned long find_clk_rate_in_tol(struct clk *clk, unsigned int desired_clk,
++ unsigned int *clk_div, unsigned int percent_tol)
+ {
+- unsigned long ser_clk;
+- unsigned long desired_clk;
+- unsigned long freq, prev;
++ unsigned long freq;
+ unsigned long div, maxdiv;
+- int64_t mult;
+-
+- desired_clk = baud * sampling_rate;
+- if (!desired_clk) {
+- pr_err("%s: Invalid frequency\n", __func__);
+- return 0;
+- }
++ u64 mult;
++ unsigned long offset, abs_tol, achieved;
+
++ abs_tol = div_u64((u64)desired_clk * percent_tol, 100);
+ maxdiv = CLK_DIV_MSK >> CLK_DIV_SHFT;
+- prev = 0;
+-
+- for (div = 1; div <= maxdiv; div++) {
+- mult = div * desired_clk;
+- if (mult > ULONG_MAX)
++ div = 1;
++ while (div <= maxdiv) {
++ mult = (u64)div * desired_clk;
++ if (mult != (unsigned long)mult)
+ break;
+
+- freq = clk_round_rate(clk, (unsigned long)mult);
+- if (!(freq % desired_clk)) {
+- ser_clk = freq;
+- break;
+- }
++ offset = div * abs_tol;
++ freq = clk_round_rate(clk, mult - offset);
+
+- if (!prev)
+- ser_clk = freq;
+- else if (prev == freq)
++ /* Can only get lower if we're done */
++ if (freq < mult - offset)
+ break;
+
+- prev = freq;
+- }
++ /*
++ * Re-calculate div in case rounding skipped rates but we
++ * ended up at a good one, then check for a match.
++ */
++ div = DIV_ROUND_CLOSEST(freq, desired_clk);
++ achieved = DIV_ROUND_CLOSEST(freq, div);
++ if (achieved <= desired_clk + abs_tol &&
++ achieved >= desired_clk - abs_tol) {
++ *clk_div = div;
++ return freq;
++ }
+
+- if (!ser_clk) {
+- pr_err("%s: Can't find matching DFS entry for baud %d\n",
+- __func__, baud);
+- return ser_clk;
++ div = DIV_ROUND_UP(freq, desired_clk);
+ }
+
+- *clk_div = ser_clk / desired_clk;
+- if (!(*clk_div))
+- *clk_div = 1;
++ return 0;
++}
++
++static unsigned long get_clk_div_rate(struct clk *clk, unsigned int baud,
++ unsigned int sampling_rate, unsigned int *clk_div)
++{
++ unsigned long ser_clk;
++ unsigned long desired_clk;
++
++ desired_clk = baud * sampling_rate;
++ if (!desired_clk)
++ return 0;
++
++ /*
++ * try to find a clock rate within 2% tolerance, then within 5%
++ */
++ ser_clk = find_clk_rate_in_tol(clk, desired_clk, clk_div, 2);
++ if (!ser_clk)
++ ser_clk = find_clk_rate_in_tol(clk, desired_clk, clk_div, 5);
+
+ return ser_clk;
+ }
+@@ -1020,8 +1031,15 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+
+ clk_rate = get_clk_div_rate(port->se.clk, baud,
+ sampling_rate, &clk_div);
+- if (!clk_rate)
++ if (!clk_rate) {
++ dev_err(port->se.dev,
++ "Couldn't find suitable clock rate for %u\n",
++ baud * sampling_rate);
+ goto out_restart_rx;
++ }
++
++ dev_dbg(port->se.dev, "desired_rate-%u, clk_rate-%lu, clk_div-%u\n",
++ baud * sampling_rate, clk_rate, clk_div);
+
+ uport->uartclk = clk_rate;
+ dev_pm_opp_set_rate(uport->dev, clk_rate);
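
The rewritten get_clk_div_rate() above trades the old exact-multiple scan for a tolerance-based search: try a divider, ask the clock framework for the nearest supported rate, re-derive the divider from the rate actually returned, and accept anything inside the band (first 2%, then 5%). Below is a rough userspace model of that arithmetic (illustrative only, not part of the patch); round_rate() is a stand-in for clk_round_rate() that snaps to an assumed table of supported rates:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
    #define DIV_ROUND_UP(x, d)      (((x) + (d) - 1) / (d))

    static const unsigned long rates[] = {
        7372800, 14745600, 29491200, 58982400, 100000000,
    };
    #define NRATES (sizeof(rates) / sizeof(rates[0]))

    /* Stand-in for clk_round_rate(): lowest supported rate that satisfies
     * the request, or the highest rate if nothing does. */
    static unsigned long round_rate(uint64_t want)
    {
        for (size_t i = 0; i < NRATES; i++)
            if (rates[i] >= want)
                return rates[i];
        return rates[NRATES - 1];
    }

    static unsigned long find_rate_in_tol(unsigned int desired, unsigned long maxdiv,
                                          unsigned int *clk_div, unsigned int pct)
    {
        unsigned long abs_tol = (uint64_t)desired * pct / 100;
        unsigned long div = 1;

        while (div <= maxdiv) {
            uint64_t mult = (uint64_t)div * desired;
            unsigned long offset = div * abs_tol;
            unsigned long freq = round_rate(mult - offset);
            unsigned long achieved;

            if (freq < mult - offset)   /* rates only get lower: give up */
                break;

            /* Re-derive the divider for the rate we actually got. */
            div = DIV_ROUND_CLOSEST(freq, desired);
            achieved = DIV_ROUND_CLOSEST(freq, div);
            if (achieved <= desired + abs_tol && achieved >= desired - abs_tol) {
                *clk_div = div;
                return freq;
            }
            div = DIV_ROUND_UP(freq, desired);
        }
        return 0;
    }

    int main(void)
    {
        unsigned int div = 0;
        unsigned long rate = find_rate_in_tol(115200 * 16, 4095, &div, 2);

        printf("ser_clk=%lu clk_div=%u baud_clk=%lu\n",
               rate, div, div ? rate / div : 0);
        return 0;
    }

Recomputing div from the returned rate is the key step: clk_round_rate() may skip over requested rates entirely, so the divider that lands on a usable rate is not necessarily the one that was asked for.
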
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index dfc1f4b445f3b..6eaf8eb846619 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -344,7 +344,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ /* allocate everything in one go */
+ memsize = cols * rows * sizeof(char32_t);
+ memsize += rows * sizeof(char32_t *);
+- p = vmalloc(memsize);
++ p = vzalloc(memsize);
+ if (!p)
+ return NULL;
+
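
The vt.c one-liner is a data-leak hardening fix: the unicode screen buffer can be read back later, so a plain vmalloc() exposed stale kernel memory until every cell had been written. vzalloc() is simply vmalloc() with zeroed pages. A minimal sketch of the same allocation shape (demo_alloc_rows() is a hypothetical helper, not vt.c code):

    #include <linux/vmalloc.h>
    #include <linux/types.h>

    /* Row-pointer array and cell storage carved from one zeroed allocation. */
    static u32 **demo_alloc_rows(unsigned int cols, unsigned int rows)
    {
        size_t sz = rows * sizeof(u32 *) + (size_t)cols * rows * sizeof(u32);
        u32 **p = vzalloc(sz);          /* vmalloc() + zeroing */
        u32 *cell;
        unsigned int i;

        if (!p)
            return NULL;
        cell = (u32 *)(p + rows);
        for (i = 0; i < rows; i++)      /* row pointers into the flat area */
            p[i] = cell + i * cols;
        return p;                       /* cells read as 0, never heap garbage */
    }
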
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 3d367be717286..8d91be0fd1a4e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -9484,12 +9484,8 @@ EXPORT_SYMBOL(ufshcd_runtime_resume);
+ int ufshcd_shutdown(struct ufs_hba *hba)
+ {
+ if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
+- goto out;
+-
+- pm_runtime_get_sync(hba->dev);
++ ufshcd_suspend(hba);
+
+- ufshcd_suspend(hba);
+-out:
+ hba->is_powered = false;
+ /* allow force shutdown even in case of errors */
+ return 0;
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 5c15c48952a61..87cfa91a758df 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -220,7 +220,7 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
+
+ if (!priv_ep->trb_pool) {
+ priv_ep->trb_pool = dma_pool_alloc(priv_dev->eps_dma_pool,
+- GFP_DMA32 | GFP_ATOMIC,
++ GFP_ATOMIC,
+ &priv_ep->trb_pool_dma);
+
+ if (!priv_ep->trb_pool)
+@@ -2284,11 +2284,16 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+ int ret = 0;
+ int val;
+
++ if (!ep) {
++ pr_debug("usbss: ep not configured?\n");
++ return -EINVAL;
++ }
++
+ priv_ep = ep_to_cdns3_ep(ep);
+ priv_dev = priv_ep->cdns3_dev;
+ comp_desc = priv_ep->endpoint.comp_desc;
+
+- if (!ep || !desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
++ if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
+ dev_dbg(priv_dev->dev, "usbss: invalid parameters\n");
+ return -EINVAL;
+ }
+@@ -2600,7 +2605,7 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
+ struct usb_request *request)
+ {
+ struct cdns3_endpoint *priv_ep = ep_to_cdns3_ep(ep);
+- struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
++ struct cdns3_device *priv_dev;
+ struct usb_request *req, *req_temp;
+ struct cdns3_request *priv_req;
+ struct cdns3_trb *link_trb;
+@@ -2611,6 +2616,8 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
+ if (!ep || !request || !ep->desc)
+ return -EINVAL;
+
++ priv_dev = priv_ep->cdns3_dev;
++
+ spin_lock_irqsave(&priv_dev->lock, flags);
+
+ priv_req = to_cdns3_request(request);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 06eea8848ccc2..a6a87c5d1b05c 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1691,7 +1691,6 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+
+ spin_lock_irq(&bh->lock);
+ bh->running = true;
+- restart:
+ list_replace_init(&bh->head, &local_list);
+ spin_unlock_irq(&bh->lock);
+
+@@ -1705,10 +1704,17 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+ bh->completing_ep = NULL;
+ }
+
+- /* check if there are new URBs to giveback */
++ /*
++ * giveback new URBs next time to prevent this function
++ * from not exiting for a long time.
++ */
+ spin_lock_irq(&bh->lock);
+- if (!list_empty(&bh->head))
+- goto restart;
++ if (!list_empty(&bh->head)) {
++ if (bh->high_prio)
++ tasklet_hi_schedule(&bh->bh);
++ else
++ tasklet_schedule(&bh->bh);
++ }
+ bh->running = false;
+ spin_unlock_irq(&bh->lock);
+ }
+@@ -1737,7 +1743,7 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
+ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+ {
+ struct giveback_urb_bh *bh;
+- bool running, high_prio_bh;
++ bool running;
+
+ /* pass status to tasklet via unlinked */
+ if (likely(!urb->unlinked))
+@@ -1748,13 +1754,10 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+ return;
+ }
+
+- if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe)) {
++ if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe))
+ bh = &hcd->high_prio_bh;
+- high_prio_bh = true;
+- } else {
++ else
+ bh = &hcd->low_prio_bh;
+- high_prio_bh = false;
+- }
+
+ spin_lock(&bh->lock);
+ list_add_tail(&urb->urb_list, &bh->head);
+@@ -1763,7 +1766,7 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
+
+ if (running)
+ ;
+- else if (high_prio_bh)
++ else if (bh->high_prio)
+ tasklet_hi_schedule(&bh->bh);
+ else
+ tasklet_schedule(&bh->bh);
+@@ -2959,6 +2962,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+
+ /* initialize tasklets */
+ init_giveback_urb_bh(&hcd->high_prio_bh);
++ hcd->high_prio_bh.high_prio = true;
+ init_giveback_urb_bh(&hcd->low_prio_bh);
+
+ /* enable irqs just before we start the controller,
+@@ -3033,9 +3037,15 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ */
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+- struct usb_device *rhdev = hcd->self.root_hub;
++ struct usb_device *rhdev;
+ bool rh_registered;
+
++ if (!hcd) {
++ pr_debug("%s: hcd is NULL\n", __func__);
++ return;
++ }
++ rhdev = hcd->self.root_hub;
++
+ dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+
+ usb_get_dev(rhdev);
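
The hcd.c change replaces the goto-restart drain loop with a self-reschedule: if more URBs arrived while the local batch was completing, the tasklet queues itself again rather than looping, which bounds how long one softirq invocation can monopolize the CPU; the priority choice now comes from a high_prio flag stored in the giveback structure itself. A stripped-down sketch of that pattern (field names follow the hunk, the per-URB completion step is elided):

    #include <linux/interrupt.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct giveback_bh {
        spinlock_t lock;
        struct list_head head;
        bool running;
        bool high_prio;
        struct tasklet_struct bh;
    };

    static void giveback_func(struct tasklet_struct *t)
    {
        struct giveback_bh *bh = from_tasklet(bh, t, bh);
        LIST_HEAD(local);

        spin_lock_irq(&bh->lock);
        bh->running = true;
        list_replace_init(&bh->head, &local);
        spin_unlock_irq(&bh->lock);

        /* ... complete every entry on @local ... */

        spin_lock_irq(&bh->lock);
        if (!list_empty(&bh->head)) {   /* more arrived: run again later */
            if (bh->high_prio)
                tasklet_hi_schedule(&bh->bh);
            else
                tasklet_schedule(&bh->bh);
        }
        bh->running = false;
        spin_unlock_irq(&bh->lock);
    }
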
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 573421984948a..ba2fa91be1d64 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -158,8 +158,13 @@ static void __dwc3_set_mode(struct work_struct *work)
+ break;
+ }
+
+- /* For DRD host or device mode only */
+- if (dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG) {
++ /*
++ * When current_dr_role is not set, there's no role switching.
++ * Only perform GCTL.CoreSoftReset when there's DRD role switching.
++ */
++ if (dwc->current_dr_role && ((DWC3_IP_IS(DWC3) ||
++ DWC3_VER_IS_PRIOR(DWC31, 190A)) &&
++ dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
+ reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+ reg |= DWC3_GCTL_CORESOFTRESET;
+ dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 6cba990da32ef..3582fd6dfa141 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -443,9 +443,9 @@ static int dwc3_qcom_get_irq(struct platform_device *pdev,
+ int ret;
+
+ if (np)
+- ret = platform_get_irq_byname(pdev_irq, name);
++ ret = platform_get_irq_byname_optional(pdev_irq, name);
+ else
+- ret = platform_get_irq(pdev_irq, num);
++ ret = platform_get_irq_optional(pdev_irq, num);
+
+ return ret;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 0d89dfa6eef57..52d5a7c81362a 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1182,17 +1182,49 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ return trbs_left;
+ }
+
+-static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+- dma_addr_t dma, unsigned int length, unsigned int chain,
+- unsigned int node, unsigned int stream_id,
+- unsigned int short_not_ok, unsigned int no_interrupt,
+- unsigned int is_last, bool must_interrupt)
++/**
++ * dwc3_prepare_one_trb - setup one TRB from one request
++ * @dep: endpoint for which this request is prepared
++ * @req: dwc3_request pointer
++ * @trb_length: buffer size of the TRB
++ * @chain: should this TRB be chained to the next?
++ * @node: only for isochronous endpoints. First TRB needs different type.
++ * @use_bounce_buffer: set to use bounce buffer
++ * @must_interrupt: set to interrupt on TRB completion
++ */
++static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
++ struct dwc3_request *req, unsigned int trb_length,
++ unsigned int chain, unsigned int node, bool use_bounce_buffer,
++ bool must_interrupt)
+ {
++ struct dwc3_trb *trb;
++ dma_addr_t dma;
++ unsigned int stream_id = req->request.stream_id;
++ unsigned int short_not_ok = req->request.short_not_ok;
++ unsigned int no_interrupt = req->request.no_interrupt;
++ unsigned int is_last = req->request.is_last;
+ struct dwc3 *dwc = dep->dwc;
+ struct usb_gadget *gadget = dwc->gadget;
+ enum usb_device_speed speed = gadget->speed;
+
+- trb->size = DWC3_TRB_SIZE_LENGTH(length);
++ if (use_bounce_buffer)
++ dma = dep->dwc->bounce_addr;
++ else if (req->request.num_sgs > 0)
++ dma = sg_dma_address(req->start_sg);
++ else
++ dma = req->request.dma;
++
++ trb = &dep->trb_pool[dep->trb_enqueue];
++
++ if (!req->trb) {
++ dwc3_gadget_move_started_request(req);
++ req->trb = trb;
++ req->trb_dma = dwc3_trb_dma_offset(dep, trb);
++ }
++
++ req->num_trbs++;
++
++ trb->size = DWC3_TRB_SIZE_LENGTH(trb_length);
+ trb->bpl = lower_32_bits(dma);
+ trb->bph = upper_32_bits(dma);
+
+@@ -1232,10 +1264,10 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ unsigned int mult = 2;
+ unsigned int maxp = usb_endpoint_maxp(ep->desc);
+
+- if (length <= (2 * maxp))
++ if (req->request.length <= (2 * maxp))
+ mult--;
+
+- if (length <= maxp)
++ if (req->request.length <= maxp)
+ mult--;
+
+ trb->size |= DWC3_TRB_SIZE_PCM1(mult);
+@@ -1309,50 +1341,6 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ trace_dwc3_prepare_trb(dep, trb);
+ }
+
+-/**
+- * dwc3_prepare_one_trb - setup one TRB from one request
+- * @dep: endpoint for which this request is prepared
+- * @req: dwc3_request pointer
+- * @trb_length: buffer size of the TRB
+- * @chain: should this TRB be chained to the next?
+- * @node: only for isochronous endpoints. First TRB needs different type.
+- * @use_bounce_buffer: set to use bounce buffer
+- * @must_interrupt: set to interrupt on TRB completion
+- */
+-static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+- struct dwc3_request *req, unsigned int trb_length,
+- unsigned int chain, unsigned int node, bool use_bounce_buffer,
+- bool must_interrupt)
+-{
+- struct dwc3_trb *trb;
+- dma_addr_t dma;
+- unsigned int stream_id = req->request.stream_id;
+- unsigned int short_not_ok = req->request.short_not_ok;
+- unsigned int no_interrupt = req->request.no_interrupt;
+- unsigned int is_last = req->request.is_last;
+-
+- if (use_bounce_buffer)
+- dma = dep->dwc->bounce_addr;
+- else if (req->request.num_sgs > 0)
+- dma = sg_dma_address(req->start_sg);
+- else
+- dma = req->request.dma;
+-
+- trb = &dep->trb_pool[dep->trb_enqueue];
+-
+- if (!req->trb) {
+- dwc3_gadget_move_started_request(req);
+- req->trb = trb;
+- req->trb_dma = dwc3_trb_dma_offset(dep, trb);
+- }
+-
+- req->num_trbs++;
+-
+- __dwc3_prepare_one_trb(dep, trb, dma, trb_length, chain, node,
+- stream_id, short_not_ok, no_interrupt, is_last,
+- must_interrupt);
+-}
+-
+ static bool dwc3_needs_extra_trb(struct dwc3_ep *dep, struct dwc3_request *req)
+ {
+ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 3a77bca0ebe1c..e884f295504f6 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -1192,13 +1192,14 @@ static int do_read_toc(struct fsg_common *common, struct fsg_buffhd *bh)
+ u8 format;
+ int i, len;
+
++ format = common->cmnd[2] & 0xf;
++
+ if ((common->cmnd[1] & ~0x02) != 0 || /* Mask away MSF */
+- start_track > 1) {
++ (start_track > 1 && format != 0x1)) {
+ curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
+ return -EINVAL;
+ }
+
+- format = common->cmnd[2] & 0xf;
+ /*
+ * Check if CDB is old style SFF-8020i
+ * i.e. format is in 2 MSBs of byte 9
+@@ -1208,8 +1209,8 @@ static int do_read_toc(struct fsg_common *common, struct fsg_buffhd *bh)
+ format = (common->cmnd[9] >> 6) & 0x3;
+
+ switch (format) {
+- case 0:
+- /* Formatted TOC */
++ case 0: /* Formatted TOC */
++ case 1: /* Multi-session info */
+ len = 4 + 2*8; /* 4 byte header + 2 descriptors */
+ memset(buf, 0, len);
+ buf[1] = len - 2; /* TOC Length excludes length field */
+@@ -1250,7 +1251,7 @@ static int do_read_toc(struct fsg_common *common, struct fsg_buffhd *bh)
+ return len;
+
+ default:
+- /* Multi-session, PMA, ATIP, CD-TEXT not supported/required */
++ /* PMA, ATIP, CD-TEXT not supported/required */
+ curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
+ return -EINVAL;
+ }
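
The mass-storage hunk moves the format decode ahead of the CDB sanity check so a non-zero start track is tolerated for format 1, and answers format 1 (multi-session info) with the same stub reply as the formatted TOC. A tiny userspace check of the decode; note the SFF-8020i fallback condition below is paraphrased from the visible context, not quoted from the driver:

    #include <stdio.h>
    #include <stdint.h>

    static unsigned int toc_format(const uint8_t *cdb)
    {
        unsigned int format = cdb[2] & 0xf;

        if (format == 0)                /* old style: 2 MSBs of byte 9 */
            format = (cdb[9] >> 6) & 0x3;
        return format;
    }

    int main(void)
    {
        uint8_t cdb[10] = {0};

        cdb[2] = 0x01;                  /* multi-session info */
        printf("format=%u\n", toc_format(cdb)); /* 1: now served like 0 */
        return 0;
    }
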
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index d3feeeb50841b..71669e0e4d007 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -141,7 +141,8 @@ static struct usb_endpoint_descriptor uvc_fs_streaming_ep = {
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_SYNC_ASYNC
+ | USB_ENDPOINT_XFER_ISOC,
+- /* The wMaxPacketSize and bInterval values will be initialized from
++ /*
++ * The wMaxPacketSize and bInterval values will be initialized from
+ * module parameters.
+ */
+ };
+@@ -152,7 +153,8 @@ static struct usb_endpoint_descriptor uvc_hs_streaming_ep = {
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_SYNC_ASYNC
+ | USB_ENDPOINT_XFER_ISOC,
+- /* The wMaxPacketSize and bInterval values will be initialized from
++ /*
++ * The wMaxPacketSize and bInterval values will be initialized from
+ * module parameters.
+ */
+ };
+@@ -164,7 +166,8 @@ static struct usb_endpoint_descriptor uvc_ss_streaming_ep = {
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_SYNC_ASYNC
+ | USB_ENDPOINT_XFER_ISOC,
+- /* The wMaxPacketSize and bInterval values will be initialized from
++ /*
++ * The wMaxPacketSize and bInterval values will be initialized from
+ * module parameters.
+ */
+ };
+@@ -172,7 +175,8 @@ static struct usb_endpoint_descriptor uvc_ss_streaming_ep = {
+ static struct usb_ss_ep_comp_descriptor uvc_ss_streaming_comp = {
+ .bLength = sizeof(uvc_ss_streaming_comp),
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+- /* The bMaxBurst, bmAttributes and wBytesPerInterval values will be
++ /*
++ * The bMaxBurst, bmAttributes and wBytesPerInterval values will be
+ * initialized from module parameters.
+ */
+ };
+@@ -234,7 +238,8 @@ uvc_function_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
+ if (le16_to_cpu(ctrl->wLength) > UVC_MAX_REQUEST_SIZE)
+ return -EINVAL;
+
+- /* Tell the complete callback to generate an event for the next request
++ /*
++ * Tell the complete callback to generate an event for the next request
+ * that will be enqueued by UVCIOC_SEND_RESPONSE.
+ */
+ uvc->event_setup_out = !(ctrl->bRequestType & USB_DIR_IN);
+@@ -500,7 +505,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
+ if (!uvc_control_desc || !uvc_streaming_cls)
+ return ERR_PTR(-ENODEV);
+
+- /* Descriptors layout
++ /*
++ * Descriptors layout
+ *
+ * uvc_iad
+ * uvc_control_intf
+@@ -597,8 +603,7 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
+ uvcg_info(f, "%s()\n", __func__);
+
+ opts = fi_to_f_uvc_opts(f->fi);
+- /* Sanity check the streaming endpoint module parameters.
+- */
++ /* Sanity check the streaming endpoint module parameters. */
+ opts->streaming_interval = clamp(opts->streaming_interval, 1U, 16U);
+ opts->streaming_maxpacket = clamp(opts->streaming_maxpacket, 1U, 3072U);
+ opts->streaming_maxburst = min(opts->streaming_maxburst, 15U);
+@@ -611,7 +616,8 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
+ opts->streaming_maxpacket);
+ }
+
+- /* Fill in the FS/HS/SS Video Streaming specific descriptors from the
++ /*
++ * Fill in the FS/HS/SS Video Streaming specific descriptors from the
+ * module parameters.
+ *
+ * NOTE: We assume that the user knows what they are doing and won't
+@@ -895,7 +901,8 @@ static void uvc_function_unbind(struct usb_configuration *c,
+
+ uvcg_info(f, "%s()\n", __func__);
+
+- /* If we know we're connected via v4l2, then there should be a cleanup
++ /*
++ * If we know we're connected via v4l2, then there should be a cleanup
+ * of the device from userspace either via UVC_EVENT_DISCONNECT or
+ * though the video device removal uevent. Allow some time for the
+ * application to close out before things get deleted.
+@@ -912,7 +919,8 @@ static void uvc_function_unbind(struct usb_configuration *c,
+ v4l2_device_unregister(&uvc->v4l2_dev);
+
+ if (uvc->func_connected) {
+- /* Wait for the release to occur to ensure there are no longer any
++ /*
++ * Wait for the release to occur to ensure there are no longer any
+ * pending operations that may cause panics when resources are cleaned
+ * up.
+ */
+diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
+index d25edc3d2174e..951934aa44541 100644
+--- a/drivers/usb/gadget/function/uvc_queue.c
++++ b/drivers/usb/gadget/function/uvc_queue.c
+@@ -104,7 +104,8 @@ static void uvc_buffer_queue(struct vb2_buffer *vb)
+ if (likely(!(queue->flags & UVC_QUEUE_DISCONNECTED))) {
+ list_add_tail(&buf->queue, &queue->irqqueue);
+ } else {
+- /* If the device is disconnected return the buffer to userspace
++ /*
++ * If the device is disconnected return the buffer to userspace
+ * directly. The next QBUF call will fail with -ENODEV.
+ */
+ buf->state = UVC_BUF_STATE_ERROR;
+@@ -255,7 +256,8 @@ void uvcg_queue_cancel(struct uvc_video_queue *queue, int disconnect)
+ }
+ queue->buf_used = 0;
+
+- /* This must be protected by the irqlock spinlock to avoid race
++ /*
++ * This must be protected by the irqlock spinlock to avoid race
+ * conditions between uvc_queue_buffer and the disconnection event that
+ * could result in an interruptible wait in uvc_dequeue_buffer. Do not
+ * blindly replace this logic by checking for the UVC_DEV_DISCONNECTED
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index d42bb3346745c..ce421d9cc241b 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -378,7 +378,8 @@ static void uvcg_video_pump(struct work_struct *work)
+ int ret;
+
+ while (video->ep->enabled) {
+- /* Retrieve the first available USB request, protected by the
++ /*
++ * Retrieve the first available USB request, protected by the
+ * request lock.
+ */
+ spin_lock_irqsave(&video->req_lock, flags);
+@@ -391,7 +392,8 @@ static void uvcg_video_pump(struct work_struct *work)
+ list_del(&req->list);
+ spin_unlock_irqrestore(&video->req_lock, flags);
+
+- /* Retrieve the first available video buffer and fill the
++ /*
++ * Retrieve the first available video buffer and fill the
+ * request, protected by the video queue irqlock.
+ */
+ spin_lock_irqsave(&queue->irqlock, flags);
+@@ -403,9 +405,11 @@ static void uvcg_video_pump(struct work_struct *work)
+
+ video->encode(req, video, buf);
+
+- /* With usb3 we have more requests. This will decrease the
++ /*
++ * With usb3 we have more requests. This will decrease the
+ * interrupt load to a quarter but also catches the corner
+- * cases, which needs to be handled */
++ * cases, which needs to be handled.
++ */
+ if (list_empty(&video->req_free) ||
+ buf->state == UVC_BUF_STATE_DONE ||
+ !(video->req_int_count %
+diff --git a/drivers/usb/gadget/udc/Kconfig b/drivers/usb/gadget/udc/Kconfig
+index 69394dc1cdfb6..2cdd37be165a4 100644
+--- a/drivers/usb/gadget/udc/Kconfig
++++ b/drivers/usb/gadget/udc/Kconfig
+@@ -311,7 +311,7 @@ source "drivers/usb/gadget/udc/bdc/Kconfig"
+
+ config USB_AMD5536UDC
+ tristate "AMD5536 UDC"
+- depends on USB_PCI
++ depends on USB_PCI && HAS_DMA
+ select USB_SNP_CORE
+ help
+ The AMD5536 UDC is part of the AMD Geode CS5536, an x86 southbridge.
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/hub.c b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+index 65cd4e46f031f..e2207d0146204 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/hub.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+@@ -1059,8 +1059,10 @@ static int ast_vhub_init_desc(struct ast_vhub *vhub)
+ /* Initialize vhub String Descriptors. */
+ INIT_LIST_HEAD(&vhub->vhub_str_desc);
+ desc_np = of_get_child_by_name(vhub_np, "vhub-strings");
+- if (desc_np)
++ if (desc_np) {
+ ret = ast_vhub_of_parse_str_desc(vhub, desc_np);
++ of_node_put(desc_np);
++ }
+ else
+ ret = ast_vhub_str_alloc_add(vhub, &ast_vhub_strings);
+
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 6d31ccf6aee5c..3c37effdfa643 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3691,15 +3691,15 @@ static int tegra_xudc_powerdomain_init(struct tegra_xudc *xudc)
+ int err;
+
+ xudc->genpd_dev_device = dev_pm_domain_attach_by_name(dev, "dev");
+- if (IS_ERR(xudc->genpd_dev_device)) {
+- err = PTR_ERR(xudc->genpd_dev_device);
++ if (IS_ERR_OR_NULL(xudc->genpd_dev_device)) {
++ err = PTR_ERR(xudc->genpd_dev_device) ? : -ENODATA;
+ dev_err(dev, "failed to get device power domain: %d\n", err);
+ return err;
+ }
+
+ xudc->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "ss");
+- if (IS_ERR(xudc->genpd_dev_ss)) {
+- err = PTR_ERR(xudc->genpd_dev_ss);
++ if (IS_ERR_OR_NULL(xudc->genpd_dev_ss)) {
++ err = PTR_ERR(xudc->genpd_dev_ss) ? : -ENODATA;
+ dev_err(dev, "failed to get SuperSpeed power domain: %d\n", err);
+ return err;
+ }
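
This genpd hunk (and the matching one in xhci-tegra.c further down) handles the fact that dev_pm_domain_attach_by_name() may return NULL, not just an ERR_PTR, when the device tree describes no matching power domain; testing only IS_ERR() let NULL flow into later dereferences. Since PTR_ERR(NULL) is 0, the elvis expression substitutes a real errno. A sketch of the idiom:

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/errno.h>

    static int demo_attach_check(struct device *genpd_dev)
    {
        if (IS_ERR_OR_NULL(genpd_dev))
            /* PTR_ERR(NULL) is 0, so fall back to -ENODATA */
            return PTR_ERR(genpd_dev) ? : -ENODATA;
        return 0;
    }
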
+diff --git a/drivers/usb/host/ehci-ppc-of.c b/drivers/usb/host/ehci-ppc-of.c
+index 6bbaee74f7e7d..28a19693c19fe 100644
+--- a/drivers/usb/host/ehci-ppc-of.c
++++ b/drivers/usb/host/ehci-ppc-of.c
+@@ -148,6 +148,7 @@ static int ehci_hcd_ppc_of_probe(struct platform_device *op)
+ } else {
+ ehci->has_amcc_usb23 = 1;
+ }
++ of_node_put(np);
+ }
+
+ if (of_get_property(dn, "big-endian", NULL)) {
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index a24aea3d2759e..98326465e2dc2 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -13,6 +13,7 @@
+ * This file is licenced under the GPL.
+ */
+
++#include <linux/arm-smccc.h>
+ #include <linux/clk.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/gpio/consumer.h>
+@@ -55,6 +56,7 @@ struct ohci_at91_priv {
+ bool clocked;
+ bool wakeup; /* Saved wake-up state for resume */
+ struct regmap *sfr_regmap;
++ u32 suspend_smc_id;
+ };
+ /* interface and function clocks; sometimes also an AHB clock */
+
+@@ -135,6 +137,19 @@ static void at91_stop_hc(struct platform_device *pdev)
+
+ static void usb_hcd_at91_remove (struct usb_hcd *, struct platform_device *);
+
++static u32 at91_dt_suspend_smc(struct device *dev)
++{
++ u32 suspend_smc_id;
++
++ if (!dev->of_node)
++ return 0;
++
++ if (of_property_read_u32(dev->of_node, "microchip,suspend-smc-id", &suspend_smc_id))
++ return 0;
++
++ return suspend_smc_id;
++}
++
+ static struct regmap *at91_dt_syscon_sfr(void)
+ {
+ struct regmap *regmap;
+@@ -215,9 +230,13 @@ static int usb_hcd_at91_probe(const struct hc_driver *driver,
+ goto err;
+ }
+
+- ohci_at91->sfr_regmap = at91_dt_syscon_sfr();
+- if (!ohci_at91->sfr_regmap)
+- dev_dbg(dev, "failed to find sfr node\n");
++ ohci_at91->suspend_smc_id = at91_dt_suspend_smc(dev);
++ if (!ohci_at91->suspend_smc_id) {
++ dev_dbg(dev, "failed to find sfr suspend smc id, using regmap\n");
++ ohci_at91->sfr_regmap = at91_dt_syscon_sfr();
++ if (!ohci_at91->sfr_regmap)
++ dev_dbg(dev, "failed to find sfr node\n");
++ }
+
+ board = hcd->self.controller->platform_data;
+ ohci = hcd_to_ohci(hcd);
+@@ -303,24 +322,30 @@ static int ohci_at91_hub_status_data(struct usb_hcd *hcd, char *buf)
+ return length;
+ }
+
+-static int ohci_at91_port_suspend(struct regmap *regmap, u8 set)
++static int ohci_at91_port_suspend(struct ohci_at91_priv *ohci_at91, u8 set)
+ {
++ struct regmap *regmap = ohci_at91->sfr_regmap;
+ u32 regval;
+ int ret;
+
+- if (!regmap)
+- return 0;
++ if (ohci_at91->suspend_smc_id) {
++ struct arm_smccc_res res;
+
+- ret = regmap_read(regmap, AT91_SFR_OHCIICR, &regval);
+- if (ret)
+- return ret;
++ arm_smccc_smc(ohci_at91->suspend_smc_id, set, 0, 0, 0, 0, 0, 0, &res);
++ if (res.a0)
++ return -EINVAL;
++ } else if (regmap) {
++ ret = regmap_read(regmap, AT91_SFR_OHCIICR, &regval);
++ if (ret)
++ return ret;
+
+- if (set)
+- regval |= AT91_OHCIICR_USB_SUSPEND;
+- else
+- regval &= ~AT91_OHCIICR_USB_SUSPEND;
++ if (set)
++ regval |= AT91_OHCIICR_USB_SUSPEND;
++ else
++ regval &= ~AT91_OHCIICR_USB_SUSPEND;
+
+- regmap_write(regmap, AT91_SFR_OHCIICR, regval);
++ regmap_write(regmap, AT91_SFR_OHCIICR, regval);
++ }
+
+ return 0;
+ }
+@@ -357,9 +382,8 @@ static int ohci_at91_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+
+ case USB_PORT_FEAT_SUSPEND:
+ dev_dbg(hcd->self.controller, "SetPortFeat: SUSPEND\n");
+- if (valid_port(wIndex) && ohci_at91->sfr_regmap) {
+- ohci_at91_port_suspend(ohci_at91->sfr_regmap,
+- 1);
++ if (valid_port(wIndex)) {
++ ohci_at91_port_suspend(ohci_at91, 1);
+ return 0;
+ }
+ break;
+@@ -400,9 +424,8 @@ static int ohci_at91_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+
+ case USB_PORT_FEAT_SUSPEND:
+ dev_dbg(hcd->self.controller, "ClearPortFeature: SUSPEND\n");
+- if (valid_port(wIndex) && ohci_at91->sfr_regmap) {
+- ohci_at91_port_suspend(ohci_at91->sfr_regmap,
+- 0);
++ if (valid_port(wIndex)) {
++ ohci_at91_port_suspend(ohci_at91, 0);
+ return 0;
+ }
+ break;
+@@ -630,10 +653,10 @@ ohci_hcd_at91_drv_suspend(struct device *dev)
+ /* flush the writes */
+ (void) ohci_readl (ohci, &ohci->regs->control);
+ msleep(1);
+- ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
++ ohci_at91_port_suspend(ohci_at91, 1);
+ at91_stop_clock(ohci_at91);
+ } else {
+- ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
++ ohci_at91_port_suspend(ohci_at91, 1);
+ }
+
+ return ret;
+@@ -645,7 +668,7 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+ struct ohci_at91_priv *ohci_at91 = hcd_to_ohci_at91_priv(hcd);
+
+- ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);
++ ohci_at91_port_suspend(ohci_at91, 0);
+
+ if (ohci_at91->wakeup)
+ disable_irq_wake(hcd->irq);
+diff --git a/drivers/usb/host/ohci-nxp.c b/drivers/usb/host/ohci-nxp.c
+index 85878e8ad3311..106a6bcefb087 100644
+--- a/drivers/usb/host/ohci-nxp.c
++++ b/drivers/usb/host/ohci-nxp.c
+@@ -164,6 +164,7 @@ static int ohci_hcd_nxp_probe(struct platform_device *pdev)
+ }
+
+ isp1301_i2c_client = isp1301_get_client(isp1301_node);
++ of_node_put(isp1301_node);
+ if (!isp1301_i2c_client)
+ return -EPROBE_DEFER;
+
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 996958a6565c3..bdb776553826b 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1010,15 +1010,15 @@ static int tegra_xusb_powerdomain_init(struct device *dev,
+ int err;
+
+ tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host");
+- if (IS_ERR(tegra->genpd_dev_host)) {
+- err = PTR_ERR(tegra->genpd_dev_host);
++ if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) {
++ err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA;
+ dev_err(dev, "failed to get host pm-domain: %d\n", err);
+ return err;
+ }
+
+ tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss");
+- if (IS_ERR(tegra->genpd_dev_ss)) {
+- err = PTR_ERR(tegra->genpd_dev_ss);
++ if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) {
++ err = PTR_ERR(tegra->genpd_dev_ss) ? : -ENODATA;
+ dev_err(dev, "failed to get superspeed pm-domain: %d\n", err);
+ return err;
+ }
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 28aaf031f9a8b..1960b47acfb28 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2417,7 +2417,7 @@ static inline const char *xhci_decode_trb(char *str, size_t size,
+ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_STOP_RING:
+- sprintf(str,
++ snprintf(str, size,
+ "%s: slot %d sp %d ep %d flags %c",
+ xhci_trb_type_string(type),
+ TRB_TO_SLOT_ID(field3),
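
xhci_decode_trb() already receives the destination size, and TRB_STOP_RING was the remaining case still formatting with unbounded sprintf(); snprintf() caps the write and guarantees NUL termination. A userspace demonstration of the truncation behaviour:

    #include <stdio.h>

    int main(void)
    {
        char buf[16];

        /* snprintf() never writes past sizeof(buf) and NUL-terminates. */
        int n = snprintf(buf, sizeof(buf), "slot %d sp %d ep %d flags %c",
                         123456, 1, 30, 'C');

        printf("wanted %d bytes, stored \"%s\"\n", n, buf);
        return 0;
    }

snprintf() returns the length it would have needed, so a caller can detect truncation by comparing the return value against the buffer size.
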
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index 9d56138133a97..ef6a2891f290c 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -737,7 +737,8 @@ static void sierra_close(struct usb_serial_port *port)
+
+ /*
+ * Need to take susp_lock to make sure port is not already being
+- * resumed, but no need to hold it due to initialized
++ * resumed, but no need to hold it due to the tty-port initialized
++ * flag.
+ */
+ spin_lock_irq(&intfdata->susp_lock);
+ if (--intfdata->open_ports == 0)
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
+index 24101bd7fcad2..e35bea2235c1c 100644
+--- a/drivers/usb/serial/usb-serial.c
++++ b/drivers/usb/serial/usb-serial.c
+@@ -295,7 +295,7 @@ static int serial_open(struct tty_struct *tty, struct file *filp)
+ *
+ * Shut down a USB serial port. Serialized against activate by the
+ * tport mutex and kept to matching open/close pairs
+- * of calls by the initialized flag.
++ * of calls by the tty-port initialized flag.
+ *
+ * Not called if tty is console.
+ */
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index dab38b63eaf7f..cc81ab7ef4da1 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -388,7 +388,8 @@ void usb_wwan_close(struct usb_serial_port *port)
+
+ /*
+ * Need to take susp_lock to make sure port is not already being
+- * resumed, but no need to hold it due to initialized
++ * resumed, but no need to hold it due to the tty-port initialized
++ * flag.
+ */
+ spin_lock_irq(&intfdata->susp_lock);
+ if (--intfdata->open_ports == 0)
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index cbd862f9f2a15..1aea46493b852 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -76,6 +76,10 @@ static int ucsi_read_error(struct ucsi *ucsi)
+ if (ret)
+ return ret;
+
++ ret = ucsi_acknowledge_command(ucsi);
++ if (ret)
++ return ret;
++
+ switch (error) {
+ case UCSI_ERROR_INCOMPATIBLE_PARTNER:
+ return -EOPNOTSUPP;
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index d1cf6b51bf85d..c95e6b2bfd32a 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -128,7 +128,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ goto unlock;
+ }
+
+- spin_lock_irq(&udc->ud.lock);
++ spin_lock(&udc->ud.lock);
+
+ if (udc->ud.status != SDEV_ST_AVAILABLE) {
+ ret = -EINVAL;
+@@ -150,7 +150,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ }
+
+ /* unlock and create threads and get tasks */
+- spin_unlock_irq(&udc->ud.lock);
++ spin_unlock(&udc->ud.lock);
+ spin_unlock_irqrestore(&udc->lock, flags);
+
+ tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
+@@ -173,14 +173,14 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+
+ /* lock and update udc->ud state */
+ spin_lock_irqsave(&udc->lock, flags);
+- spin_lock_irq(&udc->ud.lock);
++ spin_lock(&udc->ud.lock);
+
+ udc->ud.tcp_socket = socket;
+ udc->ud.tcp_rx = tcp_rx;
+ udc->ud.tcp_tx = tcp_tx;
+ udc->ud.status = SDEV_ST_USED;
+
+- spin_unlock_irq(&udc->ud.lock);
++ spin_unlock(&udc->ud.lock);
+
+ ktime_get_ts64(&udc->start_time);
+ v_start_timer(udc);
+@@ -201,12 +201,12 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ goto unlock;
+ }
+
+- spin_lock_irq(&udc->ud.lock);
++ spin_lock(&udc->ud.lock);
+ if (udc->ud.status != SDEV_ST_USED) {
+ ret = -EINVAL;
+ goto unlock_ud;
+ }
+- spin_unlock_irq(&udc->ud.lock);
++ spin_unlock(&udc->ud.lock);
+
+ usbip_event_add(&udc->ud, VUDC_EVENT_DOWN);
+ }
+@@ -219,7 +219,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ sock_err:
+ sockfd_put(socket);
+ unlock_ud:
+- spin_unlock_irq(&udc->ud.lock);
++ spin_unlock(&udc->ud.lock);
+ unlock:
+ spin_unlock_irqrestore(&udc->lock, flags);
+ mutex_unlock(&udc->ud.sysfs_lock);
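
Every converted site in vudc_sysfs.c already ran under spin_lock_irqsave(&udc->lock, flags), so interrupts were disabled on entry; the inner spin_lock_irq()/spin_unlock_irq() pair was wrong because spin_unlock_irq() unconditionally re-enables interrupts in the middle of the outer critical section. Plain spin_lock()/spin_unlock() leaves the interrupt state untouched. A sketch of the rule:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(outer);
    static DEFINE_SPINLOCK(inner);

    static void demo_nested(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&outer, flags);   /* irqs now off, state saved */

        spin_lock(&inner);                  /* correct: irq state untouched */
        /* ... */
        spin_unlock(&inner);

        /*
         * spin_unlock_irq(&inner) here would force irqs back on inside
         * the outer critical section -- the bug the hunk above removes.
         */

        spin_unlock_irqrestore(&outer, flags);
    }
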
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index 4def43f5f7b61..ea762e28c1cc6 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -1185,7 +1185,7 @@ static int hisi_acc_vfio_pci_open_device(struct vfio_device *core_vdev)
+ if (ret)
+ return ret;
+
+- if (core_vdev->ops->migration_set_state) {
++ if (core_vdev->mig_ops) {
+ ret = hisi_acc_vf_qm_init(hisi_acc_vdev);
+ if (ret) {
+ vfio_pci_core_disable(vdev);
+@@ -1208,6 +1208,11 @@ static void hisi_acc_vfio_pci_close_device(struct vfio_device *core_vdev)
+ vfio_pci_core_close_device(core_vdev);
+ }
+
++static const struct vfio_migration_ops hisi_acc_vfio_pci_migrn_state_ops = {
++ .migration_set_state = hisi_acc_vfio_pci_set_device_state,
++ .migration_get_state = hisi_acc_vfio_pci_get_device_state,
++};
++
+ static const struct vfio_device_ops hisi_acc_vfio_pci_migrn_ops = {
+ .name = "hisi-acc-vfio-pci-migration",
+ .open_device = hisi_acc_vfio_pci_open_device,
+@@ -1219,8 +1224,6 @@ static const struct vfio_device_ops hisi_acc_vfio_pci_migrn_ops = {
+ .mmap = hisi_acc_vfio_pci_mmap,
+ .request = vfio_pci_core_request,
+ .match = vfio_pci_core_match,
+- .migration_set_state = hisi_acc_vfio_pci_set_device_state,
+- .migration_get_state = hisi_acc_vfio_pci_get_device_state,
+ };
+
+ static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
+@@ -1272,6 +1275,8 @@ static int hisi_acc_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device
+ if (!ret) {
+ vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
+ &hisi_acc_vfio_pci_migrn_ops);
++ hisi_acc_vdev->core_device.vdev.mig_ops =
++ &hisi_acc_vfio_pci_migrn_state_ops;
+ } else {
+ pci_warn(pdev, "migration support failed, continue with generic interface\n");
+ vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 9b9f33ca270a2..dd5d7bfe0a498 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -88,6 +88,16 @@ static int mlx5fv_vf_event(struct notifier_block *nb,
+ return 0;
+ }
+
++void mlx5vf_cmd_close_migratable(struct mlx5vf_pci_core_device *mvdev)
++{
++ if (!mvdev->migrate_cap)
++ return;
++
++ mutex_lock(&mvdev->state_mutex);
++ mlx5vf_disable_fds(mvdev);
++ mlx5vf_state_mutex_unlock(mvdev);
++}
++
+ void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev)
+ {
+ if (!mvdev->migrate_cap)
+@@ -98,7 +108,8 @@ void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev)
+ destroy_workqueue(mvdev->cb_wq);
+ }
+
+-void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev)
++void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev,
++ const struct vfio_migration_ops *mig_ops)
+ {
+ struct pci_dev *pdev = mvdev->core_device.pdev;
+ int ret;
+@@ -139,6 +150,7 @@ void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev)
+ mvdev->core_device.vdev.migration_flags =
+ VFIO_MIGRATION_STOP_COPY |
+ VFIO_MIGRATION_P2P;
++ mvdev->core_device.vdev.mig_ops = mig_ops;
+
+ end:
+ mlx5_vf_put_core_dev(mvdev->mdev);
+diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h
+index 6c3112fdd8b1d..8208f4701a908 100644
+--- a/drivers/vfio/pci/mlx5/cmd.h
++++ b/drivers/vfio/pci/mlx5/cmd.h
+@@ -62,8 +62,10 @@ int mlx5vf_cmd_suspend_vhca(struct mlx5vf_pci_core_device *mvdev, u16 op_mod);
+ int mlx5vf_cmd_resume_vhca(struct mlx5vf_pci_core_device *mvdev, u16 op_mod);
+ int mlx5vf_cmd_query_vhca_migration_state(struct mlx5vf_pci_core_device *mvdev,
+ size_t *state_size);
+-void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev);
++void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev,
++ const struct vfio_migration_ops *mig_ops);
+ void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev);
++void mlx5vf_cmd_close_migratable(struct mlx5vf_pci_core_device *mvdev);
+ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev,
+ struct mlx5_vf_migration_file *migf);
+ int mlx5vf_cmd_load_vhca_state(struct mlx5vf_pci_core_device *mvdev,
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 0558d0649ddb8..a9b63d15c5d34 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -570,10 +570,15 @@ static void mlx5vf_pci_close_device(struct vfio_device *core_vdev)
+ struct mlx5vf_pci_core_device *mvdev = container_of(
+ core_vdev, struct mlx5vf_pci_core_device, core_device.vdev);
+
+- mlx5vf_disable_fds(mvdev);
++ mlx5vf_cmd_close_migratable(mvdev);
+ vfio_pci_core_close_device(core_vdev);
+ }
+
++static const struct vfio_migration_ops mlx5vf_pci_mig_ops = {
++ .migration_set_state = mlx5vf_pci_set_device_state,
++ .migration_get_state = mlx5vf_pci_get_device_state,
++};
++
+ static const struct vfio_device_ops mlx5vf_pci_ops = {
+ .name = "mlx5-vfio-pci",
+ .open_device = mlx5vf_pci_open_device,
+@@ -585,8 +590,6 @@ static const struct vfio_device_ops mlx5vf_pci_ops = {
+ .mmap = vfio_pci_core_mmap,
+ .request = vfio_pci_core_request,
+ .match = vfio_pci_core_match,
+- .migration_set_state = mlx5vf_pci_set_device_state,
+- .migration_get_state = mlx5vf_pci_get_device_state,
+ };
+
+ static int mlx5vf_pci_probe(struct pci_dev *pdev,
+@@ -599,7 +602,7 @@ static int mlx5vf_pci_probe(struct pci_dev *pdev,
+ if (!mvdev)
+ return -ENOMEM;
+ vfio_pci_core_init_device(&mvdev->core_device, pdev, &mlx5vf_pci_ops);
+- mlx5vf_cmd_set_migratable(mvdev);
++ mlx5vf_cmd_set_migratable(mvdev, &mlx5vf_pci_mig_ops);
+ dev_set_drvdata(&pdev->dev, &mvdev->core_device);
+ ret = vfio_pci_core_register_device(&mvdev->core_device);
+ if (ret)
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index a0d69ddaf90d8..2efa06b1fafaa 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1855,6 +1855,13 @@ int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev)
+ if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
+ return -EINVAL;
+
++ if (vdev->vdev.mig_ops) {
++ if (!(vdev->vdev.mig_ops->migration_get_state &&
++ vdev->vdev.mig_ops->migration_set_state) ||
++ !(vdev->vdev.migration_flags & VFIO_MIGRATION_STOP_COPY))
++ return -EINVAL;
++ }
++
+ /*
+ * Prevent binding to PFs with VFs enabled, the VFs might be in use
+ * by the host or other users. We cannot capture the VFs if they
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index e60b06f2ac223..18fc0916587ec 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -1544,8 +1544,7 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
+ struct file *filp = NULL;
+ int ret;
+
+- if (!device->ops->migration_set_state ||
+- !device->ops->migration_get_state)
++ if (!device->mig_ops)
+ return -ENOTTY;
+
+ ret = vfio_check_feature(flags, argsz,
+@@ -1561,7 +1560,8 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
+ if (flags & VFIO_DEVICE_FEATURE_GET) {
+ enum vfio_device_mig_state curr_state;
+
+- ret = device->ops->migration_get_state(device, &curr_state);
++ ret = device->mig_ops->migration_get_state(device,
++ &curr_state);
+ if (ret)
+ return ret;
+ mig.device_state = curr_state;
+@@ -1569,7 +1569,7 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
+ }
+
+ /* Handle the VFIO_DEVICE_FEATURE_SET */
+- filp = device->ops->migration_set_state(device, mig.device_state);
++ filp = device->mig_ops->migration_set_state(device, mig.device_state);
+ if (IS_ERR(filp) || !filp)
+ goto out_copy;
+
+@@ -1592,8 +1592,7 @@ static int vfio_ioctl_device_feature_migration(struct vfio_device *device,
+ };
+ int ret;
+
+- if (!device->ops->migration_set_state ||
+- !device->ops->migration_get_state)
++ if (!device->mig_ops)
+ return -ENOTTY;
+
+ ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET,
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index 8080116aea844..f65c96d1394d3 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -698,16 +698,18 @@ static int clcdfb_of_init_display(struct clcd_fb *fb)
+ return -ENODEV;
+
+ panel = of_graph_get_remote_port_parent(endpoint);
+- if (!panel)
+- return -ENODEV;
++ if (!panel) {
++ err = -ENODEV;
++ goto out_endpoint_put;
++ }
+
+ err = clcdfb_of_get_backlight(&fb->dev->dev, fb->panel);
+ if (err)
+- return err;
++ goto out_panel_put;
+
+ err = clcdfb_of_get_mode(&fb->dev->dev, panel, fb->panel);
+ if (err)
+- return err;
++ goto out_panel_put;
+
+ err = of_property_read_u32(fb->dev->dev.of_node, "max-memory-bandwidth",
+ &max_bandwidth);
+@@ -736,11 +738,21 @@ static int clcdfb_of_init_display(struct clcd_fb *fb)
+
+ if (of_property_read_u32_array(endpoint,
+ "arm,pl11x,tft-r0g0b0-pads",
+- tft_r0b0g0, ARRAY_SIZE(tft_r0b0g0)) != 0)
+- return -ENOENT;
++ tft_r0b0g0, ARRAY_SIZE(tft_r0b0g0)) != 0) {
++ err = -ENOENT;
++ goto out_panel_put;
++ }
++
++ of_node_put(panel);
++ of_node_put(endpoint);
+
+ return clcdfb_of_init_tft_panel(fb, tft_r0b0g0[0],
+ tft_r0b0g0[1], tft_r0b0g0[2]);
++out_panel_put:
++ of_node_put(panel);
++out_endpoint_put:
++ of_node_put(endpoint);
++ return err;
+ }
+
+ static int clcdfb_of_vram_setup(struct clcd_fb *fb)
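
The clcdfb hunk converts early returns into a goto unwind so that the references taken on the endpoint and on of_graph_get_remote_port_parent() are dropped on every exit path; OF node getters return with an elevated refcount that the caller owns. Minimal shape of the pattern (the lookups here are simplified stand-ins, not the driver's):

    #include <linux/errno.h>
    #include <linux/of.h>

    static int demo_init(struct device_node *root)
    {
        struct device_node *endpoint, *panel;
        int err;

        endpoint = of_get_next_child(root, NULL);       /* +1 ref */
        if (!endpoint)
            return -ENODEV;

        panel = of_get_next_child(endpoint, NULL);      /* +1 ref */
        if (!panel) {
            err = -ENODEV;
            goto out_endpoint_put;
        }

        if (!of_device_is_available(panel)) {
            err = -ENODEV;
            goto out_panel_put;
        }

        err = 0;
    out_panel_put:
        of_node_put(panel);
    out_endpoint_put:
        of_node_put(endpoint);
        return err;
    }
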
+diff --git a/drivers/video/fbdev/arkfb.c b/drivers/video/fbdev/arkfb.c
+index eb3e47c58c5f7..a2a381631628e 100644
+--- a/drivers/video/fbdev/arkfb.c
++++ b/drivers/video/fbdev/arkfb.c
+@@ -781,7 +781,12 @@ static int arkfb_set_par(struct fb_info *info)
+ return -EINVAL;
+ }
+
+- ark_set_pixclock(info, (hdiv * info->var.pixclock) / hmul);
++ value = (hdiv * info->var.pixclock) / hmul;
++ if (!value) {
++ fb_dbg(info, "invalid pixclock\n");
++ value = 1;
++ }
++ ark_set_pixclock(info, value);
+ svga_set_timings(par->state.vgabase, &ark_timing_regs, &(info->var), hmul, hdiv,
+ (info->var.vmode & FB_VMODE_DOUBLE) ? 2 : 1,
+ (info->var.vmode & FB_VMODE_INTERLACED) ? 2 : 1,
+@@ -792,6 +797,8 @@ static int arkfb_set_par(struct fb_info *info)
+ value = ((value * hmul / hdiv) / 8) - 5;
+ vga_wcrt(par->state.vgabase, 0x42, (value + 1) / 2);
+
++ if (screen_size > info->screen_size)
++ screen_size = info->screen_size;
+ memset_io(info->screen_base, 0x00, screen_size);
+ /* Device and screen back on */
+ svga_wcrt_mask(par->state.vgabase, 0x17, 0x80, 0x80);
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 1a9aa12cf8860..b89075f3b6ab7 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -125,8 +125,8 @@ static int logo_lines;
+ enums. */
+ static int logo_shown = FBCON_LOGO_CANSHOW;
+ /* console mappings */
+-static int first_fb_vc;
+-static int last_fb_vc = MAX_NR_CONSOLES - 1;
++static unsigned int first_fb_vc;
++static unsigned int last_fb_vc = MAX_NR_CONSOLES - 1;
+ static int fbcon_is_default = 1;
+ static int primary_device = -1;
+ static int fbcon_has_console_bind;
+@@ -440,10 +440,12 @@ static int __init fb_console_setup(char *this_opt)
+ options += 3;
+ if (*options)
+ first_fb_vc = simple_strtoul(options, &options, 10) - 1;
+- if (first_fb_vc < 0)
++ if (first_fb_vc >= MAX_NR_CONSOLES)
+ first_fb_vc = 0;
+ if (*options++ == '-')
+ last_fb_vc = simple_strtoul(options, &options, 10) - 1;
++ if (last_fb_vc < first_fb_vc || last_fb_vc >= MAX_NR_CONSOLES)
++ last_fb_vc = MAX_NR_CONSOLES - 1;
+ fbcon_is_default = 0;
+ continue;
+ }
+@@ -1758,8 +1760,6 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ case SM_UP:
+ if (count > vc->vc_rows) /* Maximum realistic size */
+ count = vc->vc_rows;
+- if (logo_shown >= 0)
+- goto redraw_up;
+ switch (fb_scrollmode(p)) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, t, b - t - count,
+@@ -1848,8 +1848,6 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ case SM_DOWN:
+ if (count > vc->vc_rows) /* Maximum realistic size */
+ count = vc->vc_rows;
+- if (logo_shown >= 0)
+- goto redraw_down;
+ switch (fb_scrollmode(p)) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
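
In fb_console_setup(), making first_fb_vc/last_fb_vc unsigned changes what invalid input looks like: simple_strtoul("0") - 1 now wraps to a huge value instead of going negative, so the old < 0 test becomes a >= MAX_NR_CONSOLES bound, and last_fb_vc is additionally clamped to keep the range ordered. A userspace model of the wrap-and-clamp:

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_NR_CONSOLES 63

    int main(void)
    {
        const char *opt = "0";          /* user passed console index 0 */
        unsigned int first = strtoul(opt, NULL, 10) - 1;  /* wraps to UINT_MAX */

        if (first >= MAX_NR_CONSOLES)   /* catches both 0 and oversized input */
            first = 0;
        printf("first_fb_vc=%u\n", first);
        return 0;
    }
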
+diff --git a/drivers/video/fbdev/offb.c b/drivers/video/fbdev/offb.c
+index b1acb1ebebe90..91001990e351c 100644
+--- a/drivers/video/fbdev/offb.c
++++ b/drivers/video/fbdev/offb.c
+@@ -26,6 +26,7 @@
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+ #include <linux/pci.h>
++#include <linux/platform_device.h>
+ #include <asm/io.h>
+
+ #ifdef CONFIG_PPC32
+diff --git a/drivers/video/fbdev/s3fb.c b/drivers/video/fbdev/s3fb.c
+index b93c8eb023369..5069f6f67923f 100644
+--- a/drivers/video/fbdev/s3fb.c
++++ b/drivers/video/fbdev/s3fb.c
+@@ -905,6 +905,8 @@ static int s3fb_set_par(struct fb_info *info)
+ value = clamp((htotal + hsstart + 1) / 2 + 2, hsstart + 4, htotal + 1);
+ svga_wcrt_multi(par->state.vgabase, s3_dtpc_regs, value);
+
++ if (screen_size > info->screen_size)
++ screen_size = info->screen_size;
+ memset_io(info->screen_base, 0x00, screen_size);
+ /* Device and screen back on */
+ svga_wcrt_mask(par->state.vgabase, 0x17, 0x80, 0x80);
+diff --git a/drivers/video/fbdev/sis/init.c b/drivers/video/fbdev/sis/init.c
+index b568c646a76c2..2ba91d62af92e 100644
+--- a/drivers/video/fbdev/sis/init.c
++++ b/drivers/video/fbdev/sis/init.c
+@@ -355,12 +355,12 @@ SiS_GetModeID(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay,
+ }
+ break;
+ case 400:
+- if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 800) && (LCDwidth >= 600))) {
++ if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 800) && (LCDheight >= 600))) {
+ if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
+ }
+ break;
+ case 512:
+- if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 1024) && (LCDwidth >= 768))) {
++ if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 1024) && (LCDheight >= 768))) {
+ if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
+ }
+ break;
+diff --git a/drivers/video/fbdev/vt8623fb.c b/drivers/video/fbdev/vt8623fb.c
+index a92a8c670cf0f..4274c6efb2490 100644
+--- a/drivers/video/fbdev/vt8623fb.c
++++ b/drivers/video/fbdev/vt8623fb.c
+@@ -507,6 +507,8 @@ static int vt8623fb_set_par(struct fb_info *info)
+ (info->var.vmode & FB_VMODE_DOUBLE) ? 2 : 1, 1,
+ 1, info->node);
+
++ if (screen_size > info->screen_size)
++ screen_size = info->screen_size;
+ memset_io(info->screen_base, 0x00, screen_size);
+
+ /* Device and screen back on */
+diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
+index e1556d2a355ae..56c77f63cd224 100644
+--- a/drivers/virtio/Kconfig
++++ b/drivers/virtio/Kconfig
+@@ -1,6 +1,10 @@
+ # SPDX-License-Identifier: GPL-2.0-only
++config VIRTIO_ANCHOR
++ bool
++
+ config VIRTIO
+ tristate
++ select VIRTIO_ANCHOR
+ help
+ This option is selected by any driver which implements the virtio
+ bus, such as CONFIG_VIRTIO_PCI, CONFIG_VIRTIO_MMIO, CONFIG_RPMSG
+diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
+index 0a82d08732484..8e98d24917cc0 100644
+--- a/drivers/virtio/Makefile
++++ b/drivers/virtio/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_VIRTIO) += virtio.o virtio_ring.o
++obj-$(CONFIG_VIRTIO_ANCHOR) += virtio_anchor.o
+ obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio_pci_modern_dev.o
+ obj-$(CONFIG_VIRTIO_PCI_LIB_LEGACY) += virtio_pci_legacy_dev.o
+ obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 7deeed30d1f3a..14c142d77fba1 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -2,10 +2,10 @@
+ #include <linux/virtio.h>
+ #include <linux/spinlock.h>
+ #include <linux/virtio_config.h>
++#include <linux/virtio_anchor.h>
+ #include <linux/module.h>
+ #include <linux/idr.h>
+ #include <linux/of.h>
+-#include <linux/platform-feature.h>
+ #include <uapi/linux/virtio_ids.h>
+
+ /* Unique numbering for virtio devices. */
+@@ -174,7 +174,7 @@ static int virtio_features_ok(struct virtio_device *dev)
+
+ might_sleep();
+
+- if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
++ if (virtio_check_mem_acc_cb(dev)) {
+ if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
+ dev_warn(&dev->dev,
+ "device must provide VIRTIO_F_VERSION_1\n");
+diff --git a/drivers/virtio/virtio_anchor.c b/drivers/virtio/virtio_anchor.c
+new file mode 100644
+index 0000000000000..4d6a5d269b554
+--- /dev/null
++++ b/drivers/virtio/virtio_anchor.c
+@@ -0,0 +1,18 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include <linux/virtio.h>
++#include <linux/virtio_anchor.h>
++
++bool virtio_require_restricted_mem_acc(struct virtio_device *dev)
++{
++ return true;
++}
++EXPORT_SYMBOL_GPL(virtio_require_restricted_mem_acc);
++
++static bool virtio_no_restricted_mem_acc(struct virtio_device *dev)
++{
++ return false;
++}
++
++bool (*virtio_check_mem_acc_cb)(struct virtio_device *dev) =
++ virtio_no_restricted_mem_acc;
++EXPORT_SYMBOL_GPL(virtio_check_mem_acc_cb);
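
virtio_anchor replaces the platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS) query with a repointable callback: virtio core calls virtio_check_mem_acc_cb(dev), which defaults to the always-false helper, and a platform needing restricted access repoints it (Xen, further down, aims it at xen_virtio_mem_acc()). A sketch of how a platform would opt in, assuming the declarations live in <linux/virtio_anchor.h> as the new includes suggest:

    #include <linux/init.h>
    #include <linux/virtio.h>
    #include <linux/virtio_anchor.h>

    /* Hypothetical platform hook: force restricted access for every
     * virtio device by repointing the anchor callback at boot. */
    static int __init demo_platform_init(void)
    {
        virtio_check_mem_acc_cb = virtio_require_restricted_mem_acc;
        return 0;
    }
    early_initcall(demo_platform_init);
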
+diff --git a/drivers/watchdog/armada_37xx_wdt.c b/drivers/watchdog/armada_37xx_wdt.c
+index 1635f421ef2c3..854b1cc723cb6 100644
+--- a/drivers/watchdog/armada_37xx_wdt.c
++++ b/drivers/watchdog/armada_37xx_wdt.c
+@@ -274,6 +274,8 @@ static int armada_37xx_wdt_probe(struct platform_device *pdev)
+ if (!res)
+ return -ENODEV;
+ dev->reg = devm_ioremap(&pdev->dev, res->start, resource_size(res));
++ if (!dev->reg)
++ return -ENOMEM;
+
+ /* init clock */
+ dev->clk = devm_clk_get(&pdev->dev, NULL);
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index 7f59c680de253..6a16d3d0bb1e6 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -634,7 +634,9 @@ static int __init fintek_wdt_init(void)
+
+ pdata.type = ret;
+
+- platform_driver_register(&fintek_wdt_driver);
++ ret = platform_driver_register(&fintek_wdt_driver);
++ if (ret)
++ return ret;
+
+ wdt_res.name = "superio port";
+ wdt_res.flags = IORESOURCE_IO;
+diff --git a/drivers/watchdog/sp5100_tco.c b/drivers/watchdog/sp5100_tco.c
+index 86ffb58fbc854..ae54dd33e2336 100644
+--- a/drivers/watchdog/sp5100_tco.c
++++ b/drivers/watchdog/sp5100_tco.c
+@@ -402,6 +402,7 @@ out:
+ iounmap(addr);
+
+ release_resource(res);
++ kfree(res);
+
+ return ret;
+ }
+diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
+index bfd5f4f706bcc..a65bd92121a5d 100644
+--- a/drivers/xen/Kconfig
++++ b/drivers/xen/Kconfig
+@@ -355,4 +355,13 @@ config XEN_VIRTIO
+
+ If in doubt, say n.
+
++config XEN_VIRTIO_FORCE_GRANT
++ bool "Require Xen virtio support to use grants"
++ depends on XEN_VIRTIO
++ help
++ Require virtio for Xen guests to use grant mappings.
++ This will avoid the need to give the backend the right to map all
++ of the guest memory. This will need support on the backend side
++ (e.g. qemu or kernel, depending on the virtio device types used).
++
+ endmenu
+diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
+index fc01424840017..8973fc1e9cccd 100644
+--- a/drivers/xen/grant-dma-ops.c
++++ b/drivers/xen/grant-dma-ops.c
+@@ -12,6 +12,8 @@
+ #include <linux/of.h>
+ #include <linux/pfn.h>
+ #include <linux/xarray.h>
++#include <linux/virtio_anchor.h>
++#include <linux/virtio.h>
+ #include <xen/xen.h>
+ #include <xen/xen-ops.h>
+ #include <xen/grant_table.h>
+@@ -287,6 +289,14 @@ bool xen_is_grant_dma_device(struct device *dev)
+ return has_iommu;
+ }
+
++bool xen_virtio_mem_acc(struct virtio_device *dev)
++{
++ if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
++ return true;
++
++ return xen_is_grant_dma_device(dev->dev.parent);
++}
++
+ void xen_grant_setup_dma_ops(struct device *dev)
+ {
+ struct xen_grant_dma_data *data;
+diff --git a/fs/Makefile b/fs/Makefile
+index 208a74e0b00e1..93b80529f8e82 100644
+--- a/fs/Makefile
++++ b/fs/Makefile
+@@ -34,8 +34,6 @@ obj-$(CONFIG_TIMERFD) += timerfd.o
+ obj-$(CONFIG_EVENTFD) += eventfd.o
+ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
+ obj-$(CONFIG_AIO) += aio.o
+-obj-$(CONFIG_IO_URING) += io_uring.o
+-obj-$(CONFIG_IO_WQ) += io-wq.o
+ obj-$(CONFIG_FS_DAX) += dax.o
+ obj-$(CONFIG_FS_ENCRYPTION) += crypto/
+ obj-$(CONFIG_FS_VERITY) += verity/
+diff --git a/fs/attr.c b/fs/attr.c
+index dbe996b0dedfc..f581c4d008971 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -184,6 +184,8 @@ EXPORT_SYMBOL(setattr_prepare);
+ */
+ int inode_newsize_ok(const struct inode *inode, loff_t offset)
+ {
++ if (offset < 0)
++ return -EINVAL;
+ if (inode->i_size < offset) {
+ unsigned long limit;
+
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index ede389f2602d5..5627b43d4cc24 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1051,8 +1051,13 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ < block_group->zone_unusable);
+ WARN_ON(block_group->space_info->disk_total
+ < block_group->length * factor);
++ WARN_ON(block_group->zone_is_active &&
++ block_group->space_info->active_total_bytes
++ < block_group->length);
+ }
+ block_group->space_info->total_bytes -= block_group->length;
++ if (block_group->zone_is_active)
++ block_group->space_info->active_total_bytes -= block_group->length;
+ block_group->space_info->bytes_readonly -=
+ (block_group->length - block_group->zone_unusable);
+ block_group->space_info->bytes_zone_unusable -=
+@@ -2108,7 +2113,8 @@ static int read_one_block_group(struct btrfs_fs_info *info,
+ trace_btrfs_add_block_group(info, cache, 0);
+ btrfs_update_space_info(info, cache->flags, cache->length,
+ cache->used, cache->bytes_super,
+- cache->zone_unusable, &space_info);
++ cache->zone_unusable, cache->zone_is_active,
++ &space_info);
+
+ cache->space_info = space_info;
+
+@@ -2178,7 +2184,7 @@ static int fill_dummy_bgs(struct btrfs_fs_info *fs_info)
+ }
+
+ btrfs_update_space_info(fs_info, bg->flags, em->len, em->len,
+- 0, 0, &space_info);
++ 0, 0, false, &space_info);
+ bg->space_info = space_info;
+ link_block_group(bg);
+
+@@ -2559,7 +2565,7 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
+ trace_btrfs_add_block_group(fs_info, cache, 1);
+ btrfs_update_space_info(fs_info, cache->flags, size, bytes_used,
+ cache->bytes_super, cache->zone_unusable,
+- &cache->space_info);
++ cache->zone_is_active, &cache->space_info);
+ btrfs_update_global_block_rsv(fs_info);
+
+ link_block_group(cache);
+@@ -2659,6 +2665,14 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
+ ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+ if (ret < 0)
+ goto out;
++ /*
++ * We have allocated a new chunk. We also need to activate that chunk to
++ * grant metadata tickets for zoned filesystem.
++ */
++ ret = btrfs_zoned_activate_one_bg(fs_info, cache->space_info, true);
++ if (ret < 0)
++ goto out;
++
+ ret = inc_block_group_ro(cache, 0);
+ if (ret == -ETXTBSY)
+ goto unlock_out;
+@@ -3761,6 +3775,7 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ * attempt.
+ */
+ wait_for_alloc = true;
++ force = CHUNK_ALLOC_NO_FORCE;
+ spin_unlock(&space_info->lock);
+ mutex_lock(&fs_info->chunk_mutex);
+ mutex_unlock(&fs_info->chunk_mutex);
+@@ -3883,6 +3898,14 @@ static void reserve_chunk_space(struct btrfs_trans_handle *trans,
+ if (IS_ERR(bg)) {
+ ret = PTR_ERR(bg);
+ } else {
++ /*
++ * We have a new chunk. We also need to activate it for
++ * zoned filesystem.
++ */
++ ret = btrfs_zoned_activate_one_bg(fs_info, info, true);
++ if (ret < 0)
++ return;
++
+ /*
+ * If we fail to add the chunk item here, we end up
+ * trying again at phase 2 of chunk allocation, at
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 9c21e214d29e4..3a51d0c13a957 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -107,14 +107,6 @@ struct btrfs_ioctl_encoded_io_args;
+ #define BTRFS_STAT_CURR 0
+ #define BTRFS_STAT_PREV 1
+
+-/*
+- * Count how many BTRFS_MAX_EXTENT_SIZE cover the @size
+- */
+-static inline u32 count_max_extents(u64 size)
+-{
+- return div_u64(size + BTRFS_MAX_EXTENT_SIZE - 1, BTRFS_MAX_EXTENT_SIZE);
+-}
+-
+ static inline unsigned long btrfs_chunk_item_size(int num_stripes)
+ {
+ BUG_ON(num_stripes == 0);
+@@ -635,6 +627,9 @@ enum {
+ /* Indicate we have half completed snapshot deletions pending. */
+ BTRFS_FS_UNFINISHED_DROPS,
+
++ /* Indicate we have to finish a zone to do next allocation. */
++ BTRFS_FS_NEED_ZONE_FINISH,
++
+ #if BITS_PER_LONG == 32
+ /* Indicate if we have error/warn message printed on 32bit systems */
+ BTRFS_FS_32BIT_ERROR,
+@@ -1032,6 +1027,12 @@ struct btrfs_fs_info {
+ u32 csums_per_leaf;
+ u32 stripesize;
+
++ /*
++	 * Maximum size of an extent. BTRFS_MAX_EXTENT_SIZE on a regular
++	 * filesystem; on a zoned filesystem it depends on the device constraints.
++ */
++ u64 max_extent_size;
++
+ /* Block groups and devices containing active swapfiles. */
+ spinlock_t swapfile_pins_lock;
+ struct rb_root swapfile_pins;
+@@ -1047,6 +1048,8 @@ struct btrfs_fs_info {
+ */
+ u64 zone_size;
+
++ /* Max size to emit ZONE_APPEND write command */
++ u64 max_zone_append_size;
+ struct mutex zoned_meta_io_lock;
+ spinlock_t treelog_bg_lock;
+ u64 treelog_bg;
+@@ -1063,6 +1066,8 @@ struct btrfs_fs_info {
+
+ spinlock_t zone_active_bgs_lock;
+ struct list_head zone_active_bgs;
++ /* Waiters when BTRFS_FS_NEED_ZONE_FINISH is set */
++ wait_queue_head_t zone_finish_wait;
+
+ #ifdef CONFIG_BTRFS_FS_REF_VERIFY
+ spinlock_t ref_verify_lock;
+@@ -4009,6 +4014,19 @@ static inline bool btrfs_is_zoned(const struct btrfs_fs_info *fs_info)
+ return fs_info->zone_size > 0;
+ }
+
++/*
++ * Count how many fs_info->max_extent_size cover the @size
++ */
++static inline u32 count_max_extents(struct btrfs_fs_info *fs_info, u64 size)
++{
++#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
++ if (!fs_info)
++ return div_u64(size + BTRFS_MAX_EXTENT_SIZE - 1, BTRFS_MAX_EXTENT_SIZE);
++#endif
++
++ return div_u64(size + fs_info->max_extent_size - 1, fs_info->max_extent_size);
++}
++
+ static inline bool btrfs_is_data_reloc_root(const struct btrfs_root *root)
+ {
+ return root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID;
+diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
+index 36ab0859a2634..1e8f17ff829e3 100644
+--- a/fs/btrfs/delalloc-space.c
++++ b/fs/btrfs/delalloc-space.c
+@@ -273,7 +273,7 @@ static void calc_inode_reservations(struct btrfs_fs_info *fs_info,
+ u64 num_bytes, u64 disk_num_bytes,
+ u64 *meta_reserve, u64 *qgroup_reserve)
+ {
+- u64 nr_extents = count_max_extents(num_bytes);
++ u64 nr_extents = count_max_extents(fs_info, num_bytes);
+ u64 csum_leaves = btrfs_csum_bytes_to_leaves(fs_info, disk_num_bytes);
+ u64 inode_update = btrfs_calc_metadata_size(fs_info, 1);
+
+@@ -350,7 +350,7 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ * needs to free the reservation we just made.
+ */
+ spin_lock(&inode->lock);
+- nr_extents = count_max_extents(num_bytes);
++ nr_extents = count_max_extents(fs_info, num_bytes);
+ btrfs_mod_outstanding_extents(inode, nr_extents);
+ inode->csum_bytes += disk_num_bytes;
+ btrfs_calculate_inode_block_rsv_size(fs_info, inode);
+@@ -413,7 +413,7 @@ void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes)
+ unsigned num_extents;
+
+ spin_lock(&inode->lock);
+- num_extents = count_max_extents(num_bytes);
++ num_extents = count_max_extents(fs_info, num_bytes);
+ btrfs_mod_outstanding_extents(inode, -num_extents);
+ btrfs_calculate_inode_block_rsv_size(fs_info, inode);
+ spin_unlock(&inode->lock);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index de440ebf5648b..bc30306615837 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3255,6 +3255,7 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
+ init_waitqueue_head(&fs_info->transaction_blocked_wait);
+ init_waitqueue_head(&fs_info->async_submit_wait);
+ init_waitqueue_head(&fs_info->delayed_iputs_wait);
++ init_waitqueue_head(&fs_info->zone_finish_wait);
+
+ /* Usable values until the real ones are cached from the superblock */
+ fs_info->nodesize = 4096;
+@@ -3262,6 +3263,8 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
+ fs_info->sectorsize_bits = ilog2(4096);
+ fs_info->stripesize = 4096;
+
++ fs_info->max_extent_size = BTRFS_MAX_EXTENT_SIZE;
++
+ spin_lock_init(&fs_info->swapfile_pins_lock);
+ fs_info->swapfile_pins = RB_ROOT;
+
+@@ -3593,16 +3596,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ */
+ fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
+
+- /*
+- * Flag our filesystem as having big metadata blocks if they are bigger
+- * than the page size.
+- */
+- if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
+- if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
+- btrfs_info(fs_info,
+- "flagging fs with big metadata feature");
+- features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
+- }
+
+ /* Set up fs_info before parsing mount options */
+ nodesize = btrfs_super_nodesize(disk_super);
+@@ -3643,6 +3636,17 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
+ btrfs_info(fs_info, "has skinny extents");
+
++ /*
++ * Flag our filesystem as having big metadata blocks if they are bigger
++ * than the page size.
++ */
++ if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
++ if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
++ btrfs_info(fs_info,
++ "flagging fs with big metadata feature");
++ features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
++ }
++
+ /*
+ * mixed block groups end up with duplicate but slightly offset
+ * extent buffers for the same range. It leads to corruptions
+@@ -3670,6 +3674,20 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ err = -EINVAL;
+ goto fail_alloc;
+ }
++ /*
++	 * We have unsupported RO compat features; although mounted read-only,
++	 * we should not cause any metadata writes, including log replay,
++	 * or we could screw up whatever the new feature requires.
++ */
++ if (unlikely(features && btrfs_super_log_root(disk_super) &&
++ !btrfs_test_opt(fs_info, NOLOGREPLAY))) {
++ btrfs_err(fs_info,
++"cannot replay dirty log with unsupported compat_ro features (0x%llx), try rescue=nologreplay",
++ features);
++ err = -EINVAL;
++ goto fail_alloc;
++ }
++
+
+ if (sectorsize < PAGE_SIZE) {
+ struct btrfs_subpage_info *subpage_info;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index a3afc15430cea..f2c79838ebe52 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3981,23 +3981,63 @@ static void found_extent(struct find_free_extent_ctl *ffe_ctl,
+ }
+ }
+
+-static bool can_allocate_chunk(struct btrfs_fs_info *fs_info,
+- struct find_free_extent_ctl *ffe_ctl)
++static int can_allocate_chunk_zoned(struct btrfs_fs_info *fs_info,
++ struct find_free_extent_ctl *ffe_ctl)
++{
++ /* If we can activate new zone, just allocate a chunk and use it */
++ if (btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
++ return 0;
++
++ /*
++ * We already reached the max active zones. Try to finish one block
++	 * group to make room for a new block group. This is only possible
++ * for a data block group because btrfs_zone_finish() may need to wait
++ * for a running transaction which can cause a deadlock for metadata
++ * allocation.
++ */
++ if (ffe_ctl->flags & BTRFS_BLOCK_GROUP_DATA) {
++ int ret = btrfs_zone_finish_one_bg(fs_info);
++
++ if (ret == 1)
++ return 0;
++ else if (ret < 0)
++ return ret;
++ }
++
++ /*
++ * If we have enough free space left in an already active block group
++ * and we can't activate any other zone now, do not allow allocating a
++ * new chunk and let find_free_extent() retry with a smaller size.
++ */
++ if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size)
++ return -ENOSPC;
++
++ /*
++	 * Not even min_alloc_size is left in any block group. Since we cannot
++	 * activate a new block group, allocating one may not help. Let's tell
++	 * the caller to try again and hope it makes progress by writing out
++	 * some parts of the region. That is only possible for data block
++	 * groups, where a part of the region can be written.
++ */
++ if (ffe_ctl->flags & BTRFS_BLOCK_GROUP_DATA)
++ return -EAGAIN;
++
++ /*
++	 * We cannot activate a new block group, and there is not enough space
++	 * left in any block group. So, allocating a new block group may not
++	 * help. But there is nothing else to do anyway, so let's go with it.
++ */
++ return 0;
++}
++
++static int can_allocate_chunk(struct btrfs_fs_info *fs_info,
++ struct find_free_extent_ctl *ffe_ctl)
+ {
+ switch (ffe_ctl->policy) {
+ case BTRFS_EXTENT_ALLOC_CLUSTERED:
+- return true;
++ return 0;
+ case BTRFS_EXTENT_ALLOC_ZONED:
+- /*
+- * If we have enough free space left in an already
+- * active block group and we can't activate any other
+- * zone now, do not allow allocating a new chunk and
+- * let find_free_extent() retry with a smaller size.
+- */
+- if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
+- !btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
+- return false;
+- return true;
++ return can_allocate_chunk_zoned(fs_info, ffe_ctl);
+ default:
+ BUG();
+ }
+@@ -4079,8 +4119,9 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ int exist = 0;
+
+ /*Check if allocation policy allows to create a new chunk */
+- if (!can_allocate_chunk(fs_info, ffe_ctl))
+- return -ENOSPC;
++ ret = can_allocate_chunk(fs_info, ffe_ctl);
++ if (ret)
++ return ret;
+
+ trans = current->journal_info;
+ if (trans)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index f03ab5dbda7ae..cda25018ebd74 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2007,10 +2007,12 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
+ struct page *locked_page, u64 *start,
+ u64 *end)
+ {
++ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
+ const u64 orig_start = *start;
+ const u64 orig_end = *end;
+- u64 max_bytes = BTRFS_MAX_EXTENT_SIZE;
++ /* The sanity tests may not set a valid fs_info. */
++ u64 max_bytes = fs_info ? fs_info->max_extent_size : BTRFS_MAX_EXTENT_SIZE;
+ u64 delalloc_start;
+ u64 delalloc_end;
+ bool found;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 9dfde1af8a64a..89c6d7ff19874 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2308,7 +2308,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ btrfs_release_log_ctx_extents(&ctx);
+ if (ret < 0) {
+ /* Fallthrough and commit/free transaction. */
+- ret = 1;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ }
+
+ /* we've logged all the items and now have a consistent
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d50448bf8eedd..61496ecb1e201 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -118,7 +118,8 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent);
+ static noinline int cow_file_range(struct btrfs_inode *inode,
+ struct page *locked_page,
+ u64 start, u64 end, int *page_started,
+- unsigned long *nr_written, int unlock);
++ unsigned long *nr_written, int unlock,
++ u64 *done_offset);
+ static struct extent_map *create_io_em(struct btrfs_inode *inode, u64 start,
+ u64 len, u64 orig_start, u64 block_start,
+ u64 block_len, u64 orig_block_len,
+@@ -920,15 +921,25 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
+ * can directly submit them without interruption.
+ */
+ ret = cow_file_range(inode, locked_page, start, end, &page_started,
+- &nr_written, 0);
++ &nr_written, 0, NULL);
+ /* Inline extent inserted, page gets unlocked and everything is done */
+ if (page_started) {
+ ret = 0;
+ goto out;
+ }
+ if (ret < 0) {
+- if (locked_page)
++ btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1);
++ if (locked_page) {
++ const u64 page_start = page_offset(locked_page);
++ const u64 page_end = page_start + PAGE_SIZE - 1;
++
++ btrfs_page_set_error(inode->root->fs_info, locked_page,
++ page_start, PAGE_SIZE);
++ set_page_writeback(locked_page);
++ end_page_writeback(locked_page);
++ end_extent_writepage(locked_page, ret, page_start, page_end);
+ unlock_page(locked_page);
++ }
+ goto out;
+ }
+
+@@ -1133,15 +1144,39 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
+ * *page_started is set to one if we unlock locked_page and do everything
+ * required to start IO on it. It may be clean and already done with
+ * IO when we return.
++ *
++ * When unlock == 1, we unlock the pages in successfully allocated regions.
++ * When unlock == 0, we leave them locked for writing them out.
++ *
++ * However, we unlock all the pages except @locked_page in case of failure.
++ *
++ * In summary, page locking state will be as follows:
++ *
++ * - page_started == 1 (return value)
++ * - All the pages are unlocked. IO is started.
++ * - Note that this can happen only on success
++ * - unlock == 1
++ * - All the pages except @locked_page are unlocked in any case
++ * - unlock == 0
++ *   - On success, all the pages are locked for writing them out
++ * - On failure, all the pages except @locked_page are unlocked
++ *
++ * When a failure happens in the second or later iteration of the
++ * while-loop, the ordered extents created in previous iterations are kept
++ * intact. So, the caller must clean them up by calling
++ * btrfs_cleanup_ordered_extents(). See btrfs_run_delalloc_range() for
++ * example.
+ */
+ static noinline int cow_file_range(struct btrfs_inode *inode,
+ struct page *locked_page,
+ u64 start, u64 end, int *page_started,
+- unsigned long *nr_written, int unlock)
++ unsigned long *nr_written, int unlock,
++ u64 *done_offset)
+ {
+ struct btrfs_root *root = inode->root;
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ u64 alloc_hint = 0;
++ u64 orig_start = start;
+ u64 num_bytes;
+ unsigned long ram_size;
+ u64 cur_alloc_size = 0;
+@@ -1329,18 +1364,62 @@ out_reserve:
+ btrfs_dec_block_group_reservations(fs_info, ins.objectid);
+ btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1);
+ out_unlock:
++ /*
++ * If done_offset is non-NULL and ret == -EAGAIN, we expect the
++ * caller to write out the successfully allocated region and retry.
++ */
++ if (done_offset && ret == -EAGAIN) {
++ if (orig_start < start)
++ *done_offset = start - 1;
++ else
++ *done_offset = start;
++ return ret;
++ } else if (ret == -EAGAIN) {
++ /* Convert to -ENOSPC since the caller cannot retry. */
++ ret = -ENOSPC;
++ }
++
++ /*
++ * Now, we have three regions to clean up:
++ *
++ * |-------(1)----|---(2)---|-------------(3)----------|
++ * `- orig_start `- start `- start + cur_alloc_size `- end
++ *
++ * We process each region below.
++ */
++
+ clear_bits = EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DELALLOC_NEW |
+ EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV;
+ page_ops = PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK;
++
++ /*
++ * For the range (1). We have already instantiated the ordered extents
++ * for this region. They are cleaned up by
++	 * btrfs_cleanup_ordered_extents() in e.g.
++ * btrfs_run_delalloc_range(). EXTENT_LOCKED | EXTENT_DELALLOC are
++ * already cleared in the above loop. And, EXTENT_DELALLOC_NEW |
++ * EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV are handled by the cleanup
++ * function.
++ *
++ * However, in case of unlock == 0, we still need to unlock the pages
++ * (except @locked_page) to ensure all the pages are unlocked.
++ */
++ if (!unlock && orig_start < start) {
++ if (!locked_page)
++ mapping_set_error(inode->vfs_inode.i_mapping, ret);
++ extent_clear_unlock_delalloc(inode, orig_start, start - 1,
++ locked_page, 0, page_ops);
++ }
++
+ /*
+- * If we reserved an extent for our delalloc range (or a subrange) and
+- * failed to create the respective ordered extent, then it means that
+- * when we reserved the extent we decremented the extent's size from
+- * the data space_info's bytes_may_use counter and incremented the
+- * space_info's bytes_reserved counter by the same amount. We must make
+- * sure extent_clear_unlock_delalloc() does not try to decrement again
+- * the data space_info's bytes_may_use counter, therefore we do not pass
+- * it the flag EXTENT_CLEAR_DATA_RESV.
++ * For the range (2). If we reserved an extent for our delalloc range
++ * (or a subrange) and failed to create the respective ordered extent,
++ * then it means that when we reserved the extent we decremented the
++ * extent's size from the data space_info's bytes_may_use counter and
++ * incremented the space_info's bytes_reserved counter by the same
++ * amount. We must make sure extent_clear_unlock_delalloc() does not try
++ * to decrement again the data space_info's bytes_may_use counter,
++ * therefore we do not pass it the flag EXTENT_CLEAR_DATA_RESV.
+ */
+ if (extent_reserved) {
+ extent_clear_unlock_delalloc(inode, start,
+@@ -1352,6 +1431,13 @@ out_unlock:
+ if (start >= end)
+ goto out;
+ }
++
++ /*
++ * For the range (3). We never touched the region. In addition to the
++ * clear_bits above, we add EXTENT_CLEAR_DATA_RESV to release the data
++ * space_info's bytes_may_use counter, reserved in
++ * btrfs_check_data_free_space().
++ */
+ extent_clear_unlock_delalloc(inode, start, end, locked_page,
+ clear_bits | EXTENT_CLEAR_DATA_RESV,
+ page_ops);
+@@ -1538,19 +1624,42 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
+ u64 end, int *page_started,
+ unsigned long *nr_written)
+ {
++ u64 done_offset = end;
+ int ret;
++ bool locked_page_done = false;
+
+- ret = cow_file_range(inode, locked_page, start, end, page_started,
+- nr_written, 0);
+- if (ret)
+- return ret;
++ while (start <= end) {
++ ret = cow_file_range(inode, locked_page, start, end, page_started,
++ nr_written, 0, &done_offset);
++ if (ret && ret != -EAGAIN)
++ return ret;
+
+- if (*page_started)
+- return 0;
++ if (*page_started) {
++ ASSERT(ret == 0);
++ return 0;
++ }
++
++ if (ret == 0)
++ done_offset = end;
++
++ if (done_offset == start) {
++ struct btrfs_fs_info *info = inode->root->fs_info;
++
++ wait_var_event(&info->zone_finish_wait,
++ !test_bit(BTRFS_FS_NEED_ZONE_FINISH, &info->flags));
++ continue;
++ }
++
++ if (!locked_page_done) {
++ __set_page_dirty_nobuffers(locked_page);
++ account_page_redirty(locked_page);
++ }
++ locked_page_done = true;
++ extent_write_locked_range(&inode->vfs_inode, start, done_offset);
++
++ start = done_offset + 1;
++ }
+
+- __set_page_dirty_nobuffers(locked_page);
+- account_page_redirty(locked_page);
+- extent_write_locked_range(&inode->vfs_inode, start, end);
+ *page_started = 1;
+
+ return 0;
+@@ -1642,7 +1751,7 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
+ }
+
+ return cow_file_range(inode, locked_page, start, end, page_started,
+- nr_written, 1);
++ nr_written, 1, NULL);
+ }
+
+ struct can_nocow_file_extent_args {
+@@ -2115,7 +2224,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
+ page_started, nr_written);
+ else
+ ret = cow_file_range(inode, locked_page, start, end,
+- page_started, nr_written, 1);
++ page_started, nr_written, 1, NULL);
+ } else {
+ set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags);
+ ret = cow_file_range_async(inode, wbc, locked_page, start, end,
+@@ -2131,6 +2240,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
+ void btrfs_split_delalloc_extent(struct inode *inode,
+ struct extent_state *orig, u64 split)
+ {
++ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ u64 size;
+
+ /* not delalloc, ignore it */
+@@ -2138,7 +2248,7 @@ void btrfs_split_delalloc_extent(struct inode *inode,
+ return;
+
+ size = orig->end - orig->start + 1;
+- if (size > BTRFS_MAX_EXTENT_SIZE) {
++ if (size > fs_info->max_extent_size) {
+ u32 num_extents;
+ u64 new_size;
+
+@@ -2147,10 +2257,10 @@ void btrfs_split_delalloc_extent(struct inode *inode,
+ * applies here, just in reverse.
+ */
+ new_size = orig->end - split + 1;
+- num_extents = count_max_extents(new_size);
++ num_extents = count_max_extents(fs_info, new_size);
+ new_size = split - orig->start;
+- num_extents += count_max_extents(new_size);
+- if (count_max_extents(size) >= num_extents)
++ num_extents += count_max_extents(fs_info, new_size);
++ if (count_max_extents(fs_info, size) >= num_extents)
+ return;
+ }
+
+@@ -2167,6 +2277,7 @@ void btrfs_split_delalloc_extent(struct inode *inode,
+ void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new,
+ struct extent_state *other)
+ {
++ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ u64 new_size, old_size;
+ u32 num_extents;
+
+@@ -2180,7 +2291,7 @@ void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new,
+ new_size = other->end - new->start + 1;
+
+ /* we're not bigger than the max, unreserve the space and go */
+- if (new_size <= BTRFS_MAX_EXTENT_SIZE) {
++ if (new_size <= fs_info->max_extent_size) {
+ spin_lock(&BTRFS_I(inode)->lock);
+ btrfs_mod_outstanding_extents(BTRFS_I(inode), -1);
+ spin_unlock(&BTRFS_I(inode)->lock);
+@@ -2206,10 +2317,10 @@ void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new,
+ * this case.
+ */
+ old_size = other->end - other->start + 1;
+- num_extents = count_max_extents(old_size);
++ num_extents = count_max_extents(fs_info, old_size);
+ old_size = new->end - new->start + 1;
+- num_extents += count_max_extents(old_size);
+- if (count_max_extents(new_size) >= num_extents)
++ num_extents += count_max_extents(fs_info, old_size);
++ if (count_max_extents(fs_info, new_size) >= num_extents)
+ return;
+
+ spin_lock(&BTRFS_I(inode)->lock);
+@@ -2288,7 +2399,7 @@ void btrfs_set_delalloc_extent(struct inode *inode, struct extent_state *state,
+ if (!(state->state & EXTENT_DELALLOC) && (*bits & EXTENT_DELALLOC)) {
+ struct btrfs_root *root = BTRFS_I(inode)->root;
+ u64 len = state->end + 1 - state->start;
+- u32 num_extents = count_max_extents(len);
++ u32 num_extents = count_max_extents(fs_info, len);
+ bool do_list = !btrfs_is_free_space_inode(BTRFS_I(inode));
+
+ spin_lock(&BTRFS_I(inode)->lock);
+@@ -2330,7 +2441,7 @@ void btrfs_clear_delalloc_extent(struct inode *vfs_inode,
+ struct btrfs_inode *inode = BTRFS_I(vfs_inode);
+ struct btrfs_fs_info *fs_info = btrfs_sb(vfs_inode->i_sb);
+ u64 len = state->end + 1 - state->start;
+- u32 num_extents = count_max_extents(len);
++ u32 num_extents = count_max_extents(fs_info, len);
+
+ if ((state->state & EXTENT_DEFRAG) && (*bits & EXTENT_DEFRAG)) {
+ spin_lock(&inode->lock);
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index a5b623ee6facd..13e0bb0479e63 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -347,6 +347,24 @@ static void index_stripe_sectors(struct btrfs_raid_bio *rbio)
+ }
+ }
+
++static void steal_rbio_page(struct btrfs_raid_bio *src,
++ struct btrfs_raid_bio *dest, int page_nr)
++{
++ const u32 sectorsize = src->bioc->fs_info->sectorsize;
++ const u32 sectors_per_page = PAGE_SIZE / sectorsize;
++ int i;
++
++ if (dest->stripe_pages[page_nr])
++ __free_page(dest->stripe_pages[page_nr]);
++ dest->stripe_pages[page_nr] = src->stripe_pages[page_nr];
++ src->stripe_pages[page_nr] = NULL;
++
++ /* Also update the sector->uptodate bits. */
++ for (i = sectors_per_page * page_nr;
++ i < sectors_per_page * page_nr + sectors_per_page; i++)
++ dest->stripe_sectors[i].uptodate = true;
++}
++
+ /*
+ * Stealing an rbio means taking all the uptodate pages from the stripe array
+ * in the source rbio and putting them into the destination rbio.
+@@ -358,7 +376,6 @@ static void steal_rbio(struct btrfs_raid_bio *src, struct btrfs_raid_bio *dest)
+ {
+ int i;
+ struct page *s;
+- struct page *d;
+
+ if (!test_bit(RBIO_CACHE_READY_BIT, &src->flags))
+ return;
+@@ -368,12 +385,7 @@ static void steal_rbio(struct btrfs_raid_bio *src, struct btrfs_raid_bio *dest)
+ if (!s || !full_page_sectors_uptodate(src, i))
+ continue;
+
+- d = dest->stripe_pages[i];
+- if (d)
+- __free_page(d);
+-
+- dest->stripe_pages[i] = s;
+- src->stripe_pages[i] = NULL;
++ steal_rbio_page(src, dest, i);
+ }
+ index_stripe_sectors(dest);
+ index_stripe_sectors(src);
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 2dd8754cb990d..b0c5b4738b1f7 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -9,6 +9,7 @@
+ #include "ordered-data.h"
+ #include "transaction.h"
+ #include "block-group.h"
++#include "zoned.h"
+
+ /*
+ * HOW DOES SPACE RESERVATION WORK
+@@ -187,6 +188,37 @@ void btrfs_clear_space_info_full(struct btrfs_fs_info *info)
+ */
+ #define BTRFS_DEFAULT_ZONED_RECLAIM_THRESH (75)
+
++/*
++ * Calculate chunk size depending on volume type (regular or zoned).
++ */
++static u64 calc_chunk_size(const struct btrfs_fs_info *fs_info, u64 flags)
++{
++ if (btrfs_is_zoned(fs_info))
++ return fs_info->zone_size;
++
++ ASSERT(flags & BTRFS_BLOCK_GROUP_TYPE_MASK);
++
++ if (flags & BTRFS_BLOCK_GROUP_DATA)
++ return SZ_1G;
++ else if (flags & BTRFS_BLOCK_GROUP_SYSTEM)
++ return SZ_32M;
++
++ /* Handle BTRFS_BLOCK_GROUP_METADATA */
++ if (fs_info->fs_devices->total_rw_bytes > 50ULL * SZ_1G)
++ return SZ_1G;
++
++ return SZ_256M;
++}
++
++/*
++ * Update default chunk size.
++ */
++void btrfs_update_space_info_chunk_size(struct btrfs_space_info *space_info,
++ u64 chunk_size)
++{
++ WRITE_ONCE(space_info->chunk_size, chunk_size);
++}
++
+ static int create_space_info(struct btrfs_fs_info *info, u64 flags)
+ {
+
+@@ -208,6 +240,7 @@ static int create_space_info(struct btrfs_fs_info *info, u64 flags)
+ INIT_LIST_HEAD(&space_info->tickets);
+ INIT_LIST_HEAD(&space_info->priority_tickets);
+ space_info->clamp = 1;
++ btrfs_update_space_info_chunk_size(space_info, calc_chunk_size(info, flags));
+
+ if (btrfs_is_zoned(info))
+ space_info->bg_reclaim_threshold = BTRFS_DEFAULT_ZONED_RECLAIM_THRESH;
+@@ -263,7 +296,7 @@ out:
+ void btrfs_update_space_info(struct btrfs_fs_info *info, u64 flags,
+ u64 total_bytes, u64 bytes_used,
+ u64 bytes_readonly, u64 bytes_zone_unusable,
+- struct btrfs_space_info **space_info)
++ bool active, struct btrfs_space_info **space_info)
+ {
+ struct btrfs_space_info *found;
+ int factor;
+@@ -274,6 +307,8 @@ void btrfs_update_space_info(struct btrfs_fs_info *info, u64 flags,
+ ASSERT(found);
+ spin_lock(&found->lock);
+ found->total_bytes += total_bytes;
++ if (active)
++ found->active_total_bytes += total_bytes;
+ found->disk_total += total_bytes * factor;
+ found->bytes_used += bytes_used;
+ found->disk_used += bytes_used * factor;
+@@ -337,6 +372,22 @@ static u64 calc_available_free_space(struct btrfs_fs_info *fs_info,
+ return avail;
+ }
+
++static inline u64 writable_total_bytes(struct btrfs_fs_info *fs_info,
++ struct btrfs_space_info *space_info)
++{
++ /*
++	 * On a regular filesystem, all total_bytes are always writable. On a
++	 * zoned filesystem, there may be a limitation imposed by max_active_zones.
++ * For metadata allocation, we cannot finish an existing active block
++ * group to avoid a deadlock. Thus, we need to consider only the active
++ * groups to be writable for metadata space.
++ */
++ if (!btrfs_is_zoned(fs_info) || (space_info->flags & BTRFS_BLOCK_GROUP_DATA))
++ return space_info->total_bytes;
++
++ return space_info->active_total_bytes;
++}
++
+ int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
+ struct btrfs_space_info *space_info, u64 bytes,
+ enum btrfs_reserve_flush_enum flush)
+@@ -349,9 +400,12 @@ int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
+ return 0;
+
+ used = btrfs_space_info_used(space_info, true);
+- avail = calc_available_free_space(fs_info, space_info, flush);
++ if (btrfs_is_zoned(fs_info) && (space_info->flags & BTRFS_BLOCK_GROUP_METADATA))
++ avail = 0;
++ else
++ avail = calc_available_free_space(fs_info, space_info, flush);
+
+- if (used + bytes < space_info->total_bytes + avail)
++ if (used + bytes < writable_total_bytes(fs_info, space_info) + avail)
+ return 1;
+ return 0;
+ }
+@@ -387,7 +441,7 @@ again:
+ ticket = list_first_entry(head, struct reserve_ticket, list);
+
+ /* Check and see if our ticket can be satisfied now. */
+- if ((used + ticket->bytes <= space_info->total_bytes) ||
++ if ((used + ticket->bytes <= writable_total_bytes(fs_info, space_info)) ||
+ btrfs_can_overcommit(fs_info, space_info, ticket->bytes,
+ flush)) {
+ btrfs_space_info_update_bytes_may_use(fs_info,
+@@ -671,6 +725,18 @@ static void flush_space(struct btrfs_fs_info *fs_info,
+ break;
+ case ALLOC_CHUNK:
+ case ALLOC_CHUNK_FORCE:
++ /*
++		 * For metadata space on a zoned filesystem, reaching here means we
++		 * don't have enough space left in active_total_bytes. Try to
++		 * activate a block group first, because we may already have an
++		 * inactive block group allocated.
++ */
++ ret = btrfs_zoned_activate_one_bg(fs_info, space_info, false);
++ if (ret < 0)
++ break;
++ else if (ret == 1)
++ break;
++
+ trans = btrfs_join_transaction(root);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+@@ -681,6 +747,23 @@ static void flush_space(struct btrfs_fs_info *fs_info,
+ (state == ALLOC_CHUNK) ? CHUNK_ALLOC_NO_FORCE :
+ CHUNK_ALLOC_FORCE);
+ btrfs_end_transaction(trans);
++
++ /*
++		 * For metadata space on a zoned filesystem, allocating a new chunk
++		 * is not enough. We still need to activate the block group.
++		 * Activate the newly allocated block group by (maybe) finishing
++		 * a block group.
++ */
++ if (ret == 1) {
++ ret = btrfs_zoned_activate_one_bg(fs_info, space_info, true);
++ /*
++			 * Revert to the original ret regardless of whether we could
++			 * finish one block group or not.
++ */
++ if (ret >= 0)
++ ret = 1;
++ }
++
+ if (ret > 0 || ret == -ENOSPC)
+ ret = 0;
+ break;
+@@ -718,6 +801,7 @@ btrfs_calc_reclaim_metadata_size(struct btrfs_fs_info *fs_info,
+ {
+ u64 used;
+ u64 avail;
++ u64 total;
+ u64 to_reclaim = space_info->reclaim_size;
+
+ lockdep_assert_held(&space_info->lock);
+@@ -732,8 +816,9 @@ btrfs_calc_reclaim_metadata_size(struct btrfs_fs_info *fs_info,
+ * space. If that's the case add in our overage so we make sure to put
+ * appropriate pressure on the flushing state machine.
+ */
+- if (space_info->total_bytes + avail < used)
+- to_reclaim += used - (space_info->total_bytes + avail);
++ total = writable_total_bytes(fs_info, space_info);
++ if (total + avail < used)
++ to_reclaim += used - (total + avail);
+
+ return to_reclaim;
+ }
+@@ -743,9 +828,12 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ {
+ u64 global_rsv_size = fs_info->global_block_rsv.reserved;
+ u64 ordered, delalloc;
+- u64 thresh = div_factor_fine(space_info->total_bytes, 90);
++ u64 total = writable_total_bytes(fs_info, space_info);
++ u64 thresh;
+ u64 used;
+
++ thresh = div_factor_fine(total, 90);
++
+ lockdep_assert_held(&space_info->lock);
+
+ /* If we're just plain full then async reclaim just slows us down. */
+@@ -807,8 +895,8 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ BTRFS_RESERVE_FLUSH_ALL);
+ used = space_info->bytes_used + space_info->bytes_reserved +
+ space_info->bytes_readonly + global_rsv_size;
+- if (used < space_info->total_bytes)
+- thresh += space_info->total_bytes - used;
++ if (used < total)
++ thresh += total - used;
+ thresh >>= space_info->clamp;
+
+ used = space_info->bytes_pinned;
+@@ -1525,7 +1613,7 @@ static int __reserve_bytes(struct btrfs_fs_info *fs_info,
+ * can_overcommit() to ensure we can overcommit to continue.
+ */
+ if (!pending_tickets &&
+- ((used + orig_bytes <= space_info->total_bytes) ||
++ ((used + orig_bytes <= writable_total_bytes(fs_info, space_info)) ||
+ btrfs_can_overcommit(fs_info, space_info, orig_bytes, flush))) {
+ btrfs_space_info_update_bytes_may_use(fs_info, space_info,
+ orig_bytes);
+diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
+index c096695598c12..12fd6147f92d6 100644
+--- a/fs/btrfs/space-info.h
++++ b/fs/btrfs/space-info.h
+@@ -19,12 +19,16 @@ struct btrfs_space_info {
+ u64 bytes_may_use; /* number of bytes that may be used for
+ delalloc/allocations */
+ u64 bytes_readonly; /* total bytes that are read only */
++	/* Total bytes in the space, but only accounting active block groups. */
++ u64 active_total_bytes;
+ u64 bytes_zone_unusable; /* total bytes that are unusable until
+ resetting the device zone */
+
+ u64 max_extent_size; /* This will hold the maximum extent size of
+ the space info if we had an ENOSPC in the
+ allocator. */
++ /* Chunk size in bytes */
++ u64 chunk_size;
+
+ /*
+ * Once a block group drops below this threshold (percents) we'll
+@@ -122,7 +126,9 @@ int btrfs_init_space_info(struct btrfs_fs_info *fs_info);
+ void btrfs_update_space_info(struct btrfs_fs_info *info, u64 flags,
+ u64 total_bytes, u64 bytes_used,
+ u64 bytes_readonly, u64 bytes_zone_unusable,
+- struct btrfs_space_info **space_info);
++ bool active, struct btrfs_space_info **space_info);
++void btrfs_update_space_info_chunk_size(struct btrfs_space_info *space_info,
++ u64 chunk_size);
+ struct btrfs_space_info *btrfs_find_space_info(struct btrfs_fs_info *info,
+ u64 flags);
+ u64 __pure btrfs_space_info_used(struct btrfs_space_info *s_info,
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 370388fadf960..3c962bfd204f6 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -171,7 +171,7 @@ again:
+ int index = (root->log_transid + 1) % 2;
+
+ if (btrfs_need_log_full_commit(trans)) {
+- ret = -EAGAIN;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto out;
+ }
+
+@@ -194,7 +194,7 @@ again:
+ * writing.
+ */
+ if (zoned && !created) {
+- ret = -EAGAIN;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto out;
+ }
+
+@@ -3121,7 +3121,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+
+ /* bail out if we need to do a full commit */
+ if (btrfs_need_log_full_commit(trans)) {
+- ret = -EAGAIN;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ mutex_unlock(&root->log_mutex);
+ goto out;
+ }
+@@ -3222,7 +3222,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ }
+ btrfs_wait_tree_log_extents(log, mark);
+ mutex_unlock(&log_root_tree->log_mutex);
+- ret = -EAGAIN;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto out;
+ }
+
+@@ -3261,7 +3261,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ blk_finish_plug(&plug);
+ btrfs_wait_tree_log_extents(log, mark);
+ mutex_unlock(&log_root_tree->log_mutex);
+- ret = -EAGAIN;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto out_wake_log_root;
+ }
+
+@@ -5848,7 +5848,7 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ inode_only == LOG_INODE_ALL &&
+ inode->last_unlink_trans >= trans->transid) {
+ btrfs_set_log_full_commit(trans);
+- ret = 1;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto out_unlock;
+ }
+
+@@ -6562,12 +6562,12 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ bool log_dentries = false;
+
+ if (btrfs_test_opt(fs_info, NOTREELOG)) {
+- ret = 1;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto end_no_trans;
+ }
+
+ if (btrfs_root_refs(&root->root_item) == 0) {
+- ret = 1;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ goto end_no_trans;
+ }
+
+@@ -6665,7 +6665,7 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ end_trans:
+ if (ret < 0) {
+ btrfs_set_log_full_commit(trans);
+- ret = 1;
++ ret = BTRFS_LOG_FORCE_COMMIT;
+ }
+
+ if (ret)
+@@ -7029,8 +7029,15 @@ void btrfs_log_new_name(struct btrfs_trans_handle *trans,
+ * anyone from syncing the log until we have updated both inodes
+ * in the log.
+ */
++ ret = join_running_log_trans(root);
++ /*
++ * At least one of the inodes was logged before, so this should
++ * not fail, but if it does, it's not serious, just bail out and
++ * mark the log for a full commit.
++ */
++ if (WARN_ON_ONCE(ret < 0))
++ goto out;
+ log_pinned = true;
+- btrfs_pin_log_trans(root);
+
+ path = btrfs_alloc_path();
+ if (!path) {
+diff --git a/fs/btrfs/tree-log.h b/fs/btrfs/tree-log.h
+index 1620f8170629e..57ab5f3b8dc77 100644
+--- a/fs/btrfs/tree-log.h
++++ b/fs/btrfs/tree-log.h
+@@ -12,6 +12,9 @@
+ /* return value for btrfs_log_dentry_safe that means we don't need to log it at all */
+ #define BTRFS_NO_LOG_SYNC 256
+
++/* We can't use the tree log for whatever reason, force a transaction commit */
++#define BTRFS_LOG_FORCE_COMMIT (1)
++
+ struct btrfs_log_ctx {
+ int log_ret;
+ int log_transid;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 9c20049d1fecf..9cd9d06f54699 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -5071,26 +5071,16 @@ static void init_alloc_chunk_ctl_policy_regular(
+ struct btrfs_fs_devices *fs_devices,
+ struct alloc_chunk_ctl *ctl)
+ {
+- u64 type = ctl->type;
++ struct btrfs_space_info *space_info;
+
+- if (type & BTRFS_BLOCK_GROUP_DATA) {
+- ctl->max_stripe_size = SZ_1G;
+- ctl->max_chunk_size = BTRFS_MAX_DATA_CHUNK_SIZE;
+- } else if (type & BTRFS_BLOCK_GROUP_METADATA) {
+- /* For larger filesystems, use larger metadata chunks */
+- if (fs_devices->total_rw_bytes > 50ULL * SZ_1G)
+- ctl->max_stripe_size = SZ_1G;
+- else
+- ctl->max_stripe_size = SZ_256M;
+- ctl->max_chunk_size = ctl->max_stripe_size;
+- } else if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
+- ctl->max_stripe_size = SZ_32M;
+- ctl->max_chunk_size = 2 * ctl->max_stripe_size;
+- ctl->devs_max = min_t(int, ctl->devs_max,
+- BTRFS_MAX_DEVS_SYS_CHUNK);
+- } else {
+- BUG();
+- }
++ space_info = btrfs_find_space_info(fs_devices->fs_info, ctl->type);
++ ASSERT(space_info);
++
++ ctl->max_chunk_size = READ_ONCE(space_info->chunk_size);
++ ctl->max_stripe_size = ctl->max_chunk_size;
++
++ if (ctl->type & BTRFS_BLOCK_GROUP_SYSTEM)
++ ctl->devs_max = min_t(int, ctl->devs_max, BTRFS_MAX_DEVS_SYS_CHUNK);
+
+ /* We don't want a chunk larger than 10% of writable space */
+ ctl->max_chunk_size = min(div_factor(fs_devices->total_rw_bytes, 1),
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index d99026df6f679..31cb11daa8e82 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -415,6 +415,16 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
+ nr_sectors = bdev_nr_sectors(bdev);
+ zone_info->zone_size_shift = ilog2(zone_info->zone_size);
+ zone_info->nr_zones = nr_sectors >> ilog2(zone_sectors);
++ /*
++ * We limit max_zone_append_size also by max_segments *
++ * PAGE_SIZE. Technically, we can have multiple pages per segment. But,
++ * since btrfs adds the pages one by one to a bio, and btrfs cannot
++ * increase the metadata reservation even if it increases the number of
++ * extents, it is safe to stick with the limit.
++ */
++ zone_info->max_zone_append_size =
++ min_t(u64, (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT,
++ (u64)bdev_max_segments(bdev) << PAGE_SHIFT);
+ if (!IS_ALIGNED(nr_sectors, zone_sectors))
+ zone_info->nr_zones++;
+
+@@ -640,6 +650,7 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
+ u64 zoned_devices = 0;
+ u64 nr_devices = 0;
+ u64 zone_size = 0;
++ u64 max_zone_append_size = 0;
+ const bool incompat_zoned = btrfs_fs_incompat(fs_info, ZONED);
+ int ret = 0;
+
+@@ -674,6 +685,11 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
+ ret = -EINVAL;
+ goto out;
+ }
++ if (!max_zone_append_size ||
++ (zone_info->max_zone_append_size &&
++ zone_info->max_zone_append_size < max_zone_append_size))
++ max_zone_append_size =
++ zone_info->max_zone_append_size;
+ }
+ nr_devices++;
+ }
+@@ -723,7 +739,11 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
+ }
+
+ fs_info->zone_size = zone_size;
++ fs_info->max_zone_append_size = ALIGN_DOWN(max_zone_append_size,
++ fs_info->sectorsize);
+ fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED;
++ if (fs_info->max_zone_append_size < fs_info->max_extent_size)
++ fs_info->max_extent_size = fs_info->max_zone_append_size;
+
+ /*
+ * Check mount options here, because we might change fs_info->zoned
+@@ -1829,6 +1849,7 @@ struct btrfs_device *btrfs_zoned_get_device(struct btrfs_fs_info *fs_info,
+ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ {
+ struct btrfs_fs_info *fs_info = block_group->fs_info;
++ struct btrfs_space_info *space_info = block_group->space_info;
+ struct map_lookup *map;
+ struct btrfs_device *device;
+ u64 physical;
+@@ -1840,6 +1861,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+
+ map = block_group->physical_map;
+
++ spin_lock(&space_info->lock);
+ spin_lock(&block_group->lock);
+ if (block_group->zone_is_active) {
+ ret = true;
+@@ -1868,7 +1890,10 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+
+ /* Successfully activated all the zones */
+ block_group->zone_is_active = 1;
++ space_info->active_total_bytes += block_group->length;
+ spin_unlock(&block_group->lock);
++ btrfs_try_granting_tickets(fs_info, space_info);
++ spin_unlock(&space_info->lock);
+
+ /* For the active block group list */
+ btrfs_get_block_group(block_group);
+@@ -1881,6 +1906,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+
+ out_unlock:
+ spin_unlock(&block_group->lock);
++ spin_unlock(&space_info->lock);
+ return ret;
+ }
+
+@@ -1981,6 +2007,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ /* For active_bg_list */
+ btrfs_put_block_group(block_group);
+
++ clear_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags);
++ wake_up_all(&fs_info->zone_finish_wait);
++
+ return 0;
+ }
+
+@@ -2017,6 +2046,9 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ }
+ mutex_unlock(&fs_info->chunk_mutex);
+
++ if (!ret)
++ set_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags);
++
+ return ret;
+ }
+
+@@ -2160,3 +2192,96 @@ out:
+ spin_unlock(&block_group->lock);
+ btrfs_put_block_group(block_group);
+ }
++
++int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
++{
++ struct btrfs_block_group *block_group;
++ struct btrfs_block_group *min_bg = NULL;
++ u64 min_avail = U64_MAX;
++ int ret;
++
++ spin_lock(&fs_info->zone_active_bgs_lock);
++ list_for_each_entry(block_group, &fs_info->zone_active_bgs,
++ active_bg_list) {
++ u64 avail;
++
++ spin_lock(&block_group->lock);
++ if (block_group->reserved ||
++ (block_group->flags & BTRFS_BLOCK_GROUP_SYSTEM)) {
++ spin_unlock(&block_group->lock);
++ continue;
++ }
++
++ avail = block_group->zone_capacity - block_group->alloc_offset;
++ if (min_avail > avail) {
++ if (min_bg)
++ btrfs_put_block_group(min_bg);
++ min_bg = block_group;
++ min_avail = avail;
++ btrfs_get_block_group(min_bg);
++ }
++ spin_unlock(&block_group->lock);
++ }
++ spin_unlock(&fs_info->zone_active_bgs_lock);
++
++ if (!min_bg)
++ return 0;
++
++ ret = btrfs_zone_finish(min_bg);
++ btrfs_put_block_group(min_bg);
++
++ return ret < 0 ? ret : 1;
++}
++
++int btrfs_zoned_activate_one_bg(struct btrfs_fs_info *fs_info,
++ struct btrfs_space_info *space_info,
++ bool do_finish)
++{
++ struct btrfs_block_group *bg;
++ int index;
++
++ if (!btrfs_is_zoned(fs_info) || (space_info->flags & BTRFS_BLOCK_GROUP_DATA))
++ return 0;
++
++ /* No more block groups to activate */
++ if (space_info->active_total_bytes == space_info->total_bytes)
++ return 0;
++
++ for (;;) {
++ int ret;
++ bool need_finish = false;
++
++ down_read(&space_info->groups_sem);
++ for (index = 0; index < BTRFS_NR_RAID_TYPES; index++) {
++ list_for_each_entry(bg, &space_info->block_groups[index],
++ list) {
++ if (!spin_trylock(&bg->lock))
++ continue;
++ if (btrfs_zoned_bg_is_full(bg) || bg->zone_is_active) {
++ spin_unlock(&bg->lock);
++ continue;
++ }
++ spin_unlock(&bg->lock);
++
++ if (btrfs_zone_activate(bg)) {
++ up_read(&space_info->groups_sem);
++ return 1;
++ }
++
++ need_finish = true;
++ }
++ }
++ up_read(&space_info->groups_sem);
++
++ if (!do_finish || !need_finish)
++ break;
++
++ ret = btrfs_zone_finish_one_bg(fs_info);
++ if (ret == 0)
++ break;
++ if (ret < 0)
++ return ret;
++ }
++
++ return 0;
++}
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index 6b2eec99162bf..e17462db3a842 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -19,6 +19,7 @@ struct btrfs_zoned_device_info {
+ */
+ u64 zone_size;
+ u8 zone_size_shift;
++ u64 max_zone_append_size;
+ u32 nr_zones;
+ unsigned int max_active_zones;
+ atomic_t active_zones_left;
+@@ -79,6 +80,9 @@ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+ bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info);
+ void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+ u64 length);
++int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info);
++int btrfs_zoned_activate_one_bg(struct btrfs_fs_info *fs_info,
++ struct btrfs_space_info *space_info, bool do_finish);
+ #else /* CONFIG_BLK_DEV_ZONED */
+ static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
+ struct blk_zone *zone)
+@@ -248,6 +252,20 @@ static inline bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info)
+
+ static inline void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info,
+ u64 logical, u64 length) { }
++
++static inline int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
++{
++ return 1;
++}
++
++static inline int btrfs_zoned_activate_one_bg(struct btrfs_fs_info *fs_info,
++ struct btrfs_space_info *space_info,
++ bool do_finish)
++{
++	/* Consider all the block groups to be active */
++ return 0;
++}
++
+ #endif
+
+ static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index a643c84ff1e93..600f2103adfb7 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -2085,9 +2085,9 @@ static inline bool cifs_is_referral_server(struct cifs_tcon *tcon,
+ return is_tcon_dfs(tcon) || (ref && (ref->flags & DFSREF_REFERRAL_SERVER));
+ }
+
+-static inline u64 cifs_flock_len(struct file_lock *fl)
++static inline u64 cifs_flock_len(const struct file_lock *fl)
+ {
+- return fl->fl_end == OFFSET_MAX ? 0 : fl->fl_end - fl->fl_start + 1;
++ return (u64)fl->fl_end - fl->fl_start + 1;
+ }
+
+ static inline size_t ntlmssp_workstation_name_size(const struct cifs_ses *ses)
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index e64cda7a76101..0f03c0bfdf280 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1861,9 +1861,9 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock)
+ rc = -EACCES;
+ xid = get_xid();
+
+- cifs_dbg(FYI, "Lock parm: 0x%x flockflags: 0x%x flocktype: 0x%x start: %lld end: %lld\n",
+- cmd, flock->fl_flags, flock->fl_type,
+- flock->fl_start, flock->fl_end);
++ cifs_dbg(FYI, "%s: %pD2 cmd=0x%x type=0x%x flags=0x%x r=%lld:%lld\n", __func__, file, cmd,
++ flock->fl_flags, flock->fl_type, (long long)flock->fl_start,
++ (long long)flock->fl_end);
+
+ cfile = (struct cifsFileInfo *)file->private_data;
+ tcon = tlink_tcon(cfile->tlink);
+@@ -4459,10 +4459,10 @@ static void cifs_readahead(struct readahead_control *ractl)
+ * TODO: Send a whole batch of pages to be read
+ * by the cache.
+ */
+- page = readahead_page(ractl);
+- last_batch_size = 1 << thp_order(page);
++ struct folio *folio = readahead_folio(ractl);
++ last_batch_size = folio_nr_pages(folio);
+ if (cifs_readpage_from_fscache(ractl->mapping->host,
+- page) < 0) {
++ &folio->page) < 0) {
+ /*
+ * TODO: Deal with cache read failure
+ * here, but for the moment, delegate
+@@ -4470,7 +4470,7 @@ static void cifs_readahead(struct readahead_control *ractl)
+ */
+ caching = false;
+ }
+- unlock_page(page);
++ folio_unlock(folio);
+ next_cached++;
+ cache_nr_pages--;
+ if (cache_nr_pages == 0)
+@@ -4811,8 +4811,6 @@ void cifs_oplock_break(struct work_struct *work)
+ struct TCP_Server_Info *server = tcon->ses->server;
+ int rc = 0;
+ bool purge_cache = false;
+- bool is_deferred = false;
+- struct cifs_deferred_close *dclose;
+
+ wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ TASK_UNINTERRUPTIBLE);
+@@ -4848,22 +4846,6 @@ void cifs_oplock_break(struct work_struct *work)
+ cifs_dbg(VFS, "Push locks rc = %d\n", rc);
+
+ oplock_break_ack:
+- /*
+- * When oplock break is received and there are no active
+- * file handles but cached, then schedule deferred close immediately.
+- * So, new open will not use cached handle.
+- */
+- spin_lock(&CIFS_I(inode)->deferred_lock);
+- is_deferred = cifs_is_deferred_close(cfile, &dclose);
+- spin_unlock(&CIFS_I(inode)->deferred_lock);
+- if (is_deferred &&
+- cfile->deferred_close_scheduled &&
+- delayed_work_pending(&cfile->deferred)) {
+- if (cancel_delayed_work(&cfile->deferred)) {
+- _cifsFileInfo_put(cfile, false, false);
+- goto oplock_break_done;
+- }
+- }
+ /*
+ * releasing stale oplock after recent reconnect of smb session using
+ * a now incorrect file handle is not a data integrity issue but do
+@@ -4875,7 +4857,7 @@ oplock_break_ack:
+ cinode);
+ cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+ }
+-oplock_break_done:
++
+ _cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ cifs_done_oplock_break(cinode);
+ }
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 6dca1900c7331..45be8f4aeb688 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -91,14 +91,18 @@ static int z_erofs_lz4_prepare_dstpages(struct z_erofs_lz4_decompress_ctx *ctx,
+
+ if (page) {
+ __clear_bit(j, bounced);
+- if (kaddr) {
+- if (kaddr + PAGE_SIZE == page_address(page))
++ if (!PageHighMem(page)) {
++ if (!i) {
++ kaddr = page_address(page);
++ continue;
++ }
++ if (kaddr &&
++ kaddr + PAGE_SIZE == page_address(page)) {
+ kaddr += PAGE_SIZE;
+- else
+- kaddr = NULL;
+- } else if (!i) {
+- kaddr = page_address(page);
++ continue;
++ }
+ }
++ kaddr = NULL;
+ continue;
+ }
+ kaddr = NULL;
+diff --git a/fs/erofs/decompressor_lzma.c b/fs/erofs/decompressor_lzma.c
+index 05a3063cf2bc1..5e59b3f523eb6 100644
+--- a/fs/erofs/decompressor_lzma.c
++++ b/fs/erofs/decompressor_lzma.c
+@@ -143,6 +143,7 @@ again:
+ DBG_BUGON(z_erofs_lzma_head);
+ z_erofs_lzma_head = head;
+ spin_unlock(&z_erofs_lzma_lock);
++ wake_up_all(&z_erofs_lzma_wq);
+
+ z_erofs_lzma_max_dictsize = dict_size;
+ mutex_unlock(&lzma_resize_mutex);
+diff --git a/fs/erofs/dir.c b/fs/erofs/dir.c
+index 18e59821c5974..47c85f1b80d89 100644
+--- a/fs/erofs/dir.c
++++ b/fs/erofs/dir.c
+@@ -22,10 +22,9 @@ static void debug_one_dentry(unsigned char d_type, const char *de_name,
+ }
+
+ static int erofs_fill_dentries(struct inode *dir, struct dir_context *ctx,
+- void *dentry_blk, unsigned int *ofs,
++ void *dentry_blk, struct erofs_dirent *de,
+ unsigned int nameoff, unsigned int maxsize)
+ {
+- struct erofs_dirent *de = dentry_blk + *ofs;
+ const struct erofs_dirent *end = dentry_blk + nameoff;
+
+ while (de < end) {
+@@ -59,9 +58,8 @@ static int erofs_fill_dentries(struct inode *dir, struct dir_context *ctx,
+ /* stopped by some reason */
+ return 1;
+ ++de;
+- *ofs += sizeof(struct erofs_dirent);
++ ctx->pos += sizeof(struct erofs_dirent);
+ }
+- *ofs = maxsize;
+ return 0;
+ }
+
+@@ -95,7 +93,7 @@ static int erofs_readdir(struct file *f, struct dir_context *ctx)
+ "invalid de[0].nameoff %u @ nid %llu",
+ nameoff, EROFS_I(dir)->nid);
+ err = -EFSCORRUPTED;
+- goto skip_this;
++ break;
+ }
+
+ maxsize = min_t(unsigned int,
+@@ -106,17 +104,17 @@ static int erofs_readdir(struct file *f, struct dir_context *ctx)
+ initial = false;
+
+ ofs = roundup(ofs, sizeof(struct erofs_dirent));
++ ctx->pos = blknr_to_addr(i) + ofs;
+ if (ofs >= nameoff)
+ goto skip_this;
+ }
+
+- err = erofs_fill_dentries(dir, ctx, de, &ofs,
++ err = erofs_fill_dentries(dir, ctx, de, (void *)de + ofs,
+ nameoff, maxsize);
+-skip_this:
+- ctx->pos = blknr_to_addr(i) + ofs;
+-
+ if (err)
+ break;
++skip_this:
++ ctx->pos = blknr_to_addr(i) + maxsize;
+ ++i;
+ ofs = 0;
+ }
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index e2daa940ebce7..8b56b94e2f56f 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1747,6 +1747,21 @@ static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
+ return to;
+ }
+
++/*
++ * autoremove_wake_function, but remove even on failure to wake up, because we
++ * know that default_wake_function/ttwu will only fail if the thread is already
++ * woken, and in that case the ep_poll loop will remove the entry anyway, not
++ * try to reuse it.
++ */
++static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
++ unsigned int mode, int sync, void *key)
++{
++ int ret = default_wake_function(wq_entry, mode, sync, key);
++
++ list_del_init(&wq_entry->entry);
++ return ret;
++}
++
+ /**
+ * ep_poll - Retrieves ready events, and delivers them to the caller-supplied
+ * event buffer.
+@@ -1828,8 +1843,15 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+ * normal wakeup path no need to call __remove_wait_queue()
+ * explicitly, thus ep->lock is not taken, which halts the
+ * event delivery.
++ *
++ * In fact, we now use an even more aggressive function that
++ * unconditionally removes, because we don't reuse the wait
++ * entry between loop iterations. This lets us also avoid the
++ * performance issue if a process is killed, causing all of its
++ * threads to wake up without being removed normally.
+ */
+ init_wait(&wait);
++ wait.func = ep_autoremove_wake_function;
+
+ write_lock_irq(&ep->lock);
+ /*
+diff --git a/fs/exec.c b/fs/exec.c
+index 778123259e424..1c6b477dad69b 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1301,6 +1301,9 @@ int begin_new_exec(struct linux_binprm * bprm)
+ bprm->mm = NULL;
+
+ #ifdef CONFIG_POSIX_TIMERS
++ spin_lock_irq(&me->sighand->siglock);
++ posix_cpu_timers_exit(me);
++ spin_unlock_irq(&me->sighand->siglock);
+ exit_itimers(me);
+ flush_itimer_signals();
+ #endif
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index f6a19f6d9f6d5..cdffa2a041af8 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -1059,9 +1059,10 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ sbi->s_frags_per_group);
+ goto failed_mount;
+ }
+- if (sbi->s_inodes_per_group > sb->s_blocksize * 8) {
++ if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
++ sbi->s_inodes_per_group > sb->s_blocksize * 8) {
+ ext2_msg(sb, KERN_ERR,
+- "error: #inodes per group too big: %lu",
++ "error: invalid #inodes per group: %lu",
+ sbi->s_inodes_per_group);
+ goto failed_mount;
+ }
+@@ -1071,6 +1072,13 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ sbi->s_groups_count = ((le32_to_cpu(es->s_blocks_count) -
+ le32_to_cpu(es->s_first_data_block) - 1)
+ / EXT2_BLOCKS_PER_GROUP(sb)) + 1;
++ if ((u64)sbi->s_groups_count * sbi->s_inodes_per_group !=
++ le32_to_cpu(es->s_inodes_count)) {
++ ext2_msg(sb, KERN_ERR, "error: invalid #inodes: %u vs computed %llu",
++ le32_to_cpu(es->s_inodes_count),
++ (u64)sbi->s_groups_count * sbi->s_inodes_per_group);
++ goto failed_mount;
++ }
+ db_count = (sbi->s_groups_count + EXT2_DESC_PER_BLOCK(sb) - 1) /
+ EXT2_DESC_PER_BLOCK(sb);
+ sbi->s_group_desc = kmalloc_array(db_count,
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 75b8d81b24692..adfc30ee4b7be 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3583,6 +3583,7 @@ extern struct buffer_head *ext4_get_first_inline_block(struct inode *inode,
+ extern int ext4_inline_data_fiemap(struct inode *inode,
+ struct fiemap_extent_info *fieinfo,
+ int *has_inline, __u64 start, __u64 len);
++extern void *ext4_read_inline_link(struct inode *inode);
+
+ struct iomap;
+ extern int ext4_inline_data_iomap(struct inode *inode, struct iomap *iomap);
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index cff52ff6549d2..a4fbe825694b1 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -6,6 +6,7 @@
+
+ #include <linux/iomap.h>
+ #include <linux/fiemap.h>
++#include <linux/namei.h>
+ #include <linux/iversion.h>
+ #include <linux/sched/mm.h>
+
+@@ -35,6 +36,9 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
+ struct ext4_inode *raw_inode;
+ int free, min_offs;
+
++ if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
++ return 0;
++
+ min_offs = EXT4_SB(inode->i_sb)->s_inode_size -
+ EXT4_GOOD_OLD_INODE_SIZE -
+ EXT4_I(inode)->i_extra_isize -
+@@ -1588,6 +1592,35 @@ out:
+ return ret;
+ }
+
++void *ext4_read_inline_link(struct inode *inode)
++{
++ struct ext4_iloc iloc;
++ int ret, inline_size;
++ void *link;
++
++ ret = ext4_get_inode_loc(inode, &iloc);
++ if (ret)
++ return ERR_PTR(ret);
++
++ ret = -ENOMEM;
++ inline_size = ext4_get_inline_size(inode);
++ link = kmalloc(inline_size + 1, GFP_NOFS);
++ if (!link)
++ goto out;
++
++ ret = ext4_read_inline_data(inode, link, inline_size, &iloc);
++ if (ret < 0) {
++ kfree(link);
++ goto out;
++ }
++ nd_terminate_link(link, inode->i_size, ret);
++out:
++ if (ret < 0)
++ link = ERR_PTR(ret);
++ brelse(iloc.bh);
++ return link;
++}
++
+ struct buffer_head *ext4_get_first_inline_block(struct inode *inode,
+ struct ext4_dir_entry_2 **parent_de,
+ int *retval)
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 84c0eb55071d6..560cf8dc59359 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -177,6 +177,8 @@ void ext4_evict_inode(struct inode *inode)
+
+ trace_ext4_evict_inode(inode);
+
++ if (EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)
++ ext4_evict_ea_inode(inode);
+ if (inode->i_nlink) {
+ /*
+ * When journalling data dirty buffers are tracked only in the
+@@ -1571,7 +1573,14 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
+ ext4_lblk_t start, last;
+ start = index << (PAGE_SHIFT - inode->i_blkbits);
+ last = end << (PAGE_SHIFT - inode->i_blkbits);
++
++ /*
++ * avoid racing with extent status tree scans made by
++ * ext4_insert_delayed_block()
++ */
++ down_write(&EXT4_I(inode)->i_data_sem);
+ ext4_es_remove_extent(inode, start, last - start + 1);
++ up_write(&EXT4_I(inode)->i_data_sem);
+ }
+
+ pagevec_init(&pvec);
+@@ -3140,13 +3149,15 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
+ {
+ struct inode *inode = mapping->host;
+ journal_t *journal;
++ sector_t ret = 0;
+ int err;
+
++ inode_lock_shared(inode);
+ /*
+ * We can get here for an inline file via the FIBMAP ioctl
+ */
+ if (ext4_has_inline_data(inode))
+- return 0;
++ goto out;
+
+ if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY) &&
+ test_opt(inode->i_sb, DELALLOC)) {
+@@ -3185,10 +3196,14 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
+ jbd2_journal_unlock_updates(journal);
+
+ if (err)
+- return 0;
++ goto out;
+ }
+
+- return iomap_bmap(mapping, block, &ext4_iomap_ops);
++ ret = iomap_bmap(mapping, block, &ext4_iomap_ops);
++
++out:
++ inode_unlock_shared(inode);
++ return ret;
+ }
+
+ static int ext4_read_folio(struct file *file, struct folio *folio)
+@@ -4685,8 +4700,7 @@ static inline int ext4_iget_extra_inode(struct inode *inode,
+ __le32 *magic = (void *)raw_inode +
+ EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize;
+
+- if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize + sizeof(__le32) <=
+- EXT4_INODE_SIZE(inode->i_sb) &&
++ if (EXT4_INODE_HAS_XATTR_SPACE(inode) &&
+ *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+ return ext4_find_inline_data_nolock(inode);
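
The ext4_bmap() rework above is a classic single-exit conversion: the shared inode lock is taken once, every early return is rerouted through one "out" label, and a default of 0 (no mapping) is reported on any bail-out, so no path can leave the lock held. A standalone analogue of the shape, with parameters standing in for the inline-data and journal checks:

#include <pthread.h>

static pthread_rwlock_t inode_lock = PTHREAD_RWLOCK_INITIALIZER;

static long lookup_block(int has_inline_data, int journal_err,
                         long mapped_block)
{
        long ret = 0;

        pthread_rwlock_rdlock(&inode_lock);
        if (has_inline_data)
                goto out;       /* inline files have no block mapping */
        if (journal_err)
                goto out;       /* journal flush failed: report none */
        ret = mapped_block;
out:
        pthread_rwlock_unlock(&inode_lock);
        return ret;
}
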
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 42f590518b4ce..54e7d3c95fd71 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -417,7 +417,7 @@ int ext4_ext_migrate(struct inode *inode)
+ struct inode *tmp_inode = NULL;
+ struct migrate_struct lb;
+ unsigned long max_entries;
+- __u32 goal;
++ __u32 goal, tmp_csum_seed;
+ uid_t owner[2];
+
+ /*
+@@ -465,6 +465,7 @@ int ext4_ext_migrate(struct inode *inode)
+ * the migration.
+ */
+ ei = EXT4_I(inode);
++ tmp_csum_seed = EXT4_I(tmp_inode)->i_csum_seed;
+ EXT4_I(tmp_inode)->i_csum_seed = ei->i_csum_seed;
+ i_size_write(tmp_inode, i_size_read(inode));
+ /*
+@@ -575,6 +576,7 @@ err_out:
+ * the inode is not visible to user space.
+ */
+ tmp_inode->i_blocks = 0;
++ EXT4_I(tmp_inode)->i_csum_seed = tmp_csum_seed;
+
+ /* Reset the extent details */
+ ext4_ext_tree_init(handle, tmp_inode);
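
The migrate fix saves the temporary inode's own checksum seed before overriding it with the real inode's, and puts it back before the tmp inode is torn down, so its blocks carry valid checksums when freed. The save/borrow/restore discipline, in miniature:

#include <stdint.h>

struct inode_seed { uint32_t csum_seed; };

static void copy_with_borrowed_seed(struct inode_seed *tmp,
                                    const struct inode_seed *src)
{
        uint32_t saved = tmp->csum_seed;        /* remember tmp's own seed */

        tmp->csum_seed = src->csum_seed;        /* borrow for the copy */
        /* ... migrate extents so their checksums match src ... */
        tmp->csum_seed = saved;                 /* restore before teardown */
}
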
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index db4ba99d1cebe..4af441494e09b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -54,6 +54,7 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ struct inode *inode,
+ ext4_lblk_t *block)
+ {
++ struct ext4_map_blocks map;
+ struct buffer_head *bh;
+ int err;
+
+@@ -63,6 +64,21 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ return ERR_PTR(-ENOSPC);
+
+ *block = inode->i_size >> inode->i_sb->s_blocksize_bits;
++ map.m_lblk = *block;
++ map.m_len = 1;
++
++ /*
++ * We're appending a new directory block. Make sure the block is not
++ * allocated yet, otherwise we will end up corrupting the
++ * directory.
++ */
++ err = ext4_map_blocks(NULL, inode, &map, 0);
++ if (err < 0)
++ return ERR_PTR(err);
++ if (err) {
++ EXT4_ERROR_INODE(inode, "Logical block already allocated");
++ return ERR_PTR(-EFSCORRUPTED);
++ }
+
+ bh = ext4_bread(handle, inode, *block, EXT4_GET_BLOCKS_CREATE);
+ if (IS_ERR(bh))
+@@ -110,6 +126,13 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ struct ext4_dir_entry *dirent;
+ int is_dx_block = 0;
+
++ if (block >= inode->i_size) {
++ ext4_error_inode(inode, func, line, block,
++ "Attempting to read directory block (%u) that is past i_size (%llu)",
++ block, inode->i_size);
++ return ERR_PTR(-EFSCORRUPTED);
++ }
++
+ if (ext4_simulate_fail(inode->i_sb, EXT4_SIM_DIRBLOCK_EIO))
+ bh = ERR_PTR(-EIO);
+ else
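
Both namei.c hunks defend directory operations against corrupted metadata: ext4_append() now probes the mapping before allocating, and __ext4_read_dirblock() refuses blocks past i_size. A sketch of the append-time guard, where lookup_mapping() is a hypothetical stand-in for ext4_map_blocks(NULL, ...) returning >0 if already mapped, 0 if not, negative errno on failure:

#include <errno.h>

static int lookup_mapping(long lblk)
{
        (void)lblk;
        return 0;       /* stub: nothing mapped at this index yet */
}

static int append_dir_block(long next_lblk)
{
        int mapped = lookup_mapping(next_lblk);

        if (mapped < 0)
                return mapped;          /* lookup failed */
        if (mapped)
                return -EUCLEAN;        /* already allocated: corruption */
        /* ... safe to allocate and initialize the new block ... */
        return 0;
}
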
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 8b70a47012931..e5c2713aa11ad 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1484,6 +1484,7 @@ static void ext4_update_super(struct super_block *sb,
+ * Update the fs overhead information
+ */
+ ext4_calculate_overhead(sb);
++ es->s_overhead_clusters = cpu_to_le32(sbi->s_overhead);
+
+ if (test_opt(sb, DEBUG))
+ printk(KERN_DEBUG "EXT4-fs: added group %u:"
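
The resize hunk persists the freshly computed overhead into the on-disk superblock; like the other superblock fields it is stored little-endian, which cpu_to_le32() guarantees regardless of host byte order. A portable userspace equivalent of that conversion:

#include <stdint.h>

/* store a 32-bit value little-endian, independent of host endianness */
static void put_le32(uint8_t out[4], uint32_t v)
{
        out[0] = v & 0xff;
        out[1] = (v >> 8) & 0xff;
        out[2] = (v >> 16) & 0xff;
        out[3] = (v >> 24) & 0xff;
}
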
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index d281f5bcc5264..3d3ed3c38f564 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -74,6 +74,21 @@ static const char *ext4_get_link(struct dentry *dentry, struct inode *inode,
+ struct delayed_call *callback)
+ {
+ struct buffer_head *bh;
++ char *inline_link;
++
++ /*
++ * Creating a new inlined symlink is not supported; just provide a
++ * method to read the leftovers.
++ */
++ if (ext4_has_inline_data(inode)) {
++ if (!dentry)
++ return ERR_PTR(-ECHILD);
++
++ inline_link = ext4_read_inline_link(inode);
++ if (!IS_ERR(inline_link))
++ set_delayed_call(callback, kfree_link, inline_link);
++ return inline_link;
++ }
+
+ if (!dentry) {
+ bh = ext4_getblk(NULL, inode, 0, EXT4_GET_BLOCKS_CACHED_NOWAIT);
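
The symlink path hands the allocated target back through the VFS delayed-call mechanism: the callee registers kfree_link() as a destructor and the caller runs it once the name is no longer needed, instead of freeing eagerly. A userspace sketch of that contract; the struct mirrors the kernel's struct delayed_call, but treat the details as illustrative:

#include <stdlib.h>
#include <string.h>

struct delayed_call {
        void (*fn)(void *);
        void *arg;
};

/* callee: register a destructor instead of freeing eagerly */
static void set_delayed(struct delayed_call *dc, void (*fn)(void *), void *arg)
{
        dc->fn = fn;
        dc->arg = arg;
}

/* caller: run the destructor once done with the returned name */
static void run_delayed(struct delayed_call *dc)
{
        if (dc->fn)
                dc->fn(dc->arg);
}

static const char *get_link(struct delayed_call *dc)
{
        char *name = strdup("target");  /* stands in for the read target */

        if (name)
                set_delayed(dc, free, name);
        return name;
}
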
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 564e28a1aa942..533216e80fa2b 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -436,6 +436,21 @@ error:
+ return err;
+ }
+
++/* Remove entry from mbcache when EA inode is getting evicted */
++void ext4_evict_ea_inode(struct inode *inode)
++{
++ struct mb_cache_entry *oe;
++
++ if (!EA_INODE_CACHE(inode))
++ return;
++ /* Wait for entry to get unused so that we can remove it */
++ while ((oe = mb_cache_entry_delete_or_get(EA_INODE_CACHE(inode),
++ ext4_xattr_inode_get_hash(inode), inode->i_ino))) {
++ mb_cache_entry_wait_unused(oe);
++ mb_cache_entry_put(EA_INODE_CACHE(inode), oe);
++ }
++}
++
+ static int
+ ext4_xattr_inode_verify_hashes(struct inode *ea_inode,
+ struct ext4_xattr_entry *entry, void *buffer,
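
ext4_evict_ea_inode() above loops delete-or-get, wait-unused, put until no one else holds the mbcache entry, guaranteeing the entry is gone before the EA inode is. A userspace sketch of the same wait-until-unused eviction, with a mutex and condvar standing in for the kernel primitives:

#include <pthread.h>
#include <stdbool.h>

struct cache_entry {
        pthread_mutex_t lock;
        pthread_cond_t unused;
        int refs;               /* holders other than the cache itself */
        bool in_cache;
};

/* drop a reference; wake the evictor when the last holder goes away */
static void entry_put(struct cache_entry *e)
{
        pthread_mutex_lock(&e->lock);
        if (--e->refs == 0)
                pthread_cond_signal(&e->unused);
        pthread_mutex_unlock(&e->lock);
}

/* remove the entry, waiting out any current holders first */
static void evict_entry(struct cache_entry *e)
{
        pthread_mutex_lock(&e->lock);
        e->in_cache = false;            /* no new lookups can find it */
        while (e->refs > 0)
                pthread_cond_wait(&e->unused, &e->lock);
        pthread_mutex_unlock(&e->lock);
        /* now safe to free the entry */
}
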
+@@ -976,10 +991,8 @@ int __ext4_xattr_set_credits(struct super_block *sb, struct inode *inode,
+ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ int ref_change)
+ {
+- struct mb_cache *ea_inode_cache = EA_INODE_CACHE(ea_inode);
+ struct ext4_iloc iloc;
+ s64 ref_count;
+- u32 hash;
+ int ret;
+
+ inode_lock(ea_inode);
+@@ -1002,14 +1015,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+
+ set_nlink(ea_inode, 1);
+ ext4_orphan_del(handle, ea_inode);
+-
+- if (ea_inode_cache) {
+- hash = ext4_xattr_inode_get_hash(ea_inode);
+- mb_cache_entry_create(ea_inode_cache,
+- GFP_NOFS, hash,
+- ea_inode->i_ino,
+- true /* reusable */);
+- }
+ }
+ } else {
+ WARN_ONCE(ref_count < 0, "EA inode %lu ref_count=%lld",
+@@ -1022,12 +1027,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+
+ clear_nlink(ea_inode);
+ ext4_orphan_add(handle, ea_inode);
+-
+- if (ea_inode_cache) {
+- hash = ext4_xattr_inode_get_hash(ea_inode);
+- mb_cache_entry_delete(ea_inode_cache, hash,
+- ea_inode->i_ino);
+- }
+ }
+ }
+
+@@ -1237,6 +1236,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ if (error)
+ goto out;
+
++retry_ref:
+ lock_buffer(bh);
+ hash = le32_to_cpu(BHDR(bh)->h_hash);
+ ref = le32_to_cpu(BHDR(bh)->h_refcount);
+@@ -1246,9 +1246,18 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ * This must happen under buffer lock for
+ * ext4_xattr_block_set() to reliably detect freed block
+ */
+- if (ea_block_cache)
+- mb_cache_entry_delete(ea_block_cache, hash,
+- bh->b_blocknr);
++ if (ea_block_cache) {
++ struct mb_cache_entry *oe;
++
++ oe = mb_cache_entry_delete_or_get(ea_block_cache, hash,
++ bh->b_blocknr);
++ if (oe) {
++ unlock_buffer(bh);
++ mb_cache_entry_wait_unused(oe);
++ mb_cache_entry_put(ea_block_cache, oe);
++ goto retry_ref;
++ }
++ }
+ get_bh(bh);
+ unlock_buffer(bh);
+
+@@ -1858,6 +1867,8 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ #define header(x) ((struct ext4_xattr_header *)(x))
+
+ if (s->base) {
++ int offset = (char *)s->here - bs->bh->b_data;
++
+ BUFFER_TRACE(bs->bh, "get_write_access");
+ error = ext4_journal_get_write_access(handle, sb, bs->bh,
+ EXT4_JTR_NONE);
+@@ -1873,9 +1884,20 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ * ext4_xattr_block_set() to reliably detect modified
+ * block
+ */
+- if (ea_block_cache)
+- mb_cache_entry_delete(ea_block_cache, hash,
+- bs->bh->b_blocknr);
++ if (ea_block_cache) {
++ struct mb_cache_entry *oe;
++
++ oe = mb_cache_entry_delete_or_get(ea_block_cache,
++ hash, bs->bh->b_blocknr);
++ if (oe) {
++ /*
++ * Xattr block is getting reused. Leave
++ * it alone.
++ */
++ mb_cache_entry_put(ea_block_cache, oe);
++ goto clone_block;
++ }
++ }
+ ea_bdebug(bs->bh, "modifying in-place");
+ error = ext4_xattr_set_entry(i, s, handle, inode,
+ true /* is_block */);
+@@ -1890,49 +1912,47 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ if (error)
+ goto cleanup;
+ goto inserted;
+- } else {
+- int offset = (char *)s->here - bs->bh->b_data;
++ }
++clone_block:
++ unlock_buffer(bs->bh);
++ ea_bdebug(bs->bh, "cloning");
++ s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
++ error = -ENOMEM;
++ if (s->base == NULL)
++ goto cleanup;
++ s->first = ENTRY(header(s->base)+1);
++ header(s->base)->h_refcount = cpu_to_le32(1);
++ s->here = ENTRY(s->base + offset);
++ s->end = s->base + bs->bh->b_size;
+
+- unlock_buffer(bs->bh);
+- ea_bdebug(bs->bh, "cloning");
+- s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
+- error = -ENOMEM;
+- if (s->base == NULL)
++ /*
++ * If existing entry points to an xattr inode, we need
++ * to prevent ext4_xattr_set_entry() from decrementing
++ * ref count on it because the reference belongs to the
++ * original block. In this case, make the entry look
++ * like it has an empty value.
++ */
++ if (!s->not_found && s->here->e_value_inum) {
++ ea_ino = le32_to_cpu(s->here->e_value_inum);
++ error = ext4_xattr_inode_iget(inode, ea_ino,
++ le32_to_cpu(s->here->e_hash),
++ &tmp_inode);
++ if (error)
+ goto cleanup;
+- s->first = ENTRY(header(s->base)+1);
+- header(s->base)->h_refcount = cpu_to_le32(1);
+- s->here = ENTRY(s->base + offset);
+- s->end = s->base + bs->bh->b_size;
+-
+- /*
+- * If existing entry points to an xattr inode, we need
+- * to prevent ext4_xattr_set_entry() from decrementing
+- * ref count on it because the reference belongs to the
+- * original block. In this case, make the entry look
+- * like it has an empty value.
+- */
+- if (!s->not_found && s->here->e_value_inum) {
+- ea_ino = le32_to_cpu(s->here->e_value_inum);
+- error = ext4_xattr_inode_iget(inode, ea_ino,
+- le32_to_cpu(s->here->e_hash),
+- &tmp_inode);
+- if (error)
+- goto cleanup;
+-
+- if (!ext4_test_inode_state(tmp_inode,
+- EXT4_STATE_LUSTRE_EA_INODE)) {
+- /*
+- * Defer quota free call for previous
+- * inode until success is guaranteed.
+- */
+- old_ea_inode_quota = le32_to_cpu(
+- s->here->e_value_size);
+- }
+- iput(tmp_inode);
+
+- s->here->e_value_inum = 0;
+- s->here->e_value_size = 0;
++ if (!ext4_test_inode_state(tmp_inode,
++ EXT4_STATE_LUSTRE_EA_INODE)) {
++ /*
++ * Defer quota free call for previous
++ * inode until success is guaranteed.
++ */
++ old_ea_inode_quota = le32_to_cpu(
++ s->here->e_value_size);
+ }
++ iput(tmp_inode);
++
++ s->here->e_value_inum = 0;
++ s->here->e_value_size = 0;
+ }
+ } else {
+ /* Allocate a buffer where we construct the new block. */
+@@ -1999,18 +2019,13 @@ inserted:
+ lock_buffer(new_bh);
+ /*
+ * We have to be careful about races with
+- * freeing, rehashing or adding references to
+- * xattr block. Once we hold buffer lock xattr
+- * block's state is stable so we can check
+- * whether the block got freed / rehashed or
+- * not. Since we unhash mbcache entry under
+- * buffer lock when freeing / rehashing xattr
+- * block, checking whether entry is still
+- * hashed is reliable. Same rules hold for
+- * e_reusable handling.
++ * adding references to xattr block. Once we
++ * hold buffer lock xattr block's state is
++ * stable so we can check the additional
++ * reference fits.
+ */
+- if (hlist_bl_unhashed(&ce->e_hash_list) ||
+- !ce->e_reusable) {
++ ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
++ if (ref > EXT4_XATTR_REFCOUNT_MAX) {
+ /*
+ * Undo everything and check mbcache
+ * again.
+@@ -2025,9 +2040,8 @@ inserted:
+ new_bh = NULL;
+ goto inserted;
+ }
+- ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
+ BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
+- if (ref >= EXT4_XATTR_REFCOUNT_MAX)
++ if (ref == EXT4_XATTR_REFCOUNT_MAX)
+ ce->e_reusable = 0;
+ ea_bdebug(new_bh, "reusing; refcount now=%d",
+ ref);
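
With the mbcache hints gone, reuse of a shared xattr block is now decided purely by the refcount read under the buffer lock: compute the would-be count, undo and retry if it exceeds the maximum, and clear e_reusable when the last slot is taken. Condensed logic; the bound below is an illustrative stand-in for EXT4_XATTR_REFCOUNT_MAX, not the real value:

#include <stdbool.h>

#define REFCOUNT_MAX 1024       /* illustrative bound only */

/* take one more share of a deduplicated block, or refuse if full */
static bool try_share_block(unsigned int *refcount, bool *reusable)
{
        unsigned int ref = *refcount + 1;

        if (ref > REFCOUNT_MAX)
                return false;           /* caller undoes and retries */
        *refcount = ref;
        if (ref == REFCOUNT_MAX)
                *reusable = false;      /* last slot: stop advertising */
        return true;
}
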
+@@ -2175,8 +2189,9 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
+ struct ext4_inode *raw_inode;
+ int error;
+
+- if (EXT4_I(inode)->i_extra_isize == 0)
++ if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+ return 0;
++
+ raw_inode = ext4_raw_inode(&is->iloc);
+ header = IHDR(inode, raw_inode);
+ is->s.base = is->s.first = IFIRST(header);
+@@ -2204,8 +2219,9 @@ int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
+ struct ext4_xattr_search *s = &is->s;
+ int error;
+
+- if (EXT4_I(inode)->i_extra_isize == 0)
++ if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
+ return -ENOSPC;
++
+ error = ext4_xattr_set_entry(i, s, handle, inode, false /* is_block */);
+ if (error)
+ return error;
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index 77efb9a627ad2..e5e36bd11f055 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -95,6 +95,19 @@ struct ext4_xattr_entry {
+
+ #define EXT4_ZERO_XATTR_VALUE ((void *)-1)
+
++/*
++ * If we want to add an xattr to the inode, we should make sure that
++ * i_extra_isize is not 0 and that the inode size is not less than
++ * EXT4_GOOD_OLD_INODE_SIZE + extra_isize + pad.
++ * EXT4_GOOD_OLD_INODE_SIZE extra_isize header entry pad data
++ * |--------------------------|------------|------|---------|---|-------|
++ */
++#define EXT4_INODE_HAS_XATTR_SPACE(inode) \
++ ((EXT4_I(inode)->i_extra_isize != 0) && \
++ (EXT4_GOOD_OLD_INODE_SIZE + EXT4_I(inode)->i_extra_isize + \
++ sizeof(struct ext4_xattr_ibody_header) + EXT4_XATTR_PAD <= \
++ EXT4_INODE_SIZE((inode)->i_sb)))
++
+ struct ext4_xattr_info {
+ const char *name;
+ const void *value;
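
A worked instance of EXT4_INODE_HAS_XATTR_SPACE() with typical figures: EXT4_GOOD_OLD_INODE_SIZE is 128 bytes, and the ibody header and pad are assumed to be 4 bytes each below, so treat the exact sizes as illustrative rather than authoritative:

#include <stdio.h>

#define GOOD_OLD_INODE_SIZE 128
#define IBODY_HEADER_SIZE     4 /* assumed sizeof(ibody header) */
#define XATTR_PAD             4 /* assumed EXT4_XATTR_PAD */

static int inode_has_xattr_space(unsigned inode_size, unsigned extra_isize)
{
        return extra_isize != 0 &&
               GOOD_OLD_INODE_SIZE + extra_isize +
               IBODY_HEADER_SIZE + XATTR_PAD <= inode_size;
}

int main(void)
{
        printf("%d\n", inode_has_xattr_space(256, 32)); /* 1: 168 <= 256 */
        printf("%d\n", inode_has_xattr_space(128, 0));  /* 0: old inode */
        return 0;
}
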
+@@ -178,6 +191,7 @@ extern void ext4_xattr_inode_array_free(struct ext4_xattr_inode_array *array);
+
+ extern int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ struct ext4_inode *raw_inode, handle_t *handle);
++extern void ext4_evict_ea_inode(struct inode *inode);
+
+ extern const struct xattr_handler *ext4_xattr_handlers[];
+
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 7fcbcf9797372..f2a2726134779 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1463,9 +1463,12 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
+ *map->m_next_extent = pgofs + map->m_len;
+
+ /* for hardware encryption, but to avoid potential issue in future */
+- if (flag == F2FS_GET_BLOCK_DIO)
++ if (flag == F2FS_GET_BLOCK_DIO) {
+ f2fs_wait_on_block_writeback_range(inode,
+ map->m_pblk, map->m_len);
++ invalidate_mapping_pages(META_MAPPING(sbi),
++ map->m_pblk, map->m_pblk + map->m_len - 1);
++ }
+
+ if (map->m_multidev_dio) {
+ block_t blk_addr = map->m_pblk;
+@@ -1682,7 +1685,7 @@ sync_out:
+ f2fs_wait_on_block_writeback_range(inode,
+ map->m_pblk, map->m_len);
+ invalidate_mapping_pages(META_MAPPING(sbi),
+- map->m_pblk, map->m_pblk);
++ map->m_pblk, map->m_pblk + map->m_len - 1);
+
+ if (map->m_multidev_dio) {
+ block_t blk_addr = map->m_pblk;
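
Both f2fs hunks fix the end argument passed to invalidate_mapping_pages(), which is inclusive: a run of m_len blocks starting at m_pblk ends at m_pblk + m_len - 1, so the old code invalidated only the first block. A quick check of the arithmetic:

#include <assert.h>

int main(void)
{
        unsigned long pblk = 100, len = 8;
        unsigned long first = pblk, last = pblk + len - 1;      /* 100..107 */

        assert(last - first + 1 == len);  /* inclusive span covers len blocks */
        return 0;
}
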
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index d9bbecd008d22..5c950298837f1 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -4401,7 +4401,7 @@ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi)
+ static inline bool f2fs_may_compress(struct inode *inode)
+ {
+ if (IS_SWAPFILE(inode) || f2fs_is_pinned_file(inode) ||
+- f2fs_is_atomic_file(inode))
++ f2fs_is_atomic_file(inode) || f2fs_has_inline_data(inode))
+ return false;
+ return S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode);
+ }
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index bd14cef1b08fd..fc0f30738b21c 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1873,10 +1873,7 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ if (masked_flags & F2FS_COMPR_FL) {
+ if (!f2fs_disable_compressed_file(inode))
+ return -EINVAL;
+- }
+- if (iflags & F2FS_NOCOMP_FL)
+- return -EINVAL;
+- if (iflags & F2FS_COMPR_FL) {
++ } else {
+ if (!f2fs_may_compress(inode))
+ return -EINVAL;
+ if (S_ISREG(inode->i_mode) && inode->i_size)
+@@ -1885,10 +1882,6 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ set_compress_context(inode);
+ }
+ }
+- if ((iflags ^ masked_flags) & F2FS_NOCOMP_FL) {
+- if (masked_flags & F2FS_COMPR_FL)
+- return -EINVAL;
+- }
+
+ fi->i_flags = iflags | (fi->i_flags & ~mask);
+ f2fs_bug_on(F2FS_I_SB(inode), (fi->i_flags & F2FS_COMPR_FL) &&
+@@ -3945,6 +3938,11 @@ static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
+ goto out;
+ }
+
++ if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+ if (ret)
+ goto out;
+@@ -4012,6 +4010,11 @@ static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
+ goto out;
+ }
+
++ if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+ if (ret)
+ goto out;
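
The f2fs_setflags_common() rework folds the COMPR/NOCOMP cross-checks into a single if/else on the direction of the toggle, and both ioctls above now refuse files whose compressed space was already released. A condensed model of the toggle rule, with predicates as illustrative stand-ins for the in-kernel checks:

#include <stdbool.h>

/*
 * Direction of the F2FS_COMPR_FL toggle decides which predicate must
 * hold: a compressed file must be convertible back to normal, a
 * normal one must still be eligible for compression.
 */
static bool compr_toggle_ok(bool was_compressed, bool can_disable,
                            bool may_compress)
{
        if (was_compressed)
                return can_disable;     /* must be able to decompress */
        return may_compress;            /* must be eligible to compress */
}
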
+diff --git a/fs/fuse/control.c b/fs/fuse/control.c
+index 7cede9a3bc962..247ef4f767612 100644
+--- a/fs/fuse/control.c
++++ b/fs/fuse/control.c
+@@ -258,7 +258,7 @@ int fuse_ctl_add_conn(struct fuse_conn *fc)
+ struct dentry *parent;
+ char name[32];
+
+- if (!fuse_control_sb)
++ if (!fuse_control_sb || fc->no_control)
+ return 0;
+
+ parent = fuse_control_sb->s_root;
+@@ -296,7 +296,7 @@ void fuse_ctl_remove_conn(struct fuse_conn *fc)
+ {
+ int i;
+
+- if (!fuse_control_sb)
++ if (!fuse_control_sb || fc->no_control)
+ return;
+
+ for (i = fc->ctl_ndents - 1; i >= 0; i--) {
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 74303d6e987b3..a93d675a726a3 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -537,6 +537,7 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ struct fuse_file *ff;
+ void *security_ctx = NULL;
+ u32 security_ctxlen;
++ bool trunc = flags & O_TRUNC;
+
+ /* Userspace expects S_IFREG in create mode */
+ BUG_ON((mode & S_IFMT) != S_IFREG);
+@@ -561,7 +562,7 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ inarg.mode = mode;
+ inarg.umask = current_umask();
+
+- if (fm->fc->handle_killpriv_v2 && (flags & O_TRUNC) &&
++ if (fm->fc->handle_killpriv_v2 && trunc &&
+ !(flags & O_EXCL) && !capable(CAP_FSETID)) {
+ inarg.open_flags |= FUSE_OPEN_KILL_SUIDGID;
+ }
+@@ -623,6 +624,10 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ } else {
+ file->private_data = ff;
+ fuse_finish_open(inode, file);
++ if (fm->fc->atomic_o_trunc && trunc)
++ truncate_pagecache(inode, 0);
++ else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++ invalidate_inode_pages2(inode->i_mapping);
+ }
+ return err;
+
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 05caa2b9272e8..dfee142bca5c6 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -210,13 +210,9 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ i_size_write(inode, 0);
+ spin_unlock(&fi->lock);
+- truncate_pagecache(inode, 0);
+ file_update_time(file);
+ fuse_invalidate_attr_mask(inode, FUSE_STATX_MODSIZE);
+- } else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
+- invalidate_inode_pages2(inode->i_mapping);
+ }
+-
+ if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
+ fuse_link_write_file(file);
+ }
+@@ -239,30 +235,38 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
+ if (err)
+ return err;
+
+- if (is_wb_truncate || dax_truncate) {
++ if (is_wb_truncate || dax_truncate)
+ inode_lock(inode);
+- fuse_set_nowrite(inode);
+- }
+
+ if (dax_truncate) {
+ filemap_invalidate_lock(inode->i_mapping);
+ err = fuse_dax_break_layouts(inode, 0, 0);
+ if (err)
+- goto out;
++ goto out_inode_unlock;
+ }
+
++ if (is_wb_truncate || dax_truncate)
++ fuse_set_nowrite(inode);
++
+ err = fuse_do_open(fm, get_node_id(inode), file, isdir);
+ if (!err)
+ fuse_finish_open(inode, file);
+
+-out:
++ if (is_wb_truncate || dax_truncate)
++ fuse_release_nowrite(inode);
++ if (!err) {
++ struct fuse_file *ff = file->private_data;
++
++ if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC))
++ truncate_pagecache(inode, 0);
++ else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++ invalidate_inode_pages2(inode->i_mapping);
++ }
+ if (dax_truncate)
+ filemap_invalidate_unlock(inode->i_mapping);
+-
+- if (is_wb_truncate | dax_truncate) {
+- fuse_release_nowrite(inode);
++out_inode_unlock:
++ if (is_wb_truncate || dax_truncate)
+ inode_unlock(inode);
+- }
+
+ return err;
+ }
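
In both the create-open path (dir.c above) and this open path, the page-cache decision now runs only after the open has succeeded and nowrite protection is released: atomic O_TRUNC drops the cache wholesale, and otherwise a server that did not set FOPEN_KEEP_CACHE gets its possibly stale pages invalidated. The decision, condensed into a standalone sketch:

#include <stdbool.h>

enum cache_action { CACHE_TRUNCATE, CACHE_INVALIDATE, CACHE_KEEP };

/* decide what happens to cached pages once an open has succeeded */
static enum cache_action after_open(bool atomic_o_trunc, bool o_trunc,
                                    bool keep_cache)
{
        if (atomic_o_trunc && o_trunc)
                return CACHE_TRUNCATE;          /* file is now empty */
        if (!keep_cache)
                return CACHE_INVALIDATE;        /* may be stale */
        return CACHE_KEEP;
}
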
+@@ -338,6 +342,15 @@ static int fuse_open(struct inode *inode, struct file *file)
+
+ static int fuse_release(struct inode *inode, struct file *file)
+ {
++ struct fuse_conn *fc = get_fuse_conn(inode);
++
++ /*
++ * Dirty pages might remain despite write_inode_now() call from
++ * fuse_flush() due to writes racing with the close.
++ */
++ if (fc->writeback_cache)
++ write_inode_now(inode, 1);
++
+ fuse_release_common(file, false);
+
+ /* return value is ignored by VFS */
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 8c0665c5dff88..7c290089e693e 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -180,6 +180,12 @@ void fuse_change_attributes_common(struct inode *inode, struct fuse_attr *attr,
+ inode->i_uid = make_kuid(fc->user_ns, attr->uid);
+ inode->i_gid = make_kgid(fc->user_ns, attr->gid);
+ inode->i_blocks = attr->blocks;
++
++ /* Sanitize nsecs */
++ attr->atimensec = min_t(u32, attr->atimensec, NSEC_PER_SEC - 1);
++ attr->mtimensec = min_t(u32, attr->mtimensec, NSEC_PER_SEC - 1);
++ attr->ctimensec = min_t(u32, attr->ctimensec, NSEC_PER_SEC - 1);
++
+ inode->i_atime.tv_sec = attr->atime;
+ inode->i_atime.tv_nsec = attr->atimensec;
+ /* mtime from server may be stale due to local buffered write */
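
The clamp above mirrors min_t(u32, nsec, NSEC_PER_SEC - 1): tv_nsec values of one billion or more are invalid to the VFS, so timestamps from an ill-behaved server are saturated to the largest legal nanosecond rather than trusted. Equivalent logic:

#include <stdint.h>

#define NSEC_PER_SEC 1000000000U

/* saturate an untrusted nanosecond field to the legal range [0, 1e9) */
static uint32_t sanitize_nsec(uint32_t nsec)
{
        return nsec < NSEC_PER_SEC ? nsec : NSEC_PER_SEC - 1;
}
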
+diff --git a/fs/fuse/ioctl.c b/fs/fuse/ioctl.c
+index 33cde4bbccdc1..61d8afcb10a3f 100644
+--- a/fs/fuse/ioctl.c
++++ b/fs/fuse/ioctl.c
+@@ -9,6 +9,17 @@
+ #include <linux/compat.h>
+ #include <linux/fileattr.h>
+
++static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args)
++{
++ ssize_t ret = fuse_simple_request(fm, args);
++
++ /* Translate ENOSYS, which shouldn't be returned from fs */
++ if (ret == -ENOSYS)
++ ret = -ENOTTY;
++
++ return ret;
++}
++
+ /*
+ * CUSE servers compiled on 32bit broke on 64bit kernels because the
+ * ABI was defined to be 'struct iovec' which is different on 32bit
+@@ -259,7 +270,7 @@ long fuse_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg,
+ ap.args.out_pages = true;
+ ap.args.out_argvar = true;
+
+- transferred = fuse_simple_request(fm, &ap.args);
++ transferred = fuse_send_ioctl(fm, &ap.args);
+ err = transferred;
+ if (transferred < 0)
+ goto out;
+@@ -393,7 +404,7 @@ static int fuse_priv_ioctl(struct inode *inode, struct fuse_file *ff,
+ args.out_args[1].size = inarg.out_size;
+ args.out_args[1].value = ptr;
+
+- err = fuse_simple_request(fm, &args);
++ err = fuse_send_ioctl(fm, &args);
+ if (!err) {
+ if (outarg.result < 0)
+ err = outarg.result;
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+deleted file mode 100644
+index 824623bcf1a53..0000000000000
+--- a/fs/io-wq.c
++++ /dev/null
+@@ -1,1424 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Basic worker thread pool for io_uring
+- *
+- * Copyright (C) 2019 Jens Axboe
+- *
+- */
+-#include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/errno.h>
+-#include <linux/sched/signal.h>
+-#include <linux/percpu.h>
+-#include <linux/slab.h>
+-#include <linux/rculist_nulls.h>
+-#include <linux/cpu.h>
+-#include <linux/task_work.h>
+-#include <linux/audit.h>
+-#include <uapi/linux/io_uring.h>
+-
+-#include "io-wq.h"
+-
+-#define WORKER_IDLE_TIMEOUT (5 * HZ)
+-
+-enum {
+- IO_WORKER_F_UP = 1, /* up and active */
+- IO_WORKER_F_RUNNING = 2, /* account as running */
+- IO_WORKER_F_FREE = 4, /* worker on free list */
+- IO_WORKER_F_BOUND = 8, /* is doing bounded work */
+-};
+-
+-enum {
+- IO_WQ_BIT_EXIT = 0, /* wq exiting */
+-};
+-
+-enum {
+- IO_ACCT_STALLED_BIT = 0, /* stalled on hash */
+-};
+-
+-/*
+- * One for each thread in a wqe pool
+- */
+-struct io_worker {
+- refcount_t ref;
+- unsigned flags;
+- struct hlist_nulls_node nulls_node;
+- struct list_head all_list;
+- struct task_struct *task;
+- struct io_wqe *wqe;
+-
+- struct io_wq_work *cur_work;
+- struct io_wq_work *next_work;
+- raw_spinlock_t lock;
+-
+- struct completion ref_done;
+-
+- unsigned long create_state;
+- struct callback_head create_work;
+- int create_index;
+-
+- union {
+- struct rcu_head rcu;
+- struct work_struct work;
+- };
+-};
+-
+-#if BITS_PER_LONG == 64
+-#define IO_WQ_HASH_ORDER 6
+-#else
+-#define IO_WQ_HASH_ORDER 5
+-#endif
+-
+-#define IO_WQ_NR_HASH_BUCKETS (1u << IO_WQ_HASH_ORDER)
+-
+-struct io_wqe_acct {
+- unsigned nr_workers;
+- unsigned max_workers;
+- int index;
+- atomic_t nr_running;
+- raw_spinlock_t lock;
+- struct io_wq_work_list work_list;
+- unsigned long flags;
+-};
+-
+-enum {
+- IO_WQ_ACCT_BOUND,
+- IO_WQ_ACCT_UNBOUND,
+- IO_WQ_ACCT_NR,
+-};
+-
+-/*
+- * Per-node worker thread pool
+- */
+-struct io_wqe {
+- raw_spinlock_t lock;
+- struct io_wqe_acct acct[IO_WQ_ACCT_NR];
+-
+- int node;
+-
+- struct hlist_nulls_head free_list;
+- struct list_head all_list;
+-
+- struct wait_queue_entry wait;
+-
+- struct io_wq *wq;
+- struct io_wq_work *hash_tail[IO_WQ_NR_HASH_BUCKETS];
+-
+- cpumask_var_t cpu_mask;
+-};
+-
+-/*
+- * Per io_wq state
+- */
+-struct io_wq {
+- unsigned long state;
+-
+- free_work_fn *free_work;
+- io_wq_work_fn *do_work;
+-
+- struct io_wq_hash *hash;
+-
+- atomic_t worker_refs;
+- struct completion worker_done;
+-
+- struct hlist_node cpuhp_node;
+-
+- struct task_struct *task;
+-
+- struct io_wqe *wqes[];
+-};
+-
+-static enum cpuhp_state io_wq_online;
+-
+-struct io_cb_cancel_data {
+- work_cancel_fn *fn;
+- void *data;
+- int nr_running;
+- int nr_pending;
+- bool cancel_all;
+-};
+-
+-static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
+-static void io_wqe_dec_running(struct io_worker *worker);
+-static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
+- struct io_wqe_acct *acct,
+- struct io_cb_cancel_data *match);
+-static void create_worker_cb(struct callback_head *cb);
+-static void io_wq_cancel_tw_create(struct io_wq *wq);
+-
+-static bool io_worker_get(struct io_worker *worker)
+-{
+- return refcount_inc_not_zero(&worker->ref);
+-}
+-
+-static void io_worker_release(struct io_worker *worker)
+-{
+- if (refcount_dec_and_test(&worker->ref))
+- complete(&worker->ref_done);
+-}
+-
+-static inline struct io_wqe_acct *io_get_acct(struct io_wqe *wqe, bool bound)
+-{
+- return &wqe->acct[bound ? IO_WQ_ACCT_BOUND : IO_WQ_ACCT_UNBOUND];
+-}
+-
+-static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
+- struct io_wq_work *work)
+-{
+- return io_get_acct(wqe, !(work->flags & IO_WQ_WORK_UNBOUND));
+-}
+-
+-static inline struct io_wqe_acct *io_wqe_get_acct(struct io_worker *worker)
+-{
+- return io_get_acct(worker->wqe, worker->flags & IO_WORKER_F_BOUND);
+-}
+-
+-static void io_worker_ref_put(struct io_wq *wq)
+-{
+- if (atomic_dec_and_test(&wq->worker_refs))
+- complete(&wq->worker_done);
+-}
+-
+-static void io_worker_cancel_cb(struct io_worker *worker)
+-{
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+- struct io_wqe *wqe = worker->wqe;
+- struct io_wq *wq = wqe->wq;
+-
+- atomic_dec(&acct->nr_running);
+- raw_spin_lock(&worker->wqe->lock);
+- acct->nr_workers--;
+- raw_spin_unlock(&worker->wqe->lock);
+- io_worker_ref_put(wq);
+- clear_bit_unlock(0, &worker->create_state);
+- io_worker_release(worker);
+-}
+-
+-static bool io_task_worker_match(struct callback_head *cb, void *data)
+-{
+- struct io_worker *worker;
+-
+- if (cb->func != create_worker_cb)
+- return false;
+- worker = container_of(cb, struct io_worker, create_work);
+- return worker == data;
+-}
+-
+-static void io_worker_exit(struct io_worker *worker)
+-{
+- struct io_wqe *wqe = worker->wqe;
+- struct io_wq *wq = wqe->wq;
+-
+- while (1) {
+- struct callback_head *cb = task_work_cancel_match(wq->task,
+- io_task_worker_match, worker);
+-
+- if (!cb)
+- break;
+- io_worker_cancel_cb(worker);
+- }
+-
+- io_worker_release(worker);
+- wait_for_completion(&worker->ref_done);
+-
+- raw_spin_lock(&wqe->lock);
+- if (worker->flags & IO_WORKER_F_FREE)
+- hlist_nulls_del_rcu(&worker->nulls_node);
+- list_del_rcu(&worker->all_list);
+- raw_spin_unlock(&wqe->lock);
+- io_wqe_dec_running(worker);
+- worker->flags = 0;
+- preempt_disable();
+- current->flags &= ~PF_IO_WORKER;
+- preempt_enable();
+-
+- kfree_rcu(worker, rcu);
+- io_worker_ref_put(wqe->wq);
+- do_exit(0);
+-}
+-
+-static inline bool io_acct_run_queue(struct io_wqe_acct *acct)
+-{
+- bool ret = false;
+-
+- raw_spin_lock(&acct->lock);
+- if (!wq_list_empty(&acct->work_list) &&
+- !test_bit(IO_ACCT_STALLED_BIT, &acct->flags))
+- ret = true;
+- raw_spin_unlock(&acct->lock);
+-
+- return ret;
+-}
+-
+-/*
+- * Check head of free list for an available worker. If one isn't available,
+- * caller must create one.
+- */
+-static bool io_wqe_activate_free_worker(struct io_wqe *wqe,
+- struct io_wqe_acct *acct)
+- __must_hold(RCU)
+-{
+- struct hlist_nulls_node *n;
+- struct io_worker *worker;
+-
+- /*
+- * Iterate free_list and see if we can find an idle worker to
+- * activate. If a given worker is on the free_list but in the process
+- * of exiting, keep trying.
+- */
+- hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) {
+- if (!io_worker_get(worker))
+- continue;
+- if (io_wqe_get_acct(worker) != acct) {
+- io_worker_release(worker);
+- continue;
+- }
+- if (wake_up_process(worker->task)) {
+- io_worker_release(worker);
+- return true;
+- }
+- io_worker_release(worker);
+- }
+-
+- return false;
+-}
+-
+-/*
+- * We need a worker. If we find a free one, we're good. If not, and we're
+- * below the max number of workers, create one.
+- */
+-static bool io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+-{
+- /*
+- * Most likely an attempt to queue unbounded work on an io_wq that
+- * wasn't setup with any unbounded workers.
+- */
+- if (unlikely(!acct->max_workers))
+- pr_warn_once("io-wq is not configured for unbound workers");
+-
+- raw_spin_lock(&wqe->lock);
+- if (acct->nr_workers >= acct->max_workers) {
+- raw_spin_unlock(&wqe->lock);
+- return true;
+- }
+- acct->nr_workers++;
+- raw_spin_unlock(&wqe->lock);
+- atomic_inc(&acct->nr_running);
+- atomic_inc(&wqe->wq->worker_refs);
+- return create_io_worker(wqe->wq, wqe, acct->index);
+-}
+-
+-static void io_wqe_inc_running(struct io_worker *worker)
+-{
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+-
+- atomic_inc(&acct->nr_running);
+-}
+-
+-static void create_worker_cb(struct callback_head *cb)
+-{
+- struct io_worker *worker;
+- struct io_wq *wq;
+- struct io_wqe *wqe;
+- struct io_wqe_acct *acct;
+- bool do_create = false;
+-
+- worker = container_of(cb, struct io_worker, create_work);
+- wqe = worker->wqe;
+- wq = wqe->wq;
+- acct = &wqe->acct[worker->create_index];
+- raw_spin_lock(&wqe->lock);
+- if (acct->nr_workers < acct->max_workers) {
+- acct->nr_workers++;
+- do_create = true;
+- }
+- raw_spin_unlock(&wqe->lock);
+- if (do_create) {
+- create_io_worker(wq, wqe, worker->create_index);
+- } else {
+- atomic_dec(&acct->nr_running);
+- io_worker_ref_put(wq);
+- }
+- clear_bit_unlock(0, &worker->create_state);
+- io_worker_release(worker);
+-}
+-
+-static bool io_queue_worker_create(struct io_worker *worker,
+- struct io_wqe_acct *acct,
+- task_work_func_t func)
+-{
+- struct io_wqe *wqe = worker->wqe;
+- struct io_wq *wq = wqe->wq;
+-
+- /* raced with exit, just ignore create call */
+- if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
+- goto fail;
+- if (!io_worker_get(worker))
+- goto fail;
+- /*
+- * create_state manages ownership of create_work/index. We should
+- * only need one entry per worker, as the worker going to sleep
+- * will trigger the condition, and waking will clear it once it
+- * runs the task_work.
+- */
+- if (test_bit(0, &worker->create_state) ||
+- test_and_set_bit_lock(0, &worker->create_state))
+- goto fail_release;
+-
+- atomic_inc(&wq->worker_refs);
+- init_task_work(&worker->create_work, func);
+- worker->create_index = acct->index;
+- if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL)) {
+- /*
+- * EXIT may have been set after checking it above, check after
+- * adding the task_work and remove any creation item if it is
+- * now set. wq exit does that too, but we can have added this
+- * work item after we canceled in io_wq_exit_workers().
+- */
+- if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
+- io_wq_cancel_tw_create(wq);
+- io_worker_ref_put(wq);
+- return true;
+- }
+- io_worker_ref_put(wq);
+- clear_bit_unlock(0, &worker->create_state);
+-fail_release:
+- io_worker_release(worker);
+-fail:
+- atomic_dec(&acct->nr_running);
+- io_worker_ref_put(wq);
+- return false;
+-}
+-
+-static void io_wqe_dec_running(struct io_worker *worker)
+-{
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+- struct io_wqe *wqe = worker->wqe;
+-
+- if (!(worker->flags & IO_WORKER_F_UP))
+- return;
+-
+- if (!atomic_dec_and_test(&acct->nr_running))
+- return;
+- if (!io_acct_run_queue(acct))
+- return;
+-
+- atomic_inc(&acct->nr_running);
+- atomic_inc(&wqe->wq->worker_refs);
+- io_queue_worker_create(worker, acct, create_worker_cb);
+-}
+-
+-/*
+- * Worker will start processing some work. Move it to the busy list, if
+- * it's currently on the freelist
+- */
+-static void __io_worker_busy(struct io_wqe *wqe, struct io_worker *worker)
+-{
+- if (worker->flags & IO_WORKER_F_FREE) {
+- worker->flags &= ~IO_WORKER_F_FREE;
+- raw_spin_lock(&wqe->lock);
+- hlist_nulls_del_init_rcu(&worker->nulls_node);
+- raw_spin_unlock(&wqe->lock);
+- }
+-}
+-
+-/*
+- * No work, worker going to sleep. Move to freelist, and unuse mm if we
+- * have one attached. Dropping the mm may potentially sleep, so we drop
+- * the lock in that case and return success. Since the caller has to
+- * retry the loop in that case (we changed task state), we don't regrab
+- * the lock if we return success.
+- */
+-static void __io_worker_idle(struct io_wqe *wqe, struct io_worker *worker)
+- __must_hold(wqe->lock)
+-{
+- if (!(worker->flags & IO_WORKER_F_FREE)) {
+- worker->flags |= IO_WORKER_F_FREE;
+- hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
+- }
+-}
+-
+-static inline unsigned int io_get_work_hash(struct io_wq_work *work)
+-{
+- return work->flags >> IO_WQ_HASH_SHIFT;
+-}
+-
+-static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
+-{
+- struct io_wq *wq = wqe->wq;
+- bool ret = false;
+-
+- spin_lock_irq(&wq->hash->wait.lock);
+- if (list_empty(&wqe->wait.entry)) {
+- __add_wait_queue(&wq->hash->wait, &wqe->wait);
+- if (!test_bit(hash, &wq->hash->map)) {
+- __set_current_state(TASK_RUNNING);
+- list_del_init(&wqe->wait.entry);
+- ret = true;
+- }
+- }
+- spin_unlock_irq(&wq->hash->wait.lock);
+- return ret;
+-}
+-
+-static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
+- struct io_worker *worker)
+- __must_hold(acct->lock)
+-{
+- struct io_wq_work_node *node, *prev;
+- struct io_wq_work *work, *tail;
+- unsigned int stall_hash = -1U;
+- struct io_wqe *wqe = worker->wqe;
+-
+- wq_list_for_each(node, prev, &acct->work_list) {
+- unsigned int hash;
+-
+- work = container_of(node, struct io_wq_work, list);
+-
+- /* not hashed, can run anytime */
+- if (!io_wq_is_hashed(work)) {
+- wq_list_del(&acct->work_list, node, prev);
+- return work;
+- }
+-
+- hash = io_get_work_hash(work);
+- /* all items with this hash lie in [work, tail] */
+- tail = wqe->hash_tail[hash];
+-
+- /* hashed, can run if not already running */
+- if (!test_and_set_bit(hash, &wqe->wq->hash->map)) {
+- wqe->hash_tail[hash] = NULL;
+- wq_list_cut(&acct->work_list, &tail->list, prev);
+- return work;
+- }
+- if (stall_hash == -1U)
+- stall_hash = hash;
+- /* fast forward to a next hash, for-each will fix up @prev */
+- node = &tail->list;
+- }
+-
+- if (stall_hash != -1U) {
+- bool unstalled;
+-
+- /*
+- * Set this before dropping the lock to avoid racing with new
+- * work being added and clearing the stalled bit.
+- */
+- set_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+- raw_spin_unlock(&acct->lock);
+- unstalled = io_wait_on_hash(wqe, stall_hash);
+- raw_spin_lock(&acct->lock);
+- if (unstalled) {
+- clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+- if (wq_has_sleeper(&wqe->wq->hash->wait))
+- wake_up(&wqe->wq->hash->wait);
+- }
+- }
+-
+- return NULL;
+-}
+-
+-static bool io_flush_signals(void)
+-{
+- if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
+- __set_current_state(TASK_RUNNING);
+- clear_notify_signal();
+- if (task_work_pending(current))
+- task_work_run();
+- return true;
+- }
+- return false;
+-}
+-
+-static void io_assign_current_work(struct io_worker *worker,
+- struct io_wq_work *work)
+-{
+- if (work) {
+- io_flush_signals();
+- cond_resched();
+- }
+-
+- raw_spin_lock(&worker->lock);
+- worker->cur_work = work;
+- worker->next_work = NULL;
+- raw_spin_unlock(&worker->lock);
+-}
+-
+-static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work);
+-
+-static void io_worker_handle_work(struct io_worker *worker)
+-{
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+- struct io_wqe *wqe = worker->wqe;
+- struct io_wq *wq = wqe->wq;
+- bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
+-
+- do {
+- struct io_wq_work *work;
+-
+- /*
+- * If we got some work, mark us as busy. If we didn't, but
+- * the list isn't empty, it means we stalled on hashed work.
+- * Mark us stalled so we don't keep looking for work when we
+- * can't make progress, any work completion or insertion will
+- * clear the stalled flag.
+- */
+- raw_spin_lock(&acct->lock);
+- work = io_get_next_work(acct, worker);
+- raw_spin_unlock(&acct->lock);
+- if (work) {
+- __io_worker_busy(wqe, worker);
+-
+- /*
+- * Make sure cancelation can find this, even before
+- * it becomes the active work. That avoids a window
+- * where the work has been removed from our general
+- * work list, but isn't yet discoverable as the
+- * current work item for this worker.
+- */
+- raw_spin_lock(&worker->lock);
+- worker->next_work = work;
+- raw_spin_unlock(&worker->lock);
+- } else {
+- break;
+- }
+- io_assign_current_work(worker, work);
+- __set_current_state(TASK_RUNNING);
+-
+- /* handle a whole dependent link */
+- do {
+- struct io_wq_work *next_hashed, *linked;
+- unsigned int hash = io_get_work_hash(work);
+-
+- next_hashed = wq_next_work(work);
+-
+- if (unlikely(do_kill) && (work->flags & IO_WQ_WORK_UNBOUND))
+- work->flags |= IO_WQ_WORK_CANCEL;
+- wq->do_work(work);
+- io_assign_current_work(worker, NULL);
+-
+- linked = wq->free_work(work);
+- work = next_hashed;
+- if (!work && linked && !io_wq_is_hashed(linked)) {
+- work = linked;
+- linked = NULL;
+- }
+- io_assign_current_work(worker, work);
+- if (linked)
+- io_wqe_enqueue(wqe, linked);
+-
+- if (hash != -1U && !next_hashed) {
+- /* serialize hash clear with wake_up() */
+- spin_lock_irq(&wq->hash->wait.lock);
+- clear_bit(hash, &wq->hash->map);
+- clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+- spin_unlock_irq(&wq->hash->wait.lock);
+- if (wq_has_sleeper(&wq->hash->wait))
+- wake_up(&wq->hash->wait);
+- }
+- } while (work);
+- } while (1);
+-}
+-
+-static int io_wqe_worker(void *data)
+-{
+- struct io_worker *worker = data;
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+- struct io_wqe *wqe = worker->wqe;
+- struct io_wq *wq = wqe->wq;
+- bool last_timeout = false;
+- char buf[TASK_COMM_LEN];
+-
+- worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
+-
+- snprintf(buf, sizeof(buf), "iou-wrk-%d", wq->task->pid);
+- set_task_comm(current, buf);
+-
+- audit_alloc_kernel(current);
+-
+- while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
+- long ret;
+-
+- set_current_state(TASK_INTERRUPTIBLE);
+- while (io_acct_run_queue(acct))
+- io_worker_handle_work(worker);
+-
+- raw_spin_lock(&wqe->lock);
+- /* timed out, exit unless we're the last worker */
+- if (last_timeout && acct->nr_workers > 1) {
+- acct->nr_workers--;
+- raw_spin_unlock(&wqe->lock);
+- __set_current_state(TASK_RUNNING);
+- break;
+- }
+- last_timeout = false;
+- __io_worker_idle(wqe, worker);
+- raw_spin_unlock(&wqe->lock);
+- if (io_flush_signals())
+- continue;
+- ret = schedule_timeout(WORKER_IDLE_TIMEOUT);
+- if (signal_pending(current)) {
+- struct ksignal ksig;
+-
+- if (!get_signal(&ksig))
+- continue;
+- break;
+- }
+- last_timeout = !ret;
+- }
+-
+- if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
+- io_worker_handle_work(worker);
+-
+- audit_free(current);
+- io_worker_exit(worker);
+- return 0;
+-}
+-
+-/*
+- * Called when a worker is scheduled in. Mark us as currently running.
+- */
+-void io_wq_worker_running(struct task_struct *tsk)
+-{
+- struct io_worker *worker = tsk->worker_private;
+-
+- if (!worker)
+- return;
+- if (!(worker->flags & IO_WORKER_F_UP))
+- return;
+- if (worker->flags & IO_WORKER_F_RUNNING)
+- return;
+- worker->flags |= IO_WORKER_F_RUNNING;
+- io_wqe_inc_running(worker);
+-}
+-
+-/*
+- * Called when worker is going to sleep. If there are no workers currently
+- * running and we have work pending, wake up a free one or create a new one.
+- */
+-void io_wq_worker_sleeping(struct task_struct *tsk)
+-{
+- struct io_worker *worker = tsk->worker_private;
+-
+- if (!worker)
+- return;
+- if (!(worker->flags & IO_WORKER_F_UP))
+- return;
+- if (!(worker->flags & IO_WORKER_F_RUNNING))
+- return;
+-
+- worker->flags &= ~IO_WORKER_F_RUNNING;
+- io_wqe_dec_running(worker);
+-}
+-
+-static void io_init_new_worker(struct io_wqe *wqe, struct io_worker *worker,
+- struct task_struct *tsk)
+-{
+- tsk->worker_private = worker;
+- worker->task = tsk;
+- set_cpus_allowed_ptr(tsk, wqe->cpu_mask);
+- tsk->flags |= PF_NO_SETAFFINITY;
+-
+- raw_spin_lock(&wqe->lock);
+- hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
+- list_add_tail_rcu(&worker->all_list, &wqe->all_list);
+- worker->flags |= IO_WORKER_F_FREE;
+- raw_spin_unlock(&wqe->lock);
+- wake_up_new_task(tsk);
+-}
+-
+-static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
+-{
+- return true;
+-}
+-
+-static inline bool io_should_retry_thread(long err)
+-{
+- /*
+- * Prevent perpetual task_work retry, if the task (or its group) is
+- * exiting.
+- */
+- if (fatal_signal_pending(current))
+- return false;
+-
+- switch (err) {
+- case -EAGAIN:
+- case -ERESTARTSYS:
+- case -ERESTARTNOINTR:
+- case -ERESTARTNOHAND:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+-static void create_worker_cont(struct callback_head *cb)
+-{
+- struct io_worker *worker;
+- struct task_struct *tsk;
+- struct io_wqe *wqe;
+-
+- worker = container_of(cb, struct io_worker, create_work);
+- clear_bit_unlock(0, &worker->create_state);
+- wqe = worker->wqe;
+- tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
+- if (!IS_ERR(tsk)) {
+- io_init_new_worker(wqe, worker, tsk);
+- io_worker_release(worker);
+- return;
+- } else if (!io_should_retry_thread(PTR_ERR(tsk))) {
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+-
+- atomic_dec(&acct->nr_running);
+- raw_spin_lock(&wqe->lock);
+- acct->nr_workers--;
+- if (!acct->nr_workers) {
+- struct io_cb_cancel_data match = {
+- .fn = io_wq_work_match_all,
+- .cancel_all = true,
+- };
+-
+- raw_spin_unlock(&wqe->lock);
+- while (io_acct_cancel_pending_work(wqe, acct, &match))
+- ;
+- } else {
+- raw_spin_unlock(&wqe->lock);
+- }
+- io_worker_ref_put(wqe->wq);
+- kfree(worker);
+- return;
+- }
+-
+- /* re-create attempts grab a new worker ref, drop the existing one */
+- io_worker_release(worker);
+- schedule_work(&worker->work);
+-}
+-
+-static void io_workqueue_create(struct work_struct *work)
+-{
+- struct io_worker *worker = container_of(work, struct io_worker, work);
+- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+-
+- if (!io_queue_worker_create(worker, acct, create_worker_cont))
+- kfree(worker);
+-}
+-
+-static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+-{
+- struct io_wqe_acct *acct = &wqe->acct[index];
+- struct io_worker *worker;
+- struct task_struct *tsk;
+-
+- __set_current_state(TASK_RUNNING);
+-
+- worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, wqe->node);
+- if (!worker) {
+-fail:
+- atomic_dec(&acct->nr_running);
+- raw_spin_lock(&wqe->lock);
+- acct->nr_workers--;
+- raw_spin_unlock(&wqe->lock);
+- io_worker_ref_put(wq);
+- return false;
+- }
+-
+- refcount_set(&worker->ref, 1);
+- worker->wqe = wqe;
+- raw_spin_lock_init(&worker->lock);
+- init_completion(&worker->ref_done);
+-
+- if (index == IO_WQ_ACCT_BOUND)
+- worker->flags |= IO_WORKER_F_BOUND;
+-
+- tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
+- if (!IS_ERR(tsk)) {
+- io_init_new_worker(wqe, worker, tsk);
+- } else if (!io_should_retry_thread(PTR_ERR(tsk))) {
+- kfree(worker);
+- goto fail;
+- } else {
+- INIT_WORK(&worker->work, io_workqueue_create);
+- schedule_work(&worker->work);
+- }
+-
+- return true;
+-}
+-
+-/*
+- * Iterate the passed in list and call the specific function for each
+- * worker that isn't exiting
+- */
+-static bool io_wq_for_each_worker(struct io_wqe *wqe,
+- bool (*func)(struct io_worker *, void *),
+- void *data)
+-{
+- struct io_worker *worker;
+- bool ret = false;
+-
+- list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
+- if (io_worker_get(worker)) {
+- /* no task if node is/was offline */
+- if (worker->task)
+- ret = func(worker, data);
+- io_worker_release(worker);
+- if (ret)
+- break;
+- }
+- }
+-
+- return ret;
+-}
+-
+-static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+-{
+- __set_notify_signal(worker->task);
+- wake_up_process(worker->task);
+- return false;
+-}
+-
+-static void io_run_cancel(struct io_wq_work *work, struct io_wqe *wqe)
+-{
+- struct io_wq *wq = wqe->wq;
+-
+- do {
+- work->flags |= IO_WQ_WORK_CANCEL;
+- wq->do_work(work);
+- work = wq->free_work(work);
+- } while (work);
+-}
+-
+-static void io_wqe_insert_work(struct io_wqe *wqe, struct io_wq_work *work)
+-{
+- struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+- unsigned int hash;
+- struct io_wq_work *tail;
+-
+- if (!io_wq_is_hashed(work)) {
+-append:
+- wq_list_add_tail(&work->list, &acct->work_list);
+- return;
+- }
+-
+- hash = io_get_work_hash(work);
+- tail = wqe->hash_tail[hash];
+- wqe->hash_tail[hash] = work;
+- if (!tail)
+- goto append;
+-
+- wq_list_add_after(&work->list, &tail->list, &acct->work_list);
+-}
+-
+-static bool io_wq_work_match_item(struct io_wq_work *work, void *data)
+-{
+- return work == data;
+-}
+-
+-static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+-{
+- struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+- struct io_cb_cancel_data match;
+- unsigned work_flags = work->flags;
+- bool do_create;
+-
+- /*
+- * If io-wq is exiting for this task, or if the request has explicitly
+- * been marked as one that should not get executed, cancel it here.
+- */
+- if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
+- (work->flags & IO_WQ_WORK_CANCEL)) {
+- io_run_cancel(work, wqe);
+- return;
+- }
+-
+- raw_spin_lock(&acct->lock);
+- io_wqe_insert_work(wqe, work);
+- clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+- raw_spin_unlock(&acct->lock);
+-
+- raw_spin_lock(&wqe->lock);
+- rcu_read_lock();
+- do_create = !io_wqe_activate_free_worker(wqe, acct);
+- rcu_read_unlock();
+-
+- raw_spin_unlock(&wqe->lock);
+-
+- if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
+- !atomic_read(&acct->nr_running))) {
+- bool did_create;
+-
+- did_create = io_wqe_create_worker(wqe, acct);
+- if (likely(did_create))
+- return;
+-
+- raw_spin_lock(&wqe->lock);
+- if (acct->nr_workers) {
+- raw_spin_unlock(&wqe->lock);
+- return;
+- }
+- raw_spin_unlock(&wqe->lock);
+-
+- /* fatal condition, failed to create the first worker */
+- match.fn = io_wq_work_match_item,
+- match.data = work,
+- match.cancel_all = false,
+-
+- io_acct_cancel_pending_work(wqe, acct, &match);
+- }
+-}
+-
+-void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
+-{
+- struct io_wqe *wqe = wq->wqes[numa_node_id()];
+-
+- io_wqe_enqueue(wqe, work);
+-}
+-
+-/*
+- * Work items that hash to the same value will not be done in parallel.
+- * Used to limit concurrent writes, generally hashed by inode.
+- */
+-void io_wq_hash_work(struct io_wq_work *work, void *val)
+-{
+- unsigned int bit;
+-
+- bit = hash_ptr(val, IO_WQ_HASH_ORDER);
+- work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
+-}
+-
+-static bool __io_wq_worker_cancel(struct io_worker *worker,
+- struct io_cb_cancel_data *match,
+- struct io_wq_work *work)
+-{
+- if (work && match->fn(work, match->data)) {
+- work->flags |= IO_WQ_WORK_CANCEL;
+- __set_notify_signal(worker->task);
+- return true;
+- }
+-
+- return false;
+-}
+-
+-static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+-{
+- struct io_cb_cancel_data *match = data;
+-
+- /*
+- * Hold the lock to avoid ->cur_work going out of scope, caller
+- * may dereference the passed in work.
+- */
+- raw_spin_lock(&worker->lock);
+- if (__io_wq_worker_cancel(worker, match, worker->cur_work) ||
+- __io_wq_worker_cancel(worker, match, worker->next_work))
+- match->nr_running++;
+- raw_spin_unlock(&worker->lock);
+-
+- return match->nr_running && !match->cancel_all;
+-}
+-
+-static inline void io_wqe_remove_pending(struct io_wqe *wqe,
+- struct io_wq_work *work,
+- struct io_wq_work_node *prev)
+-{
+- struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+- unsigned int hash = io_get_work_hash(work);
+- struct io_wq_work *prev_work = NULL;
+-
+- if (io_wq_is_hashed(work) && work == wqe->hash_tail[hash]) {
+- if (prev)
+- prev_work = container_of(prev, struct io_wq_work, list);
+- if (prev_work && io_get_work_hash(prev_work) == hash)
+- wqe->hash_tail[hash] = prev_work;
+- else
+- wqe->hash_tail[hash] = NULL;
+- }
+- wq_list_del(&acct->work_list, &work->list, prev);
+-}
+-
+-static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
+- struct io_wqe_acct *acct,
+- struct io_cb_cancel_data *match)
+-{
+- struct io_wq_work_node *node, *prev;
+- struct io_wq_work *work;
+-
+- raw_spin_lock(&acct->lock);
+- wq_list_for_each(node, prev, &acct->work_list) {
+- work = container_of(node, struct io_wq_work, list);
+- if (!match->fn(work, match->data))
+- continue;
+- io_wqe_remove_pending(wqe, work, prev);
+- raw_spin_unlock(&acct->lock);
+- io_run_cancel(work, wqe);
+- match->nr_pending++;
+- /* not safe to continue after unlock */
+- return true;
+- }
+- raw_spin_unlock(&acct->lock);
+-
+- return false;
+-}
+-
+-static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
+- struct io_cb_cancel_data *match)
+-{
+- int i;
+-retry:
+- for (i = 0; i < IO_WQ_ACCT_NR; i++) {
+- struct io_wqe_acct *acct = io_get_acct(wqe, i == 0);
+-
+- if (io_acct_cancel_pending_work(wqe, acct, match)) {
+- if (match->cancel_all)
+- goto retry;
+- break;
+- }
+- }
+-}
+-
+-static void io_wqe_cancel_running_work(struct io_wqe *wqe,
+- struct io_cb_cancel_data *match)
+-{
+- rcu_read_lock();
+- io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
+- rcu_read_unlock();
+-}
+-
+-enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+- void *data, bool cancel_all)
+-{
+- struct io_cb_cancel_data match = {
+- .fn = cancel,
+- .data = data,
+- .cancel_all = cancel_all,
+- };
+- int node;
+-
+- /*
+- * First check pending list, if we're lucky we can just remove it
+- * from there. CANCEL_OK means that the work is returned as-new,
+- * no completion will be posted for it.
+- *
+- * Then check if a free (going busy) or busy worker has the work
+- * currently running. If we find it there, we'll return CANCEL_RUNNING
+- * as an indication that we attempt to signal cancellation. The
+- * completion will run normally in this case.
+- *
+- * Do both of these while holding the wqe->lock, to ensure that
+- * we'll find a work item regardless of state.
+- */
+- for_each_node(node) {
+- struct io_wqe *wqe = wq->wqes[node];
+-
+- io_wqe_cancel_pending_work(wqe, &match);
+- if (match.nr_pending && !match.cancel_all)
+- return IO_WQ_CANCEL_OK;
+-
+- raw_spin_lock(&wqe->lock);
+- io_wqe_cancel_running_work(wqe, &match);
+- raw_spin_unlock(&wqe->lock);
+- if (match.nr_running && !match.cancel_all)
+- return IO_WQ_CANCEL_RUNNING;
+- }
+-
+- if (match.nr_running)
+- return IO_WQ_CANCEL_RUNNING;
+- if (match.nr_pending)
+- return IO_WQ_CANCEL_OK;
+- return IO_WQ_CANCEL_NOTFOUND;
+-}
+-
+-static int io_wqe_hash_wake(struct wait_queue_entry *wait, unsigned mode,
+- int sync, void *key)
+-{
+- struct io_wqe *wqe = container_of(wait, struct io_wqe, wait);
+- int i;
+-
+- list_del_init(&wait->entry);
+-
+- rcu_read_lock();
+- for (i = 0; i < IO_WQ_ACCT_NR; i++) {
+- struct io_wqe_acct *acct = &wqe->acct[i];
+-
+- if (test_and_clear_bit(IO_ACCT_STALLED_BIT, &acct->flags))
+- io_wqe_activate_free_worker(wqe, acct);
+- }
+- rcu_read_unlock();
+- return 1;
+-}
+-
+-struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+-{
+- int ret, node, i;
+- struct io_wq *wq;
+-
+- if (WARN_ON_ONCE(!data->free_work || !data->do_work))
+- return ERR_PTR(-EINVAL);
+- if (WARN_ON_ONCE(!bounded))
+- return ERR_PTR(-EINVAL);
+-
+- wq = kzalloc(struct_size(wq, wqes, nr_node_ids), GFP_KERNEL);
+- if (!wq)
+- return ERR_PTR(-ENOMEM);
+- ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+- if (ret)
+- goto err_wq;
+-
+- refcount_inc(&data->hash->refs);
+- wq->hash = data->hash;
+- wq->free_work = data->free_work;
+- wq->do_work = data->do_work;
+-
+- ret = -ENOMEM;
+- for_each_node(node) {
+- struct io_wqe *wqe;
+- int alloc_node = node;
+-
+- if (!node_online(alloc_node))
+- alloc_node = NUMA_NO_NODE;
+- wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
+- if (!wqe)
+- goto err;
+- if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL))
+- goto err;
+- cpumask_copy(wqe->cpu_mask, cpumask_of_node(node));
+- wq->wqes[node] = wqe;
+- wqe->node = alloc_node;
+- wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
+- wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
+- task_rlimit(current, RLIMIT_NPROC);
+- INIT_LIST_HEAD(&wqe->wait.entry);
+- wqe->wait.func = io_wqe_hash_wake;
+- for (i = 0; i < IO_WQ_ACCT_NR; i++) {
+- struct io_wqe_acct *acct = &wqe->acct[i];
+-
+- acct->index = i;
+- atomic_set(&acct->nr_running, 0);
+- INIT_WQ_LIST(&acct->work_list);
+- raw_spin_lock_init(&acct->lock);
+- }
+- wqe->wq = wq;
+- raw_spin_lock_init(&wqe->lock);
+- INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
+- INIT_LIST_HEAD(&wqe->all_list);
+- }
+-
+- wq->task = get_task_struct(data->task);
+- atomic_set(&wq->worker_refs, 1);
+- init_completion(&wq->worker_done);
+- return wq;
+-err:
+- io_wq_put_hash(data->hash);
+- cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+- for_each_node(node) {
+- if (!wq->wqes[node])
+- continue;
+- free_cpumask_var(wq->wqes[node]->cpu_mask);
+- kfree(wq->wqes[node]);
+- }
+-err_wq:
+- kfree(wq);
+- return ERR_PTR(ret);
+-}
+-
+-static bool io_task_work_match(struct callback_head *cb, void *data)
+-{
+- struct io_worker *worker;
+-
+- if (cb->func != create_worker_cb && cb->func != create_worker_cont)
+- return false;
+- worker = container_of(cb, struct io_worker, create_work);
+- return worker->wqe->wq == data;
+-}
+-
+-void io_wq_exit_start(struct io_wq *wq)
+-{
+- set_bit(IO_WQ_BIT_EXIT, &wq->state);
+-}
+-
+-static void io_wq_cancel_tw_create(struct io_wq *wq)
+-{
+- struct callback_head *cb;
+-
+- while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
+- struct io_worker *worker;
+-
+- worker = container_of(cb, struct io_worker, create_work);
+- io_worker_cancel_cb(worker);
+- }
+-}
+-
+-static void io_wq_exit_workers(struct io_wq *wq)
+-{
+- int node;
+-
+- if (!wq->task)
+- return;
+-
+- io_wq_cancel_tw_create(wq);
+-
+- rcu_read_lock();
+- for_each_node(node) {
+- struct io_wqe *wqe = wq->wqes[node];
+-
+- io_wq_for_each_worker(wqe, io_wq_worker_wake, NULL);
+- }
+- rcu_read_unlock();
+- io_worker_ref_put(wq);
+- wait_for_completion(&wq->worker_done);
+-
+- for_each_node(node) {
+- spin_lock_irq(&wq->hash->wait.lock);
+- list_del_init(&wq->wqes[node]->wait.entry);
+- spin_unlock_irq(&wq->hash->wait.lock);
+- }
+- put_task_struct(wq->task);
+- wq->task = NULL;
+-}
+-
+-static void io_wq_destroy(struct io_wq *wq)
+-{
+- int node;
+-
+- cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
+-
+- for_each_node(node) {
+- struct io_wqe *wqe = wq->wqes[node];
+- struct io_cb_cancel_data match = {
+- .fn = io_wq_work_match_all,
+- .cancel_all = true,
+- };
+- io_wqe_cancel_pending_work(wqe, &match);
+- free_cpumask_var(wqe->cpu_mask);
+- kfree(wqe);
+- }
+- io_wq_put_hash(wq->hash);
+- kfree(wq);
+-}
+-
+-void io_wq_put_and_exit(struct io_wq *wq)
+-{
+- WARN_ON_ONCE(!test_bit(IO_WQ_BIT_EXIT, &wq->state));
+-
+- io_wq_exit_workers(wq);
+- io_wq_destroy(wq);
+-}
+-
+-struct online_data {
+- unsigned int cpu;
+- bool online;
+-};
+-
+-static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
+-{
+- struct online_data *od = data;
+-
+- if (od->online)
+- cpumask_set_cpu(od->cpu, worker->wqe->cpu_mask);
+- else
+- cpumask_clear_cpu(od->cpu, worker->wqe->cpu_mask);
+- return false;
+-}
+-
+-static int __io_wq_cpu_online(struct io_wq *wq, unsigned int cpu, bool online)
+-{
+- struct online_data od = {
+- .cpu = cpu,
+- .online = online
+- };
+- int i;
+-
+- rcu_read_lock();
+- for_each_node(i)
+- io_wq_for_each_worker(wq->wqes[i], io_wq_worker_affinity, &od);
+- rcu_read_unlock();
+- return 0;
+-}
+-
+-static int io_wq_cpu_online(unsigned int cpu, struct hlist_node *node)
+-{
+- struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
+-
+- return __io_wq_cpu_online(wq, cpu, true);
+-}
+-
+-static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
+-{
+- struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
+-
+- return __io_wq_cpu_online(wq, cpu, false);
+-}
+-
+-int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
+-{
+- int i;
+-
+- rcu_read_lock();
+- for_each_node(i) {
+- struct io_wqe *wqe = wq->wqes[i];
+-
+- if (mask)
+- cpumask_copy(wqe->cpu_mask, mask);
+- else
+- cpumask_copy(wqe->cpu_mask, cpumask_of_node(i));
+- }
+- rcu_read_unlock();
+- return 0;
+-}
+-
+-/*
+- * Set the max number of bound/unbounded workers; the old values are
+- * returned through new_count. A zero entry leaves that limit unchanged.
+- */
+-int io_wq_max_workers(struct io_wq *wq, int *new_count)
+-{
+- int prev[IO_WQ_ACCT_NR];
+- bool first_node = true;
+- int i, node;
+-
+- BUILD_BUG_ON((int) IO_WQ_ACCT_BOUND != (int) IO_WQ_BOUND);
+- BUILD_BUG_ON((int) IO_WQ_ACCT_UNBOUND != (int) IO_WQ_UNBOUND);
+- BUILD_BUG_ON((int) IO_WQ_ACCT_NR != 2);
+-
+- for (i = 0; i < IO_WQ_ACCT_NR; i++) {
+- if (new_count[i] > task_rlimit(current, RLIMIT_NPROC))
+- new_count[i] = task_rlimit(current, RLIMIT_NPROC);
+- }
+-
+- for (i = 0; i < IO_WQ_ACCT_NR; i++)
+- prev[i] = 0;
+-
+- rcu_read_lock();
+- for_each_node(node) {
+- struct io_wqe *wqe = wq->wqes[node];
+- struct io_wqe_acct *acct;
+-
+- raw_spin_lock(&wqe->lock);
+- for (i = 0; i < IO_WQ_ACCT_NR; i++) {
+- acct = &wqe->acct[i];
+- if (first_node)
+- prev[i] = max_t(int, acct->max_workers, prev[i]);
+- if (new_count[i])
+- acct->max_workers = new_count[i];
+- }
+- raw_spin_unlock(&wqe->lock);
+- first_node = false;
+- }
+- rcu_read_unlock();
+-
+- for (i = 0; i < IO_WQ_ACCT_NR; i++)
+- new_count[i] = prev[i];
+-
+- return 0;
+-}
+-
+-static __init int io_wq_init(void)
+-{
+- int ret;
+-
+- ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "io-wq/online",
+- io_wq_cpu_online, io_wq_cpu_offline);
+- if (ret < 0)
+- return ret;
+- io_wq_online = ret;
+- return 0;
+-}
+-subsys_initcall(io_wq_init);
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+deleted file mode 100644
+index ba6eee76d028f..0000000000000
+--- a/fs/io-wq.h
++++ /dev/null
+@@ -1,228 +0,0 @@
+-#ifndef INTERNAL_IO_WQ_H
+-#define INTERNAL_IO_WQ_H
+-
+-#include <linux/refcount.h>
+-
+-struct io_wq;
+-
+-enum {
+- IO_WQ_WORK_CANCEL = 1,
+- IO_WQ_WORK_HASHED = 2,
+- IO_WQ_WORK_UNBOUND = 4,
+- IO_WQ_WORK_CONCURRENT = 16,
+-
+- IO_WQ_HASH_SHIFT = 24, /* upper 8 bits are used for hash key */
+-};
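
The hash key shares the flags word with the IO_WQ_WORK_* bits above; a minimal
packing sketch, with hypothetical helper names (io_wq_hash_work() does the
equivalent with a hashed pointer):

    static inline unsigned int pack_hash_key(unsigned int flags, unsigned int key)
    {
            /* the 8-bit key lives above IO_WQ_HASH_SHIFT, clear of the work bits */
            return flags | IO_WQ_WORK_HASHED | (key << IO_WQ_HASH_SHIFT);
    }

    static inline unsigned int unpack_hash_key(unsigned int flags)
    {
            return flags >> IO_WQ_HASH_SHIFT;       /* recover the 8-bit key */
    }
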
+-
+-enum io_wq_cancel {
+- IO_WQ_CANCEL_OK, /* cancelled before started */
+-	IO_WQ_CANCEL_RUNNING,	/* found and running; cancellation was attempted */
+- IO_WQ_CANCEL_NOTFOUND, /* work not found */
+-};
+-
+-struct io_wq_work_node {
+- struct io_wq_work_node *next;
+-};
+-
+-struct io_wq_work_list {
+- struct io_wq_work_node *first;
+- struct io_wq_work_node *last;
+-};
+-
+-#define wq_list_for_each(pos, prv, head) \
+- for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
+-
+-#define wq_list_for_each_resume(pos, prv) \
+- for (; pos; prv = pos, pos = (pos)->next)
+-
+-#define wq_list_empty(list) (READ_ONCE((list)->first) == NULL)
+-#define INIT_WQ_LIST(list) do { \
+- (list)->first = NULL; \
+-} while (0)
+-
+-static inline void wq_list_add_after(struct io_wq_work_node *node,
+- struct io_wq_work_node *pos,
+- struct io_wq_work_list *list)
+-{
+- struct io_wq_work_node *next = pos->next;
+-
+- pos->next = node;
+- node->next = next;
+- if (!next)
+- list->last = node;
+-}
+-
+-/**
+- * wq_list_merge - merge the second list to the first one.
+- * @list0: the first list
+- * @list1: the second list
+- * Return the first node after the merge.
+- */
+-static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
+- struct io_wq_work_list *list1)
+-{
+- struct io_wq_work_node *ret;
+-
+- if (!list0->first) {
+- ret = list1->first;
+- } else {
+- ret = list0->first;
+- list0->last->next = list1->first;
+- }
+- INIT_WQ_LIST(list0);
+- INIT_WQ_LIST(list1);
+- return ret;
+-}
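
A standalone illustration of the merge semantics, not part of the patch
(wq_list_add_tail is defined just below):

    struct io_wq_work_node n1, n2;
    struct io_wq_work_list a, b;
    struct io_wq_work_node *head;

    INIT_WQ_LIST(&a);
    INIT_WQ_LIST(&b);
    wq_list_add_tail(&n1, &a);
    wq_list_add_tail(&n2, &b);
    head = wq_list_merge(&a, &b);
    /* head == &n1, n1.next == &n2: one chain, and both inputs are reset */
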
+-
+-static inline void wq_list_add_tail(struct io_wq_work_node *node,
+- struct io_wq_work_list *list)
+-{
+- node->next = NULL;
+- if (!list->first) {
+- list->last = node;
+- WRITE_ONCE(list->first, node);
+- } else {
+- list->last->next = node;
+- list->last = node;
+- }
+-}
+-
+-static inline void wq_list_add_head(struct io_wq_work_node *node,
+- struct io_wq_work_list *list)
+-{
+- node->next = list->first;
+- if (!node->next)
+- list->last = node;
+- WRITE_ONCE(list->first, node);
+-}
+-
+-static inline void wq_list_cut(struct io_wq_work_list *list,
+- struct io_wq_work_node *last,
+- struct io_wq_work_node *prev)
+-{
+- /* first in the list, if prev==NULL */
+- if (!prev)
+- WRITE_ONCE(list->first, last->next);
+- else
+- prev->next = last->next;
+-
+- if (last == list->last)
+- list->last = prev;
+- last->next = NULL;
+-}
+-
+-static inline void __wq_list_splice(struct io_wq_work_list *list,
+- struct io_wq_work_node *to)
+-{
+- list->last->next = to->next;
+- to->next = list->first;
+- INIT_WQ_LIST(list);
+-}
+-
+-static inline bool wq_list_splice(struct io_wq_work_list *list,
+- struct io_wq_work_node *to)
+-{
+- if (!wq_list_empty(list)) {
+- __wq_list_splice(list, to);
+- return true;
+- }
+- return false;
+-}
+-
+-static inline void wq_stack_add_head(struct io_wq_work_node *node,
+- struct io_wq_work_node *stack)
+-{
+- node->next = stack->next;
+- stack->next = node;
+-}
+-
+-static inline void wq_list_del(struct io_wq_work_list *list,
+- struct io_wq_work_node *node,
+- struct io_wq_work_node *prev)
+-{
+- wq_list_cut(list, node, prev);
+-}
+-
+-static inline
+-struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
+-{
+- struct io_wq_work_node *node = stack->next;
+-
+- stack->next = node->next;
+- return node;
+-}
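
These two stack helpers form a simple intrusive LIFO (the request cache in
io_uring.c pushes free requests with wq_stack_add_head()). A free-standing
userspace sketch, with assert() used purely for illustration:

    struct io_wq_work_node stack = { .next = NULL };
    struct io_wq_work_node n1, n2;

    wq_stack_add_head(&n1, &stack);
    wq_stack_add_head(&n2, &stack);
    assert(wq_stack_extract(&stack) == &n2);   /* LIFO: last pushed, first out */
    assert(wq_stack_extract(&stack) == &n1);
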
+-
+-struct io_wq_work {
+- struct io_wq_work_node list;
+- unsigned flags;
+- int cancel_seq;
+-};
+-
+-static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
+-{
+- if (!work->list.next)
+- return NULL;
+-
+- return container_of(work->list.next, struct io_wq_work, list);
+-}
+-
+-typedef struct io_wq_work *(free_work_fn)(struct io_wq_work *);
+-typedef void (io_wq_work_fn)(struct io_wq_work *);
+-
+-struct io_wq_hash {
+- refcount_t refs;
+- unsigned long map;
+- struct wait_queue_head wait;
+-};
+-
+-static inline void io_wq_put_hash(struct io_wq_hash *hash)
+-{
+- if (refcount_dec_and_test(&hash->refs))
+- kfree(hash);
+-}
+-
+-struct io_wq_data {
+- struct io_wq_hash *hash;
+- struct task_struct *task;
+- io_wq_work_fn *do_work;
+- free_work_fn *free_work;
+-};
+-
+-struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
+-void io_wq_exit_start(struct io_wq *wq);
+-void io_wq_put_and_exit(struct io_wq *wq);
+-
+-void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
+-void io_wq_hash_work(struct io_wq_work *work, void *val);
+-
+-int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
+-int io_wq_max_workers(struct io_wq *wq, int *new_count);
+-
+-static inline bool io_wq_is_hashed(struct io_wq_work *work)
+-{
+- return work->flags & IO_WQ_WORK_HASHED;
+-}
+-
+-typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
+-
+-enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+- void *data, bool cancel_all);
+-
+-#if defined(CONFIG_IO_WQ)
+-extern void io_wq_worker_sleeping(struct task_struct *);
+-extern void io_wq_worker_running(struct task_struct *);
+-#else
+-static inline void io_wq_worker_sleeping(struct task_struct *tsk)
+-{
+-}
+-static inline void io_wq_worker_running(struct task_struct *tsk)
+-{
+-}
+-#endif
+-
+-static inline bool io_wq_current_is_worker(void)
+-{
+- return in_task() && (current->flags & PF_IO_WORKER) &&
+- current->worker_private;
+-}
+-#endif
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+deleted file mode 100644
+index e8e769be9ed05..0000000000000
+--- a/fs/io_uring.c
++++ /dev/null
+@@ -1,13273 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Shared application/kernel submission and completion ring pairs, for
+- * supporting fast/efficient IO.
+- *
+- * A note on the read/write ordering memory barriers that are matched between
+- * the application and kernel side.
+- *
+- * After the application reads the CQ ring tail, it must use an
+- * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
+- * before writing the tail (using smp_load_acquire to read the tail will
+- * do). It also needs a smp_mb() before updating CQ head (ordering the
+- * entry load(s) with the head store), pairing with an implicit barrier
+- * through a control-dependency in io_get_cqe (smp_store_release to
+- * store head will do). Failure to do so could lead to reading invalid
+- * CQ entries.
+- *
+- * Likewise, the application must use an appropriate smp_wmb() before
+- * writing the SQ tail (ordering SQ entry stores with the tail store),
+- * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
+- * to store the tail will do). And it needs a barrier ordering the SQ
+- * head load before writing new SQ entries (smp_load_acquire to read
+- * head will do).
+- *
+- * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
+- * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
+- * updating the SQ tail; a full memory barrier smp_mb() is needed
+- * between.
+- *
+- * Also see the examples in the liburing library:
+- *
+- * git://git.kernel.dk/liburing
+- *
+- * io_uring also uses READ/WRITE_ONCE() for _any_ store to or load from
+- * data shared between the kernel and application. This is done both for
+- * ordering purposes and to ensure that once a value is loaded from
+- * data that the application could potentially modify, it remains stable.
+- *
+- * Copyright (C) 2018-2019 Jens Axboe
+- * Copyright (c) 2018-2019 Christoph Hellwig
+- */
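
To make the pairing above concrete, here is a minimal userspace CQ-reaping
sketch in the spirit of liburing; cq_khead, cq_ktail, cqes, cq_mask and
handle_cqe() are assumed names, with the ring pointers coming from the
IORING_OFF_CQ_RING mmap:

    unsigned head = *cq_khead;
    unsigned tail = smp_load_acquire(cq_ktail);     /* pairs with the kernel's tail store */

    while (head != tail) {
            struct io_uring_cqe *cqe = &cqes[head & cq_mask];
            handle_cqe(cqe);                        /* consume before publishing head */
            head++;
    }
    smp_store_release(cq_khead, head);              /* orders the CQE loads before the head store */
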
+-#include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/errno.h>
+-#include <linux/syscalls.h>
+-#include <linux/compat.h>
+-#include <net/compat.h>
+-#include <linux/refcount.h>
+-#include <linux/uio.h>
+-#include <linux/bits.h>
+-
+-#include <linux/sched/signal.h>
+-#include <linux/fs.h>
+-#include <linux/file.h>
+-#include <linux/fdtable.h>
+-#include <linux/mm.h>
+-#include <linux/mman.h>
+-#include <linux/percpu.h>
+-#include <linux/slab.h>
+-#include <linux/blk-mq.h>
+-#include <linux/bvec.h>
+-#include <linux/net.h>
+-#include <net/sock.h>
+-#include <net/af_unix.h>
+-#include <net/scm.h>
+-#include <linux/anon_inodes.h>
+-#include <linux/sched/mm.h>
+-#include <linux/uaccess.h>
+-#include <linux/nospec.h>
+-#include <linux/sizes.h>
+-#include <linux/hugetlb.h>
+-#include <linux/highmem.h>
+-#include <linux/namei.h>
+-#include <linux/fsnotify.h>
+-#include <linux/fadvise.h>
+-#include <linux/eventpoll.h>
+-#include <linux/splice.h>
+-#include <linux/task_work.h>
+-#include <linux/pagemap.h>
+-#include <linux/io_uring.h>
+-#include <linux/audit.h>
+-#include <linux/security.h>
+-#include <linux/xattr.h>
+-
+-#define CREATE_TRACE_POINTS
+-#include <trace/events/io_uring.h>
+-
+-#include <uapi/linux/io_uring.h>
+-
+-#include "internal.h"
+-#include "io-wq.h"
+-
+-#define IORING_MAX_ENTRIES 32768
+-#define IORING_MAX_CQ_ENTRIES (2 * IORING_MAX_ENTRIES)
+-#define IORING_SQPOLL_CAP_ENTRIES_VALUE 8
+-
+-/* only define max */
+-#define IORING_MAX_FIXED_FILES (1U << 20)
+-#define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \
+- IORING_REGISTER_LAST + IORING_OP_LAST)
+-
+-#define IO_RSRC_TAG_TABLE_SHIFT (PAGE_SHIFT - 3)
+-#define IO_RSRC_TAG_TABLE_MAX (1U << IO_RSRC_TAG_TABLE_SHIFT)
+-#define IO_RSRC_TAG_TABLE_MASK (IO_RSRC_TAG_TABLE_MAX - 1)
+-
+-#define IORING_MAX_REG_BUFFERS (1U << 14)
+-
+-#define SQE_COMMON_FLAGS (IOSQE_FIXED_FILE | IOSQE_IO_LINK | \
+- IOSQE_IO_HARDLINK | IOSQE_ASYNC)
+-
+-#define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
+- IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+-
+-#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
+- REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
+- REQ_F_ASYNC_DATA)
+-
+-#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
+- IO_REQ_CLEAN_FLAGS)
+-
+-#define IO_APOLL_MULTI_POLLED (REQ_F_APOLL_MULTISHOT | REQ_F_POLLED)
+-
+-#define IO_TCTX_REFS_CACHE_NR (1U << 10)
+-
+-struct io_uring {
+- u32 head ____cacheline_aligned_in_smp;
+- u32 tail ____cacheline_aligned_in_smp;
+-};
+-
+-/*
+- * This data is shared with the application through the mmap at offsets
+- * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
+- *
+- * The offsets to the member fields are published through struct
+- * io_sqring_offsets when calling io_uring_setup.
+- */
+-struct io_rings {
+- /*
+- * Head and tail offsets into the ring; the offsets need to be
+- * masked to get valid indices.
+- *
+- * The kernel controls head of the sq ring and the tail of the cq ring,
+- * and the application controls tail of the sq ring and the head of the
+- * cq ring.
+- */
+- struct io_uring sq, cq;
+- /*
+- * Bitmasks to apply to head and tail offsets (constant, equals
+- * ring_entries - 1)
+- */
+- u32 sq_ring_mask, cq_ring_mask;
+- /* Ring sizes (constant, power of 2) */
+- u32 sq_ring_entries, cq_ring_entries;
+- /*
+-	 * Number of invalid entries dropped by the kernel due to an
+-	 * invalid index stored in the array.
+- *
+- * Written by the kernel, shouldn't be modified by the
+- * application (i.e. get number of "new events" by comparing to
+- * cached value).
+- *
+-	 * After the application has read a new SQ head value, this
+-	 * counter includes all submissions that were dropped before
+-	 * reaching the new SQ head (and possibly more).
+- */
+- u32 sq_dropped;
+- /*
+- * Runtime SQ flags
+- *
+- * Written by the kernel, shouldn't be modified by the
+- * application.
+- *
+- * The application needs a full memory barrier before checking
+- * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
+- */
+- atomic_t sq_flags;
+- /*
+- * Runtime CQ flags
+- *
+- * Written by the application, shouldn't be modified by the
+- * kernel.
+- */
+- u32 cq_flags;
+- /*
+- * Number of completion events lost because the queue was full;
+- * this should be avoided by the application by making sure
+- * there are not more requests pending than there is space in
+- * the completion queue.
+- *
+- * Written by the kernel, shouldn't be modified by the
+- * application (i.e. get number of "new events" by comparing to
+- * cached value).
+- *
+- * As completion events come in out of order this counter is not
+- * ordered with any other data.
+- */
+- u32 cq_overflow;
+- /*
+- * Ring buffer of completion events.
+- *
+- * The kernel writes completion events fresh every time they are
+- * produced, so the application is allowed to modify pending
+- * entries.
+- */
+- struct io_uring_cqe cqes[] ____cacheline_aligned_in_smp;
+-};
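
In sketch form, the masking rule above: head and tail are free-running u32
counters, so the number of ready completions is a plain unsigned difference
(correct across wraparound), and array access always applies the mask
(rings/head/tail are assumed locals):

    unsigned ready = tail - head;   /* valid even after the u32 counters wrap */
    struct io_uring_cqe *cqe = &rings->cqes[head & rings->cq_ring_mask];
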
+-
+-struct io_mapped_ubuf {
+- u64 ubuf;
+- u64 ubuf_end;
+- unsigned int nr_bvecs;
+- unsigned long acct_pages;
+- struct bio_vec bvec[];
+-};
+-
+-struct io_ring_ctx;
+-
+-struct io_overflow_cqe {
+- struct list_head list;
+- struct io_uring_cqe cqe;
+-};
+-
+-/*
+- * FFS_SCM is only available on 64-bit archs; for 32-bit we just define it as 0
+- * and define IO_URING_SCM_ALL. For this case, we use SCM for all files as we
+- * can't safely always dereference the file when the task has exited and ring
+- * cleanup is done. If a file is tracked and part of SCM, then unix gc on
+- * process exit may reap it before __io_sqe_files_unregister() is run.
+- */
+-#define FFS_NOWAIT 0x1UL
+-#define FFS_ISREG 0x2UL
+-#if defined(CONFIG_64BIT)
+-#define FFS_SCM 0x4UL
+-#else
+-#define IO_URING_SCM_ALL
+-#define FFS_SCM 0x0UL
+-#endif
+-#define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG|FFS_SCM)
+-
+-struct io_fixed_file {
+- /* file * with additional FFS_* flags */
+- unsigned long file_ptr;
+-};
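
The FFS_* bits fit in the low bits of the pointer because struct file
allocations are at least 8-byte aligned; a sketch of the tag/untag step
(variable names assumed):

    unsigned long file_ptr = (unsigned long)file | FFS_NOWAIT | FFS_ISREG;
    struct file *f = (struct file *)(file_ptr & FFS_MASK);  /* strip the tag bits */
    bool supports_nowait = file_ptr & FFS_NOWAIT;
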
+-
+-struct io_rsrc_put {
+- struct list_head list;
+- u64 tag;
+- union {
+- void *rsrc;
+- struct file *file;
+- struct io_mapped_ubuf *buf;
+- };
+-};
+-
+-struct io_file_table {
+- struct io_fixed_file *files;
+- unsigned long *bitmap;
+- unsigned int alloc_hint;
+-};
+-
+-struct io_rsrc_node {
+- struct percpu_ref refs;
+- struct list_head node;
+- struct list_head rsrc_list;
+- struct io_rsrc_data *rsrc_data;
+- struct llist_node llist;
+- bool done;
+-};
+-
+-typedef void (rsrc_put_fn)(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
+-
+-struct io_rsrc_data {
+- struct io_ring_ctx *ctx;
+-
+- u64 **tags;
+- unsigned int nr;
+- rsrc_put_fn *do_put;
+- atomic_t refs;
+- struct completion done;
+- bool quiesce;
+-};
+-
+-#define IO_BUFFER_LIST_BUF_PER_PAGE (PAGE_SIZE / sizeof(struct io_uring_buf))
+-struct io_buffer_list {
+- /*
+- * If ->buf_nr_pages is set, then buf_pages/buf_ring are used. If not,
+- * then these are classic provided buffers and ->buf_list is used.
+- */
+- union {
+- struct list_head buf_list;
+- struct {
+- struct page **buf_pages;
+- struct io_uring_buf_ring *buf_ring;
+- };
+- };
+- __u16 bgid;
+-
+- /* below is for ring provided buffers */
+- __u16 buf_nr_pages;
+- __u16 nr_entries;
+- __u16 head;
+- __u16 mask;
+-};
+-
+-struct io_buffer {
+- struct list_head list;
+- __u64 addr;
+- __u32 len;
+- __u16 bid;
+- __u16 bgid;
+-};
+-
+-struct io_restriction {
+- DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+- DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+- u8 sqe_flags_allowed;
+- u8 sqe_flags_required;
+- bool registered;
+-};
+-
+-enum {
+- IO_SQ_THREAD_SHOULD_STOP = 0,
+- IO_SQ_THREAD_SHOULD_PARK,
+-};
+-
+-struct io_sq_data {
+- refcount_t refs;
+- atomic_t park_pending;
+- struct mutex lock;
+-
+- /* ctx's that are using this sqd */
+- struct list_head ctx_list;
+-
+- struct task_struct *thread;
+- struct wait_queue_head wait;
+-
+- unsigned sq_thread_idle;
+- int sq_cpu;
+- pid_t task_pid;
+- pid_t task_tgid;
+-
+- unsigned long state;
+- struct completion exited;
+-};
+-
+-#define IO_COMPL_BATCH 32
+-#define IO_REQ_CACHE_SIZE 32
+-#define IO_REQ_ALLOC_BATCH 8
+-
+-struct io_submit_link {
+- struct io_kiocb *head;
+- struct io_kiocb *last;
+-};
+-
+-struct io_submit_state {
+- /* inline/task_work completion list, under ->uring_lock */
+- struct io_wq_work_node free_list;
+- /* batch completion logic */
+- struct io_wq_work_list compl_reqs;
+- struct io_submit_link link;
+-
+- bool plug_started;
+- bool need_plug;
+- bool flush_cqes;
+- unsigned short submit_nr;
+- struct blk_plug plug;
+-};
+-
+-struct io_ev_fd {
+- struct eventfd_ctx *cq_ev_fd;
+- unsigned int eventfd_async: 1;
+- struct rcu_head rcu;
+-};
+-
+-#define BGID_ARRAY 64
+-
+-struct io_ring_ctx {
+- /* const or read-mostly hot data */
+- struct {
+- struct percpu_ref refs;
+-
+- struct io_rings *rings;
+- unsigned int flags;
+- enum task_work_notify_mode notify_method;
+- unsigned int compat: 1;
+- unsigned int drain_next: 1;
+- unsigned int restricted: 1;
+- unsigned int off_timeout_used: 1;
+- unsigned int drain_active: 1;
+- unsigned int drain_disabled: 1;
+- unsigned int has_evfd: 1;
+- unsigned int syscall_iopoll: 1;
+- } ____cacheline_aligned_in_smp;
+-
+- /* submission data */
+- struct {
+- struct mutex uring_lock;
+-
+- /*
+- * Ring buffer of indices into array of io_uring_sqe, which is
+- * mmapped by the application using the IORING_OFF_SQES offset.
+- *
+- * This indirection could e.g. be used to assign fixed
+- * io_uring_sqe entries to operations and only submit them to
+- * the queue when needed.
+- *
+- * The kernel modifies neither the indices array nor the entries
+- * array.
+- */
+- u32 *sq_array;
+- struct io_uring_sqe *sq_sqes;
+- unsigned cached_sq_head;
+- unsigned sq_entries;
+- struct list_head defer_list;
+-
+- /*
+- * Fixed resources fast path, should be accessed only under
+- * uring_lock, and updated through io_uring_register(2)
+- */
+- struct io_rsrc_node *rsrc_node;
+- int rsrc_cached_refs;
+- atomic_t cancel_seq;
+- struct io_file_table file_table;
+- unsigned nr_user_files;
+- unsigned nr_user_bufs;
+- struct io_mapped_ubuf **user_bufs;
+-
+- struct io_submit_state submit_state;
+-
+- struct io_buffer_list *io_bl;
+- struct xarray io_bl_xa;
+- struct list_head io_buffers_cache;
+-
+- struct list_head timeout_list;
+- struct list_head ltimeout_list;
+- struct list_head cq_overflow_list;
+- struct list_head apoll_cache;
+- struct xarray personalities;
+- u32 pers_next;
+- unsigned sq_thread_idle;
+- } ____cacheline_aligned_in_smp;
+-
+- /* IRQ completion list, under ->completion_lock */
+- struct io_wq_work_list locked_free_list;
+- unsigned int locked_free_nr;
+-
+- const struct cred *sq_creds; /* cred used for __io_sq_thread() */
+- struct io_sq_data *sq_data; /* if using sq thread polling */
+-
+- struct wait_queue_head sqo_sq_wait;
+- struct list_head sqd_list;
+-
+- unsigned long check_cq;
+-
+- struct {
+- /*
+-		 * We cache a range of free CQEs we can use; once exhausted it
+- * should go through a slower range setup, see __io_get_cqe()
+- */
+- struct io_uring_cqe *cqe_cached;
+- struct io_uring_cqe *cqe_sentinel;
+-
+- unsigned cached_cq_tail;
+- unsigned cq_entries;
+- struct io_ev_fd __rcu *io_ev_fd;
+- struct wait_queue_head cq_wait;
+- unsigned cq_extra;
+- atomic_t cq_timeouts;
+- unsigned cq_last_tm_flush;
+- } ____cacheline_aligned_in_smp;
+-
+- struct {
+- spinlock_t completion_lock;
+-
+- spinlock_t timeout_lock;
+-
+- /*
+- * ->iopoll_list is protected by the ctx->uring_lock for
+- * io_uring instances that don't use IORING_SETUP_SQPOLL.
+- * For SQPOLL, only the single threaded io_sq_thread() will
+- * manipulate the list, hence no extra locking is needed there.
+- */
+- struct io_wq_work_list iopoll_list;
+- struct hlist_head *cancel_hash;
+- unsigned cancel_hash_bits;
+- bool poll_multi_queue;
+-
+- struct list_head io_buffers_comp;
+- } ____cacheline_aligned_in_smp;
+-
+- struct io_restriction restrictions;
+-
+-	/* slow path rsrc auxiliary data, used by update/register */
+- struct {
+- struct io_rsrc_node *rsrc_backup_node;
+- struct io_mapped_ubuf *dummy_ubuf;
+- struct io_rsrc_data *file_data;
+- struct io_rsrc_data *buf_data;
+-
+- struct delayed_work rsrc_put_work;
+- struct llist_head rsrc_put_llist;
+- struct list_head rsrc_ref_list;
+- spinlock_t rsrc_ref_lock;
+-
+- struct list_head io_buffers_pages;
+- };
+-
+- /* Keep this last, we don't need it for the fast path */
+- struct {
+- #if defined(CONFIG_UNIX)
+- struct socket *ring_sock;
+- #endif
+- /* hashed buffered write serialization */
+- struct io_wq_hash *hash_map;
+-
+- /* Only used for accounting purposes */
+- struct user_struct *user;
+- struct mm_struct *mm_account;
+-
+- /* ctx exit and cancelation */
+- struct llist_head fallback_llist;
+- struct delayed_work fallback_work;
+- struct work_struct exit_work;
+- struct list_head tctx_list;
+- struct completion ref_comp;
+- u32 iowq_limits[2];
+- bool iowq_limits_set;
+- };
+-};
+-
+-/*
+- * Arbitrary limit, can be raised if need be
+- */
+-#define IO_RINGFD_REG_MAX 16
+-
+-struct io_uring_task {
+- /* submission side */
+- int cached_refs;
+- struct xarray xa;
+- struct wait_queue_head wait;
+- const struct io_ring_ctx *last;
+- struct io_wq *io_wq;
+- struct percpu_counter inflight;
+- atomic_t inflight_tracked;
+- atomic_t in_idle;
+-
+- spinlock_t task_lock;
+- struct io_wq_work_list task_list;
+- struct io_wq_work_list prio_task_list;
+- struct callback_head task_work;
+- struct file **registered_rings;
+- bool task_running;
+-};
+-
+-/*
+- * First field must be the file pointer in all the
+- * iocb unions! See also 'struct kiocb' in <linux/fs.h>
+- */
+-struct io_poll_iocb {
+- struct file *file;
+- struct wait_queue_head *head;
+- __poll_t events;
+- struct wait_queue_entry wait;
+-};
+-
+-struct io_poll_update {
+- struct file *file;
+- u64 old_user_data;
+- u64 new_user_data;
+- __poll_t events;
+- bool update_events;
+- bool update_user_data;
+-};
+-
+-struct io_close {
+- struct file *file;
+- int fd;
+- u32 file_slot;
+-};
+-
+-struct io_timeout_data {
+- struct io_kiocb *req;
+- struct hrtimer timer;
+- struct timespec64 ts;
+- enum hrtimer_mode mode;
+- u32 flags;
+-};
+-
+-struct io_accept {
+- struct file *file;
+- struct sockaddr __user *addr;
+- int __user *addr_len;
+- int flags;
+- u32 file_slot;
+- unsigned long nofile;
+-};
+-
+-struct io_socket {
+- struct file *file;
+- int domain;
+- int type;
+- int protocol;
+- int flags;
+- u32 file_slot;
+- unsigned long nofile;
+-};
+-
+-struct io_sync {
+- struct file *file;
+- loff_t len;
+- loff_t off;
+- int flags;
+- int mode;
+-};
+-
+-struct io_cancel {
+- struct file *file;
+- u64 addr;
+- u32 flags;
+- s32 fd;
+-};
+-
+-struct io_timeout {
+- struct file *file;
+- u32 off;
+- u32 target_seq;
+- struct list_head list;
+- /* head of the link, used by linked timeouts only */
+- struct io_kiocb *head;
+- /* for linked completions */
+- struct io_kiocb *prev;
+-};
+-
+-struct io_timeout_rem {
+- struct file *file;
+- u64 addr;
+-
+- /* timeout update */
+- struct timespec64 ts;
+- u32 flags;
+- bool ltimeout;
+-};
+-
+-struct io_rw {
+- /* NOTE: kiocb has the file as the first member, so don't do it here */
+- struct kiocb kiocb;
+- u64 addr;
+- u32 len;
+- rwf_t flags;
+-};
+-
+-struct io_connect {
+- struct file *file;
+- struct sockaddr __user *addr;
+- int addr_len;
+-};
+-
+-struct io_sr_msg {
+- struct file *file;
+- union {
+- struct compat_msghdr __user *umsg_compat;
+- struct user_msghdr __user *umsg;
+- void __user *buf;
+- };
+- int msg_flags;
+- size_t len;
+- size_t done_io;
+- unsigned int flags;
+-};
+-
+-struct io_open {
+- struct file *file;
+- int dfd;
+- u32 file_slot;
+- struct filename *filename;
+- struct open_how how;
+- unsigned long nofile;
+-};
+-
+-struct io_rsrc_update {
+- struct file *file;
+- u64 arg;
+- u32 nr_args;
+- u32 offset;
+-};
+-
+-struct io_fadvise {
+- struct file *file;
+- u64 offset;
+- u32 len;
+- u32 advice;
+-};
+-
+-struct io_madvise {
+- struct file *file;
+- u64 addr;
+- u32 len;
+- u32 advice;
+-};
+-
+-struct io_epoll {
+- struct file *file;
+- int epfd;
+- int op;
+- int fd;
+- struct epoll_event event;
+-};
+-
+-struct io_splice {
+- struct file *file_out;
+- loff_t off_out;
+- loff_t off_in;
+- u64 len;
+- int splice_fd_in;
+- unsigned int flags;
+-};
+-
+-struct io_provide_buf {
+- struct file *file;
+- __u64 addr;
+- __u32 len;
+- __u32 bgid;
+- __u16 nbufs;
+- __u16 bid;
+-};
+-
+-struct io_statx {
+- struct file *file;
+- int dfd;
+- unsigned int mask;
+- unsigned int flags;
+- struct filename *filename;
+- struct statx __user *buffer;
+-};
+-
+-struct io_shutdown {
+- struct file *file;
+- int how;
+-};
+-
+-struct io_rename {
+- struct file *file;
+- int old_dfd;
+- int new_dfd;
+- struct filename *oldpath;
+- struct filename *newpath;
+- int flags;
+-};
+-
+-struct io_unlink {
+- struct file *file;
+- int dfd;
+- int flags;
+- struct filename *filename;
+-};
+-
+-struct io_mkdir {
+- struct file *file;
+- int dfd;
+- umode_t mode;
+- struct filename *filename;
+-};
+-
+-struct io_symlink {
+- struct file *file;
+- int new_dfd;
+- struct filename *oldpath;
+- struct filename *newpath;
+-};
+-
+-struct io_hardlink {
+- struct file *file;
+- int old_dfd;
+- int new_dfd;
+- struct filename *oldpath;
+- struct filename *newpath;
+- int flags;
+-};
+-
+-struct io_msg {
+- struct file *file;
+- u64 user_data;
+- u32 len;
+-};
+-
+-struct io_async_connect {
+- struct sockaddr_storage address;
+-};
+-
+-struct io_async_msghdr {
+- struct iovec fast_iov[UIO_FASTIOV];
+- /* points to an allocated iov, if NULL we use fast_iov instead */
+- struct iovec *free_iov;
+- struct sockaddr __user *uaddr;
+- struct msghdr msg;
+- struct sockaddr_storage addr;
+-};
+-
+-struct io_rw_state {
+- struct iov_iter iter;
+- struct iov_iter_state iter_state;
+- struct iovec fast_iov[UIO_FASTIOV];
+-};
+-
+-struct io_async_rw {
+- struct io_rw_state s;
+- const struct iovec *free_iovec;
+- size_t bytes_done;
+- struct wait_page_queue wpq;
+-};
+-
+-struct io_xattr {
+- struct file *file;
+- struct xattr_ctx ctx;
+- struct filename *filename;
+-};
+-
+-enum {
+- REQ_F_FIXED_FILE_BIT = IOSQE_FIXED_FILE_BIT,
+- REQ_F_IO_DRAIN_BIT = IOSQE_IO_DRAIN_BIT,
+- REQ_F_LINK_BIT = IOSQE_IO_LINK_BIT,
+- REQ_F_HARDLINK_BIT = IOSQE_IO_HARDLINK_BIT,
+- REQ_F_FORCE_ASYNC_BIT = IOSQE_ASYNC_BIT,
+- REQ_F_BUFFER_SELECT_BIT = IOSQE_BUFFER_SELECT_BIT,
+- REQ_F_CQE_SKIP_BIT = IOSQE_CQE_SKIP_SUCCESS_BIT,
+-
+- /* first byte is taken by user flags, shift it to not overlap */
+- REQ_F_FAIL_BIT = 8,
+- REQ_F_INFLIGHT_BIT,
+- REQ_F_CUR_POS_BIT,
+- REQ_F_NOWAIT_BIT,
+- REQ_F_LINK_TIMEOUT_BIT,
+- REQ_F_NEED_CLEANUP_BIT,
+- REQ_F_POLLED_BIT,
+- REQ_F_BUFFER_SELECTED_BIT,
+- REQ_F_BUFFER_RING_BIT,
+- REQ_F_COMPLETE_INLINE_BIT,
+- REQ_F_REISSUE_BIT,
+- REQ_F_CREDS_BIT,
+- REQ_F_REFCOUNT_BIT,
+- REQ_F_ARM_LTIMEOUT_BIT,
+- REQ_F_ASYNC_DATA_BIT,
+- REQ_F_SKIP_LINK_CQES_BIT,
+- REQ_F_SINGLE_POLL_BIT,
+- REQ_F_DOUBLE_POLL_BIT,
+- REQ_F_PARTIAL_IO_BIT,
+- REQ_F_CQE32_INIT_BIT,
+- REQ_F_APOLL_MULTISHOT_BIT,
+- /* keep async read/write and isreg together and in order */
+- REQ_F_SUPPORT_NOWAIT_BIT,
+- REQ_F_ISREG_BIT,
+-
+- /* not a real bit, just to check we're not overflowing the space */
+- __REQ_F_LAST_BIT,
+-};
+-
+-enum {
+- /* ctx owns file */
+- REQ_F_FIXED_FILE = BIT(REQ_F_FIXED_FILE_BIT),
+- /* drain existing IO first */
+- REQ_F_IO_DRAIN = BIT(REQ_F_IO_DRAIN_BIT),
+- /* linked sqes */
+- REQ_F_LINK = BIT(REQ_F_LINK_BIT),
+- /* doesn't sever on completion < 0 */
+- REQ_F_HARDLINK = BIT(REQ_F_HARDLINK_BIT),
+- /* IOSQE_ASYNC */
+- REQ_F_FORCE_ASYNC = BIT(REQ_F_FORCE_ASYNC_BIT),
+- /* IOSQE_BUFFER_SELECT */
+- REQ_F_BUFFER_SELECT = BIT(REQ_F_BUFFER_SELECT_BIT),
+- /* IOSQE_CQE_SKIP_SUCCESS */
+- REQ_F_CQE_SKIP = BIT(REQ_F_CQE_SKIP_BIT),
+-
+- /* fail rest of links */
+- REQ_F_FAIL = BIT(REQ_F_FAIL_BIT),
+- /* on inflight list, should be cancelled and waited on exit reliably */
+- REQ_F_INFLIGHT = BIT(REQ_F_INFLIGHT_BIT),
+- /* read/write uses file position */
+- REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
+- /* must not punt to workers */
+- REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
+- /* has or had linked timeout */
+- REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
+- /* needs cleanup */
+- REQ_F_NEED_CLEANUP = BIT(REQ_F_NEED_CLEANUP_BIT),
+- /* already went through poll handler */
+- REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),
+- /* buffer already selected */
+- REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
+- /* buffer selected from ring, needs commit */
+- REQ_F_BUFFER_RING = BIT(REQ_F_BUFFER_RING_BIT),
+- /* completion is deferred through io_comp_state */
+- REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
+- /* caller should reissue async */
+- REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT),
+- /* supports async reads/writes */
+- REQ_F_SUPPORT_NOWAIT = BIT(REQ_F_SUPPORT_NOWAIT_BIT),
+- /* regular file */
+- REQ_F_ISREG = BIT(REQ_F_ISREG_BIT),
+- /* has creds assigned */
+- REQ_F_CREDS = BIT(REQ_F_CREDS_BIT),
+- /* skip refcounting if not set */
+- REQ_F_REFCOUNT = BIT(REQ_F_REFCOUNT_BIT),
+- /* there is a linked timeout that has to be armed */
+- REQ_F_ARM_LTIMEOUT = BIT(REQ_F_ARM_LTIMEOUT_BIT),
+- /* ->async_data allocated */
+- REQ_F_ASYNC_DATA = BIT(REQ_F_ASYNC_DATA_BIT),
+- /* don't post CQEs while failing linked requests */
+- REQ_F_SKIP_LINK_CQES = BIT(REQ_F_SKIP_LINK_CQES_BIT),
+- /* single poll may be active */
+- REQ_F_SINGLE_POLL = BIT(REQ_F_SINGLE_POLL_BIT),
+-	/* double poll may be active */
+- REQ_F_DOUBLE_POLL = BIT(REQ_F_DOUBLE_POLL_BIT),
+- /* request has already done partial IO */
+- REQ_F_PARTIAL_IO = BIT(REQ_F_PARTIAL_IO_BIT),
+- /* fast poll multishot mode */
+- REQ_F_APOLL_MULTISHOT = BIT(REQ_F_APOLL_MULTISHOT_BIT),
+- /* ->extra1 and ->extra2 are initialised */
+- REQ_F_CQE32_INIT = BIT(REQ_F_CQE32_INIT_BIT),
+-};
+-
+-struct async_poll {
+- struct io_poll_iocb poll;
+- struct io_poll_iocb *double_poll;
+-};
+-
+-typedef void (*io_req_tw_func_t)(struct io_kiocb *req, bool *locked);
+-
+-struct io_task_work {
+- union {
+- struct io_wq_work_node node;
+- struct llist_node fallback_node;
+- };
+- io_req_tw_func_t func;
+-};
+-
+-enum {
+- IORING_RSRC_FILE = 0,
+- IORING_RSRC_BUFFER = 1,
+-};
+-
+-struct io_cqe {
+- __u64 user_data;
+- __s32 res;
+- /* fd initially, then cflags for completion */
+- union {
+- __u32 flags;
+- int fd;
+- };
+-};
+-
+-enum {
+- IO_CHECK_CQ_OVERFLOW_BIT,
+- IO_CHECK_CQ_DROPPED_BIT,
+-};
+-
+-/*
+- * NOTE! Each of the iocb union members has the file pointer
+- * as the first entry in their struct definition. So you can
+- * access the file pointer through any of the sub-structs,
+- * or directly as just 'file' in this struct.
+- */
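
Because each union member below begins with the file pointer, the two reads in
this sketch alias the same bytes, which is exactly the layout guarantee the
note describes (get_some_request() is hypothetical):

    struct io_kiocb *req = get_some_request();

    struct file *a = req->file;        /* direct access */
    struct file *b = req->poll.file;   /* same storage via a sub-struct */
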
+-struct io_kiocb {
+- union {
+- struct file *file;
+- struct io_rw rw;
+- struct io_poll_iocb poll;
+- struct io_poll_update poll_update;
+- struct io_accept accept;
+- struct io_sync sync;
+- struct io_cancel cancel;
+- struct io_timeout timeout;
+- struct io_timeout_rem timeout_rem;
+- struct io_connect connect;
+- struct io_sr_msg sr_msg;
+- struct io_open open;
+- struct io_close close;
+- struct io_rsrc_update rsrc_update;
+- struct io_fadvise fadvise;
+- struct io_madvise madvise;
+- struct io_epoll epoll;
+- struct io_splice splice;
+- struct io_provide_buf pbuf;
+- struct io_statx statx;
+- struct io_shutdown shutdown;
+- struct io_rename rename;
+- struct io_unlink unlink;
+- struct io_mkdir mkdir;
+- struct io_symlink symlink;
+- struct io_hardlink hardlink;
+- struct io_msg msg;
+- struct io_xattr xattr;
+- struct io_socket sock;
+- struct io_uring_cmd uring_cmd;
+- };
+-
+- u8 opcode;
+- /* polled IO has completed */
+- u8 iopoll_completed;
+- /*
+- * Can be either a fixed buffer index, or used with provided buffers.
+- * For the latter, before issue it points to the buffer group ID,
+- * and after selection it points to the buffer ID itself.
+- */
+- u16 buf_index;
+- unsigned int flags;
+-
+- struct io_cqe cqe;
+-
+- struct io_ring_ctx *ctx;
+- struct task_struct *task;
+-
+- struct io_rsrc_node *rsrc_node;
+-
+- union {
+- /* store used ubuf, so we can prevent reloading */
+- struct io_mapped_ubuf *imu;
+-
+- /* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
+- struct io_buffer *kbuf;
+-
+- /*
+- * stores buffer ID for ring provided buffers, valid IFF
+- * REQ_F_BUFFER_RING is set.
+- */
+- struct io_buffer_list *buf_list;
+- };
+-
+- union {
+- /* used by request caches, completion batching and iopoll */
+- struct io_wq_work_node comp_list;
+- /* cache ->apoll->events */
+- __poll_t apoll_events;
+- };
+- atomic_t refs;
+- atomic_t poll_refs;
+- struct io_task_work io_task_work;
+- /* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
+- union {
+- struct hlist_node hash_node;
+- struct {
+- u64 extra1;
+- u64 extra2;
+- };
+- };
+- /* internal polling, see IORING_FEAT_FAST_POLL */
+- struct async_poll *apoll;
+- /* opcode allocated if it needs to store data for async defer */
+- void *async_data;
+- /* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
+- struct io_kiocb *link;
+- /* custom credentials, valid IFF REQ_F_CREDS is set */
+- const struct cred *creds;
+- struct io_wq_work work;
+-};
+-
+-struct io_tctx_node {
+- struct list_head ctx_node;
+- struct task_struct *task;
+- struct io_ring_ctx *ctx;
+-};
+-
+-struct io_defer_entry {
+- struct list_head list;
+- struct io_kiocb *req;
+- u32 seq;
+-};
+-
+-struct io_cancel_data {
+- struct io_ring_ctx *ctx;
+- union {
+- u64 data;
+- struct file *file;
+- };
+- u32 flags;
+- int seq;
+-};
+-
+-/*
+- * The URING_CMD payload starts at 'cmd' in the first sqe, and continues into
+- * the following sqe if SQE128 is used.
+- */
+-#define uring_cmd_pdu_size(is_sqe128) \
+- ((1 + !!(is_sqe128)) * sizeof(struct io_uring_sqe) - \
+- offsetof(struct io_uring_sqe, cmd))
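
Worked through for the 5.19 SQE layout, where an SQE is 64 bytes and the cmd
payload starts at byte offset 48 (treat the exact offset as this sketch's
assumption):

    /* uring_cmd_pdu_size(0) == 1 * 64 - 48 == 16 bytes (single SQE)          */
    /* uring_cmd_pdu_size(1) == 2 * 64 - 48 == 80 bytes (IORING_SETUP_SQE128) */
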
+-
+-struct io_op_def {
+- /* needs req->file assigned */
+- unsigned needs_file : 1;
+- /* should block plug */
+- unsigned plug : 1;
+- /* hash wq insertion if file is a regular file */
+- unsigned hash_reg_file : 1;
+- /* unbound wq insertion if file is a non-regular file */
+- unsigned unbound_nonreg_file : 1;
+- /* set if opcode supports polled "wait" */
+- unsigned pollin : 1;
+- unsigned pollout : 1;
+- unsigned poll_exclusive : 1;
+- /* op supports buffer selection */
+- unsigned buffer_select : 1;
+- /* do prep async if is going to be punted */
+- unsigned needs_async_setup : 1;
+- /* opcode is not supported by this kernel */
+- unsigned not_supported : 1;
+- /* skip auditing */
+- unsigned audit_skip : 1;
+- /* supports ioprio */
+- unsigned ioprio : 1;
+- /* supports iopoll */
+- unsigned iopoll : 1;
+- /* size of async data needed, if any */
+- unsigned short async_size;
+-};
+-
+-static const struct io_op_def io_op_defs[] = {
+- [IORING_OP_NOP] = {
+- .audit_skip = 1,
+- .iopoll = 1,
+- },
+- [IORING_OP_READV] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .buffer_select = 1,
+- .needs_async_setup = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_WRITEV] = {
+- .needs_file = 1,
+- .hash_reg_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .needs_async_setup = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_FSYNC] = {
+- .needs_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_READ_FIXED] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_WRITE_FIXED] = {
+- .needs_file = 1,
+- .hash_reg_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_POLL_ADD] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_POLL_REMOVE] = {
+- .audit_skip = 1,
+- },
+- [IORING_OP_SYNC_FILE_RANGE] = {
+- .needs_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_SENDMSG] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .needs_async_setup = 1,
+- .ioprio = 1,
+- .async_size = sizeof(struct io_async_msghdr),
+- },
+- [IORING_OP_RECVMSG] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .buffer_select = 1,
+- .needs_async_setup = 1,
+- .ioprio = 1,
+- .async_size = sizeof(struct io_async_msghdr),
+- },
+- [IORING_OP_TIMEOUT] = {
+- .audit_skip = 1,
+- .async_size = sizeof(struct io_timeout_data),
+- },
+- [IORING_OP_TIMEOUT_REMOVE] = {
+- /* used by timeout updates' prep() */
+- .audit_skip = 1,
+- },
+- [IORING_OP_ACCEPT] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .poll_exclusive = 1,
+- .ioprio = 1, /* used for flags */
+- },
+- [IORING_OP_ASYNC_CANCEL] = {
+- .audit_skip = 1,
+- },
+- [IORING_OP_LINK_TIMEOUT] = {
+- .audit_skip = 1,
+- .async_size = sizeof(struct io_timeout_data),
+- },
+- [IORING_OP_CONNECT] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .needs_async_setup = 1,
+- .async_size = sizeof(struct io_async_connect),
+- },
+- [IORING_OP_FALLOCATE] = {
+- .needs_file = 1,
+- },
+- [IORING_OP_OPENAT] = {},
+- [IORING_OP_CLOSE] = {},
+- [IORING_OP_FILES_UPDATE] = {
+- .audit_skip = 1,
+- .iopoll = 1,
+- },
+- [IORING_OP_STATX] = {
+- .audit_skip = 1,
+- },
+- [IORING_OP_READ] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .buffer_select = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_WRITE] = {
+- .needs_file = 1,
+- .hash_reg_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .plug = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- .iopoll = 1,
+- .async_size = sizeof(struct io_async_rw),
+- },
+- [IORING_OP_FADVISE] = {
+- .needs_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_MADVISE] = {},
+- [IORING_OP_SEND] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollout = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- },
+- [IORING_OP_RECV] = {
+- .needs_file = 1,
+- .unbound_nonreg_file = 1,
+- .pollin = 1,
+- .buffer_select = 1,
+- .audit_skip = 1,
+- .ioprio = 1,
+- },
+- [IORING_OP_OPENAT2] = {
+- },
+- [IORING_OP_EPOLL_CTL] = {
+- .unbound_nonreg_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_SPLICE] = {
+- .needs_file = 1,
+- .hash_reg_file = 1,
+- .unbound_nonreg_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_PROVIDE_BUFFERS] = {
+- .audit_skip = 1,
+- .iopoll = 1,
+- },
+- [IORING_OP_REMOVE_BUFFERS] = {
+- .audit_skip = 1,
+- .iopoll = 1,
+- },
+- [IORING_OP_TEE] = {
+- .needs_file = 1,
+- .hash_reg_file = 1,
+- .unbound_nonreg_file = 1,
+- .audit_skip = 1,
+- },
+- [IORING_OP_SHUTDOWN] = {
+- .needs_file = 1,
+- },
+- [IORING_OP_RENAMEAT] = {},
+- [IORING_OP_UNLINKAT] = {},
+- [IORING_OP_MKDIRAT] = {},
+- [IORING_OP_SYMLINKAT] = {},
+- [IORING_OP_LINKAT] = {},
+- [IORING_OP_MSG_RING] = {
+- .needs_file = 1,
+- .iopoll = 1,
+- },
+- [IORING_OP_FSETXATTR] = {
+- .needs_file = 1
+- },
+- [IORING_OP_SETXATTR] = {},
+- [IORING_OP_FGETXATTR] = {
+- .needs_file = 1
+- },
+- [IORING_OP_GETXATTR] = {},
+- [IORING_OP_SOCKET] = {
+- .audit_skip = 1,
+- },
+- [IORING_OP_URING_CMD] = {
+- .needs_file = 1,
+- .plug = 1,
+- .needs_async_setup = 1,
+- .async_size = uring_cmd_pdu_size(1),
+- },
+-};
+-
+-/* requests with any of those set should undergo io_disarm_next() */
+-#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
+-#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
+-
+-static bool io_disarm_next(struct io_kiocb *req);
+-static void io_uring_del_tctx_node(unsigned long index);
+-static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+- struct task_struct *task,
+- bool cancel_all);
+-static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
+-
+-static void __io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags);
+-static void io_dismantle_req(struct io_kiocb *req);
+-static void io_queue_linked_timeout(struct io_kiocb *req);
+-static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
+- struct io_uring_rsrc_update2 *up,
+- unsigned nr_args);
+-static void io_clean_op(struct io_kiocb *req);
+-static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
+- unsigned issue_flags);
+-static struct file *io_file_get_normal(struct io_kiocb *req, int fd);
+-static void io_queue_sqe(struct io_kiocb *req);
+-static void io_rsrc_put_work(struct work_struct *work);
+-
+-static void io_req_task_queue(struct io_kiocb *req);
+-static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
+-static int io_req_prep_async(struct io_kiocb *req);
+-
+-static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
+- unsigned int issue_flags, u32 slot_index);
+-static int __io_close_fixed(struct io_kiocb *req, unsigned int issue_flags,
+- unsigned int offset);
+-static inline int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags);
+-
+-static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer);
+-static void io_eventfd_signal(struct io_ring_ctx *ctx);
+-static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
+-
+-static struct kmem_cache *req_cachep;
+-
+-static const struct file_operations io_uring_fops;
+-
+-const char *io_uring_get_opcode(u8 opcode)
+-{
+- switch ((enum io_uring_op)opcode) {
+- case IORING_OP_NOP:
+- return "NOP";
+- case IORING_OP_READV:
+- return "READV";
+- case IORING_OP_WRITEV:
+- return "WRITEV";
+- case IORING_OP_FSYNC:
+- return "FSYNC";
+- case IORING_OP_READ_FIXED:
+- return "READ_FIXED";
+- case IORING_OP_WRITE_FIXED:
+- return "WRITE_FIXED";
+- case IORING_OP_POLL_ADD:
+- return "POLL_ADD";
+- case IORING_OP_POLL_REMOVE:
+- return "POLL_REMOVE";
+- case IORING_OP_SYNC_FILE_RANGE:
+- return "SYNC_FILE_RANGE";
+- case IORING_OP_SENDMSG:
+- return "SENDMSG";
+- case IORING_OP_RECVMSG:
+- return "RECVMSG";
+- case IORING_OP_TIMEOUT:
+- return "TIMEOUT";
+- case IORING_OP_TIMEOUT_REMOVE:
+- return "TIMEOUT_REMOVE";
+- case IORING_OP_ACCEPT:
+- return "ACCEPT";
+- case IORING_OP_ASYNC_CANCEL:
+- return "ASYNC_CANCEL";
+- case IORING_OP_LINK_TIMEOUT:
+- return "LINK_TIMEOUT";
+- case IORING_OP_CONNECT:
+- return "CONNECT";
+- case IORING_OP_FALLOCATE:
+- return "FALLOCATE";
+- case IORING_OP_OPENAT:
+- return "OPENAT";
+- case IORING_OP_CLOSE:
+- return "CLOSE";
+- case IORING_OP_FILES_UPDATE:
+- return "FILES_UPDATE";
+- case IORING_OP_STATX:
+- return "STATX";
+- case IORING_OP_READ:
+- return "READ";
+- case IORING_OP_WRITE:
+- return "WRITE";
+- case IORING_OP_FADVISE:
+- return "FADVISE";
+- case IORING_OP_MADVISE:
+- return "MADVISE";
+- case IORING_OP_SEND:
+- return "SEND";
+- case IORING_OP_RECV:
+- return "RECV";
+- case IORING_OP_OPENAT2:
+- return "OPENAT2";
+- case IORING_OP_EPOLL_CTL:
+- return "EPOLL_CTL";
+- case IORING_OP_SPLICE:
+- return "SPLICE";
+- case IORING_OP_PROVIDE_BUFFERS:
+- return "PROVIDE_BUFFERS";
+- case IORING_OP_REMOVE_BUFFERS:
+- return "REMOVE_BUFFERS";
+- case IORING_OP_TEE:
+- return "TEE";
+- case IORING_OP_SHUTDOWN:
+- return "SHUTDOWN";
+- case IORING_OP_RENAMEAT:
+- return "RENAMEAT";
+- case IORING_OP_UNLINKAT:
+- return "UNLINKAT";
+- case IORING_OP_MKDIRAT:
+- return "MKDIRAT";
+- case IORING_OP_SYMLINKAT:
+- return "SYMLINKAT";
+- case IORING_OP_LINKAT:
+- return "LINKAT";
+- case IORING_OP_MSG_RING:
+- return "MSG_RING";
+- case IORING_OP_FSETXATTR:
+- return "FSETXATTR";
+- case IORING_OP_SETXATTR:
+- return "SETXATTR";
+- case IORING_OP_FGETXATTR:
+- return "FGETXATTR";
+- case IORING_OP_GETXATTR:
+- return "GETXATTR";
+- case IORING_OP_SOCKET:
+- return "SOCKET";
+- case IORING_OP_URING_CMD:
+- return "URING_CMD";
+- case IORING_OP_LAST:
+- return "INVALID";
+- }
+- return "INVALID";
+-}
+-
+-struct sock *io_uring_get_socket(struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+- if (file->f_op == &io_uring_fops) {
+- struct io_ring_ctx *ctx = file->private_data;
+-
+- return ctx->ring_sock->sk;
+- }
+-#endif
+- return NULL;
+-}
+-EXPORT_SYMBOL(io_uring_get_socket);
+-
+-#if defined(CONFIG_UNIX)
+-static inline bool io_file_need_scm(struct file *filp)
+-{
+-#if defined(IO_URING_SCM_ALL)
+- return true;
+-#else
+- return !!unix_get_socket(filp);
+-#endif
+-}
+-#else
+-static inline bool io_file_need_scm(struct file *filp)
+-{
+- return false;
+-}
+-#endif
+-
+-static void io_ring_submit_unlock(struct io_ring_ctx *ctx, unsigned issue_flags)
+-{
+- lockdep_assert_held(&ctx->uring_lock);
+- if (issue_flags & IO_URING_F_UNLOCKED)
+- mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static void io_ring_submit_lock(struct io_ring_ctx *ctx, unsigned issue_flags)
+-{
+- /*
+- * "Normal" inline submissions always hold the uring_lock, since we
+- * grab it from the system call. Same is true for the SQPOLL offload.
+-	 * The only exception is when we've detached the request and issue it
+-	 * from an async worker thread; grab the lock in that case.
+- */
+- if (issue_flags & IO_URING_F_UNLOCKED)
+- mutex_lock(&ctx->uring_lock);
+- lockdep_assert_held(&ctx->uring_lock);
+-}
+-
+-static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
+-{
+- if (!*locked) {
+- mutex_lock(&ctx->uring_lock);
+- *locked = true;
+- }
+-}
+-
+-#define io_for_each_link(pos, head) \
+- for (pos = (head); pos; pos = pos->link)
+-
+-/*
+- * Shamelessly stolen from the mm implementation of page reference checking,
+- * see commit f958d7b528b1 for details.
+- */
+-#define req_ref_zero_or_close_to_overflow(req) \
+- ((unsigned int) atomic_read(&(req->refs)) + 127u <= 127u)
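
A few hand-worked values for the check above, assuming 32-bit atomics; it
trips both on zero and on small negative values that have wrapped around:

    /* refs ==  0: (0u + 127u) == 127 <= 127 -> true  (ref already dropped to zero) */
    /* refs ==  1: (1u + 127u) == 128 <= 127 -> false (healthy reference)           */
    /* refs == -1: wraps to 126       <= 127 -> true  (underflow caught)            */
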
+-
+-static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
+-{
+- WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
+- return atomic_inc_not_zero(&req->refs);
+-}
+-
+-static inline bool req_ref_put_and_test(struct io_kiocb *req)
+-{
+- if (likely(!(req->flags & REQ_F_REFCOUNT)))
+- return true;
+-
+- WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
+- return atomic_dec_and_test(&req->refs);
+-}
+-
+-static inline void req_ref_get(struct io_kiocb *req)
+-{
+- WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
+- WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
+- atomic_inc(&req->refs);
+-}
+-
+-static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
+-{
+- if (!wq_list_empty(&ctx->submit_state.compl_reqs))
+- __io_submit_flush_completions(ctx);
+-}
+-
+-static inline void __io_req_set_refcount(struct io_kiocb *req, int nr)
+-{
+- if (!(req->flags & REQ_F_REFCOUNT)) {
+- req->flags |= REQ_F_REFCOUNT;
+- atomic_set(&req->refs, nr);
+- }
+-}
+-
+-static inline void io_req_set_refcount(struct io_kiocb *req)
+-{
+- __io_req_set_refcount(req, 1);
+-}
+-
+-#define IO_RSRC_REF_BATCH 100
+-
+-static void io_rsrc_put_node(struct io_rsrc_node *node, int nr)
+-{
+- percpu_ref_put_many(&node->refs, nr);
+-}
+-
+-static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
+- struct io_ring_ctx *ctx)
+- __must_hold(&ctx->uring_lock)
+-{
+- struct io_rsrc_node *node = req->rsrc_node;
+-
+- if (node) {
+- if (node == ctx->rsrc_node)
+- ctx->rsrc_cached_refs++;
+- else
+- io_rsrc_put_node(node, 1);
+- }
+-}
+-
+-static inline void io_req_put_rsrc(struct io_kiocb *req)
+-{
+- if (req->rsrc_node)
+- io_rsrc_put_node(req->rsrc_node, 1);
+-}
+-
+-static __cold void io_rsrc_refs_drop(struct io_ring_ctx *ctx)
+- __must_hold(&ctx->uring_lock)
+-{
+- if (ctx->rsrc_cached_refs) {
+- io_rsrc_put_node(ctx->rsrc_node, ctx->rsrc_cached_refs);
+- ctx->rsrc_cached_refs = 0;
+- }
+-}
+-
+-static void io_rsrc_refs_refill(struct io_ring_ctx *ctx)
+- __must_hold(&ctx->uring_lock)
+-{
+- ctx->rsrc_cached_refs += IO_RSRC_REF_BATCH;
+- percpu_ref_get_many(&ctx->rsrc_node->refs, IO_RSRC_REF_BATCH);
+-}
+-
+-static inline void io_req_set_rsrc_node(struct io_kiocb *req,
+- struct io_ring_ctx *ctx,
+- unsigned int issue_flags)
+-{
+- if (!req->rsrc_node) {
+- req->rsrc_node = ctx->rsrc_node;
+-
+- if (!(issue_flags & IO_URING_F_UNLOCKED)) {
+- lockdep_assert_held(&ctx->uring_lock);
+- ctx->rsrc_cached_refs--;
+- if (unlikely(ctx->rsrc_cached_refs < 0))
+- io_rsrc_refs_refill(ctx);
+- } else {
+- percpu_ref_get(&req->rsrc_node->refs);
+- }
+- }
+-}
+-
+-static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
+-{
+- if (req->flags & REQ_F_BUFFER_RING) {
+- if (req->buf_list)
+- req->buf_list->head++;
+- req->flags &= ~REQ_F_BUFFER_RING;
+- } else {
+- list_add(&req->kbuf->list, list);
+- req->flags &= ~REQ_F_BUFFER_SELECTED;
+- }
+-
+- return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
+-}
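
On the application side the buffer ID travels back in cqe->flags; a decoding
sketch (IORING_CQE_BUFFER_SHIFT is 16 in the uapi header):

    if (cqe->flags & IORING_CQE_F_BUFFER) {
            unsigned int buf_id = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
            /* hand buffer buf_id back to the pool once the data is consumed */
    }
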
+-
+-static inline unsigned int io_put_kbuf_comp(struct io_kiocb *req)
+-{
+- lockdep_assert_held(&req->ctx->completion_lock);
+-
+- if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
+- return 0;
+- return __io_put_kbuf(req, &req->ctx->io_buffers_comp);
+-}
+-
+-static inline unsigned int io_put_kbuf(struct io_kiocb *req,
+- unsigned issue_flags)
+-{
+- unsigned int cflags;
+-
+- if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
+- return 0;
+-
+- /*
+- * We can add this buffer back to two lists:
+- *
+- * 1) The io_buffers_cache list. This one is protected by the
+- * ctx->uring_lock. If we already hold this lock, add back to this
+- * list as we can grab it from issue as well.
+- * 2) The io_buffers_comp list. This one is protected by the
+- * ctx->completion_lock.
+- *
+- * We migrate buffers from the comp_list to the issue cache list
+- * when we need one.
+- */
+- if (req->flags & REQ_F_BUFFER_RING) {
+- /* no buffers to recycle for this case */
+- cflags = __io_put_kbuf(req, NULL);
+- } else if (issue_flags & IO_URING_F_UNLOCKED) {
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- spin_lock(&ctx->completion_lock);
+- cflags = __io_put_kbuf(req, &ctx->io_buffers_comp);
+- spin_unlock(&ctx->completion_lock);
+- } else {
+- lockdep_assert_held(&req->ctx->uring_lock);
+-
+- cflags = __io_put_kbuf(req, &req->ctx->io_buffers_cache);
+- }
+-
+- return cflags;
+-}
+-
+-static struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx *ctx,
+- unsigned int bgid)
+-{
+- if (ctx->io_bl && bgid < BGID_ARRAY)
+- return &ctx->io_bl[bgid];
+-
+- return xa_load(&ctx->io_bl_xa, bgid);
+-}
+-
+-static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_buffer_list *bl;
+- struct io_buffer *buf;
+-
+- if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
+- return;
+- /*
+- * For legacy provided buffer mode, don't recycle if we already did
+- * IO to this buffer. For ring-mapped provided buffer mode, we should
+- * increment ring->head to explicitly monopolize the buffer to avoid
+- * multiple use.
+- */
+- if ((req->flags & REQ_F_BUFFER_SELECTED) &&
+- (req->flags & REQ_F_PARTIAL_IO))
+- return;
+-
+- /*
+- * READV uses fields in `struct io_rw` (len/addr) to stash the selected
+-	 * buffer data. However, if that buffer is recycled, the original
+-	 * request data stored in addr is lost, so forbid recycling for now.
+- */
+- if (req->opcode == IORING_OP_READV)
+- return;
+-
+- /*
+- * We don't need to recycle for REQ_F_BUFFER_RING, we can just clear
+- * the flag and hence ensure that bl->head doesn't get incremented.
+- * If the tail has already been incremented, hang on to it.
+- */
+- if (req->flags & REQ_F_BUFFER_RING) {
+- if (req->buf_list) {
+- if (req->flags & REQ_F_PARTIAL_IO) {
+- req->buf_list->head++;
+- req->buf_list = NULL;
+- } else {
+- req->buf_index = req->buf_list->bgid;
+- req->flags &= ~REQ_F_BUFFER_RING;
+- }
+- }
+- return;
+- }
+-
+- io_ring_submit_lock(ctx, issue_flags);
+-
+- buf = req->kbuf;
+- bl = io_buffer_get_list(ctx, buf->bgid);
+- list_add(&buf->list, &bl->buf_list);
+- req->flags &= ~REQ_F_BUFFER_SELECTED;
+- req->buf_index = buf->bgid;
+-
+- io_ring_submit_unlock(ctx, issue_flags);
+-}
+-
+-static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
+- bool cancel_all)
+- __must_hold(&req->ctx->timeout_lock)
+-{
+- struct io_kiocb *req;
+-
+- if (task && head->task != task)
+- return false;
+- if (cancel_all)
+- return true;
+-
+- io_for_each_link(req, head) {
+- if (req->flags & REQ_F_INFLIGHT)
+- return true;
+- }
+- return false;
+-}
+-
+-static bool io_match_linked(struct io_kiocb *head)
+-{
+- struct io_kiocb *req;
+-
+- io_for_each_link(req, head) {
+- if (req->flags & REQ_F_INFLIGHT)
+- return true;
+- }
+- return false;
+-}
+-
+-/*
+- * As io_match_task() but protected against racing with linked timeouts.
+- * The caller must not hold timeout_lock.
+- */
+-static bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
+- bool cancel_all)
+-{
+- bool matched;
+-
+- if (task && head->task != task)
+- return false;
+- if (cancel_all)
+- return true;
+-
+- if (head->flags & REQ_F_LINK_TIMEOUT) {
+- struct io_ring_ctx *ctx = head->ctx;
+-
+- /* protect against races with linked timeouts */
+- spin_lock_irq(&ctx->timeout_lock);
+- matched = io_match_linked(head);
+- spin_unlock_irq(&ctx->timeout_lock);
+- } else {
+- matched = io_match_linked(head);
+- }
+- return matched;
+-}
+-
+-static inline bool req_has_async_data(struct io_kiocb *req)
+-{
+- return req->flags & REQ_F_ASYNC_DATA;
+-}
+-
+-static inline void req_set_fail(struct io_kiocb *req)
+-{
+- req->flags |= REQ_F_FAIL;
+- if (req->flags & REQ_F_CQE_SKIP) {
+- req->flags &= ~REQ_F_CQE_SKIP;
+- req->flags |= REQ_F_SKIP_LINK_CQES;
+- }
+-}
+-
+-static inline void req_fail_link_node(struct io_kiocb *req, int res)
+-{
+- req_set_fail(req);
+- req->cqe.res = res;
+-}
+-
+-static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
+-{
+- wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
+-}
+-
+-static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
+-{
+- struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
+-
+- complete(&ctx->ref_comp);
+-}
+-
+-static inline bool io_is_timeout_noseq(struct io_kiocb *req)
+-{
+- return !req->timeout.off;
+-}
+-
+-static __cold void io_fallback_req_func(struct work_struct *work)
+-{
+- struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
+- fallback_work.work);
+- struct llist_node *node = llist_del_all(&ctx->fallback_llist);
+- struct io_kiocb *req, *tmp;
+- bool locked = false;
+-
+- percpu_ref_get(&ctx->refs);
+- llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
+- req->io_task_work.func(req, &locked);
+-
+- if (locked) {
+- io_submit_flush_completions(ctx);
+- mutex_unlock(&ctx->uring_lock);
+- }
+- percpu_ref_put(&ctx->refs);
+-}
+-
+-static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+-{
+- struct io_ring_ctx *ctx;
+- int hash_bits;
+-
+- ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+- if (!ctx)
+- return NULL;
+-
+- xa_init(&ctx->io_bl_xa);
+-
+- /*
+- * Use 5 bits less than the max cq entries, that should give us around
+- * 32 entries per hash list if totally full and uniformly spread.
+- */
+- hash_bits = ilog2(p->cq_entries);
+- hash_bits -= 5;
+- if (hash_bits <= 0)
+- hash_bits = 1;
+- ctx->cancel_hash_bits = hash_bits;
+- ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
+- GFP_KERNEL);
+- if (!ctx->cancel_hash)
+- goto err;
+- __hash_init(ctx->cancel_hash, 1U << hash_bits);
+-
+- ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
+- if (!ctx->dummy_ubuf)
+- goto err;
+-	/* set an invalid range, so io_import_fixed() fails on it */
+- ctx->dummy_ubuf->ubuf = -1UL;
+-
+- if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
+- PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+- goto err;
+-
+- ctx->flags = p->flags;
+- init_waitqueue_head(&ctx->sqo_sq_wait);
+- INIT_LIST_HEAD(&ctx->sqd_list);
+- INIT_LIST_HEAD(&ctx->cq_overflow_list);
+- INIT_LIST_HEAD(&ctx->io_buffers_cache);
+- INIT_LIST_HEAD(&ctx->apoll_cache);
+- init_completion(&ctx->ref_comp);
+- xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
+- mutex_init(&ctx->uring_lock);
+- init_waitqueue_head(&ctx->cq_wait);
+- spin_lock_init(&ctx->completion_lock);
+- spin_lock_init(&ctx->timeout_lock);
+- INIT_WQ_LIST(&ctx->iopoll_list);
+- INIT_LIST_HEAD(&ctx->io_buffers_pages);
+- INIT_LIST_HEAD(&ctx->io_buffers_comp);
+- INIT_LIST_HEAD(&ctx->defer_list);
+- INIT_LIST_HEAD(&ctx->timeout_list);
+- INIT_LIST_HEAD(&ctx->ltimeout_list);
+- spin_lock_init(&ctx->rsrc_ref_lock);
+- INIT_LIST_HEAD(&ctx->rsrc_ref_list);
+- INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
+- init_llist_head(&ctx->rsrc_put_llist);
+- INIT_LIST_HEAD(&ctx->tctx_list);
+- ctx->submit_state.free_list.next = NULL;
+- INIT_WQ_LIST(&ctx->locked_free_list);
+- INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
+- INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
+- return ctx;
+-err:
+- kfree(ctx->dummy_ubuf);
+- kfree(ctx->cancel_hash);
+- kfree(ctx->io_bl);
+- xa_destroy(&ctx->io_bl_xa);
+- kfree(ctx);
+- return NULL;
+-}
+-
+-static void io_account_cq_overflow(struct io_ring_ctx *ctx)
+-{
+- struct io_rings *r = ctx->rings;
+-
+- WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
+- ctx->cq_extra--;
+-}
+-
+-static bool req_need_defer(struct io_kiocb *req, u32 seq)
+-{
+- if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
+- }
+-
+- return false;
+-}
+-
+-static inline bool io_req_ffs_set(struct io_kiocb *req)
+-{
+- return req->flags & REQ_F_FIXED_FILE;
+-}
+-
+-static inline void io_req_track_inflight(struct io_kiocb *req)
+-{
+- if (!(req->flags & REQ_F_INFLIGHT)) {
+- req->flags |= REQ_F_INFLIGHT;
+- atomic_inc(&req->task->io_uring->inflight_tracked);
+- }
+-}
+-
+-static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
+-{
+- if (WARN_ON_ONCE(!req->link))
+- return NULL;
+-
+- req->flags &= ~REQ_F_ARM_LTIMEOUT;
+- req->flags |= REQ_F_LINK_TIMEOUT;
+-
+- /* linked timeouts should have two refs once prep'ed */
+- io_req_set_refcount(req);
+- __io_req_set_refcount(req->link, 2);
+- return req->link;
+-}
+-
+-static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
+-{
+- if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
+- return NULL;
+- return __io_prep_linked_timeout(req);
+-}
+-
+-static noinline void __io_arm_ltimeout(struct io_kiocb *req)
+-{
+- io_queue_linked_timeout(__io_prep_linked_timeout(req));
+-}
+-
+-static inline void io_arm_ltimeout(struct io_kiocb *req)
+-{
+- if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
+- __io_arm_ltimeout(req);
+-}
+-
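+-/*
+- * Prepare ->work for io-wq hand-off: capture credentials, and decide
+- * whether the work must be serialized per inode (hashed) or may run on
+- * unbound workers.
+- */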
+-static void io_prep_async_work(struct io_kiocb *req)
+-{
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- if (!(req->flags & REQ_F_CREDS)) {
+- req->flags |= REQ_F_CREDS;
+- req->creds = get_current_cred();
+- }
+-
+- req->work.list.next = NULL;
+- req->work.flags = 0;
+- req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
+- if (req->flags & REQ_F_FORCE_ASYNC)
+- req->work.flags |= IO_WQ_WORK_CONCURRENT;
+-
+- if (req->flags & REQ_F_ISREG) {
+- if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
+- io_wq_hash_work(&req->work, file_inode(req->file));
+- } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
+- if (def->unbound_nonreg_file)
+- req->work.flags |= IO_WQ_WORK_UNBOUND;
+- }
+-}
+-
+-static void io_prep_async_link(struct io_kiocb *req)
+-{
+- struct io_kiocb *cur;
+-
+- if (req->flags & REQ_F_LINK_TIMEOUT) {
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- io_for_each_link(cur, req)
+- io_prep_async_work(cur);
+- spin_unlock_irq(&ctx->timeout_lock);
+- } else {
+- io_for_each_link(cur, req)
+- io_prep_async_work(cur);
+- }
+-}
+-
+-static inline void io_req_add_compl_list(struct io_kiocb *req)
+-{
+- struct io_submit_state *state = &req->ctx->submit_state;
+-
+- if (!(req->flags & REQ_F_CQE_SKIP))
+- state->flush_cqes = true;
+- wq_list_add_tail(&req->comp_list, &state->compl_reqs);
+-}
+-
+-static void io_queue_iowq(struct io_kiocb *req, bool *dont_use)
+-{
+- struct io_kiocb *link = io_prep_linked_timeout(req);
+- struct io_uring_task *tctx = req->task->io_uring;
+-
+- BUG_ON(!tctx);
+- BUG_ON(!tctx->io_wq);
+-
+- /* init ->work of the whole link before punting */
+- io_prep_async_link(req);
+-
+- /*
+- * Not expected to happen, but if we do have a bug where this _can_
+- * happen, catch it here and ensure the request is marked as
+- * canceled. That will make io-wq go through the usual work cancel
+- * procedure rather than attempt to run this request (or create a new
+- * worker for it).
+- */
+- if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
+- req->work.flags |= IO_WQ_WORK_CANCEL;
+-
+- trace_io_uring_queue_async_work(req->ctx, req, req->cqe.user_data,
+- req->opcode, req->flags, &req->work,
+- io_wq_is_hashed(&req->work));
+- io_wq_enqueue(tctx->io_wq, &req->work);
+- if (link)
+- io_queue_linked_timeout(link);
+-}
+-
+-static void io_kill_timeout(struct io_kiocb *req, int status)
+- __must_hold(&req->ctx->completion_lock)
+- __must_hold(&req->ctx->timeout_lock)
+-{
+- struct io_timeout_data *io = req->async_data;
+-
+- if (hrtimer_try_to_cancel(&io->timer) != -1) {
+- if (status)
+- req_set_fail(req);
+- atomic_set(&req->ctx->cq_timeouts,
+- atomic_read(&req->ctx->cq_timeouts) + 1);
+- list_del_init(&req->timeout.list);
+- io_req_tw_post_queue(req, status, 0);
+- }
+-}
+-
+-static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
+-{
+- while (!list_empty(&ctx->defer_list)) {
+- struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+- struct io_defer_entry, list);
+-
+- if (req_need_defer(de->req, de->seq))
+- break;
+- list_del_init(&de->list);
+- io_req_task_queue(de->req);
+- kfree(de);
+- }
+-}
+-
+-static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
+- __must_hold(&ctx->completion_lock)
+-{
+- u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+- struct io_kiocb *req, *tmp;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+- u32 events_needed, events_got;
+-
+- if (io_is_timeout_noseq(req))
+- break;
+-
+- /*
+- * Since seq can easily wrap around over time, subtract
+- * the last seq at which timeouts were flushed before comparing.
+- * Assuming not more than 2^31-1 events have happened since,
+- * these subtractions won't have wrapped, so we can check if
+- * target is in [last_seq, current_seq] by comparing the two.
+- */
+- events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
+- events_got = seq - ctx->cq_last_tm_flush;
+- if (events_got < events_needed)
+- break;
+-
+- io_kill_timeout(req, 0);
+- }
+- ctx->cq_last_tm_flush = seq;
+- spin_unlock_irq(&ctx->timeout_lock);
+-}
+-
+-static inline void io_commit_cqring(struct io_ring_ctx *ctx)
+-{
+- /* order cqe stores with ring update */
+- smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
+-}
+-
+-static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
+-{
+- if (ctx->off_timeout_used || ctx->drain_active) {
+- spin_lock(&ctx->completion_lock);
+- if (ctx->off_timeout_used)
+- io_flush_timeouts(ctx);
+- if (ctx->drain_active)
+- io_queue_deferred(ctx);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- }
+- if (ctx->has_evfd)
+- io_eventfd_signal(ctx);
+-}
+-
+-static inline bool io_sqring_full(struct io_ring_ctx *ctx)
+-{
+- struct io_rings *r = ctx->rings;
+-
+- return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
+-}
+-
+-static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
+-{
+- return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
+-}
+-
+-/*
+- * writes to the cq entry need to come after reading head; the
+- * control dependency is enough as we're using WRITE_ONCE to
+- * fill the cq entry
+- */
+-static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
+-{
+- struct io_rings *rings = ctx->rings;
+- unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
+- unsigned int shift = 0;
+- unsigned int free, queued, len;
+-
+- if (ctx->flags & IORING_SETUP_CQE32)
+- shift = 1;
+-
+-	/* userspace may cheat by modifying the tail, be safe and do min */
+- queued = min(__io_cqring_events(ctx), ctx->cq_entries);
+- free = ctx->cq_entries - queued;
+- /* we need a contiguous range, limit based on the current array offset */
+- len = min(free, ctx->cq_entries - off);
+- if (!len)
+- return NULL;
+-
+- ctx->cached_cq_tail++;
+- ctx->cqe_cached = &rings->cqes[off];
+- ctx->cqe_sentinel = ctx->cqe_cached + len;
+- ctx->cqe_cached++;
+- return &rings->cqes[off << shift];
+-}
+-
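+-/*
+- * Fast path: hand out CQEs from the contiguous range that __io_get_cqe()
+- * cached in [cqe_cached, cqe_sentinel). Only when that range is exhausted
+- * do we recompute a fresh one.
+- */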
+-static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
+-{
+- if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
+- struct io_uring_cqe *cqe = ctx->cqe_cached;
+-
+- if (ctx->flags & IORING_SETUP_CQE32) {
+- unsigned int off = ctx->cqe_cached - ctx->rings->cqes;
+-
+- cqe += off;
+- }
+-
+- ctx->cached_cq_tail++;
+- ctx->cqe_cached++;
+- return cqe;
+- }
+-
+- return __io_get_cqe(ctx);
+-}
+-
+-static void io_eventfd_signal(struct io_ring_ctx *ctx)
+-{
+- struct io_ev_fd *ev_fd;
+-
+- rcu_read_lock();
+- /*
+-	 * rcu_dereference ctx->io_ev_fd once and use it both for the check
+-	 * and for eventfd_signal().
+- */
+- ev_fd = rcu_dereference(ctx->io_ev_fd);
+-
+- /*
+-	 * Check again if ev_fd exists in case an io_eventfd_unregister call
+- * completed between the NULL check of ctx->io_ev_fd at the start of
+- * the function and rcu_read_lock.
+- */
+- if (unlikely(!ev_fd))
+- goto out;
+- if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
+- goto out;
+-
+- if (!ev_fd->eventfd_async || io_wq_current_is_worker())
+- eventfd_signal(ev_fd->cq_ev_fd, 1);
+-out:
+- rcu_read_unlock();
+-}
+-
+-static inline void io_cqring_wake(struct io_ring_ctx *ctx)
+-{
+- /*
+- * wake_up_all() may seem excessive, but io_wake_function() and
+- * io_should_wake() handle the termination of the loop and only
+- * wake as many waiters as we need to.
+- */
+- if (wq_has_sleeper(&ctx->cq_wait))
+- wake_up_all(&ctx->cq_wait);
+-}
+-
+-/*
+- * This should only get called when at least one event has been posted.
+- * Some applications rely on the eventfd notification count only changing
+- * IFF a new CQE has been added to the CQ ring. There's no dependency on
+- * a 1:1 relationship between how many times this function is called (and
+- * hence the eventfd count) and the number of CQEs posted to the CQ ring.
+- */
+-static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+-{
+- if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
+- ctx->has_evfd))
+- __io_commit_cqring_flush(ctx);
+-
+- io_cqring_wake(ctx);
+-}
+-
+-static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
+-{
+- if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
+- ctx->has_evfd))
+- __io_commit_cqring_flush(ctx);
+-
+- if (ctx->flags & IORING_SETUP_SQPOLL)
+- io_cqring_wake(ctx);
+-}
+-
+-/* Returns true if there are no backlogged entries after the flush */
+-static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
+-{
+- bool all_flushed, posted;
+- size_t cqe_size = sizeof(struct io_uring_cqe);
+-
+- if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
+- return false;
+-
+- if (ctx->flags & IORING_SETUP_CQE32)
+- cqe_size <<= 1;
+-
+- posted = false;
+- spin_lock(&ctx->completion_lock);
+- while (!list_empty(&ctx->cq_overflow_list)) {
+- struct io_uring_cqe *cqe = io_get_cqe(ctx);
+- struct io_overflow_cqe *ocqe;
+-
+- if (!cqe && !force)
+- break;
+- ocqe = list_first_entry(&ctx->cq_overflow_list,
+- struct io_overflow_cqe, list);
+- if (cqe)
+- memcpy(cqe, &ocqe->cqe, cqe_size);
+- else
+- io_account_cq_overflow(ctx);
+-
+- posted = true;
+- list_del(&ocqe->list);
+- kfree(ocqe);
+- }
+-
+- all_flushed = list_empty(&ctx->cq_overflow_list);
+- if (all_flushed) {
+- clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
+- atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
+- }
+-
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- if (posted)
+- io_cqring_ev_posted(ctx);
+- return all_flushed;
+-}
+-
+-static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
+-{
+- bool ret = true;
+-
+- if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
+- /* iopoll syncs against uring_lock, not completion_lock */
+- if (ctx->flags & IORING_SETUP_IOPOLL)
+- mutex_lock(&ctx->uring_lock);
+- ret = __io_cqring_overflow_flush(ctx, false);
+- if (ctx->flags & IORING_SETUP_IOPOLL)
+- mutex_unlock(&ctx->uring_lock);
+- }
+-
+- return ret;
+-}
+-
+-static void __io_put_task(struct task_struct *task, int nr)
+-{
+- struct io_uring_task *tctx = task->io_uring;
+-
+- percpu_counter_sub(&tctx->inflight, nr);
+- if (unlikely(atomic_read(&tctx->in_idle)))
+- wake_up(&tctx->wait);
+- put_task_struct_many(task, nr);
+-}
+-
+-/* must be called somewhat shortly after putting a request */
+-static inline void io_put_task(struct task_struct *task, int nr)
+-{
+- if (likely(task == current))
+- task->io_uring->cached_refs += nr;
+- else
+- __io_put_task(task, nr);
+-}
+-
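+-/*
+- * tctx->cached_refs is allowed to go negative in io_get_task_refs(); when
+- * that happens, take enough task and inflight references in one go to
+- * restore the cache to IO_TCTX_REFS_CACHE_NR.
+- */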
+-static void io_task_refs_refill(struct io_uring_task *tctx)
+-{
+- unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;
+-
+- percpu_counter_add(&tctx->inflight, refill);
+-	refcount_add(refill, &current->usage);
+- tctx->cached_refs += refill;
+-}
+-
+-static inline void io_get_task_refs(int nr)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+-
+- tctx->cached_refs -= nr;
+- if (unlikely(tctx->cached_refs < 0))
+- io_task_refs_refill(tctx);
+-}
+-
+-static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
+-{
+- struct io_uring_task *tctx = task->io_uring;
+- unsigned int refs = tctx->cached_refs;
+-
+- if (refs) {
+- tctx->cached_refs = 0;
+- percpu_counter_sub(&tctx->inflight, refs);
+- put_task_struct_many(task, refs);
+- }
+-}
+-
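+-/*
+- * The CQ ring is full: stash the completion in a kmalloc'ed overflow
+- * entry (atomically, as we may hold completion_lock) and flag the ring
+- * so that userspace and the flush path know there are backlogged CQEs.
+- */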
+-static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
+- s32 res, u32 cflags, u64 extra1,
+- u64 extra2)
+-{
+- struct io_overflow_cqe *ocqe;
+- size_t ocq_size = sizeof(struct io_overflow_cqe);
+- bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
+-
+- if (is_cqe32)
+- ocq_size += sizeof(struct io_uring_cqe);
+-
+- ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
+- trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
+- if (!ocqe) {
+- /*
+- * If we're in ring overflow flush mode, or in task cancel mode,
+- * or cannot allocate an overflow entry, then we need to drop it
+- * on the floor.
+- */
+- io_account_cq_overflow(ctx);
+- set_bit(IO_CHECK_CQ_DROPPED_BIT, &ctx->check_cq);
+- return false;
+- }
+- if (list_empty(&ctx->cq_overflow_list)) {
+- set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
+- atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
+-
+- }
+- ocqe->cqe.user_data = user_data;
+- ocqe->cqe.res = res;
+- ocqe->cqe.flags = cflags;
+- if (is_cqe32) {
+- ocqe->cqe.big_cqe[0] = extra1;
+- ocqe->cqe.big_cqe[1] = extra2;
+- }
+- list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
+- return true;
+-}
+-
+-static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
+- struct io_kiocb *req)
+-{
+- struct io_uring_cqe *cqe;
+-
+- if (!(ctx->flags & IORING_SETUP_CQE32)) {
+- trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+- req->cqe.res, req->cqe.flags, 0, 0);
+-
+- /*
+- * If we can't get a cq entry, userspace overflowed the
+- * submission (by quite a lot). Increment the overflow count in
+- * the ring.
+- */
+- cqe = io_get_cqe(ctx);
+- if (likely(cqe)) {
+- memcpy(cqe, &req->cqe, sizeof(*cqe));
+- return true;
+- }
+-
+- return io_cqring_event_overflow(ctx, req->cqe.user_data,
+- req->cqe.res, req->cqe.flags,
+- 0, 0);
+- } else {
+- u64 extra1 = 0, extra2 = 0;
+-
+- if (req->flags & REQ_F_CQE32_INIT) {
+- extra1 = req->extra1;
+- extra2 = req->extra2;
+- }
+-
+- trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+- req->cqe.res, req->cqe.flags, extra1, extra2);
+-
+- /*
+- * If we can't get a cq entry, userspace overflowed the
+- * submission (by quite a lot). Increment the overflow count in
+- * the ring.
+- */
+- cqe = io_get_cqe(ctx);
+- if (likely(cqe)) {
+- memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
+- WRITE_ONCE(cqe->big_cqe[0], extra1);
+- WRITE_ONCE(cqe->big_cqe[1], extra2);
+- return true;
+- }
+-
+- return io_cqring_event_overflow(ctx, req->cqe.user_data,
+- req->cqe.res, req->cqe.flags,
+- extra1, extra2);
+- }
+-}
+-
+-static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
+- s32 res, u32 cflags)
+-{
+- struct io_uring_cqe *cqe;
+-
+- ctx->cq_extra++;
+- trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
+-
+- /*
+- * If we can't get a cq entry, userspace overflowed the
+- * submission (by quite a lot). Increment the overflow count in
+- * the ring.
+- */
+- cqe = io_get_cqe(ctx);
+- if (likely(cqe)) {
+- WRITE_ONCE(cqe->user_data, user_data);
+- WRITE_ONCE(cqe->res, res);
+- WRITE_ONCE(cqe->flags, cflags);
+-
+- if (ctx->flags & IORING_SETUP_CQE32) {
+- WRITE_ONCE(cqe->big_cqe[0], 0);
+- WRITE_ONCE(cqe->big_cqe[1], 0);
+- }
+- return true;
+- }
+- return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
+-}
+-
+-static void __io_req_complete_put(struct io_kiocb *req)
+-{
+- /*
+- * If we're the last reference to this request, add to our locked
+- * free_list cache.
+- */
+- if (req_ref_put_and_test(req)) {
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- if (req->flags & IO_REQ_LINK_FLAGS) {
+- if (req->flags & IO_DISARM_MASK)
+- io_disarm_next(req);
+- if (req->link) {
+- io_req_task_queue(req->link);
+- req->link = NULL;
+- }
+- }
+- io_req_put_rsrc(req);
+- /*
+- * Selected buffer deallocation in io_clean_op() assumes that
+- * we don't hold ->completion_lock. Clean them here to avoid
+- * deadlocks.
+- */
+- io_put_kbuf_comp(req);
+- io_dismantle_req(req);
+- io_put_task(req->task, 1);
+- wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
+- ctx->locked_free_nr++;
+- }
+-}
+-
+-static void __io_req_complete_post(struct io_kiocb *req, s32 res,
+- u32 cflags)
+-{
+- if (!(req->flags & REQ_F_CQE_SKIP)) {
+- req->cqe.res = res;
+- req->cqe.flags = cflags;
+- __io_fill_cqe_req(req->ctx, req);
+- }
+- __io_req_complete_put(req);
+-}
+-
+-static void io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- spin_lock(&ctx->completion_lock);
+- __io_req_complete_post(req, res, cflags);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- io_cqring_ev_posted(ctx);
+-}
+-
+-static inline void io_req_complete_state(struct io_kiocb *req, s32 res,
+- u32 cflags)
+-{
+- req->cqe.res = res;
+- req->cqe.flags = cflags;
+- req->flags |= REQ_F_COMPLETE_INLINE;
+-}
+-
+-static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
+- s32 res, u32 cflags)
+-{
+- if (issue_flags & IO_URING_F_COMPLETE_DEFER)
+- io_req_complete_state(req, res, cflags);
+- else
+- io_req_complete_post(req, res, cflags);
+-}
+-
+-static inline void io_req_complete(struct io_kiocb *req, s32 res)
+-{
+- if (res < 0)
+- req_set_fail(req);
+- __io_req_complete(req, 0, res, 0);
+-}
+-
+-static void io_req_complete_failed(struct io_kiocb *req, s32 res)
+-{
+- req_set_fail(req);
+- io_req_complete_post(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
+-}
+-
+-/*
+- * Don't initialise the fields below on every allocation, but do that in
+- * advance and keep them valid across allocations.
+- */
+-static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
+-{
+- req->ctx = ctx;
+- req->link = NULL;
+- req->async_data = NULL;
+- /* not necessary, but safer to zero */
+- req->cqe.res = 0;
+-}
+-
+-static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
+- struct io_submit_state *state)
+-{
+- spin_lock(&ctx->completion_lock);
+- wq_list_splice(&ctx->locked_free_list, &state->free_list);
+- ctx->locked_free_nr = 0;
+- spin_unlock(&ctx->completion_lock);
+-}
+-
+-static inline bool io_req_cache_empty(struct io_ring_ctx *ctx)
+-{
+- return !ctx->submit_state.free_list.next;
+-}
+-
+-/*
+- * A request might get retired back into the request caches even before opcode
+- * handlers and io_issue_sqe() are done with it, e.g. inline completion path.
+- * Because of that, io_alloc_req() should be called only under ->uring_lock
+- * and with extra caution to not get a request that is still worked on.
+- */
+-static __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
+- __must_hold(&ctx->uring_lock)
+-{
+- gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+- void *reqs[IO_REQ_ALLOC_BATCH];
+- int ret, i;
+-
+- /*
+- * If we have more than a batch's worth of requests in our IRQ side
+- * locked cache, grab the lock and move them over to our submission
+- * side cache.
+- */
+- if (data_race(ctx->locked_free_nr) > IO_COMPL_BATCH) {
+- io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
+- if (!io_req_cache_empty(ctx))
+- return true;
+- }
+-
+- ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);
+-
+- /*
+- * Bulk alloc is all-or-nothing. If we fail to get a batch,
+- * retry single alloc to be on the safe side.
+- */
+- if (unlikely(ret <= 0)) {
+- reqs[0] = kmem_cache_alloc(req_cachep, gfp);
+- if (!reqs[0])
+- return false;
+- ret = 1;
+- }
+-
+- percpu_ref_get_many(&ctx->refs, ret);
+- for (i = 0; i < ret; i++) {
+- struct io_kiocb *req = reqs[i];
+-
+- io_preinit_req(req, ctx);
+- io_req_add_to_cache(req, ctx);
+- }
+- return true;
+-}
+-
+-static inline bool io_alloc_req_refill(struct io_ring_ctx *ctx)
+-{
+- if (unlikely(io_req_cache_empty(ctx)))
+- return __io_alloc_req_refill(ctx);
+- return true;
+-}
+-
+-static inline struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
+-{
+- struct io_wq_work_node *node;
+-
+- node = wq_stack_extract(&ctx->submit_state.free_list);
+- return container_of(node, struct io_kiocb, comp_list);
+-}
+-
+-static inline void io_put_file(struct file *file)
+-{
+- if (file)
+- fput(file);
+-}
+-
+-static inline void io_dismantle_req(struct io_kiocb *req)
+-{
+- unsigned int flags = req->flags;
+-
+- if (unlikely(flags & IO_REQ_CLEAN_FLAGS))
+- io_clean_op(req);
+- if (!(flags & REQ_F_FIXED_FILE))
+- io_put_file(req->file);
+-}
+-
+-static __cold void io_free_req(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- io_req_put_rsrc(req);
+- io_dismantle_req(req);
+- io_put_task(req->task, 1);
+-
+- spin_lock(&ctx->completion_lock);
+- wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
+- ctx->locked_free_nr++;
+- spin_unlock(&ctx->completion_lock);
+-}
+-
+-static inline void io_remove_next_linked(struct io_kiocb *req)
+-{
+- struct io_kiocb *nxt = req->link;
+-
+- req->link = nxt->link;
+- nxt->link = NULL;
+-}
+-
+-static struct io_kiocb *io_disarm_linked_timeout(struct io_kiocb *req)
+- __must_hold(&req->ctx->completion_lock)
+- __must_hold(&req->ctx->timeout_lock)
+-{
+- struct io_kiocb *link = req->link;
+-
+- if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
+- struct io_timeout_data *io = link->async_data;
+-
+- io_remove_next_linked(req);
+- link->timeout.head = NULL;
+- if (hrtimer_try_to_cancel(&io->timer) != -1) {
+- list_del(&link->timeout.list);
+- return link;
+- }
+- }
+- return NULL;
+-}
+-
+-static void io_fail_links(struct io_kiocb *req)
+- __must_hold(&req->ctx->completion_lock)
+-{
+- struct io_kiocb *nxt, *link = req->link;
+- bool ignore_cqes = req->flags & REQ_F_SKIP_LINK_CQES;
+-
+- req->link = NULL;
+- while (link) {
+- long res = -ECANCELED;
+-
+- if (link->flags & REQ_F_FAIL)
+- res = link->cqe.res;
+-
+- nxt = link->link;
+- link->link = NULL;
+-
+- trace_io_uring_fail_link(req->ctx, req, req->cqe.user_data,
+- req->opcode, link);
+-
+- if (ignore_cqes)
+- link->flags |= REQ_F_CQE_SKIP;
+- else
+- link->flags &= ~REQ_F_CQE_SKIP;
+- __io_req_complete_post(link, res, 0);
+- link = nxt;
+- }
+-}
+-
+-static bool io_disarm_next(struct io_kiocb *req)
+- __must_hold(&req->ctx->completion_lock)
+-{
+- struct io_kiocb *link = NULL;
+- bool posted = false;
+-
+- if (req->flags & REQ_F_ARM_LTIMEOUT) {
+- link = req->link;
+- req->flags &= ~REQ_F_ARM_LTIMEOUT;
+- if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
+- io_remove_next_linked(req);
+- io_req_tw_post_queue(link, -ECANCELED, 0);
+- posted = true;
+- }
+- } else if (req->flags & REQ_F_LINK_TIMEOUT) {
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- link = io_disarm_linked_timeout(req);
+- spin_unlock_irq(&ctx->timeout_lock);
+- if (link) {
+- posted = true;
+- io_req_tw_post_queue(link, -ECANCELED, 0);
+- }
+- }
+- if (unlikely((req->flags & REQ_F_FAIL) &&
+- !(req->flags & REQ_F_HARDLINK))) {
+- posted |= (req->link != NULL);
+- io_fail_links(req);
+- }
+- return posted;
+-}
+-
+-static void __io_req_find_next_prep(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- bool posted;
+-
+- spin_lock(&ctx->completion_lock);
+- posted = io_disarm_next(req);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- if (posted)
+- io_cqring_ev_posted(ctx);
+-}
+-
+-static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
+-{
+- struct io_kiocb *nxt;
+-
+- /*
+- * If LINK is set, we have dependent requests in this chain. If we
+- * didn't fail this request, queue the first one up, moving any other
+- * dependencies to the next request. In case of failure, fail the rest
+- * of the chain.
+- */
+- if (unlikely(req->flags & IO_DISARM_MASK))
+- __io_req_find_next_prep(req);
+- nxt = req->link;
+- req->link = NULL;
+- return nxt;
+-}
+-
+-static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
+-{
+- if (!ctx)
+- return;
+- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+- atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+- if (*locked) {
+- io_submit_flush_completions(ctx);
+- mutex_unlock(&ctx->uring_lock);
+- *locked = false;
+- }
+- percpu_ref_put(&ctx->refs);
+-}
+-
+-static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
+-{
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- io_cqring_ev_posted(ctx);
+-}
+-
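+-/*
+- * Run the priority task_work list. If the ring's uring_lock can't be
+- * trylocked, fall back to completing requests directly under
+- * completion_lock instead of batching them.
+- */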
+-static void handle_prev_tw_list(struct io_wq_work_node *node,
+- struct io_ring_ctx **ctx, bool *uring_locked)
+-{
+- if (*ctx && !*uring_locked)
+- spin_lock(&(*ctx)->completion_lock);
+-
+- do {
+- struct io_wq_work_node *next = node->next;
+- struct io_kiocb *req = container_of(node, struct io_kiocb,
+- io_task_work.node);
+-
+- prefetch(container_of(next, struct io_kiocb, io_task_work.node));
+-
+- if (req->ctx != *ctx) {
+- if (unlikely(!*uring_locked && *ctx))
+- ctx_commit_and_unlock(*ctx);
+-
+- ctx_flush_and_put(*ctx, uring_locked);
+- *ctx = req->ctx;
+- /* if not contended, grab and improve batching */
+- *uring_locked = mutex_trylock(&(*ctx)->uring_lock);
+- percpu_ref_get(&(*ctx)->refs);
+- if (unlikely(!*uring_locked))
+- spin_lock(&(*ctx)->completion_lock);
+- }
+- if (likely(*uring_locked))
+- req->io_task_work.func(req, uring_locked);
+- else
+- __io_req_complete_post(req, req->cqe.res,
+- io_put_kbuf_comp(req));
+- node = next;
+- } while (node);
+-
+- if (unlikely(!*uring_locked))
+- ctx_commit_and_unlock(*ctx);
+-}
+-
+-static void handle_tw_list(struct io_wq_work_node *node,
+- struct io_ring_ctx **ctx, bool *locked)
+-{
+- do {
+- struct io_wq_work_node *next = node->next;
+- struct io_kiocb *req = container_of(node, struct io_kiocb,
+- io_task_work.node);
+-
+- prefetch(container_of(next, struct io_kiocb, io_task_work.node));
+-
+- if (req->ctx != *ctx) {
+- ctx_flush_and_put(*ctx, locked);
+- *ctx = req->ctx;
+- /* if not contended, grab and improve batching */
+- *locked = mutex_trylock(&(*ctx)->uring_lock);
+- percpu_ref_get(&(*ctx)->refs);
+- }
+- req->io_task_work.func(req, locked);
+- node = next;
+- } while (node);
+-}
+-
+-static void tctx_task_work(struct callback_head *cb)
+-{
+- bool uring_locked = false;
+- struct io_ring_ctx *ctx = NULL;
+- struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
+- task_work);
+-
+- while (1) {
+- struct io_wq_work_node *node1, *node2;
+-
+- spin_lock_irq(&tctx->task_lock);
+- node1 = tctx->prio_task_list.first;
+- node2 = tctx->task_list.first;
+- INIT_WQ_LIST(&tctx->task_list);
+- INIT_WQ_LIST(&tctx->prio_task_list);
+- if (!node2 && !node1)
+- tctx->task_running = false;
+- spin_unlock_irq(&tctx->task_lock);
+- if (!node2 && !node1)
+- break;
+-
+- if (node1)
+- handle_prev_tw_list(node1, &ctx, &uring_locked);
+- if (node2)
+- handle_tw_list(node2, &ctx, &uring_locked);
+- cond_resched();
+-
+- if (data_race(!tctx->task_list.first) &&
+- data_race(!tctx->prio_task_list.first) && uring_locked)
+- io_submit_flush_completions(ctx);
+- }
+-
+- ctx_flush_and_put(ctx, &uring_locked);
+-
+- /* relaxed read is enough as only the task itself sets ->in_idle */
+- if (unlikely(atomic_read(&tctx->in_idle)))
+- io_uring_drop_tctx_refs(current);
+-}
+-
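+-/*
+- * Queue task_work for the request's task. Only the first request added
+- * while no work is pending arms task_work_add(); if arming fails because
+- * the task is exiting, drain both lists to the per-ring fallback work
+- * instead.
+- */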
+-static void __io_req_task_work_add(struct io_kiocb *req,
+- struct io_uring_task *tctx,
+- struct io_wq_work_list *list)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_wq_work_node *node;
+- unsigned long flags;
+- bool running;
+-
+- spin_lock_irqsave(&tctx->task_lock, flags);
+- wq_list_add_tail(&req->io_task_work.node, list);
+- running = tctx->task_running;
+- if (!running)
+- tctx->task_running = true;
+- spin_unlock_irqrestore(&tctx->task_lock, flags);
+-
+- /* task_work already pending, we're done */
+- if (running)
+- return;
+-
+- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+- atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+-
+- if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
+- return;
+-
+- spin_lock_irqsave(&tctx->task_lock, flags);
+- tctx->task_running = false;
+- node = wq_list_merge(&tctx->prio_task_list, &tctx->task_list);
+- spin_unlock_irqrestore(&tctx->task_lock, flags);
+-
+- while (node) {
+- req = container_of(node, struct io_kiocb, io_task_work.node);
+- node = node->next;
+- if (llist_add(&req->io_task_work.fallback_node,
+- &req->ctx->fallback_llist))
+- schedule_delayed_work(&req->ctx->fallback_work, 1);
+- }
+-}
+-
+-static void io_req_task_work_add(struct io_kiocb *req)
+-{
+- struct io_uring_task *tctx = req->task->io_uring;
+-
+- __io_req_task_work_add(req, tctx, &tctx->task_list);
+-}
+-
+-static void io_req_task_prio_work_add(struct io_kiocb *req)
+-{
+- struct io_uring_task *tctx = req->task->io_uring;
+-
+- if (req->ctx->flags & IORING_SETUP_SQPOLL)
+- __io_req_task_work_add(req, tctx, &tctx->prio_task_list);
+- else
+- __io_req_task_work_add(req, tctx, &tctx->task_list);
+-}
+-
+-static void io_req_tw_post(struct io_kiocb *req, bool *locked)
+-{
+- io_req_complete_post(req, req->cqe.res, req->cqe.flags);
+-}
+-
+-static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
+-{
+- req->cqe.res = res;
+- req->cqe.flags = cflags;
+- req->io_task_work.func = io_req_tw_post;
+- io_req_task_work_add(req);
+-}
+-
+-static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
+-{
+- /* not needed for normal modes, but SQPOLL depends on it */
+- io_tw_lock(req->ctx, locked);
+- io_req_complete_failed(req, req->cqe.res);
+-}
+-
+-static void io_req_task_submit(struct io_kiocb *req, bool *locked)
+-{
+- io_tw_lock(req->ctx, locked);
+- /* req->task == current here, checking PF_EXITING is safe */
+- if (likely(!(req->task->flags & PF_EXITING)))
+- io_queue_sqe(req);
+- else
+- io_req_complete_failed(req, -EFAULT);
+-}
+-
+-static void io_req_task_queue_fail(struct io_kiocb *req, int ret)
+-{
+- req->cqe.res = ret;
+- req->io_task_work.func = io_req_task_cancel;
+- io_req_task_work_add(req);
+-}
+-
+-static void io_req_task_queue(struct io_kiocb *req)
+-{
+- req->io_task_work.func = io_req_task_submit;
+- io_req_task_work_add(req);
+-}
+-
+-static void io_req_task_queue_reissue(struct io_kiocb *req)
+-{
+- req->io_task_work.func = io_queue_iowq;
+- io_req_task_work_add(req);
+-}
+-
+-static void io_queue_next(struct io_kiocb *req)
+-{
+- struct io_kiocb *nxt = io_req_find_next(req);
+-
+- if (nxt)
+- io_req_task_queue(nxt);
+-}
+-
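+-/*
+- * Free a batch of completed requests: queue linked dependents, recycle
+- * poll entries, drop file/rsrc references, and coalesce task references
+- * so each task is put once per run rather than once per request.
+- */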
+-static void io_free_batch_list(struct io_ring_ctx *ctx,
+- struct io_wq_work_node *node)
+- __must_hold(&ctx->uring_lock)
+-{
+- struct task_struct *task = NULL;
+- int task_refs = 0;
+-
+- do {
+- struct io_kiocb *req = container_of(node, struct io_kiocb,
+- comp_list);
+-
+- if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
+- if (req->flags & REQ_F_REFCOUNT) {
+- node = req->comp_list.next;
+- if (!req_ref_put_and_test(req))
+- continue;
+- }
+- if ((req->flags & REQ_F_POLLED) && req->apoll) {
+- struct async_poll *apoll = req->apoll;
+-
+- if (apoll->double_poll)
+- kfree(apoll->double_poll);
+- list_add(&apoll->poll.wait.entry,
+- &ctx->apoll_cache);
+- req->flags &= ~REQ_F_POLLED;
+- }
+- if (req->flags & IO_REQ_LINK_FLAGS)
+- io_queue_next(req);
+- if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
+- io_clean_op(req);
+- }
+- if (!(req->flags & REQ_F_FIXED_FILE))
+- io_put_file(req->file);
+-
+- io_req_put_rsrc_locked(req, ctx);
+-
+- if (req->task != task) {
+- if (task)
+- io_put_task(task, task_refs);
+- task = req->task;
+- task_refs = 0;
+- }
+- task_refs++;
+- node = req->comp_list.next;
+- io_req_add_to_cache(req, ctx);
+- } while (node);
+-
+- if (task)
+- io_put_task(task, task_refs);
+-}
+-
+-static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
+- __must_hold(&ctx->uring_lock)
+-{
+- struct io_wq_work_node *node, *prev;
+- struct io_submit_state *state = &ctx->submit_state;
+-
+- if (state->flush_cqes) {
+- spin_lock(&ctx->completion_lock);
+- wq_list_for_each(node, prev, &state->compl_reqs) {
+- struct io_kiocb *req = container_of(node, struct io_kiocb,
+- comp_list);
+-
+- if (!(req->flags & REQ_F_CQE_SKIP))
+- __io_fill_cqe_req(ctx, req);
+- }
+-
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- io_cqring_ev_posted(ctx);
+- state->flush_cqes = false;
+- }
+-
+- io_free_batch_list(ctx, state->compl_reqs.first);
+- INIT_WQ_LIST(&state->compl_reqs);
+-}
+-
+-/*
+- * Drop reference to request, return next in chain (if there is one) if this
+- * was the last reference to this request.
+- */
+-static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
+-{
+- struct io_kiocb *nxt = NULL;
+-
+- if (req_ref_put_and_test(req)) {
+- if (unlikely(req->flags & IO_REQ_LINK_FLAGS))
+- nxt = io_req_find_next(req);
+- io_free_req(req);
+- }
+- return nxt;
+-}
+-
+-static inline void io_put_req(struct io_kiocb *req)
+-{
+- if (req_ref_put_and_test(req)) {
+- io_queue_next(req);
+- io_free_req(req);
+- }
+-}
+-
+-static unsigned io_cqring_events(struct io_ring_ctx *ctx)
+-{
+- /* See comment at the top of this file */
+- smp_rmb();
+- return __io_cqring_events(ctx);
+-}
+-
+-static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
+-{
+- struct io_rings *rings = ctx->rings;
+-
+- /* make sure SQ entry isn't read before tail */
+- return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
+-}
+-
+-static inline bool io_run_task_work(void)
+-{
+- if (test_thread_flag(TIF_NOTIFY_SIGNAL) || task_work_pending(current)) {
+- __set_current_state(TASK_RUNNING);
+- clear_notify_signal();
+- if (task_work_pending(current))
+- task_work_run();
+- return true;
+- }
+-
+- return false;
+-}
+-
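+-/*
+- * Poll for completions in two passes: first kick the driver's ->iopoll()
+- * until something completes, then reap the leading run of completed
+- * requests off the list, fill their CQEs and batch-free them.
+- */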
+-static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
+-{
+- struct io_wq_work_node *pos, *start, *prev;
+- unsigned int poll_flags = BLK_POLL_NOSLEEP;
+- DEFINE_IO_COMP_BATCH(iob);
+- int nr_events = 0;
+-
+- /*
+- * Only spin for completions if we don't have multiple devices hanging
+- * off our complete list.
+- */
+- if (ctx->poll_multi_queue || force_nonspin)
+- poll_flags |= BLK_POLL_ONESHOT;
+-
+- wq_list_for_each(pos, start, &ctx->iopoll_list) {
+- struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
+- struct kiocb *kiocb = &req->rw.kiocb;
+- int ret;
+-
+- /*
+- * Move completed and retryable entries to our local lists.
+- * If we find a request that requires polling, break out
+- * and complete those lists first, if we have entries there.
+- */
+- if (READ_ONCE(req->iopoll_completed))
+- break;
+-
+- ret = kiocb->ki_filp->f_op->iopoll(kiocb, &iob, poll_flags);
+- if (unlikely(ret < 0))
+- return ret;
+- else if (ret)
+- poll_flags |= BLK_POLL_ONESHOT;
+-
+- /* iopoll may have completed current req */
+- if (!rq_list_empty(iob.req_list) ||
+- READ_ONCE(req->iopoll_completed))
+- break;
+- }
+-
+- if (!rq_list_empty(iob.req_list))
+- iob.complete(&iob);
+- else if (!pos)
+- return 0;
+-
+- prev = start;
+- wq_list_for_each_resume(pos, prev) {
+- struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
+-
+- /* order with io_complete_rw_iopoll(), e.g. ->result updates */
+- if (!smp_load_acquire(&req->iopoll_completed))
+- break;
+- nr_events++;
+- if (unlikely(req->flags & REQ_F_CQE_SKIP))
+- continue;
+-
+- req->cqe.flags = io_put_kbuf(req, 0);
+- __io_fill_cqe_req(req->ctx, req);
+- }
+-
+- if (unlikely(!nr_events))
+- return 0;
+-
+- io_commit_cqring(ctx);
+- io_cqring_ev_posted_iopoll(ctx);
+- pos = start ? start->next : ctx->iopoll_list.first;
+- wq_list_cut(&ctx->iopoll_list, prev, start);
+- io_free_batch_list(ctx, pos);
+- return nr_events;
+-}
+-
+-/*
+- * We can't just wait for polled events to come to us, we have to actively
+- * find and complete them.
+- */
+-static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
+-{
+- if (!(ctx->flags & IORING_SETUP_IOPOLL))
+- return;
+-
+- mutex_lock(&ctx->uring_lock);
+- while (!wq_list_empty(&ctx->iopoll_list)) {
+-		/* let it sleep and repeat later if we can't complete a request */
+- if (io_do_iopoll(ctx, true) == 0)
+- break;
+- /*
+- * Ensure we allow local-to-the-cpu processing to take place,
+- * in this case we need to ensure that we reap all events.
+-		 * Also let task_work, etc., make progress by releasing the mutex
+- */
+- if (need_resched()) {
+- mutex_unlock(&ctx->uring_lock);
+- cond_resched();
+- mutex_lock(&ctx->uring_lock);
+- }
+- }
+- mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+-{
+- unsigned int nr_events = 0;
+- int ret = 0;
+- unsigned long check_cq;
+-
+- /*
+- * Don't enter poll loop if we already have events pending.
+- * If we do, we can potentially be spinning for commands that
+- * already triggered a CQE (eg in error).
+- */
+- check_cq = READ_ONCE(ctx->check_cq);
+- if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
+- __io_cqring_overflow_flush(ctx, false);
+- if (io_cqring_events(ctx))
+- return 0;
+-
+- /*
+- * Similarly do not spin if we have not informed the user of any
+- * dropped CQE.
+- */
+- if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
+- return -EBADR;
+-
+- do {
+- /*
+- * If a submit got punted to a workqueue, we can have the
+- * application entering polling for a command before it gets
+- * issued. That app will hold the uring_lock for the duration
+- * of the poll right here, so we need to take a breather every
+- * now and then to ensure that the issue has a chance to add
+- * the poll to the issued list. Otherwise we can spin here
+- * forever, while the workqueue is stuck trying to acquire the
+- * very same mutex.
+- */
+- if (wq_list_empty(&ctx->iopoll_list)) {
+- u32 tail = ctx->cached_cq_tail;
+-
+- mutex_unlock(&ctx->uring_lock);
+- io_run_task_work();
+- mutex_lock(&ctx->uring_lock);
+-
+- /* some requests don't go through iopoll_list */
+- if (tail != ctx->cached_cq_tail ||
+- wq_list_empty(&ctx->iopoll_list))
+- break;
+- }
+- ret = io_do_iopoll(ctx, !min);
+- if (ret < 0)
+- break;
+- nr_events += ret;
+- ret = 0;
+- } while (nr_events < min && !need_resched());
+-
+- return ret;
+-}
+-
+-static void kiocb_end_write(struct io_kiocb *req)
+-{
+- /*
+- * Tell lockdep we inherited freeze protection from submission
+- * thread.
+- */
+- if (req->flags & REQ_F_ISREG) {
+- struct super_block *sb = file_inode(req->file)->i_sb;
+-
+- __sb_writers_acquired(sb, SB_FREEZE_WRITE);
+- sb_end_write(sb);
+- }
+-}
+-
+-#ifdef CONFIG_BLOCK
+-static bool io_resubmit_prep(struct io_kiocb *req)
+-{
+- struct io_async_rw *rw = req->async_data;
+-
+- if (!req_has_async_data(req))
+- return !io_req_prep_async(req);
+- iov_iter_restore(&rw->s.iter, &rw->s.iter_state);
+- return true;
+-}
+-
+-static bool io_rw_should_reissue(struct io_kiocb *req)
+-{
+- umode_t mode = file_inode(req->file)->i_mode;
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- if (!S_ISBLK(mode) && !S_ISREG(mode))
+- return false;
+- if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
+- !(ctx->flags & IORING_SETUP_IOPOLL)))
+- return false;
+- /*
+- * If ref is dying, we might be running poll reap from the exit work.
+- * Don't attempt to reissue from that path, just let it fail with
+- * -EAGAIN.
+- */
+- if (percpu_ref_is_dying(&ctx->refs))
+- return false;
+- /*
+-	 * Play it safe and assume it's not safe to re-import and reissue if
+-	 * we're not in the original thread group (or in task context).
+- */
+- if (!same_thread_group(req->task, current) || !in_task())
+- return false;
+- return true;
+-}
+-#else
+-static bool io_resubmit_prep(struct io_kiocb *req)
+-{
+- return false;
+-}
+-static bool io_rw_should_reissue(struct io_kiocb *req)
+-{
+- return false;
+-}
+-#endif
+-
+-static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+-{
+- if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
+- kiocb_end_write(req);
+- fsnotify_modify(req->file);
+- } else {
+- fsnotify_access(req->file);
+- }
+- if (unlikely(res != req->cqe.res)) {
+- if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
+- io_rw_should_reissue(req)) {
+- req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
+- return true;
+- }
+- req_set_fail(req);
+- req->cqe.res = res;
+- }
+- return false;
+-}
+-
+-static inline void io_req_task_complete(struct io_kiocb *req, bool *locked)
+-{
+- int res = req->cqe.res;
+-
+- if (*locked) {
+- io_req_complete_state(req, res, io_put_kbuf(req, 0));
+- io_req_add_compl_list(req);
+- } else {
+- io_req_complete_post(req, res,
+- io_put_kbuf(req, IO_URING_F_UNLOCKED));
+- }
+-}
+-
+-static void __io_complete_rw(struct io_kiocb *req, long res,
+- unsigned int issue_flags)
+-{
+- if (__io_complete_rw_common(req, res))
+- return;
+- __io_req_complete(req, issue_flags, req->cqe.res,
+- io_put_kbuf(req, issue_flags));
+-}
+-
+-static void io_complete_rw(struct kiocb *kiocb, long res)
+-{
+- struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-
+- if (__io_complete_rw_common(req, res))
+- return;
+- req->cqe.res = res;
+- req->io_task_work.func = io_req_task_complete;
+- io_req_task_prio_work_add(req);
+-}
+-
+-static void io_complete_rw_iopoll(struct kiocb *kiocb, long res)
+-{
+- struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
+-
+- if (kiocb->ki_flags & IOCB_WRITE)
+- kiocb_end_write(req);
+- if (unlikely(res != req->cqe.res)) {
+- if (res == -EAGAIN && io_rw_should_reissue(req)) {
+- req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
+- return;
+- }
+- req->cqe.res = res;
+- }
+-
+- /* order with io_iopoll_complete() checking ->iopoll_completed */
+- smp_store_release(&req->iopoll_completed, 1);
+-}
+-
+-/*
+- * After the iocb has been issued, it's safe to be found on the poll list.
+- * Adding the kiocb to the list AFTER submission ensures that we don't
+- * find it from a io_do_iopoll() thread before the issuer is done
+- * accessing the kiocb cookie.
+- */
+-static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
+-
+- /* workqueue context doesn't hold uring_lock, grab it now */
+- if (unlikely(needs_lock))
+- mutex_lock(&ctx->uring_lock);
+-
+- /*
+- * Track whether we have multiple files in our lists. This will impact
+- * how we do polling eventually, not spinning if we're on potentially
+- * different devices.
+- */
+- if (wq_list_empty(&ctx->iopoll_list)) {
+- ctx->poll_multi_queue = false;
+- } else if (!ctx->poll_multi_queue) {
+- struct io_kiocb *list_req;
+-
+- list_req = container_of(ctx->iopoll_list.first, struct io_kiocb,
+- comp_list);
+- if (list_req->file != req->file)
+- ctx->poll_multi_queue = true;
+- }
+-
+- /*
+- * For fast devices, IO may have already completed. If it has, add
+- * it to the front so we find it first.
+- */
+- if (READ_ONCE(req->iopoll_completed))
+- wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
+- else
+- wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
+-
+- if (unlikely(needs_lock)) {
+- /*
+-		 * If IORING_SETUP_SQPOLL is enabled, sqes are either handled
+- * in sq thread task context or in io worker task context. If
+- * current task context is sq thread, we don't need to check
+- * whether should wake up sq thread.
+- */
+- if ((ctx->flags & IORING_SETUP_SQPOLL) &&
+- wq_has_sleeper(&ctx->sq_data->wait))
+- wake_up(&ctx->sq_data->wait);
+-
+- mutex_unlock(&ctx->uring_lock);
+- }
+-}
+-
+-static bool io_bdev_nowait(struct block_device *bdev)
+-{
+- return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
+-}
+-
+-/*
+- * If we tracked the file through the SCM inflight mechanism, we could support
+- * any file. For now, just ensure that anything potentially problematic is done
+- * inline.
+- */
+-static bool __io_file_supports_nowait(struct file *file, umode_t mode)
+-{
+- if (S_ISBLK(mode)) {
+- if (IS_ENABLED(CONFIG_BLOCK) &&
+- io_bdev_nowait(I_BDEV(file->f_mapping->host)))
+- return true;
+- return false;
+- }
+- if (S_ISSOCK(mode))
+- return true;
+- if (S_ISREG(mode)) {
+- if (IS_ENABLED(CONFIG_BLOCK) &&
+- io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
+- file->f_op != &io_uring_fops)
+- return true;
+- return false;
+- }
+-
+- /* any ->read/write should understand O_NONBLOCK */
+- if (file->f_flags & O_NONBLOCK)
+- return true;
+- return file->f_mode & FMODE_NOWAIT;
+-}
+-
+-/*
+- * Compute the per-file flags cached alongside the file: whether it's a
+- * regular file, whether it supports nowait issue, and whether it needs
+- * SCM inflight tracking.
+- */
+-static unsigned int io_file_get_flags(struct file *file)
+-{
+- umode_t mode = file_inode(file)->i_mode;
+- unsigned int res = 0;
+-
+- if (S_ISREG(mode))
+- res |= FFS_ISREG;
+- if (__io_file_supports_nowait(file, mode))
+- res |= FFS_NOWAIT;
+- if (io_file_need_scm(file))
+- res |= FFS_SCM;
+- return res;
+-}
+-
+-static inline bool io_file_supports_nowait(struct io_kiocb *req)
+-{
+- return req->flags & REQ_F_SUPPORT_NOWAIT;
+-}
+-
+-static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct kiocb *kiocb = &req->rw.kiocb;
+- unsigned ioprio;
+- int ret;
+-
+- kiocb->ki_pos = READ_ONCE(sqe->off);
+- /* used for fixed read/write too - just read unconditionally */
+- req->buf_index = READ_ONCE(sqe->buf_index);
+-
+- if (req->opcode == IORING_OP_READ_FIXED ||
+- req->opcode == IORING_OP_WRITE_FIXED) {
+- struct io_ring_ctx *ctx = req->ctx;
+- u16 index;
+-
+- if (unlikely(req->buf_index >= ctx->nr_user_bufs))
+- return -EFAULT;
+- index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
+- req->imu = ctx->user_bufs[index];
+- io_req_set_rsrc_node(req, ctx, 0);
+- }
+-
+- ioprio = READ_ONCE(sqe->ioprio);
+- if (ioprio) {
+- ret = ioprio_check_cap(ioprio);
+- if (ret)
+- return ret;
+-
+- kiocb->ki_ioprio = ioprio;
+- } else {
+- kiocb->ki_ioprio = get_current_ioprio();
+- }
+-
+- req->rw.addr = READ_ONCE(sqe->addr);
+- req->rw.len = READ_ONCE(sqe->len);
+- req->rw.flags = READ_ONCE(sqe->rw_flags);
+- return 0;
+-}
+-
+-static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
+-{
+- switch (ret) {
+- case -EIOCBQUEUED:
+- break;
+- case -ERESTARTSYS:
+- case -ERESTARTNOINTR:
+- case -ERESTARTNOHAND:
+- case -ERESTART_RESTARTBLOCK:
+- /*
+- * We can't just restart the syscall, since previously
+- * submitted sqes may already be in progress. Just fail this
+- * IO with EINTR.
+- */
+- ret = -EINTR;
+- fallthrough;
+- default:
+- kiocb->ki_complete(kiocb, ret);
+- }
+-}
+-
+-static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
+-{
+- struct kiocb *kiocb = &req->rw.kiocb;
+-
+- if (kiocb->ki_pos != -1)
+- return &kiocb->ki_pos;
+-
+- if (!(req->file->f_mode & FMODE_STREAM)) {
+- req->flags |= REQ_F_CUR_POS;
+- kiocb->ki_pos = req->file->f_pos;
+- return &kiocb->ki_pos;
+- }
+-
+- kiocb->ki_pos = 0;
+- return NULL;
+-}
+-
+-static void kiocb_done(struct io_kiocb *req, ssize_t ret,
+- unsigned int issue_flags)
+-{
+- struct io_async_rw *io = req->async_data;
+-
+- /* add previously done IO, if any */
+- if (req_has_async_data(req) && io->bytes_done > 0) {
+- if (ret < 0)
+- ret = io->bytes_done;
+- else
+- ret += io->bytes_done;
+- }
+-
+- if (req->flags & REQ_F_CUR_POS)
+- req->file->f_pos = req->rw.kiocb.ki_pos;
+- if (ret >= 0 && (req->rw.kiocb.ki_complete == io_complete_rw))
+- __io_complete_rw(req, ret, issue_flags);
+- else
+- io_rw_done(&req->rw.kiocb, ret);
+-
+- if (req->flags & REQ_F_REISSUE) {
+- req->flags &= ~REQ_F_REISSUE;
+- if (io_resubmit_prep(req))
+- io_req_task_queue_reissue(req);
+- else
+- io_req_task_queue_fail(req, ret);
+- }
+-}
+-
+-static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
+- struct io_mapped_ubuf *imu)
+-{
+- size_t len = req->rw.len;
+- u64 buf_end, buf_addr = req->rw.addr;
+- size_t offset;
+-
+- if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
+- return -EFAULT;
+- /* not inside the mapped region */
+- if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
+- return -EFAULT;
+-
+- /*
+-	 * May not be the start of the buffer; set the size appropriately
+- * and advance us to the beginning.
+- */
+- offset = buf_addr - imu->ubuf;
+- iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
+-
+- if (offset) {
+- /*
+- * Don't use iov_iter_advance() here, as it's really slow for
+- * using the latter parts of a big fixed buffer - it iterates
+- * over each segment manually. We can cheat a bit here, because
+- * we know that:
+- *
+- * 1) it's a BVEC iter, we set it up
+- * 2) all bvecs are PAGE_SIZE in size, except potentially the
+- * first and last bvec
+- *
+- * So just find our index, and adjust the iterator afterwards.
+-		 * If the offset is within the first bvec (or the whole first
+-		 * bvec), just use iov_iter_advance(). This makes it easier
+- * since we can just skip the first segment, which may not
+- * be PAGE_SIZE aligned.
+- */
+- const struct bio_vec *bvec = imu->bvec;
+-
+- if (offset <= bvec->bv_len) {
+- iov_iter_advance(iter, offset);
+- } else {
+- unsigned long seg_skip;
+-
+- /* skip first vec */
+- offset -= bvec->bv_len;
+- seg_skip = 1 + (offset >> PAGE_SHIFT);
+-
+- iter->bvec = bvec + seg_skip;
+- iter->nr_segs -= seg_skip;
+- iter->count -= bvec->bv_len + offset;
+- iter->iov_offset = offset & ~PAGE_MASK;
+- }
+- }
+-
+- return 0;
+-}
+-
+-static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
+- unsigned int issue_flags)
+-{
+- if (WARN_ON_ONCE(!req->imu))
+- return -EFAULT;
+- return __io_import_fixed(req, rw, iter, req->imu);
+-}
+-
+-static int io_buffer_add_list(struct io_ring_ctx *ctx,
+- struct io_buffer_list *bl, unsigned int bgid)
+-{
+- bl->bgid = bgid;
+- if (bgid < BGID_ARRAY)
+- return 0;
+-
+- return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
+-}
+-
+-static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
+- struct io_buffer_list *bl)
+-{
+- if (!list_empty(&bl->buf_list)) {
+- struct io_buffer *kbuf;
+-
+- kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
+- list_del(&kbuf->list);
+- if (*len > kbuf->len)
+- *len = kbuf->len;
+- req->flags |= REQ_F_BUFFER_SELECTED;
+- req->kbuf = kbuf;
+- req->buf_index = kbuf->bid;
+- return u64_to_user_ptr(kbuf->addr);
+- }
+- return NULL;
+-}
+-
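+-/*
+- * Pick the next buffer from an application-provided buffer ring. The head
+- * is ours, the tail is written by userspace, hence the acquire load.
+- * Buffer entries beyond the first page are found via the mapped page
+- * array.
+- */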
+-static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+- struct io_buffer_list *bl,
+- unsigned int issue_flags)
+-{
+- struct io_uring_buf_ring *br = bl->buf_ring;
+- struct io_uring_buf *buf;
+- __u16 head = bl->head;
+-
+- if (unlikely(smp_load_acquire(&br->tail) == head))
+- return NULL;
+-
+- head &= bl->mask;
+- if (head < IO_BUFFER_LIST_BUF_PER_PAGE) {
+- buf = &br->bufs[head];
+- } else {
+- int off = head & (IO_BUFFER_LIST_BUF_PER_PAGE - 1);
+- int index = head / IO_BUFFER_LIST_BUF_PER_PAGE;
+- buf = page_address(bl->buf_pages[index]);
+- buf += off;
+- }
+- if (*len > buf->len)
+- *len = buf->len;
+- req->flags |= REQ_F_BUFFER_RING;
+- req->buf_list = bl;
+- req->buf_index = buf->bid;
+-
+- if (issue_flags & IO_URING_F_UNLOCKED || !file_can_poll(req->file)) {
+- /*
+- * If we came in unlocked, we have no choice but to consume the
+- * buffer here. This does mean it'll be pinned until the IO
+- * completes. But coming in unlocked means we're in io-wq
+- * context, hence there should be no further retry. For the
+- * locked case, the caller must ensure to call the commit when
+- * the transfer completes (or if we get -EAGAIN and must poll
+- * or retry).
+- */
+- req->buf_list = NULL;
+- bl->head++;
+- }
+- return u64_to_user_ptr(buf->addr);
+-}
+-
+-static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+- unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_buffer_list *bl;
+- void __user *ret = NULL;
+-
+- io_ring_submit_lock(req->ctx, issue_flags);
+-
+- bl = io_buffer_get_list(ctx, req->buf_index);
+- if (likely(bl)) {
+- if (bl->buf_nr_pages)
+- ret = io_ring_buffer_select(req, len, bl, issue_flags);
+- else
+- ret = io_provided_buffer_select(req, len, bl);
+- }
+- io_ring_submit_unlock(req->ctx, issue_flags);
+- return ret;
+-}
+-
+-#ifdef CONFIG_COMPAT
+-static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
+- unsigned int issue_flags)
+-{
+- struct compat_iovec __user *uiov;
+- compat_ssize_t clen;
+- void __user *buf;
+- size_t len;
+-
+- uiov = u64_to_user_ptr(req->rw.addr);
+- if (!access_ok(uiov, sizeof(*uiov)))
+- return -EFAULT;
+- if (__get_user(clen, &uiov->iov_len))
+- return -EFAULT;
+- if (clen < 0)
+- return -EINVAL;
+-
+- len = clen;
+- buf = io_buffer_select(req, &len, issue_flags);
+- if (!buf)
+- return -ENOBUFS;
+- req->rw.addr = (unsigned long) buf;
+- iov[0].iov_base = buf;
+- req->rw.len = iov[0].iov_len = (compat_size_t) len;
+- return 0;
+-}
+-#endif
+-
+-static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+- unsigned int issue_flags)
+-{
+- struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
+- void __user *buf;
+- ssize_t len;
+-
+- if (copy_from_user(iov, uiov, sizeof(*uiov)))
+- return -EFAULT;
+-
+- len = iov[0].iov_len;
+- if (len < 0)
+- return -EINVAL;
+- buf = io_buffer_select(req, &len, issue_flags);
+- if (!buf)
+- return -ENOBUFS;
+- req->rw.addr = (unsigned long) buf;
+- iov[0].iov_base = buf;
+- req->rw.len = iov[0].iov_len = len;
+- return 0;
+-}
+-
+-static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+- unsigned int issue_flags)
+-{
+- if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)) {
+- iov[0].iov_base = u64_to_user_ptr(req->rw.addr);
+- iov[0].iov_len = req->rw.len;
+- return 0;
+- }
+- if (req->rw.len != 1)
+- return -EINVAL;
+-
+-#ifdef CONFIG_COMPAT
+- if (req->ctx->compat)
+- return io_compat_import(req, iov, issue_flags);
+-#endif
+-
+- return __io_iov_buffer_select(req, iov, issue_flags);
+-}
+-
+-static inline bool io_do_buffer_select(struct io_kiocb *req)
+-{
+- if (!(req->flags & REQ_F_BUFFER_SELECT))
+- return false;
+- return !(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING));
+-}
+-
+-static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
+- struct io_rw_state *s,
+- unsigned int issue_flags)
+-{
+- struct iov_iter *iter = &s->iter;
+- u8 opcode = req->opcode;
+- struct iovec *iovec;
+- void __user *buf;
+- size_t sqe_len;
+- ssize_t ret;
+-
+- if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
+- ret = io_import_fixed(req, rw, iter, issue_flags);
+- if (ret)
+- return ERR_PTR(ret);
+- return NULL;
+- }
+-
+- buf = u64_to_user_ptr(req->rw.addr);
+- sqe_len = req->rw.len;
+-
+- if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
+- if (io_do_buffer_select(req)) {
+- buf = io_buffer_select(req, &sqe_len, issue_flags);
+- if (!buf)
+- return ERR_PTR(-ENOBUFS);
+- req->rw.addr = (unsigned long) buf;
+- req->rw.len = sqe_len;
+- }
+-
+- ret = import_single_range(rw, buf, sqe_len, s->fast_iov, iter);
+- if (ret)
+- return ERR_PTR(ret);
+- return NULL;
+- }
+-
+- iovec = s->fast_iov;
+- if (req->flags & REQ_F_BUFFER_SELECT) {
+- ret = io_iov_buffer_select(req, iovec, issue_flags);
+- if (ret)
+- return ERR_PTR(ret);
+- iov_iter_init(iter, rw, iovec, 1, iovec->iov_len);
+- return NULL;
+- }
+-
+- ret = __import_iovec(rw, buf, sqe_len, UIO_FASTIOV, &iovec, iter,
+- req->ctx->compat);
+- if (unlikely(ret < 0))
+- return ERR_PTR(ret);
+- return iovec;
+-}
+-
+-static inline int io_import_iovec(int rw, struct io_kiocb *req,
+- struct iovec **iovec, struct io_rw_state *s,
+- unsigned int issue_flags)
+-{
+- *iovec = __io_import_iovec(rw, req, s, issue_flags);
+- if (unlikely(IS_ERR(*iovec)))
+- return PTR_ERR(*iovec);
+-
+- iov_iter_save_state(&s->iter, &s->iter_state);
+- return 0;
+-}
+-
+-static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
+-{
+- return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
+-}
+-
+-/*
+- * For files that don't have ->read_iter() and ->write_iter(), handle them
+- * by looping over ->read() or ->write() manually.
+- */
+-static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+-{
+- struct kiocb *kiocb = &req->rw.kiocb;
+- struct file *file = req->file;
+- ssize_t ret = 0;
+- loff_t *ppos;
+-
+- /*
+- * Don't support polled IO through this interface, and we can't
+- * support non-blocking either. For the latter, this just causes
+- * the kiocb to be handled from an async context.
+- */
+- if (kiocb->ki_flags & IOCB_HIPRI)
+- return -EOPNOTSUPP;
+- if ((kiocb->ki_flags & IOCB_NOWAIT) &&
+- !(kiocb->ki_filp->f_flags & O_NONBLOCK))
+- return -EAGAIN;
+-
+- ppos = io_kiocb_ppos(kiocb);
+-
+- while (iov_iter_count(iter)) {
+- struct iovec iovec;
+- ssize_t nr;
+-
+- if (!iov_iter_is_bvec(iter)) {
+- iovec = iov_iter_iovec(iter);
+- } else {
+- iovec.iov_base = u64_to_user_ptr(req->rw.addr);
+- iovec.iov_len = req->rw.len;
+- }
+-
+- if (rw == READ) {
+- nr = file->f_op->read(file, iovec.iov_base,
+- iovec.iov_len, ppos);
+- } else {
+- nr = file->f_op->write(file, iovec.iov_base,
+- iovec.iov_len, ppos);
+- }
+-
+- if (nr < 0) {
+- if (!ret)
+- ret = nr;
+- break;
+- }
+- ret += nr;
+- if (!iov_iter_is_bvec(iter)) {
+- iov_iter_advance(iter, nr);
+- } else {
+- req->rw.addr += nr;
+- req->rw.len -= nr;
+- if (!req->rw.len)
+- break;
+- }
+- if (nr != iovec.iov_len)
+- break;
+- }
+-
+- return ret;
+-}
+-
+-static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
+- const struct iovec *fast_iov, struct iov_iter *iter)
+-{
+- struct io_async_rw *rw = req->async_data;
+-
+- memcpy(&rw->s.iter, iter, sizeof(*iter));
+- rw->free_iovec = iovec;
+- rw->bytes_done = 0;
+- /* can only be fixed buffers, no need to do anything */
+- if (iov_iter_is_bvec(iter))
+- return;
+- if (!iovec) {
+- unsigned iov_off = 0;
+-
+- rw->s.iter.iov = rw->s.fast_iov;
+- if (iter->iov != fast_iov) {
+- iov_off = iter->iov - fast_iov;
+- rw->s.iter.iov += iov_off;
+- }
+- if (rw->s.fast_iov != fast_iov)
+- memcpy(rw->s.fast_iov + iov_off, fast_iov + iov_off,
+- sizeof(struct iovec) * iter->nr_segs);
+- } else {
+- req->flags |= REQ_F_NEED_CLEANUP;
+- }
+-}
+-
+-static inline bool io_alloc_async_data(struct io_kiocb *req)
+-{
+- WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
+- req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
+- if (req->async_data) {
+- req->flags |= REQ_F_ASYNC_DATA;
+- return false;
+- }
+- return true;
+-}
+-
+-static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
+- struct io_rw_state *s, bool force)
+-{
+- if (!force && !io_op_defs[req->opcode].needs_async_setup)
+- return 0;
+- if (!req_has_async_data(req)) {
+- struct io_async_rw *iorw;
+-
+- if (io_alloc_async_data(req)) {
+- kfree(iovec);
+- return -ENOMEM;
+- }
+-
+- io_req_map_rw(req, iovec, s->fast_iov, &s->iter);
+- iorw = req->async_data;
+- /* we've copied and mapped the iter, ensure state is saved */
+- iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
+- }
+- return 0;
+-}
+-
+-static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
+-{
+- struct io_async_rw *iorw = req->async_data;
+- struct iovec *iov;
+- int ret;
+-
+- /* submission path, ->uring_lock should already be taken */
+- ret = io_import_iovec(rw, req, &iov, &iorw->s, 0);
+- if (unlikely(ret < 0))
+- return ret;
+-
+- iorw->bytes_done = 0;
+- iorw->free_iovec = iov;
+- if (iov)
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_readv_prep_async(struct io_kiocb *req)
+-{
+- return io_rw_prep_async(req, READ);
+-}
+-
+-static int io_writev_prep_async(struct io_kiocb *req)
+-{
+- return io_rw_prep_async(req, WRITE);
+-}
+-
+-/*
+- * This is our waitqueue callback handler, registered through __folio_lock_async()
+- * when we initially tried to do the IO with the iocb and armed our waitqueue.
+- * This gets called when the page is unlocked, and we generally expect that to
+- * happen when the page IO is completed and the page is now uptodate. This will
+- * queue a task_work based retry of the operation, attempting to copy the data
+- * again. If the latter fails because the page was NOT uptodate, then we will
+- * do a thread based blocking retry of the operation. That's the unexpected
+- * slow path.
+- */
+-static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+- int sync, void *arg)
+-{
+- struct wait_page_queue *wpq;
+- struct io_kiocb *req = wait->private;
+- struct wait_page_key *key = arg;
+-
+- wpq = container_of(wait, struct wait_page_queue, wait);
+-
+- if (!wake_page_match(wpq, key))
+- return 0;
+-
+- req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
+- list_del_init(&wait->entry);
+- io_req_task_queue(req);
+- return 1;
+-}
+-
+-/*
+- * This controls whether a given IO request should be armed for async page
+- * based retry. If we return false here, the request is handed to the async
+- * worker threads for retry. If we're doing buffered reads on a regular file,
+- * we prepare a private wait_page_queue entry and retry the operation. This
+- * will either succeed because the page is now uptodate and unlocked, or it
+- * will register a callback when the page is unlocked at IO completion. Through
+- * that callback, io_uring uses task_work to setup a retry of the operation.
+- * That retry will attempt the buffered read again. The retry will generally
+- * succeed, or in rare cases where it fails, we then fall back to using the
+- * async worker threads for a blocking retry.
+- */
+-static bool io_rw_should_retry(struct io_kiocb *req)
+-{
+- struct io_async_rw *rw = req->async_data;
+- struct wait_page_queue *wait = &rw->wpq;
+- struct kiocb *kiocb = &req->rw.kiocb;
+-
+- /* never retry for NOWAIT, we just complete with -EAGAIN */
+- if (req->flags & REQ_F_NOWAIT)
+- return false;
+-
+- /* Only for buffered IO */
+- if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
+- return false;
+-
+- /*
+- * just use poll if we can, and don't attempt if the fs doesn't
+- * support callback based unlocks
+- */
+- if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
+- return false;
+-
+- wait->wait.func = io_async_buf_func;
+- wait->wait.private = req;
+- wait->wait.flags = 0;
+- INIT_LIST_HEAD(&wait->wait.entry);
+- kiocb->ki_flags |= IOCB_WAITQ;
+- kiocb->ki_flags &= ~IOCB_NOWAIT;
+- kiocb->ki_waitq = wait;
+- return true;
+-}
+-
+-static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
+-{
+- if (likely(req->file->f_op->read_iter))
+- return call_read_iter(req->file, &req->rw.kiocb, iter);
+- else if (req->file->f_op->read)
+- return loop_rw_iter(READ, req, iter);
+- else
+- return -EINVAL;
+-}
+-
+-static bool need_read_all(struct io_kiocb *req)
+-{
+- return req->flags & REQ_F_ISREG ||
+- S_ISBLK(file_inode(req->file)->i_mode);
+-}
+-
+-static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
+-{
+- struct kiocb *kiocb = &req->rw.kiocb;
+- struct io_ring_ctx *ctx = req->ctx;
+- struct file *file = req->file;
+- int ret;
+-
+- if (unlikely(!file || !(file->f_mode & mode)))
+- return -EBADF;
+-
+- if (!io_req_ffs_set(req))
+- req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT;
+-
+- kiocb->ki_flags = iocb_flags(file);
+- ret = kiocb_set_rw_flags(kiocb, req->rw.flags);
+- if (unlikely(ret))
+- return ret;
+-
+- /*
+- * If the file is marked O_NONBLOCK, still allow retry for it if it
+- * supports async. Otherwise it's impossible to use O_NONBLOCK files
+- * reliably. If not, or if IOCB_NOWAIT is set, don't retry.
+- */
+- if ((kiocb->ki_flags & IOCB_NOWAIT) ||
+- ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
+- req->flags |= REQ_F_NOWAIT;
+-
+- if (ctx->flags & IORING_SETUP_IOPOLL) {
+- if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
+- return -EOPNOTSUPP;
+-
+- kiocb->private = NULL;
+- kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
+- kiocb->ki_complete = io_complete_rw_iopoll;
+- req->iopoll_completed = 0;
+- } else {
+- if (kiocb->ki_flags & IOCB_HIPRI)
+- return -EINVAL;
+- kiocb->ki_complete = io_complete_rw;
+- }
+-
+- return 0;
+-}
+-
+-static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_rw_state __s, *s = &__s;
+- struct iovec *iovec;
+- struct kiocb *kiocb = &req->rw.kiocb;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+- struct io_async_rw *rw;
+- ssize_t ret, ret2;
+- loff_t *ppos;
+-
+- if (!req_has_async_data(req)) {
+- ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
+- if (unlikely(ret < 0))
+- return ret;
+- } else {
+- rw = req->async_data;
+- s = &rw->s;
+-
+- /*
+- * Safe and required to re-import if we're using provided
+- * buffers, as we dropped the selected one before retry.
+- */
+- if (io_do_buffer_select(req)) {
+- ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
+- if (unlikely(ret < 0))
+- return ret;
+- }
+-
+- /*
+- * We come here from an earlier attempt, restore our state to
+- * match in case it doesn't. It's cheap enough that we don't
+- * need to make this conditional.
+- */
+- iov_iter_restore(&s->iter, &s->iter_state);
+- iovec = NULL;
+- }
+- ret = io_rw_init_file(req, FMODE_READ);
+- if (unlikely(ret)) {
+- kfree(iovec);
+- return ret;
+- }
+- req->cqe.res = iov_iter_count(&s->iter);
+-
+- if (force_nonblock) {
+- /* If the file doesn't support async, just async punt */
+- if (unlikely(!io_file_supports_nowait(req))) {
+- ret = io_setup_async_rw(req, iovec, s, true);
+- return ret ?: -EAGAIN;
+- }
+- kiocb->ki_flags |= IOCB_NOWAIT;
+- } else {
+- /* Ensure we clear previously set non-block flag */
+- kiocb->ki_flags &= ~IOCB_NOWAIT;
+- }
+-
+- ppos = io_kiocb_update_pos(req);
+-
+- ret = rw_verify_area(READ, req->file, ppos, req->cqe.res);
+- if (unlikely(ret)) {
+- kfree(iovec);
+- return ret;
+- }
+-
+- ret = io_iter_do_read(req, &s->iter);
+-
+- if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
+- req->flags &= ~REQ_F_REISSUE;
+- /* if we can poll, just do that */
+- if (req->opcode == IORING_OP_READ && file_can_poll(req->file))
+- return -EAGAIN;
+- /* IOPOLL retry should happen for io-wq threads */
+- if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
+- goto done;
+- /* no retry on NONBLOCK nor RWF_NOWAIT */
+- if (req->flags & REQ_F_NOWAIT)
+- goto done;
+- ret = 0;
+- } else if (ret == -EIOCBQUEUED) {
+- goto out_free;
+- } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
+- (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
+- /* read all, failed, already did sync or don't want to retry */
+- goto done;
+- }
+-
+- /*
+- * Don't depend on the iter state matching what was consumed, or being
+- * untouched in case of error. Restore it and we'll advance it
+- * manually if we need to.
+- */
+- iov_iter_restore(&s->iter, &s->iter_state);
+-
+- ret2 = io_setup_async_rw(req, iovec, s, true);
+- if (ret2)
+- return ret2;
+-
+- iovec = NULL;
+- rw = req->async_data;
+- s = &rw->s;
+- /*
+- * Now use our persistent iterator and state, if we aren't already.
+- * We've restored and mapped the iter to match.
+- */
+-
+- do {
+- /*
+- * We end up here because of a partial read, either from
+- * above or inside this loop. Advance the iter by the bytes
+- * that were consumed.
+- */
+- iov_iter_advance(&s->iter, ret);
+- if (!iov_iter_count(&s->iter))
+- break;
+- rw->bytes_done += ret;
+- iov_iter_save_state(&s->iter, &s->iter_state);
+-
+- /* if we can retry, do so with the callbacks armed */
+- if (!io_rw_should_retry(req)) {
+- kiocb->ki_flags &= ~IOCB_WAITQ;
+- return -EAGAIN;
+- }
+-
+- /*
+- * Now retry read with the IOCB_WAITQ parts set in the iocb. If
+- * we get -EIOCBQUEUED, then we'll get a notification when the
+- * desired page gets unlocked. We can also get a partial read
+- * here, and if we do, then just retry at the new offset.
+- */
+- ret = io_iter_do_read(req, &s->iter);
+- if (ret == -EIOCBQUEUED)
+- return 0;
+- /* we got some bytes, but not all. retry. */
+- kiocb->ki_flags &= ~IOCB_WAITQ;
+- iov_iter_restore(&s->iter, &s->iter_state);
+- } while (ret > 0);
+-done:
+- kiocb_done(req, ret, issue_flags);
+-out_free:
+-	/* it's faster to check here than to delegate to kfree */
+- if (iovec)
+- kfree(iovec);
+- return 0;
+-}
+-
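io_read() above is the kernel half of the simplest flow: import the iovec, try a non-blocking pass, then fall back to the retry paths. For orientation, a minimal userspace counterpart as a liburing sketch; the queue depth and file name are arbitrary choices for the example.

/* Sketch: one buffered read through io_uring; cqe->res carries the byte
 * count or the negative errno that io_read() completed with. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd = open("/etc/hostname", O_RDONLY);

	if (fd < 0 || io_uring_queue_init(8, &ring, 0))
		return 1;
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);
	if (io_uring_wait_cqe(&ring, &cqe) == 0) {
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}
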
+-static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_rw_state __s, *s = &__s;
+- struct iovec *iovec;
+- struct kiocb *kiocb = &req->rw.kiocb;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+- ssize_t ret, ret2;
+- loff_t *ppos;
+-
+- if (!req_has_async_data(req)) {
+- ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
+- if (unlikely(ret < 0))
+- return ret;
+- } else {
+- struct io_async_rw *rw = req->async_data;
+-
+- s = &rw->s;
+- iov_iter_restore(&s->iter, &s->iter_state);
+- iovec = NULL;
+- }
+- ret = io_rw_init_file(req, FMODE_WRITE);
+- if (unlikely(ret)) {
+- kfree(iovec);
+- return ret;
+- }
+- req->cqe.res = iov_iter_count(&s->iter);
+-
+- if (force_nonblock) {
+- /* If the file doesn't support async, just async punt */
+- if (unlikely(!io_file_supports_nowait(req)))
+- goto copy_iov;
+-
+- /* file path doesn't support NOWAIT for non-direct_IO */
+- if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
+- (req->flags & REQ_F_ISREG))
+- goto copy_iov;
+-
+- kiocb->ki_flags |= IOCB_NOWAIT;
+- } else {
+- /* Ensure we clear previously set non-block flag */
+- kiocb->ki_flags &= ~IOCB_NOWAIT;
+- }
+-
+- ppos = io_kiocb_update_pos(req);
+-
+- ret = rw_verify_area(WRITE, req->file, ppos, req->cqe.res);
+- if (unlikely(ret))
+- goto out_free;
+-
+- /*
+- * Open-code file_start_write here to grab freeze protection,
+- * which will be released by another thread in
+- * io_complete_rw(). Fool lockdep by telling it the lock got
+- * released so that it doesn't complain about the held lock when
+- * we return to userspace.
+- */
+- if (req->flags & REQ_F_ISREG) {
+- sb_start_write(file_inode(req->file)->i_sb);
+- __sb_writers_release(file_inode(req->file)->i_sb,
+- SB_FREEZE_WRITE);
+- }
+- kiocb->ki_flags |= IOCB_WRITE;
+-
+- if (likely(req->file->f_op->write_iter))
+- ret2 = call_write_iter(req->file, kiocb, &s->iter);
+- else if (req->file->f_op->write)
+- ret2 = loop_rw_iter(WRITE, req, &s->iter);
+- else
+- ret2 = -EINVAL;
+-
+- if (req->flags & REQ_F_REISSUE) {
+- req->flags &= ~REQ_F_REISSUE;
+- ret2 = -EAGAIN;
+- }
+-
+- /*
+- * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
+- * retry them without IOCB_NOWAIT.
+- */
+- if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
+- ret2 = -EAGAIN;
+- /* no retry on NONBLOCK nor RWF_NOWAIT */
+- if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
+- goto done;
+- if (!force_nonblock || ret2 != -EAGAIN) {
+- /* IOPOLL retry should happen for io-wq threads */
+- if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
+- goto copy_iov;
+-done:
+- kiocb_done(req, ret2, issue_flags);
+- } else {
+-copy_iov:
+- iov_iter_restore(&s->iter, &s->iter_state);
+- ret = io_setup_async_rw(req, iovec, s, false);
+- return ret ?: -EAGAIN;
+- }
+-out_free:
+- /* it's reportedly faster than delegating the null check to kfree() */
+- if (iovec)
+- kfree(iovec);
+- return ret;
+-}
+-
+-static int io_renameat_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_rename *ren = &req->rename;
+- const char __user *oldf, *newf;
+-
+- if (sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- ren->old_dfd = READ_ONCE(sqe->fd);
+- oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- ren->new_dfd = READ_ONCE(sqe->len);
+- ren->flags = READ_ONCE(sqe->rename_flags);
+-
+- ren->oldpath = getname(oldf);
+- if (IS_ERR(ren->oldpath))
+- return PTR_ERR(ren->oldpath);
+-
+- ren->newpath = getname(newf);
+- if (IS_ERR(ren->newpath)) {
+- putname(ren->oldpath);
+- return PTR_ERR(ren->newpath);
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_rename *ren = &req->rename;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
+- ren->newpath, ren->flags);
+-
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
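io_renameat_prep()/io_renameat() above map one-to-one onto liburing's prep helper. A hedged sketch, assuming an already-initialized ring as in the read example earlier:

/* Sketch: IORING_OP_RENAMEAT, mirroring io_renameat_prep()'s sqe layout. */
#include <liburing.h>
#include <fcntl.h>   /* AT_FDCWD */

static int rename_async(struct io_uring *ring, const char *from, const char *to)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	io_uring_prep_renameat(sqe, AT_FDCWD, from, AT_FDCWD, to, 0);
	io_uring_submit(ring);
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret == 0) {
		ret = cqe->res;          /* 0 or -errno from do_renameat2() */
		io_uring_cqe_seen(ring, cqe);
	}
	return ret;
}
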
+-static inline void __io_xattr_finish(struct io_kiocb *req)
+-{
+- struct io_xattr *ix = &req->xattr;
+-
+- if (ix->filename)
+- putname(ix->filename);
+-
+- kfree(ix->ctx.kname);
+- kvfree(ix->ctx.kvalue);
+-}
+-
+-static void io_xattr_finish(struct io_kiocb *req, int ret)
+-{
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+-
+- __io_xattr_finish(req);
+- io_req_complete(req, ret);
+-}
+-
+-static int __io_getxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_xattr *ix = &req->xattr;
+- const char __user *name;
+- int ret;
+-
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- ix->filename = NULL;
+- ix->ctx.kvalue = NULL;
+- name = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- ix->ctx.cvalue = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- ix->ctx.size = READ_ONCE(sqe->len);
+- ix->ctx.flags = READ_ONCE(sqe->xattr_flags);
+-
+- if (ix->ctx.flags)
+- return -EINVAL;
+-
+- ix->ctx.kname = kmalloc(sizeof(*ix->ctx.kname), GFP_KERNEL);
+- if (!ix->ctx.kname)
+- return -ENOMEM;
+-
+- ret = strncpy_from_user(ix->ctx.kname->name, name,
+- sizeof(ix->ctx.kname->name));
+- if (!ret || ret == sizeof(ix->ctx.kname->name))
+- ret = -ERANGE;
+- if (ret < 0) {
+- kfree(ix->ctx.kname);
+- return ret;
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_fgetxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- return __io_getxattr_prep(req, sqe);
+-}
+-
+-static int io_getxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_xattr *ix = &req->xattr;
+- const char __user *path;
+- int ret;
+-
+- ret = __io_getxattr_prep(req, sqe);
+- if (ret)
+- return ret;
+-
+- path = u64_to_user_ptr(READ_ONCE(sqe->addr3));
+-
+- ix->filename = getname_flags(path, LOOKUP_FOLLOW, NULL);
+- if (IS_ERR(ix->filename)) {
+- ret = PTR_ERR(ix->filename);
+- ix->filename = NULL;
+- }
+-
+- return ret;
+-}
+-
+-static int io_fgetxattr(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_xattr *ix = &req->xattr;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_getxattr(mnt_user_ns(req->file->f_path.mnt),
+- req->file->f_path.dentry,
+- &ix->ctx);
+-
+- io_xattr_finish(req, ret);
+- return 0;
+-}
+-
+-static int io_getxattr(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_xattr *ix = &req->xattr;
+- unsigned int lookup_flags = LOOKUP_FOLLOW;
+- struct path path;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+-retry:
+- ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
+- if (!ret) {
+- ret = do_getxattr(mnt_user_ns(path.mnt),
+- path.dentry,
+- &ix->ctx);
+-
+- path_put(&path);
+- if (retry_estale(ret, lookup_flags)) {
+- lookup_flags |= LOOKUP_REVAL;
+- goto retry;
+- }
+- }
+-
+- io_xattr_finish(req, ret);
+- return 0;
+-}
+-
+-static int __io_setxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_xattr *ix = &req->xattr;
+- const char __user *name;
+- int ret;
+-
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- ix->filename = NULL;
+- name = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- ix->ctx.cvalue = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- ix->ctx.kvalue = NULL;
+- ix->ctx.size = READ_ONCE(sqe->len);
+- ix->ctx.flags = READ_ONCE(sqe->xattr_flags);
+-
+- ix->ctx.kname = kmalloc(sizeof(*ix->ctx.kname), GFP_KERNEL);
+- if (!ix->ctx.kname)
+- return -ENOMEM;
+-
+- ret = setxattr_copy(name, &ix->ctx);
+- if (ret) {
+- kfree(ix->ctx.kname);
+- return ret;
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_setxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_xattr *ix = &req->xattr;
+- const char __user *path;
+- int ret;
+-
+- ret = __io_setxattr_prep(req, sqe);
+- if (ret)
+- return ret;
+-
+- path = u64_to_user_ptr(READ_ONCE(sqe->addr3));
+-
+- ix->filename = getname_flags(path, LOOKUP_FOLLOW, NULL);
+- if (IS_ERR(ix->filename)) {
+- ret = PTR_ERR(ix->filename);
+- ix->filename = NULL;
+- }
+-
+- return ret;
+-}
+-
+-static int io_fsetxattr_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- return __io_setxattr_prep(req, sqe);
+-}
+-
+-static int __io_setxattr(struct io_kiocb *req, unsigned int issue_flags,
+- struct path *path)
+-{
+- struct io_xattr *ix = &req->xattr;
+- int ret;
+-
+- ret = mnt_want_write(path->mnt);
+- if (!ret) {
+- ret = do_setxattr(mnt_user_ns(path->mnt), path->dentry, &ix->ctx);
+- mnt_drop_write(path->mnt);
+- }
+-
+- return ret;
+-}
+-
+-static int io_fsetxattr(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = __io_setxattr(req, issue_flags, &req->file->f_path);
+- io_xattr_finish(req, ret);
+-
+- return 0;
+-}
+-
+-static int io_setxattr(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_xattr *ix = &req->xattr;
+- unsigned int lookup_flags = LOOKUP_FOLLOW;
+- struct path path;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+-retry:
+- ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
+- if (!ret) {
+- ret = __io_setxattr(req, issue_flags, &path);
+- path_put(&path);
+- if (retry_estale(ret, lookup_flags)) {
+- lookup_flags |= LOOKUP_REVAL;
+- goto retry;
+- }
+- }
+-
+- io_xattr_finish(req, ret);
+- return 0;
+-}
+-
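The xattr handlers above pair naturally with liburing's io_uring_prep_setxattr()/io_uring_prep_getxattr() helpers; the signatures below follow my reading of liburing 2.2 and should be checked against your installed version. A sketch that links the two requests so the get only runs after the set succeeds:

/* Sketch: set then read back user.comment via IORING_OP_SETXATTR/GETXATTR. */
#include <liburing.h>
#include <string.h>

static int xattr_roundtrip(struct io_uring *ring, const char *path)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char value[64];
	int i, ret = 0;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_setxattr(sqe, "user.comment", "hello", path, 0,
			       strlen("hello"));
	sqe->flags |= IOSQE_IO_LINK;      /* run the getxattr only afterwards */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_getxattr(sqe, "user.comment", value, path, sizeof(value));
	io_uring_submit(ring);

	for (i = 0; i < 2; i++) {
		if (io_uring_wait_cqe(ring, &cqe))
			break;
		if (cqe->res < 0)
			ret = cqe->res;   /* -errno from the handlers above */
		io_uring_cqe_seen(ring, cqe);
	}
	return ret;
}
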
+-static int io_unlinkat_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_unlink *un = &req->unlink;
+- const char __user *fname;
+-
+- if (sqe->off || sqe->len || sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- un->dfd = READ_ONCE(sqe->fd);
+-
+- un->flags = READ_ONCE(sqe->unlink_flags);
+- if (un->flags & ~AT_REMOVEDIR)
+- return -EINVAL;
+-
+- fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- un->filename = getname(fname);
+- if (IS_ERR(un->filename))
+- return PTR_ERR(un->filename);
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_unlink *un = &req->unlink;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- if (un->flags & AT_REMOVEDIR)
+- ret = do_rmdir(un->dfd, un->filename);
+- else
+- ret = do_unlinkat(un->dfd, un->filename);
+-
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int io_mkdirat_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_mkdir *mkd = &req->mkdir;
+- const char __user *fname;
+-
+- if (sqe->off || sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- mkd->dfd = READ_ONCE(sqe->fd);
+- mkd->mode = READ_ONCE(sqe->len);
+-
+- fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- mkd->filename = getname(fname);
+- if (IS_ERR(mkd->filename))
+- return PTR_ERR(mkd->filename);
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_mkdir *mkd = &req->mkdir;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_mkdirat(mkd->dfd, mkd->filename, mkd->mode);
+-
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int io_symlinkat_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_symlink *sl = &req->symlink;
+- const char __user *oldpath, *newpath;
+-
+- if (sqe->len || sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- sl->new_dfd = READ_ONCE(sqe->fd);
+- oldpath = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- newpath = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+-
+- sl->oldpath = getname(oldpath);
+- if (IS_ERR(sl->oldpath))
+- return PTR_ERR(sl->oldpath);
+-
+- sl->newpath = getname(newpath);
+- if (IS_ERR(sl->newpath)) {
+- putname(sl->oldpath);
+- return PTR_ERR(sl->newpath);
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_symlink *sl = &req->symlink;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_symlinkat(sl->oldpath, sl->new_dfd, sl->newpath);
+-
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int io_linkat_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_hardlink *lnk = &req->hardlink;
+- const char __user *oldf, *newf;
+-
+- if (sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- lnk->old_dfd = READ_ONCE(sqe->fd);
+- lnk->new_dfd = READ_ONCE(sqe->len);
+- oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- lnk->flags = READ_ONCE(sqe->hardlink_flags);
+-
+- lnk->oldpath = getname(oldf);
+- if (IS_ERR(lnk->oldpath))
+- return PTR_ERR(lnk->oldpath);
+-
+- lnk->newpath = getname(newf);
+- if (IS_ERR(lnk->newpath)) {
+- putname(lnk->oldpath);
+- return PTR_ERR(lnk->newpath);
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_hardlink *lnk = &req->hardlink;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_linkat(lnk->old_dfd, lnk->oldpath, lnk->new_dfd,
+- lnk->newpath, lnk->flags);
+-
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
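io_mkdirat(), io_symlinkat(), and io_linkat() above all punt to a blocking context, so they combine well with IOSQE_IO_LINK chains submitted in one go. A sketch; the path names are arbitrary illustration values:

/* Sketch: mkdir, then symlink into it, as one linked chain. */
#include <liburing.h>
#include <fcntl.h>

static int make_tree(struct io_uring *ring)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_mkdirat(sqe, AT_FDCWD, "demo-dir", 0755);
	sqe->flags |= IOSQE_IO_LINK;   /* next op runs only if this succeeds */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_symlinkat(sqe, "/etc/hostname", AT_FDCWD, "demo-dir/link");

	return io_uring_submit(ring);  /* each completion carries 0 or -errno */
}
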
+-static void io_uring_cmd_work(struct io_kiocb *req, bool *locked)
+-{
+- req->uring_cmd.task_work_cb(&req->uring_cmd);
+-}
+-
+-void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
+- void (*task_work_cb)(struct io_uring_cmd *))
+-{
+- struct io_kiocb *req = container_of(ioucmd, struct io_kiocb, uring_cmd);
+-
+- req->uring_cmd.task_work_cb = task_work_cb;
+- req->io_task_work.func = io_uring_cmd_work;
+- io_req_task_work_add(req);
+-}
+-EXPORT_SYMBOL_GPL(io_uring_cmd_complete_in_task);
+-
+-static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
+- u64 extra1, u64 extra2)
+-{
+- req->extra1 = extra1;
+- req->extra2 = extra2;
+- req->flags |= REQ_F_CQE32_INIT;
+-}
+-
+-/*
+- * Called by consumers of io_uring_cmd, if they originally returned
+- * -EIOCBQUEUED upon receiving the command.
+- */
+-void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
+-{
+- struct io_kiocb *req = container_of(ioucmd, struct io_kiocb, uring_cmd);
+-
+- if (ret < 0)
+- req_set_fail(req);
+-
+- if (req->ctx->flags & IORING_SETUP_CQE32)
+- io_req_set_cqe32_extra(req, res2, 0);
+- io_req_complete(req, ret);
+-}
+-EXPORT_SYMBOL_GPL(io_uring_cmd_done);
+-
+-static int io_uring_cmd_prep_async(struct io_kiocb *req)
+-{
+- size_t cmd_size;
+-
+- cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
+-
+- memcpy(req->async_data, req->uring_cmd.cmd, cmd_size);
+- return 0;
+-}
+-
+-static int io_uring_cmd_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_uring_cmd *ioucmd = &req->uring_cmd;
+-
+- if (sqe->rw_flags || sqe->__pad1)
+- return -EINVAL;
+- ioucmd->cmd = sqe->cmd;
+- ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
+- return 0;
+-}
+-
+-static int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_uring_cmd *ioucmd = &req->uring_cmd;
+- struct io_ring_ctx *ctx = req->ctx;
+- struct file *file = req->file;
+- int ret;
+-
+- if (!req->file->f_op->uring_cmd)
+- return -EOPNOTSUPP;
+-
+- if (ctx->flags & IORING_SETUP_SQE128)
+- issue_flags |= IO_URING_F_SQE128;
+- if (ctx->flags & IORING_SETUP_CQE32)
+- issue_flags |= IO_URING_F_CQE32;
+- if (ctx->flags & IORING_SETUP_IOPOLL)
+- issue_flags |= IO_URING_F_IOPOLL;
+-
+- if (req_has_async_data(req))
+- ioucmd->cmd = req->async_data;
+-
+- ret = file->f_op->uring_cmd(ioucmd, issue_flags);
+- if (ret == -EAGAIN) {
+- if (!req_has_async_data(req)) {
+- if (io_alloc_async_data(req))
+- return -ENOMEM;
+- io_uring_cmd_prep_async(req);
+- }
+- return -EAGAIN;
+- }
+-
+- if (ret != -EIOCBQUEUED)
+- io_uring_cmd_done(ioucmd, ret, 0);
+- return 0;
+-}
+-
+-static int __io_splice_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_splice *sp = &req->splice;
+- unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
+-
+- sp->len = READ_ONCE(sqe->len);
+- sp->flags = READ_ONCE(sqe->splice_flags);
+- if (unlikely(sp->flags & ~valid_flags))
+- return -EINVAL;
+- sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
+- return 0;
+-}
+-
+-static int io_tee_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
+- return -EINVAL;
+- return __io_splice_prep(req, sqe);
+-}
+-
+-static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_splice *sp = &req->splice;
+- struct file *out = sp->file_out;
+- unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+- struct file *in;
+- long ret = 0;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- if (sp->flags & SPLICE_F_FD_IN_FIXED)
+- in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
+- else
+- in = io_file_get_normal(req, sp->splice_fd_in);
+- if (!in) {
+- ret = -EBADF;
+- goto done;
+- }
+-
+- if (sp->len)
+- ret = do_tee(in, out, sp->len, flags);
+-
+- if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+- io_put_file(in);
+-done:
+- if (ret != sp->len)
+- req_set_fail(req);
+- __io_req_complete(req, 0, ret, 0);
+- return 0;
+-}
+-
+-static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_splice *sp = &req->splice;
+-
+- sp->off_in = READ_ONCE(sqe->splice_off_in);
+- sp->off_out = READ_ONCE(sqe->off);
+- return __io_splice_prep(req, sqe);
+-}
+-
+-static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_splice *sp = &req->splice;
+- struct file *out = sp->file_out;
+- unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+- loff_t *poff_in, *poff_out;
+- struct file *in;
+- long ret = 0;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- if (sp->flags & SPLICE_F_FD_IN_FIXED)
+- in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
+- else
+- in = io_file_get_normal(req, sp->splice_fd_in);
+- if (!in) {
+- ret = -EBADF;
+- goto done;
+- }
+-
+- poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
+- poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
+-
+- if (sp->len)
+- ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
+-
+- if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+- io_put_file(in);
+-done:
+- if (ret != sp->len)
+- req_set_fail(req);
+- __io_req_complete(req, 0, ret, 0);
+- return 0;
+-}
+-
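A userspace sketch of io_splice() above; as in the kernel code, an offset of -1 means the file position is used and advanced. It assumes pipe_rd is the read end of a pipe, since splice needs a pipe on one side:

/* Sketch: IORING_OP_SPLICE, pipe-to-file, mirroring io_splice()'s offsets. */
#include <liburing.h>

static int splice_async(struct io_uring *ring, int pipe_rd, int out_fd,
			unsigned int nbytes)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_splice(sqe, pipe_rd, -1, out_fd, -1, nbytes, 0);
	return io_uring_submit(ring);
}
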
+-static int io_nop_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- return 0;
+-}
+-
+-/*
+- * IORING_OP_NOP just posts a completion event, nothing else.
+- */
+-static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- __io_req_complete(req, issue_flags, 0, 0);
+- return 0;
+-}
+-
+-static int io_msg_ring_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(sqe->addr || sqe->rw_flags || sqe->splice_fd_in ||
+- sqe->buf_index || sqe->personality))
+- return -EINVAL;
+-
+- req->msg.user_data = READ_ONCE(sqe->off);
+- req->msg.len = READ_ONCE(sqe->len);
+- return 0;
+-}
+-
+-static int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_ring_ctx *target_ctx;
+- struct io_msg *msg = &req->msg;
+- bool filled;
+- int ret;
+-
+- ret = -EBADFD;
+- if (req->file->f_op != &io_uring_fops)
+- goto done;
+-
+- ret = -EOVERFLOW;
+- target_ctx = req->file->private_data;
+-
+- spin_lock(&target_ctx->completion_lock);
+- filled = io_fill_cqe_aux(target_ctx, msg->user_data, msg->len, 0);
+- io_commit_cqring(target_ctx);
+- spin_unlock(&target_ctx->completion_lock);
+-
+- if (filled) {
+- io_cqring_ev_posted(target_ctx);
+- ret = 0;
+- }
+-
+-done:
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- /* put file to avoid an attempt to IOPOLL the req */
+- io_put_file(req->file);
+- req->file = NULL;
+- return 0;
+-}
+-
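io_msg_ring() above is the 5.19 way to wake another ring without a shared eventfd: it posts a caller-chosen user_data/len pair into the target's CQ. A sketch using liburing 2.2's helper; 0xcafe is an arbitrary tag for the example:

/* Sketch: post a CQE into another ring with IORING_OP_MSG_RING. */
#include <liburing.h>

static int poke_other_ring(struct io_uring *me, struct io_uring *target)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(me);

	/* target sees a CQE with user_data 0xcafe and res 0; see io_msg_ring() */
	io_uring_prep_msg_ring(sqe, target->ring_fd, 0, 0xcafe, 0);
	return io_uring_submit(me);
}
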
+-static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(sqe->addr || sqe->buf_index || sqe->splice_fd_in))
+- return -EINVAL;
+-
+- req->sync.flags = READ_ONCE(sqe->fsync_flags);
+- if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
+- return -EINVAL;
+-
+- req->sync.off = READ_ONCE(sqe->off);
+- req->sync.len = READ_ONCE(sqe->len);
+- return 0;
+-}
+-
+-static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- loff_t end = req->sync.off + req->sync.len;
+- int ret;
+-
+- /* fsync always requires a blocking context */
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = vfs_fsync_range(req->file, req->sync.off,
+- end > 0 ? end : LLONG_MAX,
+- req->sync.flags & IORING_FSYNC_DATASYNC);
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int io_fallocate_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (sqe->buf_index || sqe->rw_flags || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->sync.off = READ_ONCE(sqe->off);
+- req->sync.len = READ_ONCE(sqe->addr);
+- req->sync.mode = READ_ONCE(sqe->len);
+- return 0;
+-}
+-
+-static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- int ret;
+-
+-	/* fallocate always requires a blocking context */
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+- ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
+- req->sync.len);
+- if (ret >= 0)
+- fsnotify_modify(req->file);
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- const char __user *fname;
+- int ret;
+-
+- if (unlikely(sqe->buf_index))
+- return -EINVAL;
+- if (unlikely(req->flags & REQ_F_FIXED_FILE))
+- return -EBADF;
+-
+- /* open.how should be already initialised */
+- if (!(req->open.how.flags & O_PATH) && force_o_largefile())
+- req->open.how.flags |= O_LARGEFILE;
+-
+- req->open.dfd = READ_ONCE(sqe->fd);
+- fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- req->open.filename = getname(fname);
+- if (IS_ERR(req->open.filename)) {
+- ret = PTR_ERR(req->open.filename);
+- req->open.filename = NULL;
+- return ret;
+- }
+-
+- req->open.file_slot = READ_ONCE(sqe->file_index);
+- if (req->open.file_slot && (req->open.how.flags & O_CLOEXEC))
+- return -EINVAL;
+-
+- req->open.nofile = rlimit(RLIMIT_NOFILE);
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- u64 mode = READ_ONCE(sqe->len);
+- u64 flags = READ_ONCE(sqe->open_flags);
+-
+- req->open.how = build_open_how(flags, mode);
+- return __io_openat_prep(req, sqe);
+-}
+-
+-static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct open_how __user *how;
+- size_t len;
+- int ret;
+-
+- how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- len = READ_ONCE(sqe->len);
+- if (len < OPEN_HOW_SIZE_VER0)
+- return -EINVAL;
+-
+- ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
+- len);
+- if (ret)
+- return ret;
+-
+- return __io_openat_prep(req, sqe);
+-}
+-
+-static int io_file_bitmap_get(struct io_ring_ctx *ctx)
+-{
+- struct io_file_table *table = &ctx->file_table;
+- unsigned long nr = ctx->nr_user_files;
+- int ret;
+-
+- do {
+- ret = find_next_zero_bit(table->bitmap, nr, table->alloc_hint);
+- if (ret != nr)
+- return ret;
+-
+- if (!table->alloc_hint)
+- break;
+-
+- nr = table->alloc_hint;
+- table->alloc_hint = 0;
+- } while (1);
+-
+- return -ENFILE;
+-}
+-
+-/*
+- * Note: when io_fixed_fd_install() returns an error value, it will ensure
+- * fput() is called correspondingly.
+- */
+-static int io_fixed_fd_install(struct io_kiocb *req, unsigned int issue_flags,
+- struct file *file, unsigned int file_slot)
+-{
+- bool alloc_slot = file_slot == IORING_FILE_INDEX_ALLOC;
+- struct io_ring_ctx *ctx = req->ctx;
+- int ret;
+-
+- io_ring_submit_lock(ctx, issue_flags);
+-
+- if (alloc_slot) {
+- ret = io_file_bitmap_get(ctx);
+- if (unlikely(ret < 0))
+- goto err;
+- file_slot = ret;
+- } else {
+- file_slot--;
+- }
+-
+- ret = io_install_fixed_file(req, file, issue_flags, file_slot);
+- if (!ret && alloc_slot)
+- ret = file_slot;
+-err:
+- io_ring_submit_unlock(ctx, issue_flags);
+- if (unlikely(ret < 0))
+- fput(file);
+- return ret;
+-}
+-
+-static int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct open_flags op;
+- struct file *file;
+- bool resolve_nonblock, nonblock_set;
+- bool fixed = !!req->open.file_slot;
+- int ret;
+-
+- ret = build_open_flags(&req->open.how, &op);
+- if (ret)
+- goto err;
+- nonblock_set = op.open_flag & O_NONBLOCK;
+- resolve_nonblock = req->open.how.resolve & RESOLVE_CACHED;
+- if (issue_flags & IO_URING_F_NONBLOCK) {
+- /*
+- * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
+-		 * it'll always return -EAGAIN
+- */
+- if (req->open.how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
+- return -EAGAIN;
+- op.lookup_flags |= LOOKUP_CACHED;
+- op.open_flag |= O_NONBLOCK;
+- }
+-
+- if (!fixed) {
+- ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
+- if (ret < 0)
+- goto err;
+- }
+-
+- file = do_filp_open(req->open.dfd, req->open.filename, &op);
+- if (IS_ERR(file)) {
+- /*
+- * We could hang on to this 'fd' on retrying, but seems like
+- * marginal gain for something that is now known to be a slower
+- * path. So just put it, and we'll get a new one when we retry.
+- */
+- if (!fixed)
+- put_unused_fd(ret);
+-
+- ret = PTR_ERR(file);
+- /* only retry if RESOLVE_CACHED wasn't already set by application */
+- if (ret == -EAGAIN &&
+- (!resolve_nonblock && (issue_flags & IO_URING_F_NONBLOCK)))
+- return -EAGAIN;
+- goto err;
+- }
+-
+- if ((issue_flags & IO_URING_F_NONBLOCK) && !nonblock_set)
+- file->f_flags &= ~O_NONBLOCK;
+- fsnotify_open(file);
+-
+- if (!fixed)
+- fd_install(ret, file);
+- else
+- ret = io_fixed_fd_install(req, issue_flags, file,
+- req->open.file_slot);
+-err:
+- putname(req->open.filename);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
+-static int io_openat(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- return io_openat2(req, issue_flags);
+-}
+-
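io_file_bitmap_get() and io_fixed_fd_install() above implement 5.19's IORING_FILE_INDEX_ALLOC mode, where the allocated slot comes back in cqe->res instead of a normal fd being installed. A sketch, assuming liburing 2.2+ for the sparse-table and direct-open helpers; the table size of 16 is arbitrary:

/* Sketch: open straight into a kernel-allocated fixed-file slot. */
#include <liburing.h>
#include <fcntl.h>

static int open_direct(struct io_uring *ring, const char *path)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	/* reserve a sparse fixed-file table first */
	ret = io_uring_register_files_sparse(ring, 16);
	if (ret)
		return ret;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_openat_direct(sqe, AT_FDCWD, path, O_RDONLY, 0,
				    IORING_FILE_INDEX_ALLOC);
	io_uring_submit(ring);
	if (io_uring_wait_cqe(ring, &cqe))
		return -1;
	ret = cqe->res;              /* allocated slot index, or -errno */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}
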
+-static int io_remove_buffers_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_provide_buf *p = &req->pbuf;
+- u64 tmp;
+-
+- if (sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
+- sqe->splice_fd_in)
+- return -EINVAL;
+-
+- tmp = READ_ONCE(sqe->fd);
+- if (!tmp || tmp > USHRT_MAX)
+- return -EINVAL;
+-
+- memset(p, 0, sizeof(*p));
+- p->nbufs = tmp;
+- p->bgid = READ_ONCE(sqe->buf_group);
+- return 0;
+-}
+-
+-static int __io_remove_buffers(struct io_ring_ctx *ctx,
+- struct io_buffer_list *bl, unsigned nbufs)
+-{
+- unsigned i = 0;
+-
+- /* shouldn't happen */
+- if (!nbufs)
+- return 0;
+-
+- if (bl->buf_nr_pages) {
+- int j;
+-
+- i = bl->buf_ring->tail - bl->head;
+- for (j = 0; j < bl->buf_nr_pages; j++)
+- unpin_user_page(bl->buf_pages[j]);
+- kvfree(bl->buf_pages);
+- bl->buf_pages = NULL;
+- bl->buf_nr_pages = 0;
+- /* make sure it's seen as empty */
+- INIT_LIST_HEAD(&bl->buf_list);
+- return i;
+- }
+-
+- /* the head kbuf is the list itself */
+- while (!list_empty(&bl->buf_list)) {
+- struct io_buffer *nxt;
+-
+- nxt = list_first_entry(&bl->buf_list, struct io_buffer, list);
+- list_del(&nxt->list);
+- if (++i == nbufs)
+- return i;
+- cond_resched();
+- }
+- i++;
+-
+- return i;
+-}
+-
+-static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_provide_buf *p = &req->pbuf;
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_buffer_list *bl;
+- int ret = 0;
+-
+- io_ring_submit_lock(ctx, issue_flags);
+-
+- ret = -ENOENT;
+- bl = io_buffer_get_list(ctx, p->bgid);
+- if (bl) {
+- ret = -EINVAL;
+- /* can't use provide/remove buffers command on mapped buffers */
+- if (!bl->buf_nr_pages)
+- ret = __io_remove_buffers(ctx, bl, p->nbufs);
+- }
+- if (ret < 0)
+- req_set_fail(req);
+-
+- /* complete before unlock, IOPOLL may need the lock */
+- __io_req_complete(req, issue_flags, ret, 0);
+- io_ring_submit_unlock(ctx, issue_flags);
+- return 0;
+-}
+-
+-static int io_provide_buffers_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- unsigned long size, tmp_check;
+- struct io_provide_buf *p = &req->pbuf;
+- u64 tmp;
+-
+- if (sqe->rw_flags || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- tmp = READ_ONCE(sqe->fd);
+- if (!tmp || tmp > USHRT_MAX)
+- return -E2BIG;
+- p->nbufs = tmp;
+- p->addr = READ_ONCE(sqe->addr);
+- p->len = READ_ONCE(sqe->len);
+-
+- if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
+- &size))
+- return -EOVERFLOW;
+- if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
+- return -EOVERFLOW;
+-
+- size = (unsigned long)p->len * p->nbufs;
+- if (!access_ok(u64_to_user_ptr(p->addr), size))
+- return -EFAULT;
+-
+- p->bgid = READ_ONCE(sqe->buf_group);
+- tmp = READ_ONCE(sqe->off);
+- if (tmp > USHRT_MAX)
+- return -E2BIG;
+- p->bid = tmp;
+- return 0;
+-}
+-
+-static int io_refill_buffer_cache(struct io_ring_ctx *ctx)
+-{
+- struct io_buffer *buf;
+- struct page *page;
+- int bufs_in_page;
+-
+- /*
+-	 * Completions that don't happen inline (e.g. not under uring_lock) will
+- * add to ->io_buffers_comp. If we don't have any free buffers, check
+- * the completion list and splice those entries first.
+- */
+- if (!list_empty_careful(&ctx->io_buffers_comp)) {
+- spin_lock(&ctx->completion_lock);
+- if (!list_empty(&ctx->io_buffers_comp)) {
+- list_splice_init(&ctx->io_buffers_comp,
+- &ctx->io_buffers_cache);
+- spin_unlock(&ctx->completion_lock);
+- return 0;
+- }
+- spin_unlock(&ctx->completion_lock);
+- }
+-
+- /*
+- * No free buffers and no completion entries either. Allocate a new
+- * page worth of buffer entries and add those to our freelist.
+- */
+- page = alloc_page(GFP_KERNEL_ACCOUNT);
+- if (!page)
+- return -ENOMEM;
+-
+- list_add(&page->lru, &ctx->io_buffers_pages);
+-
+- buf = page_address(page);
+- bufs_in_page = PAGE_SIZE / sizeof(*buf);
+- while (bufs_in_page) {
+- list_add_tail(&buf->list, &ctx->io_buffers_cache);
+- buf++;
+- bufs_in_page--;
+- }
+-
+- return 0;
+-}
+-
+-static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
+- struct io_buffer_list *bl)
+-{
+- struct io_buffer *buf;
+- u64 addr = pbuf->addr;
+- int i, bid = pbuf->bid;
+-
+- for (i = 0; i < pbuf->nbufs; i++) {
+- if (list_empty(&ctx->io_buffers_cache) &&
+- io_refill_buffer_cache(ctx))
+- break;
+- buf = list_first_entry(&ctx->io_buffers_cache, struct io_buffer,
+- list);
+- list_move_tail(&buf->list, &bl->buf_list);
+- buf->addr = addr;
+- buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
+- buf->bid = bid;
+- buf->bgid = pbuf->bgid;
+- addr += pbuf->len;
+- bid++;
+- cond_resched();
+- }
+-
+- return i ? 0 : -ENOMEM;
+-}
+-
+-static __cold int io_init_bl_list(struct io_ring_ctx *ctx)
+-{
+- int i;
+-
+- ctx->io_bl = kcalloc(BGID_ARRAY, sizeof(struct io_buffer_list),
+- GFP_KERNEL);
+- if (!ctx->io_bl)
+- return -ENOMEM;
+-
+- for (i = 0; i < BGID_ARRAY; i++) {
+- INIT_LIST_HEAD(&ctx->io_bl[i].buf_list);
+- ctx->io_bl[i].bgid = i;
+- }
+-
+- return 0;
+-}
+-
+-static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_provide_buf *p = &req->pbuf;
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_buffer_list *bl;
+- int ret = 0;
+-
+- io_ring_submit_lock(ctx, issue_flags);
+-
+- if (unlikely(p->bgid < BGID_ARRAY && !ctx->io_bl)) {
+- ret = io_init_bl_list(ctx);
+- if (ret)
+- goto err;
+- }
+-
+- bl = io_buffer_get_list(ctx, p->bgid);
+- if (unlikely(!bl)) {
+- bl = kzalloc(sizeof(*bl), GFP_KERNEL);
+- if (!bl) {
+- ret = -ENOMEM;
+- goto err;
+- }
+- INIT_LIST_HEAD(&bl->buf_list);
+- ret = io_buffer_add_list(ctx, bl, p->bgid);
+- if (ret) {
+- kfree(bl);
+- goto err;
+- }
+- }
+- /* can't add buffers via this command for a mapped buffer ring */
+- if (bl->buf_nr_pages) {
+- ret = -EINVAL;
+- goto err;
+- }
+-
+- ret = io_add_buffers(ctx, p, bl);
+-err:
+- if (ret < 0)
+- req_set_fail(req);
+- /* complete before unlock, IOPOLL may need the lock */
+- __io_req_complete(req, issue_flags, ret, 0);
+- io_ring_submit_unlock(ctx, issue_flags);
+- return 0;
+-}
+-
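io_provide_buffers() above services the legacy buffer-group command, the counterpart of the mapped ring shown earlier. A sketch; group id, count, and size are arbitrary, and the pool is deliberately leaked for brevity:

/* Sketch: the legacy IORING_OP_PROVIDE_BUFFERS path handled above. */
#include <liburing.h>
#include <stdlib.h>

#define BGID  1
#define NB    4
#define BSZ   4096

static int give_buffers(struct io_uring *ring)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	char *pool = malloc(NB * BSZ);

	if (!pool)
		return -1;
	/* NB buffers of BSZ bytes each, group BGID, buffer ids starting at 0 */
	io_uring_prep_provide_buffers(sqe, pool, BSZ, NB, BGID, 0);
	return io_uring_submit(ring);
}

As with the ring-mapped variant, consumers then submit reads with IOSQE_BUFFER_SELECT and sqe->buf_group = BGID.
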
+-static int io_epoll_ctl_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+-#if defined(CONFIG_EPOLL)
+- if (sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->epoll.epfd = READ_ONCE(sqe->fd);
+- req->epoll.op = READ_ONCE(sqe->len);
+- req->epoll.fd = READ_ONCE(sqe->off);
+-
+- if (ep_op_has_event(req->epoll.op)) {
+- struct epoll_event __user *ev;
+-
+- ev = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- if (copy_from_user(&req->epoll.event, ev, sizeof(*ev)))
+- return -EFAULT;
+- }
+-
+- return 0;
+-#else
+- return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
+-{
+-#if defined(CONFIG_EPOLL)
+- struct io_epoll *ie = &req->epoll;
+- int ret;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+-
+- ret = do_epoll_ctl(ie->epfd, ie->op, ie->fd, &ie->event, force_nonblock);
+- if (force_nonblock && ret == -EAGAIN)
+- return -EAGAIN;
+-
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-#else
+- return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+-#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+- if (sqe->buf_index || sqe->off || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->madvise.addr = READ_ONCE(sqe->addr);
+- req->madvise.len = READ_ONCE(sqe->len);
+- req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
+- return 0;
+-#else
+- return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
+-{
+-#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+- struct io_madvise *ma = &req->madvise;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
+- io_req_complete(req, ret);
+- return 0;
+-#else
+- return -EOPNOTSUPP;
+-#endif
+-}
+-
+-static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (sqe->buf_index || sqe->addr || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->fadvise.offset = READ_ONCE(sqe->off);
+- req->fadvise.len = READ_ONCE(sqe->len);
+- req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
+- return 0;
+-}
+-
+-static int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_fadvise *fa = &req->fadvise;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK) {
+- switch (fa->advice) {
+- case POSIX_FADV_NORMAL:
+- case POSIX_FADV_RANDOM:
+- case POSIX_FADV_SEQUENTIAL:
+- break;
+- default:
+- return -EAGAIN;
+- }
+- }
+-
+- ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
+-static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- const char __user *path;
+-
+- if (sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- if (req->flags & REQ_F_FIXED_FILE)
+- return -EBADF;
+-
+- req->statx.dfd = READ_ONCE(sqe->fd);
+- req->statx.mask = READ_ONCE(sqe->len);
+- path = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- req->statx.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- req->statx.flags = READ_ONCE(sqe->statx_flags);
+-
+- req->statx.filename = getname_flags(path,
+- getname_statx_lookup_flags(req->statx.flags),
+- NULL);
+-
+- if (IS_ERR(req->statx.filename)) {
+- int ret = PTR_ERR(req->statx.filename);
+-
+- req->statx.filename = NULL;
+- return ret;
+- }
+-
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return 0;
+-}
+-
+-static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_statx *ctx = &req->statx;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
+- ctx->buffer);
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (sqe->off || sqe->addr || sqe->len || sqe->rw_flags || sqe->buf_index)
+- return -EINVAL;
+- if (req->flags & REQ_F_FIXED_FILE)
+- return -EBADF;
+-
+- req->close.fd = READ_ONCE(sqe->fd);
+- req->close.file_slot = READ_ONCE(sqe->file_index);
+- if (req->close.file_slot && req->close.fd)
+- return -EINVAL;
+-
+- return 0;
+-}
+-
+-static int io_close(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct files_struct *files = current->files;
+- struct io_close *close = &req->close;
+- struct fdtable *fdt;
+- struct file *file;
+- int ret = -EBADF;
+-
+- if (req->close.file_slot) {
+- ret = io_close_fixed(req, issue_flags);
+- goto err;
+- }
+-
+- spin_lock(&files->file_lock);
+- fdt = files_fdtable(files);
+- if (close->fd >= fdt->max_fds) {
+- spin_unlock(&files->file_lock);
+- goto err;
+- }
+- file = rcu_dereference_protected(fdt->fd[close->fd],
+- lockdep_is_held(&files->file_lock));
+- if (!file || file->f_op == &io_uring_fops) {
+- spin_unlock(&files->file_lock);
+- goto err;
+- }
+-
+- /* if the file has a flush method, be safe and punt to async */
+- if (file->f_op->flush && (issue_flags & IO_URING_F_NONBLOCK)) {
+- spin_unlock(&files->file_lock);
+- return -EAGAIN;
+- }
+-
+- file = __close_fd_get_file(close->fd);
+- spin_unlock(&files->file_lock);
+- if (!file)
+- goto err;
+-
+- /* No ->flush() or already async, safely close from here */
+- ret = filp_close(file, current->files);
+-err:
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
+-static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(sqe->addr || sqe->buf_index || sqe->splice_fd_in))
+- return -EINVAL;
+-
+- req->sync.off = READ_ONCE(sqe->off);
+- req->sync.len = READ_ONCE(sqe->len);
+- req->sync.flags = READ_ONCE(sqe->sync_range_flags);
+- return 0;
+-}
+-
+-static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- int ret;
+-
+- /* sync_file_range always requires a blocking context */
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- ret = sync_file_range(req->file, req->sync.off, req->sync.len,
+- req->sync.flags);
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-#if defined(CONFIG_NET)
+-static int io_shutdown_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(sqe->off || sqe->addr || sqe->rw_flags ||
+- sqe->buf_index || sqe->splice_fd_in))
+- return -EINVAL;
+-
+- req->shutdown.how = READ_ONCE(sqe->len);
+- return 0;
+-}
+-
+-static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct socket *sock;
+- int ret;
+-
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- return -EAGAIN;
+-
+- sock = sock_from_file(req->file);
+- if (unlikely(!sock))
+- return -ENOTSOCK;
+-
+- ret = __sys_shutdown_sock(sock, req->shutdown.how);
+- io_req_complete(req, ret);
+- return 0;
+-}
+-
+-static bool io_net_retry(struct socket *sock, int flags)
+-{
+- if (!(flags & MSG_WAITALL))
+- return false;
+- return sock->type == SOCK_STREAM || sock->type == SOCK_SEQPACKET;
+-}
+-
+-static int io_setup_async_msg(struct io_kiocb *req,
+- struct io_async_msghdr *kmsg)
+-{
+- struct io_async_msghdr *async_msg = req->async_data;
+-
+- if (async_msg)
+- return -EAGAIN;
+- if (io_alloc_async_data(req)) {
+- kfree(kmsg->free_iov);
+- return -ENOMEM;
+- }
+- async_msg = req->async_data;
+- req->flags |= REQ_F_NEED_CLEANUP;
+- memcpy(async_msg, kmsg, sizeof(*kmsg));
+- async_msg->msg.msg_name = &async_msg->addr;
+-	/* if we were using fast_iov, set it to the new one */
+- if (!async_msg->free_iov)
+- async_msg->msg.msg_iter.iov = async_msg->fast_iov;
+-
+- return -EAGAIN;
+-}
+-
+-static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+- struct io_async_msghdr *iomsg)
+-{
+- iomsg->msg.msg_name = &iomsg->addr;
+- iomsg->free_iov = iomsg->fast_iov;
+- return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
+- req->sr_msg.msg_flags, &iomsg->free_iov);
+-}
+-
+-static int io_sendmsg_prep_async(struct io_kiocb *req)
+-{
+- int ret;
+-
+- ret = io_sendmsg_copy_hdr(req, req->async_data);
+- if (!ret)
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return ret;
+-}
+-
+-static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+-
+- if (unlikely(sqe->file_index || sqe->addr2))
+- return -EINVAL;
+-
+- sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- sr->len = READ_ONCE(sqe->len);
+- sr->flags = READ_ONCE(sqe->ioprio);
+- if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
+- return -EINVAL;
+- sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
+- if (sr->msg_flags & MSG_DONTWAIT)
+- req->flags |= REQ_F_NOWAIT;
+-
+-#ifdef CONFIG_COMPAT
+- if (req->ctx->compat)
+- sr->msg_flags |= MSG_CMSG_COMPAT;
+-#endif
+- sr->done_io = 0;
+- return 0;
+-}
+-
+-static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_async_msghdr iomsg, *kmsg;
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct socket *sock;
+- unsigned flags;
+- int min_ret = 0;
+- int ret;
+-
+- sock = sock_from_file(req->file);
+- if (unlikely(!sock))
+- return -ENOTSOCK;
+-
+- if (req_has_async_data(req)) {
+- kmsg = req->async_data;
+- } else {
+- ret = io_sendmsg_copy_hdr(req, &iomsg);
+- if (ret)
+- return ret;
+- kmsg = &iomsg;
+- }
+-
+- if (!(req->flags & REQ_F_POLLED) &&
+- (sr->flags & IORING_RECVSEND_POLL_FIRST))
+- return io_setup_async_msg(req, kmsg);
+-
+- flags = sr->msg_flags;
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- flags |= MSG_DONTWAIT;
+- if (flags & MSG_WAITALL)
+- min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+-
+- ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
+-
+- if (ret < min_ret) {
+- if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+- return io_setup_async_msg(req, kmsg);
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- if (ret > 0 && io_net_retry(sock, flags)) {
+- sr->done_io += ret;
+- req->flags |= REQ_F_PARTIAL_IO;
+- return io_setup_async_msg(req, kmsg);
+- }
+- req_set_fail(req);
+- }
+- /* fast path, check for non-NULL to avoid function call */
+- if (kmsg->free_iov)
+- kfree(kmsg->free_iov);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- if (ret >= 0)
+- ret += sr->done_io;
+- else if (sr->done_io)
+- ret = sr->done_io;
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
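Under MSG_WAITALL, io_sendmsg() sets min_ret to the full iterator size and, via io_net_retry(), retries short sends on stream/seqpacket sockets instead of completing early. A minimal sketch of driving the opcode from userspace, illustrative only (assumes liburing >= 2.1; error handling trimmed):

#include <liburing.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int sv[2];
	char payload[] = "ping";
	struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) - 1 };
	struct msghdr mh = { .msg_iov = &iov, .msg_iovlen = 1 };

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0 ||
	    io_uring_queue_init(4, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	/* MSG_WAITALL makes a short send retryable (see io_net_retry())
	 * rather than completing with a partial result */
	io_uring_prep_sendmsg(sqe, sv[0], &mh, MSG_WAITALL);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("sendmsg res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
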
+-static int io_send(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct msghdr msg;
+- struct iovec iov;
+- struct socket *sock;
+- unsigned flags;
+- int min_ret = 0;
+- int ret;
+-
+- if (!(req->flags & REQ_F_POLLED) &&
+- (sr->flags & IORING_RECVSEND_POLL_FIRST))
+- return -EAGAIN;
+-
+- sock = sock_from_file(req->file);
+- if (unlikely(!sock))
+- return -ENOTSOCK;
+-
+- ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
+- if (unlikely(ret))
+- return ret;
+-
+- msg.msg_name = NULL;
+- msg.msg_control = NULL;
+- msg.msg_controllen = 0;
+- msg.msg_namelen = 0;
+-
+- flags = sr->msg_flags;
+- if (issue_flags & IO_URING_F_NONBLOCK)
+- flags |= MSG_DONTWAIT;
+- if (flags & MSG_WAITALL)
+- min_ret = iov_iter_count(&msg.msg_iter);
+-
+- msg.msg_flags = flags;
+- ret = sock_sendmsg(sock, &msg);
+- if (ret < min_ret) {
+- if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+- return -EAGAIN;
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- if (ret > 0 && io_net_retry(sock, flags)) {
+- sr->len -= ret;
+- sr->buf += ret;
+- sr->done_io += ret;
+- req->flags |= REQ_F_PARTIAL_IO;
+- return -EAGAIN;
+- }
+- req_set_fail(req);
+- }
+- if (ret >= 0)
+- ret += sr->done_io;
+- else if (sr->done_io)
+- ret = sr->done_io;
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
+-static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
+- struct io_async_msghdr *iomsg)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct iovec __user *uiov;
+- size_t iov_len;
+- int ret;
+-
+- ret = __copy_msghdr_from_user(&iomsg->msg, sr->umsg,
+- &iomsg->uaddr, &uiov, &iov_len);
+- if (ret)
+- return ret;
+-
+- if (req->flags & REQ_F_BUFFER_SELECT) {
+- if (iov_len > 1)
+- return -EINVAL;
+- if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
+- return -EFAULT;
+- sr->len = iomsg->fast_iov[0].iov_len;
+- iomsg->free_iov = NULL;
+- } else {
+- iomsg->free_iov = iomsg->fast_iov;
+- ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
+- &iomsg->free_iov, &iomsg->msg.msg_iter,
+- false);
+- if (ret > 0)
+- ret = 0;
+- }
+-
+- return ret;
+-}
+-
+-#ifdef CONFIG_COMPAT
+-static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
+- struct io_async_msghdr *iomsg)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct compat_iovec __user *uiov;
+- compat_uptr_t ptr;
+- compat_size_t len;
+- int ret;
+-
+- ret = __get_compat_msghdr(&iomsg->msg, sr->umsg_compat, &iomsg->uaddr,
+- &ptr, &len);
+- if (ret)
+- return ret;
+-
+- uiov = compat_ptr(ptr);
+- if (req->flags & REQ_F_BUFFER_SELECT) {
+- compat_ssize_t clen;
+-
+- if (len > 1)
+- return -EINVAL;
+- if (!access_ok(uiov, sizeof(*uiov)))
+- return -EFAULT;
+- if (__get_user(clen, &uiov->iov_len))
+- return -EFAULT;
+- if (clen < 0)
+- return -EINVAL;
+- sr->len = clen;
+- iomsg->free_iov = NULL;
+- } else {
+- iomsg->free_iov = iomsg->fast_iov;
+- ret = __import_iovec(READ, (struct iovec __user *)uiov, len,
+- UIO_FASTIOV, &iomsg->free_iov,
+- &iomsg->msg.msg_iter, true);
+- if (ret < 0)
+- return ret;
+- }
+-
+- return 0;
+-}
+-#endif
+-
+-static int io_recvmsg_copy_hdr(struct io_kiocb *req,
+- struct io_async_msghdr *iomsg)
+-{
+- iomsg->msg.msg_name = &iomsg->addr;
+-
+-#ifdef CONFIG_COMPAT
+- if (req->ctx->compat)
+- return __io_compat_recvmsg_copy_hdr(req, iomsg);
+-#endif
+-
+- return __io_recvmsg_copy_hdr(req, iomsg);
+-}
+-
+-static int io_recvmsg_prep_async(struct io_kiocb *req)
+-{
+- int ret;
+-
+- ret = io_recvmsg_copy_hdr(req, req->async_data);
+- if (!ret)
+- req->flags |= REQ_F_NEED_CLEANUP;
+- return ret;
+-}
+-
+-static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+-
+- if (unlikely(sqe->file_index || sqe->addr2))
+- return -EINVAL;
+-
+- sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- sr->len = READ_ONCE(sqe->len);
+- sr->flags = READ_ONCE(sqe->ioprio);
+- if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
+- return -EINVAL;
+- sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
+- if (sr->msg_flags & MSG_DONTWAIT)
+- req->flags |= REQ_F_NOWAIT;
+-
+-#ifdef CONFIG_COMPAT
+- if (req->ctx->compat)
+- sr->msg_flags |= MSG_CMSG_COMPAT;
+-#endif
+- sr->done_io = 0;
+- return 0;
+-}
+-
+-static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_async_msghdr iomsg, *kmsg;
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct socket *sock;
+- unsigned int cflags;
+- unsigned flags;
+- int ret, min_ret = 0;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+-
+- sock = sock_from_file(req->file);
+- if (unlikely(!sock))
+- return -ENOTSOCK;
+-
+- if (req_has_async_data(req)) {
+- kmsg = req->async_data;
+- } else {
+- ret = io_recvmsg_copy_hdr(req, &iomsg);
+- if (ret)
+- return ret;
+- kmsg = &iomsg;
+- }
+-
+- if (!(req->flags & REQ_F_POLLED) &&
+- (sr->flags & IORING_RECVSEND_POLL_FIRST))
+- return io_setup_async_msg(req, kmsg);
+-
+- if (io_do_buffer_select(req)) {
+- void __user *buf;
+-
+- buf = io_buffer_select(req, &sr->len, issue_flags);
+- if (!buf)
+- return -ENOBUFS;
+- kmsg->fast_iov[0].iov_base = buf;
+- kmsg->fast_iov[0].iov_len = sr->len;
+- iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov, 1,
+- sr->len);
+- }
+-
+- flags = sr->msg_flags;
+- if (force_nonblock)
+- flags |= MSG_DONTWAIT;
+- if (flags & MSG_WAITALL)
+- min_ret = iov_iter_count(&kmsg->msg.msg_iter);
+-
+- kmsg->msg.msg_get_inq = 1;
+- ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg, kmsg->uaddr, flags);
+- if (ret < min_ret) {
+- if (ret == -EAGAIN && force_nonblock)
+- return io_setup_async_msg(req, kmsg);
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- if (ret > 0 && io_net_retry(sock, flags)) {
+- sr->done_io += ret;
+- req->flags |= REQ_F_PARTIAL_IO;
+- return io_setup_async_msg(req, kmsg);
+- }
+- req_set_fail(req);
+- } else if ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
+- req_set_fail(req);
+- }
+-
+- /* fast path, check for non-NULL to avoid function call */
+- if (kmsg->free_iov)
+- kfree(kmsg->free_iov);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- if (ret >= 0)
+- ret += sr->done_io;
+- else if (sr->done_io)
+- ret = sr->done_io;
+- cflags = io_put_kbuf(req, issue_flags);
+- if (kmsg->msg.msg_inq)
+- cflags |= IORING_CQE_F_SOCK_NONEMPTY;
+- __io_req_complete(req, issue_flags, ret, cflags);
+- return 0;
+-}
+-
+-static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_sr_msg *sr = &req->sr_msg;
+- struct msghdr msg;
+- struct socket *sock;
+- struct iovec iov;
+- unsigned int cflags;
+- unsigned flags;
+- int ret, min_ret = 0;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+-
+- if (!(req->flags & REQ_F_POLLED) &&
+- (sr->flags & IORING_RECVSEND_POLL_FIRST))
+- return -EAGAIN;
+-
+- sock = sock_from_file(req->file);
+- if (unlikely(!sock))
+- return -ENOTSOCK;
+-
+- if (io_do_buffer_select(req)) {
+- void __user *buf;
+-
+- buf = io_buffer_select(req, &sr->len, issue_flags);
+- if (!buf)
+- return -ENOBUFS;
+- sr->buf = buf;
+- }
+-
+- ret = import_single_range(READ, sr->buf, sr->len, &iov, &msg.msg_iter);
+- if (unlikely(ret))
+- goto out_free;
+-
+- msg.msg_name = NULL;
+- msg.msg_namelen = 0;
+- msg.msg_control = NULL;
+- msg.msg_get_inq = 1;
+- msg.msg_flags = 0;
+- msg.msg_controllen = 0;
+- msg.msg_iocb = NULL;
+-
+- flags = sr->msg_flags;
+- if (force_nonblock)
+- flags |= MSG_DONTWAIT;
+- if (flags & MSG_WAITALL)
+- min_ret = iov_iter_count(&msg.msg_iter);
+-
+- ret = sock_recvmsg(sock, &msg, flags);
+- if (ret < min_ret) {
+- if (ret == -EAGAIN && force_nonblock)
+- return -EAGAIN;
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- if (ret > 0 && io_net_retry(sock, flags)) {
+- sr->len -= ret;
+- sr->buf += ret;
+- sr->done_io += ret;
+- req->flags |= REQ_F_PARTIAL_IO;
+- return -EAGAIN;
+- }
+- req_set_fail(req);
+- } else if ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
+-out_free:
+- req_set_fail(req);
+- }
+-
+- if (ret >= 0)
+- ret += sr->done_io;
+- else if (sr->done_io)
+- ret = sr->done_io;
+- cflags = io_put_kbuf(req, issue_flags);
+- if (msg.msg_inq)
+- cflags |= IORING_CQE_F_SOCK_NONEMPTY;
+- __io_req_complete(req, issue_flags, ret, cflags);
+- return 0;
+-}
+-
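The IORING_RECVSEND_POLL_FIRST flag parsed in the prep handlers above skips the speculative first receive attempt and arms poll immediately. A hedged sketch (assumes liburing >= 2.2 and kernel >= 5.19; on 5.19 the flag travels in sqe->ioprio, as read by io_recvmsg_prep()):

#include <liburing.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[16];
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0 ||
	    io_uring_queue_init(4, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_recv(sqe, sv[1], buf, sizeof(buf), 0);
	/* go straight to poll instead of trying the receive first */
	sqe->ioprio = IORING_RECVSEND_POLL_FIRST;
	io_uring_submit(&ring);

	(void)write(sv[0], "pong", 4);	/* make the poll fire */
	io_uring_wait_cqe(&ring, &cqe);
	printf("recv res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
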
+-static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_accept *accept = &req->accept;
+- unsigned flags;
+-
+- if (sqe->len || sqe->buf_index)
+- return -EINVAL;
+-
+- accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+- accept->flags = READ_ONCE(sqe->accept_flags);
+- accept->nofile = rlimit(RLIMIT_NOFILE);
+- flags = READ_ONCE(sqe->ioprio);
+- if (flags & ~IORING_ACCEPT_MULTISHOT)
+- return -EINVAL;
+-
+- accept->file_slot = READ_ONCE(sqe->file_index);
+- if (accept->file_slot) {
+- if (accept->flags & SOCK_CLOEXEC)
+- return -EINVAL;
+- if (flags & IORING_ACCEPT_MULTISHOT &&
+- accept->file_slot != IORING_FILE_INDEX_ALLOC)
+- return -EINVAL;
+- }
+- if (accept->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
+- return -EINVAL;
+- if (SOCK_NONBLOCK != O_NONBLOCK && (accept->flags & SOCK_NONBLOCK))
+- accept->flags = (accept->flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
+- if (flags & IORING_ACCEPT_MULTISHOT)
+- req->flags |= REQ_F_APOLL_MULTISHOT;
+- return 0;
+-}
+-
+-static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_accept *accept = &req->accept;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+- unsigned int file_flags = force_nonblock ? O_NONBLOCK : 0;
+- bool fixed = !!accept->file_slot;
+- struct file *file;
+- int ret, fd;
+-
+-retry:
+- if (!fixed) {
+- fd = __get_unused_fd_flags(accept->flags, accept->nofile);
+- if (unlikely(fd < 0))
+- return fd;
+- }
+- file = do_accept(req->file, file_flags, accept->addr, accept->addr_len,
+- accept->flags);
+- if (IS_ERR(file)) {
+- if (!fixed)
+- put_unused_fd(fd);
+- ret = PTR_ERR(file);
+- if (ret == -EAGAIN && force_nonblock) {
+- /*
+- * if it's multishot and polled, we don't need to
+- * return EAGAIN to arm the poll infra since it
+- * has already been done
+- */
+- if ((req->flags & IO_APOLL_MULTI_POLLED) ==
+- IO_APOLL_MULTI_POLLED)
+- ret = 0;
+- return ret;
+- }
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- req_set_fail(req);
+- } else if (!fixed) {
+- fd_install(fd, file);
+- ret = fd;
+- } else {
+- ret = io_fixed_fd_install(req, issue_flags, file,
+- accept->file_slot);
+- }
+-
+- if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+- }
+- if (ret >= 0) {
+- bool filled;
+-
+- spin_lock(&ctx->completion_lock);
+- filled = io_fill_cqe_aux(ctx, req->cqe.user_data, ret,
+- IORING_CQE_F_MORE);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- if (filled) {
+- io_cqring_ev_posted(ctx);
+- goto retry;
+- }
+- ret = -ECANCELED;
+- }
+-
+- return ret;
+-}
+-
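The IORING_ACCEPT_MULTISHOT path above posts one CQE per accepted connection, tagging each with IORING_CQE_F_MORE and looping back via the retry: label until cancelled. A sketch using liburing's helper (assumes liburing >= 2.2 and kernel >= 5.19; error handling trimmed):

#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	int lfd = socket(AF_UNIX, SOCK_STREAM, 0);
	int cfd = socket(AF_UNIX, SOCK_STREAM, 0);

	strcpy(sa.sun_path, "/tmp/ms-accept-demo");
	unlink(sa.sun_path);
	if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
	    listen(lfd, 8) < 0 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	/* one SQE, many CQEs: each accepted fd arrives in cqe->res with
	 * IORING_CQE_F_MORE set while the request stays armed */
	io_uring_prep_multishot_accept(sqe, lfd, NULL, NULL, 0);
	io_uring_submit(&ring);

	connect(cfd, (struct sockaddr *)&sa, sizeof(sa));
	io_uring_wait_cqe(&ring, &cqe);
	printf("accepted fd=%d more=%d\n", cqe->res,
	       !!(cqe->flags & IORING_CQE_F_MORE));
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
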
+-static int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_socket *sock = &req->sock;
+-
+- if (sqe->addr || sqe->rw_flags || sqe->buf_index)
+- return -EINVAL;
+-
+- sock->domain = READ_ONCE(sqe->fd);
+- sock->type = READ_ONCE(sqe->off);
+- sock->protocol = READ_ONCE(sqe->len);
+- sock->file_slot = READ_ONCE(sqe->file_index);
+- sock->nofile = rlimit(RLIMIT_NOFILE);
+-
+- sock->flags = sock->type & ~SOCK_TYPE_MASK;
+- if (sock->file_slot && (sock->flags & SOCK_CLOEXEC))
+- return -EINVAL;
+- if (sock->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
+- return -EINVAL;
+- return 0;
+-}
+-
+-static int io_socket(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_socket *sock = &req->sock;
+- bool fixed = !!sock->file_slot;
+- struct file *file;
+- int ret, fd;
+-
+- if (!fixed) {
+- fd = __get_unused_fd_flags(sock->flags, sock->nofile);
+- if (unlikely(fd < 0))
+- return fd;
+- }
+- file = __sys_socket_file(sock->domain, sock->type, sock->protocol);
+- if (IS_ERR(file)) {
+- if (!fixed)
+- put_unused_fd(fd);
+- ret = PTR_ERR(file);
+- if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+- return -EAGAIN;
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+- req_set_fail(req);
+- } else if (!fixed) {
+- fd_install(fd, file);
+- ret = fd;
+- } else {
+- ret = io_fixed_fd_install(req, issue_flags, file,
+- sock->file_slot);
+- }
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
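io_socket() mirrors socket(2), optionally installing the result straight into the fixed-file table when sqe->file_index is set. A minimal sketch (assumes liburing >= 2.2 and kernel >= 5.19):

#include <liburing.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	if (io_uring_queue_init(4, &ring, 0) < 0)
		return 1;
	sqe = io_uring_get_sqe(&ring);
	/* cqe->res is the new fd, exactly like socket(2)'s return value */
	io_uring_prep_socket(sqe, AF_INET, SOCK_STREAM, 0, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("socket fd=%d\n", cqe->res);
	if (cqe->res >= 0)
		close(cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
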
+-static int io_connect_prep_async(struct io_kiocb *req)
+-{
+- struct io_async_connect *io = req->async_data;
+- struct io_connect *conn = &req->connect;
+-
+- return move_addr_to_kernel(conn->addr, conn->addr_len, &io->address);
+-}
+-
+-static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_connect *conn = &req->connect;
+-
+- if (sqe->len || sqe->buf_index || sqe->rw_flags || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+- conn->addr_len = READ_ONCE(sqe->addr2);
+- return 0;
+-}
+-
+-static int io_connect(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_async_connect __io, *io;
+- unsigned file_flags;
+- int ret;
+- bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+-
+- if (req_has_async_data(req)) {
+- io = req->async_data;
+- } else {
+- ret = move_addr_to_kernel(req->connect.addr,
+- req->connect.addr_len,
+- &__io.address);
+- if (ret)
+- goto out;
+- io = &__io;
+- }
+-
+- file_flags = force_nonblock ? O_NONBLOCK : 0;
+-
+- ret = __sys_connect_file(req->file, &io->address,
+- req->connect.addr_len, file_flags);
+- if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
+- if (req_has_async_data(req))
+- return -EAGAIN;
+- if (io_alloc_async_data(req)) {
+- ret = -ENOMEM;
+- goto out;
+- }
+- memcpy(req->async_data, &__io, sizeof(__io));
+- return -EAGAIN;
+- }
+- if (ret == -ERESTARTSYS)
+- ret = -EINTR;
+-out:
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
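As the code above shows, io_connect() stashes the kernel copy of the sockaddr in async_data on -EAGAIN/-EINPROGRESS, so a retry never re-reads user memory. A minimal sketch of the opcode (illustrative; error handling trimmed):

#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	int lfd = socket(AF_UNIX, SOCK_STREAM, 0);
	int cfd = socket(AF_UNIX, SOCK_STREAM, 0);

	strcpy(sa.sun_path, "/tmp/uring-connect-demo");
	unlink(sa.sun_path);
	if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
	    listen(lfd, 1) < 0 || io_uring_queue_init(4, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_connect(sqe, cfd, (struct sockaddr *)&sa, sizeof(sa));
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("connect res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
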
+-#else /* !CONFIG_NET */
+-#define IO_NETOP_FN(op) \
+-static int io_##op(struct io_kiocb *req, unsigned int issue_flags) \
+-{ \
+- return -EOPNOTSUPP; \
+-}
+-
+-#define IO_NETOP_PREP(op) \
+-IO_NETOP_FN(op) \
+-static int io_##op##_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) \
+-{ \
+- return -EOPNOTSUPP; \
+-} \
+-
+-#define IO_NETOP_PREP_ASYNC(op) \
+-IO_NETOP_PREP(op) \
+-static int io_##op##_prep_async(struct io_kiocb *req) \
+-{ \
+- return -EOPNOTSUPP; \
+-}
+-
+-IO_NETOP_PREP_ASYNC(sendmsg);
+-IO_NETOP_PREP_ASYNC(recvmsg);
+-IO_NETOP_PREP_ASYNC(connect);
+-IO_NETOP_PREP(accept);
+-IO_NETOP_PREP(socket);
+-IO_NETOP_PREP(shutdown);
+-IO_NETOP_FN(send);
+-IO_NETOP_FN(recv);
+-#endif /* CONFIG_NET */
+-
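When CONFIG_NET is disabled, the macros above stamp out -EOPNOTSUPP stubs so the opcode table stays fully populated. For reference, IO_NETOP_PREP(accept) expands to roughly:

static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
{
	return -EOPNOTSUPP;
}
static int io_accept_prep(struct io_kiocb *req,
			  const struct io_uring_sqe *sqe)
{
	return -EOPNOTSUPP;
}
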
+-struct io_poll_table {
+- struct poll_table_struct pt;
+- struct io_kiocb *req;
+- int nr_entries;
+- int error;
+-};
+-
+-#define IO_POLL_CANCEL_FLAG BIT(31)
+-#define IO_POLL_REF_MASK GENMASK(30, 0)
+-
+-/*
+- * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
+- * bump it and acquire ownership. Modifying a request while not owning it is
+- * disallowed; that prevents races when enqueueing task_work and between
+- * arming poll and wakeups.
+- */
+-static inline bool io_poll_get_ownership(struct io_kiocb *req)
+-{
+- return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
+-}
+-
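A userspace analogue may help here: whoever bumps poll_refs from zero owns the request, and the owner keeps reprocessing until it can drop every reference it observed. A hedged C11-atomics sketch of the scheme, not the kernel code itself:

#include <stdatomic.h>
#include <stdbool.h>

#define POLL_REF_MASK ((1u << 31) - 1)	/* mirrors GENMASK(30, 0) */

/* the first incrementer (refs were 0) becomes the owner */
static bool poll_get_ownership(atomic_uint *poll_refs)
{
	return !(atomic_fetch_add(poll_refs, 1) & POLL_REF_MASK);
}

/* owner drops the refs it saw; a nonzero remainder means someone else
 * tried to take ownership meanwhile, so the owner must loop again */
static bool poll_need_retry(atomic_uint *poll_refs, unsigned int seen)
{
	unsigned int old = atomic_fetch_sub(poll_refs, seen & POLL_REF_MASK);

	return (old - (seen & POLL_REF_MASK)) != 0;
}

int main(void)
{
	atomic_uint refs = 0;

	if (poll_get_ownership(&refs)) {
		unsigned int v;

		do {
			v = atomic_load(&refs);
			/* ... handle the poll event here ... */
		} while (poll_need_retry(&refs, v));
	}
	return 0;
}
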
+-static void io_poll_mark_cancelled(struct io_kiocb *req)
+-{
+- atomic_or(IO_POLL_CANCEL_FLAG, &req->poll_refs);
+-}
+-
+-static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
+-{
+- /* pure poll stashes this in ->async_data, poll driven retry elsewhere */
+- if (req->opcode == IORING_OP_POLL_ADD)
+- return req->async_data;
+- return req->apoll->double_poll;
+-}
+-
+-static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
+-{
+- if (req->opcode == IORING_OP_POLL_ADD)
+- return &req->poll;
+- return &req->apoll->poll;
+-}
+-
+-static void io_poll_req_insert(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct hlist_head *list;
+-
+- list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
+- hlist_add_head(&req->hash_node, list);
+-}
+-
+-static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
+- wait_queue_func_t wake_func)
+-{
+- poll->head = NULL;
+-#define IO_POLL_UNMASK (EPOLLERR|EPOLLHUP|EPOLLNVAL|EPOLLRDHUP)
+- /* mask in events that we always want/need */
+- poll->events = events | IO_POLL_UNMASK;
+- INIT_LIST_HEAD(&poll->wait.entry);
+- init_waitqueue_func_entry(&poll->wait, wake_func);
+-}
+-
+-static inline void io_poll_remove_entry(struct io_poll_iocb *poll)
+-{
+- struct wait_queue_head *head = smp_load_acquire(&poll->head);
+-
+- if (head) {
+- spin_lock_irq(&head->lock);
+- list_del_init(&poll->wait.entry);
+- poll->head = NULL;
+- spin_unlock_irq(&head->lock);
+- }
+-}
+-
+-static void io_poll_remove_entries(struct io_kiocb *req)
+-{
+- /*
+-	 * Nothing to do if neither of those flags is set. Avoid dipping
+- * into the poll/apoll/double cachelines if we can.
+- */
+- if (!(req->flags & (REQ_F_SINGLE_POLL | REQ_F_DOUBLE_POLL)))
+- return;
+-
+- /*
+- * While we hold the waitqueue lock and the waitqueue is nonempty,
+- * wake_up_pollfree() will wait for us. However, taking the waitqueue
+- * lock in the first place can race with the waitqueue being freed.
+- *
+- * We solve this as eventpoll does: by taking advantage of the fact that
+- * all users of wake_up_pollfree() will RCU-delay the actual free. If
+- * we enter rcu_read_lock() and see that the pointer to the queue is
+- * non-NULL, we can then lock it without the memory being freed out from
+- * under us.
+- *
+- * Keep holding rcu_read_lock() as long as we hold the queue lock, in
+- * case the caller deletes the entry from the queue, leaving it empty.
+- * In that case, only RCU prevents the queue memory from being freed.
+- */
+- rcu_read_lock();
+- if (req->flags & REQ_F_SINGLE_POLL)
+- io_poll_remove_entry(io_poll_get_single(req));
+- if (req->flags & REQ_F_DOUBLE_POLL)
+- io_poll_remove_entry(io_poll_get_double(req));
+- rcu_read_unlock();
+-}
+-
+-static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags);
+-/*
+- * All poll tw should go through this. Checks for poll events, manages
+- * references, does rewait, etc.
+- *
+- * Returns a negative error on failure. >0 when no action is required, which
+- * means either a spurious wakeup or a served multishot CQE. 0 when it's done
+- * with the request, in which case the mask is stored in req->cqe.res.
+- */
+-static int io_poll_check_events(struct io_kiocb *req, bool *locked)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- int v, ret;
+-
+- /* req->task == current here, checking PF_EXITING is safe */
+- if (unlikely(req->task->flags & PF_EXITING))
+- return -ECANCELED;
+-
+- do {
+- v = atomic_read(&req->poll_refs);
+-
+- /* tw handler should be the owner, and so have some references */
+- if (WARN_ON_ONCE(!(v & IO_POLL_REF_MASK)))
+- return 0;
+- if (v & IO_POLL_CANCEL_FLAG)
+- return -ECANCELED;
+-
+- if (!req->cqe.res) {
+- struct poll_table_struct pt = { ._key = req->apoll_events };
+- req->cqe.res = vfs_poll(req->file, &pt) & req->apoll_events;
+- }
+-
+-		if (unlikely(!req->cqe.res))
+- continue;
+- if (req->apoll_events & EPOLLONESHOT)
+- return 0;
+-
+- /* multishot, just fill a CQE and proceed */
+- if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
+- __poll_t mask = mangle_poll(req->cqe.res &
+- req->apoll_events);
+- bool filled;
+-
+- spin_lock(&ctx->completion_lock);
+- filled = io_fill_cqe_aux(ctx, req->cqe.user_data,
+- mask, IORING_CQE_F_MORE);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- if (filled) {
+- io_cqring_ev_posted(ctx);
+- continue;
+- }
+- return -ECANCELED;
+- }
+-
+- io_tw_lock(req->ctx, locked);
+- if (unlikely(req->task->flags & PF_EXITING))
+- return -EFAULT;
+- ret = io_issue_sqe(req,
+- IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
+- if (ret)
+- return ret;
+-
+- /*
+- * Release all references, retry if someone tried to restart
+- * task_work while we were executing it.
+- */
+- } while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs));
+-
+- return 1;
+-}
+-
+-static void io_poll_task_func(struct io_kiocb *req, bool *locked)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- int ret;
+-
+- ret = io_poll_check_events(req, locked);
+- if (ret > 0)
+- return;
+-
+- if (!ret) {
+- req->cqe.res = mangle_poll(req->cqe.res & req->poll.events);
+- } else {
+- req->cqe.res = ret;
+- req_set_fail(req);
+- }
+-
+- io_poll_remove_entries(req);
+- spin_lock(&ctx->completion_lock);
+- hash_del(&req->hash_node);
+- __io_req_complete_post(req, req->cqe.res, 0);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- io_cqring_ev_posted(ctx);
+-}
+-
+-static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- int ret;
+-
+- ret = io_poll_check_events(req, locked);
+- if (ret > 0)
+- return;
+-
+- io_poll_remove_entries(req);
+- spin_lock(&ctx->completion_lock);
+- hash_del(&req->hash_node);
+- spin_unlock(&ctx->completion_lock);
+-
+- if (!ret)
+- io_req_task_submit(req, locked);
+- else
+- io_req_complete_failed(req, ret);
+-}
+-
+-static void __io_poll_execute(struct io_kiocb *req, int mask,
+- __poll_t __maybe_unused events)
+-{
+- req->cqe.res = mask;
+- /*
+-	 * This is useful for a poll that is armed on behalf of another
+- * request, and where the wakeup path could be on a different
+- * CPU. We want to avoid pulling in req->apoll->events for that
+- * case.
+- */
+- if (req->opcode == IORING_OP_POLL_ADD)
+- req->io_task_work.func = io_poll_task_func;
+- else
+- req->io_task_work.func = io_apoll_task_func;
+-
+- trace_io_uring_task_add(req->ctx, req, req->cqe.user_data, req->opcode, mask);
+- io_req_task_work_add(req);
+-}
+-
+-static inline void io_poll_execute(struct io_kiocb *req, int res,
+- __poll_t events)
+-{
+- if (io_poll_get_ownership(req))
+- __io_poll_execute(req, res, events);
+-}
+-
+-static void io_poll_cancel_req(struct io_kiocb *req)
+-{
+- io_poll_mark_cancelled(req);
+- /* kick tw, which should complete the request */
+- io_poll_execute(req, 0, 0);
+-}
+-
+-#define wqe_to_req(wait) ((void *)((unsigned long) (wait)->private & ~1))
+-#define wqe_is_double(wait) ((unsigned long) (wait)->private & 1)
+-#define IO_ASYNC_POLL_COMMON (EPOLLONESHOT | EPOLLPRI)
+-
+-static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+- void *key)
+-{
+- struct io_kiocb *req = wqe_to_req(wait);
+- struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
+- wait);
+- __poll_t mask = key_to_poll(key);
+-
+- if (unlikely(mask & POLLFREE)) {
+- io_poll_mark_cancelled(req);
+- /* we have to kick tw in case it's not already */
+- io_poll_execute(req, 0, poll->events);
+-
+- /*
+-		 * If the waitqueue is being freed early but someone already
+- * holds ownership over it, we have to tear down the request as
+- * best we can. That means immediately removing the request from
+- * its waitqueue and preventing all further accesses to the
+- * waitqueue via the request.
+- */
+- list_del_init(&poll->wait.entry);
+-
+- /*
+- * Careful: this *must* be the last step, since as soon
+- * as req->head is NULL'ed out, the request can be
+- * completed and freed, since aio_poll_complete_work()
+- * will no longer need to take the waitqueue lock.
+- */
+- smp_store_release(&poll->head, NULL);
+- return 1;
+- }
+-
+- /* for instances that support it check for an event match first */
+- if (mask && !(mask & (poll->events & ~IO_ASYNC_POLL_COMMON)))
+- return 0;
+-
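io_shutdown() is a thin asynchronous wrapper over shutdown(2); it always needs a blocking context, so a nonblocking issue returns -EAGAIN first. A minimal sketch (illustrative only):

#include <liburing.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0 ||
	    io_uring_queue_init(4, &ring, 0) < 0)
		return 1;
	sqe = io_uring_get_sqe(&ring);
	/* half-close the write side, like shutdown(sv[0], SHUT_WR) */
	io_uring_prep_shutdown(sqe, sv[0], SHUT_WR);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("shutdown res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
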
+- if (io_poll_get_ownership(req)) {
+- /* optional, saves extra locking for removal in tw handler */
+- if (mask && poll->events & EPOLLONESHOT) {
+- list_del_init(&poll->wait.entry);
+- poll->head = NULL;
+- if (wqe_is_double(wait))
+- req->flags &= ~REQ_F_DOUBLE_POLL;
+- else
+- req->flags &= ~REQ_F_SINGLE_POLL;
+- }
+- __io_poll_execute(req, mask, poll->events);
+- }
+- return 1;
+-}
+-
+-static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+- struct wait_queue_head *head,
+- struct io_poll_iocb **poll_ptr)
+-{
+- struct io_kiocb *req = pt->req;
+- unsigned long wqe_private = (unsigned long) req;
+-
+- /*
+- * The file being polled uses multiple waitqueues for poll handling
+-	 * (e.g. one for read, one for write). Set up a separate io_poll_iocb
+- * if this happens.
+- */
+- if (unlikely(pt->nr_entries)) {
+- struct io_poll_iocb *first = poll;
+-
+- /* double add on the same waitqueue head, ignore */
+- if (first->head == head)
+- return;
+- /* already have a 2nd entry, fail a third attempt */
+- if (*poll_ptr) {
+- if ((*poll_ptr)->head == head)
+- return;
+- pt->error = -EINVAL;
+- return;
+- }
+-
+- poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
+- if (!poll) {
+- pt->error = -ENOMEM;
+- return;
+- }
+- /* mark as double wq entry */
+- wqe_private |= 1;
+- req->flags |= REQ_F_DOUBLE_POLL;
+- io_init_poll_iocb(poll, first->events, first->wait.func);
+- *poll_ptr = poll;
+- if (req->opcode == IORING_OP_POLL_ADD)
+- req->flags |= REQ_F_ASYNC_DATA;
+- }
+-
+- req->flags |= REQ_F_SINGLE_POLL;
+- pt->nr_entries++;
+- poll->head = head;
+- poll->wait.private = (void *) wqe_private;
+-
+- if (poll->events & EPOLLEXCLUSIVE)
+- add_wait_queue_exclusive(head, &poll->wait);
+- else
+- add_wait_queue(head, &poll->wait);
+-}
+-
+-static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
+- struct poll_table_struct *p)
+-{
+- struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+-
+- __io_queue_proc(&pt->req->poll, pt, head,
+- (struct io_poll_iocb **) &pt->req->async_data);
+-}
+-
+-static int __io_arm_poll_handler(struct io_kiocb *req,
+- struct io_poll_iocb *poll,
+- struct io_poll_table *ipt, __poll_t mask)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- int v;
+-
+- INIT_HLIST_NODE(&req->hash_node);
+- req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
+- io_init_poll_iocb(poll, mask, io_poll_wake);
+- poll->file = req->file;
+-
+- req->apoll_events = poll->events;
+-
+- ipt->pt._key = mask;
+- ipt->req = req;
+- ipt->error = 0;
+- ipt->nr_entries = 0;
+-
+- /*
+- * Take the ownership to delay any tw execution up until we're done
+- * with poll arming. see io_poll_get_ownership().
+- */
+- atomic_set(&req->poll_refs, 1);
+- mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+-
+- if (mask && (poll->events & EPOLLONESHOT)) {
+- io_poll_remove_entries(req);
+- /* no one else has access to the req, forget about the ref */
+- return mask;
+- }
+- if (!mask && unlikely(ipt->error || !ipt->nr_entries)) {
+- io_poll_remove_entries(req);
+- if (!ipt->error)
+- ipt->error = -EINVAL;
+- return 0;
+- }
+-
+- spin_lock(&ctx->completion_lock);
+- io_poll_req_insert(req);
+- spin_unlock(&ctx->completion_lock);
+-
+- if (mask) {
+- /* can't multishot if failed, just queue the event we've got */
+- if (unlikely(ipt->error || !ipt->nr_entries)) {
+- poll->events |= EPOLLONESHOT;
+- req->apoll_events |= EPOLLONESHOT;
+- ipt->error = 0;
+- }
+- __io_poll_execute(req, mask, poll->events);
+- return 0;
+- }
+-
+- /*
+- * Release ownership. If someone tried to queue a tw while it was
+- * locked, kick it off for them.
+- */
+- v = atomic_dec_return(&req->poll_refs);
+- if (unlikely(v & IO_POLL_REF_MASK))
+- __io_poll_execute(req, 0, poll->events);
+- return 0;
+-}
+-
+-static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+- struct poll_table_struct *p)
+-{
+- struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+- struct async_poll *apoll = pt->req->apoll;
+-
+- __io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
+-}
+-
+-enum {
+- IO_APOLL_OK,
+- IO_APOLL_ABORTED,
+- IO_APOLL_READY
+-};
+-
+-static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
+-{
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+- struct io_ring_ctx *ctx = req->ctx;
+- struct async_poll *apoll;
+- struct io_poll_table ipt;
+- __poll_t mask = POLLPRI | POLLERR;
+- int ret;
+-
+- if (!def->pollin && !def->pollout)
+- return IO_APOLL_ABORTED;
+- if (!file_can_poll(req->file))
+- return IO_APOLL_ABORTED;
+- if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
+- return IO_APOLL_ABORTED;
+- if (!(req->flags & REQ_F_APOLL_MULTISHOT))
+- mask |= EPOLLONESHOT;
+-
+- if (def->pollin) {
+- mask |= EPOLLIN | EPOLLRDNORM;
+-
+- /* If reading from MSG_ERRQUEUE using recvmsg, ignore POLLIN */
+- if ((req->opcode == IORING_OP_RECVMSG) &&
+- (req->sr_msg.msg_flags & MSG_ERRQUEUE))
+- mask &= ~EPOLLIN;
+- } else {
+- mask |= EPOLLOUT | EPOLLWRNORM;
+- }
+- if (def->poll_exclusive)
+- mask |= EPOLLEXCLUSIVE;
+- if (req->flags & REQ_F_POLLED) {
+- apoll = req->apoll;
+- kfree(apoll->double_poll);
+- } else if (!(issue_flags & IO_URING_F_UNLOCKED) &&
+- !list_empty(&ctx->apoll_cache)) {
+- apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
+- poll.wait.entry);
+- list_del_init(&apoll->poll.wait.entry);
+- } else {
+- apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+- if (unlikely(!apoll))
+- return IO_APOLL_ABORTED;
+- }
+- apoll->double_poll = NULL;
+- req->apoll = apoll;
+- req->flags |= REQ_F_POLLED;
+- ipt.pt._qproc = io_async_queue_proc;
+-
+- io_kbuf_recycle(req, issue_flags);
+-
+- ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask);
+- if (ret || ipt.error)
+- return ret ? IO_APOLL_READY : IO_APOLL_ABORTED;
+-
+- trace_io_uring_poll_arm(ctx, req, req->cqe.user_data, req->opcode,
+- mask, apoll->poll.events);
+- return IO_APOLL_OK;
+-}
+-
+-/*
+- * Returns true if we found and killed one or more poll requests
+- */
+-static __cold bool io_poll_remove_all(struct io_ring_ctx *ctx,
+- struct task_struct *tsk, bool cancel_all)
+-{
+- struct hlist_node *tmp;
+- struct io_kiocb *req;
+- bool found = false;
+- int i;
+-
+- spin_lock(&ctx->completion_lock);
+- for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+- struct hlist_head *list;
+-
+- list = &ctx->cancel_hash[i];
+- hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+- if (io_match_task_safe(req, tsk, cancel_all)) {
+- hlist_del_init(&req->hash_node);
+- io_poll_cancel_req(req);
+- found = true;
+- }
+- }
+- }
+- spin_unlock(&ctx->completion_lock);
+- return found;
+-}
+-
+-static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, bool poll_only,
+- struct io_cancel_data *cd)
+- __must_hold(&ctx->completion_lock)
+-{
+- struct hlist_head *list;
+- struct io_kiocb *req;
+-
+- list = &ctx->cancel_hash[hash_long(cd->data, ctx->cancel_hash_bits)];
+- hlist_for_each_entry(req, list, hash_node) {
+- if (cd->data != req->cqe.user_data)
+- continue;
+- if (poll_only && req->opcode != IORING_OP_POLL_ADD)
+- continue;
+- if (cd->flags & IORING_ASYNC_CANCEL_ALL) {
+- if (cd->seq == req->work.cancel_seq)
+- continue;
+- req->work.cancel_seq = cd->seq;
+- }
+- return req;
+- }
+- return NULL;
+-}
+-
+-static struct io_kiocb *io_poll_file_find(struct io_ring_ctx *ctx,
+- struct io_cancel_data *cd)
+- __must_hold(&ctx->completion_lock)
+-{
+- struct io_kiocb *req;
+- int i;
+-
+- for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+- struct hlist_head *list;
+-
+- list = &ctx->cancel_hash[i];
+- hlist_for_each_entry(req, list, hash_node) {
+- if (!(cd->flags & IORING_ASYNC_CANCEL_ANY) &&
+- req->file != cd->file)
+- continue;
+- if (cd->seq == req->work.cancel_seq)
+- continue;
+- req->work.cancel_seq = cd->seq;
+- return req;
+- }
+- }
+- return NULL;
+-}
+-
+-static bool io_poll_disarm(struct io_kiocb *req)
+- __must_hold(&ctx->completion_lock)
+-{
+- if (!io_poll_get_ownership(req))
+- return false;
+- io_poll_remove_entries(req);
+- hash_del(&req->hash_node);
+- return true;
+-}
+-
+-static int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
+- __must_hold(&ctx->completion_lock)
+-{
+- struct io_kiocb *req;
+-
+- if (cd->flags & (IORING_ASYNC_CANCEL_FD|IORING_ASYNC_CANCEL_ANY))
+- req = io_poll_file_find(ctx, cd);
+- else
+- req = io_poll_find(ctx, false, cd);
+- if (!req)
+- return -ENOENT;
+- io_poll_cancel_req(req);
+- return 0;
+-}
+-
+-static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
+- unsigned int flags)
+-{
+- u32 events;
+-
+- events = READ_ONCE(sqe->poll32_events);
+-#ifdef __BIG_ENDIAN
+- events = swahw32(events);
+-#endif
+- if (!(flags & IORING_POLL_ADD_MULTI))
+- events |= EPOLLONESHOT;
+- return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
+-}
+-
+-static int io_poll_remove_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_poll_update *upd = &req->poll_update;
+- u32 flags;
+-
+- if (sqe->buf_index || sqe->splice_fd_in)
+- return -EINVAL;
+- flags = READ_ONCE(sqe->len);
+- if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
+- IORING_POLL_ADD_MULTI))
+- return -EINVAL;
+- /* meaningless without update */
+- if (flags == IORING_POLL_ADD_MULTI)
+- return -EINVAL;
+-
+- upd->old_user_data = READ_ONCE(sqe->addr);
+- upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
+- upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
+-
+- upd->new_user_data = READ_ONCE(sqe->off);
+- if (!upd->update_user_data && upd->new_user_data)
+- return -EINVAL;
+- if (upd->update_events)
+- upd->events = io_poll_parse_events(sqe, flags);
+- else if (sqe->poll32_events)
+- return -EINVAL;
+-
+- return 0;
+-}
+-
+-static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- struct io_poll_iocb *poll = &req->poll;
+- u32 flags;
+-
+- if (sqe->buf_index || sqe->off || sqe->addr)
+- return -EINVAL;
+- flags = READ_ONCE(sqe->len);
+- if (flags & ~IORING_POLL_ADD_MULTI)
+- return -EINVAL;
+- if ((flags & IORING_POLL_ADD_MULTI) && (req->flags & REQ_F_CQE_SKIP))
+- return -EINVAL;
+-
+- io_req_set_refcount(req);
+- poll->events = io_poll_parse_events(sqe, flags);
+- return 0;
+-}
+-
+-static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_poll_iocb *poll = &req->poll;
+- struct io_poll_table ipt;
+- int ret;
+-
+- ipt.pt._qproc = io_poll_queue_proc;
+-
+- ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events);
+- if (!ret && ipt.error)
+- req_set_fail(req);
+- ret = ret ?: ipt.error;
+- if (ret)
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
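io_poll_add() arms a poll request whose ready mask comes back in cqe->res; with IORING_POLL_ADD_MULTI it stays armed across completions. A single-shot sketch (error handling trimmed):

#include <liburing.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int p[2];

	if (pipe(p) < 0 || io_uring_queue_init(4, &ring, 0) < 0)
		return 1;
	sqe = io_uring_get_sqe(&ring);
	/* single-shot poll: completes once with the ready mask in res */
	io_uring_prep_poll_add(sqe, p[0], POLLIN);
	io_uring_submit(&ring);

	(void)write(p[1], "x", 1);
	io_uring_wait_cqe(&ring, &cqe);
	printf("poll res=0x%x\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
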
+-static int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_cancel_data cd = { .data = req->poll_update.old_user_data, };
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_kiocb *preq;
+- int ret2, ret = 0;
+- bool locked;
+-
+- spin_lock(&ctx->completion_lock);
+- preq = io_poll_find(ctx, true, &cd);
+- if (!preq || !io_poll_disarm(preq)) {
+- spin_unlock(&ctx->completion_lock);
+- ret = preq ? -EALREADY : -ENOENT;
+- goto out;
+- }
+- spin_unlock(&ctx->completion_lock);
+-
+- if (req->poll_update.update_events || req->poll_update.update_user_data) {
+- /* only mask one event flags, keep behavior flags */
+- if (req->poll_update.update_events) {
+- preq->poll.events &= ~0xffff;
+- preq->poll.events |= req->poll_update.events & 0xffff;
+- preq->poll.events |= IO_POLL_UNMASK;
+- }
+- if (req->poll_update.update_user_data)
+- preq->cqe.user_data = req->poll_update.new_user_data;
+-
+- ret2 = io_poll_add(preq, issue_flags);
+- /* successfully updated, don't complete poll request */
+- if (!ret2)
+- goto out;
+- }
+-
+- req_set_fail(preq);
+- preq->cqe.res = -ECANCELED;
+- locked = !(issue_flags & IO_URING_F_UNLOCKED);
+- io_req_task_complete(preq, &locked);
+-out:
+- if (ret < 0)
+- req_set_fail(req);
+- /* complete update request, we're done with it */
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
+-static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
+-{
+- struct io_timeout_data *data = container_of(timer,
+- struct io_timeout_data, timer);
+- struct io_kiocb *req = data->req;
+- struct io_ring_ctx *ctx = req->ctx;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctx->timeout_lock, flags);
+- list_del_init(&req->timeout.list);
+- atomic_set(&req->ctx->cq_timeouts,
+- atomic_read(&req->ctx->cq_timeouts) + 1);
+- spin_unlock_irqrestore(&ctx->timeout_lock, flags);
+-
+- if (!(data->flags & IORING_TIMEOUT_ETIME_SUCCESS))
+- req_set_fail(req);
+-
+- req->cqe.res = -ETIME;
+- req->io_task_work.func = io_req_task_complete;
+- io_req_task_work_add(req);
+- return HRTIMER_NORESTART;
+-}
+-
+-static struct io_kiocb *io_timeout_extract(struct io_ring_ctx *ctx,
+- struct io_cancel_data *cd)
+- __must_hold(&ctx->timeout_lock)
+-{
+- struct io_timeout_data *io;
+- struct io_kiocb *req;
+- bool found = false;
+-
+- list_for_each_entry(req, &ctx->timeout_list, timeout.list) {
+- if (!(cd->flags & IORING_ASYNC_CANCEL_ANY) &&
+- cd->data != req->cqe.user_data)
+- continue;
+- if (cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY)) {
+- if (cd->seq == req->work.cancel_seq)
+- continue;
+- req->work.cancel_seq = cd->seq;
+- }
+- found = true;
+- break;
+- }
+- if (!found)
+- return ERR_PTR(-ENOENT);
+-
+- io = req->async_data;
+- if (hrtimer_try_to_cancel(&io->timer) == -1)
+- return ERR_PTR(-EALREADY);
+- list_del_init(&req->timeout.list);
+- return req;
+-}
+-
+-static int io_timeout_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
+- __must_hold(&ctx->completion_lock)
+-{
+- struct io_kiocb *req;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- req = io_timeout_extract(ctx, cd);
+- spin_unlock_irq(&ctx->timeout_lock);
+-
+- if (IS_ERR(req))
+- return PTR_ERR(req);
+- io_req_task_queue_fail(req, -ECANCELED);
+- return 0;
+-}
+-
+-static clockid_t io_timeout_get_clock(struct io_timeout_data *data)
+-{
+- switch (data->flags & IORING_TIMEOUT_CLOCK_MASK) {
+- case IORING_TIMEOUT_BOOTTIME:
+- return CLOCK_BOOTTIME;
+- case IORING_TIMEOUT_REALTIME:
+- return CLOCK_REALTIME;
+- default:
+- /* can't happen, vetted at prep time */
+- WARN_ON_ONCE(1);
+- fallthrough;
+- case 0:
+- return CLOCK_MONOTONIC;
+- }
+-}
+-
+-static int io_linked_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
+- struct timespec64 *ts, enum hrtimer_mode mode)
+- __must_hold(&ctx->timeout_lock)
+-{
+- struct io_timeout_data *io;
+- struct io_kiocb *req;
+- bool found = false;
+-
+- list_for_each_entry(req, &ctx->ltimeout_list, timeout.list) {
+- found = user_data == req->cqe.user_data;
+- if (found)
+- break;
+- }
+- if (!found)
+- return -ENOENT;
+-
+- io = req->async_data;
+- if (hrtimer_try_to_cancel(&io->timer) == -1)
+- return -EALREADY;
+- hrtimer_init(&io->timer, io_timeout_get_clock(io), mode);
+- io->timer.function = io_link_timeout_fn;
+- hrtimer_start(&io->timer, timespec64_to_ktime(*ts), mode);
+- return 0;
+-}
+-
+-static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
+- struct timespec64 *ts, enum hrtimer_mode mode)
+- __must_hold(&ctx->timeout_lock)
+-{
+- struct io_cancel_data cd = { .data = user_data, };
+- struct io_kiocb *req = io_timeout_extract(ctx, &cd);
+- struct io_timeout_data *data;
+-
+- if (IS_ERR(req))
+- return PTR_ERR(req);
+-
+- req->timeout.off = 0; /* noseq */
+- data = req->async_data;
+- list_add_tail(&req->timeout.list, &ctx->timeout_list);
+- hrtimer_init(&data->timer, io_timeout_get_clock(data), mode);
+- data->timer.function = io_timeout_fn;
+- hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
+- return 0;
+-}
+-
+-static int io_timeout_remove_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- struct io_timeout_rem *tr = &req->timeout_rem;
+-
+- if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+- return -EINVAL;
+- if (sqe->buf_index || sqe->len || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- tr->ltimeout = false;
+- tr->addr = READ_ONCE(sqe->addr);
+- tr->flags = READ_ONCE(sqe->timeout_flags);
+- if (tr->flags & IORING_TIMEOUT_UPDATE_MASK) {
+- if (hweight32(tr->flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
+- return -EINVAL;
+- if (tr->flags & IORING_LINK_TIMEOUT_UPDATE)
+- tr->ltimeout = true;
+- if (tr->flags & ~(IORING_TIMEOUT_UPDATE_MASK|IORING_TIMEOUT_ABS))
+- return -EINVAL;
+- if (get_timespec64(&tr->ts, u64_to_user_ptr(sqe->addr2)))
+- return -EFAULT;
+- if (tr->ts.tv_sec < 0 || tr->ts.tv_nsec < 0)
+- return -EINVAL;
+- } else if (tr->flags) {
+- /* timeout removal doesn't support flags */
+- return -EINVAL;
+- }
+-
+- return 0;
+-}
+-
+-static inline enum hrtimer_mode io_translate_timeout_mode(unsigned int flags)
+-{
+- return (flags & IORING_TIMEOUT_ABS) ? HRTIMER_MODE_ABS
+- : HRTIMER_MODE_REL;
+-}
+-
+-/*
+- * Remove or update an existing timeout command
+- */
+-static int io_timeout_remove(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_timeout_rem *tr = &req->timeout_rem;
+- struct io_ring_ctx *ctx = req->ctx;
+- int ret;
+-
+- if (!(req->timeout_rem.flags & IORING_TIMEOUT_UPDATE)) {
+- struct io_cancel_data cd = { .data = tr->addr, };
+-
+- spin_lock(&ctx->completion_lock);
+- ret = io_timeout_cancel(ctx, &cd);
+- spin_unlock(&ctx->completion_lock);
+- } else {
+- enum hrtimer_mode mode = io_translate_timeout_mode(tr->flags);
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- if (tr->ltimeout)
+- ret = io_linked_timeout_update(ctx, tr->addr, &tr->ts, mode);
+- else
+- ret = io_timeout_update(ctx, tr->addr, &tr->ts, mode);
+- spin_unlock_irq(&ctx->timeout_lock);
+- }
+-
+- if (ret < 0)
+- req_set_fail(req);
+- io_req_complete_post(req, ret, 0);
+- return 0;
+-}
+-
+-static int __io_timeout_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe,
+- bool is_timeout_link)
+-{
+- struct io_timeout_data *data;
+- unsigned flags;
+- u32 off = READ_ONCE(sqe->off);
+-
+- if (sqe->buf_index || sqe->len != 1 || sqe->splice_fd_in)
+- return -EINVAL;
+- if (off && is_timeout_link)
+- return -EINVAL;
+- flags = READ_ONCE(sqe->timeout_flags);
+- if (flags & ~(IORING_TIMEOUT_ABS | IORING_TIMEOUT_CLOCK_MASK |
+- IORING_TIMEOUT_ETIME_SUCCESS))
+- return -EINVAL;
+- /* more than one clock specified is invalid, obviously */
+- if (hweight32(flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
+- return -EINVAL;
+-
+- INIT_LIST_HEAD(&req->timeout.list);
+- req->timeout.off = off;
+- if (unlikely(off && !req->ctx->off_timeout_used))
+- req->ctx->off_timeout_used = true;
+-
+- if (WARN_ON_ONCE(req_has_async_data(req)))
+- return -EFAULT;
+- if (io_alloc_async_data(req))
+- return -ENOMEM;
+-
+- data = req->async_data;
+- data->req = req;
+- data->flags = flags;
+-
+- if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
+- return -EFAULT;
+-
+- if (data->ts.tv_sec < 0 || data->ts.tv_nsec < 0)
+- return -EINVAL;
+-
+- INIT_LIST_HEAD(&req->timeout.list);
+- data->mode = io_translate_timeout_mode(flags);
+- hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
+-
+- if (is_timeout_link) {
+- struct io_submit_link *link = &req->ctx->submit_state.link;
+-
+- if (!link->head)
+- return -EINVAL;
+- if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
+- return -EINVAL;
+- req->timeout.head = link->last;
+- link->last->flags |= REQ_F_ARM_LTIMEOUT;
+- }
+- return 0;
+-}
+-
+-static int io_timeout_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- return __io_timeout_prep(req, sqe, false);
+-}
+-
+-static int io_link_timeout_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- return __io_timeout_prep(req, sqe, true);
+-}
+-
+-static int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_timeout_data *data = req->async_data;
+- struct list_head *entry;
+- u32 tail, off = req->timeout.off;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+-
+- /*
+-	 * sqe->off holds how many events need to occur for this
+-	 * timeout event to be satisfied. If it isn't set, then this is
+-	 * a pure timeout request and the sequence isn't used.
+- */
+- if (io_is_timeout_noseq(req)) {
+- entry = ctx->timeout_list.prev;
+- goto add;
+- }
+-
+- tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+- req->timeout.target_seq = tail + off;
+-
+- /* Update the last seq here in case io_flush_timeouts() hasn't.
+- * This is safe because ->completion_lock is held, and submissions
+- * and completions are never mixed in the same ->completion_lock section.
+- */
+- ctx->cq_last_tm_flush = tail;
+-
+- /*
+- * Insertion sort, ensuring the first entry in the list is always
+- * the one we need first.
+- */
+- list_for_each_prev(entry, &ctx->timeout_list) {
+- struct io_kiocb *nxt = list_entry(entry, struct io_kiocb,
+- timeout.list);
+-
+- if (io_is_timeout_noseq(nxt))
+- continue;
+- /* nxt.seq is behind @tail, otherwise would've been completed */
+- if (off >= nxt->timeout.target_seq - tail)
+- break;
+- }
+-add:
+- list_add(&req->timeout.list, entry);
+- data->timer.function = io_timeout_fn;
+- hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
+- spin_unlock_irq(&ctx->timeout_lock);
+- return 0;
+-}
+-
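The count carried in sqe->off makes a timeout fire after that many completions instead of (or as well as) the timer; the insertion sort above keeps the list ordered by target sequence. A pure-timer sketch with count 0:

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct __kernel_timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };

	if (io_uring_queue_init(4, &ring, 0) < 0)
		return 1;
	sqe = io_uring_get_sqe(&ring);
	/* count 0: time-only, no completion-count trigger */
	io_uring_prep_timeout(sqe, &ts, 0, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("timeout res=%d (-ETIME is the normal case)\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
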
+-static bool io_cancel_cb(struct io_wq_work *work, void *data)
+-{
+- struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+- struct io_cancel_data *cd = data;
+-
+- if (req->ctx != cd->ctx)
+- return false;
+- if (cd->flags & IORING_ASYNC_CANCEL_ANY) {
+- ;
+- } else if (cd->flags & IORING_ASYNC_CANCEL_FD) {
+- if (req->file != cd->file)
+- return false;
+- } else {
+- if (req->cqe.user_data != cd->data)
+- return false;
+- }
+- if (cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY)) {
+- if (cd->seq == req->work.cancel_seq)
+- return false;
+- req->work.cancel_seq = cd->seq;
+- }
+- return true;
+-}
+-
+-static int io_async_cancel_one(struct io_uring_task *tctx,
+- struct io_cancel_data *cd)
+-{
+- enum io_wq_cancel cancel_ret;
+- int ret = 0;
+- bool all;
+-
+- if (!tctx || !tctx->io_wq)
+- return -ENOENT;
+-
+- all = cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY);
+- cancel_ret = io_wq_cancel_cb(tctx->io_wq, io_cancel_cb, cd, all);
+- switch (cancel_ret) {
+- case IO_WQ_CANCEL_OK:
+- ret = 0;
+- break;
+- case IO_WQ_CANCEL_RUNNING:
+- ret = -EALREADY;
+- break;
+- case IO_WQ_CANCEL_NOTFOUND:
+- ret = -ENOENT;
+- break;
+- }
+-
+- return ret;
+-}
+-
+-static int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- int ret;
+-
+- WARN_ON_ONCE(!io_wq_current_is_worker() && req->task != current);
+-
+- ret = io_async_cancel_one(req->task->io_uring, cd);
+- /*
+-	 * Fall through even for -EALREADY, as we may have a poll armed
+-	 * that needs unarming.
+- */
+- if (!ret)
+- return 0;
+-
+- spin_lock(&ctx->completion_lock);
+- ret = io_poll_cancel(ctx, cd);
+- if (ret != -ENOENT)
+- goto out;
+- if (!(cd->flags & IORING_ASYNC_CANCEL_FD))
+- ret = io_timeout_cancel(ctx, cd);
+-out:
+- spin_unlock(&ctx->completion_lock);
+- return ret;
+-}
+-
+-#define CANCEL_FLAGS (IORING_ASYNC_CANCEL_ALL | IORING_ASYNC_CANCEL_FD | \
+- IORING_ASYNC_CANCEL_ANY)
+-
+-static int io_async_cancel_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(req->flags & REQ_F_BUFFER_SELECT))
+- return -EINVAL;
+- if (sqe->off || sqe->len || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->cancel.addr = READ_ONCE(sqe->addr);
+- req->cancel.flags = READ_ONCE(sqe->cancel_flags);
+- if (req->cancel.flags & ~CANCEL_FLAGS)
+- return -EINVAL;
+- if (req->cancel.flags & IORING_ASYNC_CANCEL_FD) {
+- if (req->cancel.flags & IORING_ASYNC_CANCEL_ANY)
+- return -EINVAL;
+- req->cancel.fd = READ_ONCE(sqe->fd);
+- }
+-
+- return 0;
+-}
+-
+-static int __io_async_cancel(struct io_cancel_data *cd, struct io_kiocb *req,
+- unsigned int issue_flags)
+-{
+- bool all = cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY);
+- struct io_ring_ctx *ctx = cd->ctx;
+- struct io_tctx_node *node;
+- int ret, nr = 0;
+-
+- do {
+- ret = io_try_cancel(req, cd);
+- if (ret == -ENOENT)
+- break;
+- if (!all)
+- return ret;
+- nr++;
+- } while (1);
+-
+- /* slow path, try all io-wq's */
+- io_ring_submit_lock(ctx, issue_flags);
+- ret = -ENOENT;
+- list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
+- struct io_uring_task *tctx = node->task->io_uring;
+-
+- ret = io_async_cancel_one(tctx, cd);
+- if (ret != -ENOENT) {
+- if (!all)
+- break;
+- nr++;
+- }
+- }
+- io_ring_submit_unlock(ctx, issue_flags);
+- return all ? nr : ret;
+-}
+-
+-static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_cancel_data cd = {
+- .ctx = req->ctx,
+- .data = req->cancel.addr,
+- .flags = req->cancel.flags,
+- .seq = atomic_inc_return(&req->ctx->cancel_seq),
+- };
+- int ret;
+-
+- if (cd.flags & IORING_ASYNC_CANCEL_FD) {
+- if (req->flags & REQ_F_FIXED_FILE)
+- req->file = io_file_get_fixed(req, req->cancel.fd,
+- issue_flags);
+- else
+- req->file = io_file_get_normal(req, req->cancel.fd);
+- if (!req->file) {
+- ret = -EBADF;
+- goto done;
+- }
+- cd.file = req->file;
+- }
+-
+- ret = __io_async_cancel(&cd, req, issue_flags);
+-done:
+- if (ret < 0)
+- req_set_fail(req);
+- io_req_complete_post(req, ret, 0);
+- return 0;
+-}
+-
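io_async_cancel() matches by user_data unless IORING_ASYNC_CANCEL_FD or _ANY is set, and with _ALL keeps cancelling until -ENOENT. A sketch using liburing's 64-bit helper (assumes liburing >= 2.2 and kernel >= 5.19 for the new flags):

#include <liburing.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int p[2];

	if (pipe(p) < 0 || io_uring_queue_init(4, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_poll_add(sqe, p[0], POLLIN);	/* will never fire */
	io_uring_sqe_set_data64(sqe, 0x1234);
	io_uring_submit(&ring);

	sqe = io_uring_get_sqe(&ring);
	/* match by user_data; flags could instead select by fd
	 * (IORING_ASYNC_CANCEL_FD) or match everything (_ANY) */
	io_uring_prep_cancel64(sqe, 0x1234, 0);
	io_uring_submit(&ring);

	for (int i = 0; i < 2; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		printf("user_data=%llu res=%d\n",
		       (unsigned long long)cqe->user_data, cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}
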
+-static int io_files_update_prep(struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+- return -EINVAL;
+- if (sqe->rw_flags || sqe->splice_fd_in)
+- return -EINVAL;
+-
+- req->rsrc_update.offset = READ_ONCE(sqe->off);
+- req->rsrc_update.nr_args = READ_ONCE(sqe->len);
+- if (!req->rsrc_update.nr_args)
+- return -EINVAL;
+- req->rsrc_update.arg = READ_ONCE(sqe->addr);
+- return 0;
+-}
+-
+-static int io_files_update_with_index_alloc(struct io_kiocb *req,
+- unsigned int issue_flags)
+-{
+- __s32 __user *fds = u64_to_user_ptr(req->rsrc_update.arg);
+- unsigned int done;
+- struct file *file;
+- int ret, fd;
+-
+- if (!req->ctx->file_data)
+- return -ENXIO;
+-
+- for (done = 0; done < req->rsrc_update.nr_args; done++) {
+- if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
+- ret = -EFAULT;
+- break;
+- }
+-
+- file = fget(fd);
+- if (!file) {
+- ret = -EBADF;
+- break;
+- }
+- ret = io_fixed_fd_install(req, issue_flags, file,
+- IORING_FILE_INDEX_ALLOC);
+- if (ret < 0)
+- break;
+- if (copy_to_user(&fds[done], &ret, sizeof(ret))) {
+- __io_close_fixed(req, issue_flags, ret);
+- ret = -EFAULT;
+- break;
+- }
+- }
+-
+- if (done)
+- return done;
+- return ret;
+-}
+-
+-static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_uring_rsrc_update2 up;
+- int ret;
+-
+- up.offset = req->rsrc_update.offset;
+- up.data = req->rsrc_update.arg;
+- up.nr = 0;
+- up.tags = 0;
+- up.resv = 0;
+- up.resv2 = 0;
+-
+- if (req->rsrc_update.offset == IORING_FILE_INDEX_ALLOC) {
+- ret = io_files_update_with_index_alloc(req, issue_flags);
+- } else {
+- io_ring_submit_lock(ctx, issue_flags);
+- ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
+- &up, req->rsrc_update.nr_args);
+- io_ring_submit_unlock(ctx, issue_flags);
+- }
+-
+- if (ret < 0)
+- req_set_fail(req);
+- __io_req_complete(req, issue_flags, ret, 0);
+- return 0;
+-}
+-
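With offset IORING_FILE_INDEX_ALLOC, io_files_update() auto-allocates slots and writes the chosen indices back to the user array; with a fixed offset it replaces entries in place, as sketched below (illustrative; a registered table is assumed, error handling trimmed):

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int null0, slots[2], fds[1];

	null0 = open("/dev/null", O_RDONLY);
	slots[0] = slots[1] = null0;
	if (null0 < 0 || io_uring_queue_init(4, &ring, 0) < 0 ||
	    io_uring_register_files(&ring, slots, 2) < 0)
		return 1;

	fds[0] = open("/dev/zero", O_RDONLY);
	sqe = io_uring_get_sqe(&ring);
	/* replace slot 1 of the registered table; res is the number of
	 * slots updated */
	io_uring_prep_files_update(sqe, fds, 1, 1);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("files_update res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
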
+-static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- switch (req->opcode) {
+- case IORING_OP_NOP:
+- return io_nop_prep(req, sqe);
+- case IORING_OP_READV:
+- case IORING_OP_READ_FIXED:
+- case IORING_OP_READ:
+- case IORING_OP_WRITEV:
+- case IORING_OP_WRITE_FIXED:
+- case IORING_OP_WRITE:
+- return io_prep_rw(req, sqe);
+- case IORING_OP_POLL_ADD:
+- return io_poll_add_prep(req, sqe);
+- case IORING_OP_POLL_REMOVE:
+- return io_poll_remove_prep(req, sqe);
+- case IORING_OP_FSYNC:
+- return io_fsync_prep(req, sqe);
+- case IORING_OP_SYNC_FILE_RANGE:
+- return io_sfr_prep(req, sqe);
+- case IORING_OP_SENDMSG:
+- case IORING_OP_SEND:
+- return io_sendmsg_prep(req, sqe);
+- case IORING_OP_RECVMSG:
+- case IORING_OP_RECV:
+- return io_recvmsg_prep(req, sqe);
+- case IORING_OP_CONNECT:
+- return io_connect_prep(req, sqe);
+- case IORING_OP_TIMEOUT:
+- return io_timeout_prep(req, sqe);
+- case IORING_OP_TIMEOUT_REMOVE:
+- return io_timeout_remove_prep(req, sqe);
+- case IORING_OP_ASYNC_CANCEL:
+- return io_async_cancel_prep(req, sqe);
+- case IORING_OP_LINK_TIMEOUT:
+- return io_link_timeout_prep(req, sqe);
+- case IORING_OP_ACCEPT:
+- return io_accept_prep(req, sqe);
+- case IORING_OP_FALLOCATE:
+- return io_fallocate_prep(req, sqe);
+- case IORING_OP_OPENAT:
+- return io_openat_prep(req, sqe);
+- case IORING_OP_CLOSE:
+- return io_close_prep(req, sqe);
+- case IORING_OP_FILES_UPDATE:
+- return io_files_update_prep(req, sqe);
+- case IORING_OP_STATX:
+- return io_statx_prep(req, sqe);
+- case IORING_OP_FADVISE:
+- return io_fadvise_prep(req, sqe);
+- case IORING_OP_MADVISE:
+- return io_madvise_prep(req, sqe);
+- case IORING_OP_OPENAT2:
+- return io_openat2_prep(req, sqe);
+- case IORING_OP_EPOLL_CTL:
+- return io_epoll_ctl_prep(req, sqe);
+- case IORING_OP_SPLICE:
+- return io_splice_prep(req, sqe);
+- case IORING_OP_PROVIDE_BUFFERS:
+- return io_provide_buffers_prep(req, sqe);
+- case IORING_OP_REMOVE_BUFFERS:
+- return io_remove_buffers_prep(req, sqe);
+- case IORING_OP_TEE:
+- return io_tee_prep(req, sqe);
+- case IORING_OP_SHUTDOWN:
+- return io_shutdown_prep(req, sqe);
+- case IORING_OP_RENAMEAT:
+- return io_renameat_prep(req, sqe);
+- case IORING_OP_UNLINKAT:
+- return io_unlinkat_prep(req, sqe);
+- case IORING_OP_MKDIRAT:
+- return io_mkdirat_prep(req, sqe);
+- case IORING_OP_SYMLINKAT:
+- return io_symlinkat_prep(req, sqe);
+- case IORING_OP_LINKAT:
+- return io_linkat_prep(req, sqe);
+- case IORING_OP_MSG_RING:
+- return io_msg_ring_prep(req, sqe);
+- case IORING_OP_FSETXATTR:
+- return io_fsetxattr_prep(req, sqe);
+- case IORING_OP_SETXATTR:
+- return io_setxattr_prep(req, sqe);
+- case IORING_OP_FGETXATTR:
+- return io_fgetxattr_prep(req, sqe);
+- case IORING_OP_GETXATTR:
+- return io_getxattr_prep(req, sqe);
+- case IORING_OP_SOCKET:
+- return io_socket_prep(req, sqe);
+- case IORING_OP_URING_CMD:
+- return io_uring_cmd_prep(req, sqe);
+- }
+-
+- printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
+- req->opcode);
+- return -EINVAL;
+-}
+-
+-static int io_req_prep_async(struct io_kiocb *req)
+-{
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+-
+- /* assign early for deferred execution for non-fixed file */
+- if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
+- req->file = io_file_get_normal(req, req->cqe.fd);
+- if (!def->needs_async_setup)
+- return 0;
+- if (WARN_ON_ONCE(req_has_async_data(req)))
+- return -EFAULT;
+- if (io_alloc_async_data(req))
+- return -EAGAIN;
+-
+- switch (req->opcode) {
+- case IORING_OP_READV:
+- return io_readv_prep_async(req);
+- case IORING_OP_WRITEV:
+- return io_writev_prep_async(req);
+- case IORING_OP_SENDMSG:
+- return io_sendmsg_prep_async(req);
+- case IORING_OP_RECVMSG:
+- return io_recvmsg_prep_async(req);
+- case IORING_OP_CONNECT:
+- return io_connect_prep_async(req);
+- case IORING_OP_URING_CMD:
+- return io_uring_cmd_prep_async(req);
+- }
+- printk_once(KERN_WARNING "io_uring: prep_async() bad opcode %d\n",
+- req->opcode);
+- return -EFAULT;
+-}
+-
+-static u32 io_get_sequence(struct io_kiocb *req)
+-{
+- u32 seq = req->ctx->cached_sq_head;
+- struct io_kiocb *cur;
+-
+- /* need original cached_sq_head, but it was increased for each req */
+- io_for_each_link(cur, req)
+- seq--;
+- return seq;
+-}
+-
+-static __cold void io_drain_req(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_defer_entry *de;
+- int ret;
+- u32 seq = io_get_sequence(req);
+-
+- /* Still need defer if there is pending req in defer list. */
+- spin_lock(&ctx->completion_lock);
+- if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
+- spin_unlock(&ctx->completion_lock);
+-queue:
+- ctx->drain_active = false;
+- io_req_task_queue(req);
+- return;
+- }
+- spin_unlock(&ctx->completion_lock);
+-
+- ret = io_req_prep_async(req);
+- if (ret) {
+-fail:
+- io_req_complete_failed(req, ret);
+- return;
+- }
+- io_prep_async_link(req);
+- de = kmalloc(sizeof(*de), GFP_KERNEL);
+- if (!de) {
+- ret = -ENOMEM;
+- goto fail;
+- }
+-
+- spin_lock(&ctx->completion_lock);
+- if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
+- spin_unlock(&ctx->completion_lock);
+- kfree(de);
+- goto queue;
+- }
+-
+- trace_io_uring_defer(ctx, req, req->cqe.user_data, req->opcode);
+- de->req = req;
+- de->seq = seq;
+- list_add_tail(&de->list, &ctx->defer_list);
+- spin_unlock(&ctx->completion_lock);
+-}
+-
+-static void io_clean_op(struct io_kiocb *req)
+-{
+- if (req->flags & REQ_F_BUFFER_SELECTED) {
+- spin_lock(&req->ctx->completion_lock);
+- io_put_kbuf_comp(req);
+- spin_unlock(&req->ctx->completion_lock);
+- }
+-
+- if (req->flags & REQ_F_NEED_CLEANUP) {
+- switch (req->opcode) {
+- case IORING_OP_READV:
+- case IORING_OP_READ_FIXED:
+- case IORING_OP_READ:
+- case IORING_OP_WRITEV:
+- case IORING_OP_WRITE_FIXED:
+- case IORING_OP_WRITE: {
+- struct io_async_rw *io = req->async_data;
+-
+- kfree(io->free_iovec);
+- break;
+- }
+- case IORING_OP_RECVMSG:
+- case IORING_OP_SENDMSG: {
+- struct io_async_msghdr *io = req->async_data;
+-
+- kfree(io->free_iov);
+- break;
+- }
+- case IORING_OP_OPENAT:
+- case IORING_OP_OPENAT2:
+- if (req->open.filename)
+- putname(req->open.filename);
+- break;
+- case IORING_OP_RENAMEAT:
+- putname(req->rename.oldpath);
+- putname(req->rename.newpath);
+- break;
+- case IORING_OP_UNLINKAT:
+- putname(req->unlink.filename);
+- break;
+- case IORING_OP_MKDIRAT:
+- putname(req->mkdir.filename);
+- break;
+- case IORING_OP_SYMLINKAT:
+- putname(req->symlink.oldpath);
+- putname(req->symlink.newpath);
+- break;
+- case IORING_OP_LINKAT:
+- putname(req->hardlink.oldpath);
+- putname(req->hardlink.newpath);
+- break;
+- case IORING_OP_STATX:
+- if (req->statx.filename)
+- putname(req->statx.filename);
+- break;
+- case IORING_OP_SETXATTR:
+- case IORING_OP_FSETXATTR:
+- case IORING_OP_GETXATTR:
+- case IORING_OP_FGETXATTR:
+- __io_xattr_finish(req);
+- break;
+- }
+- }
+- if ((req->flags & REQ_F_POLLED) && req->apoll) {
+- kfree(req->apoll->double_poll);
+- kfree(req->apoll);
+- req->apoll = NULL;
+- }
+- if (req->flags & REQ_F_INFLIGHT) {
+- struct io_uring_task *tctx = req->task->io_uring;
+-
+- atomic_dec(&tctx->inflight_tracked);
+- }
+- if (req->flags & REQ_F_CREDS)
+- put_cred(req->creds);
+- if (req->flags & REQ_F_ASYNC_DATA) {
+- kfree(req->async_data);
+- req->async_data = NULL;
+- }
+- req->flags &= ~IO_REQ_CLEAN_FLAGS;
+-}
+-
+-static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- if (req->file || !io_op_defs[req->opcode].needs_file)
+- return true;
+-
+- if (req->flags & REQ_F_FIXED_FILE)
+- req->file = io_file_get_fixed(req, req->cqe.fd, issue_flags);
+- else
+- req->file = io_file_get_normal(req, req->cqe.fd);
+-
+- return !!req->file;
+-}
+-
+-static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+- const struct cred *creds = NULL;
+- int ret;
+-
+- if (unlikely(!io_assign_file(req, issue_flags)))
+- return -EBADF;
+-
+- if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
+- creds = override_creds(req->creds);
+-
+- if (!def->audit_skip)
+- audit_uring_entry(req->opcode);
+-
+- switch (req->opcode) {
+- case IORING_OP_NOP:
+- ret = io_nop(req, issue_flags);
+- break;
+- case IORING_OP_READV:
+- case IORING_OP_READ_FIXED:
+- case IORING_OP_READ:
+- ret = io_read(req, issue_flags);
+- break;
+- case IORING_OP_WRITEV:
+- case IORING_OP_WRITE_FIXED:
+- case IORING_OP_WRITE:
+- ret = io_write(req, issue_flags);
+- break;
+- case IORING_OP_FSYNC:
+- ret = io_fsync(req, issue_flags);
+- break;
+- case IORING_OP_POLL_ADD:
+- ret = io_poll_add(req, issue_flags);
+- break;
+- case IORING_OP_POLL_REMOVE:
+- ret = io_poll_remove(req, issue_flags);
+- break;
+- case IORING_OP_SYNC_FILE_RANGE:
+- ret = io_sync_file_range(req, issue_flags);
+- break;
+- case IORING_OP_SENDMSG:
+- ret = io_sendmsg(req, issue_flags);
+- break;
+- case IORING_OP_SEND:
+- ret = io_send(req, issue_flags);
+- break;
+- case IORING_OP_RECVMSG:
+- ret = io_recvmsg(req, issue_flags);
+- break;
+- case IORING_OP_RECV:
+- ret = io_recv(req, issue_flags);
+- break;
+- case IORING_OP_TIMEOUT:
+- ret = io_timeout(req, issue_flags);
+- break;
+- case IORING_OP_TIMEOUT_REMOVE:
+- ret = io_timeout_remove(req, issue_flags);
+- break;
+- case IORING_OP_ACCEPT:
+- ret = io_accept(req, issue_flags);
+- break;
+- case IORING_OP_CONNECT:
+- ret = io_connect(req, issue_flags);
+- break;
+- case IORING_OP_ASYNC_CANCEL:
+- ret = io_async_cancel(req, issue_flags);
+- break;
+- case IORING_OP_FALLOCATE:
+- ret = io_fallocate(req, issue_flags);
+- break;
+- case IORING_OP_OPENAT:
+- ret = io_openat(req, issue_flags);
+- break;
+- case IORING_OP_CLOSE:
+- ret = io_close(req, issue_flags);
+- break;
+- case IORING_OP_FILES_UPDATE:
+- ret = io_files_update(req, issue_flags);
+- break;
+- case IORING_OP_STATX:
+- ret = io_statx(req, issue_flags);
+- break;
+- case IORING_OP_FADVISE:
+- ret = io_fadvise(req, issue_flags);
+- break;
+- case IORING_OP_MADVISE:
+- ret = io_madvise(req, issue_flags);
+- break;
+- case IORING_OP_OPENAT2:
+- ret = io_openat2(req, issue_flags);
+- break;
+- case IORING_OP_EPOLL_CTL:
+- ret = io_epoll_ctl(req, issue_flags);
+- break;
+- case IORING_OP_SPLICE:
+- ret = io_splice(req, issue_flags);
+- break;
+- case IORING_OP_PROVIDE_BUFFERS:
+- ret = io_provide_buffers(req, issue_flags);
+- break;
+- case IORING_OP_REMOVE_BUFFERS:
+- ret = io_remove_buffers(req, issue_flags);
+- break;
+- case IORING_OP_TEE:
+- ret = io_tee(req, issue_flags);
+- break;
+- case IORING_OP_SHUTDOWN:
+- ret = io_shutdown(req, issue_flags);
+- break;
+- case IORING_OP_RENAMEAT:
+- ret = io_renameat(req, issue_flags);
+- break;
+- case IORING_OP_UNLINKAT:
+- ret = io_unlinkat(req, issue_flags);
+- break;
+- case IORING_OP_MKDIRAT:
+- ret = io_mkdirat(req, issue_flags);
+- break;
+- case IORING_OP_SYMLINKAT:
+- ret = io_symlinkat(req, issue_flags);
+- break;
+- case IORING_OP_LINKAT:
+- ret = io_linkat(req, issue_flags);
+- break;
+- case IORING_OP_MSG_RING:
+- ret = io_msg_ring(req, issue_flags);
+- break;
+- case IORING_OP_FSETXATTR:
+- ret = io_fsetxattr(req, issue_flags);
+- break;
+- case IORING_OP_SETXATTR:
+- ret = io_setxattr(req, issue_flags);
+- break;
+- case IORING_OP_FGETXATTR:
+- ret = io_fgetxattr(req, issue_flags);
+- break;
+- case IORING_OP_GETXATTR:
+- ret = io_getxattr(req, issue_flags);
+- break;
+- case IORING_OP_SOCKET:
+- ret = io_socket(req, issue_flags);
+- break;
+- case IORING_OP_URING_CMD:
+- ret = io_uring_cmd(req, issue_flags);
+- break;
+- default:
+- ret = -EINVAL;
+- break;
+- }
+-
+- if (!def->audit_skip)
+- audit_uring_exit(!ret, ret);
+-
+- if (creds)
+- revert_creds(creds);
+- if (ret)
+- return ret;
+- /* If the op doesn't have a file, we're not polling for it */
+- if ((req->ctx->flags & IORING_SETUP_IOPOLL) && req->file)
+- io_iopoll_req_issued(req, issue_flags);
+-
+- return 0;
+-}
+-
+-static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
+-{
+- struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-
+- req = io_put_req_find_next(req);
+- return req ? &req->work : NULL;
+-}
+-
+-static void io_wq_submit_work(struct io_wq_work *work)
+-{
+- struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+- unsigned int issue_flags = IO_URING_F_UNLOCKED;
+- bool needs_poll = false;
+- int ret = 0, err = -ECANCELED;
+-
+- /* one will be dropped by ->io_free_work() after returning to io-wq */
+- if (!(req->flags & REQ_F_REFCOUNT))
+- __io_req_set_refcount(req, 2);
+- else
+- req_ref_get(req);
+-
+- io_arm_ltimeout(req);
+-
+- /* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+- if (work->flags & IO_WQ_WORK_CANCEL) {
+-fail:
+- io_req_task_queue_fail(req, err);
+- return;
+- }
+- if (!io_assign_file(req, issue_flags)) {
+- err = -EBADF;
+- work->flags |= IO_WQ_WORK_CANCEL;
+- goto fail;
+- }
+-
+- if (req->flags & REQ_F_FORCE_ASYNC) {
+- bool opcode_poll = def->pollin || def->pollout;
+-
+- if (opcode_poll && file_can_poll(req->file)) {
+- needs_poll = true;
+- issue_flags |= IO_URING_F_NONBLOCK;
+- }
+- }
+-
+- do {
+- ret = io_issue_sqe(req, issue_flags);
+- if (ret != -EAGAIN)
+- break;
+- /*
+- * We can get EAGAIN for iopolled IO even though we're
+- * forcing a sync submission from here, since we can't
+- * wait for request slots on the block side.
+- */
+- if (!needs_poll) {
+- if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
+- break;
+- cond_resched();
+- continue;
+- }
+-
+- if (io_arm_poll_handler(req, issue_flags) == IO_APOLL_OK)
+- return;
+- /* aborted or ready, in either case retry blocking */
+- needs_poll = false;
+- issue_flags &= ~IO_URING_F_NONBLOCK;
+- } while (1);
+-
+- /* avoid locking problems by failing it from a clean context */
+- if (ret)
+- io_req_task_queue_fail(req, ret);
+-}
+-
+-static inline struct io_fixed_file *io_fixed_file_slot(struct io_file_table *table,
+- unsigned i)
+-{
+- return &table->files[i];
+-}
+-
+-static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
+- int index)
+-{
+- struct io_fixed_file *slot = io_fixed_file_slot(&ctx->file_table, index);
+-
+- return (struct file *) (slot->file_ptr & FFS_MASK);
+-}
+-
+-static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file)
+-{
+- unsigned long file_ptr = (unsigned long) file;
+-
+- file_ptr |= io_file_get_flags(file);
+- file_slot->file_ptr = file_ptr;
+-}
+-
+-static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
+- unsigned int issue_flags)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct file *file = NULL;
+- unsigned long file_ptr;
+-
+- io_ring_submit_lock(ctx, issue_flags);
+-
+- if (unlikely((unsigned int)fd >= ctx->nr_user_files))
+- goto out;
+- fd = array_index_nospec(fd, ctx->nr_user_files);
+- file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
+- file = (struct file *) (file_ptr & FFS_MASK);
+- file_ptr &= ~FFS_MASK;
+- /* mask in overlapping REQ_F and FFS bits */
+- req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
+- io_req_set_rsrc_node(req, ctx, 0);
+- WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap));
+-out:
+- io_ring_submit_unlock(ctx, issue_flags);
+- return file;
+-}
+-
+-static struct file *io_file_get_normal(struct io_kiocb *req, int fd)
+-{
+- struct file *file = fget(fd);
+-
+- trace_io_uring_file_get(req->ctx, req, req->cqe.user_data, fd);
+-
+- /* we don't allow fixed io_uring files */
+- if (file && file->f_op == &io_uring_fops)
+- io_req_track_inflight(req);
+- return file;
+-}
+-
+-static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
+-{
+- struct io_kiocb *prev = req->timeout.prev;
+- int ret = -ENOENT;
+-
+- if (prev) {
+- if (!(req->task->flags & PF_EXITING)) {
+- struct io_cancel_data cd = {
+- .ctx = req->ctx,
+- .data = prev->cqe.user_data,
+- };
+-
+- ret = io_try_cancel(req, &cd);
+- }
+- io_req_complete_post(req, ret ?: -ETIME, 0);
+- io_put_req(prev);
+- } else {
+- io_req_complete_post(req, -ETIME, 0);
+- }
+-}
+-
+-static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+-{
+- struct io_timeout_data *data = container_of(timer,
+- struct io_timeout_data, timer);
+- struct io_kiocb *prev, *req = data->req;
+- struct io_ring_ctx *ctx = req->ctx;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctx->timeout_lock, flags);
+- prev = req->timeout.head;
+- req->timeout.head = NULL;
+-
+- /*
+- * We don't expect the list to be empty, that will only happen if we
+- * race with the completion of the linked work.
+- */
+- if (prev) {
+- io_remove_next_linked(prev);
+- if (!req_ref_inc_not_zero(prev))
+- prev = NULL;
+- }
+- list_del(&req->timeout.list);
+- req->timeout.prev = prev;
+- spin_unlock_irqrestore(&ctx->timeout_lock, flags);
+-
+- req->io_task_work.func = io_req_task_link_timeout;
+- io_req_task_work_add(req);
+- return HRTIMER_NORESTART;
+-}
+-
+-static void io_queue_linked_timeout(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- spin_lock_irq(&ctx->timeout_lock);
+- /*
+- * If the back reference is NULL, then our linked request finished
+-	 * before we got a chance to set up the timer
+- */
+- if (req->timeout.head) {
+- struct io_timeout_data *data = req->async_data;
+-
+- data->timer.function = io_link_timeout_fn;
+- hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
+- data->mode);
+- list_add_tail(&req->timeout.list, &ctx->ltimeout_list);
+- }
+- spin_unlock_irq(&ctx->timeout_lock);
+- /* drop submission reference */
+- io_put_req(req);
+-}
+-
+-static void io_queue_async(struct io_kiocb *req, int ret)
+- __must_hold(&req->ctx->uring_lock)
+-{
+- struct io_kiocb *linked_timeout;
+-
+- if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
+- io_req_complete_failed(req, ret);
+- return;
+- }
+-
+- linked_timeout = io_prep_linked_timeout(req);
+-
+- switch (io_arm_poll_handler(req, 0)) {
+- case IO_APOLL_READY:
+- io_req_task_queue(req);
+- break;
+- case IO_APOLL_ABORTED:
+- /*
+- * Queued up for async execution, worker will release
+- * submit reference when the iocb is actually submitted.
+- */
+- io_kbuf_recycle(req, 0);
+- io_queue_iowq(req, NULL);
+- break;
+- case IO_APOLL_OK:
+- break;
+- }
+-
+- if (linked_timeout)
+- io_queue_linked_timeout(linked_timeout);
+-}
+-
+-static inline void io_queue_sqe(struct io_kiocb *req)
+- __must_hold(&req->ctx->uring_lock)
+-{
+- int ret;
+-
+- ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
+-
+- if (req->flags & REQ_F_COMPLETE_INLINE) {
+- io_req_add_compl_list(req);
+- return;
+- }
+- /*
+- * We async punt it if the file wasn't marked NOWAIT, or if the file
+- * doesn't support non-blocking read/write attempts
+- */
+- if (likely(!ret))
+- io_arm_ltimeout(req);
+- else
+- io_queue_async(req, ret);
+-}
+-
+-static void io_queue_sqe_fallback(struct io_kiocb *req)
+- __must_hold(&req->ctx->uring_lock)
+-{
+- if (unlikely(req->flags & REQ_F_FAIL)) {
+- /*
+-		 * We don't submit; fail them all. For that, replace hardlinks
+-		 * with normal links. An extra REQ_F_LINK is tolerated.
+- */
+- req->flags &= ~REQ_F_HARDLINK;
+- req->flags |= REQ_F_LINK;
+- io_req_complete_failed(req, req->cqe.res);
+- } else if (unlikely(req->ctx->drain_active)) {
+- io_drain_req(req);
+- } else {
+- int ret = io_req_prep_async(req);
+-
+- if (unlikely(ret))
+- io_req_complete_failed(req, ret);
+- else
+- io_queue_iowq(req, NULL);
+- }
+-}
+-
+-/*
+- * Check SQE restrictions (opcode and flags).
+- *
+- * Returns 'true' if SQE is allowed, 'false' otherwise.
+- */
+-static inline bool io_check_restriction(struct io_ring_ctx *ctx,
+- struct io_kiocb *req,
+- unsigned int sqe_flags)
+-{
+- if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
+- return false;
+-
+- if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
+- ctx->restrictions.sqe_flags_required)
+- return false;
+-
+- if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
+- ctx->restrictions.sqe_flags_required))
+- return false;
+-
+- return true;
+-}
+-
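The restriction check above boils down to two bitmask rules: every
required flag must be present, and no flag may be set outside the union
of allowed and required. A self-contained sketch under those stated
rules (flags_ok() is a hypothetical name):

#include <assert.h>
#include <stdbool.h>

static bool flags_ok(unsigned flags, unsigned required, unsigned allowed)
{
	/* All required bits must be set... */
	if ((flags & required) != required)
		return false;
	/* ...and nothing outside allowed|required may be set. */
	if (flags & ~(allowed | required))
		return false;
	return true;
}

int main(void)
{
	/* required = 0x1, allowed = 0x2 */
	assert(flags_ok(0x1, 0x1, 0x2));  /* required bit present */
	assert(flags_ok(0x3, 0x1, 0x2));  /* required plus an allowed bit */
	assert(!flags_ok(0x2, 0x1, 0x2)); /* required bit missing */
	assert(!flags_ok(0x5, 0x1, 0x2)); /* 0x4 is neither allowed nor required */
	return 0;
}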
+-static void io_init_req_drain(struct io_kiocb *req)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_kiocb *head = ctx->submit_state.link.head;
+-
+- ctx->drain_active = true;
+- if (head) {
+- /*
+- * If we need to drain a request in the middle of a link, drain
+- * the head request and the next request/link after the current
+- * link. Considering sequential execution of links,
+- * REQ_F_IO_DRAIN will be maintained for every request of our
+- * link.
+- */
+- head->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
+- ctx->drain_next = true;
+- }
+-}
+-
+-static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+- __must_hold(&ctx->uring_lock)
+-{
+- const struct io_op_def *def;
+- unsigned int sqe_flags;
+- int personality;
+- u8 opcode;
+-
+- /* req is partially pre-initialised, see io_preinit_req() */
+- req->opcode = opcode = READ_ONCE(sqe->opcode);
+- /* same numerical values with corresponding REQ_F_*, safe to copy */
+- req->flags = sqe_flags = READ_ONCE(sqe->flags);
+- req->cqe.user_data = READ_ONCE(sqe->user_data);
+- req->file = NULL;
+- req->rsrc_node = NULL;
+- req->task = current;
+-
+- if (unlikely(opcode >= IORING_OP_LAST)) {
+- req->opcode = 0;
+- return -EINVAL;
+- }
+- def = &io_op_defs[opcode];
+- if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
+- /* enforce forwards compatibility on users */
+- if (sqe_flags & ~SQE_VALID_FLAGS)
+- return -EINVAL;
+- if (sqe_flags & IOSQE_BUFFER_SELECT) {
+- if (!def->buffer_select)
+- return -EOPNOTSUPP;
+- req->buf_index = READ_ONCE(sqe->buf_group);
+- }
+- if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
+- ctx->drain_disabled = true;
+- if (sqe_flags & IOSQE_IO_DRAIN) {
+- if (ctx->drain_disabled)
+- return -EOPNOTSUPP;
+- io_init_req_drain(req);
+- }
+- }
+- if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
+- if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
+- return -EACCES;
+- /* knock it to the slow queue path, will be drained there */
+- if (ctx->drain_active)
+- req->flags |= REQ_F_FORCE_ASYNC;
+- /* if there is no link, we're at "next" request and need to drain */
+- if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
+- ctx->drain_next = false;
+- ctx->drain_active = true;
+- req->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
+- }
+- }
+-
+- if (!def->ioprio && sqe->ioprio)
+- return -EINVAL;
+- if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
+- return -EINVAL;
+-
+- if (def->needs_file) {
+- struct io_submit_state *state = &ctx->submit_state;
+-
+- req->cqe.fd = READ_ONCE(sqe->fd);
+-
+- /*
+- * Plug now if we have more than 2 IO left after this, and the
+- * target is potentially a read/write to block based storage.
+- */
+- if (state->need_plug && def->plug) {
+- state->plug_started = true;
+- state->need_plug = false;
+- blk_start_plug_nr_ios(&state->plug, state->submit_nr);
+- }
+- }
+-
+- personality = READ_ONCE(sqe->personality);
+- if (personality) {
+- int ret;
+-
+- req->creds = xa_load(&ctx->personalities, personality);
+- if (!req->creds)
+- return -EINVAL;
+- get_cred(req->creds);
+- ret = security_uring_override_creds(req->creds);
+- if (ret) {
+- put_cred(req->creds);
+- return ret;
+- }
+- req->flags |= REQ_F_CREDS;
+- }
+-
+- return io_req_prep(req, sqe);
+-}
+-
+-static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+- struct io_kiocb *req, int ret)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_submit_link *link = &ctx->submit_state.link;
+- struct io_kiocb *head = link->head;
+-
+- trace_io_uring_req_failed(sqe, ctx, req, ret);
+-
+- /*
+- * Avoid breaking links in the middle as it renders links with SQPOLL
+- * unusable. Instead of failing eagerly, continue assembling the link if
+- * applicable and mark the head with REQ_F_FAIL. The link flushing code
+- * should find the flag and handle the rest.
+- */
+- req_fail_link_node(req, ret);
+- if (head && !(head->flags & REQ_F_FAIL))
+- req_fail_link_node(head, -ECANCELED);
+-
+- if (!(req->flags & IO_REQ_LINK_FLAGS)) {
+- if (head) {
+- link->last->link = req;
+- link->head = NULL;
+- req = head;
+- }
+- io_queue_sqe_fallback(req);
+- return ret;
+- }
+-
+- if (head)
+- link->last->link = req;
+- else
+- link->head = req;
+- link->last = req;
+- return 0;
+-}
+-
+-static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
+- __must_hold(&ctx->uring_lock)
+-{
+- struct io_submit_link *link = &ctx->submit_state.link;
+- int ret;
+-
+- ret = io_init_req(ctx, req, sqe);
+- if (unlikely(ret))
+- return io_submit_fail_init(sqe, req, ret);
+-
+- /* don't need @sqe from now on */
+- trace_io_uring_submit_sqe(ctx, req, req->cqe.user_data, req->opcode,
+- req->flags, true,
+- ctx->flags & IORING_SETUP_SQPOLL);
+-
+- /*
+- * If we already have a head request, queue this one for async
+- * submittal once the head completes. If we don't have a head but
+- * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
+- * submitted sync once the chain is complete. If none of those
+- * conditions are true (normal request), then just queue it.
+- */
+- if (unlikely(link->head)) {
+- ret = io_req_prep_async(req);
+- if (unlikely(ret))
+- return io_submit_fail_init(sqe, req, ret);
+-
+- trace_io_uring_link(ctx, req, link->head);
+- link->last->link = req;
+- link->last = req;
+-
+- if (req->flags & IO_REQ_LINK_FLAGS)
+- return 0;
+- /* last request of the link, flush it */
+- req = link->head;
+- link->head = NULL;
+- if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
+- goto fallback;
+-
+- } else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
+- REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
+- if (req->flags & IO_REQ_LINK_FLAGS) {
+- link->head = req;
+- link->last = req;
+- } else {
+-fallback:
+- io_queue_sqe_fallback(req);
+- }
+- return 0;
+- }
+-
+- io_queue_sqe(req);
+- return 0;
+-}
+-
+-/*
+- * Batched submission is done, ensure local IO is flushed out.
+- */
+-static void io_submit_state_end(struct io_ring_ctx *ctx)
+-{
+- struct io_submit_state *state = &ctx->submit_state;
+-
+- if (unlikely(state->link.head))
+- io_queue_sqe_fallback(state->link.head);
+- /* flush only after queuing links as they can generate completions */
+- io_submit_flush_completions(ctx);
+- if (state->plug_started)
+- blk_finish_plug(&state->plug);
+-}
+-
+-/*
+- * Start submission side cache.
+- */
+-static void io_submit_state_start(struct io_submit_state *state,
+- unsigned int max_ios)
+-{
+- state->plug_started = false;
+- state->need_plug = max_ios > 2;
+- state->submit_nr = max_ios;
+- /* set only head, no need to init link_last in advance */
+- state->link.head = NULL;
+-}
+-
+-static void io_commit_sqring(struct io_ring_ctx *ctx)
+-{
+- struct io_rings *rings = ctx->rings;
+-
+- /*
+- * Ensure any loads from the SQEs are done at this point,
+- * since once we write the new head, the application could
+- * write new data to them.
+- */
+- smp_store_release(&rings->sq.head, ctx->cached_sq_head);
+-}
+-
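io_commit_sqring() relies on release semantics: the SQE loads must be
ordered before the head store becomes visible, or userspace could reuse
slots still being read. A rough userspace analogue with plain C11
atomics (publish_head() is a made-up name; the kernel's
smp_store_release() is not a C11 call):

#include <stdatomic.h>

static void publish_head(_Atomic unsigned *shared_head, unsigned new_head)
{
	/* Everything read before this store is ordered before it. */
	atomic_store_explicit(shared_head, new_head, memory_order_release);
}

int main(void)
{
	_Atomic unsigned head = 0;

	publish_head(&head, 32);
	return (int)atomic_load_explicit(&head, memory_order_acquire) - 32;
}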
+-/*
+- * Fetch an sqe, if one is available. Note this returns a pointer to memory
+- * that is mapped by userspace. This means that care needs to be taken to
+- * ensure that reads are stable, as we cannot rely on userspace always
+- * being a good citizen. If members of the sqe are validated and then later
+- * used, it's important that those reads are done through READ_ONCE() to
+- * prevent a re-load down the line.
+- */
+-static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
+-{
+- unsigned head, mask = ctx->sq_entries - 1;
+- unsigned sq_idx = ctx->cached_sq_head++ & mask;
+-
+- /*
+- * The cached sq head (or cq tail) serves two purposes:
+- *
+-	 * 1) allows us to batch the cost of updating the user-visible
+-	 *    head.
+- * 2) allows the kernel side to track the head on its own, even
+- * though the application is the one updating it.
+- */
+- head = READ_ONCE(ctx->sq_array[sq_idx]);
+- if (likely(head < ctx->sq_entries)) {
+- /* double index for 128-byte SQEs, twice as long */
+- if (ctx->flags & IORING_SETUP_SQE128)
+- head <<= 1;
+- return &ctx->sq_sqes[head];
+- }
+-
+- /* drop invalid entries */
+- ctx->cq_extra--;
+- WRITE_ONCE(ctx->rings->sq_dropped,
+- READ_ONCE(ctx->rings->sq_dropped) + 1);
+- return NULL;
+-}
+-
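The indexing in io_get_sqe() is the classic power-of-two ring trick:
the counter increases monotonically and "counter & (entries - 1)" is a
cheap modulo onto valid slots, with the array index doubled when the
ring was created with 128-byte SQEs. A standalone illustration (all
values are made up):

#include <stdio.h>

int main(void)
{
	unsigned entries = 8;          /* ring size; must be a power of two */
	unsigned mask = entries - 1;
	unsigned cached_head = 13;     /* monotonically increasing counter */

	unsigned sq_idx = cached_head & mask;   /* 13 % 8 == 5 */
	printf("slot %u\n", sq_idx);

	/* With doubled-size (128-byte) entries, the array index doubles. */
	printf("sqe128 slot %u\n", sq_idx << 1);
	return 0;
}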
+-static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+- __must_hold(&ctx->uring_lock)
+-{
+- unsigned int entries = io_sqring_entries(ctx);
+- unsigned int left;
+- int ret;
+-
+- if (unlikely(!entries))
+- return 0;
+- /* make sure SQ entry isn't read before tail */
+- ret = left = min3(nr, ctx->sq_entries, entries);
+- io_get_task_refs(left);
+- io_submit_state_start(&ctx->submit_state, left);
+-
+- do {
+- const struct io_uring_sqe *sqe;
+- struct io_kiocb *req;
+-
+- if (unlikely(!io_alloc_req_refill(ctx)))
+- break;
+- req = io_alloc_req(ctx);
+- sqe = io_get_sqe(ctx);
+- if (unlikely(!sqe)) {
+- io_req_add_to_cache(req, ctx);
+- break;
+- }
+-
+- /*
+- * Continue submitting even for sqe failure if the
+-		 * ring was set up with IORING_SETUP_SUBMIT_ALL
+- */
+- if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
+- !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
+- left--;
+- break;
+- }
+- } while (--left);
+-
+- if (unlikely(left)) {
+- ret -= left;
+- /* try again if it submitted nothing and can't allocate a req */
+- if (!ret && io_req_cache_empty(ctx))
+- ret = -EAGAIN;
+- current->io_uring->cached_refs += left;
+- }
+-
+- io_submit_state_end(ctx);
+- /* Commit SQ ring head once we've consumed and submitted all SQEs */
+- io_commit_sqring(ctx);
+- return ret;
+-}
+-
+-static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
+-{
+- return READ_ONCE(sqd->state);
+-}
+-
+-static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
+-{
+- unsigned int to_submit;
+- int ret = 0;
+-
+- to_submit = io_sqring_entries(ctx);
+- /* if we're handling multiple rings, cap submit size for fairness */
+- if (cap_entries && to_submit > IORING_SQPOLL_CAP_ENTRIES_VALUE)
+- to_submit = IORING_SQPOLL_CAP_ENTRIES_VALUE;
+-
+- if (!wq_list_empty(&ctx->iopoll_list) || to_submit) {
+- const struct cred *creds = NULL;
+-
+- if (ctx->sq_creds != current_cred())
+- creds = override_creds(ctx->sq_creds);
+-
+- mutex_lock(&ctx->uring_lock);
+- if (!wq_list_empty(&ctx->iopoll_list))
+- io_do_iopoll(ctx, true);
+-
+- /*
+- * Don't submit if refs are dying, good for io_uring_register(),
+- * but also it is relied upon by io_ring_exit_work()
+- */
+- if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
+- !(ctx->flags & IORING_SETUP_R_DISABLED))
+- ret = io_submit_sqes(ctx, to_submit);
+- mutex_unlock(&ctx->uring_lock);
+-
+- if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
+- wake_up(&ctx->sqo_sq_wait);
+- if (creds)
+- revert_creds(creds);
+- }
+-
+- return ret;
+-}
+-
+-static __cold void io_sqd_update_thread_idle(struct io_sq_data *sqd)
+-{
+- struct io_ring_ctx *ctx;
+- unsigned sq_thread_idle = 0;
+-
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+- sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
+- sqd->sq_thread_idle = sq_thread_idle;
+-}
+-
+-static bool io_sqd_handle_event(struct io_sq_data *sqd)
+-{
+- bool did_sig = false;
+- struct ksignal ksig;
+-
+- if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
+- signal_pending(current)) {
+- mutex_unlock(&sqd->lock);
+- if (signal_pending(current))
+- did_sig = get_signal(&ksig);
+- cond_resched();
+- mutex_lock(&sqd->lock);
+- }
+- return did_sig || test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+-}
+-
+-static int io_sq_thread(void *data)
+-{
+- struct io_sq_data *sqd = data;
+- struct io_ring_ctx *ctx;
+- unsigned long timeout = 0;
+- char buf[TASK_COMM_LEN];
+- DEFINE_WAIT(wait);
+-
+- snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
+- set_task_comm(current, buf);
+-
+- if (sqd->sq_cpu != -1)
+- set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
+- else
+- set_cpus_allowed_ptr(current, cpu_online_mask);
+- current->flags |= PF_NO_SETAFFINITY;
+-
+- audit_alloc_kernel(current);
+-
+- mutex_lock(&sqd->lock);
+- while (1) {
+- bool cap_entries, sqt_spin = false;
+-
+- if (io_sqd_events_pending(sqd) || signal_pending(current)) {
+- if (io_sqd_handle_event(sqd))
+- break;
+- timeout = jiffies + sqd->sq_thread_idle;
+- }
+-
+- cap_entries = !list_is_singular(&sqd->ctx_list);
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
+- int ret = __io_sq_thread(ctx, cap_entries);
+-
+- if (!sqt_spin && (ret > 0 || !wq_list_empty(&ctx->iopoll_list)))
+- sqt_spin = true;
+- }
+- if (io_run_task_work())
+- sqt_spin = true;
+-
+- if (sqt_spin || !time_after(jiffies, timeout)) {
+- cond_resched();
+- if (sqt_spin)
+- timeout = jiffies + sqd->sq_thread_idle;
+- continue;
+- }
+-
+- prepare_to_wait(&sqd->wait, &wait, TASK_INTERRUPTIBLE);
+- if (!io_sqd_events_pending(sqd) && !task_work_pending(current)) {
+- bool needs_sched = true;
+-
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
+- atomic_or(IORING_SQ_NEED_WAKEUP,
+- &ctx->rings->sq_flags);
+- if ((ctx->flags & IORING_SETUP_IOPOLL) &&
+- !wq_list_empty(&ctx->iopoll_list)) {
+- needs_sched = false;
+- break;
+- }
+-
+- /*
+- * Ensure the store of the wakeup flag is not
+- * reordered with the load of the SQ tail
+- */
+- smp_mb__after_atomic();
+-
+- if (io_sqring_entries(ctx)) {
+- needs_sched = false;
+- break;
+- }
+- }
+-
+- if (needs_sched) {
+- mutex_unlock(&sqd->lock);
+- schedule();
+- mutex_lock(&sqd->lock);
+- }
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+- atomic_andnot(IORING_SQ_NEED_WAKEUP,
+- &ctx->rings->sq_flags);
+- }
+-
+- finish_wait(&sqd->wait, &wait);
+- timeout = jiffies + sqd->sq_thread_idle;
+- }
+-
+- io_uring_cancel_generic(true, sqd);
+- sqd->thread = NULL;
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+- atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
+- io_run_task_work();
+- mutex_unlock(&sqd->lock);
+-
+- audit_free(current);
+-
+- complete(&sqd->exited);
+- do_exit(0);
+-}
+-
+-struct io_wait_queue {
+- struct wait_queue_entry wq;
+- struct io_ring_ctx *ctx;
+- unsigned cq_tail;
+- unsigned nr_timeouts;
+-};
+-
+-static inline bool io_should_wake(struct io_wait_queue *iowq)
+-{
+- struct io_ring_ctx *ctx = iowq->ctx;
+- int dist = ctx->cached_cq_tail - (int) iowq->cq_tail;
+-
+- /*
+- * Wake up if we have enough events, or if a timeout occurred since we
+- * started waiting. For timeouts, we always want to return to userspace,
+- * regardless of event count.
+- */
+- return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
+-}
+-
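The signed-difference test in io_should_wake() stays correct even when
the unsigned tail counter wraps. A minimal sketch of the idiom, which
assumes the usual two's-complement conversion from unsigned to int
(reached() is a hypothetical name):

#include <assert.h>
#include <limits.h>

static int reached(unsigned tail, unsigned target)
{
	/* A negative difference means the tail hasn't caught up yet. */
	return (int)(tail - target) >= 0;
}

int main(void)
{
	assert(reached(10, 10));
	assert(reached(11, 10));
	assert(!reached(9, 10));
	/* Still correct across the 32-bit wrap: tail just passed target. */
	assert(reached(3, UINT_MAX - 1));
	return 0;
}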
+-static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
+- int wake_flags, void *key)
+-{
+- struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
+- wq);
+-
+- /*
+- * Cannot safely flush overflowed CQEs from here, ensure we wake up
+- * the task, and the next invocation will do it.
+- */
+- if (io_should_wake(iowq) ||
+- test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &iowq->ctx->check_cq))
+- return autoremove_wake_function(curr, mode, wake_flags, key);
+- return -1;
+-}
+-
+-static int io_run_task_work_sig(void)
+-{
+- if (io_run_task_work())
+- return 1;
+- if (test_thread_flag(TIF_NOTIFY_SIGNAL))
+- return -ERESTARTSYS;
+- if (task_sigpending(current))
+- return -EINTR;
+- return 0;
+-}
+-
+-/* when returns >0, the caller should retry */
+-static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+- struct io_wait_queue *iowq,
+- ktime_t timeout)
+-{
+- int ret;
+- unsigned long check_cq;
+-
+- /* make sure we run task_work before checking for signals */
+- ret = io_run_task_work_sig();
+- if (ret || io_should_wake(iowq))
+- return ret;
+- check_cq = READ_ONCE(ctx->check_cq);
+- /* let the caller flush overflows, retry */
+- if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
+- return 1;
+- if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
+- return -EBADR;
+- if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
+- return -ETIME;
+- return 1;
+-}
+-
+-/*
+- * Wait until events become available, if we don't already have some. The
+- * application must reap them itself, as they reside on the shared cq ring.
+- */
+-static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+- const sigset_t __user *sig, size_t sigsz,
+- struct __kernel_timespec __user *uts)
+-{
+- struct io_wait_queue iowq;
+- struct io_rings *rings = ctx->rings;
+- ktime_t timeout = KTIME_MAX;
+- int ret;
+-
+- do {
+- io_cqring_overflow_flush(ctx);
+- if (io_cqring_events(ctx) >= min_events)
+- return 0;
+- if (!io_run_task_work())
+- break;
+- } while (1);
+-
+- if (sig) {
+-#ifdef CONFIG_COMPAT
+- if (in_compat_syscall())
+- ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
+- sigsz);
+- else
+-#endif
+- ret = set_user_sigmask(sig, sigsz);
+-
+- if (ret)
+- return ret;
+- }
+-
+- if (uts) {
+- struct timespec64 ts;
+-
+- if (get_timespec64(&ts, uts))
+- return -EFAULT;
+- timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
+- }
+-
+- init_waitqueue_func_entry(&iowq.wq, io_wake_function);
+- iowq.wq.private = current;
+- INIT_LIST_HEAD(&iowq.wq.entry);
+- iowq.ctx = ctx;
+- iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
+- iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
+-
+- trace_io_uring_cqring_wait(ctx, min_events);
+- do {
+- /* if we can't even flush overflow, don't wait for more */
+- if (!io_cqring_overflow_flush(ctx)) {
+- ret = -EBUSY;
+- break;
+- }
+- prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
+- TASK_INTERRUPTIBLE);
+- ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
+- cond_resched();
+- } while (ret > 0);
+-
+- finish_wait(&ctx->cq_wait, &iowq.wq);
+- restore_saved_sigmask_unless(ret == -EINTR);
+-
+- return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
+-}
+-
+-static void io_free_page_table(void **table, size_t size)
+-{
+- unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
+-
+- for (i = 0; i < nr_tables; i++)
+- kfree(table[i]);
+- kfree(table);
+-}
+-
+-static __cold void **io_alloc_page_table(size_t size)
+-{
+- unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
+- size_t init_size = size;
+- void **table;
+-
+- table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
+- if (!table)
+- return NULL;
+-
+- for (i = 0; i < nr_tables; i++) {
+- unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
+-
+- table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
+- if (!table[i]) {
+- io_free_page_table(table, init_size);
+- return NULL;
+- }
+- size -= this_size;
+- }
+- return table;
+-}
+-
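io_alloc_page_table() splits one logical allocation into page-sized
chunks behind a pointer array, with the last chunk possibly short. The
same shape in plain userspace C (CHUNK stands in for PAGE_SIZE;
alloc_table() is a hypothetical name):

#include <stdlib.h>

#define CHUNK 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static void **alloc_table(size_t size)
{
	size_t nr = DIV_ROUND_UP(size, CHUNK);
	void **table = calloc(nr, sizeof(*table));

	if (!table)
		return NULL;
	for (size_t i = 0; i < nr; i++) {
		size_t this_size = size < CHUNK ? size : CHUNK;

		table[i] = calloc(1, this_size);
		if (!table[i]) {
			/* Unwind the chunks allocated so far. */
			while (i--)
				free(table[i]);
			free(table);
			return NULL;
		}
		size -= this_size;
	}
	return table;
}

int main(void)
{
	void **t = alloc_table(10000); /* 3 chunks: 4096 + 4096 + 1808 */

	if (!t)
		return 1;
	for (size_t i = 0; i < DIV_ROUND_UP(10000, CHUNK); i++)
		free(t[i]);
	free(t);
	return 0;
}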
+-static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
+-{
+- percpu_ref_exit(&ref_node->refs);
+- kfree(ref_node);
+-}
+-
+-static __cold void io_rsrc_node_ref_zero(struct percpu_ref *ref)
+-{
+- struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
+- struct io_ring_ctx *ctx = node->rsrc_data->ctx;
+- unsigned long flags;
+- bool first_add = false;
+- unsigned long delay = HZ;
+-
+- spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
+- node->done = true;
+-
+- /* if we are mid-quiesce then do not delay */
+- if (node->rsrc_data->quiesce)
+- delay = 0;
+-
+- while (!list_empty(&ctx->rsrc_ref_list)) {
+- node = list_first_entry(&ctx->rsrc_ref_list,
+- struct io_rsrc_node, node);
+- /* recycle ref nodes in order */
+- if (!node->done)
+- break;
+- list_del(&node->node);
+- first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
+- }
+- spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
+-
+- if (first_add)
+- mod_delayed_work(system_wq, &ctx->rsrc_put_work, delay);
+-}
+-
+-static struct io_rsrc_node *io_rsrc_node_alloc(void)
+-{
+- struct io_rsrc_node *ref_node;
+-
+- ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
+- if (!ref_node)
+- return NULL;
+-
+- if (percpu_ref_init(&ref_node->refs, io_rsrc_node_ref_zero,
+- 0, GFP_KERNEL)) {
+- kfree(ref_node);
+- return NULL;
+- }
+- INIT_LIST_HEAD(&ref_node->node);
+- INIT_LIST_HEAD(&ref_node->rsrc_list);
+- ref_node->done = false;
+- return ref_node;
+-}
+-
+-static void io_rsrc_node_switch(struct io_ring_ctx *ctx,
+- struct io_rsrc_data *data_to_kill)
+- __must_hold(&ctx->uring_lock)
+-{
+- WARN_ON_ONCE(!ctx->rsrc_backup_node);
+- WARN_ON_ONCE(data_to_kill && !ctx->rsrc_node);
+-
+- io_rsrc_refs_drop(ctx);
+-
+- if (data_to_kill) {
+- struct io_rsrc_node *rsrc_node = ctx->rsrc_node;
+-
+- rsrc_node->rsrc_data = data_to_kill;
+- spin_lock_irq(&ctx->rsrc_ref_lock);
+- list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
+- spin_unlock_irq(&ctx->rsrc_ref_lock);
+-
+- atomic_inc(&data_to_kill->refs);
+- percpu_ref_kill(&rsrc_node->refs);
+- ctx->rsrc_node = NULL;
+- }
+-
+- if (!ctx->rsrc_node) {
+- ctx->rsrc_node = ctx->rsrc_backup_node;
+- ctx->rsrc_backup_node = NULL;
+- }
+-}
+-
+-static int io_rsrc_node_switch_start(struct io_ring_ctx *ctx)
+-{
+- if (ctx->rsrc_backup_node)
+- return 0;
+- ctx->rsrc_backup_node = io_rsrc_node_alloc();
+- return ctx->rsrc_backup_node ? 0 : -ENOMEM;
+-}
+-
+-static __cold int io_rsrc_ref_quiesce(struct io_rsrc_data *data,
+- struct io_ring_ctx *ctx)
+-{
+- int ret;
+-
+-	/* As we may drop ->uring_lock, another task may have started a quiesce */
+- if (data->quiesce)
+- return -ENXIO;
+-
+- data->quiesce = true;
+- do {
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- break;
+- io_rsrc_node_switch(ctx, data);
+-
+- /* kill initial ref, already quiesced if zero */
+- if (atomic_dec_and_test(&data->refs))
+- break;
+- mutex_unlock(&ctx->uring_lock);
+- flush_delayed_work(&ctx->rsrc_put_work);
+- ret = wait_for_completion_interruptible(&data->done);
+- if (!ret) {
+- mutex_lock(&ctx->uring_lock);
+- if (atomic_read(&data->refs) > 0) {
+- /*
+- * it has been revived by another thread while
+- * we were unlocked
+- */
+- mutex_unlock(&ctx->uring_lock);
+- } else {
+- break;
+- }
+- }
+-
+- atomic_inc(&data->refs);
+- /* wait for all works potentially completing data->done */
+- flush_delayed_work(&ctx->rsrc_put_work);
+- reinit_completion(&data->done);
+-
+- ret = io_run_task_work_sig();
+- mutex_lock(&ctx->uring_lock);
+- } while (ret >= 0);
+- data->quiesce = false;
+-
+- return ret;
+-}
+-
+-static u64 *io_get_tag_slot(struct io_rsrc_data *data, unsigned int idx)
+-{
+- unsigned int off = idx & IO_RSRC_TAG_TABLE_MASK;
+- unsigned int table_idx = idx >> IO_RSRC_TAG_TABLE_SHIFT;
+-
+- return &data->tags[table_idx][off];
+-}
+-
+-static void io_rsrc_data_free(struct io_rsrc_data *data)
+-{
+- size_t size = data->nr * sizeof(data->tags[0][0]);
+-
+- if (data->tags)
+- io_free_page_table((void **)data->tags, size);
+- kfree(data);
+-}
+-
+-static __cold int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
+- u64 __user *utags, unsigned nr,
+- struct io_rsrc_data **pdata)
+-{
+- struct io_rsrc_data *data;
+- int ret = -ENOMEM;
+- unsigned i;
+-
+- data = kzalloc(sizeof(*data), GFP_KERNEL);
+- if (!data)
+- return -ENOMEM;
+- data->tags = (u64 **)io_alloc_page_table(nr * sizeof(data->tags[0][0]));
+- if (!data->tags) {
+- kfree(data);
+- return -ENOMEM;
+- }
+-
+- data->nr = nr;
+- data->ctx = ctx;
+- data->do_put = do_put;
+- if (utags) {
+- ret = -EFAULT;
+- for (i = 0; i < nr; i++) {
+- u64 *tag_slot = io_get_tag_slot(data, i);
+-
+- if (copy_from_user(tag_slot, &utags[i],
+- sizeof(*tag_slot)))
+- goto fail;
+- }
+- }
+-
+- atomic_set(&data->refs, 1);
+- init_completion(&data->done);
+- *pdata = data;
+- return 0;
+-fail:
+- io_rsrc_data_free(data);
+- return ret;
+-}
+-
+-static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
+-{
+- table->files = kvcalloc(nr_files, sizeof(table->files[0]),
+- GFP_KERNEL_ACCOUNT);
+- if (unlikely(!table->files))
+- return false;
+-
+- table->bitmap = bitmap_zalloc(nr_files, GFP_KERNEL_ACCOUNT);
+- if (unlikely(!table->bitmap)) {
+- kvfree(table->files);
+- return false;
+- }
+-
+- return true;
+-}
+-
+-static void io_free_file_tables(struct io_file_table *table)
+-{
+- kvfree(table->files);
+- bitmap_free(table->bitmap);
+- table->files = NULL;
+- table->bitmap = NULL;
+-}
+-
+-static inline void io_file_bitmap_set(struct io_file_table *table, int bit)
+-{
+- WARN_ON_ONCE(test_bit(bit, table->bitmap));
+- __set_bit(bit, table->bitmap);
+- table->alloc_hint = bit + 1;
+-}
+-
+-static inline void io_file_bitmap_clear(struct io_file_table *table, int bit)
+-{
+- __clear_bit(bit, table->bitmap);
+- table->alloc_hint = bit;
+-}
+-
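The two helpers above implement an allocation hint: setting a bit
pushes the hint past it, clearing a bit pulls the hint back so the
freed slot is handed out first. A toy model of that policy using a
bool array instead of a real bitmap (struct table, alloc_slot() and
free_slot() are invented names):

#include <stdio.h>
#include <stdbool.h>

#define SLOTS 16

struct table {
	bool bits[SLOTS];
	int hint;
};

static int alloc_slot(struct table *t)
{
	/* Scan from the hint, wrapping once around the table. */
	for (int n = 0; n < SLOTS; n++) {
		int bit = (t->hint + n) % SLOTS;

		if (!t->bits[bit]) {
			t->bits[bit] = true;
			t->hint = bit + 1;
			return bit;
		}
	}
	return -1; /* full */
}

static void free_slot(struct table *t, int bit)
{
	t->bits[bit] = false;
	t->hint = bit; /* reuse the freed slot on the next allocation */
}

int main(void)
{
	struct table t = { .hint = 0 };

	printf("%d\n", alloc_slot(&t)); /* 0 */
	printf("%d\n", alloc_slot(&t)); /* 1 */
	printf("%d\n", alloc_slot(&t)); /* 2 */
	free_slot(&t, 1);
	printf("%d\n", alloc_slot(&t)); /* 1: the freed slot comes back first */
	return 0;
}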
+-static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
+-{
+-#if !defined(IO_URING_SCM_ALL)
+- int i;
+-
+- for (i = 0; i < ctx->nr_user_files; i++) {
+- struct file *file = io_file_from_index(ctx, i);
+-
+- if (!file)
+- continue;
+- if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM)
+- continue;
+- io_file_bitmap_clear(&ctx->file_table, i);
+- fput(file);
+- }
+-#endif
+-
+-#if defined(CONFIG_UNIX)
+- if (ctx->ring_sock) {
+- struct sock *sock = ctx->ring_sock->sk;
+- struct sk_buff *skb;
+-
+- while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
+- kfree_skb(skb);
+- }
+-#endif
+- io_free_file_tables(&ctx->file_table);
+- io_rsrc_data_free(ctx->file_data);
+- ctx->file_data = NULL;
+- ctx->nr_user_files = 0;
+-}
+-
+-static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+-{
+- unsigned nr = ctx->nr_user_files;
+- int ret;
+-
+- if (!ctx->file_data)
+- return -ENXIO;
+-
+- /*
+- * Quiesce may unlock ->uring_lock, and while it's not held
+- * prevent new requests using the table.
+-	 * prevent new requests from using the table.
+- ctx->nr_user_files = 0;
+- ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
+- ctx->nr_user_files = nr;
+- if (!ret)
+- __io_sqe_files_unregister(ctx);
+- return ret;
+-}
+-
+-static void io_sq_thread_unpark(struct io_sq_data *sqd)
+- __releases(&sqd->lock)
+-{
+- WARN_ON_ONCE(sqd->thread == current);
+-
+- /*
+-	 * Do the dance, but don't use a conditional clear_bit() because it'd
+-	 * race with other threads incrementing park_pending and setting the bit.
+- */
+- clear_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+- if (atomic_dec_return(&sqd->park_pending))
+- set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+- mutex_unlock(&sqd->lock);
+-}
+-
+-static void io_sq_thread_park(struct io_sq_data *sqd)
+- __acquires(&sqd->lock)
+-{
+- WARN_ON_ONCE(sqd->thread == current);
+-
+- atomic_inc(&sqd->park_pending);
+- set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+- mutex_lock(&sqd->lock);
+- if (sqd->thread)
+- wake_up_process(sqd->thread);
+-}
+-
+-static void io_sq_thread_stop(struct io_sq_data *sqd)
+-{
+- WARN_ON_ONCE(sqd->thread == current);
+- WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
+-
+- set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+- mutex_lock(&sqd->lock);
+- if (sqd->thread)
+- wake_up_process(sqd->thread);
+- mutex_unlock(&sqd->lock);
+- wait_for_completion(&sqd->exited);
+-}
+-
+-static void io_put_sq_data(struct io_sq_data *sqd)
+-{
+- if (refcount_dec_and_test(&sqd->refs)) {
+- WARN_ON_ONCE(atomic_read(&sqd->park_pending));
+-
+- io_sq_thread_stop(sqd);
+- kfree(sqd);
+- }
+-}
+-
+-static void io_sq_thread_finish(struct io_ring_ctx *ctx)
+-{
+- struct io_sq_data *sqd = ctx->sq_data;
+-
+- if (sqd) {
+- io_sq_thread_park(sqd);
+- list_del_init(&ctx->sqd_list);
+- io_sqd_update_thread_idle(sqd);
+- io_sq_thread_unpark(sqd);
+-
+- io_put_sq_data(sqd);
+- ctx->sq_data = NULL;
+- }
+-}
+-
+-static struct io_sq_data *io_attach_sq_data(struct io_uring_params *p)
+-{
+- struct io_ring_ctx *ctx_attach;
+- struct io_sq_data *sqd;
+- struct fd f;
+-
+- f = fdget(p->wq_fd);
+- if (!f.file)
+- return ERR_PTR(-ENXIO);
+- if (f.file->f_op != &io_uring_fops) {
+- fdput(f);
+- return ERR_PTR(-EINVAL);
+- }
+-
+- ctx_attach = f.file->private_data;
+- sqd = ctx_attach->sq_data;
+- if (!sqd) {
+- fdput(f);
+- return ERR_PTR(-EINVAL);
+- }
+- if (sqd->task_tgid != current->tgid) {
+- fdput(f);
+- return ERR_PTR(-EPERM);
+- }
+-
+- refcount_inc(&sqd->refs);
+- fdput(f);
+- return sqd;
+-}
+-
+-static struct io_sq_data *io_get_sq_data(struct io_uring_params *p,
+- bool *attached)
+-{
+- struct io_sq_data *sqd;
+-
+- *attached = false;
+- if (p->flags & IORING_SETUP_ATTACH_WQ) {
+- sqd = io_attach_sq_data(p);
+- if (!IS_ERR(sqd)) {
+- *attached = true;
+- return sqd;
+- }
+- /* fall through for EPERM case, setup new sqd/task */
+- if (PTR_ERR(sqd) != -EPERM)
+- return sqd;
+- }
+-
+- sqd = kzalloc(sizeof(*sqd), GFP_KERNEL);
+- if (!sqd)
+- return ERR_PTR(-ENOMEM);
+-
+- atomic_set(&sqd->park_pending, 0);
+- refcount_set(&sqd->refs, 1);
+- INIT_LIST_HEAD(&sqd->ctx_list);
+- mutex_init(&sqd->lock);
+- init_waitqueue_head(&sqd->wait);
+- init_completion(&sqd->exited);
+- return sqd;
+-}
+-
+-/*
+- * Ensure the UNIX gc is aware of our file set, so we are certain that
+- * the io_uring can be safely unregistered on process exit, even if we have
+- * loops in the file referencing. We account only files that can hold other
+- * files because otherwise they can't form a loop and so are not interesting
+- * for GC.
+- */
+-static int io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+- struct sock *sk = ctx->ring_sock->sk;
+- struct sk_buff_head *head = &sk->sk_receive_queue;
+- struct scm_fp_list *fpl;
+- struct sk_buff *skb;
+-
+- if (likely(!io_file_need_scm(file)))
+- return 0;
+-
+- /*
+- * See if we can merge this file into an existing skb SCM_RIGHTS
+- * file set. If there's no room, fall back to allocating a new skb
+- * and filling it in.
+- */
+- spin_lock_irq(&head->lock);
+- skb = skb_peek(head);
+- if (skb && UNIXCB(skb).fp->count < SCM_MAX_FD)
+- __skb_unlink(skb, head);
+- else
+- skb = NULL;
+- spin_unlock_irq(&head->lock);
+-
+- if (!skb) {
+- fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
+- if (!fpl)
+- return -ENOMEM;
+-
+- skb = alloc_skb(0, GFP_KERNEL);
+- if (!skb) {
+- kfree(fpl);
+- return -ENOMEM;
+- }
+-
+- fpl->user = get_uid(current_user());
+- fpl->max = SCM_MAX_FD;
+- fpl->count = 0;
+-
+- UNIXCB(skb).fp = fpl;
+- skb->sk = sk;
+- skb->destructor = unix_destruct_scm;
+- refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+- }
+-
+- fpl = UNIXCB(skb).fp;
+- fpl->fp[fpl->count++] = get_file(file);
+- unix_inflight(fpl->user, file);
+- skb_queue_head(head, skb);
+- fput(file);
+-#endif
+- return 0;
+-}
+-
+-static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
+-{
+- struct file *file = prsrc->file;
+-#if defined(CONFIG_UNIX)
+- struct sock *sock = ctx->ring_sock->sk;
+- struct sk_buff_head list, *head = &sock->sk_receive_queue;
+- struct sk_buff *skb;
+- int i;
+-
+- if (!io_file_need_scm(file)) {
+- fput(file);
+- return;
+- }
+-
+- __skb_queue_head_init(&list);
+-
+- /*
+- * Find the skb that holds this file in its SCM_RIGHTS. When found,
+- * remove this entry and rearrange the file array.
+- */
+- skb = skb_dequeue(head);
+- while (skb) {
+- struct scm_fp_list *fp;
+-
+- fp = UNIXCB(skb).fp;
+- for (i = 0; i < fp->count; i++) {
+- int left;
+-
+- if (fp->fp[i] != file)
+- continue;
+-
+- unix_notinflight(fp->user, fp->fp[i]);
+- left = fp->count - 1 - i;
+- if (left) {
+- memmove(&fp->fp[i], &fp->fp[i + 1],
+- left * sizeof(struct file *));
+- }
+- fp->count--;
+- if (!fp->count) {
+- kfree_skb(skb);
+- skb = NULL;
+- } else {
+- __skb_queue_tail(&list, skb);
+- }
+- fput(file);
+- file = NULL;
+- break;
+- }
+-
+- if (!file)
+- break;
+-
+- __skb_queue_tail(&list, skb);
+-
+- skb = skb_dequeue(head);
+- }
+-
+- if (skb_peek(&list)) {
+- spin_lock_irq(&head->lock);
+- while ((skb = __skb_dequeue(&list)) != NULL)
+- __skb_queue_tail(head, skb);
+- spin_unlock_irq(&head->lock);
+- }
+-#else
+- fput(file);
+-#endif
+-}
+-
+-static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
+-{
+- struct io_rsrc_data *rsrc_data = ref_node->rsrc_data;
+- struct io_ring_ctx *ctx = rsrc_data->ctx;
+- struct io_rsrc_put *prsrc, *tmp;
+-
+- list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
+- list_del(&prsrc->list);
+-
+- if (prsrc->tag) {
+- if (ctx->flags & IORING_SETUP_IOPOLL)
+- mutex_lock(&ctx->uring_lock);
+-
+- spin_lock(&ctx->completion_lock);
+- io_fill_cqe_aux(ctx, prsrc->tag, 0, 0);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- io_cqring_ev_posted(ctx);
+-
+- if (ctx->flags & IORING_SETUP_IOPOLL)
+- mutex_unlock(&ctx->uring_lock);
+- }
+-
+- rsrc_data->do_put(ctx, prsrc);
+- kfree(prsrc);
+- }
+-
+- io_rsrc_node_destroy(ref_node);
+- if (atomic_dec_and_test(&rsrc_data->refs))
+- complete(&rsrc_data->done);
+-}
+-
+-static void io_rsrc_put_work(struct work_struct *work)
+-{
+- struct io_ring_ctx *ctx;
+- struct llist_node *node;
+-
+- ctx = container_of(work, struct io_ring_ctx, rsrc_put_work.work);
+- node = llist_del_all(&ctx->rsrc_put_llist);
+-
+- while (node) {
+- struct io_rsrc_node *ref_node;
+- struct llist_node *next = node->next;
+-
+- ref_node = llist_entry(node, struct io_rsrc_node, llist);
+- __io_rsrc_put_work(ref_node);
+- node = next;
+- }
+-}
+-
+-static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned nr_args, u64 __user *tags)
+-{
+- __s32 __user *fds = (__s32 __user *) arg;
+- struct file *file;
+- int fd, ret;
+- unsigned i;
+-
+- if (ctx->file_data)
+- return -EBUSY;
+- if (!nr_args)
+- return -EINVAL;
+- if (nr_args > IORING_MAX_FIXED_FILES)
+- return -EMFILE;
+- if (nr_args > rlimit(RLIMIT_NOFILE))
+- return -EMFILE;
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- return ret;
+- ret = io_rsrc_data_alloc(ctx, io_rsrc_file_put, tags, nr_args,
+- &ctx->file_data);
+- if (ret)
+- return ret;
+-
+- if (!io_alloc_file_tables(&ctx->file_table, nr_args)) {
+- io_rsrc_data_free(ctx->file_data);
+- ctx->file_data = NULL;
+- return -ENOMEM;
+- }
+-
+- for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
+- struct io_fixed_file *file_slot;
+-
+- if (fds && copy_from_user(&fd, &fds[i], sizeof(fd))) {
+- ret = -EFAULT;
+- goto fail;
+- }
+- /* allow sparse sets */
+- if (!fds || fd == -1) {
+- ret = -EINVAL;
+- if (unlikely(*io_get_tag_slot(ctx->file_data, i)))
+- goto fail;
+- continue;
+- }
+-
+- file = fget(fd);
+- ret = -EBADF;
+- if (unlikely(!file))
+- goto fail;
+-
+- /*
+- * Don't allow io_uring instances to be registered. If UNIX
+- * isn't enabled, then this causes a reference cycle and this
+- * instance can never get freed. If UNIX is enabled we'll
+- * handle it just fine, but there's still no point in allowing
+- * a ring fd as it doesn't support regular read/write anyway.
+- */
+- if (file->f_op == &io_uring_fops) {
+- fput(file);
+- goto fail;
+- }
+- ret = io_scm_file_account(ctx, file);
+- if (ret) {
+- fput(file);
+- goto fail;
+- }
+- file_slot = io_fixed_file_slot(&ctx->file_table, i);
+- io_fixed_file_set(file_slot, file);
+- io_file_bitmap_set(&ctx->file_table, i);
+- }
+-
+- io_rsrc_node_switch(ctx, NULL);
+- return 0;
+-fail:
+- __io_sqe_files_unregister(ctx);
+- return ret;
+-}
+-
+-static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
+- struct io_rsrc_node *node, void *rsrc)
+-{
+- u64 *tag_slot = io_get_tag_slot(data, idx);
+- struct io_rsrc_put *prsrc;
+-
+- prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
+- if (!prsrc)
+- return -ENOMEM;
+-
+- prsrc->tag = *tag_slot;
+- *tag_slot = 0;
+- prsrc->rsrc = rsrc;
+- list_add(&prsrc->list, &node->rsrc_list);
+- return 0;
+-}
+-
+-static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
+- unsigned int issue_flags, u32 slot_index)
+- __must_hold(&req->ctx->uring_lock)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- bool needs_switch = false;
+- struct io_fixed_file *file_slot;
+- int ret;
+-
+- if (file->f_op == &io_uring_fops)
+- return -EBADF;
+- if (!ctx->file_data)
+- return -ENXIO;
+- if (slot_index >= ctx->nr_user_files)
+- return -EINVAL;
+-
+- slot_index = array_index_nospec(slot_index, ctx->nr_user_files);
+- file_slot = io_fixed_file_slot(&ctx->file_table, slot_index);
+-
+- if (file_slot->file_ptr) {
+- struct file *old_file;
+-
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- goto err;
+-
+- old_file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+- ret = io_queue_rsrc_removal(ctx->file_data, slot_index,
+- ctx->rsrc_node, old_file);
+- if (ret)
+- goto err;
+- file_slot->file_ptr = 0;
+- io_file_bitmap_clear(&ctx->file_table, slot_index);
+- needs_switch = true;
+- }
+-
+- ret = io_scm_file_account(ctx, file);
+- if (!ret) {
+- *io_get_tag_slot(ctx->file_data, slot_index) = 0;
+- io_fixed_file_set(file_slot, file);
+- io_file_bitmap_set(&ctx->file_table, slot_index);
+- }
+-err:
+- if (needs_switch)
+- io_rsrc_node_switch(ctx, ctx->file_data);
+- if (ret)
+- fput(file);
+- return ret;
+-}
+-
+-static int __io_close_fixed(struct io_kiocb *req, unsigned int issue_flags,
+- unsigned int offset)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_fixed_file *file_slot;
+- struct file *file;
+- int ret;
+-
+- io_ring_submit_lock(ctx, issue_flags);
+- ret = -ENXIO;
+- if (unlikely(!ctx->file_data))
+- goto out;
+- ret = -EINVAL;
+- if (offset >= ctx->nr_user_files)
+- goto out;
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- goto out;
+-
+- offset = array_index_nospec(offset, ctx->nr_user_files);
+- file_slot = io_fixed_file_slot(&ctx->file_table, offset);
+- ret = -EBADF;
+- if (!file_slot->file_ptr)
+- goto out;
+-
+- file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+- ret = io_queue_rsrc_removal(ctx->file_data, offset, ctx->rsrc_node, file);
+- if (ret)
+- goto out;
+-
+- file_slot->file_ptr = 0;
+- io_file_bitmap_clear(&ctx->file_table, offset);
+- io_rsrc_node_switch(ctx, ctx->file_data);
+- ret = 0;
+-out:
+- io_ring_submit_unlock(ctx, issue_flags);
+- return ret;
+-}
+-
+-static inline int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
+-{
+- return __io_close_fixed(req, issue_flags, req->close.file_slot - 1);
+-}
+-
+-static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+- struct io_uring_rsrc_update2 *up,
+- unsigned nr_args)
+-{
+- u64 __user *tags = u64_to_user_ptr(up->tags);
+- __s32 __user *fds = u64_to_user_ptr(up->data);
+- struct io_rsrc_data *data = ctx->file_data;
+- struct io_fixed_file *file_slot;
+- struct file *file;
+- int fd, i, err = 0;
+- unsigned int done;
+- bool needs_switch = false;
+-
+- if (!ctx->file_data)
+- return -ENXIO;
+- if (up->offset + nr_args > ctx->nr_user_files)
+- return -EINVAL;
+-
+- for (done = 0; done < nr_args; done++) {
+- u64 tag = 0;
+-
+- if ((tags && copy_from_user(&tag, &tags[done], sizeof(tag))) ||
+- copy_from_user(&fd, &fds[done], sizeof(fd))) {
+- err = -EFAULT;
+- break;
+- }
+- if ((fd == IORING_REGISTER_FILES_SKIP || fd == -1) && tag) {
+- err = -EINVAL;
+- break;
+- }
+- if (fd == IORING_REGISTER_FILES_SKIP)
+- continue;
+-
+- i = array_index_nospec(up->offset + done, ctx->nr_user_files);
+- file_slot = io_fixed_file_slot(&ctx->file_table, i);
+-
+- if (file_slot->file_ptr) {
+- file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+- err = io_queue_rsrc_removal(data, i, ctx->rsrc_node, file);
+- if (err)
+- break;
+- file_slot->file_ptr = 0;
+- io_file_bitmap_clear(&ctx->file_table, i);
+- needs_switch = true;
+- }
+- if (fd != -1) {
+- file = fget(fd);
+- if (!file) {
+- err = -EBADF;
+- break;
+- }
+- /*
+- * Don't allow io_uring instances to be registered. If
+- * UNIX isn't enabled, then this causes a reference
+- * cycle and this instance can never get freed. If UNIX
+- * is enabled we'll handle it just fine, but there's
+- * still no point in allowing a ring fd as it doesn't
+- * support regular read/write anyway.
+- */
+- if (file->f_op == &io_uring_fops) {
+- fput(file);
+- err = -EBADF;
+- break;
+- }
+- err = io_scm_file_account(ctx, file);
+- if (err) {
+- fput(file);
+- break;
+- }
+- *io_get_tag_slot(data, i) = tag;
+- io_fixed_file_set(file_slot, file);
+- io_file_bitmap_set(&ctx->file_table, i);
+- }
+- }
+-
+- if (needs_switch)
+- io_rsrc_node_switch(ctx, data);
+- return done ? done : err;
+-}
+-
+-static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
+- struct task_struct *task)
+-{
+- struct io_wq_hash *hash;
+- struct io_wq_data data;
+- unsigned int concurrency;
+-
+- mutex_lock(&ctx->uring_lock);
+- hash = ctx->hash_map;
+- if (!hash) {
+- hash = kzalloc(sizeof(*hash), GFP_KERNEL);
+- if (!hash) {
+- mutex_unlock(&ctx->uring_lock);
+- return ERR_PTR(-ENOMEM);
+- }
+- refcount_set(&hash->refs, 1);
+- init_waitqueue_head(&hash->wait);
+- ctx->hash_map = hash;
+- }
+- mutex_unlock(&ctx->uring_lock);
+-
+- data.hash = hash;
+- data.task = task;
+- data.free_work = io_wq_free_work;
+- data.do_work = io_wq_submit_work;
+-
+- /* Do QD, or 4 * CPUS, whatever is smallest */
+- concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
+-
+- return io_wq_create(concurrency, &data);
+-}
+-
+-static __cold int io_uring_alloc_task_context(struct task_struct *task,
+- struct io_ring_ctx *ctx)
+-{
+- struct io_uring_task *tctx;
+- int ret;
+-
+- tctx = kzalloc(sizeof(*tctx), GFP_KERNEL);
+- if (unlikely(!tctx))
+- return -ENOMEM;
+-
+- tctx->registered_rings = kcalloc(IO_RINGFD_REG_MAX,
+- sizeof(struct file *), GFP_KERNEL);
+- if (unlikely(!tctx->registered_rings)) {
+- kfree(tctx);
+- return -ENOMEM;
+- }
+-
+- ret = percpu_counter_init(&tctx->inflight, 0, GFP_KERNEL);
+- if (unlikely(ret)) {
+- kfree(tctx->registered_rings);
+- kfree(tctx);
+- return ret;
+- }
+-
+- tctx->io_wq = io_init_wq_offload(ctx, task);
+- if (IS_ERR(tctx->io_wq)) {
+- ret = PTR_ERR(tctx->io_wq);
+- percpu_counter_destroy(&tctx->inflight);
+- kfree(tctx->registered_rings);
+- kfree(tctx);
+- return ret;
+- }
+-
+- xa_init(&tctx->xa);
+- init_waitqueue_head(&tctx->wait);
+- atomic_set(&tctx->in_idle, 0);
+- atomic_set(&tctx->inflight_tracked, 0);
+- task->io_uring = tctx;
+- spin_lock_init(&tctx->task_lock);
+- INIT_WQ_LIST(&tctx->task_list);
+- INIT_WQ_LIST(&tctx->prio_task_list);
+- init_task_work(&tctx->task_work, tctx_task_work);
+- return 0;
+-}
+-
+-void __io_uring_free(struct task_struct *tsk)
+-{
+- struct io_uring_task *tctx = tsk->io_uring;
+-
+- WARN_ON_ONCE(!xa_empty(&tctx->xa));
+- WARN_ON_ONCE(tctx->io_wq);
+- WARN_ON_ONCE(tctx->cached_refs);
+-
+- kfree(tctx->registered_rings);
+- percpu_counter_destroy(&tctx->inflight);
+- kfree(tctx);
+- tsk->io_uring = NULL;
+-}
+-
+-static __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+- struct io_uring_params *p)
+-{
+- int ret;
+-
+- /* Retain compatibility with failing for an invalid attach attempt */
+- if ((ctx->flags & (IORING_SETUP_ATTACH_WQ | IORING_SETUP_SQPOLL)) ==
+- IORING_SETUP_ATTACH_WQ) {
+- struct fd f;
+-
+- f = fdget(p->wq_fd);
+- if (!f.file)
+- return -ENXIO;
+- if (f.file->f_op != &io_uring_fops) {
+- fdput(f);
+- return -EINVAL;
+- }
+- fdput(f);
+- }
+- if (ctx->flags & IORING_SETUP_SQPOLL) {
+- struct task_struct *tsk;
+- struct io_sq_data *sqd;
+- bool attached;
+-
+- ret = security_uring_sqpoll();
+- if (ret)
+- return ret;
+-
+- sqd = io_get_sq_data(p, &attached);
+- if (IS_ERR(sqd)) {
+- ret = PTR_ERR(sqd);
+- goto err;
+- }
+-
+- ctx->sq_creds = get_current_cred();
+- ctx->sq_data = sqd;
+- ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
+- if (!ctx->sq_thread_idle)
+- ctx->sq_thread_idle = HZ;
+-
+- io_sq_thread_park(sqd);
+- list_add(&ctx->sqd_list, &sqd->ctx_list);
+- io_sqd_update_thread_idle(sqd);
+- /* don't attach to a dying SQPOLL thread, would be racy */
+- ret = (attached && !sqd->thread) ? -ENXIO : 0;
+- io_sq_thread_unpark(sqd);
+-
+- if (ret < 0)
+- goto err;
+- if (attached)
+- return 0;
+-
+- if (p->flags & IORING_SETUP_SQ_AFF) {
+- int cpu = p->sq_thread_cpu;
+-
+- ret = -EINVAL;
+- if (cpu >= nr_cpu_ids || !cpu_online(cpu))
+- goto err_sqpoll;
+- sqd->sq_cpu = cpu;
+- } else {
+- sqd->sq_cpu = -1;
+- }
+-
+- sqd->task_pid = current->pid;
+- sqd->task_tgid = current->tgid;
+- tsk = create_io_thread(io_sq_thread, sqd, NUMA_NO_NODE);
+- if (IS_ERR(tsk)) {
+- ret = PTR_ERR(tsk);
+- goto err_sqpoll;
+- }
+-
+- sqd->thread = tsk;
+- ret = io_uring_alloc_task_context(tsk, ctx);
+- wake_up_new_task(tsk);
+- if (ret)
+- goto err;
+- } else if (p->flags & IORING_SETUP_SQ_AFF) {
+- /* Can't have SQ_AFF without SQPOLL */
+- ret = -EINVAL;
+- goto err;
+- }
+-
+- return 0;
+-err_sqpoll:
+- complete(&ctx->sq_data->exited);
+-err:
+- io_sq_thread_finish(ctx);
+- return ret;
+-}
+-
+-static inline void __io_unaccount_mem(struct user_struct *user,
+- unsigned long nr_pages)
+-{
+- atomic_long_sub(nr_pages, &user->locked_vm);
+-}
+-
+-static inline int __io_account_mem(struct user_struct *user,
+- unsigned long nr_pages)
+-{
+- unsigned long page_limit, cur_pages, new_pages;
+-
+- /* Don't allow more pages than we can safely lock */
+- page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+-
+- do {
+- cur_pages = atomic_long_read(&user->locked_vm);
+- new_pages = cur_pages + nr_pages;
+- if (new_pages > page_limit)
+- return -ENOMEM;
+- } while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
+- new_pages) != cur_pages);
+-
+- return 0;
+-}
+-
+-static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+-{
+- if (ctx->user)
+- __io_unaccount_mem(ctx->user, nr_pages);
+-
+- if (ctx->mm_account)
+- atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
+-}
+-
+-static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+-{
+- int ret;
+-
+- if (ctx->user) {
+- ret = __io_account_mem(ctx->user, nr_pages);
+- if (ret)
+- return ret;
+- }
+-
+- if (ctx->mm_account)
+- atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
+-
+- return 0;
+-}
+-
+-static void io_mem_free(void *ptr)
+-{
+- struct page *page;
+-
+- if (!ptr)
+- return;
+-
+- page = virt_to_head_page(ptr);
+- if (put_page_testzero(page))
+- free_compound_page(page);
+-}
+-
+-static void *io_mem_alloc(size_t size)
+-{
+- gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
+-
+- return (void *) __get_free_pages(gfp, get_order(size));
+-}
+-
+-static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
+- unsigned int cq_entries, size_t *sq_offset)
+-{
+- struct io_rings *rings;
+- size_t off, sq_array_size;
+-
+- off = struct_size(rings, cqes, cq_entries);
+- if (off == SIZE_MAX)
+- return SIZE_MAX;
+- if (ctx->flags & IORING_SETUP_CQE32) {
+- if (check_shl_overflow(off, 1, &off))
+- return SIZE_MAX;
+- }
+-
+-#ifdef CONFIG_SMP
+- off = ALIGN(off, SMP_CACHE_BYTES);
+- if (off == 0)
+- return SIZE_MAX;
+-#endif
+-
+- if (sq_offset)
+- *sq_offset = off;
+-
+- sq_array_size = array_size(sizeof(u32), sq_entries);
+- if (sq_array_size == SIZE_MAX)
+- return SIZE_MAX;
+-
+- if (check_add_overflow(off, sq_array_size, &off))
+- return SIZE_MAX;
+-
+- return off;
+-}
+-
+-static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slot)
+-{
+- struct io_mapped_ubuf *imu = *slot;
+- unsigned int i;
+-
+- if (imu != ctx->dummy_ubuf) {
+- for (i = 0; i < imu->nr_bvecs; i++)
+- unpin_user_page(imu->bvec[i].bv_page);
+- if (imu->acct_pages)
+- io_unaccount_mem(ctx, imu->acct_pages);
+- kvfree(imu);
+- }
+- *slot = NULL;
+-}
+-
+-static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
+-{
+- io_buffer_unmap(ctx, &prsrc->buf);
+- prsrc->buf = NULL;
+-}
+-
+-static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
+-{
+- unsigned int i;
+-
+- for (i = 0; i < ctx->nr_user_bufs; i++)
+- io_buffer_unmap(ctx, &ctx->user_bufs[i]);
+- kfree(ctx->user_bufs);
+- io_rsrc_data_free(ctx->buf_data);
+- ctx->user_bufs = NULL;
+- ctx->buf_data = NULL;
+- ctx->nr_user_bufs = 0;
+-}
+-
+-static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
+-{
+- unsigned nr = ctx->nr_user_bufs;
+- int ret;
+-
+- if (!ctx->buf_data)
+- return -ENXIO;
+-
+- /*
+-	 * Quiesce may unlock ->uring_lock, and while it's not held,
+-	 * prevent new requests from using the table.
+- */
+- ctx->nr_user_bufs = 0;
+- ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
+- ctx->nr_user_bufs = nr;
+- if (!ret)
+- __io_sqe_buffers_unregister(ctx);
+- return ret;
+-}
+-
+-static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
+- void __user *arg, unsigned index)
+-{
+- struct iovec __user *src;
+-
+-#ifdef CONFIG_COMPAT
+- if (ctx->compat) {
+- struct compat_iovec __user *ciovs;
+- struct compat_iovec ciov;
+-
+- ciovs = (struct compat_iovec __user *) arg;
+- if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
+- return -EFAULT;
+-
+- dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
+- dst->iov_len = ciov.iov_len;
+- return 0;
+- }
+-#endif
+- src = (struct iovec __user *) arg;
+- if (copy_from_user(dst, &src[index], sizeof(*dst)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-/*
+- * Not super efficient, but this only happens at registration time. And we do cache
+- * the last compound head, so generally we'll only do a full search if we don't
+- * match that one.
+- *
+- * We check if the given compound head page has already been accounted, to
+- * avoid double accounting it. This allows us to account the full size of the
+- * page, not just the constituent pages of a huge page.
+- */
+-static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
+- int nr_pages, struct page *hpage)
+-{
+- int i, j;
+-
+- /* check current page array */
+- for (i = 0; i < nr_pages; i++) {
+- if (!PageCompound(pages[i]))
+- continue;
+- if (compound_head(pages[i]) == hpage)
+- return true;
+- }
+-
+- /* check previously registered pages */
+- for (i = 0; i < ctx->nr_user_bufs; i++) {
+- struct io_mapped_ubuf *imu = ctx->user_bufs[i];
+-
+- for (j = 0; j < imu->nr_bvecs; j++) {
+- if (!PageCompound(imu->bvec[j].bv_page))
+- continue;
+- if (compound_head(imu->bvec[j].bv_page) == hpage)
+- return true;
+- }
+- }
+-
+- return false;
+-}
+-
+-static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
+- int nr_pages, struct io_mapped_ubuf *imu,
+- struct page **last_hpage)
+-{
+- int i, ret;
+-
+- imu->acct_pages = 0;
+- for (i = 0; i < nr_pages; i++) {
+- if (!PageCompound(pages[i])) {
+- imu->acct_pages++;
+- } else {
+- struct page *hpage;
+-
+- hpage = compound_head(pages[i]);
+- if (hpage == *last_hpage)
+- continue;
+- *last_hpage = hpage;
+- if (headpage_already_acct(ctx, pages, i, hpage))
+- continue;
+- imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
+- }
+- }
+-
+- if (!imu->acct_pages)
+- return 0;
+-
+- ret = io_account_mem(ctx, imu->acct_pages);
+- if (ret)
+- imu->acct_pages = 0;
+- return ret;
+-}
+-
+-static struct page **io_pin_pages(unsigned long ubuf, unsigned long len,
+- int *npages)
+-{
+- unsigned long start, end, nr_pages;
+- struct vm_area_struct **vmas = NULL;
+- struct page **pages = NULL;
+- int i, pret, ret = -ENOMEM;
+-
+- end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+- start = ubuf >> PAGE_SHIFT;
+- nr_pages = end - start;
+-
+- pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
+- if (!pages)
+- goto done;
+-
+- vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
+- GFP_KERNEL);
+- if (!vmas)
+- goto done;
+-
+- ret = 0;
+- mmap_read_lock(current->mm);
+- pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
+- pages, vmas);
+- if (pret == nr_pages) {
+- /* don't support file backed memory */
+- for (i = 0; i < nr_pages; i++) {
+- struct vm_area_struct *vma = vmas[i];
+-
+- if (vma_is_shmem(vma))
+- continue;
+- if (vma->vm_file &&
+- !is_file_hugepages(vma->vm_file)) {
+- ret = -EOPNOTSUPP;
+- break;
+- }
+- }
+- *npages = nr_pages;
+- } else {
+- ret = pret < 0 ? pret : -EFAULT;
+- }
+- mmap_read_unlock(current->mm);
+- if (ret) {
+- /*
+- * if we did partial map, or found file backed vmas,
+- * release any pages we did get
+- */
+- if (pret > 0)
+- unpin_user_pages(pages, pret);
+- goto done;
+- }
+- ret = 0;
+-done:
+- kvfree(vmas);
+- if (ret < 0) {
+- kvfree(pages);
+- pages = ERR_PTR(ret);
+- }
+- return pages;
+-}
+-
+-static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
+- struct io_mapped_ubuf **pimu,
+- struct page **last_hpage)
+-{
+- struct io_mapped_ubuf *imu = NULL;
+- struct page **pages = NULL;
+- unsigned long off;
+- size_t size;
+- int ret, nr_pages, i;
+-
+- if (!iov->iov_base) {
+- *pimu = ctx->dummy_ubuf;
+- return 0;
+- }
+-
+- *pimu = NULL;
+- ret = -ENOMEM;
+-
+- pages = io_pin_pages((unsigned long) iov->iov_base, iov->iov_len,
+- &nr_pages);
+- if (IS_ERR(pages)) {
+- ret = PTR_ERR(pages);
+- pages = NULL;
+- goto done;
+- }
+-
+- imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
+- if (!imu)
+- goto done;
+-
+- ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
+- if (ret) {
+- unpin_user_pages(pages, nr_pages);
+- goto done;
+- }
+-
+- off = (unsigned long) iov->iov_base & ~PAGE_MASK;
+- size = iov->iov_len;
+- for (i = 0; i < nr_pages; i++) {
+- size_t vec_len;
+-
+- vec_len = min_t(size_t, size, PAGE_SIZE - off);
+- imu->bvec[i].bv_page = pages[i];
+- imu->bvec[i].bv_len = vec_len;
+- imu->bvec[i].bv_offset = off;
+- off = 0;
+- size -= vec_len;
+- }
+- /* store original address for later verification */
+- imu->ubuf = (unsigned long) iov->iov_base;
+- imu->ubuf_end = imu->ubuf + iov->iov_len;
+- imu->nr_bvecs = nr_pages;
+- *pimu = imu;
+- ret = 0;
+-done:
+- if (ret)
+- kvfree(imu);
+- kvfree(pages);
+- return ret;
+-}
+-
+-static int io_buffers_map_alloc(struct io_ring_ctx *ctx, unsigned int nr_args)
+-{
+- ctx->user_bufs = kcalloc(nr_args, sizeof(*ctx->user_bufs), GFP_KERNEL);
+- return ctx->user_bufs ? 0 : -ENOMEM;
+-}
+-
+-static int io_buffer_validate(struct iovec *iov)
+-{
+- unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
+-
+- /*
+- * Don't impose further limits on the size and buffer
+- * constraints here, we'll -EINVAL later when IO is
+- * submitted if they are wrong.
+- */
+- if (!iov->iov_base)
+- return iov->iov_len ? -EFAULT : 0;
+- if (!iov->iov_len)
+- return -EFAULT;
+-
+- /* arbitrary limit, but we need something */
+- if (iov->iov_len > SZ_1G)
+- return -EFAULT;
+-
+- if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
+- return -EOVERFLOW;
+-
+- return 0;
+-}
+-
+-static int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned int nr_args, u64 __user *tags)
+-{
+- struct page *last_hpage = NULL;
+- struct io_rsrc_data *data;
+- int i, ret;
+- struct iovec iov;
+-
+- if (ctx->user_bufs)
+- return -EBUSY;
+- if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS)
+- return -EINVAL;
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- return ret;
+- ret = io_rsrc_data_alloc(ctx, io_rsrc_buf_put, tags, nr_args, &data);
+- if (ret)
+- return ret;
+- ret = io_buffers_map_alloc(ctx, nr_args);
+- if (ret) {
+- io_rsrc_data_free(data);
+- return ret;
+- }
+-
+- for (i = 0; i < nr_args; i++, ctx->nr_user_bufs++) {
+- if (arg) {
+- ret = io_copy_iov(ctx, &iov, arg, i);
+- if (ret)
+- break;
+- ret = io_buffer_validate(&iov);
+- if (ret)
+- break;
+- } else {
+- memset(&iov, 0, sizeof(iov));
+- }
+-
+- if (!iov.iov_base && *io_get_tag_slot(data, i)) {
+- ret = -EINVAL;
+- break;
+- }
+-
+- ret = io_sqe_buffer_register(ctx, &iov, &ctx->user_bufs[i],
+- &last_hpage);
+- if (ret)
+- break;
+- }
+-
+- WARN_ON_ONCE(ctx->buf_data);
+-
+- ctx->buf_data = data;
+- if (ret)
+- __io_sqe_buffers_unregister(ctx);
+- else
+- io_rsrc_node_switch(ctx, NULL);
+- return ret;
+-}
+-
+-static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
+- struct io_uring_rsrc_update2 *up,
+- unsigned int nr_args)
+-{
+- u64 __user *tags = u64_to_user_ptr(up->tags);
+- struct iovec iov, __user *iovs = u64_to_user_ptr(up->data);
+- struct page *last_hpage = NULL;
+- bool needs_switch = false;
+- __u32 done;
+- int i, err;
+-
+- if (!ctx->buf_data)
+- return -ENXIO;
+- if (up->offset + nr_args > ctx->nr_user_bufs)
+- return -EINVAL;
+-
+- for (done = 0; done < nr_args; done++) {
+- struct io_mapped_ubuf *imu;
+- int offset = up->offset + done;
+- u64 tag = 0;
+-
+- err = io_copy_iov(ctx, &iov, iovs, done);
+- if (err)
+- break;
+- if (tags && copy_from_user(&tag, &tags[done], sizeof(tag))) {
+- err = -EFAULT;
+- break;
+- }
+- err = io_buffer_validate(&iov);
+- if (err)
+- break;
+- if (!iov.iov_base && tag) {
+- err = -EINVAL;
+- break;
+- }
+- err = io_sqe_buffer_register(ctx, &iov, &imu, &last_hpage);
+- if (err)
+- break;
+-
+- i = array_index_nospec(offset, ctx->nr_user_bufs);
+- if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
+- err = io_queue_rsrc_removal(ctx->buf_data, i,
+- ctx->rsrc_node, ctx->user_bufs[i]);
+- if (unlikely(err)) {
+- io_buffer_unmap(ctx, &imu);
+- break;
+- }
+- ctx->user_bufs[i] = NULL;
+- needs_switch = true;
+- }
+-
+- ctx->user_bufs[i] = imu;
+- *io_get_tag_slot(ctx->buf_data, offset) = tag;
+- }
+-
+- if (needs_switch)
+- io_rsrc_node_switch(ctx, ctx->buf_data);
+- return done ? done : err;
+-}
+-
+-static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned int eventfd_async)
+-{
+- struct io_ev_fd *ev_fd;
+- __s32 __user *fds = arg;
+- int fd;
+-
+- ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
+- lockdep_is_held(&ctx->uring_lock));
+- if (ev_fd)
+- return -EBUSY;
+-
+- if (copy_from_user(&fd, fds, sizeof(*fds)))
+- return -EFAULT;
+-
+- ev_fd = kmalloc(sizeof(*ev_fd), GFP_KERNEL);
+- if (!ev_fd)
+- return -ENOMEM;
+-
+- ev_fd->cq_ev_fd = eventfd_ctx_fdget(fd);
+- if (IS_ERR(ev_fd->cq_ev_fd)) {
+- int ret = PTR_ERR(ev_fd->cq_ev_fd);
+- kfree(ev_fd);
+- return ret;
+- }
+- ev_fd->eventfd_async = eventfd_async;
+- ctx->has_evfd = true;
+- rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
+- return 0;
+-}
+-
+-static void io_eventfd_put(struct rcu_head *rcu)
+-{
+- struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
+-
+- eventfd_ctx_put(ev_fd->cq_ev_fd);
+- kfree(ev_fd);
+-}
+-
+-static int io_eventfd_unregister(struct io_ring_ctx *ctx)
+-{
+- struct io_ev_fd *ev_fd;
+-
+- ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
+- lockdep_is_held(&ctx->uring_lock));
+- if (ev_fd) {
+- ctx->has_evfd = false;
+- rcu_assign_pointer(ctx->io_ev_fd, NULL);
+- call_rcu(&ev_fd->rcu, io_eventfd_put);
+- return 0;
+- }
+-
+- return -ENXIO;
+-}
+-
+-static void io_destroy_buffers(struct io_ring_ctx *ctx)
+-{
+- struct io_buffer_list *bl;
+- unsigned long index;
+- int i;
+-
+- for (i = 0; i < BGID_ARRAY; i++) {
+- if (!ctx->io_bl)
+- break;
+- __io_remove_buffers(ctx, &ctx->io_bl[i], -1U);
+- }
+-
+- xa_for_each(&ctx->io_bl_xa, index, bl) {
+- xa_erase(&ctx->io_bl_xa, bl->bgid);
+- __io_remove_buffers(ctx, bl, -1U);
+- kfree(bl);
+- }
+-
+- while (!list_empty(&ctx->io_buffers_pages)) {
+- struct page *page;
+-
+- page = list_first_entry(&ctx->io_buffers_pages, struct page, lru);
+- list_del_init(&page->lru);
+- __free_page(page);
+- }
+-}
+-
+-static void io_req_caches_free(struct io_ring_ctx *ctx)
+-{
+- struct io_submit_state *state = &ctx->submit_state;
+- int nr = 0;
+-
+- mutex_lock(&ctx->uring_lock);
+- io_flush_cached_locked_reqs(ctx, state);
+-
+- while (!io_req_cache_empty(ctx)) {
+- struct io_wq_work_node *node;
+- struct io_kiocb *req;
+-
+- node = wq_stack_extract(&state->free_list);
+- req = container_of(node, struct io_kiocb, comp_list);
+- kmem_cache_free(req_cachep, req);
+- nr++;
+- }
+- if (nr)
+- percpu_ref_put_many(&ctx->refs, nr);
+- mutex_unlock(&ctx->uring_lock);
+-}
+-
+-static void io_wait_rsrc_data(struct io_rsrc_data *data)
+-{
+- if (data && !atomic_dec_and_test(&data->refs))
+- wait_for_completion(&data->done);
+-}
+-
+-static void io_flush_apoll_cache(struct io_ring_ctx *ctx)
+-{
+- struct async_poll *apoll;
+-
+- while (!list_empty(&ctx->apoll_cache)) {
+- apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
+- poll.wait.entry);
+- list_del(&apoll->poll.wait.entry);
+- kfree(apoll);
+- }
+-}
+-
+-static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
+-{
+- io_sq_thread_finish(ctx);
+-
+- if (ctx->mm_account) {
+- mmdrop(ctx->mm_account);
+- ctx->mm_account = NULL;
+- }
+-
+- io_rsrc_refs_drop(ctx);
+- /* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
+- io_wait_rsrc_data(ctx->buf_data);
+- io_wait_rsrc_data(ctx->file_data);
+-
+- mutex_lock(&ctx->uring_lock);
+- if (ctx->buf_data)
+- __io_sqe_buffers_unregister(ctx);
+- if (ctx->file_data)
+- __io_sqe_files_unregister(ctx);
+- if (ctx->rings)
+- __io_cqring_overflow_flush(ctx, true);
+- io_eventfd_unregister(ctx);
+- io_flush_apoll_cache(ctx);
+- mutex_unlock(&ctx->uring_lock);
+- io_destroy_buffers(ctx);
+- if (ctx->sq_creds)
+- put_cred(ctx->sq_creds);
+-
+- /* there are no registered resources left, nobody uses it */
+- if (ctx->rsrc_node)
+- io_rsrc_node_destroy(ctx->rsrc_node);
+- if (ctx->rsrc_backup_node)
+- io_rsrc_node_destroy(ctx->rsrc_backup_node);
+- flush_delayed_work(&ctx->rsrc_put_work);
+- flush_delayed_work(&ctx->fallback_work);
+-
+- WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
+- WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
+-
+-#if defined(CONFIG_UNIX)
+- if (ctx->ring_sock) {
+- ctx->ring_sock->file = NULL; /* so that iput() is called */
+- sock_release(ctx->ring_sock);
+- }
+-#endif
+- WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
+-
+- io_mem_free(ctx->rings);
+- io_mem_free(ctx->sq_sqes);
+-
+- percpu_ref_exit(&ctx->refs);
+- free_uid(ctx->user);
+- io_req_caches_free(ctx);
+- if (ctx->hash_map)
+- io_wq_put_hash(ctx->hash_map);
+- kfree(ctx->cancel_hash);
+- kfree(ctx->dummy_ubuf);
+- kfree(ctx->io_bl);
+- xa_destroy(&ctx->io_bl_xa);
+- kfree(ctx);
+-}
+-
+-static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+-{
+- struct io_ring_ctx *ctx = file->private_data;
+- __poll_t mask = 0;
+-
+- poll_wait(file, &ctx->cq_wait, wait);
+- /*
+- * synchronizes with barrier from wq_has_sleeper call in
+- * io_commit_cqring
+- */
+- smp_rmb();
+- if (!io_sqring_full(ctx))
+- mask |= EPOLLOUT | EPOLLWRNORM;
+-
+- /*
+- * Don't flush cqring overflow list here, just do a simple check.
+-	 * Otherwise there could possibly be an ABBA deadlock:
+- * CPU0 CPU1
+- * ---- ----
+- * lock(&ctx->uring_lock);
+- * lock(&ep->mtx);
+- * lock(&ctx->uring_lock);
+- * lock(&ep->mtx);
+- *
+-	 * Users may get EPOLLIN while seeing nothing in the cqring; this
+-	 * pushes them to do the flush.
+- */
+- if (io_cqring_events(ctx) ||
+- test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
+- mask |= EPOLLIN | EPOLLRDNORM;
+-
+- return mask;
+-}
+-
+-static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
+-{
+- const struct cred *creds;
+-
+- creds = xa_erase(&ctx->personalities, id);
+- if (creds) {
+- put_cred(creds);
+- return 0;
+- }
+-
+- return -EINVAL;
+-}
+-
+-struct io_tctx_exit {
+- struct callback_head task_work;
+- struct completion completion;
+- struct io_ring_ctx *ctx;
+-};
+-
+-static __cold void io_tctx_exit_cb(struct callback_head *cb)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- struct io_tctx_exit *work;
+-
+- work = container_of(cb, struct io_tctx_exit, task_work);
+- /*
+- * When @in_idle, we're in cancellation and it's racy to remove the
+- * node. It'll be removed by the end of cancellation, just ignore it.
+- */
+- if (!atomic_read(&tctx->in_idle))
+- io_uring_del_tctx_node((unsigned long)work->ctx);
+- complete(&work->completion);
+-}
+-
+-static __cold bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
+-{
+- struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+-
+- return req->ctx == data;
+-}
+-
+-static __cold void io_ring_exit_work(struct work_struct *work)
+-{
+- struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
+- unsigned long timeout = jiffies + HZ * 60 * 5;
+- unsigned long interval = HZ / 20;
+- struct io_tctx_exit exit;
+- struct io_tctx_node *node;
+- int ret;
+-
+- /*
+- * If we're doing polled IO and end up having requests being
+- * submitted async (out-of-line), then completions can come in while
+- * we're waiting for refs to drop. We need to reap these manually,
+- * as nobody else will be looking for them.
+- */
+- do {
+- io_uring_try_cancel_requests(ctx, NULL, true);
+- if (ctx->sq_data) {
+- struct io_sq_data *sqd = ctx->sq_data;
+- struct task_struct *tsk;
+-
+- io_sq_thread_park(sqd);
+- tsk = sqd->thread;
+- if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
+- io_wq_cancel_cb(tsk->io_uring->io_wq,
+- io_cancel_ctx_cb, ctx, true);
+- io_sq_thread_unpark(sqd);
+- }
+-
+- io_req_caches_free(ctx);
+-
+- if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
+- /* there is little hope left, don't run it too often */
+- interval = HZ * 60;
+- }
+- } while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
+-
+- init_completion(&exit.completion);
+- init_task_work(&exit.task_work, io_tctx_exit_cb);
+- exit.ctx = ctx;
+- /*
+- * Some may use context even when all refs and requests have been put,
+- * and they are free to do so while still holding uring_lock or
+- * completion_lock, see io_req_task_submit(). Apart from other work,
+-	 * this lock/unlock section also waits for them to finish.
+- */
+- mutex_lock(&ctx->uring_lock);
+- while (!list_empty(&ctx->tctx_list)) {
+- WARN_ON_ONCE(time_after(jiffies, timeout));
+-
+- node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
+- ctx_node);
+- /* don't spin on a single task if cancellation failed */
+- list_rotate_left(&ctx->tctx_list);
+- ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
+- if (WARN_ON_ONCE(ret))
+- continue;
+-
+- mutex_unlock(&ctx->uring_lock);
+- wait_for_completion(&exit.completion);
+- mutex_lock(&ctx->uring_lock);
+- }
+- mutex_unlock(&ctx->uring_lock);
+- spin_lock(&ctx->completion_lock);
+- spin_unlock(&ctx->completion_lock);
+-
+- io_ring_ctx_free(ctx);
+-}
+-
+-/* Returns true if we found and killed one or more timeouts */
+-static __cold bool io_kill_timeouts(struct io_ring_ctx *ctx,
+- struct task_struct *tsk, bool cancel_all)
+-{
+- struct io_kiocb *req, *tmp;
+- int canceled = 0;
+-
+- spin_lock(&ctx->completion_lock);
+- spin_lock_irq(&ctx->timeout_lock);
+- list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+- if (io_match_task(req, tsk, cancel_all)) {
+- io_kill_timeout(req, -ECANCELED);
+- canceled++;
+- }
+- }
+- spin_unlock_irq(&ctx->timeout_lock);
+- io_commit_cqring(ctx);
+- spin_unlock(&ctx->completion_lock);
+- if (canceled != 0)
+- io_cqring_ev_posted(ctx);
+- return canceled != 0;
+-}
+-
+-static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+-{
+- unsigned long index;
+- struct creds *creds;
+-
+- mutex_lock(&ctx->uring_lock);
+- percpu_ref_kill(&ctx->refs);
+- if (ctx->rings)
+- __io_cqring_overflow_flush(ctx, true);
+- xa_for_each(&ctx->personalities, index, creds)
+- io_unregister_personality(ctx, index);
+- mutex_unlock(&ctx->uring_lock);
+-
+- /* failed during ring init, it couldn't have issued any requests */
+- if (ctx->rings) {
+- io_kill_timeouts(ctx, NULL, true);
+- io_poll_remove_all(ctx, NULL, true);
+- /* if we failed setting up the ctx, we might not have any rings */
+- io_iopoll_try_reap_events(ctx);
+- }
+-
+- INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+- /*
+- * Use system_unbound_wq to avoid spawning tons of event kworkers
+- * if we're exiting a ton of rings at the same time. It just adds
+-	 * noise and overhead; there's no discernible change in runtime
+- * over using system_wq.
+- */
+- queue_work(system_unbound_wq, &ctx->exit_work);
+-}
+-
+-static int io_uring_release(struct inode *inode, struct file *file)
+-{
+- struct io_ring_ctx *ctx = file->private_data;
+-
+- file->private_data = NULL;
+- io_ring_ctx_wait_and_kill(ctx);
+- return 0;
+-}
+-
+-struct io_task_cancel {
+- struct task_struct *task;
+- bool all;
+-};
+-
+-static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+-{
+- struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+- struct io_task_cancel *cancel = data;
+-
+- return io_match_task_safe(req, cancel->task, cancel->all);
+-}
+-
+-static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
+- struct task_struct *task,
+- bool cancel_all)
+-{
+- struct io_defer_entry *de;
+- LIST_HEAD(list);
+-
+- spin_lock(&ctx->completion_lock);
+- list_for_each_entry_reverse(de, &ctx->defer_list, list) {
+- if (io_match_task_safe(de->req, task, cancel_all)) {
+- list_cut_position(&list, &ctx->defer_list, &de->list);
+- break;
+- }
+- }
+- spin_unlock(&ctx->completion_lock);
+- if (list_empty(&list))
+- return false;
+-
+- while (!list_empty(&list)) {
+- de = list_first_entry(&list, struct io_defer_entry, list);
+- list_del_init(&de->list);
+- io_req_complete_failed(de->req, -ECANCELED);
+- kfree(de);
+- }
+- return true;
+-}
+-
+-static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
+-{
+- struct io_tctx_node *node;
+- enum io_wq_cancel cret;
+- bool ret = false;
+-
+- mutex_lock(&ctx->uring_lock);
+- list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
+- struct io_uring_task *tctx = node->task->io_uring;
+-
+- /*
+- * io_wq will stay alive while we hold uring_lock, because it's
+-		 * killed after ctx nodes, which requires taking the lock.
+- */
+- if (!tctx || !tctx->io_wq)
+- continue;
+- cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
+- ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+- }
+- mutex_unlock(&ctx->uring_lock);
+-
+- return ret;
+-}
+-
+-static __cold void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+- struct task_struct *task,
+- bool cancel_all)
+-{
+- struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
+- struct io_uring_task *tctx = task ? task->io_uring : NULL;
+-
+- /* failed during ring init, it couldn't have issued any requests */
+- if (!ctx->rings)
+- return;
+-
+- while (1) {
+- enum io_wq_cancel cret;
+- bool ret = false;
+-
+- if (!task) {
+- ret |= io_uring_try_cancel_iowq(ctx);
+- } else if (tctx && tctx->io_wq) {
+- /*
+- * Cancels requests of all rings, not only @ctx, but
+- * it's fine as the task is in exit/exec.
+- */
+- cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
+- &cancel, true);
+- ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+- }
+-
+- /* SQPOLL thread does its own polling */
+- if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
+- (ctx->sq_data && ctx->sq_data->thread == current)) {
+- while (!wq_list_empty(&ctx->iopoll_list)) {
+- io_iopoll_try_reap_events(ctx);
+- ret = true;
+- }
+- }
+-
+- ret |= io_cancel_defer_files(ctx, task, cancel_all);
+- ret |= io_poll_remove_all(ctx, task, cancel_all);
+- ret |= io_kill_timeouts(ctx, task, cancel_all);
+- if (task)
+- ret |= io_run_task_work();
+- if (!ret)
+- break;
+- cond_resched();
+- }
+-}
+-
+-static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- struct io_tctx_node *node;
+- int ret;
+-
+- if (unlikely(!tctx)) {
+- ret = io_uring_alloc_task_context(current, ctx);
+- if (unlikely(ret))
+- return ret;
+-
+- tctx = current->io_uring;
+- if (ctx->iowq_limits_set) {
+- unsigned int limits[2] = { ctx->iowq_limits[0],
+- ctx->iowq_limits[1], };
+-
+- ret = io_wq_max_workers(tctx->io_wq, limits);
+- if (ret)
+- return ret;
+- }
+- }
+- if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
+- node = kmalloc(sizeof(*node), GFP_KERNEL);
+- if (!node)
+- return -ENOMEM;
+- node->ctx = ctx;
+- node->task = current;
+-
+- ret = xa_err(xa_store(&tctx->xa, (unsigned long)ctx,
+- node, GFP_KERNEL));
+- if (ret) {
+- kfree(node);
+- return ret;
+- }
+-
+- mutex_lock(&ctx->uring_lock);
+- list_add(&node->ctx_node, &ctx->tctx_list);
+- mutex_unlock(&ctx->uring_lock);
+- }
+- tctx->last = ctx;
+- return 0;
+-}
+-
+-/*
+- * Note that this task has used io_uring. We use it for cancelation purposes.
+- */
+-static inline int io_uring_add_tctx_node(struct io_ring_ctx *ctx)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+-
+- if (likely(tctx && tctx->last == ctx))
+- return 0;
+- return __io_uring_add_tctx_node(ctx);
+-}
+-
+-/*
+- * Remove this io_uring_file -> task mapping.
+- */
+-static __cold void io_uring_del_tctx_node(unsigned long index)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- struct io_tctx_node *node;
+-
+- if (!tctx)
+- return;
+- node = xa_erase(&tctx->xa, index);
+- if (!node)
+- return;
+-
+- WARN_ON_ONCE(current != node->task);
+- WARN_ON_ONCE(list_empty(&node->ctx_node));
+-
+- mutex_lock(&node->ctx->uring_lock);
+- list_del(&node->ctx_node);
+- mutex_unlock(&node->ctx->uring_lock);
+-
+- if (tctx->last == node->ctx)
+- tctx->last = NULL;
+- kfree(node);
+-}
+-
+-static __cold void io_uring_clean_tctx(struct io_uring_task *tctx)
+-{
+- struct io_wq *wq = tctx->io_wq;
+- struct io_tctx_node *node;
+- unsigned long index;
+-
+- xa_for_each(&tctx->xa, index, node) {
+- io_uring_del_tctx_node(index);
+- cond_resched();
+- }
+- if (wq) {
+- /*
+- * Must be after io_uring_del_tctx_node() (removes nodes under
+- * uring_lock) to avoid race with io_uring_try_cancel_iowq().
+- */
+- io_wq_put_and_exit(wq);
+- tctx->io_wq = NULL;
+- }
+-}
+-
+-static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
+-{
+- if (tracked)
+- return atomic_read(&tctx->inflight_tracked);
+- return percpu_counter_sum(&tctx->inflight);
+-}
+-
+-/*
+- * Find any io_uring ctx that this task has registered or done IO on, and cancel
+- * requests. @sqd should be non-NULL IFF it's an SQPOLL thread cancellation.
+- */
+-static __cold void io_uring_cancel_generic(bool cancel_all,
+- struct io_sq_data *sqd)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- struct io_ring_ctx *ctx;
+- s64 inflight;
+- DEFINE_WAIT(wait);
+-
+- WARN_ON_ONCE(sqd && sqd->thread != current);
+-
+- if (!current->io_uring)
+- return;
+- if (tctx->io_wq)
+- io_wq_exit_start(tctx->io_wq);
+-
+- atomic_inc(&tctx->in_idle);
+- do {
+- io_uring_drop_tctx_refs(current);
+- /* read completions before cancelations */
+- inflight = tctx_inflight(tctx, !cancel_all);
+- if (!inflight)
+- break;
+-
+- if (!sqd) {
+- struct io_tctx_node *node;
+- unsigned long index;
+-
+- xa_for_each(&tctx->xa, index, node) {
+- /* sqpoll task will cancel all its requests */
+- if (node->ctx->sq_data)
+- continue;
+- io_uring_try_cancel_requests(node->ctx, current,
+- cancel_all);
+- }
+- } else {
+- list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+- io_uring_try_cancel_requests(ctx, current,
+- cancel_all);
+- }
+-
+- prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
+- io_run_task_work();
+- io_uring_drop_tctx_refs(current);
+-
+- /*
+- * If we've seen completions, retry without waiting. This
+- * avoids a race where a completion comes in before we did
+- * prepare_to_wait().
+- */
+- if (inflight == tctx_inflight(tctx, !cancel_all))
+- schedule();
+- finish_wait(&tctx->wait, &wait);
+- } while (1);
+-
+- io_uring_clean_tctx(tctx);
+- if (cancel_all) {
+- /*
+- * We shouldn't run task_works after cancel, so just leave
+- * ->in_idle set for normal exit.
+- */
+- atomic_dec(&tctx->in_idle);
+- /* for exec all current's requests should be gone, kill tctx */
+- __io_uring_free(current);
+- }
+-}
+-
+-void __io_uring_cancel(bool cancel_all)
+-{
+- io_uring_cancel_generic(cancel_all, NULL);
+-}
+-
+-void io_uring_unreg_ringfd(void)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- int i;
+-
+- for (i = 0; i < IO_RINGFD_REG_MAX; i++) {
+- if (tctx->registered_rings[i]) {
+- fput(tctx->registered_rings[i]);
+- tctx->registered_rings[i] = NULL;
+- }
+- }
+-}
+-
+-static int io_ring_add_registered_fd(struct io_uring_task *tctx, int fd,
+- int start, int end)
+-{
+- struct file *file;
+- int offset;
+-
+- for (offset = start; offset < end; offset++) {
+- offset = array_index_nospec(offset, IO_RINGFD_REG_MAX);
+- if (tctx->registered_rings[offset])
+- continue;
+-
+- file = fget(fd);
+- if (!file) {
+- return -EBADF;
+- } else if (file->f_op != &io_uring_fops) {
+- fput(file);
+- return -EOPNOTSUPP;
+- }
+- tctx->registered_rings[offset] = file;
+- return offset;
+- }
+-
+- return -EBUSY;
+-}
+-
+-/*
+- * Register a ring fd to avoid fdget/fdput for each io_uring_enter()
+- * invocation. User passes in an array of struct io_uring_rsrc_update
+- * with ->data set to the ring_fd, and ->offset given for the desired
+- * index. If no index is desired, application may set ->offset == -1U
+- * and we'll find an available index. Returns number of entries
+- * successfully processed, or < 0 on error if none were processed.
+- */
+-static int io_ringfd_register(struct io_ring_ctx *ctx, void __user *__arg,
+- unsigned nr_args)
+-{
+- struct io_uring_rsrc_update __user *arg = __arg;
+- struct io_uring_rsrc_update reg;
+- struct io_uring_task *tctx;
+- int ret, i;
+-
+- if (!nr_args || nr_args > IO_RINGFD_REG_MAX)
+- return -EINVAL;
+-
+- mutex_unlock(&ctx->uring_lock);
+- ret = io_uring_add_tctx_node(ctx);
+- mutex_lock(&ctx->uring_lock);
+- if (ret)
+- return ret;
+-
+- tctx = current->io_uring;
+- for (i = 0; i < nr_args; i++) {
+- int start, end;
+-
+-		if (copy_from_user(&reg, &arg[i], sizeof(reg))) {
+- ret = -EFAULT;
+- break;
+- }
+-
+- if (reg.resv) {
+- ret = -EINVAL;
+- break;
+- }
+-
+- if (reg.offset == -1U) {
+- start = 0;
+- end = IO_RINGFD_REG_MAX;
+- } else {
+- if (reg.offset >= IO_RINGFD_REG_MAX) {
+- ret = -EINVAL;
+- break;
+- }
+- start = reg.offset;
+- end = start + 1;
+- }
+-
+- ret = io_ring_add_registered_fd(tctx, reg.data, start, end);
+- if (ret < 0)
+- break;
+-
+- reg.offset = ret;
+-		if (copy_to_user(&arg[i], &reg, sizeof(reg))) {
+- fput(tctx->registered_rings[reg.offset]);
+- tctx->registered_rings[reg.offset] = NULL;
+- ret = -EFAULT;
+- break;
+- }
+- }
+-
+- return i ? i : ret;
+-}
+-
+-static int io_ringfd_unregister(struct io_ring_ctx *ctx, void __user *__arg,
+- unsigned nr_args)
+-{
+- struct io_uring_rsrc_update __user *arg = __arg;
+- struct io_uring_task *tctx = current->io_uring;
+- struct io_uring_rsrc_update reg;
+- int ret = 0, i;
+-
+- if (!nr_args || nr_args > IO_RINGFD_REG_MAX)
+- return -EINVAL;
+- if (!tctx)
+- return 0;
+-
+- for (i = 0; i < nr_args; i++) {
+-		if (copy_from_user(&reg, &arg[i], sizeof(reg))) {
+- ret = -EFAULT;
+- break;
+- }
+- if (reg.resv || reg.data || reg.offset >= IO_RINGFD_REG_MAX) {
+- ret = -EINVAL;
+- break;
+- }
+-
+- reg.offset = array_index_nospec(reg.offset, IO_RINGFD_REG_MAX);
+- if (tctx->registered_rings[reg.offset]) {
+- fput(tctx->registered_rings[reg.offset]);
+- tctx->registered_rings[reg.offset] = NULL;
+- }
+- }
+-
+- return i ? i : ret;
+-}
+-
+-static void *io_uring_validate_mmap_request(struct file *file,
+- loff_t pgoff, size_t sz)
+-{
+- struct io_ring_ctx *ctx = file->private_data;
+- loff_t offset = pgoff << PAGE_SHIFT;
+- struct page *page;
+- void *ptr;
+-
+- switch (offset) {
+- case IORING_OFF_SQ_RING:
+- case IORING_OFF_CQ_RING:
+- ptr = ctx->rings;
+- break;
+- case IORING_OFF_SQES:
+- ptr = ctx->sq_sqes;
+- break;
+- default:
+- return ERR_PTR(-EINVAL);
+- }
+-
+- page = virt_to_head_page(ptr);
+- if (sz > page_size(page))
+- return ERR_PTR(-EINVAL);
+-
+- return ptr;
+-}
+-
+-#ifdef CONFIG_MMU
+-
+-static __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+-{
+- size_t sz = vma->vm_end - vma->vm_start;
+- unsigned long pfn;
+- void *ptr;
+-
+- ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
+- if (IS_ERR(ptr))
+- return PTR_ERR(ptr);
+-
+- pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
+- return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
+-}
+-
+-#else /* !CONFIG_MMU */
+-
+-static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+-{
+- return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
+-}
+-
+-static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
+-{
+- return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
+-}
+-
+-static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
+- unsigned long addr, unsigned long len,
+- unsigned long pgoff, unsigned long flags)
+-{
+- void *ptr;
+-
+- ptr = io_uring_validate_mmap_request(file, pgoff, len);
+- if (IS_ERR(ptr))
+- return PTR_ERR(ptr);
+-
+- return (unsigned long) ptr;
+-}
+-
+-#endif /* !CONFIG_MMU */
+-
+-static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+-{
+- DEFINE_WAIT(wait);
+-
+- do {
+- if (!io_sqring_full(ctx))
+- break;
+- prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
+-
+- if (!io_sqring_full(ctx))
+- break;
+- schedule();
+- } while (!signal_pending(current));
+-
+- finish_wait(&ctx->sqo_sq_wait, &wait);
+- return 0;
+-}
+-
+-static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t argsz)
+-{
+- if (flags & IORING_ENTER_EXT_ARG) {
+- struct io_uring_getevents_arg arg;
+-
+- if (argsz != sizeof(arg))
+- return -EINVAL;
+- if (copy_from_user(&arg, argp, sizeof(arg)))
+- return -EFAULT;
+- }
+- return 0;
+-}
+-
+-static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
+- struct __kernel_timespec __user **ts,
+- const sigset_t __user **sig)
+-{
+- struct io_uring_getevents_arg arg;
+-
+- /*
+- * If EXT_ARG isn't set, then we have no timespec and the argp pointer
+- * is just a pointer to the sigset_t.
+- */
+- if (!(flags & IORING_ENTER_EXT_ARG)) {
+- *sig = (const sigset_t __user *) argp;
+- *ts = NULL;
+- return 0;
+- }
+-
+- /*
+- * EXT_ARG is set - ensure we agree on the size of it and copy in our
+- * timespec and sigset_t pointers if good.
+- */
+- if (*argsz != sizeof(arg))
+- return -EINVAL;
+- if (copy_from_user(&arg, argp, sizeof(arg)))
+- return -EFAULT;
+- if (arg.pad)
+- return -EINVAL;
+- *sig = u64_to_user_ptr(arg.sigmask);
+- *argsz = arg.sigmask_sz;
+- *ts = u64_to_user_ptr(arg.ts);
+- return 0;
+-}
+-
+-SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+- u32, min_complete, u32, flags, const void __user *, argp,
+- size_t, argsz)
+-{
+- struct io_ring_ctx *ctx;
+- struct fd f;
+- long ret;
+-
+- io_run_task_work();
+-
+- if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
+- IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
+- IORING_ENTER_REGISTERED_RING)))
+- return -EINVAL;
+-
+- /*
+-	 * If the ring fd has been registered via IORING_REGISTER_RING_FDS,
+-	 * we need only dereference our task-private array to find it.
+- */
+- if (flags & IORING_ENTER_REGISTERED_RING) {
+- struct io_uring_task *tctx = current->io_uring;
+-
+- if (!tctx || fd >= IO_RINGFD_REG_MAX)
+- return -EINVAL;
+- fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
+- f.file = tctx->registered_rings[fd];
+- f.flags = 0;
+- } else {
+- f = fdget(fd);
+- }
+-
+- if (unlikely(!f.file))
+- return -EBADF;
+-
+- ret = -EOPNOTSUPP;
+- if (unlikely(f.file->f_op != &io_uring_fops))
+- goto out_fput;
+-
+- ret = -ENXIO;
+- ctx = f.file->private_data;
+- if (unlikely(!percpu_ref_tryget(&ctx->refs)))
+- goto out_fput;
+-
+- ret = -EBADFD;
+- if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
+- goto out;
+-
+- /*
+- * For SQ polling, the thread will do all submissions and completions.
+- * Just return the requested submit count, and wake the thread if
+- * we were asked to.
+- */
+- ret = 0;
+- if (ctx->flags & IORING_SETUP_SQPOLL) {
+- io_cqring_overflow_flush(ctx);
+-
+- if (unlikely(ctx->sq_data->thread == NULL)) {
+- ret = -EOWNERDEAD;
+- goto out;
+- }
+- if (flags & IORING_ENTER_SQ_WAKEUP)
+- wake_up(&ctx->sq_data->wait);
+- if (flags & IORING_ENTER_SQ_WAIT) {
+- ret = io_sqpoll_wait_sq(ctx);
+- if (ret)
+- goto out;
+- }
+- ret = to_submit;
+- } else if (to_submit) {
+- ret = io_uring_add_tctx_node(ctx);
+- if (unlikely(ret))
+- goto out;
+-
+- mutex_lock(&ctx->uring_lock);
+- ret = io_submit_sqes(ctx, to_submit);
+- if (ret != to_submit) {
+- mutex_unlock(&ctx->uring_lock);
+- goto out;
+- }
+- if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
+- goto iopoll_locked;
+- mutex_unlock(&ctx->uring_lock);
+- }
+- if (flags & IORING_ENTER_GETEVENTS) {
+- int ret2;
+- if (ctx->syscall_iopoll) {
+- /*
+- * We disallow the app entering submit/complete with
+- * polling, but we still need to lock the ring to
+- * prevent racing with polled issue that got punted to
+- * a workqueue.
+- */
+- mutex_lock(&ctx->uring_lock);
+-iopoll_locked:
+- ret2 = io_validate_ext_arg(flags, argp, argsz);
+- if (likely(!ret2)) {
+- min_complete = min(min_complete,
+- ctx->cq_entries);
+- ret2 = io_iopoll_check(ctx, min_complete);
+- }
+- mutex_unlock(&ctx->uring_lock);
+- } else {
+- const sigset_t __user *sig;
+- struct __kernel_timespec __user *ts;
+-
+- ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
+- if (likely(!ret2)) {
+- min_complete = min(min_complete,
+- ctx->cq_entries);
+- ret2 = io_cqring_wait(ctx, min_complete, sig,
+- argsz, ts);
+- }
+- }
+-
+- if (!ret) {
+- ret = ret2;
+-
+- /*
+-			 * EBADR indicates that one or more CQEs were dropped.
+- * Once the user has been informed we can clear the bit
+- * as they are obviously ok with those drops.
+- */
+- if (unlikely(ret2 == -EBADR))
+- clear_bit(IO_CHECK_CQ_DROPPED_BIT,
+- &ctx->check_cq);
+- }
+- }
+-
+-out:
+- percpu_ref_put(&ctx->refs);
+-out_fput:
+- fdput(f);
+- return ret;
+-}
+-
+-#ifdef CONFIG_PROC_FS
+-static __cold int io_uring_show_cred(struct seq_file *m, unsigned int id,
+- const struct cred *cred)
+-{
+- struct user_namespace *uns = seq_user_ns(m);
+- struct group_info *gi;
+- kernel_cap_t cap;
+- unsigned __capi;
+- int g;
+-
+- seq_printf(m, "%5d\n", id);
+- seq_put_decimal_ull(m, "\tUid:\t", from_kuid_munged(uns, cred->uid));
+- seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->euid));
+- seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->suid));
+- seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->fsuid));
+- seq_put_decimal_ull(m, "\n\tGid:\t", from_kgid_munged(uns, cred->gid));
+- seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->egid));
+- seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->sgid));
+- seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->fsgid));
+- seq_puts(m, "\n\tGroups:\t");
+- gi = cred->group_info;
+- for (g = 0; g < gi->ngroups; g++) {
+- seq_put_decimal_ull(m, g ? " " : "",
+- from_kgid_munged(uns, gi->gid[g]));
+- }
+- seq_puts(m, "\n\tCapEff:\t");
+- cap = cred->cap_effective;
+- CAP_FOR_EACH_U32(__capi)
+- seq_put_hex_ll(m, NULL, cap.cap[CAP_LAST_U32 - __capi], 8);
+- seq_putc(m, '\n');
+- return 0;
+-}
+-
+-static __cold void __io_uring_show_fdinfo(struct io_ring_ctx *ctx,
+- struct seq_file *m)
+-{
+- struct io_sq_data *sq = NULL;
+- struct io_overflow_cqe *ocqe;
+- struct io_rings *r = ctx->rings;
+- unsigned int sq_mask = ctx->sq_entries - 1, cq_mask = ctx->cq_entries - 1;
+- unsigned int sq_head = READ_ONCE(r->sq.head);
+- unsigned int sq_tail = READ_ONCE(r->sq.tail);
+- unsigned int cq_head = READ_ONCE(r->cq.head);
+- unsigned int cq_tail = READ_ONCE(r->cq.tail);
+- unsigned int cq_shift = 0;
+- unsigned int sq_entries, cq_entries;
+- bool has_lock;
+- bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
+- unsigned int i;
+-
+- if (is_cqe32)
+- cq_shift = 1;
+-
+- /*
+-	 * we may get imprecise sqe and cqe info if the ring is actively running,
+-	 * since we read cached_sq_head and cached_cq_tail without uring_lock
+-	 * and sq_tail and cq_head are changed by userspace. But that's OK, since
+-	 * we usually only use this info when the ring is stuck.
+- */
+- seq_printf(m, "SqMask:\t0x%x\n", sq_mask);
+- seq_printf(m, "SqHead:\t%u\n", sq_head);
+- seq_printf(m, "SqTail:\t%u\n", sq_tail);
+- seq_printf(m, "CachedSqHead:\t%u\n", ctx->cached_sq_head);
+- seq_printf(m, "CqMask:\t0x%x\n", cq_mask);
+- seq_printf(m, "CqHead:\t%u\n", cq_head);
+- seq_printf(m, "CqTail:\t%u\n", cq_tail);
+- seq_printf(m, "CachedCqTail:\t%u\n", ctx->cached_cq_tail);
+- seq_printf(m, "SQEs:\t%u\n", sq_tail - ctx->cached_sq_head);
+- sq_entries = min(sq_tail - sq_head, ctx->sq_entries);
+- for (i = 0; i < sq_entries; i++) {
+- unsigned int entry = i + sq_head;
+- unsigned int sq_idx = READ_ONCE(ctx->sq_array[entry & sq_mask]);
+- struct io_uring_sqe *sqe;
+-
+- if (sq_idx > sq_mask)
+- continue;
+- sqe = &ctx->sq_sqes[sq_idx];
+- seq_printf(m, "%5u: opcode:%d, fd:%d, flags:%x, user_data:%llu\n",
+- sq_idx, sqe->opcode, sqe->fd, sqe->flags,
+- sqe->user_data);
+- }
+- seq_printf(m, "CQEs:\t%u\n", cq_tail - cq_head);
+- cq_entries = min(cq_tail - cq_head, ctx->cq_entries);
+- for (i = 0; i < cq_entries; i++) {
+- unsigned int entry = i + cq_head;
+- struct io_uring_cqe *cqe = &r->cqes[(entry & cq_mask) << cq_shift];
+-
+- if (!is_cqe32) {
+- seq_printf(m, "%5u: user_data:%llu, res:%d, flag:%x\n",
+- entry & cq_mask, cqe->user_data, cqe->res,
+- cqe->flags);
+- } else {
+- seq_printf(m, "%5u: user_data:%llu, res:%d, flag:%x, "
+- "extra1:%llu, extra2:%llu\n",
+- entry & cq_mask, cqe->user_data, cqe->res,
+- cqe->flags, cqe->big_cqe[0], cqe->big_cqe[1]);
+- }
+- }
+-
+- /*
+- * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
+- * since fdinfo case grabs it in the opposite direction of normal use
+- * cases. If we fail to get the lock, we just don't iterate any
+- * structures that could be going away outside the io_uring mutex.
+- */
+- has_lock = mutex_trylock(&ctx->uring_lock);
+-
+- if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
+- sq = ctx->sq_data;
+- if (!sq->thread)
+- sq = NULL;
+- }
+-
+- seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
+- seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
+- seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
+- for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
+- struct file *f = io_file_from_index(ctx, i);
+-
+- if (f)
+- seq_printf(m, "%5u: %s\n", i, file_dentry(f)->d_iname);
+- else
+- seq_printf(m, "%5u: <none>\n", i);
+- }
+- seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
+- for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
+- struct io_mapped_ubuf *buf = ctx->user_bufs[i];
+- unsigned int len = buf->ubuf_end - buf->ubuf;
+-
+- seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, len);
+- }
+- if (has_lock && !xa_empty(&ctx->personalities)) {
+- unsigned long index;
+- const struct cred *cred;
+-
+- seq_printf(m, "Personalities:\n");
+- xa_for_each(&ctx->personalities, index, cred)
+- io_uring_show_cred(m, index, cred);
+- }
+- if (has_lock)
+- mutex_unlock(&ctx->uring_lock);
+-
+- seq_puts(m, "PollList:\n");
+- spin_lock(&ctx->completion_lock);
+- for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
+- struct hlist_head *list = &ctx->cancel_hash[i];
+- struct io_kiocb *req;
+-
+- hlist_for_each_entry(req, list, hash_node)
+- seq_printf(m, " op=%d, task_works=%d\n", req->opcode,
+- task_work_pending(req->task));
+- }
+-
+- seq_puts(m, "CqOverflowList:\n");
+- list_for_each_entry(ocqe, &ctx->cq_overflow_list, list) {
+- struct io_uring_cqe *cqe = &ocqe->cqe;
+-
+- seq_printf(m, " user_data=%llu, res=%d, flags=%x\n",
+- cqe->user_data, cqe->res, cqe->flags);
+-
+- }
+-
+- spin_unlock(&ctx->completion_lock);
+-}
+-
+-static __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
+-{
+- struct io_ring_ctx *ctx = f->private_data;
+-
+- if (percpu_ref_tryget(&ctx->refs)) {
+- __io_uring_show_fdinfo(ctx, m);
+- percpu_ref_put(&ctx->refs);
+- }
+-}
+-#endif
+-
+-static const struct file_operations io_uring_fops = {
+- .release = io_uring_release,
+- .mmap = io_uring_mmap,
+-#ifndef CONFIG_MMU
+- .get_unmapped_area = io_uring_nommu_get_unmapped_area,
+- .mmap_capabilities = io_uring_nommu_mmap_capabilities,
+-#endif
+- .poll = io_uring_poll,
+-#ifdef CONFIG_PROC_FS
+- .show_fdinfo = io_uring_show_fdinfo,
+-#endif
+-};
+-
+-static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+- struct io_uring_params *p)
+-{
+- struct io_rings *rings;
+- size_t size, sq_array_offset;
+-
+- /* make sure these are sane, as we already accounted them */
+- ctx->sq_entries = p->sq_entries;
+- ctx->cq_entries = p->cq_entries;
+-
+- size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
+- if (size == SIZE_MAX)
+- return -EOVERFLOW;
+-
+- rings = io_mem_alloc(size);
+- if (!rings)
+- return -ENOMEM;
+-
+- ctx->rings = rings;
+- ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
+- rings->sq_ring_mask = p->sq_entries - 1;
+- rings->cq_ring_mask = p->cq_entries - 1;
+- rings->sq_ring_entries = p->sq_entries;
+- rings->cq_ring_entries = p->cq_entries;
+-
+- if (p->flags & IORING_SETUP_SQE128)
+- size = array_size(2 * sizeof(struct io_uring_sqe), p->sq_entries);
+- else
+- size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
+- if (size == SIZE_MAX) {
+- io_mem_free(ctx->rings);
+- ctx->rings = NULL;
+- return -EOVERFLOW;
+- }
+-
+- ctx->sq_sqes = io_mem_alloc(size);
+- if (!ctx->sq_sqes) {
+- io_mem_free(ctx->rings);
+- ctx->rings = NULL;
+- return -ENOMEM;
+- }
+-
+- return 0;
+-}
+-
+-static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
+-{
+- int ret, fd;
+-
+- fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
+- if (fd < 0)
+- return fd;
+-
+- ret = io_uring_add_tctx_node(ctx);
+- if (ret) {
+- put_unused_fd(fd);
+- return ret;
+- }
+- fd_install(fd, file);
+- return fd;
+-}
+-
+-/*
+- * Allocate an anonymous fd; this is what constitutes the application-
+- * visible backing of an io_uring instance. The application mmaps this
+- * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
+- * we have to tie this fd to a socket for file garbage collection purposes.
+- */
+-static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+-{
+- struct file *file;
+-#if defined(CONFIG_UNIX)
+- int ret;
+-
+- ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
+- &ctx->ring_sock);
+- if (ret)
+- return ERR_PTR(ret);
+-#endif
+-
+- file = anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
+- O_RDWR | O_CLOEXEC, NULL);
+-#if defined(CONFIG_UNIX)
+- if (IS_ERR(file)) {
+- sock_release(ctx->ring_sock);
+- ctx->ring_sock = NULL;
+- } else {
+- ctx->ring_sock->file = file;
+- }
+-#endif
+- return file;
+-}
+-
+-static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
+- struct io_uring_params __user *params)
+-{
+- struct io_ring_ctx *ctx;
+- struct file *file;
+- int ret;
+-
+- if (!entries)
+- return -EINVAL;
+- if (entries > IORING_MAX_ENTRIES) {
+- if (!(p->flags & IORING_SETUP_CLAMP))
+- return -EINVAL;
+- entries = IORING_MAX_ENTRIES;
+- }
+-
+- /*
+- * Use twice as many entries for the CQ ring. It's possible for the
+- * application to drive a higher depth than the size of the SQ ring,
+- * since the sqes are only used at submission time. This allows for
+- * some flexibility in overcommitting a bit. If the application has
+- * set IORING_SETUP_CQSIZE, it will have passed in the desired number
+- * of CQ ring entries manually.
+- */
+- p->sq_entries = roundup_pow_of_two(entries);
+- if (p->flags & IORING_SETUP_CQSIZE) {
+- /*
+- * If IORING_SETUP_CQSIZE is set, we do the same roundup
+- * to a power-of-two, if it isn't already. We do NOT impose
+- * any cq vs sq ring sizing.
+- */
+- if (!p->cq_entries)
+- return -EINVAL;
+- if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
+- if (!(p->flags & IORING_SETUP_CLAMP))
+- return -EINVAL;
+- p->cq_entries = IORING_MAX_CQ_ENTRIES;
+- }
+- p->cq_entries = roundup_pow_of_two(p->cq_entries);
+- if (p->cq_entries < p->sq_entries)
+- return -EINVAL;
+- } else {
+- p->cq_entries = 2 * p->sq_entries;
+- }
+-
+- ctx = io_ring_ctx_alloc(p);
+- if (!ctx)
+- return -ENOMEM;
+-
+- /*
+- * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
+-	 * space applications don't need to poll for io completion events
+-	 * themselves; they can rely on io_sq_thread to do the polling
+-	 * work, which can reduce cpu usage and uring_lock contention.
+- */
+- if (ctx->flags & IORING_SETUP_IOPOLL &&
+- !(ctx->flags & IORING_SETUP_SQPOLL))
+- ctx->syscall_iopoll = 1;
+-
+- ctx->compat = in_compat_syscall();
+- if (!capable(CAP_IPC_LOCK))
+- ctx->user = get_uid(current_user());
+-
+- /*
+- * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
+- * COOP_TASKRUN is set, then IPIs are never needed by the app.
+- */
+- ret = -EINVAL;
+- if (ctx->flags & IORING_SETUP_SQPOLL) {
+- /* IPI related flags don't make sense with SQPOLL */
+- if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
+- IORING_SETUP_TASKRUN_FLAG))
+- goto err;
+- ctx->notify_method = TWA_SIGNAL_NO_IPI;
+- } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
+- ctx->notify_method = TWA_SIGNAL_NO_IPI;
+- } else {
+- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+- goto err;
+- ctx->notify_method = TWA_SIGNAL;
+- }
+-
+- /*
+- * This is just grabbed for accounting purposes. When a process exits,
+- * the mm is exited and dropped before the files, hence we need to hang
+- * on to this mm purely for the purposes of being able to unaccount
+- * memory (locked/pinned vm). It's not used for anything else.
+- */
+- mmgrab(current->mm);
+- ctx->mm_account = current->mm;
+-
+- ret = io_allocate_scq_urings(ctx, p);
+- if (ret)
+- goto err;
+-
+- ret = io_sq_offload_create(ctx, p);
+- if (ret)
+- goto err;
+- /* always set a rsrc node */
+- ret = io_rsrc_node_switch_start(ctx);
+- if (ret)
+- goto err;
+- io_rsrc_node_switch(ctx, NULL);
+-
+- memset(&p->sq_off, 0, sizeof(p->sq_off));
+- p->sq_off.head = offsetof(struct io_rings, sq.head);
+- p->sq_off.tail = offsetof(struct io_rings, sq.tail);
+- p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
+- p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
+- p->sq_off.flags = offsetof(struct io_rings, sq_flags);
+- p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
+- p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
+-
+- memset(&p->cq_off, 0, sizeof(p->cq_off));
+- p->cq_off.head = offsetof(struct io_rings, cq.head);
+- p->cq_off.tail = offsetof(struct io_rings, cq.tail);
+- p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
+- p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
+- p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
+- p->cq_off.cqes = offsetof(struct io_rings, cqes);
+- p->cq_off.flags = offsetof(struct io_rings, cq_flags);
+-
+- p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
+- IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
+- IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
+- IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
+- IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
+- IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
+- IORING_FEAT_LINKED_FILE;
+-
+- if (copy_to_user(params, p, sizeof(*p))) {
+- ret = -EFAULT;
+- goto err;
+- }
+-
+- file = io_uring_get_file(ctx);
+- if (IS_ERR(file)) {
+- ret = PTR_ERR(file);
+- goto err;
+- }
+-
+- /*
+- * Install ring fd as the very last thing, so we don't risk someone
+- * having closed it before we finish setup
+- */
+- ret = io_uring_install_fd(ctx, file);
+- if (ret < 0) {
+- /* fput will clean it up */
+- fput(file);
+- return ret;
+- }
+-
+- trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
+- return ret;
+-err:
+- io_ring_ctx_wait_and_kill(ctx);
+- return ret;
+-}
+-
+-/*
+- * Sets up an aio uring context, and returns the fd. Applications asks for a
+- * ring size, we return the actual sq/cq ring sizes (among other things) in the
+- * params structure passed in.
+- */
+-static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
+-{
+- struct io_uring_params p;
+- int i;
+-
+- if (copy_from_user(&p, params, sizeof(p)))
+- return -EFAULT;
+- for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
+- if (p.resv[i])
+- return -EINVAL;
+- }
+-
+- if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
+- IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
+- IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
+- IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
+- IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
+- IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
+- return -EINVAL;
+-
+- return io_uring_create(entries, &p, params);
+-}
+-
+-SYSCALL_DEFINE2(io_uring_setup, u32, entries,
+- struct io_uring_params __user *, params)
+-{
+- return io_uring_setup(entries, params);
+-}
+-
+-static __cold int io_probe(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned nr_args)
+-{
+- struct io_uring_probe *p;
+- size_t size;
+- int i, ret;
+-
+- size = struct_size(p, ops, nr_args);
+- if (size == SIZE_MAX)
+- return -EOVERFLOW;
+- p = kzalloc(size, GFP_KERNEL);
+- if (!p)
+- return -ENOMEM;
+-
+- ret = -EFAULT;
+- if (copy_from_user(p, arg, size))
+- goto out;
+- ret = -EINVAL;
+- if (memchr_inv(p, 0, size))
+- goto out;
+-
+- p->last_op = IORING_OP_LAST - 1;
+- if (nr_args > IORING_OP_LAST)
+- nr_args = IORING_OP_LAST;
+-
+- for (i = 0; i < nr_args; i++) {
+- p->ops[i].op = i;
+- if (!io_op_defs[i].not_supported)
+- p->ops[i].flags = IO_URING_OP_SUPPORTED;
+- }
+- p->ops_len = i;
+-
+- ret = 0;
+- if (copy_to_user(arg, p, size))
+- ret = -EFAULT;
+-out:
+- kfree(p);
+- return ret;
+-}
+-
+-static int io_register_personality(struct io_ring_ctx *ctx)
+-{
+- const struct cred *creds;
+- u32 id;
+- int ret;
+-
+- creds = get_current_cred();
+-
+- ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
+- XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
+- if (ret < 0) {
+- put_cred(creds);
+- return ret;
+- }
+- return id;
+-}
+-
+-static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
+- void __user *arg, unsigned int nr_args)
+-{
+- struct io_uring_restriction *res;
+- size_t size;
+- int i, ret;
+-
+- /* Restrictions allowed only if rings started disabled */
+- if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+- return -EBADFD;
+-
+- /* We allow only a single restrictions registration */
+- if (ctx->restrictions.registered)
+- return -EBUSY;
+-
+- if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
+- return -EINVAL;
+-
+- size = array_size(nr_args, sizeof(*res));
+- if (size == SIZE_MAX)
+- return -EOVERFLOW;
+-
+- res = memdup_user(arg, size);
+- if (IS_ERR(res))
+- return PTR_ERR(res);
+-
+- ret = 0;
+-
+- for (i = 0; i < nr_args; i++) {
+- switch (res[i].opcode) {
+- case IORING_RESTRICTION_REGISTER_OP:
+- if (res[i].register_op >= IORING_REGISTER_LAST) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+- __set_bit(res[i].register_op,
+- ctx->restrictions.register_op);
+- break;
+- case IORING_RESTRICTION_SQE_OP:
+- if (res[i].sqe_op >= IORING_OP_LAST) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+- __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+- break;
+- case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
+- ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
+- break;
+- case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
+- ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
+- break;
+- default:
+- ret = -EINVAL;
+- goto out;
+- }
+- }
+-
+-out:
+- /* Reset all restrictions if an error happened */
+- if (ret != 0)
+- memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
+- else
+- ctx->restrictions.registered = true;
+-
+- kfree(res);
+- return ret;
+-}
+-
+-static int io_register_enable_rings(struct io_ring_ctx *ctx)
+-{
+- if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+- return -EBADFD;
+-
+- if (ctx->restrictions.registered)
+- ctx->restricted = 1;
+-
+- ctx->flags &= ~IORING_SETUP_R_DISABLED;
+- if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
+- wake_up(&ctx->sq_data->wait);
+- return 0;
+-}
+-
+-static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
+- struct io_uring_rsrc_update2 *up,
+- unsigned nr_args)
+-{
+- __u32 tmp;
+- int err;
+-
+- if (check_add_overflow(up->offset, nr_args, &tmp))
+- return -EOVERFLOW;
+- err = io_rsrc_node_switch_start(ctx);
+- if (err)
+- return err;
+-
+- switch (type) {
+- case IORING_RSRC_FILE:
+- return __io_sqe_files_update(ctx, up, nr_args);
+- case IORING_RSRC_BUFFER:
+- return __io_sqe_buffers_update(ctx, up, nr_args);
+- }
+- return -EINVAL;
+-}
+-
+-static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned nr_args)
+-{
+- struct io_uring_rsrc_update2 up;
+-
+- if (!nr_args)
+- return -EINVAL;
+- memset(&up, 0, sizeof(up));
+- if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
+- return -EFAULT;
+- if (up.resv || up.resv2)
+- return -EINVAL;
+- return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
+-}
+-
+-static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned size, unsigned type)
+-{
+- struct io_uring_rsrc_update2 up;
+-
+- if (size != sizeof(up))
+- return -EINVAL;
+- if (copy_from_user(&up, arg, sizeof(up)))
+- return -EFAULT;
+- if (!up.nr || up.resv || up.resv2)
+- return -EINVAL;
+- return __io_register_rsrc_update(ctx, type, &up, up.nr);
+-}
+-
+-static __cold int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
+- unsigned int size, unsigned int type)
+-{
+- struct io_uring_rsrc_register rr;
+-
+- /* keep it extendible */
+- if (size != sizeof(rr))
+- return -EINVAL;
+-
+- memset(&rr, 0, sizeof(rr));
+- if (copy_from_user(&rr, arg, size))
+- return -EFAULT;
+- if (!rr.nr || rr.resv2)
+- return -EINVAL;
+- if (rr.flags & ~IORING_RSRC_REGISTER_SPARSE)
+- return -EINVAL;
+-
+- switch (type) {
+- case IORING_RSRC_FILE:
+- if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
+- break;
+- return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),
+- rr.nr, u64_to_user_ptr(rr.tags));
+- case IORING_RSRC_BUFFER:
+- if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
+- break;
+- return io_sqe_buffers_register(ctx, u64_to_user_ptr(rr.data),
+- rr.nr, u64_to_user_ptr(rr.tags));
+- }
+- return -EINVAL;
+-}
+-
+-static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
+- void __user *arg, unsigned len)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+- cpumask_var_t new_mask;
+- int ret;
+-
+- if (!tctx || !tctx->io_wq)
+- return -EINVAL;
+-
+- if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+- return -ENOMEM;
+-
+- cpumask_clear(new_mask);
+- if (len > cpumask_size())
+- len = cpumask_size();
+-
+- if (in_compat_syscall()) {
+- ret = compat_get_bitmap(cpumask_bits(new_mask),
+- (const compat_ulong_t __user *)arg,
+- len * 8 /* CHAR_BIT */);
+- } else {
+- ret = copy_from_user(new_mask, arg, len);
+- }
+-
+- if (ret) {
+- free_cpumask_var(new_mask);
+- return -EFAULT;
+- }
+-
+- ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
+- free_cpumask_var(new_mask);
+- return ret;
+-}
+-
+-static __cold int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
+-{
+- struct io_uring_task *tctx = current->io_uring;
+-
+- if (!tctx || !tctx->io_wq)
+- return -EINVAL;
+-
+- return io_wq_cpu_affinity(tctx->io_wq, NULL);
+-}
+-
+-static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+- void __user *arg)
+- __must_hold(&ctx->uring_lock)
+-{
+- struct io_tctx_node *node;
+- struct io_uring_task *tctx = NULL;
+- struct io_sq_data *sqd = NULL;
+- __u32 new_count[2];
+- int i, ret;
+-
+- if (copy_from_user(new_count, arg, sizeof(new_count)))
+- return -EFAULT;
+- for (i = 0; i < ARRAY_SIZE(new_count); i++)
+- if (new_count[i] > INT_MAX)
+- return -EINVAL;
+-
+- if (ctx->flags & IORING_SETUP_SQPOLL) {
+- sqd = ctx->sq_data;
+- if (sqd) {
+- /*
+- * Observe the correct sqd->lock -> ctx->uring_lock
+- * ordering. Fine to drop uring_lock here, we hold
+- * a ref to the ctx.
+- */
+- refcount_inc(&sqd->refs);
+- mutex_unlock(&ctx->uring_lock);
+- mutex_lock(&sqd->lock);
+- mutex_lock(&ctx->uring_lock);
+- if (sqd->thread)
+- tctx = sqd->thread->io_uring;
+- }
+- } else {
+- tctx = current->io_uring;
+- }
+-
+- BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
+-
+- for (i = 0; i < ARRAY_SIZE(new_count); i++)
+- if (new_count[i])
+- ctx->iowq_limits[i] = new_count[i];
+- ctx->iowq_limits_set = true;
+-
+- if (tctx && tctx->io_wq) {
+- ret = io_wq_max_workers(tctx->io_wq, new_count);
+- if (ret)
+- goto err;
+- } else {
+- memset(new_count, 0, sizeof(new_count));
+- }
+-
+- if (sqd) {
+- mutex_unlock(&sqd->lock);
+- io_put_sq_data(sqd);
+- }
+-
+- if (copy_to_user(arg, new_count, sizeof(new_count)))
+- return -EFAULT;
+-
+- /* that's it for SQPOLL, only the SQPOLL task creates requests */
+- if (sqd)
+- return 0;
+-
+- /* now propagate the restriction to all registered users */
+- list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
+- struct io_uring_task *tctx = node->task->io_uring;
+-
+- if (WARN_ON_ONCE(!tctx->io_wq))
+- continue;
+-
+- for (i = 0; i < ARRAY_SIZE(new_count); i++)
+- new_count[i] = ctx->iowq_limits[i];
+- /* ignore errors, it always returns zero anyway */
+- (void)io_wq_max_workers(tctx->io_wq, new_count);
+- }
+- return 0;
+-err:
+- if (sqd) {
+- mutex_unlock(&sqd->lock);
+- io_put_sq_data(sqd);
+- }
+- return ret;
+-}
+-
+-static int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+-{
+- struct io_uring_buf_ring *br;
+- struct io_uring_buf_reg reg;
+- struct io_buffer_list *bl, *free_bl = NULL;
+- struct page **pages;
+- int nr_pages;
+-
+-	if (copy_from_user(&reg, arg, sizeof(reg)))
+- return -EFAULT;
+-
+- if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
+- return -EINVAL;
+- if (!reg.ring_addr)
+- return -EFAULT;
+- if (reg.ring_addr & ~PAGE_MASK)
+- return -EINVAL;
+- if (!is_power_of_2(reg.ring_entries))
+- return -EINVAL;
+-
+- /* cannot disambiguate full vs empty due to head/tail size */
+- if (reg.ring_entries >= 65536)
+- return -EINVAL;
+-
+- if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {
+- int ret = io_init_bl_list(ctx);
+- if (ret)
+- return ret;
+- }
+-
+- bl = io_buffer_get_list(ctx, reg.bgid);
+- if (bl) {
+- /* if mapped buffer ring OR classic exists, don't allow */
+- if (bl->buf_nr_pages || !list_empty(&bl->buf_list))
+- return -EEXIST;
+- } else {
+- free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
+- if (!bl)
+- return -ENOMEM;
+- }
+-
+- pages = io_pin_pages(reg.ring_addr,
+- struct_size(br, bufs, reg.ring_entries),
+- &nr_pages);
+- if (IS_ERR(pages)) {
+- kfree(free_bl);
+- return PTR_ERR(pages);
+- }
+-
+- br = page_address(pages[0]);
+- bl->buf_pages = pages;
+- bl->buf_nr_pages = nr_pages;
+- bl->nr_entries = reg.ring_entries;
+- bl->buf_ring = br;
+- bl->mask = reg.ring_entries - 1;
+- io_buffer_add_list(ctx, bl, reg.bgid);
+- return 0;
+-}
+-
+-static int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+-{
+- struct io_uring_buf_reg reg;
+- struct io_buffer_list *bl;
+-
+-	if (copy_from_user(&reg, arg, sizeof(reg)))
+- return -EFAULT;
+- if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
+- return -EINVAL;
+-
+- bl = io_buffer_get_list(ctx, reg.bgid);
+- if (!bl)
+- return -ENOENT;
+- if (!bl->buf_nr_pages)
+- return -EINVAL;
+-
+- __io_remove_buffers(ctx, bl, -1U);
+- if (bl->bgid >= BGID_ARRAY) {
+- xa_erase(&ctx->io_bl_xa, bl->bgid);
+- kfree(bl);
+- }
+- return 0;
+-}
+-
+-static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
+- void __user *arg, unsigned nr_args)
+- __releases(ctx->uring_lock)
+- __acquires(ctx->uring_lock)
+-{
+- int ret;
+-
+- /*
+- * We're inside the ring mutex, if the ref is already dying, then
+- * someone else killed the ctx or is already going through
+- * io_uring_register().
+- */
+- if (percpu_ref_is_dying(&ctx->refs))
+- return -ENXIO;
+-
+- if (ctx->restricted) {
+- if (opcode >= IORING_REGISTER_LAST)
+- return -EINVAL;
+- opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
+- if (!test_bit(opcode, ctx->restrictions.register_op))
+- return -EACCES;
+- }
+-
+- switch (opcode) {
+- case IORING_REGISTER_BUFFERS:
+- ret = -EFAULT;
+- if (!arg)
+- break;
+- ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
+- break;
+- case IORING_UNREGISTER_BUFFERS:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_sqe_buffers_unregister(ctx);
+- break;
+- case IORING_REGISTER_FILES:
+- ret = -EFAULT;
+- if (!arg)
+- break;
+- ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
+- break;
+- case IORING_UNREGISTER_FILES:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_sqe_files_unregister(ctx);
+- break;
+- case IORING_REGISTER_FILES_UPDATE:
+- ret = io_register_files_update(ctx, arg, nr_args);
+- break;
+- case IORING_REGISTER_EVENTFD:
+- ret = -EINVAL;
+- if (nr_args != 1)
+- break;
+- ret = io_eventfd_register(ctx, arg, 0);
+- break;
+- case IORING_REGISTER_EVENTFD_ASYNC:
+- ret = -EINVAL;
+- if (nr_args != 1)
+- break;
+- ret = io_eventfd_register(ctx, arg, 1);
+- break;
+- case IORING_UNREGISTER_EVENTFD:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_eventfd_unregister(ctx);
+- break;
+- case IORING_REGISTER_PROBE:
+- ret = -EINVAL;
+- if (!arg || nr_args > 256)
+- break;
+- ret = io_probe(ctx, arg, nr_args);
+- break;
+- case IORING_REGISTER_PERSONALITY:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_register_personality(ctx);
+- break;
+- case IORING_UNREGISTER_PERSONALITY:
+- ret = -EINVAL;
+- if (arg)
+- break;
+- ret = io_unregister_personality(ctx, nr_args);
+- break;
+- case IORING_REGISTER_ENABLE_RINGS:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_register_enable_rings(ctx);
+- break;
+- case IORING_REGISTER_RESTRICTIONS:
+- ret = io_register_restrictions(ctx, arg, nr_args);
+- break;
+- case IORING_REGISTER_FILES2:
+- ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
+- break;
+- case IORING_REGISTER_FILES_UPDATE2:
+- ret = io_register_rsrc_update(ctx, arg, nr_args,
+- IORING_RSRC_FILE);
+- break;
+- case IORING_REGISTER_BUFFERS2:
+- ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
+- break;
+- case IORING_REGISTER_BUFFERS_UPDATE:
+- ret = io_register_rsrc_update(ctx, arg, nr_args,
+- IORING_RSRC_BUFFER);
+- break;
+- case IORING_REGISTER_IOWQ_AFF:
+- ret = -EINVAL;
+- if (!arg || !nr_args)
+- break;
+- ret = io_register_iowq_aff(ctx, arg, nr_args);
+- break;
+- case IORING_UNREGISTER_IOWQ_AFF:
+- ret = -EINVAL;
+- if (arg || nr_args)
+- break;
+- ret = io_unregister_iowq_aff(ctx);
+- break;
+- case IORING_REGISTER_IOWQ_MAX_WORKERS:
+- ret = -EINVAL;
+- if (!arg || nr_args != 2)
+- break;
+- ret = io_register_iowq_max_workers(ctx, arg);
+- break;
+- case IORING_REGISTER_RING_FDS:
+- ret = io_ringfd_register(ctx, arg, nr_args);
+- break;
+- case IORING_UNREGISTER_RING_FDS:
+- ret = io_ringfd_unregister(ctx, arg, nr_args);
+- break;
+- case IORING_REGISTER_PBUF_RING:
+- ret = -EINVAL;
+- if (!arg || nr_args != 1)
+- break;
+- ret = io_register_pbuf_ring(ctx, arg);
+- break;
+- case IORING_UNREGISTER_PBUF_RING:
+- ret = -EINVAL;
+- if (!arg || nr_args != 1)
+- break;
+- ret = io_unregister_pbuf_ring(ctx, arg);
+- break;
+- default:
+- ret = -EINVAL;
+- break;
+- }
+-
+- return ret;
+-}
+-
+-SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
+- void __user *, arg, unsigned int, nr_args)
+-{
+- struct io_ring_ctx *ctx;
+- long ret = -EBADF;
+- struct fd f;
+-
+- f = fdget(fd);
+- if (!f.file)
+- return -EBADF;
+-
+- ret = -EOPNOTSUPP;
+- if (f.file->f_op != &io_uring_fops)
+- goto out_fput;
+-
+- ctx = f.file->private_data;
+-
+- io_run_task_work();
+-
+- mutex_lock(&ctx->uring_lock);
+- ret = __io_uring_register(ctx, opcode, arg, nr_args);
+- mutex_unlock(&ctx->uring_lock);
+- trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
+-out_fput:
+- fdput(f);
+- return ret;
+-}
+-
+-static int __init io_uring_init(void)
+-{
+-#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
+- BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
+- BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
+-} while (0)
+-
+-#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
+- __BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
+- BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
+- BUILD_BUG_SQE_ELEM(0, __u8, opcode);
+- BUILD_BUG_SQE_ELEM(1, __u8, flags);
+- BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
+- BUILD_BUG_SQE_ELEM(4, __s32, fd);
+- BUILD_BUG_SQE_ELEM(8, __u64, off);
+- BUILD_BUG_SQE_ELEM(8, __u64, addr2);
+- BUILD_BUG_SQE_ELEM(16, __u64, addr);
+- BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
+- BUILD_BUG_SQE_ELEM(24, __u32, len);
+- BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
+- BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
+- BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
+- BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
+- BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
+- BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
+- BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
+- BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
+- BUILD_BUG_SQE_ELEM(32, __u64, user_data);
+- BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
+- BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
+- BUILD_BUG_SQE_ELEM(42, __u16, personality);
+- BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
+- BUILD_BUG_SQE_ELEM(44, __u32, file_index);
+- BUILD_BUG_SQE_ELEM(48, __u64, addr3);
+-
+- BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
+- sizeof(struct io_uring_rsrc_update));
+- BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
+- sizeof(struct io_uring_rsrc_update2));
+-
+- /* ->buf_index is u16 */
+- BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
+- BUILD_BUG_ON(BGID_ARRAY * sizeof(struct io_buffer_list) > PAGE_SIZE);
+- BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 0);
+- BUILD_BUG_ON(offsetof(struct io_uring_buf, resv) !=
+- offsetof(struct io_uring_buf_ring, tail));
+-
+- /* should fit into one byte */
+- BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
+- BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
+- BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);
+-
+- BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
+- BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));
+-
+- BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));
+-
+- BUILD_BUG_ON(sizeof(struct io_uring_cmd) > 64);
+-
+- req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
+- SLAB_ACCOUNT);
+- return 0;
+-};
+-__initcall(io_uring_init);
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index eb315e81f1a6b..af1a9191368cb 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -553,13 +553,13 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ */
+ jbd2_journal_switch_revoke_table(journal);
+
++ write_lock(&journal->j_state_lock);
+ /*
+ * Reserved credits cannot be claimed anymore, free them
+ */
+ atomic_sub(atomic_read(&journal->j_reserved_credits),
+ &commit_transaction->t_outstanding_credits);
+
+- write_lock(&journal->j_state_lock);
+ trace_jbd2_commit_flushing(journal, commit_transaction);
+ stats.run.rs_flushing = jiffies;
+ stats.run.rs_locked = jbd2_time_diff(stats.run.rs_locked,
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index e9c308ae475fd..e0377f558eb14 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1486,8 +1486,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ struct journal_head *jh;
+ int ret = 0;
+
+- if (is_handle_aborted(handle))
+- return -EROFS;
+ if (!buffer_jbd(bh))
+ return -EUCLEAN;
+
+@@ -1534,6 +1532,18 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ journal = transaction->t_journal;
+ spin_lock(&jh->b_state_lock);
+
++ if (is_handle_aborted(handle)) {
++ /*
++ * Check journal aborting with @jh->b_state_lock locked,
++ * since 'jh->b_transaction' could be replaced with
++ * 'jh->b_next_transaction' during old transaction
++ * committing if journal aborted, which may fail
++ * assertion on 'jh->b_frozen_data == NULL'.
++ */
++ ret = -EROFS;
++ goto out_unlock_bh;
++ }
++
+ if (jh->b_modified == 0) {
+ /*
+ * This buffer's got modified and becoming part
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 6eca72cfa1f28..1cc88ba6de907 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -1343,14 +1343,17 @@ static void __kernfs_remove(struct kernfs_node *kn)
+ {
+ struct kernfs_node *pos;
+
++ /* Short-circuit if non-root @kn has already finished removal. */
++ if (!kn)
++ return;
++
+ lockdep_assert_held_write(&kernfs_root(kn)->kernfs_rwsem);
+
+ /*
+- * Short-circuit if non-root @kn has already finished removal.
+ * This is for kernfs_remove_self() which plays with active ref
+ * after removal.
+ */
+- if (!kn || (kn->parent && RB_EMPTY_NODE(&kn->rb)))
++ if (kn->parent && RB_EMPTY_NODE(&kn->rb))
+ return;
+
+ pr_debug("kernfs %s: removing\n", kn->name);
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index f8f456377a51d..6e25ace365684 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -90,11 +90,6 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ *off = 0;
+ *len = 0;
+
+- /* error reqeusts do not have data area */
+- if (hdr->Status && hdr->Status != STATUS_MORE_PROCESSING_REQUIRED &&
+- (((struct smb2_err_rsp *)hdr)->StructureSize) == SMB2_ERROR_STRUCTURE_SIZE2_LE)
+- return ret;
+-
+ /*
+ * Following commands have data areas so we have to get the location
+ * of the data buffer offset and data buffer length for the particular
+@@ -136,8 +131,11 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ *len = le16_to_cpu(((struct smb2_read_req *)hdr)->ReadChannelInfoLength);
+ break;
+ case SMB2_WRITE:
+- if (((struct smb2_write_req *)hdr)->DataOffset) {
+- *off = le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset);
++ if (((struct smb2_write_req *)hdr)->DataOffset ||
++ ((struct smb2_write_req *)hdr)->Length) {
++ *off = max_t(unsigned int,
++ le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset),
++ offsetof(struct smb2_write_req, Buffer));
+ *len = le32_to_cpu(((struct smb2_write_req *)hdr)->Length);
+ break;
+ }
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 353f047e783ca..a9c33d15ca1fb 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -535,9 +535,10 @@ int smb2_allocate_rsp_buf(struct ksmbd_work *work)
+ struct smb2_query_info_req *req;
+
+ req = smb2_get_msg(work->request_buf);
+- if (req->InfoType == SMB2_O_INFO_FILE &&
+- (req->FileInfoClass == FILE_FULL_EA_INFORMATION ||
+- req->FileInfoClass == FILE_ALL_INFORMATION))
++ if ((req->InfoType == SMB2_O_INFO_FILE &&
++ (req->FileInfoClass == FILE_FULL_EA_INFORMATION ||
++ req->FileInfoClass == FILE_ALL_INFORMATION)) ||
++ req->InfoType == SMB2_O_INFO_SECURITY)
+ sz = large_sz;
+ }
+
+@@ -1139,12 +1140,16 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ status);
+ rsp->hdr.Status = status;
+ rc = -EINVAL;
++ kfree(conn->preauth_info);
++ conn->preauth_info = NULL;
+ goto err_out;
+ }
+
+ rc = init_smb3_11_server(conn);
+ if (rc < 0) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
++ kfree(conn->preauth_info);
++ conn->preauth_info = NULL;
+ goto err_out;
+ }
+
+@@ -2039,6 +2044,7 @@ int smb2_tree_disconnect(struct ksmbd_work *work)
+
+ ksmbd_close_tree_conn_fds(work);
+ ksmbd_tree_conn_disconnect(sess, tcon);
++ work->tcon = NULL;
+ return 0;
+ }
+
+@@ -2969,7 +2975,7 @@ int smb2_open(struct ksmbd_work *work)
+ goto err_out;
+
+ rc = build_sec_desc(user_ns,
+- pntsd, NULL,
++ pntsd, NULL, 0,
+ OWNER_SECINFO |
+ GROUP_SECINFO |
+ DACL_SECINFO,
+@@ -3814,6 +3820,15 @@ static int verify_info_level(int info_level)
+ return 0;
+ }
+
++static int smb2_resp_buf_len(struct ksmbd_work *work, unsigned short hdr2_len)
++{
++ int free_len;
++
++ free_len = (int)(work->response_sz -
++ (get_rfc1002_len(work->response_buf) + 4)) - hdr2_len;
++ return free_len;
++}
++
+ static int smb2_calc_max_out_buf_len(struct ksmbd_work *work,
+ unsigned short hdr2_len,
+ unsigned int out_buf_len)
+@@ -3823,9 +3838,7 @@ static int smb2_calc_max_out_buf_len(struct ksmbd_work *work,
+ if (out_buf_len > work->conn->vals->max_trans_size)
+ return -EINVAL;
+
+- free_len = (int)(work->response_sz -
+- (get_rfc1002_len(work->response_buf) + 4)) -
+- hdr2_len;
++ free_len = smb2_resp_buf_len(work, hdr2_len);
+ if (free_len < 0)
+ return -EINVAL;
+
+@@ -5088,10 +5101,10 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
+ struct smb_ntsd *pntsd = (struct smb_ntsd *)rsp->Buffer, *ppntsd = NULL;
+ struct smb_fattr fattr = {{0}};
+ struct inode *inode;
+- __u32 secdesclen;
++ __u32 secdesclen = 0;
+ unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID;
+ int addition_info = le32_to_cpu(req->AdditionalInformation);
+- int rc;
++ int rc = 0, ppntsd_size = 0;
+
+ if (addition_info & ~(OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO |
+ PROTECTED_DACL_SECINFO |
+@@ -5137,11 +5150,14 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
+
+ if (test_share_config_flag(work->tcon->share_conf,
+ KSMBD_SHARE_FLAG_ACL_XATTR))
+- ksmbd_vfs_get_sd_xattr(work->conn, user_ns,
+- fp->filp->f_path.dentry, &ppntsd);
+-
+- rc = build_sec_desc(user_ns, pntsd, ppntsd, addition_info,
+- &secdesclen, &fattr);
++ ppntsd_size = ksmbd_vfs_get_sd_xattr(work->conn, user_ns,
++ fp->filp->f_path.dentry,
++ &ppntsd);
++
++ /* Check if sd buffer size exceeds response buffer size */
++ if (smb2_resp_buf_len(work, 8) > ppntsd_size)
++ rc = build_sec_desc(user_ns, pntsd, ppntsd, ppntsd_size,
++ addition_info, &secdesclen, &fattr);
+ posix_acl_release(fattr.cf_acls);
+ posix_acl_release(fattr.cf_dacls);
+ kfree(ppntsd);
+@@ -6495,14 +6511,12 @@ int smb2_write(struct ksmbd_work *work)
+ writethrough = true;
+
+ if (is_rdma_channel == false) {
+- if ((u64)le16_to_cpu(req->DataOffset) + length >
+- get_rfc1002_len(work->request_buf)) {
+- pr_err("invalid write data offset %u, smb_len %u\n",
+- le16_to_cpu(req->DataOffset),
+- get_rfc1002_len(work->request_buf));
++ if (le16_to_cpu(req->DataOffset) <
++ offsetof(struct smb2_write_req, Buffer)) {
+ err = -EINVAL;
+ goto out;
+ }
++
+ data_buf = (char *)(((char *)&req->hdr.ProtocolId) +
+ le16_to_cpu(req->DataOffset));
+
+diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c
+index 38f23bf981ac9..3781bca2c8fc4 100644
+--- a/fs/ksmbd/smbacl.c
++++ b/fs/ksmbd/smbacl.c
+@@ -690,6 +690,7 @@ posix_default_acl:
+ static void set_ntacl_dacl(struct user_namespace *user_ns,
+ struct smb_acl *pndacl,
+ struct smb_acl *nt_dacl,
++ unsigned int aces_size,
+ const struct smb_sid *pownersid,
+ const struct smb_sid *pgrpsid,
+ struct smb_fattr *fattr)
+@@ -703,9 +704,19 @@ static void set_ntacl_dacl(struct user_namespace *user_ns,
+ if (nt_num_aces) {
+ ntace = (struct smb_ace *)((char *)nt_dacl + sizeof(struct smb_acl));
+ for (i = 0; i < nt_num_aces; i++) {
+- memcpy((char *)pndace + size, ntace, le16_to_cpu(ntace->size));
+- size += le16_to_cpu(ntace->size);
+- ntace = (struct smb_ace *)((char *)ntace + le16_to_cpu(ntace->size));
++ unsigned short nt_ace_size;
++
++ if (offsetof(struct smb_ace, access_req) > aces_size)
++ break;
++
++ nt_ace_size = le16_to_cpu(ntace->size);
++ if (nt_ace_size > aces_size)
++ break;
++
++ memcpy((char *)pndace + size, ntace, nt_ace_size);
++ size += nt_ace_size;
++ aces_size -= nt_ace_size;
++ ntace = (struct smb_ace *)((char *)ntace + nt_ace_size);
+ num_aces++;
+ }
+ }
+@@ -878,7 +889,7 @@ int parse_sec_desc(struct user_namespace *user_ns, struct smb_ntsd *pntsd,
+ /* Convert permission bits from mode to equivalent CIFS ACL */
+ int build_sec_desc(struct user_namespace *user_ns,
+ struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd,
+- int addition_info, __u32 *secdesclen,
++ int ppntsd_size, int addition_info, __u32 *secdesclen,
+ struct smb_fattr *fattr)
+ {
+ int rc = 0;
+@@ -938,15 +949,25 @@ int build_sec_desc(struct user_namespace *user_ns,
+
+ if (!ppntsd) {
+ set_mode_dacl(user_ns, dacl_ptr, fattr);
+- } else if (!ppntsd->dacloffset) {
+- goto out;
+ } else {
+ struct smb_acl *ppdacl_ptr;
++ unsigned int dacl_offset = le32_to_cpu(ppntsd->dacloffset);
++ int ppdacl_size, ntacl_size = ppntsd_size - dacl_offset;
++
++ if (!dacl_offset ||
++ (dacl_offset + sizeof(struct smb_acl) > ppntsd_size))
++ goto out;
++
++ ppdacl_ptr = (struct smb_acl *)((char *)ppntsd + dacl_offset);
++ ppdacl_size = le16_to_cpu(ppdacl_ptr->size);
++ if (ppdacl_size > ntacl_size ||
++ ppdacl_size < sizeof(struct smb_acl))
++ goto out;
+
+- ppdacl_ptr = (struct smb_acl *)((char *)ppntsd +
+- le32_to_cpu(ppntsd->dacloffset));
+ set_ntacl_dacl(user_ns, dacl_ptr, ppdacl_ptr,
+- nowner_sid_ptr, ngroup_sid_ptr, fattr);
++ ntacl_size - sizeof(struct smb_acl),
++ nowner_sid_ptr, ngroup_sid_ptr,
++ fattr);
+ }
+ pntsd->dacloffset = cpu_to_le32(offset);
+ offset += le16_to_cpu(dacl_ptr->size);
+@@ -980,24 +1001,31 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ struct smb_sid owner_sid, group_sid;
+ struct dentry *parent = path->dentry->d_parent;
+ struct user_namespace *user_ns = mnt_user_ns(path->mnt);
+- int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0;
+- int rc = 0, num_aces, dacloffset, pntsd_type, acl_len;
++ int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0, pdacl_size;
++ int rc = 0, num_aces, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size;
+ char *aces_base;
+ bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode);
+
+- acl_len = ksmbd_vfs_get_sd_xattr(conn, user_ns,
+- parent, &parent_pntsd);
+- if (acl_len <= 0)
++ pntsd_size = ksmbd_vfs_get_sd_xattr(conn, user_ns,
++ parent, &parent_pntsd);
++ if (pntsd_size <= 0)
+ return -ENOENT;
+ dacloffset = le32_to_cpu(parent_pntsd->dacloffset);
+- if (!dacloffset) {
++ if (!dacloffset || (dacloffset + sizeof(struct smb_acl) > pntsd_size)) {
+ rc = -EINVAL;
+ goto free_parent_pntsd;
+ }
+
+ parent_pdacl = (struct smb_acl *)((char *)parent_pntsd + dacloffset);
++ acl_len = pntsd_size - dacloffset;
+ num_aces = le32_to_cpu(parent_pdacl->num_aces);
+ pntsd_type = le16_to_cpu(parent_pntsd->type);
++ pdacl_size = le16_to_cpu(parent_pdacl->size);
++
++ if (pdacl_size > acl_len || pdacl_size < sizeof(struct smb_acl)) {
++ rc = -EINVAL;
++ goto free_parent_pntsd;
++ }
+
+ aces_base = kmalloc(sizeof(struct smb_ace) * num_aces * 2, GFP_KERNEL);
+ if (!aces_base) {
+@@ -1008,11 +1036,23 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ aces = (struct smb_ace *)aces_base;
+ parent_aces = (struct smb_ace *)((char *)parent_pdacl +
+ sizeof(struct smb_acl));
++ aces_size = acl_len - sizeof(struct smb_acl);
+
+ if (pntsd_type & DACL_AUTO_INHERITED)
+ inherited_flags = INHERITED_ACE;
+
+ for (i = 0; i < num_aces; i++) {
++ int pace_size;
++
++ if (offsetof(struct smb_ace, access_req) > aces_size)
++ break;
++
++ pace_size = le16_to_cpu(parent_aces->size);
++ if (pace_size > aces_size)
++ break;
++
++ aces_size -= pace_size;
++
+ flags = parent_aces->flags;
+ if (!smb_inherit_flags(flags, is_dir))
+ goto pass;
+@@ -1057,8 +1097,7 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ aces = (struct smb_ace *)((char *)aces + le16_to_cpu(aces->size));
+ ace_cnt++;
+ pass:
+- parent_aces =
+- (struct smb_ace *)((char *)parent_aces + le16_to_cpu(parent_aces->size));
++ parent_aces = (struct smb_ace *)((char *)parent_aces + pace_size);
+ }
+
+ if (nt_size > 0) {
+@@ -1153,7 +1192,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ struct smb_ntsd *pntsd = NULL;
+ struct smb_acl *pdacl;
+ struct posix_acl *posix_acls;
+- int rc = 0, acl_size;
++ int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size, dacl_offset;
+ struct smb_sid sid;
+ int granted = le32_to_cpu(*pdaccess & ~FILE_MAXIMAL_ACCESS_LE);
+ struct smb_ace *ace;
+@@ -1162,37 +1201,33 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ struct smb_ace *others_ace = NULL;
+ struct posix_acl_entry *pa_entry;
+ unsigned int sid_type = SIDOWNER;
+- char *end_of_acl;
++ unsigned short ace_size;
+
+ ksmbd_debug(SMB, "check permission using windows acl\n");
+- acl_size = ksmbd_vfs_get_sd_xattr(conn, user_ns,
+- path->dentry, &pntsd);
+- if (acl_size <= 0 || !pntsd || !pntsd->dacloffset) {
+- kfree(pntsd);
+- return 0;
+- }
++ pntsd_size = ksmbd_vfs_get_sd_xattr(conn, user_ns,
++ path->dentry, &pntsd);
++ if (pntsd_size <= 0 || !pntsd)
++ goto err_out;
++
++ dacl_offset = le32_to_cpu(pntsd->dacloffset);
++ if (!dacl_offset ||
++ (dacl_offset + sizeof(struct smb_acl) > pntsd_size))
++ goto err_out;
+
+ pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset));
+- end_of_acl = ((char *)pntsd) + acl_size;
+- if (end_of_acl <= (char *)pdacl) {
+- kfree(pntsd);
+- return 0;
+- }
++ acl_size = pntsd_size - dacl_offset;
++ pdacl_size = le16_to_cpu(pdacl->size);
+
+- if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size) ||
+- le16_to_cpu(pdacl->size) < sizeof(struct smb_acl)) {
+- kfree(pntsd);
+- return 0;
+- }
++ if (pdacl_size > acl_size || pdacl_size < sizeof(struct smb_acl))
++ goto err_out;
+
+ if (!pdacl->num_aces) {
+- if (!(le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) &&
++ if (!(pdacl_size - sizeof(struct smb_acl)) &&
+ *pdaccess & ~(FILE_READ_CONTROL_LE | FILE_WRITE_DAC_LE)) {
+ rc = -EACCES;
+ goto err_out;
+ }
+- kfree(pntsd);
+- return 0;
++ goto err_out;
+ }
+
+ if (*pdaccess & FILE_MAXIMAL_ACCESS_LE) {
+@@ -1200,11 +1235,16 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ DELETE;
+
+ ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl));
++ aces_size = acl_size - sizeof(struct smb_acl);
+ for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) {
++ if (offsetof(struct smb_ace, access_req) > aces_size)
++ break;
++ ace_size = le16_to_cpu(ace->size);
++ if (ace_size > aces_size)
++ break;
++ aces_size -= ace_size;
+ granted |= le32_to_cpu(ace->access_req);
+ ace = (struct smb_ace *)((char *)ace + le16_to_cpu(ace->size));
+- if (end_of_acl < (char *)ace)
+- goto err_out;
+ }
+
+ if (!pdacl->num_aces)
+@@ -1216,7 +1256,15 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ id_to_sid(uid, sid_type, &sid);
+
+ ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl));
++ aces_size = acl_size - sizeof(struct smb_acl);
+ for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) {
++ if (offsetof(struct smb_ace, access_req) > aces_size)
++ break;
++ ace_size = le16_to_cpu(ace->size);
++ if (ace_size > aces_size)
++ break;
++ aces_size -= ace_size;
++
+ if (!compare_sids(&sid, &ace->sid) ||
+ !compare_sids(&sid_unix_NFS_mode, &ace->sid)) {
+ found = 1;
+@@ -1226,8 +1274,6 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ others_ace = ace;
+
+ ace = (struct smb_ace *)((char *)ace + le16_to_cpu(ace->size));
+- if (end_of_acl < (char *)ace)
+- goto err_out;
+ }
+
+ if (*pdaccess & FILE_MAXIMAL_ACCESS_LE && found) {
+diff --git a/fs/ksmbd/smbacl.h b/fs/ksmbd/smbacl.h
+index 811af33094291..fcb2c83f29928 100644
+--- a/fs/ksmbd/smbacl.h
++++ b/fs/ksmbd/smbacl.h
+@@ -193,7 +193,7 @@ struct posix_acl_state {
+ int parse_sec_desc(struct user_namespace *user_ns, struct smb_ntsd *pntsd,
+ int acl_len, struct smb_fattr *fattr);
+ int build_sec_desc(struct user_namespace *user_ns, struct smb_ntsd *pntsd,
+- struct smb_ntsd *ppntsd, int addition_info,
++ struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info,
+ __u32 *secdesclen, struct smb_fattr *fattr);
+ int init_acl_state(struct posix_acl_state *state, int cnt);
+ void free_acl_state(struct posix_acl_state *state);
+diff --git a/fs/ksmbd/vfs.c b/fs/ksmbd/vfs.c
+index 05efcdf7a4a73..201962f03772d 100644
+--- a/fs/ksmbd/vfs.c
++++ b/fs/ksmbd/vfs.c
+@@ -1540,6 +1540,11 @@ int ksmbd_vfs_get_sd_xattr(struct ksmbd_conn *conn,
+ }
+
+ *pntsd = acl.sd_buf;
++ if (acl.sd_size < sizeof(struct smb_ntsd)) {
++ pr_err("sd size is invalid\n");
++ goto out_free;
++ }
++
+ (*pntsd)->osidoffset = cpu_to_le32(le32_to_cpu((*pntsd)->osidoffset) -
+ NDR_NTSD_OFFSETOF);
+ (*pntsd)->gsidoffset = cpu_to_le32(le32_to_cpu((*pntsd)->gsidoffset) -
+diff --git a/fs/lockd/svc4proc.c b/fs/lockd/svc4proc.c
+index 176b468a61c75..e5adb524a445f 100644
+--- a/fs/lockd/svc4proc.c
++++ b/fs/lockd/svc4proc.c
+@@ -32,6 +32,10 @@ nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ if (!nlmsvc_ops)
+ return nlm_lck_denied_nolocks;
+
++ if (lock->lock_start > OFFSET_MAX ||
++ (lock->lock_len && ((lock->lock_len - 1) > (OFFSET_MAX - lock->lock_start))))
++ return nlm4_fbig;
++
+ /* Obtain host handle */
+ if (!(host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len))
+ || (argp->monitor && nsm_monitor(host) < 0))
+@@ -50,6 +54,10 @@ nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ /* Set up the missing parts of the file_lock structure */
+ lock->fl.fl_file = file->f_file[mode];
+ lock->fl.fl_pid = current->tgid;
++ lock->fl.fl_start = (loff_t)lock->lock_start;
++ lock->fl.fl_end = lock->lock_len ?
++ (loff_t)(lock->lock_start + lock->lock_len - 1) :
++ OFFSET_MAX;
+ lock->fl.fl_lmops = &nlmsvc_lock_operations;
+ nlmsvc_locks_init_private(&lock->fl, host, (pid_t)lock->svid);
+ if (!lock->fl.fl_owner) {
+diff --git a/fs/lockd/xdr4.c b/fs/lockd/xdr4.c
+index 856267c0864bd..712fdfeb8ef06 100644
+--- a/fs/lockd/xdr4.c
++++ b/fs/lockd/xdr4.c
+@@ -20,13 +20,6 @@
+
+ #include "svcxdr.h"
+
+-static inline loff_t
+-s64_to_loff_t(__s64 offset)
+-{
+- return (loff_t)offset;
+-}
+-
+-
+ static inline s64
+ loff_t_to_s64(loff_t offset)
+ {
+@@ -70,8 +63,6 @@ static bool
+ svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
+ {
+ struct file_lock *fl = &lock->fl;
+- u64 len, start;
+- s64 end;
+
+ if (!svcxdr_decode_string(xdr, &lock->caller, &lock->len))
+ return false;
+@@ -81,20 +72,14 @@ svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
+ return false;
+ if (xdr_stream_decode_u32(xdr, &lock->svid) < 0)
+ return false;
+- if (xdr_stream_decode_u64(xdr, &start) < 0)
++ if (xdr_stream_decode_u64(xdr, &lock->lock_start) < 0)
+ return false;
+- if (xdr_stream_decode_u64(xdr, &len) < 0)
++ if (xdr_stream_decode_u64(xdr, &lock->lock_len) < 0)
+ return false;
+
+ locks_init_lock(fl);
+ fl->fl_flags = FL_POSIX;
+ fl->fl_type = F_RDLCK;
+- end = start + len - 1;
+- fl->fl_start = s64_to_loff_t(start);
+- if (len == 0 || end < 0)
+- fl->fl_end = OFFSET_MAX;
+- else
+- fl->fl_end = s64_to_loff_t(end);
+
+ return true;
+ }
+diff --git a/fs/mbcache.c b/fs/mbcache.c
+index 97c54d3a22276..2010bc80a3f2d 100644
+--- a/fs/mbcache.c
++++ b/fs/mbcache.c
+@@ -11,7 +11,7 @@
+ /*
+ * Mbcache is a simple key-value store. Keys need not be unique, however
+ * key-value pairs are expected to be unique (we use this fact in
+- * mb_cache_entry_delete()).
++ * mb_cache_entry_delete_or_get()).
+ *
+ * Ext2 and ext4 use this cache for deduplication of extended attribute blocks.
+ * Ext4 also uses it for deduplication of xattr values stored in inodes.
+@@ -125,6 +125,19 @@ void __mb_cache_entry_free(struct mb_cache_entry *entry)
+ }
+ EXPORT_SYMBOL(__mb_cache_entry_free);
+
++/*
++ * mb_cache_entry_wait_unused - wait to be the last user of the entry
++ *
++ * @entry - entry to work on
++ *
++ * Wait to be the last user of the entry.
++ */
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry)
++{
++ wait_var_event(&entry->e_refcnt, atomic_read(&entry->e_refcnt) <= 3);
++}
++EXPORT_SYMBOL(mb_cache_entry_wait_unused);
++
+ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
+ struct mb_cache_entry *entry,
+ u32 key)
+@@ -217,7 +230,7 @@ out:
+ }
+ EXPORT_SYMBOL(mb_cache_entry_get);
+
+-/* mb_cache_entry_delete - remove a cache entry
++/* mb_cache_entry_delete - try to remove a cache entry
+ * @cache - cache we work with
+ * @key - key
+ * @value - value
+@@ -254,6 +267,55 @@ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value)
+ }
+ EXPORT_SYMBOL(mb_cache_entry_delete);
+
++/* mb_cache_entry_delete_or_get - remove a cache entry if it has no users
++ * @cache - cache we work with
++ * @key - key
++ * @value - value
++ *
++ * Remove entry from cache @cache with key @key and value @value. The removal
++ * happens only if the entry is unused. The function returns NULL in case the
++ * entry was successfully removed or there's no entry in cache. Otherwise the
++ * function grabs reference of the entry that we failed to delete because it
++ * still has users and return it.
++ */
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++ u32 key, u64 value)
++{
++ struct hlist_bl_node *node;
++ struct hlist_bl_head *head;
++ struct mb_cache_entry *entry;
++
++ head = mb_cache_entry_head(cache, key);
++ hlist_bl_lock(head);
++ hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
++ if (entry->e_key == key && entry->e_value == value) {
++ if (atomic_read(&entry->e_refcnt) > 2) {
++ atomic_inc(&entry->e_refcnt);
++ hlist_bl_unlock(head);
++ return entry;
++ }
++ /* We keep hash list reference to keep entry alive */
++ hlist_bl_del_init(&entry->e_hash_list);
++ hlist_bl_unlock(head);
++ spin_lock(&cache->c_list_lock);
++ if (!list_empty(&entry->e_list)) {
++ list_del_init(&entry->e_list);
++ if (!WARN_ONCE(cache->c_entry_count == 0,
++ "mbcache: attempt to decrement c_entry_count past zero"))
++ cache->c_entry_count--;
++ atomic_dec(&entry->e_refcnt);
++ }
++ spin_unlock(&cache->c_list_lock);
++ mb_cache_entry_put(cache, entry);
++ return NULL;
++ }
++ }
++ hlist_bl_unlock(head);
++
++ return NULL;
++}
++EXPORT_SYMBOL(mb_cache_entry_delete_or_get);
++
+ /* mb_cache_entry_touch - cache entry got used
+ * @cache - cache the entry belongs to
+ * @entry - entry that got used
+@@ -288,7 +350,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
+ while (nr_to_scan-- && !list_empty(&cache->c_list)) {
+ entry = list_first_entry(&cache->c_list,
+ struct mb_cache_entry, e_list);
+- if (entry->e_referenced) {
++ if (entry->e_referenced || atomic_read(&entry->e_refcnt) > 2) {
+ entry->e_referenced = 0;
+ list_move_tail(&entry->e_list, &cache->c_list);
+ continue;
+@@ -302,6 +364,14 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
+ spin_unlock(&cache->c_list_lock);
+ head = mb_cache_entry_head(cache, entry->e_key);
+ hlist_bl_lock(head);
++ /* Now a reliable check if the entry didn't get used... */
++ if (atomic_read(&entry->e_refcnt) > 2) {
++ hlist_bl_unlock(head);
++ spin_lock(&cache->c_list_lock);
++ list_add_tail(&entry->e_list, &cache->c_list);
++ cache->c_entry_count++;
++ continue;
++ }
+ if (!hlist_bl_unhashed(&entry->e_hash_list)) {
+ hlist_bl_del_init(&entry->e_hash_list);
+ atomic_dec(&entry->e_refcnt);
+diff --git a/fs/namei.c b/fs/namei.c
+index 1f28d3f463c3b..7a5992805583c 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1505,6 +1505,8 @@ static bool __follow_mount_rcu(struct nameidata *nd, struct path *path,
+ * becoming unpinned.
+ */
+ flags = dentry->d_flags;
++ if (read_seqretry(&mount_lock, nd->m_seq))
++ return false;
+ continue;
+ }
+ if (read_seqretry(&mount_lock, nd->m_seq))
+@@ -3565,6 +3567,8 @@ struct dentry *vfs_tmpfile(struct user_namespace *mnt_userns,
+ child = d_alloc(dentry, &slash_name);
+ if (unlikely(!child))
+ goto out_err;
++ if (!IS_POSIXACL(dir))
++ mode &= ~current_umask();
+ error = dir->i_op->tmpfile(mnt_userns, dir, child, mode);
+ if (error)
+ goto out_err;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 604be402ae13c..7d285561e59f6 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1131,6 +1131,8 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ case -EIO:
+ case -ETIMEDOUT:
+ case -EPIPE:
++ case -EPROTO:
++ case -ENODEV:
+ dprintk("%s DS connection error %d\n", __func__,
+ task->tk_status);
+ nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+@@ -1236,6 +1238,8 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ case -ENOBUFS:
+ case -EPIPE:
+ case -EPERM:
++ case -EPROTO:
++ case -ENODEV:
+ *op_status = status = NFS4ERR_NXIO;
+ break;
+ case -EACCES:
+diff --git a/fs/nfs/nfs3client.c b/fs/nfs/nfs3client.c
+index 5601e47360c28..b49359afac883 100644
+--- a/fs/nfs/nfs3client.c
++++ b/fs/nfs/nfs3client.c
+@@ -108,7 +108,6 @@ struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv,
+ if (mds_srv->flags & NFS_MOUNT_NORESVPORT)
+ __set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags);
+
+- __set_bit(NFS_CS_NOPING, &cl_init.init_flags);
+ __set_bit(NFS_CS_DS, &cl_init.init_flags);
+
+ /* Use the MDS nfs_client cl_ipaddr. */
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 9cb2d590c0361..e1f98d32cee1b 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -184,12 +184,6 @@ nfsd_file_alloc(struct inode *inode, unsigned int may, unsigned int hashval,
+ nf->nf_hashval = hashval;
+ refcount_set(&nf->nf_ref, 1);
+ nf->nf_may = may & NFSD_FILE_MAY_MASK;
+- if (may & NFSD_MAY_NOT_BREAK_LEASE) {
+- if (may & NFSD_MAY_WRITE)
+- __set_bit(NFSD_FILE_BREAK_WRITE, &nf->nf_flags);
+- if (may & NFSD_MAY_READ)
+- __set_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
+- }
+ nf->nf_mark = NULL;
+ trace_nfsd_file_alloc(nf);
+ }
+@@ -958,21 +952,7 @@ wait_for_construction:
+
+ this_cpu_inc(nfsd_file_cache_hits);
+
+- if (!(may_flags & NFSD_MAY_NOT_BREAK_LEASE)) {
+- bool write = (may_flags & NFSD_MAY_WRITE);
+-
+- if (test_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags) ||
+- (test_bit(NFSD_FILE_BREAK_WRITE, &nf->nf_flags) && write)) {
+- status = nfserrno(nfsd_open_break_lease(
+- file_inode(nf->nf_file), may_flags));
+- if (status == nfs_ok) {
+- clear_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
+- if (write)
+- clear_bit(NFSD_FILE_BREAK_WRITE,
+- &nf->nf_flags);
+- }
+- }
+- }
++ status = nfserrno(nfsd_open_break_lease(file_inode(nf->nf_file), may_flags));
+ out:
+ if (status == nfs_ok) {
+ *pnf = nf;
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index 1da0c79a55804..c9e3c6eb4776e 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -37,9 +37,7 @@ struct nfsd_file {
+ struct net *nf_net;
+ #define NFSD_FILE_HASHED (0)
+ #define NFSD_FILE_PENDING (1)
+-#define NFSD_FILE_BREAK_READ (2)
+-#define NFSD_FILE_BREAK_WRITE (3)
+-#define NFSD_FILE_REFERENCED (4)
++#define NFSD_FILE_REFERENCED (2)
+ unsigned long nf_flags;
+ struct inode *nf_inode;
+ unsigned int nf_hashval;
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index a60ead3b227a5..081179fb17e88 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -696,8 +696,6 @@ DEFINE_CLID_EVENT(confirmed_r);
+ __print_flags(val, "|", \
+ { 1 << NFSD_FILE_HASHED, "HASHED" }, \
+ { 1 << NFSD_FILE_PENDING, "PENDING" }, \
+- { 1 << NFSD_FILE_BREAK_READ, "BREAK_READ" }, \
+- { 1 << NFSD_FILE_BREAK_WRITE, "BREAK_WRITE" }, \
+ { 1 << NFSD_FILE_REFERENCED, "REFERENCED"})
+
+ DECLARE_EVENT_CLASS(nfsd_file_class,
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 2eada97bbd23f..e065a5b9a442e 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -259,7 +259,7 @@ static int ovl_encode_fh(struct inode *inode, u32 *fid, int *max_len,
+ return FILEID_INVALID;
+
+ dentry = d_find_any_alias(inode);
+- if (WARN_ON(!dentry))
++ if (!dentry)
+ return FILEID_INVALID;
+
+ bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 8dfa36a99c742..93f7e3d971e4b 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -1885,7 +1885,7 @@ void proc_pid_evict_inode(struct proc_inode *ei)
+ put_pid(pid);
+ }
+
+-struct inode *proc_pid_make_inode(struct super_block * sb,
++struct inode *proc_pid_make_inode(struct super_block *sb,
+ struct task_struct *task, umode_t mode)
+ {
+ struct inode * inode;
+@@ -1914,11 +1914,6 @@ struct inode *proc_pid_make_inode(struct super_block * sb,
+
+ /* Let the pid remember us for quick removal */
+ ei->pid = pid;
+- if (S_ISDIR(mode)) {
+- spin_lock(&pid->lock);
+- hlist_add_head_rcu(&ei->sibling_inodes, &pid->inodes);
+- spin_unlock(&pid->lock);
+- }
+
+ task_dump_owner(task, 0, &inode->i_uid, &inode->i_gid);
+ security_task_to_inode(task, inode);
+@@ -1931,6 +1926,39 @@ out_unlock:
+ return NULL;
+ }
+
++/*
++ * Generating an inode and adding it into @pid->inodes, so that task will
++ * invalidate inode's dentry before being released.
++ *
++ * This helper is used for creating dir-type entries under '/proc' and
++ * '/proc/<tgid>/task'. Other entries(eg. fd, stat) under '/proc/<tgid>'
++ * can be released by invalidating '/proc/<tgid>' dentry.
++ * In theory, dentries under '/proc/<tgid>/task' can also be released by
++ * invalidating '/proc/<tgid>' dentry, we reserve it to handle single
++ * thread exiting situation: Any one of threads should invalidate its
++ * '/proc/<tgid>/task/<pid>' dentry before released.
++ */
++static struct inode *proc_pid_make_base_inode(struct super_block *sb,
++ struct task_struct *task, umode_t mode)
++{
++ struct inode *inode;
++ struct proc_inode *ei;
++ struct pid *pid;
++
++ inode = proc_pid_make_inode(sb, task, mode);
++ if (!inode)
++ return NULL;
++
++ /* Let proc_flush_pid find this directory inode */
++ ei = PROC_I(inode);
++ pid = ei->pid;
++ spin_lock(&pid->lock);
++ hlist_add_head_rcu(&ei->sibling_inodes, &pid->inodes);
++ spin_unlock(&pid->lock);
++
++ return inode;
++}
++
+ int pid_getattr(struct user_namespace *mnt_userns, const struct path *path,
+ struct kstat *stat, u32 request_mask, unsigned int query_flags)
+ {
+@@ -3369,7 +3397,8 @@ static struct dentry *proc_pid_instantiate(struct dentry * dentry,
+ {
+ struct inode *inode;
+
+- inode = proc_pid_make_inode(dentry->d_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);
++ inode = proc_pid_make_base_inode(dentry->d_sb, task,
++ S_IFDIR | S_IRUGO | S_IXUGO);
+ if (!inode)
+ return ERR_PTR(-ENOENT);
+
+@@ -3671,7 +3700,8 @@ static struct dentry *proc_task_instantiate(struct dentry *dentry,
+ struct task_struct *task, const void *ptr)
+ {
+ struct inode *inode;
+- inode = proc_pid_make_inode(dentry->d_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);
++ inode = proc_pid_make_base_inode(dentry->d_sb, task,
++ S_IFDIR | S_IRUGO | S_IXUGO);
+ if (!inode)
+ return ERR_PTR(-ENOENT);
+
+diff --git a/fs/splice.c b/fs/splice.c
+index 047b79db8eb52..93a2c9bf62494 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -814,17 +814,15 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ {
+ struct pipe_inode_info *pipe;
+ long ret, bytes;
+- umode_t i_mode;
+ size_t len;
+ int i, flags, more;
+
+ /*
+- * We require the input being a regular file, as we don't want to
+- * randomly drop data for eg socket -> socket splicing. Use the
+- * piped splicing for that!
++ * We require the input to be seekable, as we don't want to randomly
++ * drop data for eg socket -> socket splicing. Use the piped splicing
++ * for that!
+ */
+- i_mode = file_inode(in)->i_mode;
+- if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
++ if (unlikely(!(in->f_mode & FMODE_LSEEK)))
+ return -EINVAL;
+
+ /*
+diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
+index d389bab54241d..f73d357ecdf5f 100644
+--- a/include/acpi/cppc_acpi.h
++++ b/include/acpi/cppc_acpi.h
+@@ -17,7 +17,7 @@
+ #include <acpi/pcc.h>
+ #include <acpi/processor.h>
+
+-/* Support CPPCv2 and CPPCv3 */
++/* CPPCv2 and CPPCv3 support */
+ #define CPPC_V2_REV 2
+ #define CPPC_V3_REV 3
+ #define CPPC_V2_NUM_ENT 21
+diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h
+index 52363eee2b20e..506d56530ca93 100644
+--- a/include/crypto/internal/blake2s.h
++++ b/include/crypto/internal/blake2s.h
+@@ -8,7 +8,6 @@
+ #define _CRYPTO_INTERNAL_BLAKE2S_H
+
+ #include <crypto/blake2s.h>
+-#include <crypto/internal/hash.h>
+ #include <linux/string.h>
+
+ void blake2s_compress_generic(struct blake2s_state *state, const u8 *block,
+@@ -19,111 +18,4 @@ void blake2s_compress(struct blake2s_state *state, const u8 *block,
+
+ bool blake2s_selftest(void);
+
+-static inline void blake2s_set_lastblock(struct blake2s_state *state)
+-{
+- state->f[0] = -1;
+-}
+-
+-/* Helper functions for BLAKE2s shared by the library and shash APIs */
+-
+-static __always_inline void
+-__blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen,
+- bool force_generic)
+-{
+- const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
+-
+- if (unlikely(!inlen))
+- return;
+- if (inlen > fill) {
+- memcpy(state->buf + state->buflen, in, fill);
+- if (force_generic)
+- blake2s_compress_generic(state, state->buf, 1,
+- BLAKE2S_BLOCK_SIZE);
+- else
+- blake2s_compress(state, state->buf, 1,
+- BLAKE2S_BLOCK_SIZE);
+- state->buflen = 0;
+- in += fill;
+- inlen -= fill;
+- }
+- if (inlen > BLAKE2S_BLOCK_SIZE) {
+- const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
+- /* Hash one less (full) block than strictly possible */
+- if (force_generic)
+- blake2s_compress_generic(state, in, nblocks - 1,
+- BLAKE2S_BLOCK_SIZE);
+- else
+- blake2s_compress(state, in, nblocks - 1,
+- BLAKE2S_BLOCK_SIZE);
+- in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+- inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+- }
+- memcpy(state->buf + state->buflen, in, inlen);
+- state->buflen += inlen;
+-}
+-
+-static __always_inline void
+-__blake2s_final(struct blake2s_state *state, u8 *out, bool force_generic)
+-{
+- blake2s_set_lastblock(state);
+- memset(state->buf + state->buflen, 0,
+- BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
+- if (force_generic)
+- blake2s_compress_generic(state, state->buf, 1, state->buflen);
+- else
+- blake2s_compress(state, state->buf, 1, state->buflen);
+- cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
+- memcpy(out, state->h, state->outlen);
+-}
+-
+-/* Helper functions for shash implementations of BLAKE2s */
+-
+-struct blake2s_tfm_ctx {
+- u8 key[BLAKE2S_KEY_SIZE];
+- unsigned int keylen;
+-};
+-
+-static inline int crypto_blake2s_setkey(struct crypto_shash *tfm,
+- const u8 *key, unsigned int keylen)
+-{
+- struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+-
+- if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
+- return -EINVAL;
+-
+- memcpy(tctx->key, key, keylen);
+- tctx->keylen = keylen;
+-
+- return 0;
+-}
+-
+-static inline int crypto_blake2s_init(struct shash_desc *desc)
+-{
+- const struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+- struct blake2s_state *state = shash_desc_ctx(desc);
+- unsigned int outlen = crypto_shash_digestsize(desc->tfm);
+-
+- __blake2s_init(state, outlen, tctx->key, tctx->keylen);
+- return 0;
+-}
+-
+-static inline int crypto_blake2s_update(struct shash_desc *desc,
+- const u8 *in, unsigned int inlen,
+- bool force_generic)
+-{
+- struct blake2s_state *state = shash_desc_ctx(desc);
+-
+- __blake2s_update(state, in, inlen, force_generic);
+- return 0;
+-}
+-
+-static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out,
+- bool force_generic)
+-{
+- struct blake2s_state *state = shash_desc_ctx(desc);
+-
+- __blake2s_final(state, out, force_generic);
+- return 0;
+-}
+-
+ #endif /* _CRYPTO_INTERNAL_BLAKE2S_H */
+diff --git a/include/dt-bindings/clock/qcom,gcc-msm8939.h b/include/dt-bindings/clock/qcom,gcc-msm8939.h
+index 0634467c4ce5a..2d545ed0d35ab 100644
+--- a/include/dt-bindings/clock/qcom,gcc-msm8939.h
++++ b/include/dt-bindings/clock/qcom,gcc-msm8939.h
+@@ -192,6 +192,7 @@
+ #define GCC_VENUS0_CORE0_VCODEC0_CLK 183
+ #define GCC_VENUS0_CORE1_VCODEC0_CLK 184
+ #define GCC_OXILI_TIMER_CLK 185
++#define SYSTEM_MM_NOC_BFDCD_CLK_SRC 186
+
+ /* Indexes for GDSCs */
+ #define BIMC_GDSC 0
+diff --git a/include/linux/acpi_viot.h b/include/linux/acpi_viot.h
+index 1eb8ee5b0e5fe..a5a1224315637 100644
+--- a/include/linux/acpi_viot.h
++++ b/include/linux/acpi_viot.h
+@@ -6,9 +6,11 @@
+ #include <linux/acpi.h>
+
+ #ifdef CONFIG_ACPI_VIOT
++void __init acpi_viot_early_init(void);
+ void __init acpi_viot_init(void);
+ int viot_iommu_configure(struct device *dev);
+ #else
++static inline void acpi_viot_early_init(void) {}
+ static inline void acpi_viot_init(void) {}
+ static inline int viot_iommu_configure(struct device *dev)
+ {
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 2f7b43444c5f8..62e3ff52ab033 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1206,6 +1206,11 @@ bdev_max_zone_append_sectors(struct block_device *bdev)
+ return queue_max_zone_append_sectors(bdev_get_queue(bdev));
+ }
+
++static inline unsigned int bdev_max_segments(struct block_device *bdev)
++{
++ return queue_max_segments(bdev_get_queue(bdev));
++}
++
+ static inline unsigned queue_logical_block_size(const struct request_queue *q)
+ {
+ int retval = 512;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 2b914a56a2c53..7424cf234ae03 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1025,7 +1025,6 @@ struct bpf_prog_aux {
+ bool sleepable;
+ bool tail_call_reachable;
+ bool xdp_has_frags;
+- bool use_bpf_prog_pack;
+ /* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
+ const struct btf_type *attach_func_proto;
+ /* function name for valid attach_btf_id */
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index c9d1463bb20f3..badcc0e3418f2 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -117,7 +117,6 @@ static __always_inline int test_clear_buffer_##name(struct buffer_head *bh) \
+ * of the form "mark_buffer_foo()". These are higher-level functions which
+ * do something in addition to setting a b_state bit.
+ */
+-BUFFER_FNS(Uptodate, uptodate)
+ BUFFER_FNS(Dirty, dirty)
+ TAS_BUFFER_FNS(Dirty, dirty)
+ BUFFER_FNS(Lock, locked)
+@@ -135,6 +134,30 @@ BUFFER_FNS(Meta, meta)
+ BUFFER_FNS(Prio, prio)
+ BUFFER_FNS(Defer_Completion, defer_completion)
+
++static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
++{
++ /*
++ * make it consistent with folio_mark_uptodate
++ * pairs with smp_load_acquire in buffer_uptodate
++ */
++ smp_mb__before_atomic();
++ set_bit(BH_Uptodate, &bh->b_state);
++}
++
++static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
++{
++ clear_bit(BH_Uptodate, &bh->b_state);
++}
++
++static __always_inline int buffer_uptodate(const struct buffer_head *bh)
++{
++ /*
++ * make it consistent with folio_test_uptodate
++ * pairs with smp_mb__before_atomic in set_buffer_uptodate
++ */
++ return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
++}
++
+ #define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK)
+
+ /* If we *know* page->private refers to buffer_heads */
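
The open-coded uptodate helpers above pair an smp_mb__before_atomic() on the set side with an smp_load_acquire() on the read side, mirroring folio_mark_uptodate()/folio_test_uptodate() as the comments note. A minimal sketch of the intended pairing, assuming a hypothetical writer and reader operating on the same buffer_head (src/dst are illustrative, not from the patch):

    /* Writer: fill b_data first, then mark uptodate. The barrier in
     * set_buffer_uptodate() orders the data writes before the flag. */
    memcpy(bh->b_data, src, bh->b_size);
    set_buffer_uptodate(bh);

    /* Reader: the acquire load in buffer_uptodate() guarantees that
     * seeing the flag also means seeing the fully written b_data. */
    if (buffer_uptodate(bh))
            memcpy(dst, bh->b_data, bh->b_size);
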
+diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
+index fe29ac7cc469c..4592d08459417 100644
+--- a/include/linux/cpumask.h
++++ b/include/linux/cpumask.h
+@@ -1071,4 +1071,22 @@ cpumap_print_list_to_buf(char *buf, const struct cpumask *mask,
+ [0] = 1UL \
+ } }
+
++/*
++ * Provide a valid theoretical max size for cpumap and cpulist sysfs files
++ * to avoid breaking userspace which may allocate a buffer based on the size
++ * reported by e.g. fstat.
++ *
++ * for cpumap NR_CPUS * 9/32 - 1 should be an exact length.
++ *
++ * For cpulist 7 is (ceil(log10(NR_CPUS)) + 1) allowing for NR_CPUS to be up
++ * to 2 orders of magnitude larger than 8192. And then we divide by 2 to
++ * cover a worst-case of every other cpu being on one of two nodes for a
++ * very large NR_CPUS.
++ *
++ * Use PAGE_SIZE as a minimum for smaller configurations.
++ */
++#define CPUMAP_FILE_MAX_BYTES ((((NR_CPUS * 9)/32 - 1) > PAGE_SIZE) \
++ ? (NR_CPUS * 9)/32 - 1 : PAGE_SIZE)
++#define CPULIST_FILE_MAX_BYTES (((NR_CPUS * 7)/2 > PAGE_SIZE) ? (NR_CPUS * 7)/2 : PAGE_SIZE)
++
+ #endif /* __LINUX_CPUMASK_H */
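
To make the sizing above concrete, assuming the common values NR_CPUS = 8192 and PAGE_SIZE = 4096: the cpumap bound is 8192 * 9 / 32 - 1 = 2303 bytes, which is below PAGE_SIZE, so CPUMAP_FILE_MAX_BYTES falls back to 4096; the cpulist bound is 8192 * 7 / 2 = 28672 bytes, which exceeds PAGE_SIZE, so CPULIST_FILE_MAX_BYTES becomes 28672.
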
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index 47a01c7cffdf3..e9c043f12e531 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -373,6 +373,12 @@ struct dm_target {
+ * after returning DM_MAPIO_SUBMITTED from its map function.
+ */
+ bool accounts_remapped_io:1;
++
++ /*
++ * Set if the target will submit the DM bio without first calling
++ * bio_set_dev(). NOTE: ideally a target should _not_ need this.
++ */
++ bool needs_bio_set_dev:1;
+ };
+
+ void *dm_per_bio_data(struct bio *bio, size_t data_size);
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 8419bffb4398f..b9caa01dfac48 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -62,7 +62,7 @@ struct em_perf_domain {
+ /*
+ * em_perf_domain flags:
+ *
+- * EM_PERF_DOMAIN_MILLIWATTS: The power values are in milli-Watts or some
++ * EM_PERF_DOMAIN_MICROWATTS: The power values are in micro-Watts or some
+ * other scale.
+ *
+ * EM_PERF_DOMAIN_SKIP_INEFFICIENCIES: Skip inefficient states when estimating
+@@ -71,7 +71,7 @@ struct em_perf_domain {
+ * EM_PERF_DOMAIN_ARTIFICIAL: The power values are artificial and might be
+ * created by platform missing real power information
+ */
+-#define EM_PERF_DOMAIN_MILLIWATTS BIT(0)
++#define EM_PERF_DOMAIN_MICROWATTS BIT(0)
+ #define EM_PERF_DOMAIN_SKIP_INEFFICIENCIES BIT(1)
+ #define EM_PERF_DOMAIN_ARTIFICIAL BIT(2)
+
+@@ -79,22 +79,44 @@ struct em_perf_domain {
+ #define em_is_artificial(em) ((em)->flags & EM_PERF_DOMAIN_ARTIFICIAL)
+
+ #ifdef CONFIG_ENERGY_MODEL
+-#define EM_MAX_POWER 0xFFFF
++/*
++ * The max power value in micro-Watts. The limit of 64 Watts is set as
++ * a safety net to not overflow multiplications on 32bit platforms. The
++ * 32bit value limit for total Perf Domain power implies a limit of
++ * maximum CPUs in such domain to 64.
++ */
++#define EM_MAX_POWER (64000000) /* 64 Watts */
++
++/*
++ * To avoid possible energy estimation overflow on 32bit machines add
++ * limits to number of CPUs in the Perf. Domain.
++ * We are safe on 64bit machine, thus some big number.
++ */
++#ifdef CONFIG_64BIT
++#define EM_MAX_NUM_CPUS 4096
++#else
++#define EM_MAX_NUM_CPUS 16
++#endif
+
+ /*
+- * Increase resolution of energy estimation calculations for 64-bit
+- * architectures. The extra resolution improves decision made by EAS for the
+- * task placement when two Performance Domains might provide similar energy
+- * estimation values (w/o better resolution the values could be equal).
++ * To avoid an overflow on 32bit machines while calculating the energy
++ * use a different order in the operation. First divide by the 'cpu_scale'
++ * which would reduce big value stored in the 'cost' field, then multiply by
++ * the 'sum_util'. This would allow to handle existing platforms, which have
++ * e.g. power ~1.3 Watt at max freq, so the 'cost' value > 1mln micro-Watts.
++ * In such scenario, where there are 4 CPUs in the Perf. Domain the 'sum_util'
++ * could be 4096, then multiplication: 'cost' * 'sum_util' would overflow.
++ * This reordering of operations has some limitations, we lose small
++ * precision in the estimation (comparing to 64bit platform w/o reordering).
+ *
+- * We increase resolution only if we have enough bits to allow this increased
+- * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
+- * are pretty high and the returns do not justify the increased costs.
++ * We are safe on 64bit machine.
+ */
+ #ifdef CONFIG_64BIT
+-#define em_scale_power(p) ((p) * 1000)
++#define em_estimate_energy(cost, sum_util, scale_cpu) \
++ (((cost) * (sum_util)) / (scale_cpu))
+ #else
+-#define em_scale_power(p) (p)
++#define em_estimate_energy(cost, sum_util, scale_cpu) \
++ (((cost) / (scale_cpu)) * (sum_util))
+ #endif
+
+ struct em_data_callback {
+@@ -112,7 +134,7 @@ struct em_data_callback {
+ * and frequency.
+ *
+ * In case of CPUs, the power is the one of a single CPU in the domain,
+- * expressed in milli-Watts or an abstract scale. It is expected to
++ * expressed in micro-Watts or an abstract scale. It is expected to
+ * fit in the [0, EM_MAX_POWER] range.
+ *
+ * Return 0 on success.
+@@ -148,7 +170,7 @@ struct em_perf_domain *em_cpu_get(int cpu);
+ struct em_perf_domain *em_pd_get(struct device *dev);
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ struct em_data_callback *cb, cpumask_t *span,
+- bool milliwatts);
++ bool microwatts);
+ void em_dev_unregister_perf_domain(struct device *dev);
+
+ /**
+@@ -273,7 +295,7 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
+ * pd_nrg = ------------------------ (4)
+ * scale_cpu
+ */
+- return ps->cost * sum_util / scale_cpu;
++ return em_estimate_energy(ps->cost, sum_util, scale_cpu);
+ }
+
+ /**
+@@ -297,7 +319,7 @@ struct em_data_callback {};
+ static inline
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ struct em_data_callback *cb, cpumask_t *span,
+- bool milliwatts)
++ bool microwatts)
+ {
+ return -EINVAL;
+ }
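
The overflow that the reordered em_estimate_energy() avoids is easy to check with the numbers from the comment above: a CPU with roughly 1.3 W maximum power has a 'cost' on the order of 1,300,000 micro-Watt units, and with sum_util = 4096 the product 1,300,000 * 4096 = 5,324,800,000 exceeds 2^32 = 4,294,967,296 on a 32-bit machine. Dividing by scale_cpu first (typically 1024) reduces the operand to 1269 before the multiply, which then fits comfortably, at the cost of a small loss of precision.
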
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index ed0c0ff42ad5b..8fd2e2f58eeb2 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -948,6 +948,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
+ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
+ void bpf_jit_compile(struct bpf_prog *prog);
+ bool bpf_jit_needs_zext(void);
++bool bpf_jit_supports_subprog_tailcalls(void);
+ bool bpf_jit_supports_kfunc_call(void);
+ bool bpf_helper_changes_pkt_data(void *func);
+
+@@ -1060,6 +1061,14 @@ u64 bpf_jit_alloc_exec_limit(void);
+ void *bpf_jit_alloc_exec(unsigned long size);
+ void bpf_jit_free_exec(void *addr);
+ void bpf_jit_free(struct bpf_prog *fp);
++struct bpf_binary_header *
++bpf_jit_binary_pack_hdr(const struct bpf_prog *fp);
++
++static inline bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp)
++{
++ return list_empty(&fp->aux->ksym.lnode) ||
++ fp->aux->ksym.lnode.prev == LIST_POISON2;
++}
+
+ struct bpf_binary_header *
+ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **ro_image,
+diff --git a/include/linux/highmem.h b/include/linux/highmem.h
+index 56d6a01965348..80d7b289d37b1 100644
+--- a/include/linux/highmem.h
++++ b/include/linux/highmem.h
+@@ -243,6 +243,16 @@ static inline void clear_highpage(struct page *page)
+ kunmap_local(kaddr);
+ }
+
++static inline void clear_highpage_kasan_tagged(struct page *page)
++{
++ u8 tag;
++
++ tag = page_kasan_tag(page);
++ page_kasan_tag_reset(page);
++ clear_highpage(page);
++ page_kasan_tag_set(page, tag);
++}
++
+ #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+
+ static inline void tag_clear_highpage(struct page *page)
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index e4cff27d1198c..756b66ff025e5 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -170,7 +170,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
+ vm_flags_t vm_flags);
+ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
+ long freed);
+-bool isolate_huge_page(struct page *page, struct list_head *list);
++int isolate_hugetlb(struct page *page, struct list_head *list);
+ int get_hwpoison_huge_page(struct page *page, bool *hugetlb);
+ int get_huge_page_for_hwpoison(unsigned long pfn, int flags);
+ void putback_active_hugepage(struct page *page);
+@@ -376,9 +376,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+ return NULL;
+ }
+
+-static inline bool isolate_huge_page(struct page *page, struct list_head *list)
++static inline int isolate_hugetlb(struct page *page, struct list_head *list)
+ {
+- return false;
++ return -EBUSY;
+ }
+
+ static inline int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 75d40acb60c1c..5f66d5c9e3de1 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -4345,4 +4345,7 @@ enum ieee80211_range_params_max_total_ltf {
+ IEEE80211_RANGE_PARAMS_MAX_TOTAL_LTF_UNSPECIFIED,
+ };
+
++/* multi-link device */
++#define IEEE80211_MLD_MAX_NUM_LINKS 15
++
+ #endif /* LINUX_IEEE80211_H */
+diff --git a/include/linux/iio/common/cros_ec_sensors_core.h b/include/linux/iio/common/cros_ec_sensors_core.h
+index c582e1a142320..7b5dbd7499957 100644
+--- a/include/linux/iio/common/cros_ec_sensors_core.h
++++ b/include/linux/iio/common/cros_ec_sensors_core.h
+@@ -95,8 +95,11 @@ int cros_ec_sensors_read_cmd(struct iio_dev *indio_dev, unsigned long scan_mask,
+ struct platform_device;
+ int cros_ec_sensors_core_init(struct platform_device *pdev,
+ struct iio_dev *indio_dev, bool physical_device,
+- cros_ec_sensors_capture_t trigger_capture,
+- cros_ec_sensorhub_push_data_cb_t push_data);
++ cros_ec_sensors_capture_t trigger_capture);
++
++int cros_ec_sensors_core_register(struct device *dev,
++ struct iio_dev *indio_dev,
++ cros_ec_sensorhub_push_data_cb_t push_data);
+
+ irqreturn_t cros_ec_sensors_capture(int irq, void *p);
+ int cros_ec_sensors_push_data(struct iio_dev *indio_dev,
+diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
+index 233d2e6b77211..a0db62297ea1c 100644
+--- a/include/linux/iio/iio.h
++++ b/include/linux/iio/iio.h
+@@ -9,6 +9,7 @@
+
+ #include <linux/device.h>
+ #include <linux/cdev.h>
++#include <linux/slab.h>
+ #include <linux/iio/types.h>
+ #include <linux/of.h>
+ /* IIO TODO LIST */
+@@ -709,8 +710,13 @@ static inline void *iio_device_get_drvdata(const struct iio_dev *indio_dev)
+ return dev_get_drvdata(&indio_dev->dev);
+ }
+
+-/* Can we make this smaller? */
+-#define IIO_ALIGN L1_CACHE_BYTES
++/*
++ * Used to ensure the iio_priv() structure is aligned to allow that structure
++ * to in turn include IIO_DMA_MINALIGN'd elements such as buffers which
++ * must not share cachelines with the rest of the structure, thus making
++ * them safe for use with non-coherent DMA.
++ */
++#define IIO_DMA_MINALIGN ARCH_KMALLOC_MINALIGN
+ struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv);
+
+ /* The information at the returned address is guaranteed to be cacheline aligned */
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 475683cd67f16..6e7510f393680 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -188,21 +188,48 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
+ void *buf, unsigned int size,
+ bool get_value);
+ void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
++void *kexec_image_load_default(struct kimage *image);
+
+-/* Architectures may override the below functions */
+-int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+- unsigned long buf_len);
+-void *arch_kexec_kernel_image_load(struct kimage *image);
+-int arch_kimage_file_post_load_cleanup(struct kimage *image);
+-#ifdef CONFIG_KEXEC_SIG
+-int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+- unsigned long buf_len);
++#ifndef arch_kexec_kernel_image_probe
++static inline int
++arch_kexec_kernel_image_probe(struct kimage *image, void *buf, unsigned long buf_len)
++{
++ return kexec_image_probe_default(image, buf, buf_len);
++}
++#endif
++
++#ifndef arch_kimage_file_post_load_cleanup
++static inline int arch_kimage_file_post_load_cleanup(struct kimage *image)
++{
++ return kexec_image_post_load_cleanup_default(image);
++}
++#endif
++
++#ifndef arch_kexec_kernel_image_load
++static inline void *arch_kexec_kernel_image_load(struct kimage *image)
++{
++ return kexec_image_load_default(image);
++}
+ #endif
+-int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
+
+ extern int kexec_add_buffer(struct kexec_buf *kbuf);
+ int kexec_locate_mem_hole(struct kexec_buf *kbuf);
+
++#ifndef arch_kexec_locate_mem_hole
++/**
++ * arch_kexec_locate_mem_hole - Find free memory to place the segments.
++ * @kbuf: Parameters for the memory search.
++ *
++ * On success, kbuf->mem will have the start address of the memory region found.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
++{
++ return kexec_locate_mem_hole(kbuf);
++}
++#endif
++
+ /* Alignment required for elf header segment */
+ #define ELF_CORE_HEADER_ALIGN 4096
+
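
The hunk above converts the kexec arch hooks from weak-symbol overrides to the #ifndef-guarded static-inline idiom: the generic fallback is only compiled in when the architecture has not defined a macro of the same name. A minimal sketch of how an architecture would opt out, assuming a hypothetical asm/kexec.h:

    /* arch/<arch>/include/asm/kexec.h (hypothetical) */
    struct kexec_buf;
    int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
    #define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole

    /* With the macro defined, the #ifndef block in <linux/kexec.h>
     * is skipped and callers resolve to the arch implementation. */
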
+diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
+index 86249476b57f4..0b35a41440ff1 100644
+--- a/include/linux/kfifo.h
++++ b/include/linux/kfifo.h
+@@ -688,7 +688,7 @@ __kfifo_uint_must_check_helper( \
+ * writer, you don't need extra locking to use these macro.
+ */
+ #define kfifo_to_user(fifo, to, len, copied) \
+-__kfifo_uint_must_check_helper( \
++__kfifo_int_must_check_helper( \
+ ({ \
+ typeof((fifo) + 1) __tmp = (fifo); \
+ void __user *__to = (to); \
+diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
+index ac1ebb37a0ffd..f328a01db4fe9 100644
+--- a/include/linux/kvm_types.h
++++ b/include/linux/kvm_types.h
+@@ -19,6 +19,7 @@ struct kvm_memslots;
+ enum kvm_mr_change;
+
+ #include <linux/bits.h>
++#include <linux/mutex.h>
+ #include <linux/types.h>
+ #include <linux/spinlock_types.h>
+
+@@ -69,6 +70,7 @@ struct gfn_to_pfn_cache {
+ struct kvm_vcpu *vcpu;
+ struct list_head list;
+ rwlock_t lock;
++ struct mutex refresh_lock;
+ void *khva;
+ kvm_pfn_t pfn;
+ enum pfn_cache_usage usage;
+diff --git a/include/linux/lockd/xdr.h b/include/linux/lockd/xdr.h
+index 398f70093cd35..67e4a2c5500bd 100644
+--- a/include/linux/lockd/xdr.h
++++ b/include/linux/lockd/xdr.h
+@@ -41,6 +41,8 @@ struct nlm_lock {
+ struct nfs_fh fh;
+ struct xdr_netobj oh;
+ u32 svid;
++ u64 lock_start;
++ u64 lock_len;
+ struct file_lock fl;
+ };
+
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index b6829b9700936..1f1099dac3f05 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -188,7 +188,7 @@ static inline void
+ lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, int subclass, u8 inner, u8 outer)
+ {
+- lockdep_init_map_type(lock, name, key, subclass, inner, LD_WAIT_INV, LD_LOCK_NORMAL);
++ lockdep_init_map_type(lock, name, key, subclass, inner, outer, LD_LOCK_NORMAL);
+ }
+
+ static inline void
+@@ -211,24 +211,28 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ * or they are too narrow (they suffer from a false class-split):
+ */
+ #define lockdep_set_class(lock, key) \
+- lockdep_init_map_waits(&(lock)->dep_map, #key, key, 0, \
+- (lock)->dep_map.wait_type_inner, \
+- (lock)->dep_map.wait_type_outer)
++ lockdep_init_map_type(&(lock)->dep_map, #key, key, 0, \
++ (lock)->dep_map.wait_type_inner, \
++ (lock)->dep_map.wait_type_outer, \
++ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_class_and_name(lock, key, name) \
+- lockdep_init_map_waits(&(lock)->dep_map, name, key, 0, \
+- (lock)->dep_map.wait_type_inner, \
+- (lock)->dep_map.wait_type_outer)
++ lockdep_init_map_type(&(lock)->dep_map, name, key, 0, \
++ (lock)->dep_map.wait_type_inner, \
++ (lock)->dep_map.wait_type_outer, \
++ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_class_and_subclass(lock, key, sub) \
+- lockdep_init_map_waits(&(lock)->dep_map, #key, key, sub,\
+- (lock)->dep_map.wait_type_inner, \
+- (lock)->dep_map.wait_type_outer)
++ lockdep_init_map_type(&(lock)->dep_map, #key, key, sub, \
++ (lock)->dep_map.wait_type_inner, \
++ (lock)->dep_map.wait_type_outer, \
++ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_subclass(lock, sub) \
+- lockdep_init_map_waits(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
+- (lock)->dep_map.wait_type_inner, \
+- (lock)->dep_map.wait_type_outer)
++ lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
++ (lock)->dep_map.wait_type_inner, \
++ (lock)->dep_map.wait_type_outer, \
++ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_novalidate_class(lock) \
+ lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock)
+diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
+index 20f1e3ff60130..8eca7f25c4320 100644
+--- a/include/linux/mbcache.h
++++ b/include/linux/mbcache.h
+@@ -30,15 +30,23 @@ void mb_cache_destroy(struct mb_cache *cache);
+ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ u64 value, bool reusable);
+ void __mb_cache_entry_free(struct mb_cache_entry *entry);
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry);
+ static inline int mb_cache_entry_put(struct mb_cache *cache,
+ struct mb_cache_entry *entry)
+ {
+- if (!atomic_dec_and_test(&entry->e_refcnt))
++ unsigned int cnt = atomic_dec_return(&entry->e_refcnt);
++
++ if (cnt > 0) {
++ if (cnt <= 3)
++ wake_up_var(&entry->e_refcnt);
+ return 0;
++ }
+ __mb_cache_entry_free(entry);
+ return 1;
+ }
+
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++ u32 key, u64 value);
+ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value);
+ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
+ u64 value);
+diff --git a/include/linux/mdev.h b/include/linux/mdev.h
+index bb539794f54a8..47ad3b104d9e7 100644
+--- a/include/linux/mdev.h
++++ b/include/linux/mdev.h
+@@ -65,11 +65,6 @@ struct mdev_driver {
+ struct device_driver driver;
+ };
+
+-static inline const guid_t *mdev_uuid(struct mdev_device *mdev)
+-{
+- return &mdev->uuid;
+-}
+-
+ extern struct bus_type mdev_bus_type;
+
+ int mdev_register_device(struct device *dev, struct mdev_driver *mdev_driver);
+diff --git a/include/linux/mfd/t7l66xb.h b/include/linux/mfd/t7l66xb.h
+index 69632c1b07bd8..ae3e7a5c5219b 100644
+--- a/include/linux/mfd/t7l66xb.h
++++ b/include/linux/mfd/t7l66xb.h
+@@ -12,7 +12,6 @@
+
+ struct t7l66xb_platform_data {
+ int (*enable)(struct platform_device *dev);
+- int (*disable)(struct platform_device *dev);
+ int (*suspend)(struct platform_device *dev);
+ int (*resume)(struct platform_device *dev);
+
+diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h
+index 861e606b820fa..b7bce4983638f 100644
+--- a/include/linux/once_lite.h
++++ b/include/linux/once_lite.h
+@@ -9,15 +9,27 @@
+ */
+ #define DO_ONCE_LITE(func, ...) \
+ DO_ONCE_LITE_IF(true, func, ##__VA_ARGS__)
+-#define DO_ONCE_LITE_IF(condition, func, ...) \
++
++#define __ONCE_LITE_IF(condition) \
+ ({ \
+ static bool __section(".data.once") __already_done; \
+- bool __ret_do_once = !!(condition); \
++ bool __ret_cond = !!(condition); \
++ bool __ret_once = false; \
+ \
+- if (unlikely(__ret_do_once && !__already_done)) { \
++ if (unlikely(__ret_cond && !__already_done)) { \
+ __already_done = true; \
+- func(__VA_ARGS__); \
++ __ret_once = true; \
+ } \
++ unlikely(__ret_once); \
++ })
++
++#define DO_ONCE_LITE_IF(condition, func, ...) \
++ ({ \
++ bool __ret_do_once = !!(condition); \
++ \
++ if (__ONCE_LITE_IF(__ret_do_once)) \
++ func(__VA_ARGS__); \
++ \
+ unlikely(__ret_do_once); \
+ })
+
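
The refactor splits the once-only bookkeeping (__ONCE_LITE_IF) from the call-site action, so the "has this fired yet" test can be reused without invoking a function. A minimal usage sketch, assuming a hypothetical caller (the variables and message are illustrative):

    /* Warn only the first time the condition is ever observed true;
     * later true evaluations are silently skipped. */
    DO_ONCE_LITE_IF(len > max_len, pr_warn,
                    "len %d exceeds max %d\n", len, max_len);
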
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index cb0fd633a6106..4ea4969241062 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -229,6 +229,15 @@ static inline bool pipe_buf_try_steal(struct pipe_inode_info *pipe,
+ return buf->ops->try_steal(pipe, buf);
+ }
+
++static inline void pipe_discard_from(struct pipe_inode_info *pipe,
++ unsigned int old_head)
++{
++ unsigned int mask = pipe->ring_size - 1;
++
++ while (pipe->head > old_head)
++ pipe_buf_release(pipe, &pipe->bufs[--pipe->head & mask]);
++}
++
+ /* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual
+ memory allocation, whereas PIPE_BUF makes atomicity guarantees. */
+ #define PIPE_SIZE PAGE_SIZE
+diff --git a/include/linux/platform-feature.h b/include/linux/platform-feature.h
+index b2f48be999fa4..6ed859928b978 100644
+--- a/include/linux/platform-feature.h
++++ b/include/linux/platform-feature.h
+@@ -6,11 +6,7 @@
+ #include <asm/platform-feature.h>
+
+ /* The platform features are starting with the architecture specific ones. */
+-
+-/* Used to enable platform specific DMA handling for virtio devices. */
+-#define PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS (0 + PLATFORM_ARCH_FEAT_N)
+-
+-#define PLATFORM_FEAT_N (1 + PLATFORM_ARCH_FEAT_N)
++#define PLATFORM_FEAT_N (0 + PLATFORM_ARCH_FEAT_N)
+
+ void platform_set(unsigned int feature);
+ void platform_clear(unsigned int feature);
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index 9ec23138e4107..bf80adca980b9 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -325,8 +325,8 @@ struct page_vma_mapped_walk {
+ #define DEFINE_PAGE_VMA_WALK(name, _page, _vma, _address, _flags) \
+ struct page_vma_mapped_walk name = { \
+ .pfn = page_to_pfn(_page), \
+- .nr_pages = compound_nr(page), \
+- .pgoff = page_to_pgoff(page), \
++ .nr_pages = compound_nr(_page), \
++ .pgoff = page_to_pgoff(_page), \
+ .vma = _vma, \
+ .address = _address, \
+ .flags = _flags, \
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index c46f3a63b758f..6d877c7e22ffd 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1813,7 +1813,7 @@ current_restore_flags(unsigned long orig_flags, unsigned long flags)
+ }
+
+ extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
+-extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed);
++extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus);
+ #ifdef CONFIG_SMP
+ extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
+ extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index e5af028c08b49..994c25640e156 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -39,20 +39,12 @@ static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *p)
+ }
+ extern void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task);
+ extern void rt_mutex_adjust_pi(struct task_struct *p);
+-static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
+-{
+- return tsk->pi_blocked_on != NULL;
+-}
+ #else
+ static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+ {
+ return NULL;
+ }
+ # define rt_mutex_adjust_pi(p) do { } while (0)
+-static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
+-{
+- return false;
+-}
+ #endif
+
+ extern void normalize_rt_tasks(void);
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 56cffe42abbc4..816df6cc444e1 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -81,6 +81,7 @@ struct sched_domain_shared {
+ atomic_t ref;
+ atomic_t nr_busy_cpus;
+ int has_idle_cores;
++ int nr_idle_scan;
+ };
+
+ struct sched_domain {
+diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h
+index 76ce3f3ac0f22..bf6f0decb3f6d 100644
+--- a/include/linux/soundwire/sdw.h
++++ b/include/linux/soundwire/sdw.h
+@@ -646,9 +646,6 @@ struct sdw_slave_ops {
+ * @dev_num: Current Device Number, values can be 0 or dev_num_sticky
+ * @dev_num_sticky: one-time static Device Number assigned by Bus
+ * @probed: boolean tracking driver state
+- * @probe_complete: completion utility to control potential races
+- * on startup between driver probe/initialization and SoundWire
+- * Slave state changes/implementation-defined interrupts
+ * @enumeration_complete: completion utility to control potential races
+ * on startup between device enumeration and read/write access to the
+ * Slave device
+@@ -663,6 +660,7 @@ struct sdw_slave_ops {
+ * for a Slave happens for the first time after enumeration
+ * @is_mockup_device: status flag used to squelch errors in the command/control
+ * protocol for SoundWire mockup devices
++ * @sdw_dev_lock: mutex used to protect callbacks/remove races
+ */
+ struct sdw_slave {
+ struct sdw_slave_id id;
+@@ -680,12 +678,12 @@ struct sdw_slave {
+ u16 dev_num;
+ u16 dev_num_sticky;
+ bool probed;
+- struct completion probe_complete;
+ struct completion enumeration_complete;
+ struct completion initialization_complete;
+ u32 unattach_request;
+ bool first_interrupt_done;
+ bool is_mockup_device;
++ struct mutex sdw_dev_lock; /* protect callbacks/remove races */
+ };
+
+ #define dev_to_sdw_dev(_dev) container_of(_dev, struct sdw_slave, dev)
+diff --git a/include/linux/swapops.h b/include/linux/swapops.h
+index f24775b418807..bb7afd03a324f 100644
+--- a/include/linux/swapops.h
++++ b/include/linux/swapops.h
+@@ -244,8 +244,10 @@ extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
+ spinlock_t *ptl);
+ extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long address);
+-extern void migration_entry_wait_huge(struct vm_area_struct *vma,
+- struct mm_struct *mm, pte_t *pte);
++#ifdef CONFIG_HUGETLB_PAGE
++extern void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl);
++extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
++#endif
+ #else
+ static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
+ {
+@@ -271,8 +273,10 @@ static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
+ spinlock_t *ptl) { }
+ static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long address) { }
+-static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
+- struct mm_struct *mm, pte_t *pte) { }
++#ifdef CONFIG_HUGETLB_PAGE
++static inline void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl) { }
++static inline void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte) { }
++#endif
+ static inline int is_writable_migration_entry(swp_entry_t entry)
+ {
+ return 0;
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 739ba9a03ec16..20c0ff54b7a0d 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -157,7 +157,7 @@ struct tcg_algorithm_info {
+ * Return: size of the event on success, 0 on failure
+ */
+
+-static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
++static __always_inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ struct tcg_pcr_event *event_header,
+ bool do_mapping)
+ {
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index e6e95a9f07a52..b18759a673c66 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -916,6 +916,24 @@ perf_trace_buf_submit(void *raw_data, int size, int rctx, u16 type,
+
+ #endif
+
++#define TRACE_EVENT_STR_MAX 512
++
++/*
++ * gcc warns that you can not use a va_list in an inlined
++ * function. But lets me make it into a macro :-/
++ */
++#define __trace_event_vstr_len(fmt, va) \
++({ \
++ va_list __ap; \
++ int __ret; \
++ \
++ va_copy(__ap, *(va)); \
++ __ret = vsnprintf(NULL, 0, fmt, __ap) + 1; \
++ va_end(__ap); \
++ \
++ min(__ret, TRACE_EVENT_STR_MAX); \
++})
++
+ #endif /* _LINUX_TRACE_EVENT_H */
+
+ /*
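
__trace_event_vstr_len() takes a pointer to a va_list and va_copy()s it internally, so the caller's list is left untouched for later use. A minimal sketch of the calling convention, assuming a hypothetical vararg helper (example() is illustrative):

    static void example(const char *fmt, ...)
    {
            va_list ap;
            int len;

            va_start(ap, fmt);
            /* Pass the address of the va_list; the macro va_copy()s
             * it internally and caps the result at TRACE_EVENT_STR_MAX. */
            len = __trace_event_vstr_len(fmt, &ap);
            va_end(ap);

            pr_info("event string needs %d bytes\n", len);
    }
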
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 2c1fc9212cf28..98d1921f02b1e 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -66,6 +66,7 @@
+
+ struct giveback_urb_bh {
+ bool running;
++ bool high_prio;
+ spinlock_t lock;
+ struct list_head head;
+ struct tasklet_struct bh;
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index aa888cc517578..d6c592565be73 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -32,6 +32,11 @@ struct vfio_device_set {
+ struct vfio_device {
+ struct device *dev;
+ const struct vfio_device_ops *ops;
++ /*
++ * mig_ops is a static property of the vfio_device which must be set
++ * prior to registering the vfio_device.
++ */
++ const struct vfio_migration_ops *mig_ops;
+ struct vfio_group *group;
+ struct vfio_device_set *dev_set;
+ struct list_head dev_set_list;
+@@ -61,16 +66,6 @@ struct vfio_device {
+ * match, -errno for abort (ex. match with insufficient or incorrect
+ * additional args)
+ * @device_feature: Optional, fill in the VFIO_DEVICE_FEATURE ioctl
+- * @migration_set_state: Optional callback to change the migration state for
+- * devices that support migration. It's mandatory for
+- * VFIO_DEVICE_FEATURE_MIGRATION migration support.
+- * The returned FD is used for data transfer according to the FSM
+- * definition. The driver is responsible to ensure that FD reaches end
+- * of stream or error whenever the migration FSM leaves a data transfer
+- * state or before close_device() returns.
+- * @migration_get_state: Optional callback to get the migration state for
+- * devices that support migration. It's mandatory for
+- * VFIO_DEVICE_FEATURE_MIGRATION migration support.
+ */
+ struct vfio_device_ops {
+ char *name;
+@@ -87,6 +82,21 @@ struct vfio_device_ops {
+ int (*match)(struct vfio_device *vdev, char *buf);
+ int (*device_feature)(struct vfio_device *device, u32 flags,
+ void __user *arg, size_t argsz);
++};
++
++/**
++ * @migration_set_state: Optional callback to change the migration state for
++ * devices that support migration. It's mandatory for
++ * VFIO_DEVICE_FEATURE_MIGRATION migration support.
++ * The returned FD is used for data transfer according to the FSM
++ * definition. The driver is responsible to ensure that FD reaches end
++ * of stream or error whenever the migration FSM leaves a data transfer
++ * state or before close_device() returns.
++ * @migration_get_state: Optional callback to get the migration state for
++ * devices that support migration. It's mandatory for
++ * VFIO_DEVICE_FEATURE_MIGRATION migration support.
++ */
++struct vfio_migration_ops {
+ struct file *(*migration_set_state)(
+ struct vfio_device *device,
+ enum vfio_device_mig_state new_state);
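
With the migration callbacks moved out of vfio_device_ops into their own struct, a driver now points the device at its migration ops before registration. A minimal sketch, assuming hypothetical my_set_state/my_get_state callbacks (the names are illustrative):

    static const struct vfio_migration_ops my_mig_ops = {
            .migration_set_state = my_set_state,
            .migration_get_state = my_get_state,
    };

    /* mig_ops is a static property of the vfio_device: per the
     * comment in the hunk above, set it before registering. */
    vdev->mig_ops = &my_mig_ops;
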
+diff --git a/include/linux/virtio_anchor.h b/include/linux/virtio_anchor.h
+new file mode 100644
+index 0000000000000..432e6c00b3cae
+--- /dev/null
++++ b/include/linux/virtio_anchor.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_VIRTIO_ANCHOR_H
++#define _LINUX_VIRTIO_ANCHOR_H
++
++#ifdef CONFIG_VIRTIO_ANCHOR
++struct virtio_device;
++
++bool virtio_require_restricted_mem_acc(struct virtio_device *dev);
++extern bool (*virtio_check_mem_acc_cb)(struct virtio_device *dev);
++
++static inline void virtio_set_mem_acc_cb(bool (*func)(struct virtio_device *))
++{
++ virtio_check_mem_acc_cb = func;
++}
++#else
++#define virtio_set_mem_acc_cb(func) do { } while (0)
++#endif
++
++#endif /* _LINUX_VIRTIO_ANCHOR_H */
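
The new anchor replaces the PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS platform feature (removed in the platform-feature.h hunk earlier) with a callback. A minimal sketch of how platform setup code would use it, assuming it runs from an early-init path (the call site is illustrative):

    /* Require restricted memory access checks for every virtio
     * device on this platform, using the helper the header itself
     * exports. */
    virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
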
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 851e07da2583f..58cfbf81447cc 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -544,10 +544,11 @@ do { \
+ \
+ hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC, \
+ HRTIMER_MODE_REL); \
+- if ((timeout) != KTIME_MAX) \
+- hrtimer_start_range_ns(&__t.timer, timeout, \
+- current->timer_slack_ns, \
+- HRTIMER_MODE_REL); \
++ if ((timeout) != KTIME_MAX) { \
++ hrtimer_set_expires_range_ns(&__t.timer, timeout, \
++ current->timer_slack_ns); \
++ hrtimer_sleeper_start_expires(&__t, HRTIMER_MODE_REL); \
++ } \
+ \
+ __ret = ___wait_event(wq_head, condition, state, 0, 0, \
+ if (!__t.task) { \
+diff --git a/include/media/hevc-ctrls.h b/include/media/hevc-ctrls.h
+index 01ccda48d8c57..88e804578cb19 100644
+--- a/include/media/hevc-ctrls.h
++++ b/include/media/hevc-ctrls.h
+@@ -135,7 +135,7 @@ struct v4l2_hevc_dpb_entry {
+ __u64 timestamp;
+ __u8 flags;
+ __u8 field_pic;
+- __u16 pic_order_cnt[2];
++ __s32 pic_order_cnt_val;
+ __u8 padding[2];
+ };
+
+@@ -178,7 +178,7 @@ struct v4l2_ctrl_hevc_slice_params {
+ /* ISO/IEC 23008-2, ITU-T Rec. H.265: General slice segment header */
+ __u8 slice_type;
+ __u8 colour_plane_id;
+- __u16 slice_pic_order_cnt;
++ __s32 slice_pic_order_cnt;
+ __u8 num_ref_idx_l0_active_minus1;
+ __u8 num_ref_idx_l1_active_minus1;
+ __u8 collocated_ref_idx;
+diff --git a/include/net/9p/client.h b/include/net/9p/client.h
+index ec1d1706f43c0..cb78e0e333324 100644
+--- a/include/net/9p/client.h
++++ b/include/net/9p/client.h
+@@ -76,7 +76,7 @@ enum p9_req_status_t {
+ struct p9_req_t {
+ int status;
+ int t_err;
+- struct kref refcount;
++ refcount_t refcount;
+ wait_queue_head_t wq;
+ struct p9_fcall tc;
+ struct p9_fcall rc;
+@@ -227,15 +227,15 @@ struct p9_req_t *p9_tag_lookup(struct p9_client *c, u16 tag);
+
+ static inline void p9_req_get(struct p9_req_t *r)
+ {
+- kref_get(&r->refcount);
++ refcount_inc(&r->refcount);
+ }
+
+ static inline int p9_req_try_get(struct p9_req_t *r)
+ {
+- return kref_get_unless_zero(&r->refcount);
++ return refcount_inc_not_zero(&r->refcount);
+ }
+
+-int p9_req_put(struct p9_req_t *r);
++int p9_req_put(struct p9_client *c, struct p9_req_t *r);
+
+ void p9_client_cb(struct p9_client *c, struct p9_req_t *req, int status);
+
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index a427a05672e2a..f8cf3629a4193 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -236,6 +236,7 @@ typedef struct ax25_cb {
+ ax25_address source_addr, dest_addr;
+ ax25_digi *digipeat;
+ ax25_dev *ax25_dev;
++ netdevice_tracker dev_tracker;
+ unsigned char iamdigi;
+ unsigned char state, modulus, pidincl;
+ unsigned short vs, vr, va;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index fe7935be7dc44..4a45c48eb0d25 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -361,6 +361,7 @@ enum {
+ HCI_QUALITY_REPORT,
+ HCI_OFFLOAD_CODECS_ENABLED,
+ HCI_LE_SIMULTANEOUS_ROLES,
++ HCI_CMD_DRAIN_WORKQUEUE,
+
+ __HCI_NUM_FLAGS,
+ };
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 80f41446b1f0e..b3afb02c8a4ee 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -1158,6 +1158,7 @@ struct cfg80211_mbssid_elems {
+
+ /**
+ * struct cfg80211_beacon_data - beacon data
++ * @link_id: the link ID for the AP MLD link sending this beacon
+ * @head: head portion of beacon (before TIM IE)
+ * or %NULL if not changed
+ * @tail: tail portion of beacon (after TIM IE)
+@@ -1188,6 +1189,8 @@ struct cfg80211_mbssid_elems {
+ * attribute is present in beacon data or not.
+ */
+ struct cfg80211_beacon_data {
++ unsigned int link_id;
++
+ const u8 *head, *tail;
+ const u8 *beacon_ies;
+ const u8 *proberesp_ies;
+@@ -4201,7 +4204,8 @@ struct cfg80211_ops {
+ struct cfg80211_ap_settings *settings);
+ int (*change_beacon)(struct wiphy *wiphy, struct net_device *dev,
+ struct cfg80211_beacon_data *info);
+- int (*stop_ap)(struct wiphy *wiphy, struct net_device *dev);
++ int (*stop_ap)(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id);
+
+
+ int (*add_station)(struct wiphy *wiphy, struct net_device *dev,
+@@ -4309,6 +4313,7 @@ struct cfg80211_ops {
+
+ int (*set_bitrate_mask)(struct wiphy *wiphy,
+ struct net_device *dev,
++ unsigned int link_id,
+ const u8 *peer,
+ const struct cfg80211_bitrate_mask *mask);
+
+@@ -4384,6 +4389,7 @@ struct cfg80211_ops {
+
+ int (*get_channel)(struct wiphy *wiphy,
+ struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef);
+
+ int (*start_p2p_device)(struct wiphy *wiphy,
+@@ -4420,6 +4426,7 @@ struct cfg80211_ops {
+ struct cfg80211_qos_map *qos_map);
+
+ int (*set_ap_chanwidth)(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef);
+
+ int (*add_tx_ts)(struct wiphy *wiphy, struct net_device *dev,
+@@ -4545,10 +4552,14 @@ struct cfg80211_ops {
+ * @WIPHY_FLAG_HAS_STATIC_WEP: The device supports static WEP key installation
+ * before connection.
+ * @WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK: The device supports bigger kek and kck keys
++ * @WIPHY_FLAG_SUPPORTS_MLO: This is a temporary flag gating the MLO APIs,
++ * in order to not have them reachable in normal drivers, until we have
++ * complete feature/interface combinations/etc. advertisement. No driver
++ * should set this flag for now.
+ */
+ enum wiphy_flags {
+ WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK = BIT(0),
+- /* use hole at 1 */
++ WIPHY_FLAG_SUPPORTS_MLO = BIT(1),
+ WIPHY_FLAG_SPLIT_SCAN_6GHZ = BIT(2),
+ WIPHY_FLAG_NETNS_OK = BIT(3),
+ WIPHY_FLAG_PS_ON_BY_DEFAULT = BIT(4),
+@@ -5505,6 +5516,8 @@ static inline void wiphy_unlock(struct wiphy *wiphy)
+ * @netdev: (private) Used to reference back to the netdev, may be %NULL
+ * @identifier: (private) Identifier used in nl80211 to identify this
+ * wireless device if it has no netdev
++ * @connected_addr: (private) BSSID or AP MLD address if connected
++ * @connected: indicates if connected or not (STA mode)
+ * @current_bss: (private) Used by the internal configuration code
+ * @chandef: (private) Used by the internal configuration code to track
+ * the user-set channel definition.
+@@ -5585,8 +5598,6 @@ struct wireless_dev {
+ u8 address[ETH_ALEN] __aligned(sizeof(u16));
+
+ /* currently used for IBSS and SME - might be rearranged later */
+- u8 ssid[IEEE80211_MAX_SSID_LEN];
+- u8 ssid_len, mesh_id_len, mesh_id_up_len;
+ struct cfg80211_conn *conn;
+ struct cfg80211_cached_keys *connect_keys;
+ enum ieee80211_bss_type conn_bss_type;
+@@ -5598,20 +5609,17 @@ struct wireless_dev {
+ struct list_head event_list;
+ spinlock_t event_lock;
+
+- struct cfg80211_internal_bss *current_bss; /* associated / joined */
+- struct cfg80211_chan_def preset_chandef;
+- struct cfg80211_chan_def chandef;
++ u8 connected:1;
+
+ bool ps;
+ int ps_timeout;
+
+- int beacon_interval;
+-
+ u32 ap_unexpected_nlportid;
+
+ u32 owner_nlportid;
+ bool nl_owner_dead;
+
++ /* FIXME: need to rework radar detection for MLO */
+ bool cac_started;
+ unsigned long cac_start_time;
+ unsigned int cac_time_ms;
+@@ -5639,6 +5647,50 @@ struct wireless_dev {
+ struct work_struct pmsr_free_wk;
+
+ unsigned long unprot_beacon_reported;
++
++ union {
++ struct {
++ u8 connected_addr[ETH_ALEN] __aligned(2);
++ u8 ssid[IEEE80211_MAX_SSID_LEN];
++ u8 ssid_len;
++ } client;
++ struct {
++ int beacon_interval;
++ struct cfg80211_chan_def preset_chandef;
++ struct cfg80211_chan_def chandef;
++ u8 id[IEEE80211_MAX_SSID_LEN];
++ u8 id_len, id_up_len;
++ } mesh;
++ struct {
++ struct cfg80211_chan_def preset_chandef;
++ u8 ssid[IEEE80211_MAX_SSID_LEN];
++ u8 ssid_len;
++ } ap;
++ struct {
++ struct cfg80211_internal_bss *current_bss;
++ struct cfg80211_chan_def chandef;
++ int beacon_interval;
++ u8 ssid[IEEE80211_MAX_SSID_LEN];
++ u8 ssid_len;
++ } ibss;
++ struct {
++ struct cfg80211_chan_def chandef;
++ } ocb;
++ } u;
++
++ struct {
++ u8 addr[ETH_ALEN] __aligned(2);
++ union {
++ struct {
++ unsigned int beacon_interval;
++ struct cfg80211_chan_def chandef;
++ } ap;
++ struct {
++ struct cfg80211_internal_bss *current_bss;
++ } client;
++ };
++ } links[IEEE80211_MLD_MAX_NUM_LINKS];
++ u16 valid_links;
+ };
+
+ static inline const u8 *wdev_address(struct wireless_dev *wdev)
+@@ -5667,6 +5719,31 @@ static inline void *wdev_priv(struct wireless_dev *wdev)
+ return wiphy_priv(wdev->wiphy);
+ }
+
++/**
++ * wdev_chandef - return chandef pointer from wireless_dev
++ * @wdev: the wdev
++ * @link_id: the link ID for MLO
++ *
++ * Return: The chandef depending on the mode, or %NULL.
++ */
++struct cfg80211_chan_def *wdev_chandef(struct wireless_dev *wdev,
++ unsigned int link_id);
++
++static inline void WARN_INVALID_LINK_ID(struct wireless_dev *wdev,
++ unsigned int link_id)
++{
++ WARN_ON(link_id && !wdev->valid_links);
++ WARN_ON(wdev->valid_links &&
++ !(wdev->valid_links & BIT(link_id)));
++}
++
++#define for_each_valid_link(wdev, link_id) \
++ for (link_id = 0; \
++ link_id < ((wdev)->valid_links ? ARRAY_SIZE((wdev)->links) : 1); \
++ link_id++) \
++ if (!(wdev)->valid_links || \
++ ((wdev)->valid_links & BIT(link_id)))
++
+ /**
+ * DOC: Utility functions
+ *
+@@ -7882,12 +7959,14 @@ bool cfg80211_reg_can_beacon_relax(struct wiphy *wiphy,
+ * cfg80211_ch_switch_notify - update wdev channel and notify userspace
+ * @dev: the device which switched channels
+ * @chandef: the new channel definition
++ * @link_id: the link ID for MLO, must be 0 for non-MLO
+ *
+ * Caller must acquire wdev_lock, therefore must only be called from sleepable
+ * driver context!
+ */
+ void cfg80211_ch_switch_notify(struct net_device *dev,
+- struct cfg80211_chan_def *chandef);
++ struct cfg80211_chan_def *chandef,
++ unsigned int link_id);
+
+ /*
+ * cfg80211_ch_switch_started_notify - notify channel switch start
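
The for_each_valid_link() macro added above is written so that non-MLO interfaces (valid_links == 0) still iterate exactly once with link_id == 0. A minimal caller sketch, assuming appropriate wdev locking is already held (the loop body is illustrative):

    unsigned int link_id;

    for_each_valid_link(wdev, link_id) {
            /* wdev_chandef() is the per-link accessor declared in
             * this hunk; it may return NULL depending on mode. */
            struct cfg80211_chan_def *chandef = wdev_chandef(wdev, link_id);

            if (chandef)
                    pr_info("link %u on freq %u\n", link_id,
                            chandef->chan->center_freq);
    }
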
+diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h
+index f259e1ae14ba0..56f1286583d3c 100644
+--- a/include/net/inet6_hashtables.h
++++ b/include/net/inet6_hashtables.h
+@@ -110,8 +110,6 @@ static inline bool inet6_match(struct net *net, const struct sock *sk,
+ const __portpair ports,
+ const int dif, const int sdif)
+ {
+- int bound_dev_if;
+-
+ if (!net_eq(sock_net(sk), net) ||
+ sk->sk_family != AF_INET6 ||
+ sk->sk_portpair != ports ||
+@@ -119,8 +117,9 @@ static inline bool inet6_match(struct net *net, const struct sock *sk,
+ !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr))
+ return false;
+
+- bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
+- return bound_dev_if == dif || bound_dev_if == sdif;
++ /* READ_ONCE() paired with WRITE_ONCE() in sock_bindtoindex_locked() */
++ return inet_sk_bound_dev_eq(net, READ_ONCE(sk->sk_bound_dev_if), dif,
++ sdif);
+ }
+ #endif /* IS_ENABLED(CONFIG_IPV6) */
+
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index fd6b510d114bc..e9cf2157ed8ac 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -175,17 +175,6 @@ static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo)
+ hashinfo->ehash_locks = NULL;
+ }
+
+-static inline bool inet_sk_bound_dev_eq(struct net *net, int bound_dev_if,
+- int dif, int sdif)
+-{
+-#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
+- return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_tcp_l3mdev_accept),
+- bound_dev_if, dif, sdif);
+-#else
+- return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
+-#endif
+-}
+-
+ struct inet_bind_bucket *
+ inet_bind_bucket_create(struct kmem_cache *cachep, struct net *net,
+ struct inet_bind_hashbucket *head,
+@@ -271,16 +260,14 @@ static inline bool inet_match(struct net *net, const struct sock *sk,
+ const __addrpair cookie, const __portpair ports,
+ int dif, int sdif)
+ {
+- int bound_dev_if;
+-
+ if (!net_eq(sock_net(sk), net) ||
+ sk->sk_portpair != ports ||
+ sk->sk_addrpair != cookie)
+ return false;
+
+- /* Paired with WRITE_ONCE() from sock_bindtoindex_locked() */
+- bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
+- return bound_dev_if == dif || bound_dev_if == sdif;
++ /* READ_ONCE() paired with WRITE_ONCE() in sock_bindtoindex_locked() */
++ return inet_sk_bound_dev_eq(net, READ_ONCE(sk->sk_bound_dev_if), dif,
++ sdif);
+ }
+
+ /* Sockets in TCP_CLOSE state are _always_ taken out of the hash, so we need
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 6395f6b9a5d29..bf5654ce711ef 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -149,6 +149,17 @@ static inline bool inet_bound_dev_eq(bool l3mdev_accept, int bound_dev_if,
+ return bound_dev_if == dif || bound_dev_if == sdif;
+ }
+
++static inline bool inet_sk_bound_dev_eq(struct net *net, int bound_dev_if,
++ int dif, int sdif)
++{
++#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
++ return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_tcp_l3mdev_accept),
++ bound_dev_if, dif, sdif);
++#else
++ return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);
++#endif
++}
++
+ struct inet_cork {
+ unsigned int flags;
+ __be32 addr;
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index c24fa934221dd..20f60d9da7418 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -54,6 +54,7 @@ struct ip_tunnel_key {
+ __be32 label; /* Flow Label for IPv6 */
+ __be16 tp_src;
+ __be16 tp_dst;
++ __u8 flow_flags;
+ };
+
+ /* Flags for ip_tunnel_info mode. */
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 47642b020706b..d95d8cbfc6796 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -636,6 +636,19 @@ struct ieee80211_fils_discovery {
+ * @tx_pwr_env_num: number of @tx_pwr_env.
+ * @pwr_reduction: power constraint of BSS.
+ * @eht_support: does this BSS support EHT
++ * @csa_active: marks whether a channel switch is going on. Internally it is
++ * write-protected by sdata_lock and local->mtx so holding either is fine
++ * for read access.
++ * @mu_mimo_owner: indicates interface owns MU-MIMO capability
++ * @chanctx_conf: The channel context this interface is assigned to, or %NULL
++ * when it is not assigned. This pointer is RCU-protected due to the TX
++ * path needing to access it; even though the netdev carrier will always
++ * be off when it is %NULL there can still be races and packets could be
++ * processed after it switches back to %NULL.
++ * @color_change_active: marks whether a color change is ongoing. Internally it is
++ * write-protected by sdata_lock and local->mtx so holding either is fine
++ * for read access.
++ * @color_change_color: the bss color that will be used after the change.
+ */
+ struct ieee80211_bss_conf {
+ const u8 *bssid;
+@@ -711,6 +724,13 @@ struct ieee80211_bss_conf {
+ u8 tx_pwr_env_num;
+ u8 pwr_reduction;
+ bool eht_support;
++
++ bool csa_active;
++ bool mu_mimo_owner;
++ struct ieee80211_chanctx_conf __rcu *chanctx_conf;
++
++ bool color_change_active;
++ u8 color_change_color;
+ };
+
+ /**
+@@ -1713,10 +1733,6 @@ enum ieee80211_offload_flags {
+ * @addr: address of this interface
+ * @p2p: indicates whether this AP or STA interface is a p2p
+ * interface, i.e. a GO or p2p-sta respectively
+- * @csa_active: marks whether a channel switch is going on. Internally it is
+- * write-protected by sdata_lock and local->mtx so holding either is fine
+- * for read access.
+- * @mu_mimo_owner: indicates interface owns MU-MIMO capability
+ * @driver_flags: flags/capabilities the driver has for this interface,
+ * these need to be set (or cleared) when the interface is added
+ * or, if supported by the driver, the interface type is changed
+@@ -1728,11 +1744,6 @@ enum ieee80211_offload_flags {
+ * restrictions.
+ * @hw_queue: hardware queue for each AC
+ * @cab_queue: content-after-beacon (DTIM beacon really) queue, AP mode only
+- * @chanctx_conf: The channel context this interface is assigned to, or %NULL
+- * when it is not assigned. This pointer is RCU-protected due to the TX
+- * path needing to access it; even though the netdev carrier will always
+- * be off when it is %NULL there can still be races and packets could be
+- * processed after it switches back to %NULL.
+ * @debugfs_dir: debugfs dentry, can be used by drivers to create own per
+ * interface debug files. Note that it will be NULL for the virtual
+ * monitor interface (if that is requested.)
+@@ -1747,10 +1758,6 @@ enum ieee80211_offload_flags {
+ * protected by fq->lock.
+ * @offload_flags: 802.3 -> 802.11 enapsulation offload flags, see
+ * &enum ieee80211_offload_flags.
+- * @color_change_active: marks whether a color change is ongoing. Internally it is
+- * write-protected by sdata_lock and local->mtx so holding either is fine
+- * for read access.
+- * @color_change_color: the bss color that will be used after the change.
+ * @mbssid_tx_vif: Pointer to the transmitting interface if MBSSID is enabled.
+ */
+ struct ieee80211_vif {
+@@ -1758,16 +1765,12 @@ struct ieee80211_vif {
+ struct ieee80211_bss_conf bss_conf;
+ u8 addr[ETH_ALEN] __aligned(2);
+ bool p2p;
+- bool csa_active;
+- bool mu_mimo_owner;
+
+ u8 cab_queue;
+ u8 hw_queue[IEEE80211_NUM_ACS];
+
+ struct ieee80211_txq *txq;
+
+- struct ieee80211_chanctx_conf __rcu *chanctx_conf;
+-
+ u32 driver_flags;
+ u32 offload_flags;
+
+@@ -1780,9 +1783,6 @@ struct ieee80211_vif {
+
+ bool txqs_stopped[IEEE80211_NUM_ACS];
+
+- bool color_change_active;
+- u8 color_change_color;
+-
+ struct ieee80211_vif *mbssid_tx_vif;
+
+ /* must be last */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 64cf655c818cc..b8890ace0f879 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -206,13 +206,18 @@ struct nft_ctx {
+ bool report;
+ };
+
++enum nft_data_desc_flags {
++ NFT_DATA_DESC_SETELEM = (1 << 0),
++};
++
+ struct nft_data_desc {
+ enum nft_data_types type;
++ unsigned int size;
+ unsigned int len;
++ unsigned int flags;
+ };
+
+-int nft_data_init(const struct nft_ctx *ctx,
+- struct nft_data *data, unsigned int size,
++int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data,
+ struct nft_data_desc *desc, const struct nlattr *nla);
+ void nft_data_hold(const struct nft_data *data, enum nft_data_types type);
+ void nft_data_release(const struct nft_data *data, enum nft_data_types type);
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 44a35531952e1..3372a1f67cf4e 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -173,11 +173,28 @@ struct tc_taprio_qopt_offload {
+ struct tc_taprio_sched_entry entries[];
+ };
+
++#if IS_ENABLED(CONFIG_NET_SCH_TAPRIO)
++
+ /* Reference counting */
+ struct tc_taprio_qopt_offload *taprio_offload_get(struct tc_taprio_qopt_offload
+ *offload);
+ void taprio_offload_free(struct tc_taprio_qopt_offload *offload);
+
++#else
++
++/* Reference counting */
++static inline struct tc_taprio_qopt_offload *
++taprio_offload_get(struct tc_taprio_qopt_offload *offload)
++{
++ return NULL;
++}
++
++static inline void taprio_offload_free(struct tc_taprio_qopt_offload *offload)
++{
++}
++
++#endif
++
+ /* Ensure skb_mstamp_ns, which might have been populated with the txtime, is
+ * not mistaken for a software timestamp, because this will otherwise prevent
+ * the dispatch of hardware timestamps to the socket.
+diff --git a/include/net/raw.h b/include/net/raw.h
+index c51a635671a73..537d9d1df890d 100644
+--- a/include/net/raw.h
++++ b/include/net/raw.h
+@@ -20,9 +20,8 @@
+ extern struct proto raw_prot;
+
+ extern struct raw_hashinfo raw_v4_hashinfo;
+-struct sock *__raw_v4_lookup(struct net *net, struct sock *sk,
+- unsigned short num, __be32 raddr,
+- __be32 laddr, int dif, int sdif);
++bool raw_v4_match(struct net *net, struct sock *sk, unsigned short num,
++ __be32 raddr, __be32 laddr, int dif, int sdif);
+
+ int raw_abort(struct sock *sk, int err);
+ void raw_icmp_error(struct sk_buff *, int, u32);
+@@ -34,9 +33,18 @@ int raw_rcv(struct sock *, struct sk_buff *);
+
+ struct raw_hashinfo {
+ rwlock_t lock;
+- struct hlist_head ht[RAW_HTABLE_SIZE];
++ struct hlist_nulls_head ht[RAW_HTABLE_SIZE];
+ };
+
++static inline void raw_hashinfo_init(struct raw_hashinfo *hashinfo)
++{
++ int i;
++
++ rwlock_init(&hashinfo->lock);
++ for (i = 0; i < RAW_HTABLE_SIZE; i++)
++ INIT_HLIST_NULLS_HEAD(&hashinfo->ht[i], i);
++}
++
+ #ifdef CONFIG_PROC_FS
+ int raw_proc_init(void);
+ void raw_proc_exit(void);
+diff --git a/include/net/rawv6.h b/include/net/rawv6.h
+index 53d86b6055e8c..bc70909625f60 100644
+--- a/include/net/rawv6.h
++++ b/include/net/rawv6.h
+@@ -3,11 +3,12 @@
+ #define _NET_RAWV6_H
+
+ #include <net/protocol.h>
++#include <net/raw.h>
+
+ extern struct raw_hashinfo raw_v6_hashinfo;
+-struct sock *__raw_v6_lookup(struct net *net, struct sock *sk,
+- unsigned short num, const struct in6_addr *loc_addr,
+- const struct in6_addr *rmt_addr, int dif, int sdif);
++bool raw_v6_match(struct net *net, struct sock *sk, unsigned short num,
++ const struct in6_addr *loc_addr,
++ const struct in6_addr *rmt_addr, int dif, int sdif);
+
+ int raw_abort(struct sock *sk, int err);
+
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 7a48991cdb198..13944ceea7ed0 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1552,19 +1552,23 @@ static inline bool sk_has_account(struct sock *sk)
+
+ static inline bool sk_wmem_schedule(struct sock *sk, int size)
+ {
++ int delta;
++
+ if (!sk_has_account(sk))
+ return true;
+- return size <= sk->sk_forward_alloc ||
+- __sk_mem_schedule(sk, size, SK_MEM_SEND);
++ delta = size - sk->sk_forward_alloc;
++ return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_SEND);
+ }
+
+ static inline bool
+ sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
+ {
++ int delta;
++
+ if (!sk_has_account(sk))
+ return true;
+- return size <= sk->sk_forward_alloc ||
+- __sk_mem_schedule(sk, size, SK_MEM_RECV) ||
++ delta = size - sk->sk_forward_alloc;
++ return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) ||
+ skb_pfmemalloc(skb);
+ }
+
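
The delta form above avoids re-charging memory the socket already has reserved: with sk_forward_alloc = 3000 and size = 4096, the old code would call __sk_mem_schedule() for the full 4096 bytes, while the new code requests only the missing delta of 1096; when delta <= 0 the existing reservation covers the request and no call is made at all.
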
+diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
+index 4aa0318496688..0774ce97c2f1b 100644
+--- a/include/net/xdp_sock_drv.h
++++ b/include/net/xdp_sock_drv.h
+@@ -95,6 +95,13 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
+ xp_free(xskb);
+ }
+
++static inline void xsk_buff_discard(struct xdp_buff *xdp)
++{
++ struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
++
++ xp_release(xskb);
++}
++
+ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
+ {
+ xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
+@@ -238,6 +245,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
+ {
+ }
+
++static inline void xsk_buff_discard(struct xdp_buff *xdp)
++{
++}
++
+ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
+ {
+ }
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index c0703cd20a993..9758a4a9923f5 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -411,7 +411,7 @@ extern int iscsi_host_add(struct Scsi_Host *shost, struct device *pdev);
+ extern struct Scsi_Host *iscsi_host_alloc(struct scsi_host_template *sht,
+ int dd_data_size,
+ bool xmit_can_sleep);
+-extern void iscsi_host_remove(struct Scsi_Host *shost);
++extern void iscsi_host_remove(struct Scsi_Host *shost, bool is_shutdown);
+ extern void iscsi_host_free(struct Scsi_Host *shost);
+ extern int iscsi_target_alloc(struct scsi_target *starget);
+ extern int iscsi_host_get_max_scsi_cmds(struct Scsi_Host *shost,
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index 9acb8422f6802..d6eab7cb221a7 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -442,6 +442,7 @@ extern struct iscsi_cls_session *iscsi_create_session(struct Scsi_Host *shost,
+ struct iscsi_transport *t,
+ int dd_size,
+ unsigned int target_id);
++extern void iscsi_force_destroy_session(struct iscsi_cls_session *session);
+ extern void iscsi_remove_session(struct iscsi_cls_session *session);
+ extern void iscsi_free_session(struct iscsi_cls_session *session);
+ extern struct iscsi_cls_conn *iscsi_alloc_conn(struct iscsi_cls_session *sess,
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 5f88385a77484..ac151ecc7f19f 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -575,6 +575,7 @@ struct ocelot_ops {
+ int (*psfp_stats_get)(struct ocelot *ocelot, struct flow_cls_offload *f,
+ struct flow_stats *stats);
+ void (*cut_through_fwd)(struct ocelot *ocelot);
++ void (*tas_clock_adjust)(struct ocelot *ocelot);
+ };
+
+ struct ocelot_vcap_policer {
+@@ -669,6 +670,8 @@ struct ocelot_port {
+ /* VLAN that untagged frames are classified to, on ingress */
+ const struct ocelot_bridge_vlan *pvid_vlan;
+
++ struct tc_taprio_qopt_offload *taprio;
++
+ phy_interface_t phy_mode;
+
+ unsigned int ptp_skbs_in_flight;
+@@ -757,6 +760,9 @@ struct ocelot {
+ /* Lock for serializing forwarding domain changes */
+ struct mutex fwd_domain_lock;
+
++ /* Lock for serializing Time-Aware Shaper changes */
++ struct mutex tas_lock;
++
+ struct workqueue_struct *owq;
+
+ u8 ptp:1;
+diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
+index aa2f951b07cdf..6a12eef3ffb75 100644
+--- a/include/trace/events/io_uring.h
++++ b/include/trace/events/io_uring.h
+@@ -622,7 +622,7 @@ TRACE_EVENT(io_uring_cqe_overflow,
+ __entry->ocqe = ocqe;
+ ),
+
+- TP_printk("ring %p, user_data 0x%llx, res %d, flags %x, "
++ TP_printk("ring %p, user_data 0x%llx, res %d, cflags 0x%x, "
+ "overflow_cqe %p",
+ __entry->ctx, __entry->user_data, __entry->res,
+ __entry->cflags, __entry->ocqe)
+diff --git a/include/trace/events/spmi.h b/include/trace/events/spmi.h
+index 8b60efe18ba68..a6819fd85cdf4 100644
+--- a/include/trace/events/spmi.h
++++ b/include/trace/events/spmi.h
+@@ -21,15 +21,15 @@ TRACE_EVENT(spmi_write_begin,
+ __field ( u8, sid )
+ __field ( u16, addr )
+ __field ( u8, len )
+- __dynamic_array ( u8, buf, len + 1 )
++ __dynamic_array ( u8, buf, len )
+ ),
+
+ TP_fast_assign(
+ __entry->opcode = opcode;
+ __entry->sid = sid;
+ __entry->addr = addr;
+- __entry->len = len + 1;
+- memcpy(__get_dynamic_array(buf), buf, len + 1);
++ __entry->len = len;
++ memcpy(__get_dynamic_array(buf), buf, len);
+ ),
+
+ TP_printk("opc=%d sid=%02d addr=0x%04x len=%d buf=0x[%*phD]",
+@@ -92,7 +92,7 @@ TRACE_EVENT(spmi_read_end,
+ __field ( u16, addr )
+ __field ( int, ret )
+ __field ( u8, len )
+- __dynamic_array ( u8, buf, len + 1 )
++ __dynamic_array ( u8, buf, len )
+ ),
+
+ TP_fast_assign(
+@@ -100,8 +100,8 @@ TRACE_EVENT(spmi_read_end,
+ __entry->sid = sid;
+ __entry->addr = addr;
+ __entry->ret = ret;
+- __entry->len = len + 1;
+- memcpy(__get_dynamic_array(buf), buf, len + 1);
++ __entry->len = len;
++ memcpy(__get_dynamic_array(buf), buf, len);
+ ),
+
+ TP_printk("opc=%d sid=%02d addr=0x%04x ret=%d len=%02d buf=0x[%*phD]",
+diff --git a/include/trace/stages/stage1_struct_define.h b/include/trace/stages/stage1_struct_define.h
+index a16783419687e..1b7bab60434c1 100644
+--- a/include/trace/stages/stage1_struct_define.h
++++ b/include/trace/stages/stage1_struct_define.h
+@@ -26,6 +26,9 @@
+ #undef __string_len
+ #define __string_len(item, src, len) __dynamic_array(char, item, -1)
+
++#undef __vstring
++#define __vstring(item, fmt, ap) __dynamic_array(char, item, -1)
++
+ #undef __bitmask
+ #define __bitmask(item, nr_bits) __dynamic_array(char, item, -1)
+
+diff --git a/include/trace/stages/stage2_data_offsets.h b/include/trace/stages/stage2_data_offsets.h
+index 42fd1e8813ecf..1b7a8f764fddd 100644
+--- a/include/trace/stages/stage2_data_offsets.h
++++ b/include/trace/stages/stage2_data_offsets.h
+@@ -32,6 +32,9 @@
+ #undef __string_len
+ #define __string_len(item, src, len) __dynamic_array(char, item, -1)
+
++#undef __vstring
++#define __vstring(item, fmt, ap) __dynamic_array(char, item, -1)
++
+ #undef __bitmask
+ #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
+
+diff --git a/include/trace/stages/stage4_event_fields.h b/include/trace/stages/stage4_event_fields.h
+index e80cdc397a436..80d34f3965555 100644
+--- a/include/trace/stages/stage4_event_fields.h
++++ b/include/trace/stages/stage4_event_fields.h
+@@ -2,16 +2,18 @@
+
+ /* Stage 4 definitions for creating trace events */
+
++#define ALIGN_STRUCTFIELD(type) ((int)(offsetof(struct {char a; type b;}, b)))
++
+ #undef __field_ext
+ #define __field_ext(_type, _item, _filter_type) { \
+ .type = #_type, .name = #_item, \
+- .size = sizeof(_type), .align = __alignof__(_type), \
++ .size = sizeof(_type), .align = ALIGN_STRUCTFIELD(_type), \
+ .is_signed = is_signed_type(_type), .filter_type = _filter_type },
+
+ #undef __field_struct_ext
+ #define __field_struct_ext(_type, _item, _filter_type) { \
+ .type = #_type, .name = #_item, \
+- .size = sizeof(_type), .align = __alignof__(_type), \
++ .size = sizeof(_type), .align = ALIGN_STRUCTFIELD(_type), \
+ 0, .filter_type = _filter_type },
+
+ #undef __field
+@@ -23,7 +25,7 @@
+ #undef __array
+ #define __array(_type, _item, _len) { \
+ .type = #_type"["__stringify(_len)"]", .name = #_item, \
+- .size = sizeof(_type[_len]), .align = __alignof__(_type), \
++ .size = sizeof(_type[_len]), .align = ALIGN_STRUCTFIELD(_type), \
+ .is_signed = is_signed_type(_type), .filter_type = FILTER_OTHER },
+
+ #undef __dynamic_array
+@@ -38,6 +40,9 @@
+ #undef __string_len
+ #define __string_len(item, src, len) __dynamic_array(char, item, -1)
+
++#undef __vstring
++#define __vstring(item, fmt, ap) __dynamic_array(char, item, -1)
++
+ #undef __bitmask
+ #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
+
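
The offsetof trick measures the alignment a type actually receives as a structure member, which on some ABIs is smaller than __alignof__ reports. A self-contained illustration of the difference; the 32-bit x86 numbers are the usual System V psABI values:

    #include <stddef.h>
    #define ALIGN_STRUCTFIELD(type) \
        ((int)(offsetof(struct {char a; type b;}, b)))

    /* On 32-bit x86, __alignof__(long long) is 8 (the preferred
     * alignment), yet a long long embedded in a struct is placed on a
     * 4-byte boundary, so ALIGN_STRUCTFIELD(long long) == 4. Using the
     * in-struct value keeps the exported field offsets consistent with
     * how the entry is actually laid out in the ring buffer. */
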
+diff --git a/include/trace/stages/stage5_get_offsets.h b/include/trace/stages/stage5_get_offsets.h
+index 7ee5931300e6d..fba4c24ed9e60 100644
+--- a/include/trace/stages/stage5_get_offsets.h
++++ b/include/trace/stages/stage5_get_offsets.h
+@@ -39,6 +39,10 @@
+ #undef __string_len
+ #define __string_len(item, src, len) __dynamic_array(char, item, (len) + 1)
+
++#undef __vstring
++#define __vstring(item, fmt, ap) __dynamic_array(char, item, \
++ __trace_event_vstr_len(fmt, ap))
++
+ #undef __rel_dynamic_array
+ #define __rel_dynamic_array(type, item, len) \
+ __item_length = (len) * sizeof(type); \
+diff --git a/include/trace/stages/stage6_event_callback.h b/include/trace/stages/stage6_event_callback.h
+index e1724f73594be..3c554a5853204 100644
+--- a/include/trace/stages/stage6_event_callback.h
++++ b/include/trace/stages/stage6_event_callback.h
+@@ -24,6 +24,9 @@
+ #undef __string_len
+ #define __string_len(item, src, len) __dynamic_array(char, item, -1)
+
++#undef __vstring
++#define __vstring(item, fmt, ap) __dynamic_array(char, item, -1)
++
+ #undef __assign_str
+ #define __assign_str(dst, src) \
+ strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)");
+@@ -35,6 +38,15 @@
+ __get_str(dst)[len] = '\0'; \
+ } while(0)
+
++#undef __assign_vstr
++#define __assign_vstr(dst, fmt, va) \
++ do { \
++ va_list __cp_va; \
++ va_copy(__cp_va, *(va)); \
++ vsnprintf(__get_str(dst), TRACE_EVENT_STR_MAX, fmt, __cp_va); \
++ va_end(__cp_va); \
++ } while (0)
++
+ #undef __bitmask
+ #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
+
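
Taken together, the stage changes add a vsnprintf-style string field: __vstring() sizes the dynamic array from the format and va_list via __trace_event_vstr_len(), and __assign_vstr() formats into it, bounded by TRACE_EVENT_STR_MAX. A hedged sketch of a trace event using the pair; the event name and its arguments are illustrative:

    TRACE_EVENT(dev_vmsg,
        TP_PROTO(struct device *dev, struct va_format *vaf),
        TP_ARGS(dev, vaf),
        TP_STRUCT__entry(
            __string(name, dev_name(dev))
            __vstring(msg, vaf->fmt, vaf->va)
        ),
        TP_fast_assign(
            __assign_str(name, dev_name(dev));
            __assign_vstr(msg, vaf->fmt, vaf->va);
        ),
        TP_printk("%s: %s", __get_str(name), __get_str(msg))
    );
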
+diff --git a/include/uapi/linux/can/error.h b/include/uapi/linux/can/error.h
+index 34633283de641..a1000cb630632 100644
+--- a/include/uapi/linux/can/error.h
++++ b/include/uapi/linux/can/error.h
+@@ -120,6 +120,9 @@
+ #define CAN_ERR_TRX_CANL_SHORT_TO_GND 0x70 /* 0111 0000 */
+ #define CAN_ERR_TRX_CANL_SHORT_TO_CANH 0x80 /* 1000 0000 */
+
+-/* controller specific additional information / data[5..7] */
++/* data[5] is reserved (do not use) */
++
++/* TX error counter / data[6] */
++/* RX error counter / data[7] */
+
+ #endif /* _UAPI_CAN_ERROR_H */
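
With data[5] reserved, the last two bytes of an error frame carry the controller's error counters. A minimal hedged sketch of a reader; cf is an illustrative struct can_frame received with CAN_ERR_FLAG set in can_id:

    u8 txerr = cf->data[6];   /* TX error counter */
    u8 rxerr = cf->data[7];   /* RX error counter */
    /* data[5] is reserved and must not be interpreted. */
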
+diff --git a/include/uapi/linux/dm-ioctl.h b/include/uapi/linux/dm-ioctl.h
+index 2e9550fef90fa..27ad9671f2df8 100644
+--- a/include/uapi/linux/dm-ioctl.h
++++ b/include/uapi/linux/dm-ioctl.h
+@@ -286,9 +286,9 @@ enum {
+ #define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
+
+ #define DM_VERSION_MAJOR 4
+-#define DM_VERSION_MINOR 46
++#define DM_VERSION_MINOR 47
+ #define DM_VERSION_PATCHLEVEL 0
+-#define DM_VERSION_EXTRA "-ioctl (2022-02-22)"
++#define DM_VERSION_EXTRA "-ioctl (2022-07-28)"
+
+ /* Status bits */
+ #define DM_READONLY_FLAG (1 << 0) /* In/Out */
+diff --git a/include/uapi/linux/netfilter/xt_IDLETIMER.h b/include/uapi/linux/netfilter/xt_IDLETIMER.h
+index 49ddcdc61c094..7bfb31a66fc9b 100644
+--- a/include/uapi/linux/netfilter/xt_IDLETIMER.h
++++ b/include/uapi/linux/netfilter/xt_IDLETIMER.h
+@@ -1,6 +1,5 @@
++/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+ /*
+- * linux/include/linux/netfilter/xt_IDLETIMER.h
+- *
+ * Header file for Xtables timer target module.
+ *
+ * Copyright (C) 2004, 2010 Nokia Corporation
+@@ -10,20 +9,6 @@
+ * by Luciano Coelho <luciano.coelho@nokia.com>
+ *
+ * Contact: Luciano Coelho <luciano.coelho@nokia.com>
+- *
+- * This program is free software; you can redistribute it and/or
+- * modify it under the terms of the GNU General Public License
+- * version 2 as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+- * 02110-1301 USA
+ */
+
+ #ifndef _XT_IDLETIMER_H
+diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h
+index d9490e3062a70..509253bf4d119 100644
+--- a/include/uapi/linux/nl80211.h
++++ b/include/uapi/linux/nl80211.h
+@@ -323,6 +323,17 @@
+ * Once the association is done, the driver cleans the FILS AAD data.
+ */
+
++/**
++ * DOC: Multi-Link Operation
++ *
++ * In Multi-Link Operation, a connection between two MLDs utilizes multiple
++ * links. To use this in nl80211, various commands and responses now need
++ * to include, or will include, the new %NL80211_ATTR_MLO_LINKS attribute.
++ * Additionally, various commands that need to operate on a specific link
++ * now need to be given the %NL80211_ATTR_MLO_LINK_ID attribute, e.g. to
++ * use %NL80211_CMD_START_AP or similar functions.
++ */
++
+ /**
+ * enum nl80211_commands - supported nl80211 commands
+ *
+@@ -1237,6 +1248,12 @@
+ * to describe the BSSID address of the AP and %NL80211_ATTR_TIMEOUT to
+ * specify the timeout value.
+ *
++ * @NL80211_CMD_ADD_LINK: Add a new link to an interface. The
++ * %NL80211_ATTR_MLO_LINK_ID attribute is used for the new link.
++ * @NL80211_CMD_REMOVE_LINK: Remove a link from an interface. This may come
++ * without %NL80211_ATTR_MLO_LINK_ID as an easy way to remove all links
++ * in preparation for e.g. roaming to a regular (non-MLO) AP.
++ *
+ * @NL80211_CMD_MAX: highest used command number
+ * @__NL80211_CMD_AFTER_LAST: internal use
+ */
+@@ -1481,6 +1498,9 @@ enum nl80211_commands {
+
+ NL80211_CMD_ASSOC_COMEBACK,
+
++ NL80211_CMD_ADD_LINK,
++ NL80211_CMD_REMOVE_LINK,
++
+ /* add new commands above here */
+
+ /* used to define NL80211_CMD_MAX below */
+@@ -2663,6 +2683,11 @@ enum nl80211_commands {
+ * association request when used with NL80211_CMD_NEW_STATION). Can be set
+ * only if %NL80211_STA_FLAG_WME is set.
+ *
++ * @NL80211_ATTR_MLO_LINK_ID: A (u8) link ID for use with MLO, to be used with
++ * various commands that need a link ID to operate.
++ * @NL80211_ATTR_MLO_LINKS: A nested array of links, each containing some
++ * per-link information and a link ID.
++ *
+ * @NUM_NL80211_ATTR: total number of nl80211_attrs available
+ * @NL80211_ATTR_MAX: highest attribute number currently defined
+ * @__NL80211_ATTR_AFTER_LAST: internal use
+@@ -3177,6 +3202,9 @@ enum nl80211_attrs {
+
+ NL80211_ATTR_DISABLE_EHT,
+
++ NL80211_ATTR_MLO_LINKS,
++ NL80211_ATTR_MLO_LINK_ID,
++
+ /* add attributes here, update the policy in nl80211.c */
+
+ __NL80211_ATTR_AFTER_LAST,
+diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
+index 80546960f8b77..dae0f350c6780 100644
+--- a/include/xen/xen-ops.h
++++ b/include/xen/xen-ops.h
+@@ -5,6 +5,7 @@
+ #include <linux/percpu.h>
+ #include <linux/notifier.h>
+ #include <linux/efi.h>
++#include <linux/virtio_anchor.h>
+ #include <xen/features.h>
+ #include <asm/xen/interface.h>
+ #include <xen/interface/vcpu.h>
+@@ -217,6 +218,7 @@ static inline void xen_preemptible_hcall_end(void) { }
+ #ifdef CONFIG_XEN_GRANT_DMA_OPS
+ void xen_grant_setup_dma_ops(struct device *dev);
+ bool xen_is_grant_dma_device(struct device *dev);
++bool xen_virtio_mem_acc(struct virtio_device *dev);
+ #else
+ static inline void xen_grant_setup_dma_ops(struct device *dev)
+ {
+@@ -225,6 +227,13 @@ static inline bool xen_is_grant_dma_device(struct device *dev)
+ {
+ return false;
+ }
++
++struct virtio_device;
++
++static inline bool xen_virtio_mem_acc(struct virtio_device *dev)
++{
++ return false;
++}
+ #endif /* CONFIG_XEN_GRANT_DMA_OPS */
+
+ #endif /* INCLUDE_XEN_OPS_H */
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index 0780a81e140de..a99bab8175234 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+ extern u64 xen_saved_max_mem_size;
+ #endif
+
+-#include <linux/platform-feature.h>
+-
+-static inline void xen_set_restricted_virtio_memory_access(void)
+-{
+- if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
+- platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+-}
+-
+ #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+diff --git a/init/main.c b/init/main.c
+index 0ee39cdcfcac9..91642a4e69be6 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -99,6 +99,7 @@
+ #include <linux/kcsan.h>
+ #include <linux/init_syscalls.h>
+ #include <linux/stackdepot.h>
++#include <linux/randomize_kstack.h>
+ #include <net/net_namespace.h>
+
+ #include <asm/io.h>
+diff --git a/io_uring/Makefile b/io_uring/Makefile
+new file mode 100644
+index 0000000000000..3680425df9478
+--- /dev/null
++++ b/io_uring/Makefile
+@@ -0,0 +1,6 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# Makefile for io_uring
++
++obj-$(CONFIG_IO_URING) += io_uring.o
++obj-$(CONFIG_IO_WQ) += io-wq.o
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+new file mode 100644
+index 0000000000000..824623bcf1a53
+--- /dev/null
++++ b/io_uring/io-wq.c
+@@ -0,0 +1,1424 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Basic worker thread pool for io_uring
++ *
++ * Copyright (C) 2019 Jens Axboe
++ *
++ */
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/errno.h>
++#include <linux/sched/signal.h>
++#include <linux/percpu.h>
++#include <linux/slab.h>
++#include <linux/rculist_nulls.h>
++#include <linux/cpu.h>
++#include <linux/task_work.h>
++#include <linux/audit.h>
++#include <uapi/linux/io_uring.h>
++
++#include "io-wq.h"
++
++#define WORKER_IDLE_TIMEOUT (5 * HZ)
++
++enum {
++ IO_WORKER_F_UP = 1, /* up and active */
++ IO_WORKER_F_RUNNING = 2, /* account as running */
++ IO_WORKER_F_FREE = 4, /* worker on free list */
++ IO_WORKER_F_BOUND = 8, /* is doing bounded work */
++};
++
++enum {
++ IO_WQ_BIT_EXIT = 0, /* wq exiting */
++};
++
++enum {
++ IO_ACCT_STALLED_BIT = 0, /* stalled on hash */
++};
++
++/*
++ * One for each thread in a wqe pool
++ */
++struct io_worker {
++ refcount_t ref;
++ unsigned flags;
++ struct hlist_nulls_node nulls_node;
++ struct list_head all_list;
++ struct task_struct *task;
++ struct io_wqe *wqe;
++
++ struct io_wq_work *cur_work;
++ struct io_wq_work *next_work;
++ raw_spinlock_t lock;
++
++ struct completion ref_done;
++
++ unsigned long create_state;
++ struct callback_head create_work;
++ int create_index;
++
++ union {
++ struct rcu_head rcu;
++ struct work_struct work;
++ };
++};
++
++#if BITS_PER_LONG == 64
++#define IO_WQ_HASH_ORDER 6
++#else
++#define IO_WQ_HASH_ORDER 5
++#endif
++
++#define IO_WQ_NR_HASH_BUCKETS (1u << IO_WQ_HASH_ORDER)
++
++struct io_wqe_acct {
++ unsigned nr_workers;
++ unsigned max_workers;
++ int index;
++ atomic_t nr_running;
++ raw_spinlock_t lock;
++ struct io_wq_work_list work_list;
++ unsigned long flags;
++};
++
++enum {
++ IO_WQ_ACCT_BOUND,
++ IO_WQ_ACCT_UNBOUND,
++ IO_WQ_ACCT_NR,
++};
++
++/*
++ * Per-node worker thread pool
++ */
++struct io_wqe {
++ raw_spinlock_t lock;
++ struct io_wqe_acct acct[IO_WQ_ACCT_NR];
++
++ int node;
++
++ struct hlist_nulls_head free_list;
++ struct list_head all_list;
++
++ struct wait_queue_entry wait;
++
++ struct io_wq *wq;
++ struct io_wq_work *hash_tail[IO_WQ_NR_HASH_BUCKETS];
++
++ cpumask_var_t cpu_mask;
++};
++
++/*
++ * Per io_wq state
++ */
++struct io_wq {
++ unsigned long state;
++
++ free_work_fn *free_work;
++ io_wq_work_fn *do_work;
++
++ struct io_wq_hash *hash;
++
++ atomic_t worker_refs;
++ struct completion worker_done;
++
++ struct hlist_node cpuhp_node;
++
++ struct task_struct *task;
++
++ struct io_wqe *wqes[];
++};
++
++static enum cpuhp_state io_wq_online;
++
++struct io_cb_cancel_data {
++ work_cancel_fn *fn;
++ void *data;
++ int nr_running;
++ int nr_pending;
++ bool cancel_all;
++};
++
++static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
++static void io_wqe_dec_running(struct io_worker *worker);
++static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
++ struct io_wqe_acct *acct,
++ struct io_cb_cancel_data *match);
++static void create_worker_cb(struct callback_head *cb);
++static void io_wq_cancel_tw_create(struct io_wq *wq);
++
++static bool io_worker_get(struct io_worker *worker)
++{
++ return refcount_inc_not_zero(&worker->ref);
++}
++
++static void io_worker_release(struct io_worker *worker)
++{
++ if (refcount_dec_and_test(&worker->ref))
++ complete(&worker->ref_done);
++}
++
++static inline struct io_wqe_acct *io_get_acct(struct io_wqe *wqe, bool bound)
++{
++ return &wqe->acct[bound ? IO_WQ_ACCT_BOUND : IO_WQ_ACCT_UNBOUND];
++}
++
++static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
++ struct io_wq_work *work)
++{
++ return io_get_acct(wqe, !(work->flags & IO_WQ_WORK_UNBOUND));
++}
++
++static inline struct io_wqe_acct *io_wqe_get_acct(struct io_worker *worker)
++{
++ return io_get_acct(worker->wqe, worker->flags & IO_WORKER_F_BOUND);
++}
++
++static void io_worker_ref_put(struct io_wq *wq)
++{
++ if (atomic_dec_and_test(&wq->worker_refs))
++ complete(&wq->worker_done);
++}
++
++static void io_worker_cancel_cb(struct io_worker *worker)
++{
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++ struct io_wqe *wqe = worker->wqe;
++ struct io_wq *wq = wqe->wq;
++
++ atomic_dec(&acct->nr_running);
++ raw_spin_lock(&worker->wqe->lock);
++ acct->nr_workers--;
++ raw_spin_unlock(&worker->wqe->lock);
++ io_worker_ref_put(wq);
++ clear_bit_unlock(0, &worker->create_state);
++ io_worker_release(worker);
++}
++
++static bool io_task_worker_match(struct callback_head *cb, void *data)
++{
++ struct io_worker *worker;
++
++ if (cb->func != create_worker_cb)
++ return false;
++ worker = container_of(cb, struct io_worker, create_work);
++ return worker == data;
++}
++
++static void io_worker_exit(struct io_worker *worker)
++{
++ struct io_wqe *wqe = worker->wqe;
++ struct io_wq *wq = wqe->wq;
++
++ while (1) {
++ struct callback_head *cb = task_work_cancel_match(wq->task,
++ io_task_worker_match, worker);
++
++ if (!cb)
++ break;
++ io_worker_cancel_cb(worker);
++ }
++
++ io_worker_release(worker);
++ wait_for_completion(&worker->ref_done);
++
++ raw_spin_lock(&wqe->lock);
++ if (worker->flags & IO_WORKER_F_FREE)
++ hlist_nulls_del_rcu(&worker->nulls_node);
++ list_del_rcu(&worker->all_list);
++ raw_spin_unlock(&wqe->lock);
++ io_wqe_dec_running(worker);
++ worker->flags = 0;
++ preempt_disable();
++ current->flags &= ~PF_IO_WORKER;
++ preempt_enable();
++
++ kfree_rcu(worker, rcu);
++ io_worker_ref_put(wqe->wq);
++ do_exit(0);
++}
++
++static inline bool io_acct_run_queue(struct io_wqe_acct *acct)
++{
++ bool ret = false;
++
++ raw_spin_lock(&acct->lock);
++ if (!wq_list_empty(&acct->work_list) &&
++ !test_bit(IO_ACCT_STALLED_BIT, &acct->flags))
++ ret = true;
++ raw_spin_unlock(&acct->lock);
++
++ return ret;
++}
++
++/*
++ * Check head of free list for an available worker. If one isn't available,
++ * caller must create one.
++ */
++static bool io_wqe_activate_free_worker(struct io_wqe *wqe,
++ struct io_wqe_acct *acct)
++ __must_hold(RCU)
++{
++ struct hlist_nulls_node *n;
++ struct io_worker *worker;
++
++ /*
++ * Iterate free_list and see if we can find an idle worker to
++ * activate. If a given worker is on the free_list but in the process
++ * of exiting, keep trying.
++ */
++ hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) {
++ if (!io_worker_get(worker))
++ continue;
++ if (io_wqe_get_acct(worker) != acct) {
++ io_worker_release(worker);
++ continue;
++ }
++ if (wake_up_process(worker->task)) {
++ io_worker_release(worker);
++ return true;
++ }
++ io_worker_release(worker);
++ }
++
++ return false;
++}
++
++/*
++ * We need a worker. If we find a free one, we're good. If not, and we're
++ * below the max number of workers, create one.
++ */
++static bool io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
++{
++ /*
++ * Most likely an attempt to queue unbounded work on an io_wq that
++ * wasn't setup with any unbounded workers.
++ */
++ if (unlikely(!acct->max_workers))
++ pr_warn_once("io-wq is not configured for unbound workers");
++
++ raw_spin_lock(&wqe->lock);
++ if (acct->nr_workers >= acct->max_workers) {
++ raw_spin_unlock(&wqe->lock);
++ return true;
++ }
++ acct->nr_workers++;
++ raw_spin_unlock(&wqe->lock);
++ atomic_inc(&acct->nr_running);
++ atomic_inc(&wqe->wq->worker_refs);
++ return create_io_worker(wqe->wq, wqe, acct->index);
++}
++
++static void io_wqe_inc_running(struct io_worker *worker)
++{
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++ atomic_inc(&acct->nr_running);
++}
++
++static void create_worker_cb(struct callback_head *cb)
++{
++ struct io_worker *worker;
++ struct io_wq *wq;
++ struct io_wqe *wqe;
++ struct io_wqe_acct *acct;
++ bool do_create = false;
++
++ worker = container_of(cb, struct io_worker, create_work);
++ wqe = worker->wqe;
++ wq = wqe->wq;
++ acct = &wqe->acct[worker->create_index];
++ raw_spin_lock(&wqe->lock);
++ if (acct->nr_workers < acct->max_workers) {
++ acct->nr_workers++;
++ do_create = true;
++ }
++ raw_spin_unlock(&wqe->lock);
++ if (do_create) {
++ create_io_worker(wq, wqe, worker->create_index);
++ } else {
++ atomic_dec(&acct->nr_running);
++ io_worker_ref_put(wq);
++ }
++ clear_bit_unlock(0, &worker->create_state);
++ io_worker_release(worker);
++}
++
++static bool io_queue_worker_create(struct io_worker *worker,
++ struct io_wqe_acct *acct,
++ task_work_func_t func)
++{
++ struct io_wqe *wqe = worker->wqe;
++ struct io_wq *wq = wqe->wq;
++
++ /* raced with exit, just ignore create call */
++ if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
++ goto fail;
++ if (!io_worker_get(worker))
++ goto fail;
++ /*
++ * create_state manages ownership of create_work/index. We should
++ * only need one entry per worker, as the worker going to sleep
++ * will trigger the condition, and waking will clear it once it
++ * runs the task_work.
++ */
++ if (test_bit(0, &worker->create_state) ||
++ test_and_set_bit_lock(0, &worker->create_state))
++ goto fail_release;
++
++ atomic_inc(&wq->worker_refs);
++ init_task_work(&worker->create_work, func);
++ worker->create_index = acct->index;
++ if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL)) {
++ /*
++ * EXIT may have been set after checking it above, check after
++ * adding the task_work and remove any creation item if it is
++ * now set. wq exit does that too, but we can have added this
++ * work item after we canceled in io_wq_exit_workers().
++ */
++ if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
++ io_wq_cancel_tw_create(wq);
++ io_worker_ref_put(wq);
++ return true;
++ }
++ io_worker_ref_put(wq);
++ clear_bit_unlock(0, &worker->create_state);
++fail_release:
++ io_worker_release(worker);
++fail:
++ atomic_dec(&acct->nr_running);
++ io_worker_ref_put(wq);
++ return false;
++}
++
++static void io_wqe_dec_running(struct io_worker *worker)
++{
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++ struct io_wqe *wqe = worker->wqe;
++
++ if (!(worker->flags & IO_WORKER_F_UP))
++ return;
++
++ if (!atomic_dec_and_test(&acct->nr_running))
++ return;
++ if (!io_acct_run_queue(acct))
++ return;
++
++ atomic_inc(&acct->nr_running);
++ atomic_inc(&wqe->wq->worker_refs);
++ io_queue_worker_create(worker, acct, create_worker_cb);
++}
++
++/*
++ * Worker will start processing some work. Move it to the busy list, if
++ * it's currently on the freelist
++ */
++static void __io_worker_busy(struct io_wqe *wqe, struct io_worker *worker)
++{
++ if (worker->flags & IO_WORKER_F_FREE) {
++ worker->flags &= ~IO_WORKER_F_FREE;
++ raw_spin_lock(&wqe->lock);
++ hlist_nulls_del_init_rcu(&worker->nulls_node);
++ raw_spin_unlock(&wqe->lock);
++ }
++}
++
++/*
++ * No work, worker going to sleep. Move to freelist, and unuse mm if we
++ * have one attached. Dropping the mm may potentially sleep, so we drop
++ * the lock in that case and return success. Since the caller has to
++ * retry the loop in that case (we changed task state), we don't regrab
++ * the lock if we return success.
++ */
++static void __io_worker_idle(struct io_wqe *wqe, struct io_worker *worker)
++ __must_hold(wqe->lock)
++{
++ if (!(worker->flags & IO_WORKER_F_FREE)) {
++ worker->flags |= IO_WORKER_F_FREE;
++ hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
++ }
++}
++
++static inline unsigned int io_get_work_hash(struct io_wq_work *work)
++{
++ return work->flags >> IO_WQ_HASH_SHIFT;
++}
++
++static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
++{
++ struct io_wq *wq = wqe->wq;
++ bool ret = false;
++
++ spin_lock_irq(&wq->hash->wait.lock);
++ if (list_empty(&wqe->wait.entry)) {
++ __add_wait_queue(&wq->hash->wait, &wqe->wait);
++ if (!test_bit(hash, &wq->hash->map)) {
++ __set_current_state(TASK_RUNNING);
++ list_del_init(&wqe->wait.entry);
++ ret = true;
++ }
++ }
++ spin_unlock_irq(&wq->hash->wait.lock);
++ return ret;
++}
++
++static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
++ struct io_worker *worker)
++ __must_hold(acct->lock)
++{
++ struct io_wq_work_node *node, *prev;
++ struct io_wq_work *work, *tail;
++ unsigned int stall_hash = -1U;
++ struct io_wqe *wqe = worker->wqe;
++
++ wq_list_for_each(node, prev, &acct->work_list) {
++ unsigned int hash;
++
++ work = container_of(node, struct io_wq_work, list);
++
++ /* not hashed, can run anytime */
++ if (!io_wq_is_hashed(work)) {
++ wq_list_del(&acct->work_list, node, prev);
++ return work;
++ }
++
++ hash = io_get_work_hash(work);
++ /* all items with this hash lie in [work, tail] */
++ tail = wqe->hash_tail[hash];
++
++ /* hashed, can run if not already running */
++ if (!test_and_set_bit(hash, &wqe->wq->hash->map)) {
++ wqe->hash_tail[hash] = NULL;
++ wq_list_cut(&acct->work_list, &tail->list, prev);
++ return work;
++ }
++ if (stall_hash == -1U)
++ stall_hash = hash;
++ /* fast forward to a next hash, for-each will fix up @prev */
++ node = &tail->list;
++ }
++
++ if (stall_hash != -1U) {
++ bool unstalled;
++
++ /*
++ * Set this before dropping the lock to avoid racing with new
++ * work being added and clearing the stalled bit.
++ */
++ set_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++ raw_spin_unlock(&acct->lock);
++ unstalled = io_wait_on_hash(wqe, stall_hash);
++ raw_spin_lock(&acct->lock);
++ if (unstalled) {
++ clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++ if (wq_has_sleeper(&wqe->wq->hash->wait))
++ wake_up(&wqe->wq->hash->wait);
++ }
++ }
++
++ return NULL;
++}
++
++static bool io_flush_signals(void)
++{
++ if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
++ __set_current_state(TASK_RUNNING);
++ clear_notify_signal();
++ if (task_work_pending(current))
++ task_work_run();
++ return true;
++ }
++ return false;
++}
++
++static void io_assign_current_work(struct io_worker *worker,
++ struct io_wq_work *work)
++{
++ if (work) {
++ io_flush_signals();
++ cond_resched();
++ }
++
++ raw_spin_lock(&worker->lock);
++ worker->cur_work = work;
++ worker->next_work = NULL;
++ raw_spin_unlock(&worker->lock);
++}
++
++static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work);
++
++static void io_worker_handle_work(struct io_worker *worker)
++{
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++ struct io_wqe *wqe = worker->wqe;
++ struct io_wq *wq = wqe->wq;
++ bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
++
++ do {
++ struct io_wq_work *work;
++
++ /*
++ * If we got some work, mark us as busy. If we didn't, but
++ * the list isn't empty, it means we stalled on hashed work.
++ * Mark us stalled so we don't keep looking for work when we
++ * can't make progress, any work completion or insertion will
++ * clear the stalled flag.
++ */
++ raw_spin_lock(&acct->lock);
++ work = io_get_next_work(acct, worker);
++ raw_spin_unlock(&acct->lock);
++ if (work) {
++ __io_worker_busy(wqe, worker);
++
++ /*
++ * Make sure cancelation can find this, even before
++ * it becomes the active work. That avoids a window
++ * where the work has been removed from our general
++ * work list, but isn't yet discoverable as the
++ * current work item for this worker.
++ */
++ raw_spin_lock(&worker->lock);
++ worker->next_work = work;
++ raw_spin_unlock(&worker->lock);
++ } else {
++ break;
++ }
++ io_assign_current_work(worker, work);
++ __set_current_state(TASK_RUNNING);
++
++ /* handle a whole dependent link */
++ do {
++ struct io_wq_work *next_hashed, *linked;
++ unsigned int hash = io_get_work_hash(work);
++
++ next_hashed = wq_next_work(work);
++
++ if (unlikely(do_kill) && (work->flags & IO_WQ_WORK_UNBOUND))
++ work->flags |= IO_WQ_WORK_CANCEL;
++ wq->do_work(work);
++ io_assign_current_work(worker, NULL);
++
++ linked = wq->free_work(work);
++ work = next_hashed;
++ if (!work && linked && !io_wq_is_hashed(linked)) {
++ work = linked;
++ linked = NULL;
++ }
++ io_assign_current_work(worker, work);
++ if (linked)
++ io_wqe_enqueue(wqe, linked);
++
++ if (hash != -1U && !next_hashed) {
++ /* serialize hash clear with wake_up() */
++ spin_lock_irq(&wq->hash->wait.lock);
++ clear_bit(hash, &wq->hash->map);
++ clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++ spin_unlock_irq(&wq->hash->wait.lock);
++ if (wq_has_sleeper(&wq->hash->wait))
++ wake_up(&wq->hash->wait);
++ }
++ } while (work);
++ } while (1);
++}
++
++static int io_wqe_worker(void *data)
++{
++ struct io_worker *worker = data;
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++ struct io_wqe *wqe = worker->wqe;
++ struct io_wq *wq = wqe->wq;
++ bool last_timeout = false;
++ char buf[TASK_COMM_LEN];
++
++ worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
++
++ snprintf(buf, sizeof(buf), "iou-wrk-%d", wq->task->pid);
++ set_task_comm(current, buf);
++
++ audit_alloc_kernel(current);
++
++ while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
++ long ret;
++
++ set_current_state(TASK_INTERRUPTIBLE);
++ while (io_acct_run_queue(acct))
++ io_worker_handle_work(worker);
++
++ raw_spin_lock(&wqe->lock);
++ /* timed out, exit unless we're the last worker */
++ if (last_timeout && acct->nr_workers > 1) {
++ acct->nr_workers--;
++ raw_spin_unlock(&wqe->lock);
++ __set_current_state(TASK_RUNNING);
++ break;
++ }
++ last_timeout = false;
++ __io_worker_idle(wqe, worker);
++ raw_spin_unlock(&wqe->lock);
++ if (io_flush_signals())
++ continue;
++ ret = schedule_timeout(WORKER_IDLE_TIMEOUT);
++ if (signal_pending(current)) {
++ struct ksignal ksig;
++
++ if (!get_signal(&ksig))
++ continue;
++ break;
++ }
++ last_timeout = !ret;
++ }
++
++ if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
++ io_worker_handle_work(worker);
++
++ audit_free(current);
++ io_worker_exit(worker);
++ return 0;
++}
++
++/*
++ * Called when a worker is scheduled in. Mark us as currently running.
++ */
++void io_wq_worker_running(struct task_struct *tsk)
++{
++ struct io_worker *worker = tsk->worker_private;
++
++ if (!worker)
++ return;
++ if (!(worker->flags & IO_WORKER_F_UP))
++ return;
++ if (worker->flags & IO_WORKER_F_RUNNING)
++ return;
++ worker->flags |= IO_WORKER_F_RUNNING;
++ io_wqe_inc_running(worker);
++}
++
++/*
++ * Called when worker is going to sleep. If there are no workers currently
++ * running and we have work pending, wake up a free one or create a new one.
++ */
++void io_wq_worker_sleeping(struct task_struct *tsk)
++{
++ struct io_worker *worker = tsk->worker_private;
++
++ if (!worker)
++ return;
++ if (!(worker->flags & IO_WORKER_F_UP))
++ return;
++ if (!(worker->flags & IO_WORKER_F_RUNNING))
++ return;
++
++ worker->flags &= ~IO_WORKER_F_RUNNING;
++ io_wqe_dec_running(worker);
++}
++
++static void io_init_new_worker(struct io_wqe *wqe, struct io_worker *worker,
++ struct task_struct *tsk)
++{
++ tsk->worker_private = worker;
++ worker->task = tsk;
++ set_cpus_allowed_ptr(tsk, wqe->cpu_mask);
++ tsk->flags |= PF_NO_SETAFFINITY;
++
++ raw_spin_lock(&wqe->lock);
++ hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
++ list_add_tail_rcu(&worker->all_list, &wqe->all_list);
++ worker->flags |= IO_WORKER_F_FREE;
++ raw_spin_unlock(&wqe->lock);
++ wake_up_new_task(tsk);
++}
++
++static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
++{
++ return true;
++}
++
++static inline bool io_should_retry_thread(long err)
++{
++ /*
++ * Prevent perpetual task_work retry, if the task (or its group) is
++ * exiting.
++ */
++ if (fatal_signal_pending(current))
++ return false;
++
++ switch (err) {
++ case -EAGAIN:
++ case -ERESTARTSYS:
++ case -ERESTARTNOINTR:
++ case -ERESTARTNOHAND:
++ return true;
++ default:
++ return false;
++ }
++}
++
++static void create_worker_cont(struct callback_head *cb)
++{
++ struct io_worker *worker;
++ struct task_struct *tsk;
++ struct io_wqe *wqe;
++
++ worker = container_of(cb, struct io_worker, create_work);
++ clear_bit_unlock(0, &worker->create_state);
++ wqe = worker->wqe;
++ tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
++ if (!IS_ERR(tsk)) {
++ io_init_new_worker(wqe, worker, tsk);
++ io_worker_release(worker);
++ return;
++ } else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++ atomic_dec(&acct->nr_running);
++ raw_spin_lock(&wqe->lock);
++ acct->nr_workers--;
++ if (!acct->nr_workers) {
++ struct io_cb_cancel_data match = {
++ .fn = io_wq_work_match_all,
++ .cancel_all = true,
++ };
++
++ raw_spin_unlock(&wqe->lock);
++ while (io_acct_cancel_pending_work(wqe, acct, &match))
++ ;
++ } else {
++ raw_spin_unlock(&wqe->lock);
++ }
++ io_worker_ref_put(wqe->wq);
++ kfree(worker);
++ return;
++ }
++
++ /* re-create attempts grab a new worker ref, drop the existing one */
++ io_worker_release(worker);
++ schedule_work(&worker->work);
++}
++
++static void io_workqueue_create(struct work_struct *work)
++{
++ struct io_worker *worker = container_of(work, struct io_worker, work);
++ struct io_wqe_acct *acct = io_wqe_get_acct(worker);
++
++ if (!io_queue_worker_create(worker, acct, create_worker_cont))
++ kfree(worker);
++}
++
++static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
++{
++ struct io_wqe_acct *acct = &wqe->acct[index];
++ struct io_worker *worker;
++ struct task_struct *tsk;
++
++ __set_current_state(TASK_RUNNING);
++
++ worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, wqe->node);
++ if (!worker) {
++fail:
++ atomic_dec(&acct->nr_running);
++ raw_spin_lock(&wqe->lock);
++ acct->nr_workers--;
++ raw_spin_unlock(&wqe->lock);
++ io_worker_ref_put(wq);
++ return false;
++ }
++
++ refcount_set(&worker->ref, 1);
++ worker->wqe = wqe;
++ raw_spin_lock_init(&worker->lock);
++ init_completion(&worker->ref_done);
++
++ if (index == IO_WQ_ACCT_BOUND)
++ worker->flags |= IO_WORKER_F_BOUND;
++
++ tsk = create_io_thread(io_wqe_worker, worker, wqe->node);
++ if (!IS_ERR(tsk)) {
++ io_init_new_worker(wqe, worker, tsk);
++ } else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++ kfree(worker);
++ goto fail;
++ } else {
++ INIT_WORK(&worker->work, io_workqueue_create);
++ schedule_work(&worker->work);
++ }
++
++ return true;
++}
++
++/*
++ * Iterate the passed in list and call the specific function for each
++ * worker that isn't exiting
++ */
++static bool io_wq_for_each_worker(struct io_wqe *wqe,
++ bool (*func)(struct io_worker *, void *),
++ void *data)
++{
++ struct io_worker *worker;
++ bool ret = false;
++
++ list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
++ if (io_worker_get(worker)) {
++ /* no task if node is/was offline */
++ if (worker->task)
++ ret = func(worker, data);
++ io_worker_release(worker);
++ if (ret)
++ break;
++ }
++ }
++
++ return ret;
++}
++
++static bool io_wq_worker_wake(struct io_worker *worker, void *data)
++{
++ __set_notify_signal(worker->task);
++ wake_up_process(worker->task);
++ return false;
++}
++
++static void io_run_cancel(struct io_wq_work *work, struct io_wqe *wqe)
++{
++ struct io_wq *wq = wqe->wq;
++
++ do {
++ work->flags |= IO_WQ_WORK_CANCEL;
++ wq->do_work(work);
++ work = wq->free_work(work);
++ } while (work);
++}
++
++static void io_wqe_insert_work(struct io_wqe *wqe, struct io_wq_work *work)
++{
++ struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++ unsigned int hash;
++ struct io_wq_work *tail;
++
++ if (!io_wq_is_hashed(work)) {
++append:
++ wq_list_add_tail(&work->list, &acct->work_list);
++ return;
++ }
++
++ hash = io_get_work_hash(work);
++ tail = wqe->hash_tail[hash];
++ wqe->hash_tail[hash] = work;
++ if (!tail)
++ goto append;
++
++ wq_list_add_after(&work->list, &tail->list, &acct->work_list);
++}
++
++static bool io_wq_work_match_item(struct io_wq_work *work, void *data)
++{
++ return work == data;
++}
++
++static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
++{
++ struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++ struct io_cb_cancel_data match;
++ unsigned work_flags = work->flags;
++ bool do_create;
++
++ /*
++ * If io-wq is exiting for this task, or if the request has explicitly
++ * been marked as one that should not get executed, cancel it here.
++ */
++ if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
++ (work->flags & IO_WQ_WORK_CANCEL)) {
++ io_run_cancel(work, wqe);
++ return;
++ }
++
++ raw_spin_lock(&acct->lock);
++ io_wqe_insert_work(wqe, work);
++ clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
++ raw_spin_unlock(&acct->lock);
++
++ raw_spin_lock(&wqe->lock);
++ rcu_read_lock();
++ do_create = !io_wqe_activate_free_worker(wqe, acct);
++ rcu_read_unlock();
++
++ raw_spin_unlock(&wqe->lock);
++
++ if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
++ !atomic_read(&acct->nr_running))) {
++ bool did_create;
++
++ did_create = io_wqe_create_worker(wqe, acct);
++ if (likely(did_create))
++ return;
++
++ raw_spin_lock(&wqe->lock);
++ if (acct->nr_workers) {
++ raw_spin_unlock(&wqe->lock);
++ return;
++ }
++ raw_spin_unlock(&wqe->lock);
++
++ /* fatal condition, failed to create the first worker */
++		match.fn = io_wq_work_match_item;
++		match.data = work;
++		match.cancel_all = false;
++
++ io_acct_cancel_pending_work(wqe, acct, &match);
++ }
++}
++
++void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
++{
++ struct io_wqe *wqe = wq->wqes[numa_node_id()];
++
++ io_wqe_enqueue(wqe, work);
++}
++
++/*
++ * Work items that hash to the same value will not be done in parallel.
++ * Used to limit concurrent writes, generally hashed by inode.
++ */
++void io_wq_hash_work(struct io_wq_work *work, void *val)
++{
++ unsigned int bit;
++
++ bit = hash_ptr(val, IO_WQ_HASH_ORDER);
++ work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
++}
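
A hedged usage sketch: keying a buffered write by its target inode so io-wq never runs two such writes to the same file concurrently (req and its fields are illustrative):

    io_wq_hash_work(&req->work, file_inode(req->file));
    io_wq_enqueue(wq, &req->work);
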
++
++static bool __io_wq_worker_cancel(struct io_worker *worker,
++ struct io_cb_cancel_data *match,
++ struct io_wq_work *work)
++{
++ if (work && match->fn(work, match->data)) {
++ work->flags |= IO_WQ_WORK_CANCEL;
++ __set_notify_signal(worker->task);
++ return true;
++ }
++
++ return false;
++}
++
++static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
++{
++ struct io_cb_cancel_data *match = data;
++
++ /*
++ * Hold the lock to avoid ->cur_work going out of scope, caller
++ * may dereference the passed in work.
++ */
++ raw_spin_lock(&worker->lock);
++ if (__io_wq_worker_cancel(worker, match, worker->cur_work) ||
++ __io_wq_worker_cancel(worker, match, worker->next_work))
++ match->nr_running++;
++ raw_spin_unlock(&worker->lock);
++
++ return match->nr_running && !match->cancel_all;
++}
++
++static inline void io_wqe_remove_pending(struct io_wqe *wqe,
++ struct io_wq_work *work,
++ struct io_wq_work_node *prev)
++{
++ struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
++ unsigned int hash = io_get_work_hash(work);
++ struct io_wq_work *prev_work = NULL;
++
++ if (io_wq_is_hashed(work) && work == wqe->hash_tail[hash]) {
++ if (prev)
++ prev_work = container_of(prev, struct io_wq_work, list);
++ if (prev_work && io_get_work_hash(prev_work) == hash)
++ wqe->hash_tail[hash] = prev_work;
++ else
++ wqe->hash_tail[hash] = NULL;
++ }
++ wq_list_del(&acct->work_list, &work->list, prev);
++}
++
++static bool io_acct_cancel_pending_work(struct io_wqe *wqe,
++ struct io_wqe_acct *acct,
++ struct io_cb_cancel_data *match)
++{
++ struct io_wq_work_node *node, *prev;
++ struct io_wq_work *work;
++
++ raw_spin_lock(&acct->lock);
++ wq_list_for_each(node, prev, &acct->work_list) {
++ work = container_of(node, struct io_wq_work, list);
++ if (!match->fn(work, match->data))
++ continue;
++ io_wqe_remove_pending(wqe, work, prev);
++ raw_spin_unlock(&acct->lock);
++ io_run_cancel(work, wqe);
++ match->nr_pending++;
++ /* not safe to continue after unlock */
++ return true;
++ }
++ raw_spin_unlock(&acct->lock);
++
++ return false;
++}
++
++static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
++ struct io_cb_cancel_data *match)
++{
++ int i;
++retry:
++ for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++ struct io_wqe_acct *acct = io_get_acct(wqe, i == 0);
++
++ if (io_acct_cancel_pending_work(wqe, acct, match)) {
++ if (match->cancel_all)
++ goto retry;
++ break;
++ }
++ }
++}
++
++static void io_wqe_cancel_running_work(struct io_wqe *wqe,
++ struct io_cb_cancel_data *match)
++{
++ rcu_read_lock();
++ io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
++ rcu_read_unlock();
++}
++
++enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
++ void *data, bool cancel_all)
++{
++ struct io_cb_cancel_data match = {
++ .fn = cancel,
++ .data = data,
++ .cancel_all = cancel_all,
++ };
++ int node;
++
++ /*
++ * First check pending list, if we're lucky we can just remove it
++ * from there. CANCEL_OK means that the work is returned as-new,
++ * no completion will be posted for it.
++ *
++ * Then check if a free (going busy) or busy worker has the work
++ * currently running. If we find it there, we'll return CANCEL_RUNNING
++ * as an indication that we attempt to signal cancellation. The
++ * completion will run normally in this case.
++ *
++ * Do both of these while holding the wqe->lock, to ensure that
++ * we'll find a work item regardless of state.
++ */
++ for_each_node(node) {
++ struct io_wqe *wqe = wq->wqes[node];
++
++ io_wqe_cancel_pending_work(wqe, &match);
++ if (match.nr_pending && !match.cancel_all)
++ return IO_WQ_CANCEL_OK;
++
++ raw_spin_lock(&wqe->lock);
++ io_wqe_cancel_running_work(wqe, &match);
++ raw_spin_unlock(&wqe->lock);
++ if (match.nr_running && !match.cancel_all)
++ return IO_WQ_CANCEL_RUNNING;
++ }
++
++ if (match.nr_running)
++ return IO_WQ_CANCEL_RUNNING;
++ if (match.nr_pending)
++ return IO_WQ_CANCEL_OK;
++ return IO_WQ_CANCEL_NOTFOUND;
++}
++
++static int io_wqe_hash_wake(struct wait_queue_entry *wait, unsigned mode,
++ int sync, void *key)
++{
++ struct io_wqe *wqe = container_of(wait, struct io_wqe, wait);
++ int i;
++
++ list_del_init(&wait->entry);
++
++ rcu_read_lock();
++ for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++ struct io_wqe_acct *acct = &wqe->acct[i];
++
++ if (test_and_clear_bit(IO_ACCT_STALLED_BIT, &acct->flags))
++ io_wqe_activate_free_worker(wqe, acct);
++ }
++ rcu_read_unlock();
++ return 1;
++}
++
++struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
++{
++ int ret, node, i;
++ struct io_wq *wq;
++
++ if (WARN_ON_ONCE(!data->free_work || !data->do_work))
++ return ERR_PTR(-EINVAL);
++ if (WARN_ON_ONCE(!bounded))
++ return ERR_PTR(-EINVAL);
++
++ wq = kzalloc(struct_size(wq, wqes, nr_node_ids), GFP_KERNEL);
++ if (!wq)
++ return ERR_PTR(-ENOMEM);
++ ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++ if (ret)
++ goto err_wq;
++
++ refcount_inc(&data->hash->refs);
++ wq->hash = data->hash;
++ wq->free_work = data->free_work;
++ wq->do_work = data->do_work;
++
++ ret = -ENOMEM;
++ for_each_node(node) {
++ struct io_wqe *wqe;
++ int alloc_node = node;
++
++ if (!node_online(alloc_node))
++ alloc_node = NUMA_NO_NODE;
++ wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
++ if (!wqe)
++ goto err;
++ if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL))
++ goto err;
++ cpumask_copy(wqe->cpu_mask, cpumask_of_node(node));
++ wq->wqes[node] = wqe;
++ wqe->node = alloc_node;
++ wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
++ wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
++ task_rlimit(current, RLIMIT_NPROC);
++ INIT_LIST_HEAD(&wqe->wait.entry);
++ wqe->wait.func = io_wqe_hash_wake;
++ for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++ struct io_wqe_acct *acct = &wqe->acct[i];
++
++ acct->index = i;
++ atomic_set(&acct->nr_running, 0);
++ INIT_WQ_LIST(&acct->work_list);
++ raw_spin_lock_init(&acct->lock);
++ }
++ wqe->wq = wq;
++ raw_spin_lock_init(&wqe->lock);
++ INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
++ INIT_LIST_HEAD(&wqe->all_list);
++ }
++
++ wq->task = get_task_struct(data->task);
++ atomic_set(&wq->worker_refs, 1);
++ init_completion(&wq->worker_done);
++ return wq;
++err:
++ io_wq_put_hash(data->hash);
++ cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++ for_each_node(node) {
++ if (!wq->wqes[node])
++ continue;
++ free_cpumask_var(wq->wqes[node]->cpu_mask);
++ kfree(wq->wqes[node]);
++ }
++err_wq:
++ kfree(wq);
++ return ERR_PTR(ret);
++}
++
++static bool io_task_work_match(struct callback_head *cb, void *data)
++{
++ struct io_worker *worker;
++
++ if (cb->func != create_worker_cb && cb->func != create_worker_cont)
++ return false;
++ worker = container_of(cb, struct io_worker, create_work);
++ return worker->wqe->wq == data;
++}
++
++void io_wq_exit_start(struct io_wq *wq)
++{
++ set_bit(IO_WQ_BIT_EXIT, &wq->state);
++}
++
++static void io_wq_cancel_tw_create(struct io_wq *wq)
++{
++ struct callback_head *cb;
++
++ while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
++ struct io_worker *worker;
++
++ worker = container_of(cb, struct io_worker, create_work);
++ io_worker_cancel_cb(worker);
++ }
++}
++
++static void io_wq_exit_workers(struct io_wq *wq)
++{
++ int node;
++
++ if (!wq->task)
++ return;
++
++ io_wq_cancel_tw_create(wq);
++
++ rcu_read_lock();
++ for_each_node(node) {
++ struct io_wqe *wqe = wq->wqes[node];
++
++ io_wq_for_each_worker(wqe, io_wq_worker_wake, NULL);
++ }
++ rcu_read_unlock();
++ io_worker_ref_put(wq);
++ wait_for_completion(&wq->worker_done);
++
++ for_each_node(node) {
++ spin_lock_irq(&wq->hash->wait.lock);
++ list_del_init(&wq->wqes[node]->wait.entry);
++ spin_unlock_irq(&wq->hash->wait.lock);
++ }
++ put_task_struct(wq->task);
++ wq->task = NULL;
++}
++
++static void io_wq_destroy(struct io_wq *wq)
++{
++ int node;
++
++ cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
++
++ for_each_node(node) {
++ struct io_wqe *wqe = wq->wqes[node];
++ struct io_cb_cancel_data match = {
++ .fn = io_wq_work_match_all,
++ .cancel_all = true,
++ };
++ io_wqe_cancel_pending_work(wqe, &match);
++ free_cpumask_var(wqe->cpu_mask);
++ kfree(wqe);
++ }
++ io_wq_put_hash(wq->hash);
++ kfree(wq);
++}
++
++void io_wq_put_and_exit(struct io_wq *wq)
++{
++ WARN_ON_ONCE(!test_bit(IO_WQ_BIT_EXIT, &wq->state));
++
++ io_wq_exit_workers(wq);
++ io_wq_destroy(wq);
++}
++
++struct online_data {
++ unsigned int cpu;
++ bool online;
++};
++
++static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
++{
++ struct online_data *od = data;
++
++ if (od->online)
++ cpumask_set_cpu(od->cpu, worker->wqe->cpu_mask);
++ else
++ cpumask_clear_cpu(od->cpu, worker->wqe->cpu_mask);
++ return false;
++}
++
++static int __io_wq_cpu_online(struct io_wq *wq, unsigned int cpu, bool online)
++{
++ struct online_data od = {
++ .cpu = cpu,
++ .online = online
++ };
++ int i;
++
++ rcu_read_lock();
++ for_each_node(i)
++ io_wq_for_each_worker(wq->wqes[i], io_wq_worker_affinity, &od);
++ rcu_read_unlock();
++ return 0;
++}
++
++static int io_wq_cpu_online(unsigned int cpu, struct hlist_node *node)
++{
++ struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
++
++ return __io_wq_cpu_online(wq, cpu, true);
++}
++
++static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
++{
++ struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
++
++ return __io_wq_cpu_online(wq, cpu, false);
++}
++
++int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
++{
++ int i;
++
++ rcu_read_lock();
++ for_each_node(i) {
++ struct io_wqe *wqe = wq->wqes[i];
++
++ if (mask)
++ cpumask_copy(wqe->cpu_mask, mask);
++ else
++ cpumask_copy(wqe->cpu_mask, cpumask_of_node(i));
++ }
++ rcu_read_unlock();
++ return 0;
++}
++
++/*
++ * Set max number of unbounded workers, returns old value. If new_count is 0,
++ * then just return the old value.
++ */
++int io_wq_max_workers(struct io_wq *wq, int *new_count)
++{
++ int prev[IO_WQ_ACCT_NR];
++ bool first_node = true;
++ int i, node;
++
++ BUILD_BUG_ON((int) IO_WQ_ACCT_BOUND != (int) IO_WQ_BOUND);
++ BUILD_BUG_ON((int) IO_WQ_ACCT_UNBOUND != (int) IO_WQ_UNBOUND);
++ BUILD_BUG_ON((int) IO_WQ_ACCT_NR != 2);
++
++ for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++ if (new_count[i] > task_rlimit(current, RLIMIT_NPROC))
++ new_count[i] = task_rlimit(current, RLIMIT_NPROC);
++ }
++
++ for (i = 0; i < IO_WQ_ACCT_NR; i++)
++ prev[i] = 0;
++
++ rcu_read_lock();
++ for_each_node(node) {
++ struct io_wqe *wqe = wq->wqes[node];
++ struct io_wqe_acct *acct;
++
++ raw_spin_lock(&wqe->lock);
++ for (i = 0; i < IO_WQ_ACCT_NR; i++) {
++ acct = &wqe->acct[i];
++ if (first_node)
++ prev[i] = max_t(int, acct->max_workers, prev[i]);
++ if (new_count[i])
++ acct->max_workers = new_count[i];
++ }
++ raw_spin_unlock(&wqe->lock);
++ first_node = false;
++ }
++ rcu_read_unlock();
++
++ for (i = 0; i < IO_WQ_ACCT_NR; i++)
++ new_count[i] = prev[i];
++
++ return 0;
++}
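
Per the comment above, zero entries act as a pure query. A hedged sketch of reading the current limits without changing them:

    int counts[IO_WQ_ACCT_NR] = { 0, 0 };  /* 0 == don't change */

    io_wq_max_workers(wq, counts);
    /* counts[IO_WQ_BOUND] and counts[IO_WQ_UNBOUND] now hold the
     * previous per-type maximums. */
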
++
++static __init int io_wq_init(void)
++{
++ int ret;
++
++ ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "io-wq/online",
++ io_wq_cpu_online, io_wq_cpu_offline);
++ if (ret < 0)
++ return ret;
++ io_wq_online = ret;
++ return 0;
++}
++subsys_initcall(io_wq_init);
+diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
+new file mode 100644
+index 0000000000000..ba6eee76d028f
+--- /dev/null
++++ b/io_uring/io-wq.h
+@@ -0,0 +1,228 @@
++#ifndef INTERNAL_IO_WQ_H
++#define INTERNAL_IO_WQ_H
++
++#include <linux/refcount.h>
++
++struct io_wq;
++
++enum {
++ IO_WQ_WORK_CANCEL = 1,
++ IO_WQ_WORK_HASHED = 2,
++ IO_WQ_WORK_UNBOUND = 4,
++ IO_WQ_WORK_CONCURRENT = 16,
++
++ IO_WQ_HASH_SHIFT = 24, /* upper 8 bits are used for hash key */
++};
++
++enum io_wq_cancel {
++ IO_WQ_CANCEL_OK, /* cancelled before started */
++	IO_WQ_CANCEL_RUNNING,	/* found, running, and cancellation attempted */
++ IO_WQ_CANCEL_NOTFOUND, /* work not found */
++};
++
++struct io_wq_work_node {
++ struct io_wq_work_node *next;
++};
++
++struct io_wq_work_list {
++ struct io_wq_work_node *first;
++ struct io_wq_work_node *last;
++};
++
++#define wq_list_for_each(pos, prv, head) \
++ for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
++
++#define wq_list_for_each_resume(pos, prv) \
++ for (; pos; prv = pos, pos = (pos)->next)
++
++#define wq_list_empty(list) (READ_ONCE((list)->first) == NULL)
++#define INIT_WQ_LIST(list) do { \
++ (list)->first = NULL; \
++} while (0)
++
++static inline void wq_list_add_after(struct io_wq_work_node *node,
++ struct io_wq_work_node *pos,
++ struct io_wq_work_list *list)
++{
++ struct io_wq_work_node *next = pos->next;
++
++ pos->next = node;
++ node->next = next;
++ if (!next)
++ list->last = node;
++}
++
++/**
++ * wq_list_merge - merge the second list to the first one.
++ * @list0: the first list
++ * @list1: the second list
++ * Return the first node after the merge.
++ */
++static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
++ struct io_wq_work_list *list1)
++{
++ struct io_wq_work_node *ret;
++
++ if (!list0->first) {
++ ret = list1->first;
++ } else {
++ ret = list0->first;
++ list0->last->next = list1->first;
++ }
++ INIT_WQ_LIST(list0);
++ INIT_WQ_LIST(list1);
++ return ret;
++}
++
++static inline void wq_list_add_tail(struct io_wq_work_node *node,
++ struct io_wq_work_list *list)
++{
++ node->next = NULL;
++ if (!list->first) {
++ list->last = node;
++ WRITE_ONCE(list->first, node);
++ } else {
++ list->last->next = node;
++ list->last = node;
++ }
++}
++
++static inline void wq_list_add_head(struct io_wq_work_node *node,
++ struct io_wq_work_list *list)
++{
++ node->next = list->first;
++ if (!node->next)
++ list->last = node;
++ WRITE_ONCE(list->first, node);
++}
++
++static inline void wq_list_cut(struct io_wq_work_list *list,
++ struct io_wq_work_node *last,
++ struct io_wq_work_node *prev)
++{
++ /* first in the list, if prev==NULL */
++ if (!prev)
++ WRITE_ONCE(list->first, last->next);
++ else
++ prev->next = last->next;
++
++ if (last == list->last)
++ list->last = prev;
++ last->next = NULL;
++}
++
++static inline void __wq_list_splice(struct io_wq_work_list *list,
++ struct io_wq_work_node *to)
++{
++ list->last->next = to->next;
++ to->next = list->first;
++ INIT_WQ_LIST(list);
++}
++
++static inline bool wq_list_splice(struct io_wq_work_list *list,
++ struct io_wq_work_node *to)
++{
++ if (!wq_list_empty(list)) {
++ __wq_list_splice(list, to);
++ return true;
++ }
++ return false;
++}
++
++static inline void wq_stack_add_head(struct io_wq_work_node *node,
++ struct io_wq_work_node *stack)
++{
++ node->next = stack->next;
++ stack->next = node;
++}
++
++static inline void wq_list_del(struct io_wq_work_list *list,
++ struct io_wq_work_node *node,
++ struct io_wq_work_node *prev)
++{
++ wq_list_cut(list, node, prev);
++}
++
++static inline
++struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
++{
++ struct io_wq_work_node *node = stack->next;
++
++ stack->next = node->next;
++ return node;
++}
++
++struct io_wq_work {
++ struct io_wq_work_node list;
++ unsigned flags;
++ int cancel_seq;
++};
++
++static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
++{
++ if (!work->list.next)
++ return NULL;
++
++ return container_of(work->list.next, struct io_wq_work, list);
++}
++
++typedef struct io_wq_work *(free_work_fn)(struct io_wq_work *);
++typedef void (io_wq_work_fn)(struct io_wq_work *);
++
++struct io_wq_hash {
++ refcount_t refs;
++ unsigned long map;
++ struct wait_queue_head wait;
++};
++
++static inline void io_wq_put_hash(struct io_wq_hash *hash)
++{
++ if (refcount_dec_and_test(&hash->refs))
++ kfree(hash);
++}
++
++struct io_wq_data {
++ struct io_wq_hash *hash;
++ struct task_struct *task;
++ io_wq_work_fn *do_work;
++ free_work_fn *free_work;
++};
++
++struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
++void io_wq_exit_start(struct io_wq *wq);
++void io_wq_put_and_exit(struct io_wq *wq);
++
++void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
++void io_wq_hash_work(struct io_wq_work *work, void *val);
++
++int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
++int io_wq_max_workers(struct io_wq *wq, int *new_count);
++
++static inline bool io_wq_is_hashed(struct io_wq_work *work)
++{
++ return work->flags & IO_WQ_WORK_HASHED;
++}
++
++typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
++
++enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
++ void *data, bool cancel_all);
++
++#if defined(CONFIG_IO_WQ)
++extern void io_wq_worker_sleeping(struct task_struct *);
++extern void io_wq_worker_running(struct task_struct *);
++#else
++static inline void io_wq_worker_sleeping(struct task_struct *tsk)
++{
++}
++static inline void io_wq_worker_running(struct task_struct *tsk)
++{
++}
++#endif
++
++static inline bool io_wq_current_is_worker(void)
++{
++ return in_task() && (current->flags & PF_IO_WORKER) &&
++ current->worker_private;
++}
++#endif
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+new file mode 100644
+index 0000000000000..6a67dbf5195f0
+--- /dev/null
++++ b/io_uring/io_uring.c
+@@ -0,0 +1,13165 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Shared application/kernel submission and completion ring pairs, for
++ * supporting fast/efficient IO.
++ *
++ * A note on the read/write ordering memory barriers that are matched between
++ * the application and kernel side.
++ *
++ * After the application reads the CQ ring tail, it must use an
++ * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
++ * before writing the tail (using smp_load_acquire to read the tail will
++ * do). It also needs a smp_mb() before updating CQ head (ordering the
++ * entry load(s) with the head store), pairing with an implicit barrier
++ * through a control-dependency in io_get_cqe (smp_store_release to
++ * store head will do). Failure to do so could lead to reading invalid
++ * CQ entries.
++ *
++ * Likewise, the application must use an appropriate smp_wmb() before
++ * writing the SQ tail (ordering SQ entry stores with the tail store),
++ * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
++ * to store the tail will do). And it needs a barrier ordering the SQ
++ * head load before writing new SQ entries (smp_load_acquire to read
++ * head will do).
++ *
++ * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
++ * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
++ * updating the SQ tail; a full memory barrier smp_mb() is needed
++ * between.
++ *
++ * Also see the examples in the liburing library:
++ *
++ * git://git.kernel.dk/liburing
++ *
++ * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
++ * from data shared between the kernel and application. This is done both
++ * for ordering purposes, but also to ensure that once a value is loaded from
++ * data that the application could potentially modify, it remains stable.
++ *
++ * Copyright (C) 2018-2019 Jens Axboe
++ * Copyright (c) 2018-2019 Christoph Hellwig
++ */
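++
++/*
++ * To make the CQ ordering rules above concrete, a minimal
++ * (hypothetical) userspace reap loop could look like this;
++ * cq_head, cq_tail, cqes and cq_mask are assumed names for the
++ * mmap'ed ring fields:
++ *
++ *	unsigned head = *cq_head;
++ *	unsigned tail = smp_load_acquire(cq_tail);	pairs with kernel release
++ *	while (head != tail) {
++ *		consume(&cqes[head & cq_mask]);		read entry before publishing
++ *		head++;
++ *	}
++ *	smp_store_release(cq_head, head);		entries may now be reused
++ */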
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/errno.h>
++#include <linux/syscalls.h>
++#include <linux/compat.h>
++#include <net/compat.h>
++#include <linux/refcount.h>
++#include <linux/uio.h>
++#include <linux/bits.h>
++
++#include <linux/sched/signal.h>
++#include <linux/fs.h>
++#include <linux/file.h>
++#include <linux/fdtable.h>
++#include <linux/mm.h>
++#include <linux/mman.h>
++#include <linux/percpu.h>
++#include <linux/slab.h>
++#include <linux/blk-mq.h>
++#include <linux/bvec.h>
++#include <linux/net.h>
++#include <net/sock.h>
++#include <net/af_unix.h>
++#include <net/scm.h>
++#include <linux/anon_inodes.h>
++#include <linux/sched/mm.h>
++#include <linux/uaccess.h>
++#include <linux/nospec.h>
++#include <linux/sizes.h>
++#include <linux/hugetlb.h>
++#include <linux/highmem.h>
++#include <linux/namei.h>
++#include <linux/fsnotify.h>
++#include <linux/fadvise.h>
++#include <linux/eventpoll.h>
++#include <linux/splice.h>
++#include <linux/task_work.h>
++#include <linux/pagemap.h>
++#include <linux/io_uring.h>
++#include <linux/audit.h>
++#include <linux/security.h>
++#include <linux/xattr.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/io_uring.h>
++
++#include <uapi/linux/io_uring.h>
++
++#include "../fs/internal.h"
++#include "io-wq.h"
++
++#define IORING_MAX_ENTRIES 32768
++#define IORING_MAX_CQ_ENTRIES (2 * IORING_MAX_ENTRIES)
++#define IORING_SQPOLL_CAP_ENTRIES_VALUE 8
++
++/* only define max */
++#define IORING_MAX_FIXED_FILES (1U << 20)
++#define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \
++ IORING_REGISTER_LAST + IORING_OP_LAST)
++
++#define IO_RSRC_TAG_TABLE_SHIFT (PAGE_SHIFT - 3)
++#define IO_RSRC_TAG_TABLE_MAX (1U << IO_RSRC_TAG_TABLE_SHIFT)
++#define IO_RSRC_TAG_TABLE_MASK (IO_RSRC_TAG_TABLE_MAX - 1)
++
++#define IORING_MAX_REG_BUFFERS (1U << 14)
++
++#define SQE_COMMON_FLAGS (IOSQE_FIXED_FILE | IOSQE_IO_LINK | \
++ IOSQE_IO_HARDLINK | IOSQE_ASYNC)
++
++#define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
++ IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
++
++#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
++ REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
++ REQ_F_ASYNC_DATA)
++
++#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
++ IO_REQ_CLEAN_FLAGS)
++
++#define IO_APOLL_MULTI_POLLED (REQ_F_APOLL_MULTISHOT | REQ_F_POLLED)
++
++#define IO_TCTX_REFS_CACHE_NR (1U << 10)
++
++struct io_uring {
++ u32 head ____cacheline_aligned_in_smp;
++ u32 tail ____cacheline_aligned_in_smp;
++};
++
++/*
++ * This data is shared with the application through the mmap at offsets
++ * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
++ *
++ * The offsets to the member fields are published through struct
++ * io_sqring_offsets when calling io_uring_setup.
++ */
++struct io_rings {
++ /*
++ * Head and tail offsets into the ring; the offsets need to be
++ * masked to get valid indices.
++ *
++ * The kernel controls head of the sq ring and the tail of the cq ring,
++ * and the application controls tail of the sq ring and the head of the
++ * cq ring.
++ */
++ struct io_uring sq, cq;
++ /*
++ * Bitmasks to apply to head and tail offsets (constant, equals
++ * ring_entries - 1)
++ */
++ u32 sq_ring_mask, cq_ring_mask;
++ /* Ring sizes (constant, power of 2) */
++ u32 sq_ring_entries, cq_ring_entries;
++ /*
++ * Number of invalid entries dropped by the kernel due to
++ * invalid index stored in array
++ *
++ * Written by the kernel, shouldn't be modified by the
++ * application (i.e. get number of "new events" by comparing to
++ * cached value).
++ *
++ * After a new SQ head value was read by the application this
++ * counter includes all submissions that were dropped reaching
++ * the new SQ head (and possibly more).
++ */
++ u32 sq_dropped;
++ /*
++ * Runtime SQ flags
++ *
++ * Written by the kernel, shouldn't be modified by the
++ * application.
++ *
++ * The application needs a full memory barrier before checking
++ * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
++ */
++ atomic_t sq_flags;
++ /*
++ * Runtime CQ flags
++ *
++ * Written by the application, shouldn't be modified by the
++ * kernel.
++ */
++ u32 cq_flags;
++ /*
++ * Number of completion events lost because the queue was full;
++ * this should be avoided by the application by making sure
++ * there are not more requests pending than there is space in
++ * the completion queue.
++ *
++ * Written by the kernel, shouldn't be modified by the
++ * application (i.e. get number of "new events" by comparing to
++ * cached value).
++ *
++ * As completion events come in out of order this counter is not
++ * ordered with any other data.
++ */
++ u32 cq_overflow;
++ /*
++ * Ring buffer of completion events.
++ *
++ * The kernel writes completion events fresh every time they are
++ * produced, so the application is allowed to modify pending
++ * entries.
++ */
++ struct io_uring_cqe cqes[] ____cacheline_aligned_in_smp;
++};
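++
++/*
++ * For illustration, an application typically maps this structure and
++ * resolves the fields through the published offsets roughly as below
++ * (liburing-style sketch; 'p' is the struct io_uring_params returned
++ * by io_uring_setup(), 'ring_fd' the returned fd):
++ *
++ *	void *sq_ptr = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
++ *			    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
++ *			    ring_fd, IORING_OFF_SQ_RING);
++ *	unsigned *sq_tail = sq_ptr + p.sq_off.tail;
++ *	__u32 *sq_array = sq_ptr + p.sq_off.array;
++ */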
++
++struct io_mapped_ubuf {
++ u64 ubuf;
++ u64 ubuf_end;
++ unsigned int nr_bvecs;
++ unsigned long acct_pages;
++ struct bio_vec bvec[];
++};
++
++struct io_ring_ctx;
++
++struct io_overflow_cqe {
++ struct list_head list;
++ struct io_uring_cqe cqe;
++};
++
++/*
++ * FFS_SCM is only available on 64-bit archs; for 32-bit we just define it as 0
++ * and define IO_URING_SCM_ALL. For this case, we use SCM for all files as we
++ * can't safely always dereference the file when the task has exited and ring
++ * cleanup is done. If a file is tracked and part of SCM, then unix gc on
++ * process exit may reap it before __io_sqe_files_unregister() is run.
++ */
++#define FFS_NOWAIT 0x1UL
++#define FFS_ISREG 0x2UL
++#if defined(CONFIG_64BIT)
++#define FFS_SCM 0x4UL
++#else
++#define IO_URING_SCM_ALL
++#define FFS_SCM 0x0UL
++#endif
++#define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG|FFS_SCM)
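++
++/*
++ * A fixed file slot packs the struct file pointer and the FFS_* bits
++ * into one word, the flags living in the low (alignment) bits, so a
++ * lookup decodes it roughly as follows (illustrative sketch):
++ *
++ *	unsigned long v = slot->file_ptr;
++ *	struct file *file = (struct file *)(v & FFS_MASK);
++ *	bool nowait = v & FFS_NOWAIT;
++ */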
++
++struct io_fixed_file {
++ /* file * with additional FFS_* flags */
++ unsigned long file_ptr;
++};
++
++struct io_rsrc_put {
++ struct list_head list;
++ u64 tag;
++ union {
++ void *rsrc;
++ struct file *file;
++ struct io_mapped_ubuf *buf;
++ };
++};
++
++struct io_file_table {
++ struct io_fixed_file *files;
++ unsigned long *bitmap;
++ unsigned int alloc_hint;
++};
++
++struct io_rsrc_node {
++ struct percpu_ref refs;
++ struct list_head node;
++ struct list_head rsrc_list;
++ struct io_rsrc_data *rsrc_data;
++ struct llist_node llist;
++ bool done;
++};
++
++typedef void (rsrc_put_fn)(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
++
++struct io_rsrc_data {
++ struct io_ring_ctx *ctx;
++
++ u64 **tags;
++ unsigned int nr;
++ rsrc_put_fn *do_put;
++ atomic_t refs;
++ struct completion done;
++ bool quiesce;
++};
++
++#define IO_BUFFER_LIST_BUF_PER_PAGE (PAGE_SIZE / sizeof(struct io_uring_buf))
++struct io_buffer_list {
++ /*
++ * If ->buf_nr_pages is set, then buf_pages/buf_ring are used. If not,
++ * then these are classic provided buffers and ->buf_list is used.
++ */
++ union {
++ struct list_head buf_list;
++ struct {
++ struct page **buf_pages;
++ struct io_uring_buf_ring *buf_ring;
++ };
++ };
++ __u16 bgid;
++
++ /* below is for ring provided buffers */
++ __u16 buf_nr_pages;
++ __u16 nr_entries;
++ __u16 head;
++ __u16 mask;
++};
++
++struct io_buffer {
++ struct list_head list;
++ __u64 addr;
++ __u32 len;
++ __u16 bid;
++ __u16 bgid;
++};
++
++struct io_restriction {
++ DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
++ DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
++ u8 sqe_flags_allowed;
++ u8 sqe_flags_required;
++ bool registered;
++};
++
++enum {
++ IO_SQ_THREAD_SHOULD_STOP = 0,
++ IO_SQ_THREAD_SHOULD_PARK,
++};
++
++struct io_sq_data {
++ refcount_t refs;
++ atomic_t park_pending;
++ struct mutex lock;
++
++ /* ctx's that are using this sqd */
++ struct list_head ctx_list;
++
++ struct task_struct *thread;
++ struct wait_queue_head wait;
++
++ unsigned sq_thread_idle;
++ int sq_cpu;
++ pid_t task_pid;
++ pid_t task_tgid;
++
++ unsigned long state;
++ struct completion exited;
++};
++
++#define IO_COMPL_BATCH 32
++#define IO_REQ_CACHE_SIZE 32
++#define IO_REQ_ALLOC_BATCH 8
++
++struct io_submit_link {
++ struct io_kiocb *head;
++ struct io_kiocb *last;
++};
++
++struct io_submit_state {
++ /* inline/task_work completion list, under ->uring_lock */
++ struct io_wq_work_node free_list;
++ /* batch completion logic */
++ struct io_wq_work_list compl_reqs;
++ struct io_submit_link link;
++
++ bool plug_started;
++ bool need_plug;
++ bool flush_cqes;
++ unsigned short submit_nr;
++ struct blk_plug plug;
++};
++
++struct io_ev_fd {
++ struct eventfd_ctx *cq_ev_fd;
++ unsigned int eventfd_async: 1;
++ struct rcu_head rcu;
++};
++
++#define BGID_ARRAY 64
++
++struct io_ring_ctx {
++ /* const or read-mostly hot data */
++ struct {
++ struct percpu_ref refs;
++
++ struct io_rings *rings;
++ unsigned int flags;
++ enum task_work_notify_mode notify_method;
++ unsigned int compat: 1;
++ unsigned int drain_next: 1;
++ unsigned int restricted: 1;
++ unsigned int off_timeout_used: 1;
++ unsigned int drain_active: 1;
++ unsigned int drain_disabled: 1;
++ unsigned int has_evfd: 1;
++ unsigned int syscall_iopoll: 1;
++ } ____cacheline_aligned_in_smp;
++
++ /* submission data */
++ struct {
++ struct mutex uring_lock;
++
++ /*
++ * Ring buffer of indices into array of io_uring_sqe, which is
++ * mmapped by the application using the IORING_OFF_SQES offset.
++ *
++ * This indirection could e.g. be used to assign fixed
++ * io_uring_sqe entries to operations and only submit them to
++ * the queue when needed.
++ *
++ * The kernel modifies neither the indices array nor the entries
++ * array.
++ */
++ u32 *sq_array;
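++ /*
++ * e.g. the SQE for logical head 'h' is, ignoring SQE128 (an
++ * illustrative expression, not a helper defined here):
++ *	sq_sqes[READ_ONCE(sq_array[h & (sq_entries - 1)])]
++ */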
++ struct io_uring_sqe *sq_sqes;
++ unsigned cached_sq_head;
++ unsigned sq_entries;
++ struct list_head defer_list;
++
++ /*
++ * Fixed resources fast path, should be accessed only under
++ * uring_lock, and updated through io_uring_register(2)
++ */
++ struct io_rsrc_node *rsrc_node;
++ int rsrc_cached_refs;
++ atomic_t cancel_seq;
++ struct io_file_table file_table;
++ unsigned nr_user_files;
++ unsigned nr_user_bufs;
++ struct io_mapped_ubuf **user_bufs;
++
++ struct io_submit_state submit_state;
++
++ struct io_buffer_list *io_bl;
++ struct xarray io_bl_xa;
++ struct list_head io_buffers_cache;
++
++ struct list_head timeout_list;
++ struct list_head ltimeout_list;
++ struct list_head cq_overflow_list;
++ struct list_head apoll_cache;
++ struct xarray personalities;
++ u32 pers_next;
++ unsigned sq_thread_idle;
++ } ____cacheline_aligned_in_smp;
++
++ /* IRQ completion list, under ->completion_lock */
++ struct io_wq_work_list locked_free_list;
++ unsigned int locked_free_nr;
++
++ const struct cred *sq_creds; /* cred used for __io_sq_thread() */
++ struct io_sq_data *sq_data; /* if using sq thread polling */
++
++ struct wait_queue_head sqo_sq_wait;
++ struct list_head sqd_list;
++
++ unsigned long check_cq;
++
++ struct {
++ /*
++ * We cache a range of free CQEs we can use, once exhausted it
++ * should go through a slower range setup, see __io_get_cqe()
++ */
++ struct io_uring_cqe *cqe_cached;
++ struct io_uring_cqe *cqe_sentinel;
++
++ unsigned cached_cq_tail;
++ unsigned cq_entries;
++ struct io_ev_fd __rcu *io_ev_fd;
++ struct wait_queue_head cq_wait;
++ unsigned cq_extra;
++ atomic_t cq_timeouts;
++ unsigned cq_last_tm_flush;
++ } ____cacheline_aligned_in_smp;
++
++ struct {
++ spinlock_t completion_lock;
++
++ spinlock_t timeout_lock;
++
++ /*
++ * ->iopoll_list is protected by the ctx->uring_lock for
++ * io_uring instances that don't use IORING_SETUP_SQPOLL.
++ * For SQPOLL, only the single threaded io_sq_thread() will
++ * manipulate the list, hence no extra locking is needed there.
++ */
++ struct io_wq_work_list iopoll_list;
++ struct hlist_head *cancel_hash;
++ unsigned cancel_hash_bits;
++ bool poll_multi_queue;
++
++ struct list_head io_buffers_comp;
++ } ____cacheline_aligned_in_smp;
++
++ struct io_restriction restrictions;
++
++ /* slow path rsrc auxiliary data, used by update/register */
++ struct {
++ struct io_rsrc_node *rsrc_backup_node;
++ struct io_mapped_ubuf *dummy_ubuf;
++ struct io_rsrc_data *file_data;
++ struct io_rsrc_data *buf_data;
++
++ struct delayed_work rsrc_put_work;
++ struct llist_head rsrc_put_llist;
++ struct list_head rsrc_ref_list;
++ spinlock_t rsrc_ref_lock;
++
++ struct list_head io_buffers_pages;
++ };
++
++ /* Keep this last, we don't need it for the fast path */
++ struct {
++ #if defined(CONFIG_UNIX)
++ struct socket *ring_sock;
++ #endif
++ /* hashed buffered write serialization */
++ struct io_wq_hash *hash_map;
++
++ /* Only used for accounting purposes */
++ struct user_struct *user;
++ struct mm_struct *mm_account;
++
++ /* ctx exit and cancelation */
++ struct llist_head fallback_llist;
++ struct delayed_work fallback_work;
++ struct work_struct exit_work;
++ struct list_head tctx_list;
++ struct completion ref_comp;
++ u32 iowq_limits[2];
++ bool iowq_limits_set;
++ };
++};
++
++/*
++ * Arbitrary limit, can be raised if need be
++ */
++#define IO_RINGFD_REG_MAX 16
++
++struct io_uring_task {
++ /* submission side */
++ int cached_refs;
++ struct xarray xa;
++ struct wait_queue_head wait;
++ const struct io_ring_ctx *last;
++ struct io_wq *io_wq;
++ struct percpu_counter inflight;
++ atomic_t inflight_tracked;
++ atomic_t in_idle;
++
++ spinlock_t task_lock;
++ struct io_wq_work_list task_list;
++ struct io_wq_work_list prio_task_list;
++ struct callback_head task_work;
++ struct file **registered_rings;
++ bool task_running;
++};
++
++/*
++ * First field must be the file pointer in all the
++ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
++ */
++struct io_poll_iocb {
++ struct file *file;
++ struct wait_queue_head *head;
++ __poll_t events;
++ struct wait_queue_entry wait;
++};
++
++struct io_poll_update {
++ struct file *file;
++ u64 old_user_data;
++ u64 new_user_data;
++ __poll_t events;
++ bool update_events;
++ bool update_user_data;
++};
++
++struct io_close {
++ struct file *file;
++ int fd;
++ u32 file_slot;
++};
++
++struct io_timeout_data {
++ struct io_kiocb *req;
++ struct hrtimer timer;
++ struct timespec64 ts;
++ enum hrtimer_mode mode;
++ u32 flags;
++};
++
++struct io_accept {
++ struct file *file;
++ struct sockaddr __user *addr;
++ int __user *addr_len;
++ int flags;
++ u32 file_slot;
++ unsigned long nofile;
++};
++
++struct io_socket {
++ struct file *file;
++ int domain;
++ int type;
++ int protocol;
++ int flags;
++ u32 file_slot;
++ unsigned long nofile;
++};
++
++struct io_sync {
++ struct file *file;
++ loff_t len;
++ loff_t off;
++ int flags;
++ int mode;
++};
++
++struct io_cancel {
++ struct file *file;
++ u64 addr;
++ u32 flags;
++ s32 fd;
++};
++
++struct io_timeout {
++ struct file *file;
++ u32 off;
++ u32 target_seq;
++ struct list_head list;
++ /* head of the link, used by linked timeouts only */
++ struct io_kiocb *head;
++ /* for linked completions */
++ struct io_kiocb *prev;
++};
++
++struct io_timeout_rem {
++ struct file *file;
++ u64 addr;
++
++ /* timeout update */
++ struct timespec64 ts;
++ u32 flags;
++ bool ltimeout;
++};
++
++struct io_rw {
++ /* NOTE: kiocb has the file as the first member, so don't do it here */
++ struct kiocb kiocb;
++ u64 addr;
++ u32 len;
++ rwf_t flags;
++};
++
++struct io_connect {
++ struct file *file;
++ struct sockaddr __user *addr;
++ int addr_len;
++};
++
++struct io_sr_msg {
++ struct file *file;
++ union {
++ struct compat_msghdr __user *umsg_compat;
++ struct user_msghdr __user *umsg;
++ void __user *buf;
++ };
++ int msg_flags;
++ size_t len;
++ size_t done_io;
++ unsigned int flags;
++};
++
++struct io_open {
++ struct file *file;
++ int dfd;
++ u32 file_slot;
++ struct filename *filename;
++ struct open_how how;
++ unsigned long nofile;
++};
++
++struct io_rsrc_update {
++ struct file *file;
++ u64 arg;
++ u32 nr_args;
++ u32 offset;
++};
++
++struct io_fadvise {
++ struct file *file;
++ u64 offset;
++ u32 len;
++ u32 advice;
++};
++
++struct io_madvise {
++ struct file *file;
++ u64 addr;
++ u32 len;
++ u32 advice;
++};
++
++struct io_epoll {
++ struct file *file;
++ int epfd;
++ int op;
++ int fd;
++ struct epoll_event event;
++};
++
++struct io_splice {
++ struct file *file_out;
++ loff_t off_out;
++ loff_t off_in;
++ u64 len;
++ int splice_fd_in;
++ unsigned int flags;
++};
++
++struct io_provide_buf {
++ struct file *file;
++ __u64 addr;
++ __u32 len;
++ __u32 bgid;
++ __u16 nbufs;
++ __u16 bid;
++};
++
++struct io_statx {
++ struct file *file;
++ int dfd;
++ unsigned int mask;
++ unsigned int flags;
++ struct filename *filename;
++ struct statx __user *buffer;
++};
++
++struct io_shutdown {
++ struct file *file;
++ int how;
++};
++
++struct io_rename {
++ struct file *file;
++ int old_dfd;
++ int new_dfd;
++ struct filename *oldpath;
++ struct filename *newpath;
++ int flags;
++};
++
++struct io_unlink {
++ struct file *file;
++ int dfd;
++ int flags;
++ struct filename *filename;
++};
++
++struct io_mkdir {
++ struct file *file;
++ int dfd;
++ umode_t mode;
++ struct filename *filename;
++};
++
++struct io_symlink {
++ struct file *file;
++ int new_dfd;
++ struct filename *oldpath;
++ struct filename *newpath;
++};
++
++struct io_hardlink {
++ struct file *file;
++ int old_dfd;
++ int new_dfd;
++ struct filename *oldpath;
++ struct filename *newpath;
++ int flags;
++};
++
++struct io_msg {
++ struct file *file;
++ u64 user_data;
++ u32 len;
++};
++
++struct io_async_connect {
++ struct sockaddr_storage address;
++};
++
++struct io_async_msghdr {
++ struct iovec fast_iov[UIO_FASTIOV];
++ /* points to an allocated iov, if NULL we use fast_iov instead */
++ struct iovec *free_iov;
++ struct sockaddr __user *uaddr;
++ struct msghdr msg;
++ struct sockaddr_storage addr;
++};
++
++struct io_rw_state {
++ struct iov_iter iter;
++ struct iov_iter_state iter_state;
++ struct iovec fast_iov[UIO_FASTIOV];
++};
++
++struct io_async_rw {
++ struct io_rw_state s;
++ const struct iovec *free_iovec;
++ size_t bytes_done;
++ struct wait_page_queue wpq;
++};
++
++struct io_xattr {
++ struct file *file;
++ struct xattr_ctx ctx;
++ struct filename *filename;
++};
++
++enum {
++ REQ_F_FIXED_FILE_BIT = IOSQE_FIXED_FILE_BIT,
++ REQ_F_IO_DRAIN_BIT = IOSQE_IO_DRAIN_BIT,
++ REQ_F_LINK_BIT = IOSQE_IO_LINK_BIT,
++ REQ_F_HARDLINK_BIT = IOSQE_IO_HARDLINK_BIT,
++ REQ_F_FORCE_ASYNC_BIT = IOSQE_ASYNC_BIT,
++ REQ_F_BUFFER_SELECT_BIT = IOSQE_BUFFER_SELECT_BIT,
++ REQ_F_CQE_SKIP_BIT = IOSQE_CQE_SKIP_SUCCESS_BIT,
++
++ /* first byte is taken by user flags, shift it to not overlap */
++ REQ_F_FAIL_BIT = 8,
++ REQ_F_INFLIGHT_BIT,
++ REQ_F_CUR_POS_BIT,
++ REQ_F_NOWAIT_BIT,
++ REQ_F_LINK_TIMEOUT_BIT,
++ REQ_F_NEED_CLEANUP_BIT,
++ REQ_F_POLLED_BIT,
++ REQ_F_BUFFER_SELECTED_BIT,
++ REQ_F_BUFFER_RING_BIT,
++ REQ_F_COMPLETE_INLINE_BIT,
++ REQ_F_REISSUE_BIT,
++ REQ_F_CREDS_BIT,
++ REQ_F_REFCOUNT_BIT,
++ REQ_F_ARM_LTIMEOUT_BIT,
++ REQ_F_ASYNC_DATA_BIT,
++ REQ_F_SKIP_LINK_CQES_BIT,
++ REQ_F_SINGLE_POLL_BIT,
++ REQ_F_DOUBLE_POLL_BIT,
++ REQ_F_PARTIAL_IO_BIT,
++ REQ_F_CQE32_INIT_BIT,
++ REQ_F_APOLL_MULTISHOT_BIT,
++ /* keep async read/write and isreg together and in order */
++ REQ_F_SUPPORT_NOWAIT_BIT,
++ REQ_F_ISREG_BIT,
++
++ /* not a real bit, just to check we're not overflowing the space */
++ __REQ_F_LAST_BIT,
++};
++
++enum {
++ /* ctx owns file */
++ REQ_F_FIXED_FILE = BIT(REQ_F_FIXED_FILE_BIT),
++ /* drain existing IO first */
++ REQ_F_IO_DRAIN = BIT(REQ_F_IO_DRAIN_BIT),
++ /* linked sqes */
++ REQ_F_LINK = BIT(REQ_F_LINK_BIT),
++ /* doesn't sever on completion < 0 */
++ REQ_F_HARDLINK = BIT(REQ_F_HARDLINK_BIT),
++ /* IOSQE_ASYNC */
++ REQ_F_FORCE_ASYNC = BIT(REQ_F_FORCE_ASYNC_BIT),
++ /* IOSQE_BUFFER_SELECT */
++ REQ_F_BUFFER_SELECT = BIT(REQ_F_BUFFER_SELECT_BIT),
++ /* IOSQE_CQE_SKIP_SUCCESS */
++ REQ_F_CQE_SKIP = BIT(REQ_F_CQE_SKIP_BIT),
++
++ /* fail rest of links */
++ REQ_F_FAIL = BIT(REQ_F_FAIL_BIT),
++ /* on inflight list, should be cancelled and waited on exit reliably */
++ REQ_F_INFLIGHT = BIT(REQ_F_INFLIGHT_BIT),
++ /* read/write uses file position */
++ REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
++ /* must not punt to workers */
++ REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
++ /* has or had linked timeout */
++ REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
++ /* needs cleanup */
++ REQ_F_NEED_CLEANUP = BIT(REQ_F_NEED_CLEANUP_BIT),
++ /* already went through poll handler */
++ REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),
++ /* buffer already selected */
++ REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
++ /* buffer selected from ring, needs commit */
++ REQ_F_BUFFER_RING = BIT(REQ_F_BUFFER_RING_BIT),
++ /* completion is deferred through io_comp_state */
++ REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
++ /* caller should reissue async */
++ REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT),
++ /* supports async reads/writes */
++ REQ_F_SUPPORT_NOWAIT = BIT(REQ_F_SUPPORT_NOWAIT_BIT),
++ /* regular file */
++ REQ_F_ISREG = BIT(REQ_F_ISREG_BIT),
++ /* has creds assigned */
++ REQ_F_CREDS = BIT(REQ_F_CREDS_BIT),
++ /* skip refcounting if not set */
++ REQ_F_REFCOUNT = BIT(REQ_F_REFCOUNT_BIT),
++ /* there is a linked timeout that has to be armed */
++ REQ_F_ARM_LTIMEOUT = BIT(REQ_F_ARM_LTIMEOUT_BIT),
++ /* ->async_data allocated */
++ REQ_F_ASYNC_DATA = BIT(REQ_F_ASYNC_DATA_BIT),
++ /* don't post CQEs while failing linked requests */
++ REQ_F_SKIP_LINK_CQES = BIT(REQ_F_SKIP_LINK_CQES_BIT),
++ /* single poll may be active */
++ REQ_F_SINGLE_POLL = BIT(REQ_F_SINGLE_POLL_BIT),
++ /* double poll may be active */
++ REQ_F_DOUBLE_POLL = BIT(REQ_F_DOUBLE_POLL_BIT),
++ /* request has already done partial IO */
++ REQ_F_PARTIAL_IO = BIT(REQ_F_PARTIAL_IO_BIT),
++ /* fast poll multishot mode */
++ REQ_F_APOLL_MULTISHOT = BIT(REQ_F_APOLL_MULTISHOT_BIT),
++ /* ->extra1 and ->extra2 are initialised */
++ REQ_F_CQE32_INIT = BIT(REQ_F_CQE32_INIT_BIT),
++};
++
++struct async_poll {
++ struct io_poll_iocb poll;
++ struct io_poll_iocb *double_poll;
++};
++
++typedef void (*io_req_tw_func_t)(struct io_kiocb *req, bool *locked);
++
++struct io_task_work {
++ union {
++ struct io_wq_work_node node;
++ struct llist_node fallback_node;
++ };
++ io_req_tw_func_t func;
++};
++
++enum {
++ IORING_RSRC_FILE = 0,
++ IORING_RSRC_BUFFER = 1,
++};
++
++struct io_cqe {
++ __u64 user_data;
++ __s32 res;
++ /* fd initially, then cflags for completion */
++ union {
++ __u32 flags;
++ int fd;
++ };
++};
++
++enum {
++ IO_CHECK_CQ_OVERFLOW_BIT,
++ IO_CHECK_CQ_DROPPED_BIT,
++};
++
++/*
++ * NOTE! Each of the iocb union members has the file pointer
++ * as the first entry in their struct definition. So you can
++ * access the file pointer through any of the sub-structs,
++ * or directly as just 'file' in this struct.
++ */
++struct io_kiocb {
++ union {
++ struct file *file;
++ struct io_rw rw;
++ struct io_poll_iocb poll;
++ struct io_poll_update poll_update;
++ struct io_accept accept;
++ struct io_sync sync;
++ struct io_cancel cancel;
++ struct io_timeout timeout;
++ struct io_timeout_rem timeout_rem;
++ struct io_connect connect;
++ struct io_sr_msg sr_msg;
++ struct io_open open;
++ struct io_close close;
++ struct io_rsrc_update rsrc_update;
++ struct io_fadvise fadvise;
++ struct io_madvise madvise;
++ struct io_epoll epoll;
++ struct io_splice splice;
++ struct io_provide_buf pbuf;
++ struct io_statx statx;
++ struct io_shutdown shutdown;
++ struct io_rename rename;
++ struct io_unlink unlink;
++ struct io_mkdir mkdir;
++ struct io_symlink symlink;
++ struct io_hardlink hardlink;
++ struct io_msg msg;
++ struct io_xattr xattr;
++ struct io_socket sock;
++ struct io_uring_cmd uring_cmd;
++ };
++
++ u8 opcode;
++ /* polled IO has completed */
++ u8 iopoll_completed;
++ /*
++ * Can be either a fixed buffer index, or used with provided buffers.
++ * For the latter, before issue it points to the buffer group ID,
++ * and after selection it points to the buffer ID itself.
++ */
++ u16 buf_index;
++ unsigned int flags;
++
++ struct io_cqe cqe;
++
++ struct io_ring_ctx *ctx;
++ struct task_struct *task;
++
++ struct io_rsrc_node *rsrc_node;
++
++ union {
++ /* store used ubuf, so we can prevent reloading */
++ struct io_mapped_ubuf *imu;
++
++ /* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
++ struct io_buffer *kbuf;
++
++ /*
++ * stores buffer ID for ring provided buffers, valid IFF
++ * REQ_F_BUFFER_RING is set.
++ */
++ struct io_buffer_list *buf_list;
++ };
++
++ union {
++ /* used by request caches, completion batching and iopoll */
++ struct io_wq_work_node comp_list;
++ /* cache ->apoll->events */
++ __poll_t apoll_events;
++ };
++ atomic_t refs;
++ atomic_t poll_refs;
++ struct io_task_work io_task_work;
++ /* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
++ union {
++ struct hlist_node hash_node;
++ struct {
++ u64 extra1;
++ u64 extra2;
++ };
++ };
++ /* internal polling, see IORING_FEAT_FAST_POLL */
++ struct async_poll *apoll;
++ /* opcode allocated if it needs to store data for async defer */
++ void *async_data;
++ /* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
++ struct io_kiocb *link;
++ /* custom credentials, valid IFF REQ_F_CREDS is set */
++ const struct cred *creds;
++ struct io_wq_work work;
++};
++
++struct io_tctx_node {
++ struct list_head ctx_node;
++ struct task_struct *task;
++ struct io_ring_ctx *ctx;
++};
++
++struct io_defer_entry {
++ struct list_head list;
++ struct io_kiocb *req;
++ u32 seq;
++};
++
++struct io_cancel_data {
++ struct io_ring_ctx *ctx;
++ union {
++ u64 data;
++ struct file *file;
++ };
++ u32 flags;
++ int seq;
++};
++
++/*
++ * The URING_CMD payload starts at 'cmd' in the first sqe, and continues into
++ * the following sqe if SQE128 is used.
++ */
++#define uring_cmd_pdu_size(is_sqe128) \
++ ((1 + !!(is_sqe128)) * sizeof(struct io_uring_sqe) - \
++ offsetof(struct io_uring_sqe, cmd))
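++
++/*
++ * Worked example, assuming the usual 64-byte struct io_uring_sqe with
++ * 'cmd' at offset 48: without SQE128 the payload is 1 * 64 - 48 = 16
++ * bytes, with SQE128 it is 2 * 64 - 48 = 80 bytes.
++ */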
++
++struct io_op_def {
++ /* needs req->file assigned */
++ unsigned needs_file : 1;
++ /* should block plug */
++ unsigned plug : 1;
++ /* hash wq insertion if file is a regular file */
++ unsigned hash_reg_file : 1;
++ /* unbound wq insertion if file is a non-regular file */
++ unsigned unbound_nonreg_file : 1;
++ /* set if opcode supports polled "wait" */
++ unsigned pollin : 1;
++ unsigned pollout : 1;
++ unsigned poll_exclusive : 1;
++ /* op supports buffer selection */
++ unsigned buffer_select : 1;
++ /* do prep async if is going to be punted */
++ unsigned needs_async_setup : 1;
++ /* opcode is not supported by this kernel */
++ unsigned not_supported : 1;
++ /* skip auditing */
++ unsigned audit_skip : 1;
++ /* supports ioprio */
++ unsigned ioprio : 1;
++ /* supports iopoll */
++ unsigned iopoll : 1;
++ /* size of async data needed, if any */
++ unsigned short async_size;
++
++ int (*prep)(struct io_kiocb *, const struct io_uring_sqe *);
++ int (*issue)(struct io_kiocb *, unsigned int);
++};
++
++static const struct io_op_def io_op_defs[];
++
++/* requests with any of those set should undergo io_disarm_next() */
++#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
++#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
++
++static bool io_disarm_next(struct io_kiocb *req);
++static void io_uring_del_tctx_node(unsigned long index);
++static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
++ struct task_struct *task,
++ bool cancel_all);
++static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
++
++static void __io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags);
++static void io_dismantle_req(struct io_kiocb *req);
++static void io_queue_linked_timeout(struct io_kiocb *req);
++static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
++ struct io_uring_rsrc_update2 *up,
++ unsigned nr_args);
++static void io_clean_op(struct io_kiocb *req);
++static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
++ unsigned issue_flags);
++static struct file *io_file_get_normal(struct io_kiocb *req, int fd);
++static void io_queue_sqe(struct io_kiocb *req);
++static void io_rsrc_put_work(struct work_struct *work);
++
++static void io_req_task_queue(struct io_kiocb *req);
++static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
++static int io_req_prep_async(struct io_kiocb *req);
++
++static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
++ unsigned int issue_flags, u32 slot_index);
++static int __io_close_fixed(struct io_kiocb *req, unsigned int issue_flags,
++ unsigned int offset);
++static inline int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags);
++
++static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer);
++static void io_eventfd_signal(struct io_ring_ctx *ctx);
++static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
++
++static struct kmem_cache *req_cachep;
++
++static const struct file_operations io_uring_fops;
++
++const char *io_uring_get_opcode(u8 opcode)
++{
++ switch ((enum io_uring_op)opcode) {
++ case IORING_OP_NOP:
++ return "NOP";
++ case IORING_OP_READV:
++ return "READV";
++ case IORING_OP_WRITEV:
++ return "WRITEV";
++ case IORING_OP_FSYNC:
++ return "FSYNC";
++ case IORING_OP_READ_FIXED:
++ return "READ_FIXED";
++ case IORING_OP_WRITE_FIXED:
++ return "WRITE_FIXED";
++ case IORING_OP_POLL_ADD:
++ return "POLL_ADD";
++ case IORING_OP_POLL_REMOVE:
++ return "POLL_REMOVE";
++ case IORING_OP_SYNC_FILE_RANGE:
++ return "SYNC_FILE_RANGE";
++ case IORING_OP_SENDMSG:
++ return "SENDMSG";
++ case IORING_OP_RECVMSG:
++ return "RECVMSG";
++ case IORING_OP_TIMEOUT:
++ return "TIMEOUT";
++ case IORING_OP_TIMEOUT_REMOVE:
++ return "TIMEOUT_REMOVE";
++ case IORING_OP_ACCEPT:
++ return "ACCEPT";
++ case IORING_OP_ASYNC_CANCEL:
++ return "ASYNC_CANCEL";
++ case IORING_OP_LINK_TIMEOUT:
++ return "LINK_TIMEOUT";
++ case IORING_OP_CONNECT:
++ return "CONNECT";
++ case IORING_OP_FALLOCATE:
++ return "FALLOCATE";
++ case IORING_OP_OPENAT:
++ return "OPENAT";
++ case IORING_OP_CLOSE:
++ return "CLOSE";
++ case IORING_OP_FILES_UPDATE:
++ return "FILES_UPDATE";
++ case IORING_OP_STATX:
++ return "STATX";
++ case IORING_OP_READ:
++ return "READ";
++ case IORING_OP_WRITE:
++ return "WRITE";
++ case IORING_OP_FADVISE:
++ return "FADVISE";
++ case IORING_OP_MADVISE:
++ return "MADVISE";
++ case IORING_OP_SEND:
++ return "SEND";
++ case IORING_OP_RECV:
++ return "RECV";
++ case IORING_OP_OPENAT2:
++ return "OPENAT2";
++ case IORING_OP_EPOLL_CTL:
++ return "EPOLL_CTL";
++ case IORING_OP_SPLICE:
++ return "SPLICE";
++ case IORING_OP_PROVIDE_BUFFERS:
++ return "PROVIDE_BUFFERS";
++ case IORING_OP_REMOVE_BUFFERS:
++ return "REMOVE_BUFFERS";
++ case IORING_OP_TEE:
++ return "TEE";
++ case IORING_OP_SHUTDOWN:
++ return "SHUTDOWN";
++ case IORING_OP_RENAMEAT:
++ return "RENAMEAT";
++ case IORING_OP_UNLINKAT:
++ return "UNLINKAT";
++ case IORING_OP_MKDIRAT:
++ return "MKDIRAT";
++ case IORING_OP_SYMLINKAT:
++ return "SYMLINKAT";
++ case IORING_OP_LINKAT:
++ return "LINKAT";
++ case IORING_OP_MSG_RING:
++ return "MSG_RING";
++ case IORING_OP_FSETXATTR:
++ return "FSETXATTR";
++ case IORING_OP_SETXATTR:
++ return "SETXATTR";
++ case IORING_OP_FGETXATTR:
++ return "FGETXATTR";
++ case IORING_OP_GETXATTR:
++ return "GETXATTR";
++ case IORING_OP_SOCKET:
++ return "SOCKET";
++ case IORING_OP_URING_CMD:
++ return "URING_CMD";
++ case IORING_OP_LAST:
++ return "INVALID";
++ }
++ return "INVALID";
++}
++
++struct sock *io_uring_get_socket(struct file *file)
++{
++#if defined(CONFIG_UNIX)
++ if (file->f_op == &io_uring_fops) {
++ struct io_ring_ctx *ctx = file->private_data;
++
++ return ctx->ring_sock->sk;
++ }
++#endif
++ return NULL;
++}
++EXPORT_SYMBOL(io_uring_get_socket);
++
++#if defined(CONFIG_UNIX)
++static inline bool io_file_need_scm(struct file *filp)
++{
++#if defined(IO_URING_SCM_ALL)
++ return true;
++#else
++ return !!unix_get_socket(filp);
++#endif
++}
++#else
++static inline bool io_file_need_scm(struct file *filp)
++{
++ return false;
++}
++#endif
++
++static void io_ring_submit_unlock(struct io_ring_ctx *ctx, unsigned issue_flags)
++{
++ lockdep_assert_held(&ctx->uring_lock);
++ if (issue_flags & IO_URING_F_UNLOCKED)
++ mutex_unlock(&ctx->uring_lock);
++}
++
++static void io_ring_submit_lock(struct io_ring_ctx *ctx, unsigned issue_flags)
++{
++ /*
++ * "Normal" inline submissions always hold the uring_lock, since we
++ * grab it from the system call. Same is true for the SQPOLL offload.
++ * The only exception is when we've detached the request and issue it
++ * from an async worker thread; grab the lock for that case.
++ */
++ if (issue_flags & IO_URING_F_UNLOCKED)
++ mutex_lock(&ctx->uring_lock);
++ lockdep_assert_held(&ctx->uring_lock);
++}
++
++static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
++{
++ if (!*locked) {
++ mutex_lock(&ctx->uring_lock);
++ *locked = true;
++ }
++}
++
++#define io_for_each_link(pos, head) \
++ for (pos = (head); pos; pos = pos->link)
++
++/*
++ * Shamelessly stolen from the mm implementation of page reference checking,
++ * see commit f958d7b528b1 for details.
++ */
++#define req_ref_zero_or_close_to_overflow(req) \
++ ((unsigned int) atomic_read(&(req->refs)) + 127u <= 127u)
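++
++/*
++ * The "+ 127u" folds "zero or close to wrapping" into one unsigned
++ * compare: 0 + 127 = 127 (caught), 0xffffffff + 127 wraps to 126
++ * (caught), while a healthy count such as 1 gives 128 and passes.
++ */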
++
++static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
++{
++ WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
++ return atomic_inc_not_zero(&req->refs);
++}
++
++static inline bool req_ref_put_and_test(struct io_kiocb *req)
++{
++ if (likely(!(req->flags & REQ_F_REFCOUNT)))
++ return true;
++
++ WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
++ return atomic_dec_and_test(&req->refs);
++}
++
++static inline void req_ref_get(struct io_kiocb *req)
++{
++ WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
++ WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
++ atomic_inc(&req->refs);
++}
++
++static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
++{
++ if (!wq_list_empty(&ctx->submit_state.compl_reqs))
++ __io_submit_flush_completions(ctx);
++}
++
++static inline void __io_req_set_refcount(struct io_kiocb *req, int nr)
++{
++ if (!(req->flags & REQ_F_REFCOUNT)) {
++ req->flags |= REQ_F_REFCOUNT;
++ atomic_set(&req->refs, nr);
++ }
++}
++
++static inline void io_req_set_refcount(struct io_kiocb *req)
++{
++ __io_req_set_refcount(req, 1);
++}
++
++#define IO_RSRC_REF_BATCH 100
++
++static void io_rsrc_put_node(struct io_rsrc_node *node, int nr)
++{
++ percpu_ref_put_many(&node->refs, nr);
++}
++
++static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
++ struct io_ring_ctx *ctx)
++ __must_hold(&ctx->uring_lock)
++{
++ struct io_rsrc_node *node = req->rsrc_node;
++
++ if (node) {
++ if (node == ctx->rsrc_node)
++ ctx->rsrc_cached_refs++;
++ else
++ io_rsrc_put_node(node, 1);
++ }
++}
++
++static inline void io_req_put_rsrc(struct io_kiocb *req)
++{
++ if (req->rsrc_node)
++ io_rsrc_put_node(req->rsrc_node, 1);
++}
++
++static __cold void io_rsrc_refs_drop(struct io_ring_ctx *ctx)
++ __must_hold(&ctx->uring_lock)
++{
++ if (ctx->rsrc_cached_refs) {
++ io_rsrc_put_node(ctx->rsrc_node, ctx->rsrc_cached_refs);
++ ctx->rsrc_cached_refs = 0;
++ }
++}
++
++static void io_rsrc_refs_refill(struct io_ring_ctx *ctx)
++ __must_hold(&ctx->uring_lock)
++{
++ ctx->rsrc_cached_refs += IO_RSRC_REF_BATCH;
++ percpu_ref_get_many(&ctx->rsrc_node->refs, IO_RSRC_REF_BATCH);
++}
++
++static inline void io_req_set_rsrc_node(struct io_kiocb *req,
++ struct io_ring_ctx *ctx,
++ unsigned int issue_flags)
++{
++ if (!req->rsrc_node) {
++ req->rsrc_node = ctx->rsrc_node;
++
++ if (!(issue_flags & IO_URING_F_UNLOCKED)) {
++ lockdep_assert_held(&ctx->uring_lock);
++ ctx->rsrc_cached_refs--;
++ if (unlikely(ctx->rsrc_cached_refs < 0))
++ io_rsrc_refs_refill(ctx);
++ } else {
++ percpu_ref_get(&req->rsrc_node->refs);
++ }
++ }
++}
++
++static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
++{
++ if (req->flags & REQ_F_BUFFER_RING) {
++ if (req->buf_list)
++ req->buf_list->head++;
++ req->flags &= ~REQ_F_BUFFER_RING;
++ } else {
++ list_add(&req->kbuf->list, list);
++ req->flags &= ~REQ_F_BUFFER_SELECTED;
++ }
++
++ return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
++}
++
++static inline unsigned int io_put_kbuf_comp(struct io_kiocb *req)
++{
++ lockdep_assert_held(&req->ctx->completion_lock);
++
++ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
++ return 0;
++ return __io_put_kbuf(req, &req->ctx->io_buffers_comp);
++}
++
++static inline unsigned int io_put_kbuf(struct io_kiocb *req,
++ unsigned issue_flags)
++{
++ unsigned int cflags;
++
++ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
++ return 0;
++
++ /*
++ * We can add this buffer back to two lists:
++ *
++ * 1) The io_buffers_cache list. This one is protected by the
++ * ctx->uring_lock. If we already hold this lock, add back to this
++ * list as we can grab it from issue as well.
++ * 2) The io_buffers_comp list. This one is protected by the
++ * ctx->completion_lock.
++ *
++ * We migrate buffers from the comp_list to the issue cache list
++ * when we need one.
++ */
++ if (req->flags & REQ_F_BUFFER_RING) {
++ /* no buffers to recycle for this case */
++ cflags = __io_put_kbuf(req, NULL);
++ } else if (issue_flags & IO_URING_F_UNLOCKED) {
++ struct io_ring_ctx *ctx = req->ctx;
++
++ spin_lock(&ctx->completion_lock);
++ cflags = __io_put_kbuf(req, &ctx->io_buffers_comp);
++ spin_unlock(&ctx->completion_lock);
++ } else {
++ lockdep_assert_held(&req->ctx->uring_lock);
++
++ cflags = __io_put_kbuf(req, &req->ctx->io_buffers_cache);
++ }
++
++ return cflags;
++}
++
++static struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx *ctx,
++ unsigned int bgid)
++{
++ if (ctx->io_bl && bgid < BGID_ARRAY)
++ return &ctx->io_bl[bgid];
++
++ return xa_load(&ctx->io_bl_xa, bgid);
++}
++
++static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_buffer_list *bl;
++ struct io_buffer *buf;
++
++ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
++ return;
++ /*
++ * For legacy provided buffer mode, don't recycle if we already did
++ * IO to this buffer. For ring-mapped provided buffer mode, we should
++ * increment ring->head to explicitly monopolize the buffer to avoid
++ * multiple use.
++ */
++ if ((req->flags & REQ_F_BUFFER_SELECTED) &&
++ (req->flags & REQ_F_PARTIAL_IO))
++ return;
++
++ /*
++ * READV uses fields in `struct io_rw` (len/addr) to stash the selected
++ * buffer data. However if that buffer is recycled the original request
++ * data stored in addr is lost. Therefore forbid recycling for now.
++ */
++ if (req->opcode == IORING_OP_READV)
++ return;
++
++ /*
++ * We don't need to recycle for REQ_F_BUFFER_RING, we can just clear
++ * the flag and hence ensure that bl->head doesn't get incremented.
++ * If the tail has already been incremented, hang on to it.
++ */
++ if (req->flags & REQ_F_BUFFER_RING) {
++ if (req->buf_list) {
++ if (req->flags & REQ_F_PARTIAL_IO) {
++ req->buf_list->head++;
++ req->buf_list = NULL;
++ } else {
++ req->buf_index = req->buf_list->bgid;
++ req->flags &= ~REQ_F_BUFFER_RING;
++ }
++ }
++ return;
++ }
++
++ io_ring_submit_lock(ctx, issue_flags);
++
++ buf = req->kbuf;
++ bl = io_buffer_get_list(ctx, buf->bgid);
++ list_add(&buf->list, &bl->buf_list);
++ req->flags &= ~REQ_F_BUFFER_SELECTED;
++ req->buf_index = buf->bgid;
++
++ io_ring_submit_unlock(ctx, issue_flags);
++}
++
++static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
++ bool cancel_all)
++ __must_hold(&req->ctx->timeout_lock)
++{
++ struct io_kiocb *req;
++
++ if (task && head->task != task)
++ return false;
++ if (cancel_all)
++ return true;
++
++ io_for_each_link(req, head) {
++ if (req->flags & REQ_F_INFLIGHT)
++ return true;
++ }
++ return false;
++}
++
++static bool io_match_linked(struct io_kiocb *head)
++{
++ struct io_kiocb *req;
++
++ io_for_each_link(req, head) {
++ if (req->flags & REQ_F_INFLIGHT)
++ return true;
++ }
++ return false;
++}
++
++/*
++ * As io_match_task() but protected against racing with linked timeouts.
++ * User must not hold timeout_lock.
++ */
++static bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
++ bool cancel_all)
++{
++ bool matched;
++
++ if (task && head->task != task)
++ return false;
++ if (cancel_all)
++ return true;
++
++ if (head->flags & REQ_F_LINK_TIMEOUT) {
++ struct io_ring_ctx *ctx = head->ctx;
++
++ /* protect against races with linked timeouts */
++ spin_lock_irq(&ctx->timeout_lock);
++ matched = io_match_linked(head);
++ spin_unlock_irq(&ctx->timeout_lock);
++ } else {
++ matched = io_match_linked(head);
++ }
++ return matched;
++}
++
++static inline bool req_has_async_data(struct io_kiocb *req)
++{
++ return req->flags & REQ_F_ASYNC_DATA;
++}
++
++static inline void req_set_fail(struct io_kiocb *req)
++{
++ req->flags |= REQ_F_FAIL;
++ if (req->flags & REQ_F_CQE_SKIP) {
++ req->flags &= ~REQ_F_CQE_SKIP;
++ req->flags |= REQ_F_SKIP_LINK_CQES;
++ }
++}
++
++static inline void req_fail_link_node(struct io_kiocb *req, int res)
++{
++ req_set_fail(req);
++ req->cqe.res = res;
++}
++
++static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
++{
++ wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
++}
++
++static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
++{
++ struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
++
++ complete(&ctx->ref_comp);
++}
++
++static inline bool io_is_timeout_noseq(struct io_kiocb *req)
++{
++ return !req->timeout.off;
++}
++
++static __cold void io_fallback_req_func(struct work_struct *work)
++{
++ struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
++ fallback_work.work);
++ struct llist_node *node = llist_del_all(&ctx->fallback_llist);
++ struct io_kiocb *req, *tmp;
++ bool locked = false;
++
++ percpu_ref_get(&ctx->refs);
++ llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
++ req->io_task_work.func(req, &locked);
++
++ if (locked) {
++ io_submit_flush_completions(ctx);
++ mutex_unlock(&ctx->uring_lock);
++ }
++ percpu_ref_put(&ctx->refs);
++}
++
++static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
++{
++ struct io_ring_ctx *ctx;
++ int hash_bits;
++
++ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
++ if (!ctx)
++ return NULL;
++
++ xa_init(&ctx->io_bl_xa);
++
++ /*
++ * Use 5 bits less than the max cq entries; that should give us around
++ * 32 entries per hash list if totally full and uniformly spread.
++ */
++ hash_bits = ilog2(p->cq_entries);
++ hash_bits -= 5;
++ if (hash_bits <= 0)
++ hash_bits = 1;
++ ctx->cancel_hash_bits = hash_bits;
++ ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
++ GFP_KERNEL);
++ if (!ctx->cancel_hash)
++ goto err;
++ __hash_init(ctx->cancel_hash, 1U << hash_bits);
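++ /*
++ * Example: cq_entries = 4096 gives ilog2() = 12, so hash_bits = 7,
++ * i.e. 128 hash lists and ~32 entries per list with the CQ ring
++ * completely full and a uniform spread.
++ */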
++
++ ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
++ if (!ctx->dummy_ubuf)
++ goto err;
++ /* set invalid range, so io_import_fixed() fails meeting it */
++ ctx->dummy_ubuf->ubuf = -1UL;
++
++ if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
++ 0, GFP_KERNEL))
++ goto err;
++
++ ctx->flags = p->flags;
++ init_waitqueue_head(&ctx->sqo_sq_wait);
++ INIT_LIST_HEAD(&ctx->sqd_list);
++ INIT_LIST_HEAD(&ctx->cq_overflow_list);
++ INIT_LIST_HEAD(&ctx->io_buffers_cache);
++ INIT_LIST_HEAD(&ctx->apoll_cache);
++ init_completion(&ctx->ref_comp);
++ xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
++ mutex_init(&ctx->uring_lock);
++ init_waitqueue_head(&ctx->cq_wait);
++ spin_lock_init(&ctx->completion_lock);
++ spin_lock_init(&ctx->timeout_lock);
++ INIT_WQ_LIST(&ctx->iopoll_list);
++ INIT_LIST_HEAD(&ctx->io_buffers_pages);
++ INIT_LIST_HEAD(&ctx->io_buffers_comp);
++ INIT_LIST_HEAD(&ctx->defer_list);
++ INIT_LIST_HEAD(&ctx->timeout_list);
++ INIT_LIST_HEAD(&ctx->ltimeout_list);
++ spin_lock_init(&ctx->rsrc_ref_lock);
++ INIT_LIST_HEAD(&ctx->rsrc_ref_list);
++ INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
++ init_llist_head(&ctx->rsrc_put_llist);
++ INIT_LIST_HEAD(&ctx->tctx_list);
++ ctx->submit_state.free_list.next = NULL;
++ INIT_WQ_LIST(&ctx->locked_free_list);
++ INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
++ INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
++ return ctx;
++err:
++ kfree(ctx->dummy_ubuf);
++ kfree(ctx->cancel_hash);
++ kfree(ctx->io_bl);
++ xa_destroy(&ctx->io_bl_xa);
++ kfree(ctx);
++ return NULL;
++}
++
++static void io_account_cq_overflow(struct io_ring_ctx *ctx)
++{
++ struct io_rings *r = ctx->rings;
++
++ WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
++ ctx->cq_extra--;
++}
++
++static bool req_need_defer(struct io_kiocb *req, u32 seq)
++{
++ if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
++ struct io_ring_ctx *ctx = req->ctx;
++
++ return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
++ }
++
++ return false;
++}
++
++static inline bool io_req_ffs_set(struct io_kiocb *req)
++{
++ return req->flags & REQ_F_FIXED_FILE;
++}
++
++static inline void io_req_track_inflight(struct io_kiocb *req)
++{
++ if (!(req->flags & REQ_F_INFLIGHT)) {
++ req->flags |= REQ_F_INFLIGHT;
++ atomic_inc(&req->task->io_uring->inflight_tracked);
++ }
++}
++
++static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
++{
++ if (WARN_ON_ONCE(!req->link))
++ return NULL;
++
++ req->flags &= ~REQ_F_ARM_LTIMEOUT;
++ req->flags |= REQ_F_LINK_TIMEOUT;
++
++ /* linked timeouts should have two refs once prep'ed */
++ io_req_set_refcount(req);
++ __io_req_set_refcount(req->link, 2);
++ return req->link;
++}
++
++static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
++{
++ if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
++ return NULL;
++ return __io_prep_linked_timeout(req);
++}
++
++static noinline void __io_arm_ltimeout(struct io_kiocb *req)
++{
++ io_queue_linked_timeout(__io_prep_linked_timeout(req));
++}
++
++static inline void io_arm_ltimeout(struct io_kiocb *req)
++{
++ if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
++ __io_arm_ltimeout(req);
++}
++
++static void io_prep_async_work(struct io_kiocb *req)
++{
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (!(req->flags & REQ_F_CREDS)) {
++ req->flags |= REQ_F_CREDS;
++ req->creds = get_current_cred();
++ }
++
++ req->work.list.next = NULL;
++ req->work.flags = 0;
++ req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
++ if (req->flags & REQ_F_FORCE_ASYNC)
++ req->work.flags |= IO_WQ_WORK_CONCURRENT;
++
++ if (req->flags & REQ_F_ISREG) {
++ if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
++ io_wq_hash_work(&req->work, file_inode(req->file));
++ } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
++ if (def->unbound_nonreg_file)
++ req->work.flags |= IO_WQ_WORK_UNBOUND;
++ }
++}
++
++static void io_prep_async_link(struct io_kiocb *req)
++{
++ struct io_kiocb *cur;
++
++ if (req->flags & REQ_F_LINK_TIMEOUT) {
++ struct io_ring_ctx *ctx = req->ctx;
++
++ spin_lock_irq(&ctx->timeout_lock);
++ io_for_each_link(cur, req)
++ io_prep_async_work(cur);
++ spin_unlock_irq(&ctx->timeout_lock);
++ } else {
++ io_for_each_link(cur, req)
++ io_prep_async_work(cur);
++ }
++}
++
++static inline void io_req_add_compl_list(struct io_kiocb *req)
++{
++ struct io_submit_state *state = &req->ctx->submit_state;
++
++ if (!(req->flags & REQ_F_CQE_SKIP))
++ state->flush_cqes = true;
++ wq_list_add_tail(&req->comp_list, &state->compl_reqs);
++}
++
++static void io_queue_iowq(struct io_kiocb *req, bool *dont_use)
++{
++ struct io_kiocb *link = io_prep_linked_timeout(req);
++ struct io_uring_task *tctx = req->task->io_uring;
++
++ BUG_ON(!tctx);
++ BUG_ON(!tctx->io_wq);
++
++ /* init ->work of the whole link before punting */
++ io_prep_async_link(req);
++
++ /*
++ * Not expected to happen, but if we do have a bug where this _can_
++ * happen, catch it here and ensure the request is marked as
++ * canceled. That will make io-wq go through the usual work cancel
++ * procedure rather than attempt to run this request (or create a new
++ * worker for it).
++ */
++ if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
++ req->work.flags |= IO_WQ_WORK_CANCEL;
++
++ trace_io_uring_queue_async_work(req->ctx, req, req->cqe.user_data,
++ req->opcode, req->flags, &req->work,
++ io_wq_is_hashed(&req->work));
++ io_wq_enqueue(tctx->io_wq, &req->work);
++ if (link)
++ io_queue_linked_timeout(link);
++}
++
++static void io_kill_timeout(struct io_kiocb *req, int status)
++ __must_hold(&req->ctx->completion_lock)
++ __must_hold(&req->ctx->timeout_lock)
++{
++ struct io_timeout_data *io = req->async_data;
++
++ if (hrtimer_try_to_cancel(&io->timer) != -1) {
++ if (status)
++ req_set_fail(req);
++ atomic_set(&req->ctx->cq_timeouts,
++ atomic_read(&req->ctx->cq_timeouts) + 1);
++ list_del_init(&req->timeout.list);
++ io_req_tw_post_queue(req, status, 0);
++ }
++}
++
++static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
++{
++ while (!list_empty(&ctx->defer_list)) {
++ struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
++ struct io_defer_entry, list);
++
++ if (req_need_defer(de->req, de->seq))
++ break;
++ list_del_init(&de->list);
++ io_req_task_queue(de->req);
++ kfree(de);
++ }
++}
++
++static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
++ __must_hold(&ctx->completion_lock)
++{
++ u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++ struct io_kiocb *req, *tmp;
++
++ spin_lock_irq(&ctx->timeout_lock);
++ list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
++ u32 events_needed, events_got;
++
++ if (io_is_timeout_noseq(req))
++ break;
++
++ /*
++ * Since seq can easily wrap around over time, subtract
++ * the last seq at which timeouts were flushed before comparing.
++ * Assuming not more than 2^31-1 events have happened since,
++ * these subtractions won't have wrapped, so we can check if
++ * target is in [last_seq, current_seq] by comparing the two.
++ */
++ events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
++ events_got = seq - ctx->cq_last_tm_flush;
++ if (events_got < events_needed)
++ break;
++
++ io_kill_timeout(req, 0);
++ }
++ ctx->cq_last_tm_flush = seq;
++ spin_unlock_irq(&ctx->timeout_lock);
++}
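++
++/*
++ * Wrap-around example for the comparison above (u32 arithmetic,
++ * illustrative numbers): with cq_last_tm_flush = 0xfffffff0, a timeout
++ * armed at target_seq = 0x10 and a current seq of 0x20 gives
++ * events_needed = 0x20 and events_got = 0x30, so the timeout fires
++ * even though target_seq numerically precedes the last flush point.
++ */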
++
++static inline void io_commit_cqring(struct io_ring_ctx *ctx)
++{
++ /* order cqe stores with ring update */
++ smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
++}
++
++static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
++{
++ if (ctx->off_timeout_used || ctx->drain_active) {
++ spin_lock(&ctx->completion_lock);
++ if (ctx->off_timeout_used)
++ io_flush_timeouts(ctx);
++ if (ctx->drain_active)
++ io_queue_deferred(ctx);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ }
++ if (ctx->has_evfd)
++ io_eventfd_signal(ctx);
++}
++
++static inline bool io_sqring_full(struct io_ring_ctx *ctx)
++{
++ struct io_rings *r = ctx->rings;
++
++ return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
++}
++
++static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
++{
++ return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
++}
++
++/*
++ * writes to the cq entry need to come after reading head; the
++ * control dependency is enough as we're using WRITE_ONCE to
++ * fill the cq entry
++ */
++static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
++{
++ struct io_rings *rings = ctx->rings;
++ unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
++ unsigned int shift = 0;
++ unsigned int free, queued, len;
++
++ if (ctx->flags & IORING_SETUP_CQE32)
++ shift = 1;
++
++ /* userspace may cheat by modifying the tail; be safe and take the min */
++ queued = min(__io_cqring_events(ctx), ctx->cq_entries);
++ free = ctx->cq_entries - queued;
++ /* we need a contiguous range, limit based on the current array offset */
++ len = min(free, ctx->cq_entries - off);
++ if (!len)
++ return NULL;
++
++ ctx->cached_cq_tail++;
++ ctx->cqe_cached = &rings->cqes[off];
++ ctx->cqe_sentinel = ctx->cqe_cached + len;
++ ctx->cqe_cached++;
++ return &rings->cqes[off << shift];
++}
++
++static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
++{
++ if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
++ struct io_uring_cqe *cqe = ctx->cqe_cached;
++
++ if (ctx->flags & IORING_SETUP_CQE32) {
++ unsigned int off = ctx->cqe_cached - ctx->rings->cqes;
++
++ cqe += off;
++ }
++
++ ctx->cached_cq_tail++;
++ ctx->cqe_cached++;
++ return cqe;
++ }
++
++ return __io_get_cqe(ctx);
++}
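++
++/*
++ * Note on the CQE32 handling above: with IORING_SETUP_CQE32 every CQE
++ * occupies two slots of cqes[] (which is sized for 16-byte entries),
++ * so logical index 'off' maps to array index 'off << 1'; e.g. the
++ * third big CQE (off = 2) starts at cqes[4].
++ */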
++
++static void io_eventfd_signal(struct io_ring_ctx *ctx)
++{
++ struct io_ev_fd *ev_fd;
++
++ rcu_read_lock();
++ /*
++ * rcu_dereference ctx->io_ev_fd once and use it both for checking
++ * and eventfd_signal
++ */
++ ev_fd = rcu_dereference(ctx->io_ev_fd);
++
++ /*
++ * Check again if ev_fd exists in case an io_eventfd_unregister call
++ * completed between the NULL check of ctx->io_ev_fd at the start of
++ * the function and rcu_read_lock.
++ */
++ if (unlikely(!ev_fd))
++ goto out;
++ if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
++ goto out;
++
++ if (!ev_fd->eventfd_async || io_wq_current_is_worker())
++ eventfd_signal(ev_fd->cq_ev_fd, 1);
++out:
++ rcu_read_unlock();
++}
++
++static inline void io_cqring_wake(struct io_ring_ctx *ctx)
++{
++ /*
++ * wake_up_all() may seem excessive, but io_wake_function() and
++ * io_should_wake() handle the termination of the loop and only
++ * wake as many waiters as we need to.
++ */
++ if (wq_has_sleeper(&ctx->cq_wait))
++ wake_up_all(&ctx->cq_wait);
++}
++
++/*
++ * This should only get called when at least one event has been posted.
++ * Some applications rely on the eventfd notification count only changing
++ * IFF a new CQE has been added to the CQ ring. There's no dependency on a
++ * 1:1 relationship between how many times this function is called (and
++ * hence the eventfd count) and number of CQEs posted to the CQ ring.
++ */
++static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
++{
++ if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
++ ctx->has_evfd))
++ __io_commit_cqring_flush(ctx);
++
++ io_cqring_wake(ctx);
++}
++
++static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
++{
++ if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
++ ctx->has_evfd))
++ __io_commit_cqring_flush(ctx);
++
++ if (ctx->flags & IORING_SETUP_SQPOLL)
++ io_cqring_wake(ctx);
++}
++
++/* Returns true if there are no backlogged entries after the flush */
++static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
++{
++ bool all_flushed, posted;
++ size_t cqe_size = sizeof(struct io_uring_cqe);
++
++ if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
++ return false;
++
++ if (ctx->flags & IORING_SETUP_CQE32)
++ cqe_size <<= 1;
++
++ posted = false;
++ spin_lock(&ctx->completion_lock);
++ while (!list_empty(&ctx->cq_overflow_list)) {
++ struct io_uring_cqe *cqe = io_get_cqe(ctx);
++ struct io_overflow_cqe *ocqe;
++
++ if (!cqe && !force)
++ break;
++ ocqe = list_first_entry(&ctx->cq_overflow_list,
++ struct io_overflow_cqe, list);
++ if (cqe)
++ memcpy(cqe, &ocqe->cqe, cqe_size);
++ else
++ io_account_cq_overflow(ctx);
++
++ posted = true;
++ list_del(&ocqe->list);
++ kfree(ocqe);
++ }
++
++ all_flushed = list_empty(&ctx->cq_overflow_list);
++ if (all_flushed) {
++ clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
++ atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
++ }
++
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ if (posted)
++ io_cqring_ev_posted(ctx);
++ return all_flushed;
++}
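++
++/*
++ * Note: with force == true the loop above consumes overflow entries even
++ * when no free CQE slot exists; those CQEs are then only recorded via
++ * io_account_cq_overflow() and are lost to userspace.
++ */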
++
++static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
++{
++ bool ret = true;
++
++ if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
++ /* iopoll syncs against uring_lock, not completion_lock */
++ if (ctx->flags & IORING_SETUP_IOPOLL)
++ mutex_lock(&ctx->uring_lock);
++ ret = __io_cqring_overflow_flush(ctx, false);
++ if (ctx->flags & IORING_SETUP_IOPOLL)
++ mutex_unlock(&ctx->uring_lock);
++ }
++
++ return ret;
++}
++
++static void __io_put_task(struct task_struct *task, int nr)
++{
++ struct io_uring_task *tctx = task->io_uring;
++
++ percpu_counter_sub(&tctx->inflight, nr);
++ if (unlikely(atomic_read(&tctx->in_idle)))
++ wake_up(&tctx->wait);
++ put_task_struct_many(task, nr);
++}
++
++/* must be called somewhat shortly after putting a request */
++static inline void io_put_task(struct task_struct *task, int nr)
++{
++ if (likely(task == current))
++ task->io_uring->cached_refs += nr;
++ else
++ __io_put_task(task, nr);
++}
++
++static void io_task_refs_refill(struct io_uring_task *tctx)
++{
++ unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;
++
++ percpu_counter_add(&tctx->inflight, refill);
++ refcount_add(refill, &current->usage);
++ tctx->cached_refs += refill;
++}
++
++static inline void io_get_task_refs(int nr)
++{
++ struct io_uring_task *tctx = current->io_uring;
++
++ tctx->cached_refs -= nr;
++ if (unlikely(tctx->cached_refs < 0))
++ io_task_refs_refill(tctx);
++}
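++
++/*
++ * Worked example for the refill path above, assuming purely for
++ * illustration that IO_TCTX_REFS_CACHE_NR is 128: if cached_refs drops
++ * to -3, refill == 3 + 128 == 131, so inflight grows by 131, 131 task
++ * refs are taken, and cached_refs lands back at 128.
++ */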
++
++static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
++{
++ struct io_uring_task *tctx = task->io_uring;
++ unsigned int refs = tctx->cached_refs;
++
++ if (refs) {
++ tctx->cached_refs = 0;
++ percpu_counter_sub(&tctx->inflight, refs);
++ put_task_struct_many(task, refs);
++ }
++}
++
++static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
++ s32 res, u32 cflags, u64 extra1,
++ u64 extra2)
++{
++ struct io_overflow_cqe *ocqe;
++ size_t ocq_size = sizeof(struct io_overflow_cqe);
++ bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
++
++ if (is_cqe32)
++ ocq_size += sizeof(struct io_uring_cqe);
++
++ ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
++ trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
++ if (!ocqe) {
++ /*
++ * If we're in ring overflow flush mode, or in task cancel mode,
++ * or cannot allocate an overflow entry, then we need to drop it
++ * on the floor.
++ */
++ io_account_cq_overflow(ctx);
++ set_bit(IO_CHECK_CQ_DROPPED_BIT, &ctx->check_cq);
++ return false;
++ }
++ if (list_empty(&ctx->cq_overflow_list)) {
++ set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
++ atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
++ }
++ ocqe->cqe.user_data = user_data;
++ ocqe->cqe.res = res;
++ ocqe->cqe.flags = cflags;
++ if (is_cqe32) {
++ ocqe->cqe.big_cqe[0] = extra1;
++ ocqe->cqe.big_cqe[1] = extra2;
++ }
++ list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
++ return true;
++}
++
++static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
++ struct io_kiocb *req)
++{
++ struct io_uring_cqe *cqe;
++
++ if (!(ctx->flags & IORING_SETUP_CQE32)) {
++ trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
++ req->cqe.res, req->cqe.flags, 0, 0);
++
++ /*
++ * If we can't get a cq entry, userspace overflowed the
++ * submission (by quite a lot). Increment the overflow count in
++ * the ring.
++ */
++ cqe = io_get_cqe(ctx);
++ if (likely(cqe)) {
++ memcpy(cqe, &req->cqe, sizeof(*cqe));
++ return true;
++ }
++
++ return io_cqring_event_overflow(ctx, req->cqe.user_data,
++ req->cqe.res, req->cqe.flags,
++ 0, 0);
++ } else {
++ u64 extra1 = 0, extra2 = 0;
++
++ if (req->flags & REQ_F_CQE32_INIT) {
++ extra1 = req->extra1;
++ extra2 = req->extra2;
++ }
++
++ trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
++ req->cqe.res, req->cqe.flags, extra1, extra2);
++
++ /*
++ * If we can't get a cq entry, userspace overflowed the
++ * submission (by quite a lot). Increment the overflow count in
++ * the ring.
++ */
++ cqe = io_get_cqe(ctx);
++ if (likely(cqe)) {
++ memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
++ WRITE_ONCE(cqe->big_cqe[0], extra1);
++ WRITE_ONCE(cqe->big_cqe[1], extra2);
++ return true;
++ }
++
++ return io_cqring_event_overflow(ctx, req->cqe.user_data,
++ req->cqe.res, req->cqe.flags,
++ extra1, extra2);
++ }
++}
++
++static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
++ s32 res, u32 cflags)
++{
++ struct io_uring_cqe *cqe;
++
++ ctx->cq_extra++;
++ trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
++
++ /*
++ * If we can't get a cq entry, userspace overflowed the
++ * submission (by quite a lot). Increment the overflow count in
++ * the ring.
++ */
++ cqe = io_get_cqe(ctx);
++ if (likely(cqe)) {
++ WRITE_ONCE(cqe->user_data, user_data);
++ WRITE_ONCE(cqe->res, res);
++ WRITE_ONCE(cqe->flags, cflags);
++
++ if (ctx->flags & IORING_SETUP_CQE32) {
++ WRITE_ONCE(cqe->big_cqe[0], 0);
++ WRITE_ONCE(cqe->big_cqe[1], 0);
++ }
++ return true;
++ }
++ return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
++}
++
++static void __io_req_complete_put(struct io_kiocb *req)
++{
++ /*
++ * If we're the last reference to this request, add to our locked
++ * free_list cache.
++ */
++ if (req_ref_put_and_test(req)) {
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (req->flags & IO_REQ_LINK_FLAGS) {
++ if (req->flags & IO_DISARM_MASK)
++ io_disarm_next(req);
++ if (req->link) {
++ io_req_task_queue(req->link);
++ req->link = NULL;
++ }
++ }
++ io_req_put_rsrc(req);
++ /*
++ * Selected buffer deallocation in io_clean_op() assumes that
++ * we don't hold ->completion_lock. Clean them here to avoid
++ * deadlocks.
++ */
++ io_put_kbuf_comp(req);
++ io_dismantle_req(req);
++ io_put_task(req->task, 1);
++ wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
++ ctx->locked_free_nr++;
++ }
++}
++
++static void __io_req_complete_post(struct io_kiocb *req, s32 res,
++ u32 cflags)
++{
++ if (!(req->flags & REQ_F_CQE_SKIP)) {
++ req->cqe.res = res;
++ req->cqe.flags = cflags;
++ __io_fill_cqe_req(req->ctx, req);
++ }
++ __io_req_complete_put(req);
++}
++
++static void io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ spin_lock(&ctx->completion_lock);
++ __io_req_complete_post(req, res, cflags);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ io_cqring_ev_posted(ctx);
++}
++
++static inline void io_req_complete_state(struct io_kiocb *req, s32 res,
++ u32 cflags)
++{
++ req->cqe.res = res;
++ req->cqe.flags = cflags;
++ req->flags |= REQ_F_COMPLETE_INLINE;
++}
++
++static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
++ s32 res, u32 cflags)
++{
++ if (issue_flags & IO_URING_F_COMPLETE_DEFER)
++ io_req_complete_state(req, res, cflags);
++ else
++ io_req_complete_post(req, res, cflags);
++}
++
++static inline void io_req_complete(struct io_kiocb *req, s32 res)
++{
++ if (res < 0)
++ req_set_fail(req);
++ __io_req_complete(req, 0, res, 0);
++}
++
++static void io_req_complete_failed(struct io_kiocb *req, s32 res)
++{
++ req_set_fail(req);
++ io_req_complete_post(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
++}
++
++/*
++ * Don't initialise the fields below on every allocation, but do that in
++ * advance and keep them valid across allocations.
++ */
++static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
++{
++ req->ctx = ctx;
++ req->link = NULL;
++ req->async_data = NULL;
++ /* not necessary, but safer to zero */
++ req->cqe.res = 0;
++}
++
++static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
++ struct io_submit_state *state)
++{
++ spin_lock(&ctx->completion_lock);
++ wq_list_splice(&ctx->locked_free_list, &state->free_list);
++ ctx->locked_free_nr = 0;
++ spin_unlock(&ctx->completion_lock);
++}
++
++static inline bool io_req_cache_empty(struct io_ring_ctx *ctx)
++{
++ return !ctx->submit_state.free_list.next;
++}
++
++/*
++ * A request might get retired back into the request caches even before opcode
++ * handlers and io_issue_sqe() are done with it, e.g. inline completion path.
++ * Because of that, io_alloc_req() should be called only under ->uring_lock
++ * and with extra caution to not get a request that is still worked on.
++ */
++static __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
++ __must_hold(&ctx->uring_lock)
++{
++ gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
++ void *reqs[IO_REQ_ALLOC_BATCH];
++ int ret, i;
++
++ /*
++ * If we have more than a batch's worth of requests in our IRQ side
++ * locked cache, grab the lock and move them over to our submission
++ * side cache.
++ */
++ if (data_race(ctx->locked_free_nr) > IO_COMPL_BATCH) {
++ io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
++ if (!io_req_cache_empty(ctx))
++ return true;
++ }
++
++ ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);
++
++ /*
++ * Bulk alloc is all-or-nothing. If we fail to get a batch,
++ * retry single alloc to be on the safe side.
++ */
++ if (unlikely(ret <= 0)) {
++ reqs[0] = kmem_cache_alloc(req_cachep, gfp);
++ if (!reqs[0])
++ return false;
++ ret = 1;
++ }
++
++ percpu_ref_get_many(&ctx->refs, ret);
++ for (i = 0; i < ret; i++) {
++ struct io_kiocb *req = reqs[i];
++
++ io_preinit_req(req, ctx);
++ io_req_add_to_cache(req, ctx);
++ }
++ return true;
++}
++
++static inline bool io_alloc_req_refill(struct io_ring_ctx *ctx)
++{
++ if (unlikely(io_req_cache_empty(ctx)))
++ return __io_alloc_req_refill(ctx);
++ return true;
++}
++
++static inline struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
++{
++ struct io_wq_work_node *node;
++
++ node = wq_stack_extract(&ctx->submit_state.free_list);
++ return container_of(node, struct io_kiocb, comp_list);
++}
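++
++/*
++ * io_alloc_req() assumes the cache is non-empty, so callers pair it
++ * with a refill, roughly (illustrative):
++ *
++ *	if (unlikely(!io_alloc_req_refill(ctx)))
++ *		break;
++ *	req = io_alloc_req(ctx);
++ */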
++
++static inline void io_put_file(struct file *file)
++{
++ if (file)
++ fput(file);
++}
++
++static inline void io_dismantle_req(struct io_kiocb *req)
++{
++ unsigned int flags = req->flags;
++
++ if (unlikely(flags & IO_REQ_CLEAN_FLAGS))
++ io_clean_op(req);
++ if (!(flags & REQ_F_FIXED_FILE))
++ io_put_file(req->file);
++}
++
++static __cold void io_free_req(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ io_req_put_rsrc(req);
++ io_dismantle_req(req);
++ io_put_task(req->task, 1);
++
++ spin_lock(&ctx->completion_lock);
++ wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
++ ctx->locked_free_nr++;
++ spin_unlock(&ctx->completion_lock);
++}
++
++static inline void io_remove_next_linked(struct io_kiocb *req)
++{
++ struct io_kiocb *nxt = req->link;
++
++ req->link = nxt->link;
++ nxt->link = NULL;
++}
++
++static struct io_kiocb *io_disarm_linked_timeout(struct io_kiocb *req)
++ __must_hold(&req->ctx->completion_lock)
++ __must_hold(&req->ctx->timeout_lock)
++{
++ struct io_kiocb *link = req->link;
++
++ if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
++ struct io_timeout_data *io = link->async_data;
++
++ io_remove_next_linked(req);
++ link->timeout.head = NULL;
++ if (hrtimer_try_to_cancel(&io->timer) != -1) {
++ list_del(&link->timeout.list);
++ return link;
++ }
++ }
++ return NULL;
++}
++
++static void io_fail_links(struct io_kiocb *req)
++ __must_hold(&req->ctx->completion_lock)
++{
++ struct io_kiocb *nxt, *link = req->link;
++ bool ignore_cqes = req->flags & REQ_F_SKIP_LINK_CQES;
++
++ req->link = NULL;
++ while (link) {
++ long res = -ECANCELED;
++
++ if (link->flags & REQ_F_FAIL)
++ res = link->cqe.res;
++
++ nxt = link->link;
++ link->link = NULL;
++
++ trace_io_uring_fail_link(req->ctx, req, req->cqe.user_data,
++ req->opcode, link);
++
++ if (ignore_cqes)
++ link->flags |= REQ_F_CQE_SKIP;
++ else
++ link->flags &= ~REQ_F_CQE_SKIP;
++ __io_req_complete_post(link, res, 0);
++ link = nxt;
++ }
++}
++
++static bool io_disarm_next(struct io_kiocb *req)
++ __must_hold(&req->ctx->completion_lock)
++{
++ struct io_kiocb *link = NULL;
++ bool posted = false;
++
++ if (req->flags & REQ_F_ARM_LTIMEOUT) {
++ link = req->link;
++ req->flags &= ~REQ_F_ARM_LTIMEOUT;
++ if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
++ io_remove_next_linked(req);
++ io_req_tw_post_queue(link, -ECANCELED, 0);
++ posted = true;
++ }
++ } else if (req->flags & REQ_F_LINK_TIMEOUT) {
++ struct io_ring_ctx *ctx = req->ctx;
++
++ spin_lock_irq(&ctx->timeout_lock);
++ link = io_disarm_linked_timeout(req);
++ spin_unlock_irq(&ctx->timeout_lock);
++ if (link) {
++ posted = true;
++ io_req_tw_post_queue(link, -ECANCELED, 0);
++ }
++ }
++ if (unlikely((req->flags & REQ_F_FAIL) &&
++ !(req->flags & REQ_F_HARDLINK))) {
++ posted |= (req->link != NULL);
++ io_fail_links(req);
++ }
++ return posted;
++}
++
++static void __io_req_find_next_prep(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ bool posted;
++
++ spin_lock(&ctx->completion_lock);
++ posted = io_disarm_next(req);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ if (posted)
++ io_cqring_ev_posted(ctx);
++}
++
++static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
++{
++ struct io_kiocb *nxt;
++
++ /*
++ * If LINK is set, we have dependent requests in this chain. If we
++ * didn't fail this request, queue the first one up, moving any other
++ * dependencies to the next request. In case of failure, fail the rest
++ * of the chain.
++ */
++ if (unlikely(req->flags & IO_DISARM_MASK))
++ __io_req_find_next_prep(req);
++ nxt = req->link;
++ req->link = NULL;
++ return nxt;
++}
++
++static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
++{
++ if (!ctx)
++ return;
++ if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
++ atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
++ if (*locked) {
++ io_submit_flush_completions(ctx);
++ mutex_unlock(&ctx->uring_lock);
++ *locked = false;
++ }
++ percpu_ref_put(&ctx->refs);
++}
++
++static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
++{
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ io_cqring_ev_posted(ctx);
++}
++
++static void handle_prev_tw_list(struct io_wq_work_node *node,
++ struct io_ring_ctx **ctx, bool *uring_locked)
++{
++ if (*ctx && !*uring_locked)
++ spin_lock(&(*ctx)->completion_lock);
++
++ do {
++ struct io_wq_work_node *next = node->next;
++ struct io_kiocb *req = container_of(node, struct io_kiocb,
++ io_task_work.node);
++
++ prefetch(container_of(next, struct io_kiocb, io_task_work.node));
++
++ if (req->ctx != *ctx) {
++ if (unlikely(!*uring_locked && *ctx))
++ ctx_commit_and_unlock(*ctx);
++
++ ctx_flush_and_put(*ctx, uring_locked);
++ *ctx = req->ctx;
++ /* if not contended, grab and improve batching */
++ *uring_locked = mutex_trylock(&(*ctx)->uring_lock);
++ percpu_ref_get(&(*ctx)->refs);
++ if (unlikely(!*uring_locked))
++ spin_lock(&(*ctx)->completion_lock);
++ }
++ if (likely(*uring_locked))
++ req->io_task_work.func(req, uring_locked);
++ else
++ __io_req_complete_post(req, req->cqe.res,
++ io_put_kbuf_comp(req));
++ node = next;
++ } while (node);
++
++ if (unlikely(!*uring_locked))
++ ctx_commit_and_unlock(*ctx);
++}
++
++static void handle_tw_list(struct io_wq_work_node *node,
++ struct io_ring_ctx **ctx, bool *locked)
++{
++ do {
++ struct io_wq_work_node *next = node->next;
++ struct io_kiocb *req = container_of(node, struct io_kiocb,
++ io_task_work.node);
++
++ prefetch(container_of(next, struct io_kiocb, io_task_work.node));
++
++ if (req->ctx != *ctx) {
++ ctx_flush_and_put(*ctx, locked);
++ *ctx = req->ctx;
++ /* if not contended, grab and improve batching */
++ *locked = mutex_trylock(&(*ctx)->uring_lock);
++ percpu_ref_get(&(*ctx)->refs);
++ }
++ req->io_task_work.func(req, locked);
++ node = next;
++ } while (node);
++}
++
++static void tctx_task_work(struct callback_head *cb)
++{
++ bool uring_locked = false;
++ struct io_ring_ctx *ctx = NULL;
++ struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
++ task_work);
++
++ while (1) {
++ struct io_wq_work_node *node1, *node2;
++
++ spin_lock_irq(&tctx->task_lock);
++ node1 = tctx->prio_task_list.first;
++ node2 = tctx->task_list.first;
++ INIT_WQ_LIST(&tctx->task_list);
++ INIT_WQ_LIST(&tctx->prio_task_list);
++ if (!node2 && !node1)
++ tctx->task_running = false;
++ spin_unlock_irq(&tctx->task_lock);
++ if (!node2 && !node1)
++ break;
++
++ if (node1)
++ handle_prev_tw_list(node1, &ctx, &uring_locked);
++ if (node2)
++ handle_tw_list(node2, &ctx, &uring_locked);
++ cond_resched();
++
++ if (data_race(!tctx->task_list.first) &&
++ data_race(!tctx->prio_task_list.first) && uring_locked)
++ io_submit_flush_completions(ctx);
++ }
++
++ ctx_flush_and_put(ctx, &uring_locked);
++
++ /* relaxed read is enough as only the task itself sets ->in_idle */
++ if (unlikely(atomic_read(&tctx->in_idle)))
++ io_uring_drop_tctx_refs(current);
++}
++
++static void __io_req_task_work_add(struct io_kiocb *req,
++ struct io_uring_task *tctx,
++ struct io_wq_work_list *list)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_wq_work_node *node;
++ unsigned long flags;
++ bool running;
++
++ spin_lock_irqsave(&tctx->task_lock, flags);
++ wq_list_add_tail(&req->io_task_work.node, list);
++ running = tctx->task_running;
++ if (!running)
++ tctx->task_running = true;
++ spin_unlock_irqrestore(&tctx->task_lock, flags);
++
++ /* task_work already pending, we're done */
++ if (running)
++ return;
++
++ if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
++ atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
++
++ if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
++ return;
++
++ spin_lock_irqsave(&tctx->task_lock, flags);
++ tctx->task_running = false;
++ node = wq_list_merge(&tctx->prio_task_list, &tctx->task_list);
++ spin_unlock_irqrestore(&tctx->task_lock, flags);
++
++ while (node) {
++ req = container_of(node, struct io_kiocb, io_task_work.node);
++ node = node->next;
++ if (llist_add(&req->io_task_work.fallback_node,
++ &req->ctx->fallback_llist))
++ schedule_delayed_work(&req->ctx->fallback_work, 1);
++ }
++}
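++
++/*
++ * If task_work_add() fails above, the target task is exiting; the
++ * pending requests are rerouted to the per-ring fallback workqueue so
++ * they are still completed rather than leaked.
++ */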
++
++static void io_req_task_work_add(struct io_kiocb *req)
++{
++ struct io_uring_task *tctx = req->task->io_uring;
++
++ __io_req_task_work_add(req, tctx, &tctx->task_list);
++}
++
++static void io_req_task_prio_work_add(struct io_kiocb *req)
++{
++ struct io_uring_task *tctx = req->task->io_uring;
++
++ if (req->ctx->flags & IORING_SETUP_SQPOLL)
++ __io_req_task_work_add(req, tctx, &tctx->prio_task_list);
++ else
++ __io_req_task_work_add(req, tctx, &tctx->task_list);
++}
++
++static void io_req_tw_post(struct io_kiocb *req, bool *locked)
++{
++ io_req_complete_post(req, req->cqe.res, req->cqe.flags);
++}
++
++static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
++{
++ req->cqe.res = res;
++ req->cqe.flags = cflags;
++ req->io_task_work.func = io_req_tw_post;
++ io_req_task_work_add(req);
++}
++
++static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
++{
++ /* not needed for normal modes, but SQPOLL depends on it */
++ io_tw_lock(req->ctx, locked);
++ io_req_complete_failed(req, req->cqe.res);
++}
++
++static void io_req_task_submit(struct io_kiocb *req, bool *locked)
++{
++ io_tw_lock(req->ctx, locked);
++ /* req->task == current here, checking PF_EXITING is safe */
++ if (likely(!(req->task->flags & PF_EXITING)))
++ io_queue_sqe(req);
++ else
++ io_req_complete_failed(req, -EFAULT);
++}
++
++static void io_req_task_queue_fail(struct io_kiocb *req, int ret)
++{
++ req->cqe.res = ret;
++ req->io_task_work.func = io_req_task_cancel;
++ io_req_task_work_add(req);
++}
++
++static void io_req_task_queue(struct io_kiocb *req)
++{
++ req->io_task_work.func = io_req_task_submit;
++ io_req_task_work_add(req);
++}
++
++static void io_req_task_queue_reissue(struct io_kiocb *req)
++{
++ req->io_task_work.func = io_queue_iowq;
++ io_req_task_work_add(req);
++}
++
++static void io_queue_next(struct io_kiocb *req)
++{
++ struct io_kiocb *nxt = io_req_find_next(req);
++
++ if (nxt)
++ io_req_task_queue(nxt);
++}
++
++static void io_free_batch_list(struct io_ring_ctx *ctx,
++ struct io_wq_work_node *node)
++ __must_hold(&ctx->uring_lock)
++{
++ struct task_struct *task = NULL;
++ int task_refs = 0;
++
++ do {
++ struct io_kiocb *req = container_of(node, struct io_kiocb,
++ comp_list);
++
++ if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
++ if (req->flags & REQ_F_REFCOUNT) {
++ node = req->comp_list.next;
++ if (!req_ref_put_and_test(req))
++ continue;
++ }
++ if ((req->flags & REQ_F_POLLED) && req->apoll) {
++ struct async_poll *apoll = req->apoll;
++
++ if (apoll->double_poll)
++ kfree(apoll->double_poll);
++ list_add(&apoll->poll.wait.entry,
++ &ctx->apoll_cache);
++ req->flags &= ~REQ_F_POLLED;
++ }
++ if (req->flags & IO_REQ_LINK_FLAGS)
++ io_queue_next(req);
++ if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
++ io_clean_op(req);
++ }
++ if (!(req->flags & REQ_F_FIXED_FILE))
++ io_put_file(req->file);
++
++ io_req_put_rsrc_locked(req, ctx);
++
++ if (req->task != task) {
++ if (task)
++ io_put_task(task, task_refs);
++ task = req->task;
++ task_refs = 0;
++ }
++ task_refs++;
++ node = req->comp_list.next;
++ io_req_add_to_cache(req, ctx);
++ } while (node);
++
++ if (task)
++ io_put_task(task, task_refs);
++}
++
++static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
++ __must_hold(&ctx->uring_lock)
++{
++ struct io_wq_work_node *node, *prev;
++ struct io_submit_state *state = &ctx->submit_state;
++
++ if (state->flush_cqes) {
++ spin_lock(&ctx->completion_lock);
++ wq_list_for_each(node, prev, &state->compl_reqs) {
++ struct io_kiocb *req = container_of(node, struct io_kiocb,
++ comp_list);
++
++ if (!(req->flags & REQ_F_CQE_SKIP))
++ __io_fill_cqe_req(ctx, req);
++ }
++
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ io_cqring_ev_posted(ctx);
++ state->flush_cqes = false;
++ }
++
++ io_free_batch_list(ctx, state->compl_reqs.first);
++ INIT_WQ_LIST(&state->compl_reqs);
++}
++
++/*
++ * Drop reference to request, return next in chain (if there is one) if this
++ * was the last reference to this request.
++ */
++static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
++{
++ struct io_kiocb *nxt = NULL;
++
++ if (req_ref_put_and_test(req)) {
++ if (unlikely(req->flags & IO_REQ_LINK_FLAGS))
++ nxt = io_req_find_next(req);
++ io_free_req(req);
++ }
++ return nxt;
++}
++
++static inline void io_put_req(struct io_kiocb *req)
++{
++ if (req_ref_put_and_test(req)) {
++ io_queue_next(req);
++ io_free_req(req);
++ }
++}
++
++static unsigned io_cqring_events(struct io_ring_ctx *ctx)
++{
++ /* See comment at the top of this file */
++ smp_rmb();
++ return __io_cqring_events(ctx);
++}
++
++static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
++{
++ struct io_rings *rings = ctx->rings;
++
++ /* make sure SQ entry isn't read before tail */
++ return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
++}
++
++static inline bool io_run_task_work(void)
++{
++ if (test_thread_flag(TIF_NOTIFY_SIGNAL) || task_work_pending(current)) {
++ __set_current_state(TASK_RUNNING);
++ clear_notify_signal();
++ if (task_work_pending(current))
++ task_work_run();
++ return true;
++ }
++
++ return false;
++}
++
++static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
++{
++ struct io_wq_work_node *pos, *start, *prev;
++ unsigned int poll_flags = BLK_POLL_NOSLEEP;
++ DEFINE_IO_COMP_BATCH(iob);
++ int nr_events = 0;
++
++ /*
++ * Only spin for completions if we don't have multiple devices hanging
++ * off our complete list.
++ */
++ if (ctx->poll_multi_queue || force_nonspin)
++ poll_flags |= BLK_POLL_ONESHOT;
++
++ wq_list_for_each(pos, start, &ctx->iopoll_list) {
++ struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
++ struct kiocb *kiocb = &req->rw.kiocb;
++ int ret;
++
++ /*
++ * Move completed and retryable entries to our local lists.
++ * If we find a request that requires polling, break out
++ * and complete those lists first, if we have entries there.
++ */
++ if (READ_ONCE(req->iopoll_completed))
++ break;
++
++ ret = kiocb->ki_filp->f_op->iopoll(kiocb, &iob, poll_flags);
++ if (unlikely(ret < 0))
++ return ret;
++ else if (ret)
++ poll_flags |= BLK_POLL_ONESHOT;
++
++ /* iopoll may have completed current req */
++ if (!rq_list_empty(iob.req_list) ||
++ READ_ONCE(req->iopoll_completed))
++ break;
++ }
++
++ if (!rq_list_empty(iob.req_list))
++ iob.complete(&iob);
++ else if (!pos)
++ return 0;
++
++ prev = start;
++ wq_list_for_each_resume(pos, prev) {
++ struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
++
++ /* order with io_complete_rw_iopoll(), e.g. ->result updates */
++ if (!smp_load_acquire(&req->iopoll_completed))
++ break;
++ nr_events++;
++ if (unlikely(req->flags & REQ_F_CQE_SKIP))
++ continue;
++
++ req->cqe.flags = io_put_kbuf(req, 0);
++ __io_fill_cqe_req(req->ctx, req);
++ }
++
++ if (unlikely(!nr_events))
++ return 0;
++
++ io_commit_cqring(ctx);
++ io_cqring_ev_posted_iopoll(ctx);
++ pos = start ? start->next : ctx->iopoll_list.first;
++ wq_list_cut(&ctx->iopoll_list, prev, start);
++ io_free_batch_list(ctx, pos);
++ return nr_events;
++}
++
++/*
++ * We can't just wait for polled events to come to us, we have to actively
++ * find and complete them.
++ */
++static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
++{
++ if (!(ctx->flags & IORING_SETUP_IOPOLL))
++ return;
++
++ mutex_lock(&ctx->uring_lock);
++ while (!wq_list_empty(&ctx->iopoll_list)) {
++ /* let it sleep and repeat later if we can't complete a request */
++ if (io_do_iopoll(ctx, true) == 0)
++ break;
++ /*
++ * Ensure we allow local-to-the-cpu processing to take place,
++ * in this case we need to ensure that we reap all events.
++ * Also let task_work, etc., progress by releasing the mutex.
++ */
++ if (need_resched()) {
++ mutex_unlock(&ctx->uring_lock);
++ cond_resched();
++ mutex_lock(&ctx->uring_lock);
++ }
++ }
++ mutex_unlock(&ctx->uring_lock);
++}
++
++static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
++{
++ unsigned int nr_events = 0;
++ int ret = 0;
++ unsigned long check_cq;
++
++ /*
++ * Don't enter poll loop if we already have events pending.
++ * If we do, we can potentially be spinning for commands that
++ * already triggered a CQE (eg in error).
++ */
++ check_cq = READ_ONCE(ctx->check_cq);
++ if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
++ __io_cqring_overflow_flush(ctx, false);
++ if (io_cqring_events(ctx))
++ return 0;
++
++ /*
++ * Similarly do not spin if we have not informed the user of any
++ * dropped CQE.
++ */
++ if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
++ return -EBADR;
++
++ do {
++ /*
++ * If a submit got punted to a workqueue, we can have the
++ * application entering polling for a command before it gets
++ * issued. That app will hold the uring_lock for the duration
++ * of the poll right here, so we need to take a breather every
++ * now and then to ensure that the issue has a chance to add
++ * the poll to the issued list. Otherwise we can spin here
++ * forever, while the workqueue is stuck trying to acquire the
++ * very same mutex.
++ */
++ if (wq_list_empty(&ctx->iopoll_list)) {
++ u32 tail = ctx->cached_cq_tail;
++
++ mutex_unlock(&ctx->uring_lock);
++ io_run_task_work();
++ mutex_lock(&ctx->uring_lock);
++
++ /* some requests don't go through iopoll_list */
++ if (tail != ctx->cached_cq_tail ||
++ wq_list_empty(&ctx->iopoll_list))
++ break;
++ }
++ ret = io_do_iopoll(ctx, !min);
++ if (ret < 0)
++ break;
++ nr_events += ret;
++ ret = 0;
++ } while (nr_events < min && !need_resched());
++
++ return ret;
++}
++
++static void kiocb_end_write(struct io_kiocb *req)
++{
++ /*
++ * Tell lockdep we inherited freeze protection from submission
++ * thread.
++ */
++ if (req->flags & REQ_F_ISREG) {
++ struct super_block *sb = file_inode(req->file)->i_sb;
++
++ __sb_writers_acquired(sb, SB_FREEZE_WRITE);
++ sb_end_write(sb);
++ }
++}
++
++#ifdef CONFIG_BLOCK
++static bool io_resubmit_prep(struct io_kiocb *req)
++{
++ struct io_async_rw *rw = req->async_data;
++
++ if (!req_has_async_data(req))
++ return !io_req_prep_async(req);
++ iov_iter_restore(&rw->s.iter, &rw->s.iter_state);
++ return true;
++}
++
++static bool io_rw_should_reissue(struct io_kiocb *req)
++{
++ umode_t mode = file_inode(req->file)->i_mode;
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (!S_ISBLK(mode) && !S_ISREG(mode))
++ return false;
++ if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
++ !(ctx->flags & IORING_SETUP_IOPOLL)))
++ return false;
++ /*
++ * If ref is dying, we might be running poll reap from the exit work.
++ * Don't attempt to reissue from that path, just let it fail with
++ * -EAGAIN.
++ */
++ if (percpu_ref_is_dying(&ctx->refs))
++ return false;
++ /*
++ * Play it safe and assume it's not safe to re-import and reissue if
++ * we're not in the original thread group (or not in task context).
++ */
++ if (!same_thread_group(req->task, current) || !in_task())
++ return false;
++ return true;
++}
++#else
++static bool io_resubmit_prep(struct io_kiocb *req)
++{
++ return false;
++}
++static bool io_rw_should_reissue(struct io_kiocb *req)
++{
++ return false;
++}
++#endif
++
++static bool __io_complete_rw_common(struct io_kiocb *req, long res)
++{
++ if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
++ kiocb_end_write(req);
++ fsnotify_modify(req->file);
++ } else {
++ fsnotify_access(req->file);
++ }
++ if (unlikely(res != req->cqe.res)) {
++ if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
++ io_rw_should_reissue(req)) {
++ req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
++ return true;
++ }
++ req_set_fail(req);
++ req->cqe.res = res;
++ }
++ return false;
++}
++
++static inline void io_req_task_complete(struct io_kiocb *req, bool *locked)
++{
++ int res = req->cqe.res;
++
++ if (*locked) {
++ io_req_complete_state(req, res, io_put_kbuf(req, 0));
++ io_req_add_compl_list(req);
++ } else {
++ io_req_complete_post(req, res,
++ io_put_kbuf(req, IO_URING_F_UNLOCKED));
++ }
++}
++
++static void __io_complete_rw(struct io_kiocb *req, long res,
++ unsigned int issue_flags)
++{
++ if (__io_complete_rw_common(req, res))
++ return;
++ __io_req_complete(req, issue_flags, req->cqe.res,
++ io_put_kbuf(req, issue_flags));
++}
++
++static void io_complete_rw(struct kiocb *kiocb, long res)
++{
++ struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
++
++ if (__io_complete_rw_common(req, res))
++ return;
++ req->cqe.res = res;
++ req->io_task_work.func = io_req_task_complete;
++ io_req_task_prio_work_add(req);
++}
++
++static void io_complete_rw_iopoll(struct kiocb *kiocb, long res)
++{
++ struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
++
++ if (kiocb->ki_flags & IOCB_WRITE)
++ kiocb_end_write(req);
++ if (unlikely(res != req->cqe.res)) {
++ if (res == -EAGAIN && io_rw_should_reissue(req)) {
++ req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
++ return;
++ }
++ req->cqe.res = res;
++ }
++
++ /* order with io_iopoll_complete() checking ->iopoll_completed */
++ smp_store_release(&req->iopoll_completed, 1);
++}
++
++/*
++ * After the iocb has been issued, it's safe to be found on the poll list.
++ * Adding the kiocb to the list AFTER submission ensures that we don't
++ * find it from a io_do_iopoll() thread before the issuer is done
++ * accessing the kiocb cookie.
++ */
++static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
++
++ /* workqueue context doesn't hold uring_lock, grab it now */
++ if (unlikely(needs_lock))
++ mutex_lock(&ctx->uring_lock);
++
++ /*
++ * Track whether we have multiple files in our lists. This will impact
++ * how we do polling eventually, not spinning if we're on potentially
++ * different devices.
++ */
++ if (wq_list_empty(&ctx->iopoll_list)) {
++ ctx->poll_multi_queue = false;
++ } else if (!ctx->poll_multi_queue) {
++ struct io_kiocb *list_req;
++
++ list_req = container_of(ctx->iopoll_list.first, struct io_kiocb,
++ comp_list);
++ if (list_req->file != req->file)
++ ctx->poll_multi_queue = true;
++ }
++
++ /*
++ * For fast devices, IO may have already completed. If it has, add
++ * it to the front so we find it first.
++ */
++ if (READ_ONCE(req->iopoll_completed))
++ wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
++ else
++ wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
++
++ if (unlikely(needs_lock)) {
++ /*
++ * If IORING_SETUP_SQPOLL is enabled, sqes are either handled
++ * in sq thread task context or in io worker task context. If
++ * the current task context is the sq thread, we don't need to
++ * check whether we should wake up the sq thread.
++ */
++ if ((ctx->flags & IORING_SETUP_SQPOLL) &&
++ wq_has_sleeper(&ctx->sq_data->wait))
++ wake_up(&ctx->sq_data->wait);
++
++ mutex_unlock(&ctx->uring_lock);
++ }
++}
++
++static bool io_bdev_nowait(struct block_device *bdev)
++{
++ return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
++}
++
++/*
++ * If we tracked the file through the SCM inflight mechanism, we could support
++ * any file. For now, just ensure that anything potentially problematic is done
++ * inline.
++ */
++static bool __io_file_supports_nowait(struct file *file, umode_t mode)
++{
++ if (S_ISBLK(mode)) {
++ if (IS_ENABLED(CONFIG_BLOCK) &&
++ io_bdev_nowait(I_BDEV(file->f_mapping->host)))
++ return true;
++ return false;
++ }
++ if (S_ISSOCK(mode))
++ return true;
++ if (S_ISREG(mode)) {
++ if (IS_ENABLED(CONFIG_BLOCK) &&
++ io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
++ file->f_op != &io_uring_fops)
++ return true;
++ return false;
++ }
++
++ /* any ->read/write should understand O_NONBLOCK */
++ if (file->f_flags & O_NONBLOCK)
++ return true;
++ return file->f_mode & FMODE_NOWAIT;
++}
++
++/*
++ * If we tracked the file through the SCM inflight mechanism, we could support
++ * any file. For now, just ensure that anything potentially problematic is done
++ * inline.
++ */
++static unsigned int io_file_get_flags(struct file *file)
++{
++ umode_t mode = file_inode(file)->i_mode;
++ unsigned int res = 0;
++
++ if (S_ISREG(mode))
++ res |= FFS_ISREG;
++ if (__io_file_supports_nowait(file, mode))
++ res |= FFS_NOWAIT;
++ if (io_file_need_scm(file))
++ res |= FFS_SCM;
++ return res;
++}
++
++static inline bool io_file_supports_nowait(struct io_kiocb *req)
++{
++ return req->flags & REQ_F_SUPPORT_NOWAIT;
++}
++
++static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct kiocb *kiocb = &req->rw.kiocb;
++ unsigned ioprio;
++ int ret;
++
++ kiocb->ki_pos = READ_ONCE(sqe->off);
++ /* used for fixed read/write too - just read unconditionally */
++ req->buf_index = READ_ONCE(sqe->buf_index);
++
++ if (req->opcode == IORING_OP_READ_FIXED ||
++ req->opcode == IORING_OP_WRITE_FIXED) {
++ struct io_ring_ctx *ctx = req->ctx;
++ u16 index;
++
++ if (unlikely(req->buf_index >= ctx->nr_user_bufs))
++ return -EFAULT;
++ index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
++ req->imu = ctx->user_bufs[index];
++ io_req_set_rsrc_node(req, ctx, 0);
++ }
++
++ ioprio = READ_ONCE(sqe->ioprio);
++ if (ioprio) {
++ ret = ioprio_check_cap(ioprio);
++ if (ret)
++ return ret;
++
++ kiocb->ki_ioprio = ioprio;
++ } else {
++ kiocb->ki_ioprio = get_current_ioprio();
++ }
++
++ req->rw.addr = READ_ONCE(sqe->addr);
++ req->rw.len = READ_ONCE(sqe->len);
++ req->rw.flags = READ_ONCE(sqe->rw_flags);
++ return 0;
++}
++
++static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
++{
++ switch (ret) {
++ case -EIOCBQUEUED:
++ break;
++ case -ERESTARTSYS:
++ case -ERESTARTNOINTR:
++ case -ERESTARTNOHAND:
++ case -ERESTART_RESTARTBLOCK:
++ /*
++ * We can't just restart the syscall, since previously
++ * submitted sqes may already be in progress. Just fail this
++ * IO with EINTR.
++ */
++ ret = -EINTR;
++ fallthrough;
++ default:
++ kiocb->ki_complete(kiocb, ret);
++ }
++}
++
++static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
++{
++ struct kiocb *kiocb = &req->rw.kiocb;
++
++ if (kiocb->ki_pos != -1)
++ return &kiocb->ki_pos;
++
++ if (!(req->file->f_mode & FMODE_STREAM)) {
++ req->flags |= REQ_F_CUR_POS;
++ kiocb->ki_pos = req->file->f_pos;
++ return &kiocb->ki_pos;
++ }
++
++ kiocb->ki_pos = 0;
++ return NULL;
++}
++
++static void kiocb_done(struct io_kiocb *req, ssize_t ret,
++ unsigned int issue_flags)
++{
++ struct io_async_rw *io = req->async_data;
++
++ /* add previously done IO, if any */
++ if (req_has_async_data(req) && io->bytes_done > 0) {
++ if (ret < 0)
++ ret = io->bytes_done;
++ else
++ ret += io->bytes_done;
++ }
++
++ if (req->flags & REQ_F_CUR_POS)
++ req->file->f_pos = req->rw.kiocb.ki_pos;
++ if (ret >= 0 && (req->rw.kiocb.ki_complete == io_complete_rw))
++ __io_complete_rw(req, ret, issue_flags);
++ else
++ io_rw_done(&req->rw.kiocb, ret);
++
++ if (req->flags & REQ_F_REISSUE) {
++ req->flags &= ~REQ_F_REISSUE;
++ if (io_resubmit_prep(req))
++ io_req_task_queue_reissue(req);
++ else
++ io_req_task_queue_fail(req, ret);
++ }
++}
++
++static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
++ struct io_mapped_ubuf *imu)
++{
++ size_t len = req->rw.len;
++ u64 buf_end, buf_addr = req->rw.addr;
++ size_t offset;
++
++ if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
++ return -EFAULT;
++ /* not inside the mapped region */
++ if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
++ return -EFAULT;
++
++ /*
++ * May not be the start of the buffer, so set the size appropriately
++ * and advance us to the beginning.
++ */
++ offset = buf_addr - imu->ubuf;
++ iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
++
++ if (offset) {
++ /*
++ * Don't use iov_iter_advance() here, as it's really slow for
++ * using the latter parts of a big fixed buffer - it iterates
++ * over each segment manually. We can cheat a bit here, because
++ * we know that:
++ *
++ * 1) it's a BVEC iter, we set it up
++ * 2) all bvecs are PAGE_SIZE in size, except potentially the
++ * first and last bvec
++ *
++ * So just find our index, and adjust the iterator afterwards.
++ * If the offset is within the first bvec (or the whole first
++ * bvec, just use iov_iter_advance(). This makes it easier
++ * since we can just skip the first segment, which may not
++ * be PAGE_SIZE aligned.
++ */
++ const struct bio_vec *bvec = imu->bvec;
++
++ if (offset <= bvec->bv_len) {
++ iov_iter_advance(iter, offset);
++ } else {
++ unsigned long seg_skip;
++
++ /* skip first vec */
++ offset -= bvec->bv_len;
++ seg_skip = 1 + (offset >> PAGE_SHIFT);
++
++ iter->bvec = bvec + seg_skip;
++ iter->nr_segs -= seg_skip;
++ iter->count -= bvec->bv_len + offset;
++ iter->iov_offset = offset & ~PAGE_MASK;
++ }
++ }
++
++ return 0;
++}
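++
++/*
++ * Worked example for the segment-skip math above, assuming 4K pages:
++ * with offset == 10000 and a 4096-byte first bvec, offset becomes
++ * 5904 after skipping the first segment, seg_skip == 1 + (5904 >> 12)
++ * == 2, and iov_offset == 5904 & 4095 == 1808, i.e. the iterator
++ * resumes 1808 bytes into the third bvec.
++ */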
++
++static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
++ unsigned int issue_flags)
++{
++ if (WARN_ON_ONCE(!req->imu))
++ return -EFAULT;
++ return __io_import_fixed(req, rw, iter, req->imu);
++}
++
++static int io_buffer_add_list(struct io_ring_ctx *ctx,
++ struct io_buffer_list *bl, unsigned int bgid)
++{
++ bl->bgid = bgid;
++ if (bgid < BGID_ARRAY)
++ return 0;
++
++ return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
++}
++
++static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
++ struct io_buffer_list *bl)
++{
++ if (!list_empty(&bl->buf_list)) {
++ struct io_buffer *kbuf;
++
++ kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
++ list_del(&kbuf->list);
++ if (*len > kbuf->len)
++ *len = kbuf->len;
++ req->flags |= REQ_F_BUFFER_SELECTED;
++ req->kbuf = kbuf;
++ req->buf_index = kbuf->bid;
++ return u64_to_user_ptr(kbuf->addr);
++ }
++ return NULL;
++}
++
++static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
++ struct io_buffer_list *bl,
++ unsigned int issue_flags)
++{
++ struct io_uring_buf_ring *br = bl->buf_ring;
++ struct io_uring_buf *buf;
++ __u16 head = bl->head;
++
++ if (unlikely(smp_load_acquire(&br->tail) == head))
++ return NULL;
++
++ head &= bl->mask;
++ if (head < IO_BUFFER_LIST_BUF_PER_PAGE) {
++ buf = &br->bufs[head];
++ } else {
++ int off = head & (IO_BUFFER_LIST_BUF_PER_PAGE - 1);
++ int index = head / IO_BUFFER_LIST_BUF_PER_PAGE;
++ buf = page_address(bl->buf_pages[index]);
++ buf += off;
++ }
++ if (*len > buf->len)
++ *len = buf->len;
++ req->flags |= REQ_F_BUFFER_RING;
++ req->buf_list = bl;
++ req->buf_index = buf->bid;
++
++ if (issue_flags & IO_URING_F_UNLOCKED || !file_can_poll(req->file)) {
++ /*
++ * If we came in unlocked, we have no choice but to consume the
++ * buffer here. This does mean it'll be pinned until the IO
++ * completes. But coming in unlocked means we're in io-wq
++ * context, hence there should be no further retry. For the
++ * locked case, the caller must ensure to call the commit when
++ * the transfer completes (or if we get -EAGAIN and must poll
++ * or retry).
++ */
++ req->buf_list = NULL;
++ bl->head++;
++ }
++ return u64_to_user_ptr(buf->addr);
++}
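++
++/*
++ * Page-indexing example for the ring-mapped branch above: with 4K pages
++ * and the 16-byte struct io_uring_buf, IO_BUFFER_LIST_BUF_PER_PAGE is
++ * 256, so a masked head of 700 yields index == 2 and off == 188, i.e.
++ * entry 188 on the third buffer page.
++ */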
++
++static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
++ unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_buffer_list *bl;
++ void __user *ret = NULL;
++
++ io_ring_submit_lock(req->ctx, issue_flags);
++
++ bl = io_buffer_get_list(ctx, req->buf_index);
++ if (likely(bl)) {
++ if (bl->buf_nr_pages)
++ ret = io_ring_buffer_select(req, len, bl, issue_flags);
++ else
++ ret = io_provided_buffer_select(req, len, bl);
++ }
++ io_ring_submit_unlock(req->ctx, issue_flags);
++ return ret;
++}
++
++#ifdef CONFIG_COMPAT
++static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
++ unsigned int issue_flags)
++{
++ struct compat_iovec __user *uiov;
++ compat_ssize_t clen;
++ void __user *buf;
++ size_t len;
++
++ uiov = u64_to_user_ptr(req->rw.addr);
++ if (!access_ok(uiov, sizeof(*uiov)))
++ return -EFAULT;
++ if (__get_user(clen, &uiov->iov_len))
++ return -EFAULT;
++ if (clen < 0)
++ return -EINVAL;
++
++ len = clen;
++ buf = io_buffer_select(req, &len, issue_flags);
++ if (!buf)
++ return -ENOBUFS;
++ req->rw.addr = (unsigned long) buf;
++ iov[0].iov_base = buf;
++ req->rw.len = iov[0].iov_len = (compat_size_t) len;
++ return 0;
++}
++#endif
++
++static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
++ unsigned int issue_flags)
++{
++ struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
++ void __user *buf;
++ ssize_t len;
++
++ if (copy_from_user(iov, uiov, sizeof(*uiov)))
++ return -EFAULT;
++
++ len = iov[0].iov_len;
++ if (len < 0)
++ return -EINVAL;
++ buf = io_buffer_select(req, &len, issue_flags);
++ if (!buf)
++ return -ENOBUFS;
++ req->rw.addr = (unsigned long) buf;
++ iov[0].iov_base = buf;
++ req->rw.len = iov[0].iov_len = len;
++ return 0;
++}
++
++static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
++ unsigned int issue_flags)
++{
++ if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)) {
++ iov[0].iov_base = u64_to_user_ptr(req->rw.addr);
++ iov[0].iov_len = req->rw.len;
++ return 0;
++ }
++ if (req->rw.len != 1)
++ return -EINVAL;
++
++#ifdef CONFIG_COMPAT
++ if (req->ctx->compat)
++ return io_compat_import(req, iov, issue_flags);
++#endif
++
++ return __io_iov_buffer_select(req, iov, issue_flags);
++}
++
++static inline bool io_do_buffer_select(struct io_kiocb *req)
++{
++ if (!(req->flags & REQ_F_BUFFER_SELECT))
++ return false;
++ return !(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING));
++}
++
++static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
++ struct io_rw_state *s,
++ unsigned int issue_flags)
++{
++ struct iov_iter *iter = &s->iter;
++ u8 opcode = req->opcode;
++ struct iovec *iovec;
++ void __user *buf;
++ size_t sqe_len;
++ ssize_t ret;
++
++ if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
++ ret = io_import_fixed(req, rw, iter, issue_flags);
++ if (ret)
++ return ERR_PTR(ret);
++ return NULL;
++ }
++
++ buf = u64_to_user_ptr(req->rw.addr);
++ sqe_len = req->rw.len;
++
++ if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
++ if (io_do_buffer_select(req)) {
++ buf = io_buffer_select(req, &sqe_len, issue_flags);
++ if (!buf)
++ return ERR_PTR(-ENOBUFS);
++ req->rw.addr = (unsigned long) buf;
++ req->rw.len = sqe_len;
++ }
++
++ ret = import_single_range(rw, buf, sqe_len, s->fast_iov, iter);
++ if (ret)
++ return ERR_PTR(ret);
++ return NULL;
++ }
++
++ iovec = s->fast_iov;
++ if (req->flags & REQ_F_BUFFER_SELECT) {
++ ret = io_iov_buffer_select(req, iovec, issue_flags);
++ if (ret)
++ return ERR_PTR(ret);
++ iov_iter_init(iter, rw, iovec, 1, iovec->iov_len);
++ return NULL;
++ }
++
++ ret = __import_iovec(rw, buf, sqe_len, UIO_FASTIOV, &iovec, iter,
++ req->ctx->compat);
++ if (unlikely(ret < 0))
++ return ERR_PTR(ret);
++ return iovec;
++}
++
++static inline int io_import_iovec(int rw, struct io_kiocb *req,
++ struct iovec **iovec, struct io_rw_state *s,
++ unsigned int issue_flags)
++{
++ *iovec = __io_import_iovec(rw, req, s, issue_flags);
++ if (unlikely(IS_ERR(*iovec)))
++ return PTR_ERR(*iovec);
++
++ iov_iter_save_state(&s->iter, &s->iter_state);
++ return 0;
++}
++
++static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
++{
++ return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
++}
++
++/*
++ * For files that don't have ->read_iter() and ->write_iter(), handle them
++ * by looping over ->read() or ->write() manually.
++ */
++static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
++{
++ struct kiocb *kiocb = &req->rw.kiocb;
++ struct file *file = req->file;
++ ssize_t ret = 0;
++ loff_t *ppos;
++
++ /*
++ * Don't support polled IO through this interface, and we can't
++ * support non-blocking either. For the latter, this just causes
++ * the kiocb to be handled from an async context.
++ */
++ if (kiocb->ki_flags & IOCB_HIPRI)
++ return -EOPNOTSUPP;
++ if ((kiocb->ki_flags & IOCB_NOWAIT) &&
++ !(kiocb->ki_filp->f_flags & O_NONBLOCK))
++ return -EAGAIN;
++
++ ppos = io_kiocb_ppos(kiocb);
++
++ while (iov_iter_count(iter)) {
++ struct iovec iovec;
++ ssize_t nr;
++
++ if (!iov_iter_is_bvec(iter)) {
++ iovec = iov_iter_iovec(iter);
++ } else {
++ iovec.iov_base = u64_to_user_ptr(req->rw.addr);
++ iovec.iov_len = req->rw.len;
++ }
++
++ if (rw == READ) {
++ nr = file->f_op->read(file, iovec.iov_base,
++ iovec.iov_len, ppos);
++ } else {
++ nr = file->f_op->write(file, iovec.iov_base,
++ iovec.iov_len, ppos);
++ }
++
++ if (nr < 0) {
++ if (!ret)
++ ret = nr;
++ break;
++ }
++ ret += nr;
++ if (!iov_iter_is_bvec(iter)) {
++ iov_iter_advance(iter, nr);
++ } else {
++ req->rw.addr += nr;
++ req->rw.len -= nr;
++ if (!req->rw.len)
++ break;
++ }
++ if (nr != iovec.iov_len)
++ break;
++ }
++
++ return ret;
++}
++
++static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
++ const struct iovec *fast_iov, struct iov_iter *iter)
++{
++ struct io_async_rw *rw = req->async_data;
++
++ memcpy(&rw->s.iter, iter, sizeof(*iter));
++ rw->free_iovec = iovec;
++ rw->bytes_done = 0;
++ /* can only be fixed buffers, no need to do anything */
++ if (iov_iter_is_bvec(iter))
++ return;
++ if (!iovec) {
++ unsigned iov_off = 0;
++
++ rw->s.iter.iov = rw->s.fast_iov;
++ if (iter->iov != fast_iov) {
++ iov_off = iter->iov - fast_iov;
++ rw->s.iter.iov += iov_off;
++ }
++ if (rw->s.fast_iov != fast_iov)
++ memcpy(rw->s.fast_iov + iov_off, fast_iov + iov_off,
++ sizeof(struct iovec) * iter->nr_segs);
++ } else {
++ req->flags |= REQ_F_NEED_CLEANUP;
++ }
++}
++
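++/* Note the inverted return below: true means the allocation failed. */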
++static inline bool io_alloc_async_data(struct io_kiocb *req)
++{
++ WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
++ req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
++ if (req->async_data) {
++ req->flags |= REQ_F_ASYNC_DATA;
++ return false;
++ }
++ return true;
++}
++
++static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
++ struct io_rw_state *s, bool force)
++{
++ if (!force && !io_op_defs[req->opcode].needs_async_setup)
++ return 0;
++ if (!req_has_async_data(req)) {
++ struct io_async_rw *iorw;
++
++ if (io_alloc_async_data(req)) {
++ kfree(iovec);
++ return -ENOMEM;
++ }
++
++ io_req_map_rw(req, iovec, s->fast_iov, &s->iter);
++ iorw = req->async_data;
++ /* we've copied and mapped the iter, ensure state is saved */
++ iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
++ }
++ return 0;
++}
++
++static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
++{
++ struct io_async_rw *iorw = req->async_data;
++ struct iovec *iov;
++ int ret;
++
++ /* submission path, ->uring_lock should already be taken */
++ ret = io_import_iovec(rw, req, &iov, &iorw->s, 0);
++ if (unlikely(ret < 0))
++ return ret;
++
++ iorw->bytes_done = 0;
++ iorw->free_iovec = iov;
++ if (iov)
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_readv_prep_async(struct io_kiocb *req)
++{
++ return io_rw_prep_async(req, READ);
++}
++
++static int io_writev_prep_async(struct io_kiocb *req)
++{
++ return io_rw_prep_async(req, WRITE);
++}
++
++/*
++ * This is our waitqueue callback handler, registered through __folio_lock_async()
++ * when we initially tried to do the IO with the iocb and armed our waitqueue.
++ * This gets called when the page is unlocked, and we generally expect that to
++ * happen when the page IO is completed and the page is now uptodate. This will
++ * queue a task_work based retry of the operation, attempting to copy the data
++ * again. If the latter fails because the page was NOT uptodate, then we will
++ * do a thread based blocking retry of the operation. That's the unexpected
++ * slow path.
++ */
++static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
++ int sync, void *arg)
++{
++ struct wait_page_queue *wpq;
++ struct io_kiocb *req = wait->private;
++ struct wait_page_key *key = arg;
++
++ wpq = container_of(wait, struct wait_page_queue, wait);
++
++ if (!wake_page_match(wpq, key))
++ return 0;
++
++ req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
++ list_del_init(&wait->entry);
++ io_req_task_queue(req);
++ return 1;
++}
++
++/*
++ * This controls whether a given IO request should be armed for async page
++ * based retry. If we return false here, the request is handed to the async
++ * worker threads for retry. If we're doing buffered reads on a regular file,
++ * we prepare a private wait_page_queue entry and retry the operation. This
++ * will either succeed because the page is now uptodate and unlocked, or it
++ * will register a callback when the page is unlocked at IO completion. Through
++ * that callback, io_uring uses task_work to set up a retry of the operation.
++ * That retry will attempt the buffered read again. The retry will generally
++ * succeed, or in rare cases where it fails, we then fall back to using the
++ * async worker threads for a blocking retry.
++ */
++static bool io_rw_should_retry(struct io_kiocb *req)
++{
++ struct io_async_rw *rw = req->async_data;
++ struct wait_page_queue *wait = &rw->wpq;
++ struct kiocb *kiocb = &req->rw.kiocb;
++
++ /* never retry for NOWAIT, we just complete with -EAGAIN */
++ if (req->flags & REQ_F_NOWAIT)
++ return false;
++
++ /* Only for buffered IO */
++ if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
++ return false;
++
++ /*
++ * just use poll if we can, and don't attempt if the fs doesn't
++ * support callback based unlocks
++ */
++ if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
++ return false;
++
++ wait->wait.func = io_async_buf_func;
++ wait->wait.private = req;
++ wait->wait.flags = 0;
++ INIT_LIST_HEAD(&wait->wait.entry);
++ kiocb->ki_flags |= IOCB_WAITQ;
++ kiocb->ki_flags &= ~IOCB_NOWAIT;
++ kiocb->ki_waitq = wait;
++ return true;
++}
++
++static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
++{
++ if (likely(req->file->f_op->read_iter))
++ return call_read_iter(req->file, &req->rw.kiocb, iter);
++ else if (req->file->f_op->read)
++ return loop_rw_iter(READ, req, iter);
++ else
++ return -EINVAL;
++}
++
++static bool need_read_all(struct io_kiocb *req)
++{
++ return req->flags & REQ_F_ISREG ||
++ S_ISBLK(file_inode(req->file)->i_mode);
++}
++
++static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
++{
++ struct kiocb *kiocb = &req->rw.kiocb;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct file *file = req->file;
++ int ret;
++
++ if (unlikely(!file || !(file->f_mode & mode)))
++ return -EBADF;
++
++ if (!io_req_ffs_set(req))
++ req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT;
++
++ kiocb->ki_flags = iocb_flags(file);
++ ret = kiocb_set_rw_flags(kiocb, req->rw.flags);
++ if (unlikely(ret))
++ return ret;
++
++ /*
++ * If the file is marked O_NONBLOCK, still allow retry for it if it
++ * supports async. Otherwise it's impossible to use O_NONBLOCK files
++ * reliably. If not, or if IOCB_NOWAIT is set, don't retry.
++ */
++ if ((kiocb->ki_flags & IOCB_NOWAIT) ||
++ ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
++ req->flags |= REQ_F_NOWAIT;
++
++ if (ctx->flags & IORING_SETUP_IOPOLL) {
++ if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
++ return -EOPNOTSUPP;
++
++ kiocb->private = NULL;
++ kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
++ kiocb->ki_complete = io_complete_rw_iopoll;
++ req->iopoll_completed = 0;
++ } else {
++ if (kiocb->ki_flags & IOCB_HIPRI)
++ return -EINVAL;
++ kiocb->ki_complete = io_complete_rw;
++ }
++
++ return 0;
++}
++
++static int io_read(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_rw_state __s, *s = &__s;
++ struct iovec *iovec;
++ struct kiocb *kiocb = &req->rw.kiocb;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++ struct io_async_rw *rw;
++ ssize_t ret, ret2;
++ loff_t *ppos;
++
++ if (!req_has_async_data(req)) {
++ ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
++ if (unlikely(ret < 0))
++ return ret;
++ } else {
++ rw = req->async_data;
++ s = &rw->s;
++
++ /*
++ * Safe and required to re-import if we're using provided
++ * buffers, as we dropped the selected one before retry.
++ */
++ if (io_do_buffer_select(req)) {
++ ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
++ if (unlikely(ret < 0))
++ return ret;
++ }
++
++ /*
++ * We come here from an earlier attempt, restore our state to
++ * match in case it doesn't. It's cheap enough that we don't
++ * need to make this conditional.
++ */
++ iov_iter_restore(&s->iter, &s->iter_state);
++ iovec = NULL;
++ }
++ ret = io_rw_init_file(req, FMODE_READ);
++ if (unlikely(ret)) {
++ kfree(iovec);
++ return ret;
++ }
++ req->cqe.res = iov_iter_count(&s->iter);
++
++ if (force_nonblock) {
++ /* If the file doesn't support async, just async punt */
++ if (unlikely(!io_file_supports_nowait(req))) {
++ ret = io_setup_async_rw(req, iovec, s, true);
++ return ret ?: -EAGAIN;
++ }
++ kiocb->ki_flags |= IOCB_NOWAIT;
++ } else {
++ /* Ensure we clear previously set non-block flag */
++ kiocb->ki_flags &= ~IOCB_NOWAIT;
++ }
++
++ ppos = io_kiocb_update_pos(req);
++
++ ret = rw_verify_area(READ, req->file, ppos, req->cqe.res);
++ if (unlikely(ret)) {
++ kfree(iovec);
++ return ret;
++ }
++
++ ret = io_iter_do_read(req, &s->iter);
++
++ if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
++ req->flags &= ~REQ_F_REISSUE;
++ /* if we can poll, just do that */
++ if (req->opcode == IORING_OP_READ && file_can_poll(req->file))
++ return -EAGAIN;
++ /* IOPOLL retry should happen for io-wq threads */
++ if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
++ goto done;
++ /* no retry on NONBLOCK or RWF_NOWAIT */
++ if (req->flags & REQ_F_NOWAIT)
++ goto done;
++ ret = 0;
++ } else if (ret == -EIOCBQUEUED) {
++ goto out_free;
++ } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
++ (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
++ /* read all, failed, already did sync or don't want to retry */
++ goto done;
++ }
++
++ /*
++ * Don't depend on the iter state matching what was consumed, or being
++ * untouched in case of error. Restore it and we'll advance it
++ * manually if we need to.
++ */
++ iov_iter_restore(&s->iter, &s->iter_state);
++
++ ret2 = io_setup_async_rw(req, iovec, s, true);
++ if (ret2)
++ return ret2;
++
++ iovec = NULL;
++ rw = req->async_data;
++ s = &rw->s;
++ /*
++ * Now use our persistent iterator and state, if we aren't already.
++ * We've restored and mapped the iter to match.
++ */
++
++ do {
++ /*
++ * We end up here because of a partial read, either from
++ * above or inside this loop. Advance the iter by the bytes
++ * that were consumed.
++ */
++ iov_iter_advance(&s->iter, ret);
++ if (!iov_iter_count(&s->iter))
++ break;
++ rw->bytes_done += ret;
++ iov_iter_save_state(&s->iter, &s->iter_state);
++
++ /* if we can retry, do so with the callbacks armed */
++ if (!io_rw_should_retry(req)) {
++ kiocb->ki_flags &= ~IOCB_WAITQ;
++ return -EAGAIN;
++ }
++
++ /*
++ * Now retry read with the IOCB_WAITQ parts set in the iocb. If
++ * we get -EIOCBQUEUED, then we'll get a notification when the
++ * desired page gets unlocked. We can also get a partial read
++ * here, and if we do, then just retry at the new offset.
++ */
++ ret = io_iter_do_read(req, &s->iter);
++ if (ret == -EIOCBQUEUED)
++ return 0;
++ /* we got some bytes, but not all. retry. */
++ kiocb->ki_flags &= ~IOCB_WAITQ;
++ iov_iter_restore(&s->iter, &s->iter_state);
++ } while (ret > 0);
++done:
++ kiocb_done(req, ret, issue_flags);
++out_free:
++ /* it's faster to check here than to delegate to kfree */
++ if (iovec)
++ kfree(iovec);
++ return 0;
++}
++
++static int io_write(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_rw_state __s, *s = &__s;
++ struct iovec *iovec;
++ struct kiocb *kiocb = &req->rw.kiocb;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++ ssize_t ret, ret2;
++ loff_t *ppos;
++
++ if (!req_has_async_data(req)) {
++ ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
++ if (unlikely(ret < 0))
++ return ret;
++ } else {
++ struct io_async_rw *rw = req->async_data;
++
++ s = &rw->s;
++ iov_iter_restore(&s->iter, &s->iter_state);
++ iovec = NULL;
++ }
++ ret = io_rw_init_file(req, FMODE_WRITE);
++ if (unlikely(ret)) {
++ kfree(iovec);
++ return ret;
++ }
++ req->cqe.res = iov_iter_count(&s->iter);
++
++ if (force_nonblock) {
++ /* If the file doesn't support async, just async punt */
++ if (unlikely(!io_file_supports_nowait(req)))
++ goto copy_iov;
++
++ /* the file path doesn't support NOWAIT for non-direct IO */
++ if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
++ (req->flags & REQ_F_ISREG))
++ goto copy_iov;
++
++ kiocb->ki_flags |= IOCB_NOWAIT;
++ } else {
++ /* Ensure we clear previously set non-block flag */
++ kiocb->ki_flags &= ~IOCB_NOWAIT;
++ }
++
++ ppos = io_kiocb_update_pos(req);
++
++ ret = rw_verify_area(WRITE, req->file, ppos, req->cqe.res);
++ if (unlikely(ret))
++ goto out_free;
++
++ /*
++ * Open-code file_start_write here to grab freeze protection,
++ * which will be released by another thread in
++ * io_complete_rw(). Fool lockdep by telling it the lock got
++ * released so that it doesn't complain about the held lock when
++ * we return to userspace.
++ */
++ if (req->flags & REQ_F_ISREG) {
++ sb_start_write(file_inode(req->file)->i_sb);
++ __sb_writers_release(file_inode(req->file)->i_sb,
++ SB_FREEZE_WRITE);
++ }
++ kiocb->ki_flags |= IOCB_WRITE;
++
++ if (likely(req->file->f_op->write_iter))
++ ret2 = call_write_iter(req->file, kiocb, &s->iter);
++ else if (req->file->f_op->write)
++ ret2 = loop_rw_iter(WRITE, req, &s->iter);
++ else
++ ret2 = -EINVAL;
++
++ if (req->flags & REQ_F_REISSUE) {
++ req->flags &= ~REQ_F_REISSUE;
++ ret2 = -EAGAIN;
++ }
++
++ /*
++ * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
++ * retry them without IOCB_NOWAIT.
++ */
++ if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
++ ret2 = -EAGAIN;
++ /* no retry on NONBLOCK or RWF_NOWAIT */
++ if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
++ goto done;
++ if (!force_nonblock || ret2 != -EAGAIN) {
++ /* IOPOLL retry should happen for io-wq threads */
++ if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
++ goto copy_iov;
++done:
++ kiocb_done(req, ret2, issue_flags);
++ } else {
++copy_iov:
++ iov_iter_restore(&s->iter, &s->iter_state);
++ ret = io_setup_async_rw(req, iovec, s, false);
++ return ret ?: -EAGAIN;
++ }
++out_free:
++ /* it's reportedly faster than delegating the null check to kfree() */
++ if (iovec)
++ kfree(iovec);
++ return ret;
++}
++
++static int io_renameat_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_rename *ren = &req->rename;
++ const char __user *oldf, *newf;
++
++ if (sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ ren->old_dfd = READ_ONCE(sqe->fd);
++ oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ ren->new_dfd = READ_ONCE(sqe->len);
++ ren->flags = READ_ONCE(sqe->rename_flags);
++
++ ren->oldpath = getname(oldf);
++ if (IS_ERR(ren->oldpath))
++ return PTR_ERR(ren->oldpath);
++
++ ren->newpath = getname(newf);
++ if (IS_ERR(ren->newpath)) {
++ putname(ren->oldpath);
++ return PTR_ERR(ren->newpath);
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_rename *ren = &req->rename;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
++ ren->newpath, ren->flags);
++
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static inline void __io_xattr_finish(struct io_kiocb *req)
++{
++ struct io_xattr *ix = &req->xattr;
++
++ if (ix->filename)
++ putname(ix->filename);
++
++ kfree(ix->ctx.kname);
++ kvfree(ix->ctx.kvalue);
++}
++
++static void io_xattr_finish(struct io_kiocb *req, int ret)
++{
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++
++ __io_xattr_finish(req);
++ io_req_complete(req, ret);
++}
++
++static int __io_getxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_xattr *ix = &req->xattr;
++ const char __user *name;
++ int ret;
++
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ ix->filename = NULL;
++ ix->ctx.kvalue = NULL;
++ name = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ ix->ctx.cvalue = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ ix->ctx.size = READ_ONCE(sqe->len);
++ ix->ctx.flags = READ_ONCE(sqe->xattr_flags);
++
++ if (ix->ctx.flags)
++ return -EINVAL;
++
++ ix->ctx.kname = kmalloc(sizeof(*ix->ctx.kname), GFP_KERNEL);
++ if (!ix->ctx.kname)
++ return -ENOMEM;
++
++ ret = strncpy_from_user(ix->ctx.kname->name, name,
++ sizeof(ix->ctx.kname->name));
++ if (!ret || ret == sizeof(ix->ctx.kname->name))
++ ret = -ERANGE;
++ if (ret < 0) {
++ kfree(ix->ctx.kname);
++ return ret;
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_fgetxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ return __io_getxattr_prep(req, sqe);
++}
++
++static int io_getxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_xattr *ix = &req->xattr;
++ const char __user *path;
++ int ret;
++
++ ret = __io_getxattr_prep(req, sqe);
++ if (ret)
++ return ret;
++
++ path = u64_to_user_ptr(READ_ONCE(sqe->addr3));
++
++ ix->filename = getname_flags(path, LOOKUP_FOLLOW, NULL);
++ if (IS_ERR(ix->filename)) {
++ ret = PTR_ERR(ix->filename);
++ ix->filename = NULL;
++ }
++
++ return ret;
++}
++
++static int io_fgetxattr(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_xattr *ix = &req->xattr;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_getxattr(mnt_user_ns(req->file->f_path.mnt),
++ req->file->f_path.dentry,
++ &ix->ctx);
++
++ io_xattr_finish(req, ret);
++ return 0;
++}
++
++static int io_getxattr(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_xattr *ix = &req->xattr;
++ unsigned int lookup_flags = LOOKUP_FOLLOW;
++ struct path path;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++retry:
++ ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
++ if (!ret) {
++ ret = do_getxattr(mnt_user_ns(path.mnt),
++ path.dentry,
++ &ix->ctx);
++
++ path_put(&path);
++ if (retry_estale(ret, lookup_flags)) {
++ lookup_flags |= LOOKUP_REVAL;
++ goto retry;
++ }
++ }
++
++ io_xattr_finish(req, ret);
++ return 0;
++}
++
++static int __io_setxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_xattr *ix = &req->xattr;
++ const char __user *name;
++ int ret;
++
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ ix->filename = NULL;
++ name = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ ix->ctx.cvalue = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ ix->ctx.kvalue = NULL;
++ ix->ctx.size = READ_ONCE(sqe->len);
++ ix->ctx.flags = READ_ONCE(sqe->xattr_flags);
++
++ ix->ctx.kname = kmalloc(sizeof(*ix->ctx.kname), GFP_KERNEL);
++ if (!ix->ctx.kname)
++ return -ENOMEM;
++
++ ret = setxattr_copy(name, &ix->ctx);
++ if (ret) {
++ kfree(ix->ctx.kname);
++ return ret;
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_setxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_xattr *ix = &req->xattr;
++ const char __user *path;
++ int ret;
++
++ ret = __io_setxattr_prep(req, sqe);
++ if (ret)
++ return ret;
++
++ path = u64_to_user_ptr(READ_ONCE(sqe->addr3));
++
++ ix->filename = getname_flags(path, LOOKUP_FOLLOW, NULL);
++ if (IS_ERR(ix->filename)) {
++ ret = PTR_ERR(ix->filename);
++ ix->filename = NULL;
++ }
++
++ return ret;
++}
++
++static int io_fsetxattr_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ return __io_setxattr_prep(req, sqe);
++}
++
++static int __io_setxattr(struct io_kiocb *req, unsigned int issue_flags,
++ struct path *path)
++{
++ struct io_xattr *ix = &req->xattr;
++ int ret;
++
++ ret = mnt_want_write(path->mnt);
++ if (!ret) {
++ ret = do_setxattr(mnt_user_ns(path->mnt), path->dentry, &ix->ctx);
++ mnt_drop_write(path->mnt);
++ }
++
++ return ret;
++}
++
++static int io_fsetxattr(struct io_kiocb *req, unsigned int issue_flags)
++{
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = __io_setxattr(req, issue_flags, &req->file->f_path);
++ io_xattr_finish(req, ret);
++
++ return 0;
++}
++
++static int io_setxattr(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_xattr *ix = &req->xattr;
++ unsigned int lookup_flags = LOOKUP_FOLLOW;
++ struct path path;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++retry:
++ ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
++ if (!ret) {
++ ret = __io_setxattr(req, issue_flags, &path);
++ path_put(&path);
++ if (retry_estale(ret, lookup_flags)) {
++ lookup_flags |= LOOKUP_REVAL;
++ goto retry;
++ }
++ }
++
++ io_xattr_finish(req, ret);
++ return 0;
++}
++
++static int io_unlinkat_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_unlink *un = &req->unlink;
++ const char __user *fname;
++
++ if (sqe->off || sqe->len || sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ un->dfd = READ_ONCE(sqe->fd);
++
++ un->flags = READ_ONCE(sqe->unlink_flags);
++ if (un->flags & ~AT_REMOVEDIR)
++ return -EINVAL;
++
++ fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ un->filename = getname(fname);
++ if (IS_ERR(un->filename))
++ return PTR_ERR(un->filename);
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_unlink *un = &req->unlink;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ if (un->flags & AT_REMOVEDIR)
++ ret = do_rmdir(un->dfd, un->filename);
++ else
++ ret = do_unlinkat(un->dfd, un->filename);
++
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int io_mkdirat_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_mkdir *mkd = &req->mkdir;
++ const char __user *fname;
++
++ if (sqe->off || sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ mkd->dfd = READ_ONCE(sqe->fd);
++ mkd->mode = READ_ONCE(sqe->len);
++
++ fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ mkd->filename = getname(fname);
++ if (IS_ERR(mkd->filename))
++ return PTR_ERR(mkd->filename);
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_mkdir *mkd = &req->mkdir;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_mkdirat(mkd->dfd, mkd->filename, mkd->mode);
++
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int io_symlinkat_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_symlink *sl = &req->symlink;
++ const char __user *oldpath, *newpath;
++
++ if (sqe->len || sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ sl->new_dfd = READ_ONCE(sqe->fd);
++ oldpath = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ newpath = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++
++ sl->oldpath = getname(oldpath);
++ if (IS_ERR(sl->oldpath))
++ return PTR_ERR(sl->oldpath);
++
++ sl->newpath = getname(newpath);
++ if (IS_ERR(sl->newpath)) {
++ putname(sl->oldpath);
++ return PTR_ERR(sl->newpath);
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_symlink *sl = &req->symlink;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_symlinkat(sl->oldpath, sl->new_dfd, sl->newpath);
++
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int io_linkat_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_hardlink *lnk = &req->hardlink;
++ const char __user *oldf, *newf;
++
++ if (sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ lnk->old_dfd = READ_ONCE(sqe->fd);
++ lnk->new_dfd = READ_ONCE(sqe->len);
++ oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ lnk->flags = READ_ONCE(sqe->hardlink_flags);
++
++ lnk->oldpath = getname(oldf);
++ if (IS_ERR(lnk->oldpath))
++ return PTR_ERR(lnk->oldpath);
++
++ lnk->newpath = getname(newf);
++ if (IS_ERR(lnk->newpath)) {
++ putname(lnk->oldpath);
++ return PTR_ERR(lnk->newpath);
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_hardlink *lnk = &req->hardlink;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_linkat(lnk->old_dfd, lnk->oldpath, lnk->new_dfd,
++ lnk->newpath, lnk->flags);
++
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static void io_uring_cmd_work(struct io_kiocb *req, bool *locked)
++{
++ req->uring_cmd.task_work_cb(&req->uring_cmd);
++}
++
++void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
++ void (*task_work_cb)(struct io_uring_cmd *))
++{
++ struct io_kiocb *req = container_of(ioucmd, struct io_kiocb, uring_cmd);
++
++ req->uring_cmd.task_work_cb = task_work_cb;
++ req->io_task_work.func = io_uring_cmd_work;
++ io_req_task_work_add(req);
++}
++EXPORT_SYMBOL_GPL(io_uring_cmd_complete_in_task);
++
++static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
++ u64 extra1, u64 extra2)
++{
++ req->extra1 = extra1;
++ req->extra2 = extra2;
++ req->flags |= REQ_F_CQE32_INIT;
++}
++
++/*
++ * Called by consumers of io_uring_cmd, if they originally returned
++ * -EIOCBQUEUED upon receiving the command.
++ */
++void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
++{
++ struct io_kiocb *req = container_of(ioucmd, struct io_kiocb, uring_cmd);
++
++ if (ret < 0)
++ req_set_fail(req);
++
++ if (req->ctx->flags & IORING_SETUP_CQE32)
++ io_req_set_cqe32_extra(req, res2, 0);
++ io_req_complete(req, ret);
++}
++EXPORT_SYMBOL_GPL(io_uring_cmd_done);
++
++static int io_uring_cmd_prep_async(struct io_kiocb *req)
++{
++ size_t cmd_size;
++
++ cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
++
++ memcpy(req->async_data, req->uring_cmd.cmd, cmd_size);
++ return 0;
++}
++
++static int io_uring_cmd_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_uring_cmd *ioucmd = &req->uring_cmd;
++
++ if (sqe->rw_flags || sqe->__pad1)
++ return -EINVAL;
++ ioucmd->cmd = sqe->cmd;
++ ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
++ return 0;
++}
++
++static int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_uring_cmd *ioucmd = &req->uring_cmd;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct file *file = req->file;
++ int ret;
++
++ if (!req->file->f_op->uring_cmd)
++ return -EOPNOTSUPP;
++
++ if (ctx->flags & IORING_SETUP_SQE128)
++ issue_flags |= IO_URING_F_SQE128;
++ if (ctx->flags & IORING_SETUP_CQE32)
++ issue_flags |= IO_URING_F_CQE32;
++ if (ctx->flags & IORING_SETUP_IOPOLL)
++ issue_flags |= IO_URING_F_IOPOLL;
++
++ if (req_has_async_data(req))
++ ioucmd->cmd = req->async_data;
++
++ ret = file->f_op->uring_cmd(ioucmd, issue_flags);
++ if (ret == -EAGAIN) {
++ if (!req_has_async_data(req)) {
++ if (io_alloc_async_data(req))
++ return -ENOMEM;
++ io_uring_cmd_prep_async(req);
++ }
++ return -EAGAIN;
++ }
++
++ if (ret != -EIOCBQUEUED)
++ io_uring_cmd_done(ioucmd, ret, 0);
++ return 0;
++}
++
++static int __io_splice_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_splice *sp = &req->splice;
++ unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
++
++ sp->len = READ_ONCE(sqe->len);
++ sp->flags = READ_ONCE(sqe->splice_flags);
++ if (unlikely(sp->flags & ~valid_flags))
++ return -EINVAL;
++ sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
++ return 0;
++}
++
++static int io_tee_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
++ return -EINVAL;
++ return __io_splice_prep(req, sqe);
++}
++
++static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_splice *sp = &req->splice;
++ struct file *out = sp->file_out;
++ unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++ struct file *in;
++ long ret = 0;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ if (sp->flags & SPLICE_F_FD_IN_FIXED)
++ in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
++ else
++ in = io_file_get_normal(req, sp->splice_fd_in);
++ if (!in) {
++ ret = -EBADF;
++ goto done;
++ }
++
++ if (sp->len)
++ ret = do_tee(in, out, sp->len, flags);
++
++ if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
++ io_put_file(in);
++done:
++ if (ret != sp->len)
++ req_set_fail(req);
++ __io_req_complete(req, 0, ret, 0);
++ return 0;
++}
++
++static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_splice *sp = &req->splice;
++
++ sp->off_in = READ_ONCE(sqe->splice_off_in);
++ sp->off_out = READ_ONCE(sqe->off);
++ return __io_splice_prep(req, sqe);
++}
++
++static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_splice *sp = &req->splice;
++ struct file *out = sp->file_out;
++ unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++ loff_t *poff_in, *poff_out;
++ struct file *in;
++ long ret = 0;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ if (sp->flags & SPLICE_F_FD_IN_FIXED)
++ in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
++ else
++ in = io_file_get_normal(req, sp->splice_fd_in);
++ if (!in) {
++ ret = -EBADF;
++ goto done;
++ }
++
++ poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
++ poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
++
++ if (sp->len)
++ ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
++
++ if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
++ io_put_file(in);
++done:
++ if (ret != sp->len)
++ req_set_fail(req);
++ __io_req_complete(req, 0, ret, 0);
++ return 0;
++}
++
++static int io_nop_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ return 0;
++}
++
++/*
++ * IORING_OP_NOP just posts a completion event, nothing else.
++ */
++static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
++{
++ __io_req_complete(req, issue_flags, 0, 0);
++ return 0;
++}
++
++static int io_msg_ring_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (unlikely(sqe->addr || sqe->rw_flags || sqe->splice_fd_in ||
++ sqe->buf_index || sqe->personality))
++ return -EINVAL;
++
++ req->msg.user_data = READ_ONCE(sqe->off);
++ req->msg.len = READ_ONCE(sqe->len);
++ return 0;
++}
++
++static int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_ring_ctx *target_ctx;
++ struct io_msg *msg = &req->msg;
++ bool filled;
++ int ret;
++
++ ret = -EBADFD;
++ if (req->file->f_op != &io_uring_fops)
++ goto done;
++
++ ret = -EOVERFLOW;
++ target_ctx = req->file->private_data;
++
++ spin_lock(&target_ctx->completion_lock);
++ filled = io_fill_cqe_aux(target_ctx, msg->user_data, msg->len, 0);
++ io_commit_cqring(target_ctx);
++ spin_unlock(&target_ctx->completion_lock);
++
++ if (filled) {
++ io_cqring_ev_posted(target_ctx);
++ ret = 0;
++ }
++
++done:
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ /* put file to avoid an attempt to IOPOLL the req */
++ io_put_file(req->file);
++ req->file = NULL;
++ return 0;
++}
++
++static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ if (unlikely(sqe->addr || sqe->buf_index || sqe->splice_fd_in))
++ return -EINVAL;
++
++ req->sync.flags = READ_ONCE(sqe->fsync_flags);
++ if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
++ return -EINVAL;
++
++ req->sync.off = READ_ONCE(sqe->off);
++ req->sync.len = READ_ONCE(sqe->len);
++ return 0;
++}
++
++static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
++{
++ loff_t end = req->sync.off + req->sync.len;
++ int ret;
++
++ /* fsync always requires a blocking context */
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = vfs_fsync_range(req->file, req->sync.off,
++ end > 0 ? end : LLONG_MAX,
++ req->sync.flags & IORING_FSYNC_DATASYNC);
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int io_fallocate_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (sqe->buf_index || sqe->rw_flags || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->sync.off = READ_ONCE(sqe->off);
++ req->sync.len = READ_ONCE(sqe->addr);
++ req->sync.mode = READ_ONCE(sqe->len);
++ return 0;
++}
++
++static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
++{
++ int ret;
++
++ /* fallocate always requires a blocking context */
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++ ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
++ req->sync.len);
++ if (ret >= 0)
++ fsnotify_modify(req->file);
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ const char __user *fname;
++ int ret;
++
++ if (unlikely(sqe->buf_index))
++ return -EINVAL;
++ if (unlikely(req->flags & REQ_F_FIXED_FILE))
++ return -EBADF;
++
++ /* open.how should already be initialised */
++ if (!(req->open.how.flags & O_PATH) && force_o_largefile())
++ req->open.how.flags |= O_LARGEFILE;
++
++ req->open.dfd = READ_ONCE(sqe->fd);
++ fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ req->open.filename = getname(fname);
++ if (IS_ERR(req->open.filename)) {
++ ret = PTR_ERR(req->open.filename);
++ req->open.filename = NULL;
++ return ret;
++ }
++
++ req->open.file_slot = READ_ONCE(sqe->file_index);
++ if (req->open.file_slot && (req->open.how.flags & O_CLOEXEC))
++ return -EINVAL;
++
++ req->open.nofile = rlimit(RLIMIT_NOFILE);
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ u64 mode = READ_ONCE(sqe->len);
++ u64 flags = READ_ONCE(sqe->open_flags);
++
++ req->open.how = build_open_how(flags, mode);
++ return __io_openat_prep(req, sqe);
++}
++
++static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct open_how __user *how;
++ size_t len;
++ int ret;
++
++ how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ len = READ_ONCE(sqe->len);
++ if (len < OPEN_HOW_SIZE_VER0)
++ return -EINVAL;
++
++ ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
++ len);
++ if (ret)
++ return ret;
++
++ return __io_openat_prep(req, sqe);
++}
++
++static int io_file_bitmap_get(struct io_ring_ctx *ctx)
++{
++ struct io_file_table *table = &ctx->file_table;
++ unsigned long nr = ctx->nr_user_files;
++ int ret;
++
++ do {
++ ret = find_next_zero_bit(table->bitmap, nr, table->alloc_hint);
++ if (ret != nr)
++ return ret;
++
++ if (!table->alloc_hint)
++ break;
++
++ nr = table->alloc_hint;
++ table->alloc_hint = 0;
++ } while (1);
++
++ return -ENFILE;
++}
++
++/*
++ * Note that when io_fixed_fd_install() returns an error value, it
++ * ensures fput() is called correspondingly.
++ */
++static int io_fixed_fd_install(struct io_kiocb *req, unsigned int issue_flags,
++ struct file *file, unsigned int file_slot)
++{
++ bool alloc_slot = file_slot == IORING_FILE_INDEX_ALLOC;
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret;
++
++ io_ring_submit_lock(ctx, issue_flags);
++
++ if (alloc_slot) {
++ ret = io_file_bitmap_get(ctx);
++ if (unlikely(ret < 0))
++ goto err;
++ file_slot = ret;
++ } else {
++ file_slot--;
++ }
++
++ ret = io_install_fixed_file(req, file, issue_flags, file_slot);
++ if (!ret && alloc_slot)
++ ret = file_slot;
++err:
++ io_ring_submit_unlock(ctx, issue_flags);
++ if (unlikely(ret < 0))
++ fput(file);
++ return ret;
++}
++
++static int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct open_flags op;
++ struct file *file;
++ bool resolve_nonblock, nonblock_set;
++ bool fixed = !!req->open.file_slot;
++ int ret;
++
++ ret = build_open_flags(&req->open.how, &op);
++ if (ret)
++ goto err;
++ nonblock_set = op.open_flag & O_NONBLOCK;
++ resolve_nonblock = req->open.how.resolve & RESOLVE_CACHED;
++ if (issue_flags & IO_URING_F_NONBLOCK) {
++ /*
++ * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open;
++ * it'll always return -EAGAIN
++ */
++ if (req->open.how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
++ return -EAGAIN;
++ op.lookup_flags |= LOOKUP_CACHED;
++ op.open_flag |= O_NONBLOCK;
++ }
++
++ if (!fixed) {
++ ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
++ if (ret < 0)
++ goto err;
++ }
++
++ file = do_filp_open(req->open.dfd, req->open.filename, &op);
++ if (IS_ERR(file)) {
++ /*
++ * We could hang on to this 'fd' on retrying, but it seems like a
++ * marginal gain for something that is now known to be a slower
++ * path. So just put it, and we'll get a new one when we retry.
++ */
++ if (!fixed)
++ put_unused_fd(ret);
++
++ ret = PTR_ERR(file);
++ /* only retry if RESOLVE_CACHED wasn't already set by application */
++ if (ret == -EAGAIN &&
++ (!resolve_nonblock && (issue_flags & IO_URING_F_NONBLOCK)))
++ return -EAGAIN;
++ goto err;
++ }
++
++ if ((issue_flags & IO_URING_F_NONBLOCK) && !nonblock_set)
++ file->f_flags &= ~O_NONBLOCK;
++ fsnotify_open(file);
++
++ if (!fixed)
++ fd_install(ret, file);
++ else
++ ret = io_fixed_fd_install(req, issue_flags, file,
++ req->open.file_slot);
++err:
++ putname(req->open.filename);
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_openat(struct io_kiocb *req, unsigned int issue_flags)
++{
++ return io_openat2(req, issue_flags);
++}
++
++static int io_remove_buffers_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_provide_buf *p = &req->pbuf;
++ u64 tmp;
++
++ if (sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
++ sqe->splice_fd_in)
++ return -EINVAL;
++
++ tmp = READ_ONCE(sqe->fd);
++ if (!tmp || tmp > USHRT_MAX)
++ return -EINVAL;
++
++ memset(p, 0, sizeof(*p));
++ p->nbufs = tmp;
++ p->bgid = READ_ONCE(sqe->buf_group);
++ return 0;
++}
++
++static int __io_remove_buffers(struct io_ring_ctx *ctx,
++ struct io_buffer_list *bl, unsigned nbufs)
++{
++ unsigned i = 0;
++
++ /* shouldn't happen */
++ if (!nbufs)
++ return 0;
++
++ if (bl->buf_nr_pages) {
++ int j;
++
++ i = bl->buf_ring->tail - bl->head;
++ for (j = 0; j < bl->buf_nr_pages; j++)
++ unpin_user_page(bl->buf_pages[j]);
++ kvfree(bl->buf_pages);
++ bl->buf_pages = NULL;
++ bl->buf_nr_pages = 0;
++ /* make sure it's seen as empty */
++ INIT_LIST_HEAD(&bl->buf_list);
++ return i;
++ }
++
++ /* the head kbuf is the list itself */
++ while (!list_empty(&bl->buf_list)) {
++ struct io_buffer *nxt;
++
++ nxt = list_first_entry(&bl->buf_list, struct io_buffer, list);
++ list_del(&nxt->list);
++ if (++i == nbufs)
++ return i;
++ cond_resched();
++ }
++ i++;
++
++ return i;
++}
++
++static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_provide_buf *p = &req->pbuf;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_buffer_list *bl;
++ int ret = 0;
++
++ io_ring_submit_lock(ctx, issue_flags);
++
++ ret = -ENOENT;
++ bl = io_buffer_get_list(ctx, p->bgid);
++ if (bl) {
++ ret = -EINVAL;
++ /* can't use provide/remove buffers command on mapped buffers */
++ if (!bl->buf_nr_pages)
++ ret = __io_remove_buffers(ctx, bl, p->nbufs);
++ }
++ if (ret < 0)
++ req_set_fail(req);
++
++ /* complete before unlock, IOPOLL may need the lock */
++ __io_req_complete(req, issue_flags, ret, 0);
++ io_ring_submit_unlock(ctx, issue_flags);
++ return 0;
++}
++
++static int io_provide_buffers_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ unsigned long size, tmp_check;
++ struct io_provide_buf *p = &req->pbuf;
++ u64 tmp;
++
++ if (sqe->rw_flags || sqe->splice_fd_in)
++ return -EINVAL;
++
++ tmp = READ_ONCE(sqe->fd);
++ if (!tmp || tmp > USHRT_MAX)
++ return -E2BIG;
++ p->nbufs = tmp;
++ p->addr = READ_ONCE(sqe->addr);
++ p->len = READ_ONCE(sqe->len);
++
++ if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
++ &size))
++ return -EOVERFLOW;
++ if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
++ return -EOVERFLOW;
++
++ size = (unsigned long)p->len * p->nbufs;
++ if (!access_ok(u64_to_user_ptr(p->addr), size))
++ return -EFAULT;
++
++ p->bgid = READ_ONCE(sqe->buf_group);
++ tmp = READ_ONCE(sqe->off);
++ if (tmp > USHRT_MAX)
++ return -E2BIG;
++ p->bid = tmp;
++ return 0;
++}
++
++static int io_refill_buffer_cache(struct io_ring_ctx *ctx)
++{
++ struct io_buffer *buf;
++ struct page *page;
++ int bufs_in_page;
++
++ /*
++ * Completions that don't happen inline (eg not under uring_lock) will
++ * add to ->io_buffers_comp. If we don't have any free buffers, check
++ * the completion list and splice those entries first.
++ */
++ if (!list_empty_careful(&ctx->io_buffers_comp)) {
++ spin_lock(&ctx->completion_lock);
++ if (!list_empty(&ctx->io_buffers_comp)) {
++ list_splice_init(&ctx->io_buffers_comp,
++ &ctx->io_buffers_cache);
++ spin_unlock(&ctx->completion_lock);
++ return 0;
++ }
++ spin_unlock(&ctx->completion_lock);
++ }
++
++ /*
++ * No free buffers and no completion entries either. Allocate a new
++ * page worth of buffer entries and add those to our freelist.
++ */
++ page = alloc_page(GFP_KERNEL_ACCOUNT);
++ if (!page)
++ return -ENOMEM;
++
++ list_add(&page->lru, &ctx->io_buffers_pages);
++
++ buf = page_address(page);
++ bufs_in_page = PAGE_SIZE / sizeof(*buf);
++ while (bufs_in_page) {
++ list_add_tail(&buf->list, &ctx->io_buffers_cache);
++ buf++;
++ bufs_in_page--;
++ }
++
++ return 0;
++}
++
++static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
++ struct io_buffer_list *bl)
++{
++ struct io_buffer *buf;
++ u64 addr = pbuf->addr;
++ int i, bid = pbuf->bid;
++
++ for (i = 0; i < pbuf->nbufs; i++) {
++ if (list_empty(&ctx->io_buffers_cache) &&
++ io_refill_buffer_cache(ctx))
++ break;
++ buf = list_first_entry(&ctx->io_buffers_cache, struct io_buffer,
++ list);
++ list_move_tail(&buf->list, &bl->buf_list);
++ buf->addr = addr;
++ buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
++ buf->bid = bid;
++ buf->bgid = pbuf->bgid;
++ addr += pbuf->len;
++ bid++;
++ cond_resched();
++ }
++
++ return i ? 0 : -ENOMEM;
++}
++
++static __cold int io_init_bl_list(struct io_ring_ctx *ctx)
++{
++ int i;
++
++ ctx->io_bl = kcalloc(BGID_ARRAY, sizeof(struct io_buffer_list),
++ GFP_KERNEL);
++ if (!ctx->io_bl)
++ return -ENOMEM;
++
++ for (i = 0; i < BGID_ARRAY; i++) {
++ INIT_LIST_HEAD(&ctx->io_bl[i].buf_list);
++ ctx->io_bl[i].bgid = i;
++ }
++
++ return 0;
++}
++
++static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_provide_buf *p = &req->pbuf;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_buffer_list *bl;
++ int ret = 0;
++
++ io_ring_submit_lock(ctx, issue_flags);
++
++ if (unlikely(p->bgid < BGID_ARRAY && !ctx->io_bl)) {
++ ret = io_init_bl_list(ctx);
++ if (ret)
++ goto err;
++ }
++
++ bl = io_buffer_get_list(ctx, p->bgid);
++ if (unlikely(!bl)) {
++ bl = kzalloc(sizeof(*bl), GFP_KERNEL_ACCOUNT);
++ if (!bl) {
++ ret = -ENOMEM;
++ goto err;
++ }
++ INIT_LIST_HEAD(&bl->buf_list);
++ ret = io_buffer_add_list(ctx, bl, p->bgid);
++ if (ret) {
++ kfree(bl);
++ goto err;
++ }
++ }
++ /* can't add buffers via this command for a mapped buffer ring */
++ if (bl->buf_nr_pages) {
++ ret = -EINVAL;
++ goto err;
++ }
++
++ ret = io_add_buffers(ctx, p, bl);
++err:
++ if (ret < 0)
++ req_set_fail(req);
++ /* complete before unlock, IOPOLL may need the lock */
++ __io_req_complete(req, issue_flags, ret, 0);
++ io_ring_submit_unlock(ctx, issue_flags);
++ return 0;
++}
++
++static int io_epoll_ctl_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++#if defined(CONFIG_EPOLL)
++ if (sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->epoll.epfd = READ_ONCE(sqe->fd);
++ req->epoll.op = READ_ONCE(sqe->len);
++ req->epoll.fd = READ_ONCE(sqe->off);
++
++ if (ep_op_has_event(req->epoll.op)) {
++ struct epoll_event __user *ev;
++
++ ev = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ if (copy_from_user(&req->epoll.event, ev, sizeof(*ev)))
++ return -EFAULT;
++ }
++
++ return 0;
++#else
++ return -EOPNOTSUPP;
++#endif
++}
++
++static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
++{
++#if defined(CONFIG_EPOLL)
++ struct io_epoll *ie = &req->epoll;
++ int ret;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++ ret = do_epoll_ctl(ie->epfd, ie->op, ie->fd, &ie->event, force_nonblock);
++ if (force_nonblock && ret == -EAGAIN)
++ return -EAGAIN;
++
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++#else
++ return -EOPNOTSUPP;
++#endif
++}
++
++static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
++ if (sqe->buf_index || sqe->off || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->madvise.addr = READ_ONCE(sqe->addr);
++ req->madvise.len = READ_ONCE(sqe->len);
++ req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
++ return 0;
++#else
++ return -EOPNOTSUPP;
++#endif
++}
++
++static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
++{
++#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
++ struct io_madvise *ma = &req->madvise;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
++ io_req_complete(req, ret);
++ return 0;
++#else
++ return -EOPNOTSUPP;
++#endif
++}
++
++static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ if (sqe->buf_index || sqe->addr || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->fadvise.offset = READ_ONCE(sqe->off);
++ req->fadvise.len = READ_ONCE(sqe->len);
++ req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
++ return 0;
++}
++
++static int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_fadvise *fa = &req->fadvise;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK) {
++ switch (fa->advice) {
++ case POSIX_FADV_NORMAL:
++ case POSIX_FADV_RANDOM:
++ case POSIX_FADV_SEQUENTIAL:
++ break;
++ default:
++ return -EAGAIN;
++ }
++ }
++
++ ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ const char __user *path;
++
++ if (sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ if (req->flags & REQ_F_FIXED_FILE)
++ return -EBADF;
++
++ req->statx.dfd = READ_ONCE(sqe->fd);
++ req->statx.mask = READ_ONCE(sqe->len);
++ path = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ req->statx.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ req->statx.flags = READ_ONCE(sqe->statx_flags);
++
++ req->statx.filename = getname_flags(path,
++ getname_statx_lookup_flags(req->statx.flags),
++ NULL);
++
++ if (IS_ERR(req->statx.filename)) {
++ int ret = PTR_ERR(req->statx.filename);
++
++ req->statx.filename = NULL;
++ return ret;
++ }
++
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return 0;
++}
++
++static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_statx *ctx = &req->statx;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
++ ctx->buffer);
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ if (sqe->off || sqe->addr || sqe->len || sqe->rw_flags || sqe->buf_index)
++ return -EINVAL;
++ if (req->flags & REQ_F_FIXED_FILE)
++ return -EBADF;
++
++ req->close.fd = READ_ONCE(sqe->fd);
++ req->close.file_slot = READ_ONCE(sqe->file_index);
++ if (req->close.file_slot && req->close.fd)
++ return -EINVAL;
++
++ return 0;
++}
++
++static int io_close(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct files_struct *files = current->files;
++ struct io_close *close = &req->close;
++ struct fdtable *fdt;
++ struct file *file;
++ int ret = -EBADF;
++
++ if (req->close.file_slot) {
++ ret = io_close_fixed(req, issue_flags);
++ goto err;
++ }
++
++ spin_lock(&files->file_lock);
++ fdt = files_fdtable(files);
++ if (close->fd >= fdt->max_fds) {
++ spin_unlock(&files->file_lock);
++ goto err;
++ }
++ file = rcu_dereference_protected(fdt->fd[close->fd],
++ lockdep_is_held(&files->file_lock));
++ if (!file || file->f_op == &io_uring_fops) {
++ spin_unlock(&files->file_lock);
++ goto err;
++ }
++
++ /* if the file has a flush method, be safe and punt to async */
++ if (file->f_op->flush && (issue_flags & IO_URING_F_NONBLOCK)) {
++ spin_unlock(&files->file_lock);
++ return -EAGAIN;
++ }
++
++ file = __close_fd_get_file(close->fd);
++ spin_unlock(&files->file_lock);
++ if (!file)
++ goto err;
++
++ /* No ->flush() or already async, safely close from here */
++ ret = filp_close(file, current->files);
++err:
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ if (unlikely(sqe->addr || sqe->buf_index || sqe->splice_fd_in))
++ return -EINVAL;
++
++ req->sync.off = READ_ONCE(sqe->off);
++ req->sync.len = READ_ONCE(sqe->len);
++ req->sync.flags = READ_ONCE(sqe->sync_range_flags);
++ return 0;
++}
++
++static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
++{
++ int ret;
++
++ /* sync_file_range always requires a blocking context */
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ ret = sync_file_range(req->file, req->sync.off, req->sync.len,
++ req->sync.flags);
++ io_req_complete(req, ret);
++ return 0;
++}
++
++#if defined(CONFIG_NET)
++static int io_shutdown_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (unlikely(sqe->off || sqe->addr || sqe->rw_flags ||
++ sqe->buf_index || sqe->splice_fd_in))
++ return -EINVAL;
++
++ req->shutdown.how = READ_ONCE(sqe->len);
++ return 0;
++}
++
++static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct socket *sock;
++ int ret;
++
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ return -EAGAIN;
++
++ sock = sock_from_file(req->file);
++ if (unlikely(!sock))
++ return -ENOTSOCK;
++
++ ret = __sys_shutdown_sock(sock, req->shutdown.how);
++ io_req_complete(req, ret);
++ return 0;
++}
++
++static bool io_net_retry(struct socket *sock, int flags)
++{
++ if (!(flags & MSG_WAITALL))
++ return false;
++ return sock->type == SOCK_STREAM || sock->type == SOCK_SEQPACKET;
++}
++
++static int io_setup_async_msg(struct io_kiocb *req,
++ struct io_async_msghdr *kmsg)
++{
++ struct io_async_msghdr *async_msg = req->async_data;
++
++ if (async_msg)
++ return -EAGAIN;
++ if (io_alloc_async_data(req)) {
++ kfree(kmsg->free_iov);
++ return -ENOMEM;
++ }
++ async_msg = req->async_data;
++ req->flags |= REQ_F_NEED_CLEANUP;
++ memcpy(async_msg, kmsg, sizeof(*kmsg));
++ async_msg->msg.msg_name = &async_msg->addr;
++ /* if we were using fast_iov, set it to the new one */
++ if (!async_msg->free_iov)
++ async_msg->msg.msg_iter.iov = async_msg->fast_iov;
++
++ return -EAGAIN;
++}
++
++static int io_sendmsg_copy_hdr(struct io_kiocb *req,
++ struct io_async_msghdr *iomsg)
++{
++ iomsg->msg.msg_name = &iomsg->addr;
++ iomsg->free_iov = iomsg->fast_iov;
++ return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
++ req->sr_msg.msg_flags, &iomsg->free_iov);
++}
++
++static int io_sendmsg_prep_async(struct io_kiocb *req)
++{
++ int ret;
++
++ ret = io_sendmsg_copy_hdr(req, req->async_data);
++ if (!ret)
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return ret;
++}
++
++static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++
++ if (unlikely(sqe->file_index || sqe->addr2))
++ return -EINVAL;
++
++ sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ sr->len = READ_ONCE(sqe->len);
++ sr->flags = READ_ONCE(sqe->ioprio);
++ if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
++ return -EINVAL;
++ sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++ if (sr->msg_flags & MSG_DONTWAIT)
++ req->flags |= REQ_F_NOWAIT;
++
++#ifdef CONFIG_COMPAT
++ if (req->ctx->compat)
++ sr->msg_flags |= MSG_CMSG_COMPAT;
++#endif
++ sr->done_io = 0;
++ return 0;
++}
++
++static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_async_msghdr iomsg, *kmsg;
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct socket *sock;
++ unsigned flags;
++ int min_ret = 0;
++ int ret;
++
++ sock = sock_from_file(req->file);
++ if (unlikely(!sock))
++ return -ENOTSOCK;
++
++ if (req_has_async_data(req)) {
++ kmsg = req->async_data;
++ } else {
++ ret = io_sendmsg_copy_hdr(req, &iomsg);
++ if (ret)
++ return ret;
++ kmsg = &iomsg;
++ }
++
++ if (!(req->flags & REQ_F_POLLED) &&
++ (sr->flags & IORING_RECVSEND_POLL_FIRST))
++ return io_setup_async_msg(req, kmsg);
++
++ flags = sr->msg_flags;
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ flags |= MSG_DONTWAIT;
++ if (flags & MSG_WAITALL)
++ min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
++ ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
++
++ if (ret < min_ret) {
++ if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
++ return io_setup_async_msg(req, kmsg);
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ if (ret > 0 && io_net_retry(sock, flags)) {
++ sr->done_io += ret;
++ req->flags |= REQ_F_PARTIAL_IO;
++ return io_setup_async_msg(req, kmsg);
++ }
++ req_set_fail(req);
++ }
++ /* fast path, check for non-NULL to avoid function call */
++ if (kmsg->free_iov)
++ kfree(kmsg->free_iov);
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ if (ret >= 0)
++ ret += sr->done_io;
++ else if (sr->done_io)
++ ret = sr->done_io;
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_send(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct msghdr msg;
++ struct iovec iov;
++ struct socket *sock;
++ unsigned flags;
++ int min_ret = 0;
++ int ret;
++
++ if (!(req->flags & REQ_F_POLLED) &&
++ (sr->flags & IORING_RECVSEND_POLL_FIRST))
++ return -EAGAIN;
++
++ sock = sock_from_file(req->file);
++ if (unlikely(!sock))
++ return -ENOTSOCK;
++
++ ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
++ if (unlikely(ret))
++ return ret;
++
++ msg.msg_name = NULL;
++ msg.msg_control = NULL;
++ msg.msg_controllen = 0;
++ msg.msg_namelen = 0;
++
++ flags = sr->msg_flags;
++ if (issue_flags & IO_URING_F_NONBLOCK)
++ flags |= MSG_DONTWAIT;
++ if (flags & MSG_WAITALL)
++ min_ret = iov_iter_count(&msg.msg_iter);
++
++ msg.msg_flags = flags;
++ ret = sock_sendmsg(sock, &msg);
++ if (ret < min_ret) {
++ if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
++ return -EAGAIN;
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ if (ret > 0 && io_net_retry(sock, flags)) {
++ sr->len -= ret;
++ sr->buf += ret;
++ sr->done_io += ret;
++ req->flags |= REQ_F_PARTIAL_IO;
++ return -EAGAIN;
++ }
++ req_set_fail(req);
++ }
++ if (ret >= 0)
++ ret += sr->done_io;
++ else if (sr->done_io)
++ ret = sr->done_io;
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
++ struct io_async_msghdr *iomsg)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct iovec __user *uiov;
++ size_t iov_len;
++ int ret;
++
++ ret = __copy_msghdr_from_user(&iomsg->msg, sr->umsg,
++ &iomsg->uaddr, &uiov, &iov_len);
++ if (ret)
++ return ret;
++
++ if (req->flags & REQ_F_BUFFER_SELECT) {
++ if (iov_len > 1)
++ return -EINVAL;
++ if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
++ return -EFAULT;
++ sr->len = iomsg->fast_iov[0].iov_len;
++ iomsg->free_iov = NULL;
++ } else {
++ iomsg->free_iov = iomsg->fast_iov;
++ ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
++ &iomsg->free_iov, &iomsg->msg.msg_iter,
++ false);
++ if (ret > 0)
++ ret = 0;
++ }
++
++ return ret;
++}
++
++#ifdef CONFIG_COMPAT
++static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
++ struct io_async_msghdr *iomsg)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct compat_iovec __user *uiov;
++ compat_uptr_t ptr;
++ compat_size_t len;
++ int ret;
++
++ ret = __get_compat_msghdr(&iomsg->msg, sr->umsg_compat, &iomsg->uaddr,
++ &ptr, &len);
++ if (ret)
++ return ret;
++
++ uiov = compat_ptr(ptr);
++ if (req->flags & REQ_F_BUFFER_SELECT) {
++ compat_ssize_t clen;
++
++ if (len > 1)
++ return -EINVAL;
++ if (!access_ok(uiov, sizeof(*uiov)))
++ return -EFAULT;
++ if (__get_user(clen, &uiov->iov_len))
++ return -EFAULT;
++ if (clen < 0)
++ return -EINVAL;
++ sr->len = clen;
++ iomsg->free_iov = NULL;
++ } else {
++ iomsg->free_iov = iomsg->fast_iov;
++ ret = __import_iovec(READ, (struct iovec __user *)uiov, len,
++ UIO_FASTIOV, &iomsg->free_iov,
++ &iomsg->msg.msg_iter, true);
++ if (ret < 0)
++ return ret;
++ }
++
++ return 0;
++}
++#endif
++
++static int io_recvmsg_copy_hdr(struct io_kiocb *req,
++ struct io_async_msghdr *iomsg)
++{
++ iomsg->msg.msg_name = &iomsg->addr;
++
++#ifdef CONFIG_COMPAT
++ if (req->ctx->compat)
++ return __io_compat_recvmsg_copy_hdr(req, iomsg);
++#endif
++
++ return __io_recvmsg_copy_hdr(req, iomsg);
++}
++
++static int io_recvmsg_prep_async(struct io_kiocb *req)
++{
++ int ret;
++
++ ret = io_recvmsg_copy_hdr(req, req->async_data);
++ if (!ret)
++ req->flags |= REQ_F_NEED_CLEANUP;
++ return ret;
++}
++
++static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++
++ if (unlikely(sqe->file_index || sqe->addr2))
++ return -EINVAL;
++
++ sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ sr->len = READ_ONCE(sqe->len);
++ sr->flags = READ_ONCE(sqe->ioprio);
++ if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
++ return -EINVAL;
++ sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++ if (sr->msg_flags & MSG_DONTWAIT)
++ req->flags |= REQ_F_NOWAIT;
++
++#ifdef CONFIG_COMPAT
++ if (req->ctx->compat)
++ sr->msg_flags |= MSG_CMSG_COMPAT;
++#endif
++ sr->done_io = 0;
++ return 0;
++}
++
++static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_async_msghdr iomsg, *kmsg;
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct socket *sock;
++ unsigned int cflags;
++ unsigned flags;
++ int ret, min_ret = 0;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++ sock = sock_from_file(req->file);
++ if (unlikely(!sock))
++ return -ENOTSOCK;
++
++ if (req_has_async_data(req)) {
++ kmsg = req->async_data;
++ } else {
++ ret = io_recvmsg_copy_hdr(req, &iomsg);
++ if (ret)
++ return ret;
++ kmsg = &iomsg;
++ }
++
++ if (!(req->flags & REQ_F_POLLED) &&
++ (sr->flags & IORING_RECVSEND_POLL_FIRST))
++ return io_setup_async_msg(req, kmsg);
++
++ if (io_do_buffer_select(req)) {
++ void __user *buf;
++
++ buf = io_buffer_select(req, &sr->len, issue_flags);
++ if (!buf)
++ return -ENOBUFS;
++ kmsg->fast_iov[0].iov_base = buf;
++ kmsg->fast_iov[0].iov_len = sr->len;
++ iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov, 1,
++ sr->len);
++ }
++
++ flags = sr->msg_flags;
++ if (force_nonblock)
++ flags |= MSG_DONTWAIT;
++ if (flags & MSG_WAITALL)
++ min_ret = iov_iter_count(&kmsg->msg.msg_iter);
++
++ kmsg->msg.msg_get_inq = 1;
++ ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg, kmsg->uaddr, flags);
++ if (ret < min_ret) {
++ if (ret == -EAGAIN && force_nonblock)
++ return io_setup_async_msg(req, kmsg);
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ if (ret > 0 && io_net_retry(sock, flags)) {
++ sr->done_io += ret;
++ req->flags |= REQ_F_PARTIAL_IO;
++ return io_setup_async_msg(req, kmsg);
++ }
++ req_set_fail(req);
++ } else if ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
++ req_set_fail(req);
++ }
++
++ /* fast path, check for non-NULL to avoid function call */
++ if (kmsg->free_iov)
++ kfree(kmsg->free_iov);
++ req->flags &= ~REQ_F_NEED_CLEANUP;
++ if (ret >= 0)
++ ret += sr->done_io;
++ else if (sr->done_io)
++ ret = sr->done_io;
++ cflags = io_put_kbuf(req, issue_flags);
++ if (kmsg->msg.msg_inq)
++ cflags |= IORING_CQE_F_SOCK_NONEMPTY;
++ __io_req_complete(req, issue_flags, ret, cflags);
++ return 0;
++}
++
++static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_sr_msg *sr = &req->sr_msg;
++ struct msghdr msg;
++ struct socket *sock;
++ struct iovec iov;
++ unsigned int cflags;
++ unsigned flags;
++ int ret, min_ret = 0;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++ if (!(req->flags & REQ_F_POLLED) &&
++ (sr->flags & IORING_RECVSEND_POLL_FIRST))
++ return -EAGAIN;
++
++ sock = sock_from_file(req->file);
++ if (unlikely(!sock))
++ return -ENOTSOCK;
++
++ if (io_do_buffer_select(req)) {
++ void __user *buf;
++
++ buf = io_buffer_select(req, &sr->len, issue_flags);
++ if (!buf)
++ return -ENOBUFS;
++ sr->buf = buf;
++ }
++
++ ret = import_single_range(READ, sr->buf, sr->len, &iov, &msg.msg_iter);
++ if (unlikely(ret))
++ goto out_free;
++
++ msg.msg_name = NULL;
++ msg.msg_namelen = 0;
++ msg.msg_control = NULL;
++ msg.msg_get_inq = 1;
++ msg.msg_flags = 0;
++ msg.msg_controllen = 0;
++ msg.msg_iocb = NULL;
++
++ flags = sr->msg_flags;
++ if (force_nonblock)
++ flags |= MSG_DONTWAIT;
++ if (flags & MSG_WAITALL)
++ min_ret = iov_iter_count(&msg.msg_iter);
++
++ ret = sock_recvmsg(sock, &msg, flags);
++ if (ret < min_ret) {
++ if (ret == -EAGAIN && force_nonblock)
++ return -EAGAIN;
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ if (ret > 0 && io_net_retry(sock, flags)) {
++ sr->len -= ret;
++ sr->buf += ret;
++ sr->done_io += ret;
++ req->flags |= REQ_F_PARTIAL_IO;
++ return -EAGAIN;
++ }
++ req_set_fail(req);
++ } else if ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
++out_free:
++ req_set_fail(req);
++ }
++
++ if (ret >= 0)
++ ret += sr->done_io;
++ else if (sr->done_io)
++ ret = sr->done_io;
++ cflags = io_put_kbuf(req, issue_flags);
++ if (msg.msg_inq)
++ cflags |= IORING_CQE_F_SOCK_NONEMPTY;
++ __io_req_complete(req, issue_flags, ret, cflags);
++ return 0;
++}
++
++static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_accept *accept = &req->accept;
++ unsigned flags;
++
++ if (sqe->len || sqe->buf_index)
++ return -EINVAL;
++
++ accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
++ accept->flags = READ_ONCE(sqe->accept_flags);
++ accept->nofile = rlimit(RLIMIT_NOFILE);
++ flags = READ_ONCE(sqe->ioprio);
++ if (flags & ~IORING_ACCEPT_MULTISHOT)
++ return -EINVAL;
++
++ accept->file_slot = READ_ONCE(sqe->file_index);
++ if (accept->file_slot) {
++ if (accept->flags & SOCK_CLOEXEC)
++ return -EINVAL;
++ if (flags & IORING_ACCEPT_MULTISHOT &&
++ accept->file_slot != IORING_FILE_INDEX_ALLOC)
++ return -EINVAL;
++ }
++ if (accept->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
++ return -EINVAL;
++ if (SOCK_NONBLOCK != O_NONBLOCK && (accept->flags & SOCK_NONBLOCK))
++ accept->flags = (accept->flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
++ if (flags & IORING_ACCEPT_MULTISHOT)
++ req->flags |= REQ_F_APOLL_MULTISHOT;
++ return 0;
++}
++
++static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_accept *accept = &req->accept;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++ unsigned int file_flags = force_nonblock ? O_NONBLOCK : 0;
++ bool fixed = !!accept->file_slot;
++ struct file *file;
++ int ret, fd;
++
++retry:
++ if (!fixed) {
++ fd = __get_unused_fd_flags(accept->flags, accept->nofile);
++ if (unlikely(fd < 0))
++ return fd;
++ }
++ file = do_accept(req->file, file_flags, accept->addr, accept->addr_len,
++ accept->flags);
++ if (IS_ERR(file)) {
++ if (!fixed)
++ put_unused_fd(fd);
++ ret = PTR_ERR(file);
++ if (ret == -EAGAIN && force_nonblock) {
++ /*
++ * if it's multishot and polled, we don't need to
++ * return EAGAIN to arm the poll infra since it
++ * has already been done
++ */
++ if ((req->flags & IO_APOLL_MULTI_POLLED) ==
++ IO_APOLL_MULTI_POLLED)
++ ret = 0;
++ return ret;
++ }
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ req_set_fail(req);
++ } else if (!fixed) {
++ fd_install(fd, file);
++ ret = fd;
++ } else {
++ ret = io_fixed_fd_install(req, issue_flags, file,
++ accept->file_slot);
++ }
++
++ if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++ }
++ if (ret >= 0) {
++ bool filled;
++
++ spin_lock(&ctx->completion_lock);
++ filled = io_fill_cqe_aux(ctx, req->cqe.user_data, ret,
++ IORING_CQE_F_MORE);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ if (filled) {
++ io_cqring_ev_posted(ctx);
++ goto retry;
++ }
++ ret = -ECANCELED;
++ }
++
++ return ret;
++}
++
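++/*
++ * Prepare an IORING_OP_SOCKET request. The socket(2) arguments are packed
++ * into otherwise unused SQE fields: domain in ->fd, type in ->off and
++ * protocol in ->len.
++ */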
++static int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_socket *sock = &req->sock;
++
++ if (sqe->addr || sqe->rw_flags || sqe->buf_index)
++ return -EINVAL;
++
++ sock->domain = READ_ONCE(sqe->fd);
++ sock->type = READ_ONCE(sqe->off);
++ sock->protocol = READ_ONCE(sqe->len);
++ sock->file_slot = READ_ONCE(sqe->file_index);
++ sock->nofile = rlimit(RLIMIT_NOFILE);
++
++ sock->flags = sock->type & ~SOCK_TYPE_MASK;
++ if (sock->file_slot && (sock->flags & SOCK_CLOEXEC))
++ return -EINVAL;
++ if (sock->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
++ return -EINVAL;
++ return 0;
++}
++
++static int io_socket(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_socket *sock = &req->sock;
++ bool fixed = !!sock->file_slot;
++ struct file *file;
++ int ret, fd;
++
++ if (!fixed) {
++ fd = __get_unused_fd_flags(sock->flags, sock->nofile);
++ if (unlikely(fd < 0))
++ return fd;
++ }
++ file = __sys_socket_file(sock->domain, sock->type, sock->protocol);
++ if (IS_ERR(file)) {
++ if (!fixed)
++ put_unused_fd(fd);
++ ret = PTR_ERR(file);
++ if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
++ return -EAGAIN;
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++ req_set_fail(req);
++ } else if (!fixed) {
++ fd_install(fd, file);
++ ret = fd;
++ } else {
++ ret = io_fixed_fd_install(req, issue_flags, file,
++ sock->file_slot);
++ }
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_connect_prep_async(struct io_kiocb *req)
++{
++ struct io_async_connect *io = req->async_data;
++ struct io_connect *conn = &req->connect;
++
++ return move_addr_to_kernel(conn->addr, conn->addr_len, &io->address);
++}
++
++static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_connect *conn = &req->connect;
++
++ if (sqe->len || sqe->buf_index || sqe->rw_flags || sqe->splice_fd_in)
++ return -EINVAL;
++
++ conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
++ conn->addr_len = READ_ONCE(sqe->addr2);
++ return 0;
++}
++
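++/*
++ * Issue the connect. For a nonblocking attempt that returns -EAGAIN or
++ * -EINPROGRESS, the kernel copy of the address is stashed in async data so
++ * the retry doesn't have to touch user memory again.
++ */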
++static int io_connect(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_async_connect __io, *io;
++ unsigned file_flags;
++ int ret;
++ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
++
++ if (req_has_async_data(req)) {
++ io = req->async_data;
++ } else {
++ ret = move_addr_to_kernel(req->connect.addr,
++ req->connect.addr_len,
++ &__io.address);
++ if (ret)
++ goto out;
++ io = &__io;
++ }
++
++ file_flags = force_nonblock ? O_NONBLOCK : 0;
++
++ ret = __sys_connect_file(req->file, &io->address,
++ req->connect.addr_len, file_flags);
++ if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
++ if (req_has_async_data(req))
++ return -EAGAIN;
++ if (io_alloc_async_data(req)) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ memcpy(req->async_data, &__io, sizeof(__io));
++ return -EAGAIN;
++ }
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
++out:
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++#else /* !CONFIG_NET */
++#define IO_NETOP_FN(op) \
++static int io_##op(struct io_kiocb *req, unsigned int issue_flags) \
++{ \
++ return -EOPNOTSUPP; \
++}
++
++#define IO_NETOP_PREP(op) \
++IO_NETOP_FN(op) \
++static int io_##op##_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) \
++{ \
++ return -EOPNOTSUPP; \
++} \
++
++#define IO_NETOP_PREP_ASYNC(op) \
++IO_NETOP_PREP(op) \
++static int io_##op##_prep_async(struct io_kiocb *req) \
++{ \
++ return -EOPNOTSUPP; \
++}
++
++IO_NETOP_PREP_ASYNC(sendmsg);
++IO_NETOP_PREP_ASYNC(recvmsg);
++IO_NETOP_PREP_ASYNC(connect);
++IO_NETOP_PREP(accept);
++IO_NETOP_PREP(socket);
++IO_NETOP_PREP(shutdown);
++IO_NETOP_FN(send);
++IO_NETOP_FN(recv);
++#endif /* CONFIG_NET */
++
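++/*
++ * Bookkeeping passed to vfs_poll() while arming: tracks the owning request,
++ * how many waitqueue entries were added and any error from the queue proc.
++ */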
++struct io_poll_table {
++ struct poll_table_struct pt;
++ struct io_kiocb *req;
++ int nr_entries;
++ int error;
++};
++
++#define IO_POLL_CANCEL_FLAG BIT(31)
++#define IO_POLL_REF_MASK GENMASK(30, 0)
++
++/*
++ * If the refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, the request
++ * is free and we can bump the count to acquire ownership. Modifying a
++ * request while not owning it is disallowed; that prevents races when
++ * enqueueing task_work and between poll arming and wakeups.
++ */
++static inline bool io_poll_get_ownership(struct io_kiocb *req)
++{
++ return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
++}
++
++static void io_poll_mark_cancelled(struct io_kiocb *req)
++{
++ atomic_or(IO_POLL_CANCEL_FLAG, &req->poll_refs);
++}
++
++static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
++{
++ /* pure poll stashes this in ->async_data, poll driven retry elsewhere */
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return req->async_data;
++ return req->apoll->double_poll;
++}
++
++static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
++{
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return &req->poll;
++ return &req->apoll->poll;
++}
++
++static void io_poll_req_insert(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct hlist_head *list;
++
++ list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
++ hlist_add_head(&req->hash_node, list);
++}
++
++static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
++ wait_queue_func_t wake_func)
++{
++ poll->head = NULL;
++#define IO_POLL_UNMASK (EPOLLERR|EPOLLHUP|EPOLLNVAL|EPOLLRDHUP)
++ /* mask in events that we always want/need */
++ poll->events = events | IO_POLL_UNMASK;
++ INIT_LIST_HEAD(&poll->wait.entry);
++ init_waitqueue_func_entry(&poll->wait, wake_func);
++}
++
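++/*
++ * The acquire load of poll->head pairs with the smp_store_release() in
++ * io_poll_wake()'s POLLFREE handling: observing a non-NULL head here means
++ * the entry is still queued and the head can be locked (callers also hold
++ * rcu_read_lock(), see io_poll_remove_entries() below).
++ */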
++static inline void io_poll_remove_entry(struct io_poll_iocb *poll)
++{
++ struct wait_queue_head *head = smp_load_acquire(&poll->head);
++
++ if (head) {
++ spin_lock_irq(&head->lock);
++ list_del_init(&poll->wait.entry);
++ poll->head = NULL;
++ spin_unlock_irq(&head->lock);
++ }
++}
++
++static void io_poll_remove_entries(struct io_kiocb *req)
++{
++ /*
++ * Nothing to do if neither of those flags are set. Avoid dipping
++ * into the poll/apoll/double cachelines if we can.
++ */
++ if (!(req->flags & (REQ_F_SINGLE_POLL | REQ_F_DOUBLE_POLL)))
++ return;
++
++ /*
++ * While we hold the waitqueue lock and the waitqueue is nonempty,
++ * wake_up_pollfree() will wait for us. However, taking the waitqueue
++ * lock in the first place can race with the waitqueue being freed.
++ *
++ * We solve this as eventpoll does: by taking advantage of the fact that
++ * all users of wake_up_pollfree() will RCU-delay the actual free. If
++ * we enter rcu_read_lock() and see that the pointer to the queue is
++ * non-NULL, we can then lock it without the memory being freed out from
++ * under us.
++ *
++ * Keep holding rcu_read_lock() as long as we hold the queue lock, in
++ * case the caller deletes the entry from the queue, leaving it empty.
++ * In that case, only RCU prevents the queue memory from being freed.
++ */
++ rcu_read_lock();
++ if (req->flags & REQ_F_SINGLE_POLL)
++ io_poll_remove_entry(io_poll_get_single(req));
++ if (req->flags & REQ_F_DOUBLE_POLL)
++ io_poll_remove_entry(io_poll_get_double(req));
++ rcu_read_unlock();
++}
++
++static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags);
++/*
++ * All poll tw should go through this. Checks for poll events, manages
++ * references, does rewait, etc.
++ *
++ * Returns a negative error on failure. Returns >0 when no action is
++ * required, meaning either a spurious wakeup or a served multishot CQE.
++ * Returns 0 when it's done with the request, in which case the mask is
++ * stored in req->cqe.res.
++ */
++static int io_poll_check_events(struct io_kiocb *req, bool *locked)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ int v, ret;
++
++ /* req->task == current here, checking PF_EXITING is safe */
++ if (unlikely(req->task->flags & PF_EXITING))
++ return -ECANCELED;
++
++ do {
++ v = atomic_read(&req->poll_refs);
++
++ /* tw handler should be the owner, and so have some references */
++ if (WARN_ON_ONCE(!(v & IO_POLL_REF_MASK)))
++ return 0;
++ if (v & IO_POLL_CANCEL_FLAG)
++ return -ECANCELED;
++
++ if (!req->cqe.res) {
++ struct poll_table_struct pt = { ._key = req->apoll_events };
++ req->cqe.res = vfs_poll(req->file, &pt) & req->apoll_events;
++ }
++
++ if (unlikely(!req->cqe.res))
++ continue;
++ if (req->apoll_events & EPOLLONESHOT)
++ return 0;
++
++ /* multishot, just fill a CQE and proceed */
++ if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
++ __poll_t mask = mangle_poll(req->cqe.res &
++ req->apoll_events);
++ bool filled;
++
++ spin_lock(&ctx->completion_lock);
++ filled = io_fill_cqe_aux(ctx, req->cqe.user_data,
++ mask, IORING_CQE_F_MORE);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ if (filled) {
++ io_cqring_ev_posted(ctx);
++ continue;
++ }
++ return -ECANCELED;
++ }
++
++ io_tw_lock(req->ctx, locked);
++ if (unlikely(req->task->flags & PF_EXITING))
++ return -EFAULT;
++ ret = io_issue_sqe(req,
++ IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
++ if (ret)
++ return ret;
++
++ /*
++ * Release all references, retry if someone tried to restart
++ * task_work while we were executing it.
++ */
++ } while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs));
++
++ return 1;
++}
++
++static void io_poll_task_func(struct io_kiocb *req, bool *locked)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret;
++
++ ret = io_poll_check_events(req, locked);
++ if (ret > 0)
++ return;
++
++ if (!ret) {
++ req->cqe.res = mangle_poll(req->cqe.res & req->poll.events);
++ } else {
++ req->cqe.res = ret;
++ req_set_fail(req);
++ }
++
++ io_poll_remove_entries(req);
++ spin_lock(&ctx->completion_lock);
++ hash_del(&req->hash_node);
++ __io_req_complete_post(req, req->cqe.res, 0);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ io_cqring_ev_posted(ctx);
++}
++
++static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret;
++
++ ret = io_poll_check_events(req, locked);
++ if (ret > 0)
++ return;
++
++ io_poll_remove_entries(req);
++ spin_lock(&ctx->completion_lock);
++ hash_del(&req->hash_node);
++ spin_unlock(&ctx->completion_lock);
++
++ if (!ret)
++ io_req_task_submit(req, locked);
++ else
++ io_req_complete_failed(req, ret);
++}
++
++static void __io_poll_execute(struct io_kiocb *req, int mask,
++ __poll_t __maybe_unused events)
++{
++ req->cqe.res = mask;
++ /*
++ * This is useful for a poll that is armed on behalf of another
++ * request, and where the wakeup path could be on a different
++ * CPU. We want to avoid pulling in req->apoll->events for that
++ * case.
++ */
++ if (req->opcode == IORING_OP_POLL_ADD)
++ req->io_task_work.func = io_poll_task_func;
++ else
++ req->io_task_work.func = io_apoll_task_func;
++
++ trace_io_uring_task_add(req->ctx, req, req->cqe.user_data, req->opcode, mask);
++ io_req_task_work_add(req);
++}
++
++static inline void io_poll_execute(struct io_kiocb *req, int res,
++ __poll_t events)
++{
++ if (io_poll_get_ownership(req))
++ __io_poll_execute(req, res, events);
++}
++
++static void io_poll_cancel_req(struct io_kiocb *req)
++{
++ io_poll_mark_cancelled(req);
++ /* kick tw, which should complete the request */
++ io_poll_execute(req, 0, 0);
++}
++
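++/*
++ * The owning request is stashed in wait->private, with the low pointer bit
++ * borrowed to mark entries that belong to the double poll waitqueue; hence
++ * the masking below.
++ */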
++#define wqe_to_req(wait) ((void *)((unsigned long) (wait)->private & ~1))
++#define wqe_is_double(wait) ((unsigned long) (wait)->private & 1)
++#define IO_ASYNC_POLL_COMMON (EPOLLONESHOT | EPOLLPRI)
++
++static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
++ void *key)
++{
++ struct io_kiocb *req = wqe_to_req(wait);
++ struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
++ wait);
++ __poll_t mask = key_to_poll(key);
++
++ if (unlikely(mask & POLLFREE)) {
++ io_poll_mark_cancelled(req);
++ /* we have to kick tw in case it's not already */
++ io_poll_execute(req, 0, poll->events);
++
++ /*
++ * If the waitqueue is being freed early but someone already
++ * holds ownership over it, we have to tear down the request as
++ * best we can. That means immediately removing the request from
++ * its waitqueue and preventing all further accesses to the
++ * waitqueue via the request.
++ */
++ list_del_init(&poll->wait.entry);
++
++ /*
++ * Careful: this *must* be the last step, since as soon
++ * as poll->head is NULL'ed out, the request can be
++ * completed and freed, since the poll completion task_work
++ * will no longer need to take the waitqueue lock.
++ */
++ smp_store_release(&poll->head, NULL);
++ return 1;
++ }
++
++ /* for instances that support it check for an event match first */
++ if (mask && !(mask & (poll->events & ~IO_ASYNC_POLL_COMMON)))
++ return 0;
++
++ if (io_poll_get_ownership(req)) {
++ /* optional, saves extra locking for removal in tw handler */
++ if (mask && poll->events & EPOLLONESHOT) {
++ list_del_init(&poll->wait.entry);
++ poll->head = NULL;
++ if (wqe_is_double(wait))
++ req->flags &= ~REQ_F_DOUBLE_POLL;
++ else
++ req->flags &= ~REQ_F_SINGLE_POLL;
++ }
++ __io_poll_execute(req, mask, poll->events);
++ }
++ return 1;
++}
++
++static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
++ struct wait_queue_head *head,
++ struct io_poll_iocb **poll_ptr)
++{
++ struct io_kiocb *req = pt->req;
++ unsigned long wqe_private = (unsigned long) req;
++
++ /*
++ * The file being polled uses multiple waitqueues for poll handling
++ * (e.g. one for read, one for write). Set up a separate io_poll_iocb
++ * if this happens.
++ */
++ if (unlikely(pt->nr_entries)) {
++ struct io_poll_iocb *first = poll;
++
++ /* double add on the same waitqueue head, ignore */
++ if (first->head == head)
++ return;
++ /* already have a 2nd entry, fail a third attempt */
++ if (*poll_ptr) {
++ if ((*poll_ptr)->head == head)
++ return;
++ pt->error = -EINVAL;
++ return;
++ }
++
++ poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
++ if (!poll) {
++ pt->error = -ENOMEM;
++ return;
++ }
++ /* mark as double wq entry */
++ wqe_private |= 1;
++ req->flags |= REQ_F_DOUBLE_POLL;
++ io_init_poll_iocb(poll, first->events, first->wait.func);
++ *poll_ptr = poll;
++ if (req->opcode == IORING_OP_POLL_ADD)
++ req->flags |= REQ_F_ASYNC_DATA;
++ }
++
++ req->flags |= REQ_F_SINGLE_POLL;
++ pt->nr_entries++;
++ poll->head = head;
++ poll->wait.private = (void *) wqe_private;
++
++ if (poll->events & EPOLLEXCLUSIVE)
++ add_wait_queue_exclusive(head, &poll->wait);
++ else
++ add_wait_queue(head, &poll->wait);
++}
++
++static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
++ struct poll_table_struct *p)
++{
++ struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++
++ __io_queue_proc(&pt->req->poll, pt, head,
++ (struct io_poll_iocb **) &pt->req->async_data);
++}
++
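++/*
++ * Common poll arming: take initial ownership of ->poll_refs so no wakeup
++ * can run task_work while arming is in progress, register on the file's
++ * waitqueue(s) via vfs_poll(), hash the request for cancelation, then drop
++ * ownership and kick task_work if a wakeup raced with us.
++ */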
++static int __io_arm_poll_handler(struct io_kiocb *req,
++ struct io_poll_iocb *poll,
++ struct io_poll_table *ipt, __poll_t mask)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ int v;
++
++ INIT_HLIST_NODE(&req->hash_node);
++ req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
++ io_init_poll_iocb(poll, mask, io_poll_wake);
++ poll->file = req->file;
++
++ req->apoll_events = poll->events;
++
++ ipt->pt._key = mask;
++ ipt->req = req;
++ ipt->error = 0;
++ ipt->nr_entries = 0;
++
++ /*
++ * Take ownership to delay any tw execution up until we're done
++ * with poll arming; see io_poll_get_ownership().
++ */
++ atomic_set(&req->poll_refs, 1);
++ mask = vfs_poll(req->file, &ipt->pt) & poll->events;
++
++ if (mask && (poll->events & EPOLLONESHOT)) {
++ io_poll_remove_entries(req);
++ /* no one else has access to the req, forget about the ref */
++ return mask;
++ }
++ if (!mask && unlikely(ipt->error || !ipt->nr_entries)) {
++ io_poll_remove_entries(req);
++ if (!ipt->error)
++ ipt->error = -EINVAL;
++ return 0;
++ }
++
++ spin_lock(&ctx->completion_lock);
++ io_poll_req_insert(req);
++ spin_unlock(&ctx->completion_lock);
++
++ if (mask) {
++ /* can't multishot if failed, just queue the event we've got */
++ if (unlikely(ipt->error || !ipt->nr_entries)) {
++ poll->events |= EPOLLONESHOT;
++ req->apoll_events |= EPOLLONESHOT;
++ ipt->error = 0;
++ }
++ __io_poll_execute(req, mask, poll->events);
++ return 0;
++ }
++
++ /*
++ * Release ownership. If someone tried to queue a tw while it was
++ * locked, kick it off for them.
++ */
++ v = atomic_dec_return(&req->poll_refs);
++ if (unlikely(v & IO_POLL_REF_MASK))
++ __io_poll_execute(req, 0, poll->events);
++ return 0;
++}
++
++static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
++ struct poll_table_struct *p)
++{
++ struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++ struct async_poll *apoll = pt->req->apoll;
++
++ __io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
++}
++
++enum {
++ IO_APOLL_OK,
++ IO_APOLL_ABORTED,
++ IO_APOLL_READY
++};
++
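++/*
++ * Arm poll-driven retry for a request that can't complete nonblocking.
++ * Returns IO_APOLL_OK if poll was armed, IO_APOLL_READY if the file is
++ * already ready and the request should be retried at once, or
++ * IO_APOLL_ABORTED if poll can't be used and io-wq must handle it.
++ */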
++static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
++{
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++ struct io_ring_ctx *ctx = req->ctx;
++ struct async_poll *apoll;
++ struct io_poll_table ipt;
++ __poll_t mask = POLLPRI | POLLERR;
++ int ret;
++
++ if (!def->pollin && !def->pollout)
++ return IO_APOLL_ABORTED;
++ if (!file_can_poll(req->file))
++ return IO_APOLL_ABORTED;
++ if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
++ return IO_APOLL_ABORTED;
++ if (!(req->flags & REQ_F_APOLL_MULTISHOT))
++ mask |= EPOLLONESHOT;
++
++ if (def->pollin) {
++ mask |= EPOLLIN | EPOLLRDNORM;
++
++ /* If reading from MSG_ERRQUEUE using recvmsg, ignore POLLIN */
++ if ((req->opcode == IORING_OP_RECVMSG) &&
++ (req->sr_msg.msg_flags & MSG_ERRQUEUE))
++ mask &= ~EPOLLIN;
++ } else {
++ mask |= EPOLLOUT | EPOLLWRNORM;
++ }
++ if (def->poll_exclusive)
++ mask |= EPOLLEXCLUSIVE;
++ if (req->flags & REQ_F_POLLED) {
++ apoll = req->apoll;
++ kfree(apoll->double_poll);
++ } else if (!(issue_flags & IO_URING_F_UNLOCKED) &&
++ !list_empty(&ctx->apoll_cache)) {
++ apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
++ poll.wait.entry);
++ list_del_init(&apoll->poll.wait.entry);
++ } else {
++ apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
++ if (unlikely(!apoll))
++ return IO_APOLL_ABORTED;
++ }
++ apoll->double_poll = NULL;
++ req->apoll = apoll;
++ req->flags |= REQ_F_POLLED;
++ ipt.pt._qproc = io_async_queue_proc;
++
++ io_kbuf_recycle(req, issue_flags);
++
++ ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask);
++ if (ret || ipt.error)
++ return ret ? IO_APOLL_READY : IO_APOLL_ABORTED;
++
++ trace_io_uring_poll_arm(ctx, req, req->cqe.user_data, req->opcode,
++ mask, apoll->poll.events);
++ return IO_APOLL_OK;
++}
++
++/*
++ * Returns true if we found and killed one or more poll requests
++ */
++static __cold bool io_poll_remove_all(struct io_ring_ctx *ctx,
++ struct task_struct *tsk, bool cancel_all)
++{
++ struct hlist_node *tmp;
++ struct io_kiocb *req;
++ bool found = false;
++ int i;
++
++ spin_lock(&ctx->completion_lock);
++ for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++ struct hlist_head *list;
++
++ list = &ctx->cancel_hash[i];
++ hlist_for_each_entry_safe(req, tmp, list, hash_node) {
++ if (io_match_task_safe(req, tsk, cancel_all)) {
++ hlist_del_init(&req->hash_node);
++ io_poll_cancel_req(req);
++ found = true;
++ }
++ }
++ }
++ spin_unlock(&ctx->completion_lock);
++ return found;
++}
++
++static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, bool poll_only,
++ struct io_cancel_data *cd)
++ __must_hold(&ctx->completion_lock)
++{
++ struct hlist_head *list;
++ struct io_kiocb *req;
++
++ list = &ctx->cancel_hash[hash_long(cd->data, ctx->cancel_hash_bits)];
++ hlist_for_each_entry(req, list, hash_node) {
++ if (cd->data != req->cqe.user_data)
++ continue;
++ if (poll_only && req->opcode != IORING_OP_POLL_ADD)
++ continue;
++ if (cd->flags & IORING_ASYNC_CANCEL_ALL) {
++ if (cd->seq == req->work.cancel_seq)
++ continue;
++ req->work.cancel_seq = cd->seq;
++ }
++ return req;
++ }
++ return NULL;
++}
++
++static struct io_kiocb *io_poll_file_find(struct io_ring_ctx *ctx,
++ struct io_cancel_data *cd)
++ __must_hold(&ctx->completion_lock)
++{
++ struct io_kiocb *req;
++ int i;
++
++ for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++ struct hlist_head *list;
++
++ list = &ctx->cancel_hash[i];
++ hlist_for_each_entry(req, list, hash_node) {
++ if (!(cd->flags & IORING_ASYNC_CANCEL_ANY) &&
++ req->file != cd->file)
++ continue;
++ if (cd->seq == req->work.cancel_seq)
++ continue;
++ req->work.cancel_seq = cd->seq;
++ return req;
++ }
++ }
++ return NULL;
++}
++
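++/*
++ * Take ownership of a found poll request and tear down its waitqueue
++ * entries and hash linkage; actual completion is left to the caller.
++ */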
++static bool io_poll_disarm(struct io_kiocb *req)
++ __must_hold(&ctx->completion_lock)
++{
++ if (!io_poll_get_ownership(req))
++ return false;
++ io_poll_remove_entries(req);
++ hash_del(&req->hash_node);
++ return true;
++}
++
++static int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
++ __must_hold(&ctx->completion_lock)
++{
++ struct io_kiocb *req;
++
++ if (cd->flags & (IORING_ASYNC_CANCEL_FD|IORING_ASYNC_CANCEL_ANY))
++ req = io_poll_file_find(ctx, cd);
++ else
++ req = io_poll_find(ctx, false, cd);
++ if (!req)
++ return -ENOENT;
++ io_poll_cancel_req(req);
++ return 0;
++}
++
++static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
++ unsigned int flags)
++{
++ u32 events;
++
++ events = READ_ONCE(sqe->poll32_events);
++#ifdef __BIG_ENDIAN
++ events = swahw32(events);
++#endif
++ if (!(flags & IORING_POLL_ADD_MULTI))
++ events |= EPOLLONESHOT;
++ return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
++}
++
++static int io_poll_remove_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_poll_update *upd = &req->poll_update;
++ u32 flags;
++
++ if (sqe->buf_index || sqe->splice_fd_in)
++ return -EINVAL;
++ flags = READ_ONCE(sqe->len);
++ if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
++ IORING_POLL_ADD_MULTI))
++ return -EINVAL;
++ /* meaningless without update */
++ if (flags == IORING_POLL_ADD_MULTI)
++ return -EINVAL;
++
++ upd->old_user_data = READ_ONCE(sqe->addr);
++ upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
++ upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
++
++ upd->new_user_data = READ_ONCE(sqe->off);
++ if (!upd->update_user_data && upd->new_user_data)
++ return -EINVAL;
++ if (upd->update_events)
++ upd->events = io_poll_parse_events(sqe, flags);
++ else if (sqe->poll32_events)
++ return -EINVAL;
++
++ return 0;
++}
++
++static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++{
++ struct io_poll_iocb *poll = &req->poll;
++ u32 flags;
++
++ if (sqe->buf_index || sqe->off || sqe->addr)
++ return -EINVAL;
++ flags = READ_ONCE(sqe->len);
++ if (flags & ~IORING_POLL_ADD_MULTI)
++ return -EINVAL;
++ if ((flags & IORING_POLL_ADD_MULTI) && (req->flags & REQ_F_CQE_SKIP))
++ return -EINVAL;
++
++ io_req_set_refcount(req);
++ poll->events = io_poll_parse_events(sqe, flags);
++ return 0;
++}
++
++static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_poll_iocb *poll = &req->poll;
++ struct io_poll_table ipt;
++ int ret;
++
++ ipt.pt._qproc = io_poll_queue_proc;
++
++ ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events);
++ if (!ret && ipt.error)
++ req_set_fail(req);
++ ret = ret ?: ipt.error;
++ if (ret)
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_cancel_data cd = { .data = req->poll_update.old_user_data, };
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_kiocb *preq;
++ int ret2, ret = 0;
++ bool locked;
++
++ spin_lock(&ctx->completion_lock);
++ preq = io_poll_find(ctx, true, &cd);
++ if (!preq || !io_poll_disarm(preq)) {
++ spin_unlock(&ctx->completion_lock);
++ ret = preq ? -EALREADY : -ENOENT;
++ goto out;
++ }
++ spin_unlock(&ctx->completion_lock);
++
++ if (req->poll_update.update_events || req->poll_update.update_user_data) {
++ /* only update the low event flags, keep the behavior flags */
++ if (req->poll_update.update_events) {
++ preq->poll.events &= ~0xffff;
++ preq->poll.events |= req->poll_update.events & 0xffff;
++ preq->poll.events |= IO_POLL_UNMASK;
++ }
++ if (req->poll_update.update_user_data)
++ preq->cqe.user_data = req->poll_update.new_user_data;
++
++ ret2 = io_poll_add(preq, issue_flags);
++ /* successfully updated, don't complete poll request */
++ if (!ret2)
++ goto out;
++ }
++
++ req_set_fail(preq);
++ preq->cqe.res = -ECANCELED;
++ locked = !(issue_flags & IO_URING_F_UNLOCKED);
++ io_req_task_complete(preq, &locked);
++out:
++ if (ret < 0)
++ req_set_fail(req);
++ /* complete update request, we're done with it */
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
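++/*
++ * hrtimer callback for a regular timeout: runs in hardirq context, unlinks
++ * the request under ->timeout_lock, bumps the CQ timeout count and punts
++ * the -ETIME completion to task_work.
++ */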
++static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
++{
++ struct io_timeout_data *data = container_of(timer,
++ struct io_timeout_data, timer);
++ struct io_kiocb *req = data->req;
++ struct io_ring_ctx *ctx = req->ctx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&ctx->timeout_lock, flags);
++ list_del_init(&req->timeout.list);
++ atomic_set(&req->ctx->cq_timeouts,
++ atomic_read(&req->ctx->cq_timeouts) + 1);
++ spin_unlock_irqrestore(&ctx->timeout_lock, flags);
++
++ if (!(data->flags & IORING_TIMEOUT_ETIME_SUCCESS))
++ req_set_fail(req);
++
++ req->cqe.res = -ETIME;
++ req->io_task_work.func = io_req_task_complete;
++ io_req_task_work_add(req);
++ return HRTIMER_NORESTART;
++}
++
++static struct io_kiocb *io_timeout_extract(struct io_ring_ctx *ctx,
++ struct io_cancel_data *cd)
++ __must_hold(&ctx->timeout_lock)
++{
++ struct io_timeout_data *io;
++ struct io_kiocb *req;
++ bool found = false;
++
++ list_for_each_entry(req, &ctx->timeout_list, timeout.list) {
++ if (!(cd->flags & IORING_ASYNC_CANCEL_ANY) &&
++ cd->data != req->cqe.user_data)
++ continue;
++ if (cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY)) {
++ if (cd->seq == req->work.cancel_seq)
++ continue;
++ req->work.cancel_seq = cd->seq;
++ }
++ found = true;
++ break;
++ }
++ if (!found)
++ return ERR_PTR(-ENOENT);
++
++ io = req->async_data;
++ if (hrtimer_try_to_cancel(&io->timer) == -1)
++ return ERR_PTR(-EALREADY);
++ list_del_init(&req->timeout.list);
++ return req;
++}
++
++static int io_timeout_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
++ __must_hold(&ctx->completion_lock)
++{
++ struct io_kiocb *req;
++
++ spin_lock_irq(&ctx->timeout_lock);
++ req = io_timeout_extract(ctx, cd);
++ spin_unlock_irq(&ctx->timeout_lock);
++
++ if (IS_ERR(req))
++ return PTR_ERR(req);
++ io_req_task_queue_fail(req, -ECANCELED);
++ return 0;
++}
++
++static clockid_t io_timeout_get_clock(struct io_timeout_data *data)
++{
++ switch (data->flags & IORING_TIMEOUT_CLOCK_MASK) {
++ case IORING_TIMEOUT_BOOTTIME:
++ return CLOCK_BOOTTIME;
++ case IORING_TIMEOUT_REALTIME:
++ return CLOCK_REALTIME;
++ default:
++ /* can't happen, vetted at prep time */
++ WARN_ON_ONCE(1);
++ fallthrough;
++ case 0:
++ return CLOCK_MONOTONIC;
++ }
++}
++
++static int io_linked_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
++ struct timespec64 *ts, enum hrtimer_mode mode)
++ __must_hold(&ctx->timeout_lock)
++{
++ struct io_timeout_data *io;
++ struct io_kiocb *req;
++ bool found = false;
++
++ list_for_each_entry(req, &ctx->ltimeout_list, timeout.list) {
++ found = user_data == req->cqe.user_data;
++ if (found)
++ break;
++ }
++ if (!found)
++ return -ENOENT;
++
++ io = req->async_data;
++ if (hrtimer_try_to_cancel(&io->timer) == -1)
++ return -EALREADY;
++ hrtimer_init(&io->timer, io_timeout_get_clock(io), mode);
++ io->timer.function = io_link_timeout_fn;
++ hrtimer_start(&io->timer, timespec64_to_ktime(*ts), mode);
++ return 0;
++}
++
++static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
++ struct timespec64 *ts, enum hrtimer_mode mode)
++ __must_hold(&ctx->timeout_lock)
++{
++ struct io_cancel_data cd = { .data = user_data, };
++ struct io_kiocb *req = io_timeout_extract(ctx, &cd);
++ struct io_timeout_data *data;
++
++ if (IS_ERR(req))
++ return PTR_ERR(req);
++
++ req->timeout.off = 0; /* noseq */
++ data = req->async_data;
++ list_add_tail(&req->timeout.list, &ctx->timeout_list);
++ hrtimer_init(&data->timer, io_timeout_get_clock(data), mode);
++ data->timer.function = io_timeout_fn;
++ hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
++ return 0;
++}
++
++static int io_timeout_remove_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ struct io_timeout_rem *tr = &req->timeout_rem;
++
++ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++ return -EINVAL;
++ if (sqe->buf_index || sqe->len || sqe->splice_fd_in)
++ return -EINVAL;
++
++ tr->ltimeout = false;
++ tr->addr = READ_ONCE(sqe->addr);
++ tr->flags = READ_ONCE(sqe->timeout_flags);
++ if (tr->flags & IORING_TIMEOUT_UPDATE_MASK) {
++ if (hweight32(tr->flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
++ return -EINVAL;
++ if (tr->flags & IORING_LINK_TIMEOUT_UPDATE)
++ tr->ltimeout = true;
++ if (tr->flags & ~(IORING_TIMEOUT_UPDATE_MASK|IORING_TIMEOUT_ABS))
++ return -EINVAL;
++ if (get_timespec64(&tr->ts, u64_to_user_ptr(sqe->addr2)))
++ return -EFAULT;
++ if (tr->ts.tv_sec < 0 || tr->ts.tv_nsec < 0)
++ return -EINVAL;
++ } else if (tr->flags) {
++ /* timeout removal doesn't support flags */
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static inline enum hrtimer_mode io_translate_timeout_mode(unsigned int flags)
++{
++ return (flags & IORING_TIMEOUT_ABS) ? HRTIMER_MODE_ABS
++ : HRTIMER_MODE_REL;
++}
++
++/*
++ * Remove or update an existing timeout command
++ */
++static int io_timeout_remove(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_timeout_rem *tr = &req->timeout_rem;
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret;
++
++ if (!(req->timeout_rem.flags & IORING_TIMEOUT_UPDATE)) {
++ struct io_cancel_data cd = { .data = tr->addr, };
++
++ spin_lock(&ctx->completion_lock);
++ ret = io_timeout_cancel(ctx, &cd);
++ spin_unlock(&ctx->completion_lock);
++ } else {
++ enum hrtimer_mode mode = io_translate_timeout_mode(tr->flags);
++
++ spin_lock_irq(&ctx->timeout_lock);
++ if (tr->ltimeout)
++ ret = io_linked_timeout_update(ctx, tr->addr, &tr->ts, mode);
++ else
++ ret = io_timeout_update(ctx, tr->addr, &tr->ts, mode);
++ spin_unlock_irq(&ctx->timeout_lock);
++ }
++
++ if (ret < 0)
++ req_set_fail(req);
++ io_req_complete_post(req, ret, 0);
++ return 0;
++}
++
++static int __io_timeout_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe,
++ bool is_timeout_link)
++{
++ struct io_timeout_data *data;
++ unsigned flags;
++ u32 off = READ_ONCE(sqe->off);
++
++ if (sqe->buf_index || sqe->len != 1 || sqe->splice_fd_in)
++ return -EINVAL;
++ if (off && is_timeout_link)
++ return -EINVAL;
++ flags = READ_ONCE(sqe->timeout_flags);
++ if (flags & ~(IORING_TIMEOUT_ABS | IORING_TIMEOUT_CLOCK_MASK |
++ IORING_TIMEOUT_ETIME_SUCCESS))
++ return -EINVAL;
++ /* more than one clock specified is invalid, obviously */
++ if (hweight32(flags & IORING_TIMEOUT_CLOCK_MASK) > 1)
++ return -EINVAL;
++
++ INIT_LIST_HEAD(&req->timeout.list);
++ req->timeout.off = off;
++ if (unlikely(off && !req->ctx->off_timeout_used))
++ req->ctx->off_timeout_used = true;
++
++ if (WARN_ON_ONCE(req_has_async_data(req)))
++ return -EFAULT;
++ if (io_alloc_async_data(req))
++ return -ENOMEM;
++
++ data = req->async_data;
++ data->req = req;
++ data->flags = flags;
++
++ if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
++ return -EFAULT;
++
++ if (data->ts.tv_sec < 0 || data->ts.tv_nsec < 0)
++ return -EINVAL;
++
++ INIT_LIST_HEAD(&req->timeout.list);
++ data->mode = io_translate_timeout_mode(flags);
++ hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
++
++ if (is_timeout_link) {
++ struct io_submit_link *link = &req->ctx->submit_state.link;
++
++ if (!link->head)
++ return -EINVAL;
++ if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
++ return -EINVAL;
++ req->timeout.head = link->last;
++ link->last->flags |= REQ_F_ARM_LTIMEOUT;
++ }
++ return 0;
++}
++
++static int io_timeout_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ return __io_timeout_prep(req, sqe, false);
++}
++
++static int io_link_timeout_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ return __io_timeout_prep(req, sqe, true);
++}
++
++static int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_timeout_data *data = req->async_data;
++ struct list_head *entry;
++ u32 tail, off = req->timeout.off;
++
++ spin_lock_irq(&ctx->timeout_lock);
++
++ /*
++ * sqe->off holds how many events need to occur for this
++ * timeout event to be satisfied. If it isn't set, then this is
++ * a pure timeout request and the sequence isn't used.
++ */
++ if (io_is_timeout_noseq(req)) {
++ entry = ctx->timeout_list.prev;
++ goto add;
++ }
++
++ tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++ req->timeout.target_seq = tail + off;
++
++ /* Update the last seq here in case io_flush_timeouts() hasn't.
++ * This is safe because ->completion_lock is held, and submissions
++ * and completions are never mixed in the same ->completion_lock section.
++ */
++ ctx->cq_last_tm_flush = tail;
++
++ /*
++ * Insertion sort, ensuring the first entry in the list is always
++ * the one we need first.
++ */
++ list_for_each_prev(entry, &ctx->timeout_list) {
++ struct io_kiocb *nxt = list_entry(entry, struct io_kiocb,
++ timeout.list);
++
++ if (io_is_timeout_noseq(nxt))
++ continue;
++ /* nxt.seq is behind @tail, otherwise would've been completed */
++ if (off >= nxt->timeout.target_seq - tail)
++ break;
++ }
++add:
++ list_add(&req->timeout.list, entry);
++ data->timer.function = io_timeout_fn;
++ hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
++ spin_unlock_irq(&ctx->timeout_lock);
++ return 0;
++}
++
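++/*
++ * Match callback for io-wq cancelation: decides whether a work item matches
++ * the cancel criteria (any request, by file, or by user_data), honoring the
++ * per-attempt sequence number for CANCEL_ALL/ANY.
++ */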
++static bool io_cancel_cb(struct io_wq_work *work, void *data)
++{
++ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++ struct io_cancel_data *cd = data;
++
++ if (req->ctx != cd->ctx)
++ return false;
++ if (cd->flags & IORING_ASYNC_CANCEL_ANY) {
++ ;
++ } else if (cd->flags & IORING_ASYNC_CANCEL_FD) {
++ if (req->file != cd->file)
++ return false;
++ } else {
++ if (req->cqe.user_data != cd->data)
++ return false;
++ }
++ if (cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY)) {
++ if (cd->seq == req->work.cancel_seq)
++ return false;
++ req->work.cancel_seq = cd->seq;
++ }
++ return true;
++}
++
++static int io_async_cancel_one(struct io_uring_task *tctx,
++ struct io_cancel_data *cd)
++{
++ enum io_wq_cancel cancel_ret;
++ int ret = 0;
++ bool all;
++
++ if (!tctx || !tctx->io_wq)
++ return -ENOENT;
++
++ all = cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY);
++ cancel_ret = io_wq_cancel_cb(tctx->io_wq, io_cancel_cb, cd, all);
++ switch (cancel_ret) {
++ case IO_WQ_CANCEL_OK:
++ ret = 0;
++ break;
++ case IO_WQ_CANCEL_RUNNING:
++ ret = -EALREADY;
++ break;
++ case IO_WQ_CANCEL_NOTFOUND:
++ ret = -ENOENT;
++ break;
++ }
++
++ return ret;
++}
++
++static int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret;
++
++ WARN_ON_ONCE(!io_wq_current_is_worker() && req->task != current);
++
++ ret = io_async_cancel_one(req->task->io_uring, cd);
++ /*
++ * Fall through even for -EALREADY, as we may have a poll armed
++ * that needs unarming.
++ */
++ if (!ret)
++ return 0;
++
++ spin_lock(&ctx->completion_lock);
++ ret = io_poll_cancel(ctx, cd);
++ if (ret != -ENOENT)
++ goto out;
++ if (!(cd->flags & IORING_ASYNC_CANCEL_FD))
++ ret = io_timeout_cancel(ctx, cd);
++out:
++ spin_unlock(&ctx->completion_lock);
++ return ret;
++}
++
++#define CANCEL_FLAGS (IORING_ASYNC_CANCEL_ALL | IORING_ASYNC_CANCEL_FD | \
++ IORING_ASYNC_CANCEL_ANY)
++
++static int io_async_cancel_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (unlikely(req->flags & REQ_F_BUFFER_SELECT))
++ return -EINVAL;
++ if (sqe->off || sqe->len || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->cancel.addr = READ_ONCE(sqe->addr);
++ req->cancel.flags = READ_ONCE(sqe->cancel_flags);
++ if (req->cancel.flags & ~CANCEL_FLAGS)
++ return -EINVAL;
++ if (req->cancel.flags & IORING_ASYNC_CANCEL_FD) {
++ if (req->cancel.flags & IORING_ASYNC_CANCEL_ANY)
++ return -EINVAL;
++ req->cancel.fd = READ_ONCE(sqe->fd);
++ }
++
++ return 0;
++}
++
++static int __io_async_cancel(struct io_cancel_data *cd, struct io_kiocb *req,
++ unsigned int issue_flags)
++{
++ bool all = cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY);
++ struct io_ring_ctx *ctx = cd->ctx;
++ struct io_tctx_node *node;
++ int ret, nr = 0;
++
++ do {
++ ret = io_try_cancel(req, cd);
++ if (ret == -ENOENT)
++ break;
++ if (!all)
++ return ret;
++ nr++;
++ } while (1);
++
++ /* slow path, try all io-wq's */
++ io_ring_submit_lock(ctx, issue_flags);
++ ret = -ENOENT;
++ list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++ struct io_uring_task *tctx = node->task->io_uring;
++
++ ret = io_async_cancel_one(tctx, cd);
++ if (ret != -ENOENT) {
++ if (!all)
++ break;
++ nr++;
++ }
++ }
++ io_ring_submit_unlock(ctx, issue_flags);
++ return all ? nr : ret;
++}
++
++static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_cancel_data cd = {
++ .ctx = req->ctx,
++ .data = req->cancel.addr,
++ .flags = req->cancel.flags,
++ .seq = atomic_inc_return(&req->ctx->cancel_seq),
++ };
++ int ret;
++
++ if (cd.flags & IORING_ASYNC_CANCEL_FD) {
++ if (req->flags & REQ_F_FIXED_FILE)
++ req->file = io_file_get_fixed(req, req->cancel.fd,
++ issue_flags);
++ else
++ req->file = io_file_get_normal(req, req->cancel.fd);
++ if (!req->file) {
++ ret = -EBADF;
++ goto done;
++ }
++ cd.file = req->file;
++ }
++
++ ret = __io_async_cancel(&cd, req, issue_flags);
++done:
++ if (ret < 0)
++ req_set_fail(req);
++ io_req_complete_post(req, ret, 0);
++ return 0;
++}
++
++static int io_files_update_prep(struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++{
++ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++ return -EINVAL;
++ if (sqe->rw_flags || sqe->splice_fd_in)
++ return -EINVAL;
++
++ req->rsrc_update.offset = READ_ONCE(sqe->off);
++ req->rsrc_update.nr_args = READ_ONCE(sqe->len);
++ if (!req->rsrc_update.nr_args)
++ return -EINVAL;
++ req->rsrc_update.arg = READ_ONCE(sqe->addr);
++ return 0;
++}
++
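++/*
++ * IORING_FILE_INDEX_ALLOC variant of a files update: each fd copied from
++ * userspace is installed into a kernel-chosen fixed slot and the chosen
++ * slot index is written back; on a failed write-back the slot is closed
++ * again.
++ */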
++static int io_files_update_with_index_alloc(struct io_kiocb *req,
++ unsigned int issue_flags)
++{
++ __s32 __user *fds = u64_to_user_ptr(req->rsrc_update.arg);
++ unsigned int done;
++ struct file *file;
++ int ret, fd;
++
++ if (!req->ctx->file_data)
++ return -ENXIO;
++
++ for (done = 0; done < req->rsrc_update.nr_args; done++) {
++ if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
++ ret = -EFAULT;
++ break;
++ }
++
++ file = fget(fd);
++ if (!file) {
++ ret = -EBADF;
++ break;
++ }
++ ret = io_fixed_fd_install(req, issue_flags, file,
++ IORING_FILE_INDEX_ALLOC);
++ if (ret < 0)
++ break;
++ if (copy_to_user(&fds[done], &ret, sizeof(ret))) {
++ __io_close_fixed(req, issue_flags, ret);
++ ret = -EFAULT;
++ break;
++ }
++ }
++
++ if (done)
++ return done;
++ return ret;
++}
++
++static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_uring_rsrc_update2 up;
++ int ret;
++
++ up.offset = req->rsrc_update.offset;
++ up.data = req->rsrc_update.arg;
++ up.nr = 0;
++ up.tags = 0;
++ up.resv = 0;
++ up.resv2 = 0;
++
++ if (req->rsrc_update.offset == IORING_FILE_INDEX_ALLOC) {
++ ret = io_files_update_with_index_alloc(req, issue_flags);
++ } else {
++ io_ring_submit_lock(ctx, issue_flags);
++ ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
++ &up, req->rsrc_update.nr_args);
++ io_ring_submit_unlock(ctx, issue_flags);
++ }
++
++ if (ret < 0)
++ req_set_fail(req);
++ __io_req_complete(req, issue_flags, ret, 0);
++ return 0;
++}
++
++static int io_req_prep_async(struct io_kiocb *req)
++{
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++
++ /* assign early for deferred execution for non-fixed file */
++ if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
++ req->file = io_file_get_normal(req, req->cqe.fd);
++ if (!def->needs_async_setup)
++ return 0;
++ if (WARN_ON_ONCE(req_has_async_data(req)))
++ return -EFAULT;
++ if (io_alloc_async_data(req))
++ return -EAGAIN;
++
++ switch (req->opcode) {
++ case IORING_OP_READV:
++ return io_readv_prep_async(req);
++ case IORING_OP_WRITEV:
++ return io_writev_prep_async(req);
++ case IORING_OP_SENDMSG:
++ return io_sendmsg_prep_async(req);
++ case IORING_OP_RECVMSG:
++ return io_recvmsg_prep_async(req);
++ case IORING_OP_CONNECT:
++ return io_connect_prep_async(req);
++ case IORING_OP_URING_CMD:
++ return io_uring_cmd_prep_async(req);
++ }
++
++ printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
++ req->opcode);
++ return -EINVAL;
++}
++
++static u32 io_get_sequence(struct io_kiocb *req)
++{
++ u32 seq = req->ctx->cached_sq_head;
++ struct io_kiocb *cur;
++
++ /* need original cached_sq_head, but it was increased for each req */
++ io_for_each_link(cur, req)
++ seq--;
++ return seq;
++}
++
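++/*
++ * IOSQE_IO_DRAIN handling: if anything submitted before this request is
++ * still in flight (or the defer list is non-empty), park the request on
++ * ctx->defer_list until the sequence catches up; otherwise queue it right
++ * away.
++ */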
++static __cold void io_drain_req(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_defer_entry *de;
++ int ret;
++ u32 seq = io_get_sequence(req);
++
++ /* Still need to defer if there is a pending req in the defer list. */
++ spin_lock(&ctx->completion_lock);
++ if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
++ spin_unlock(&ctx->completion_lock);
++queue:
++ ctx->drain_active = false;
++ io_req_task_queue(req);
++ return;
++ }
++ spin_unlock(&ctx->completion_lock);
++
++ ret = io_req_prep_async(req);
++ if (ret) {
++fail:
++ io_req_complete_failed(req, ret);
++ return;
++ }
++ io_prep_async_link(req);
++ de = kmalloc(sizeof(*de), GFP_KERNEL);
++ if (!de) {
++ ret = -ENOMEM;
++ goto fail;
++ }
++
++ spin_lock(&ctx->completion_lock);
++ if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
++ spin_unlock(&ctx->completion_lock);
++ kfree(de);
++ goto queue;
++ }
++
++ trace_io_uring_defer(ctx, req, req->cqe.user_data, req->opcode);
++ de->req = req;
++ de->seq = seq;
++ list_add_tail(&de->list, &ctx->defer_list);
++ spin_unlock(&ctx->completion_lock);
++}
++
++static void io_clean_op(struct io_kiocb *req)
++{
++ if (req->flags & REQ_F_BUFFER_SELECTED) {
++ spin_lock(&req->ctx->completion_lock);
++ io_put_kbuf_comp(req);
++ spin_unlock(&req->ctx->completion_lock);
++ }
++
++ if (req->flags & REQ_F_NEED_CLEANUP) {
++ switch (req->opcode) {
++ case IORING_OP_READV:
++ case IORING_OP_READ_FIXED:
++ case IORING_OP_READ:
++ case IORING_OP_WRITEV:
++ case IORING_OP_WRITE_FIXED:
++ case IORING_OP_WRITE: {
++ struct io_async_rw *io = req->async_data;
++
++ kfree(io->free_iovec);
++ break;
++ }
++ case IORING_OP_RECVMSG:
++ case IORING_OP_SENDMSG: {
++ struct io_async_msghdr *io = req->async_data;
++
++ kfree(io->free_iov);
++ break;
++ }
++ case IORING_OP_OPENAT:
++ case IORING_OP_OPENAT2:
++ if (req->open.filename)
++ putname(req->open.filename);
++ break;
++ case IORING_OP_RENAMEAT:
++ putname(req->rename.oldpath);
++ putname(req->rename.newpath);
++ break;
++ case IORING_OP_UNLINKAT:
++ putname(req->unlink.filename);
++ break;
++ case IORING_OP_MKDIRAT:
++ putname(req->mkdir.filename);
++ break;
++ case IORING_OP_SYMLINKAT:
++ putname(req->symlink.oldpath);
++ putname(req->symlink.newpath);
++ break;
++ case IORING_OP_LINKAT:
++ putname(req->hardlink.oldpath);
++ putname(req->hardlink.newpath);
++ break;
++ case IORING_OP_STATX:
++ if (req->statx.filename)
++ putname(req->statx.filename);
++ break;
++ case IORING_OP_SETXATTR:
++ case IORING_OP_FSETXATTR:
++ case IORING_OP_GETXATTR:
++ case IORING_OP_FGETXATTR:
++ __io_xattr_finish(req);
++ break;
++ }
++ }
++ if ((req->flags & REQ_F_POLLED) && req->apoll) {
++ kfree(req->apoll->double_poll);
++ kfree(req->apoll);
++ req->apoll = NULL;
++ }
++ if (req->flags & REQ_F_INFLIGHT) {
++ struct io_uring_task *tctx = req->task->io_uring;
++
++ atomic_dec(&tctx->inflight_tracked);
++ }
++ if (req->flags & REQ_F_CREDS)
++ put_cred(req->creds);
++ if (req->flags & REQ_F_ASYNC_DATA) {
++ kfree(req->async_data);
++ req->async_data = NULL;
++ }
++ req->flags &= ~IO_REQ_CLEAN_FLAGS;
++}
++
++static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
++{
++ if (req->file || !io_op_defs[req->opcode].needs_file)
++ return true;
++
++ if (req->flags & REQ_F_FIXED_FILE)
++ req->file = io_file_get_fixed(req, req->cqe.fd, issue_flags);
++ else
++ req->file = io_file_get_normal(req, req->cqe.fd);
++
++ return !!req->file;
++}
++
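++/*
++ * Central issue path: resolve the file if still needed, apply per-request
++ * credentials and audit hooks around the opcode's ->issue handler, and for
++ * IOPOLL rings register the request for completion polling.
++ */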
++static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
++{
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++ const struct cred *creds = NULL;
++ int ret;
++
++ if (unlikely(!io_assign_file(req, issue_flags)))
++ return -EBADF;
++
++ if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
++ creds = override_creds(req->creds);
++
++ if (!def->audit_skip)
++ audit_uring_entry(req->opcode);
++
++ ret = def->issue(req, issue_flags);
++
++ if (!def->audit_skip)
++ audit_uring_exit(!ret, ret);
++
++ if (creds)
++ revert_creds(creds);
++ if (ret)
++ return ret;
++ /* If the op doesn't have a file, we're not polling for it */
++ if ((req->ctx->flags & IORING_SETUP_IOPOLL) && req->file)
++ io_iopoll_req_issued(req, issue_flags);
++
++ return 0;
++}
++
++static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
++{
++ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++
++ req = io_put_req_find_next(req);
++ return req ? &req->work : NULL;
++}
++
++static void io_wq_submit_work(struct io_wq_work *work)
++{
++ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++ unsigned int issue_flags = IO_URING_F_UNLOCKED;
++ bool needs_poll = false;
++ int ret = 0, err = -ECANCELED;
++
++ /* one will be dropped by ->io_free_work() after returning to io-wq */
++ if (!(req->flags & REQ_F_REFCOUNT))
++ __io_req_set_refcount(req, 2);
++ else
++ req_ref_get(req);
++
++ io_arm_ltimeout(req);
++
++ /* either cancelled or io-wq is dying, so don't touch tctx->iowq */
++ if (work->flags & IO_WQ_WORK_CANCEL) {
++fail:
++ io_req_task_queue_fail(req, err);
++ return;
++ }
++ if (!io_assign_file(req, issue_flags)) {
++ err = -EBADF;
++ work->flags |= IO_WQ_WORK_CANCEL;
++ goto fail;
++ }
++
++ if (req->flags & REQ_F_FORCE_ASYNC) {
++ bool opcode_poll = def->pollin || def->pollout;
++
++ if (opcode_poll && file_can_poll(req->file)) {
++ needs_poll = true;
++ issue_flags |= IO_URING_F_NONBLOCK;
++ }
++ }
++
++ do {
++ ret = io_issue_sqe(req, issue_flags);
++ if (ret != -EAGAIN)
++ break;
++ /*
++ * We can get EAGAIN for iopolled IO even though we're
++ * forcing a sync submission from here, since we can't
++ * wait for request slots on the block side.
++ */
++ if (!needs_poll) {
++ if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
++ break;
++ cond_resched();
++ continue;
++ }
++
++ if (io_arm_poll_handler(req, issue_flags) == IO_APOLL_OK)
++ return;
++ /* aborted or ready, in either case retry blocking */
++ needs_poll = false;
++ issue_flags &= ~IO_URING_F_NONBLOCK;
++ } while (1);
++
++ /* avoid locking problems by failing it from a clean context */
++ if (ret)
++ io_req_task_queue_fail(req, ret);
++}
++
++static inline struct io_fixed_file *io_fixed_file_slot(struct io_file_table *table,
++ unsigned i)
++{
++ return &table->files[i];
++}
++
++static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
++ int index)
++{
++ struct io_fixed_file *slot = io_fixed_file_slot(&ctx->file_table, index);
++
++ return (struct file *) (slot->file_ptr & FFS_MASK);
++}
++
++static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file)
++{
++ unsigned long file_ptr = (unsigned long) file;
++
++ file_ptr |= io_file_get_flags(file);
++ file_slot->file_ptr = file_ptr;
++}
++
++static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
++ unsigned int issue_flags)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct file *file = NULL;
++ unsigned long file_ptr;
++
++ io_ring_submit_lock(ctx, issue_flags);
++
++ if (unlikely((unsigned int)fd >= ctx->nr_user_files))
++ goto out;
++ fd = array_index_nospec(fd, ctx->nr_user_files);
++ file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
++ file = (struct file *) (file_ptr & FFS_MASK);
++ file_ptr &= ~FFS_MASK;
++ /* mask in overlapping REQ_F and FFS bits */
++ req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
++ io_req_set_rsrc_node(req, ctx, 0);
++ WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap));
++out:
++ io_ring_submit_unlock(ctx, issue_flags);
++ return file;
++}
++
++static struct file *io_file_get_normal(struct io_kiocb *req, int fd)
++{
++ struct file *file = fget(fd);
++
++ trace_io_uring_file_get(req->ctx, req, req->cqe.user_data, fd);
++
++ /* we don't allow fixed io_uring files */
++ if (file && file->f_op == &io_uring_fops)
++ io_req_track_inflight(req);
++ return file;
++}
++
++static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
++{
++ struct io_kiocb *prev = req->timeout.prev;
++ int ret = -ENOENT;
++
++ if (prev) {
++ if (!(req->task->flags & PF_EXITING)) {
++ struct io_cancel_data cd = {
++ .ctx = req->ctx,
++ .data = prev->cqe.user_data,
++ };
++
++ ret = io_try_cancel(req, &cd);
++ }
++ io_req_complete_post(req, ret ?: -ETIME, 0);
++ io_put_req(prev);
++ } else {
++ io_req_complete_post(req, -ETIME, 0);
++ }
++}
++
++static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
++{
++ struct io_timeout_data *data = container_of(timer,
++ struct io_timeout_data, timer);
++ struct io_kiocb *prev, *req = data->req;
++ struct io_ring_ctx *ctx = req->ctx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&ctx->timeout_lock, flags);
++ prev = req->timeout.head;
++ req->timeout.head = NULL;
++
++ /*
++ * We don't expect the list to be empty; that will only happen if we
++ * race with the completion of the linked work.
++ */
++ if (prev) {
++ io_remove_next_linked(prev);
++ if (!req_ref_inc_not_zero(prev))
++ prev = NULL;
++ }
++ list_del(&req->timeout.list);
++ req->timeout.prev = prev;
++ spin_unlock_irqrestore(&ctx->timeout_lock, flags);
++
++ req->io_task_work.func = io_req_task_link_timeout;
++ io_req_task_work_add(req);
++ return HRTIMER_NORESTART;
++}
++
++static void io_queue_linked_timeout(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ spin_lock_irq(&ctx->timeout_lock);
++ /*
++ * If the back reference is NULL, then our linked request finished
++ * before we got a chance to set up the timer.
++ */
++ if (req->timeout.head) {
++ struct io_timeout_data *data = req->async_data;
++
++ data->timer.function = io_link_timeout_fn;
++ hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
++ data->mode);
++ list_add_tail(&req->timeout.list, &ctx->ltimeout_list);
++ }
++ spin_unlock_irq(&ctx->timeout_lock);
++ /* drop submission reference */
++ io_put_req(req);
++}
++
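++/*
++ * Slow path after io_issue_sqe() returned an error: anything other than a
++ * retryable -EAGAIN fails the request. Otherwise try to arm poll-driven
++ * retry, falling back to io-wq if that isn't possible, and start any
++ * linked timeout.
++ */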
++static void io_queue_async(struct io_kiocb *req, int ret)
++ __must_hold(&req->ctx->uring_lock)
++{
++ struct io_kiocb *linked_timeout;
++
++ if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
++ io_req_complete_failed(req, ret);
++ return;
++ }
++
++ linked_timeout = io_prep_linked_timeout(req);
++
++ switch (io_arm_poll_handler(req, 0)) {
++ case IO_APOLL_READY:
++ io_req_task_queue(req);
++ break;
++ case IO_APOLL_ABORTED:
++ /*
++ * Queued up for async execution; the worker will release the
++ * submit reference when the iocb is actually submitted.
++ */
++ io_kbuf_recycle(req, 0);
++ io_queue_iowq(req, NULL);
++ break;
++ case IO_APOLL_OK:
++ break;
++ }
++
++ if (linked_timeout)
++ io_queue_linked_timeout(linked_timeout);
++}
++
++static inline void io_queue_sqe(struct io_kiocb *req)
++ __must_hold(&req->ctx->uring_lock)
++{
++ int ret;
++
++ ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
++
++ if (req->flags & REQ_F_COMPLETE_INLINE) {
++ io_req_add_compl_list(req);
++ return;
++ }
++ /*
++ * We punt it to async if the file wasn't marked NOWAIT, or if the
++ * file doesn't support non-blocking read/write attempts.
++ */
++ if (likely(!ret))
++ io_arm_ltimeout(req);
++ else
++ io_queue_async(req, ret);
++}
++
++static void io_queue_sqe_fallback(struct io_kiocb *req)
++ __must_hold(&req->ctx->uring_lock)
++{
++ if (unlikely(req->flags & REQ_F_FAIL)) {
++ /*
++ * We don't submit; fail them all. For that, replace hardlinks
++ * with normal links. An extra REQ_F_LINK is tolerated.
++ */
++ req->flags &= ~REQ_F_HARDLINK;
++ req->flags |= REQ_F_LINK;
++ io_req_complete_failed(req, req->cqe.res);
++ } else if (unlikely(req->ctx->drain_active)) {
++ io_drain_req(req);
++ } else {
++ int ret = io_req_prep_async(req);
++
++ if (unlikely(ret))
++ io_req_complete_failed(req, ret);
++ else
++ io_queue_iowq(req, NULL);
++ }
++}
++
++/*
++ * Check SQE restrictions (opcode and flags).
++ *
++ * Returns 'true' if SQE is allowed, 'false' otherwise.
++ */
++static inline bool io_check_restriction(struct io_ring_ctx *ctx,
++ struct io_kiocb *req,
++ unsigned int sqe_flags)
++{
++ if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
++ return false;
++
++ if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
++ ctx->restrictions.sqe_flags_required)
++ return false;
++
++ if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
++ ctx->restrictions.sqe_flags_required))
++ return false;
++
++ return true;
++}
++
++static void io_init_req_drain(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_kiocb *head = ctx->submit_state.link.head;
++
++ ctx->drain_active = true;
++ if (head) {
++ /*
++ * If we need to drain a request in the middle of a link, drain
++ * the head request and the next request/link after the current
++ * link. Considering sequential execution of links,
++ * REQ_F_IO_DRAIN will be maintained for every request of our
++ * link.
++ */
++ head->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
++ ctx->drain_next = true;
++ }
++}
++
++static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++ __must_hold(&ctx->uring_lock)
++{
++ const struct io_op_def *def;
++ unsigned int sqe_flags;
++ int personality;
++ u8 opcode;
++
++ /* req is partially pre-initialised, see io_preinit_req() */
++ req->opcode = opcode = READ_ONCE(sqe->opcode);
++ /* same numerical values as the corresponding REQ_F_* flags, safe to copy */
++ req->flags = sqe_flags = READ_ONCE(sqe->flags);
++ req->cqe.user_data = READ_ONCE(sqe->user_data);
++ req->file = NULL;
++ req->rsrc_node = NULL;
++ req->task = current;
++
++ if (unlikely(opcode >= IORING_OP_LAST)) {
++ req->opcode = 0;
++ return -EINVAL;
++ }
++ def = &io_op_defs[opcode];
++ if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
++ /* enforce forwards compatibility on users */
++ if (sqe_flags & ~SQE_VALID_FLAGS)
++ return -EINVAL;
++ if (sqe_flags & IOSQE_BUFFER_SELECT) {
++ if (!def->buffer_select)
++ return -EOPNOTSUPP;
++ req->buf_index = READ_ONCE(sqe->buf_group);
++ }
++ if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
++ ctx->drain_disabled = true;
++ if (sqe_flags & IOSQE_IO_DRAIN) {
++ if (ctx->drain_disabled)
++ return -EOPNOTSUPP;
++ io_init_req_drain(req);
++ }
++ }
++ if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
++ if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
++ return -EACCES;
++ /* knock it to the slow queue path, will be drained there */
++ if (ctx->drain_active)
++ req->flags |= REQ_F_FORCE_ASYNC;
++ /* if there is no link, we're at "next" request and need to drain */
++ if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
++ ctx->drain_next = false;
++ ctx->drain_active = true;
++ req->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
++ }
++ }
++
++ if (!def->ioprio && sqe->ioprio)
++ return -EINVAL;
++ if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
++
++ if (def->needs_file) {
++ struct io_submit_state *state = &ctx->submit_state;
++
++ req->cqe.fd = READ_ONCE(sqe->fd);
++
++ /*
++ * Plug now if we have more than 2 IOs left after this, and the
++ * target is potentially a read/write to block-based storage.
++ */
++ if (state->need_plug && def->plug) {
++ state->plug_started = true;
++ state->need_plug = false;
++ blk_start_plug_nr_ios(&state->plug, state->submit_nr);
++ }
++ }
++
++ personality = READ_ONCE(sqe->personality);
++ if (personality) {
++ int ret;
++
++ req->creds = xa_load(&ctx->personalities, personality);
++ if (!req->creds)
++ return -EINVAL;
++ get_cred(req->creds);
++ ret = security_uring_override_creds(req->creds);
++ if (ret) {
++ put_cred(req->creds);
++ return ret;
++ }
++ req->flags |= REQ_F_CREDS;
++ }
++
++ return def->prep(req, sqe);
++}
++
++static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
++ struct io_kiocb *req, int ret)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_submit_link *link = &ctx->submit_state.link;
++ struct io_kiocb *head = link->head;
++
++ trace_io_uring_req_failed(sqe, ctx, req, ret);
++
++ /*
++ * Avoid breaking links in the middle as it renders links with SQPOLL
++ * unusable. Instead of failing eagerly, continue assembling the link if
++ * applicable and mark the head with REQ_F_FAIL. The link flushing code
++ * should find the flag and handle the rest.
++ */
++ req_fail_link_node(req, ret);
++ if (head && !(head->flags & REQ_F_FAIL))
++ req_fail_link_node(head, -ECANCELED);
++
++ if (!(req->flags & IO_REQ_LINK_FLAGS)) {
++ if (head) {
++ link->last->link = req;
++ link->head = NULL;
++ req = head;
++ }
++ io_queue_sqe_fallback(req);
++ return ret;
++ }
++
++ if (head)
++ link->last->link = req;
++ else
++ link->head = req;
++ link->last = req;
++ return 0;
++}
++
++static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
++ const struct io_uring_sqe *sqe)
++ __must_hold(&ctx->uring_lock)
++{
++ struct io_submit_link *link = &ctx->submit_state.link;
++ int ret;
++
++ ret = io_init_req(ctx, req, sqe);
++ if (unlikely(ret))
++ return io_submit_fail_init(sqe, req, ret);
++
++ /* don't need @sqe from now on */
++ trace_io_uring_submit_sqe(ctx, req, req->cqe.user_data, req->opcode,
++ req->flags, true,
++ ctx->flags & IORING_SETUP_SQPOLL);
++
++ /*
++ * If we already have a head request, queue this one for async
++ * submittal once the head completes. If we don't have a head but
++ * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
++ * submitted sync once the chain is complete. If none of those
++ * conditions are true (normal request), then just queue it.
++ */
++ if (unlikely(link->head)) {
++ ret = io_req_prep_async(req);
++ if (unlikely(ret))
++ return io_submit_fail_init(sqe, req, ret);
++
++ trace_io_uring_link(ctx, req, link->head);
++ link->last->link = req;
++ link->last = req;
++
++ if (req->flags & IO_REQ_LINK_FLAGS)
++ return 0;
++ /* last request of the link, flush it */
++ req = link->head;
++ link->head = NULL;
++ if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
++ goto fallback;
++
++ } else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
++ REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
++ if (req->flags & IO_REQ_LINK_FLAGS) {
++ link->head = req;
++ link->last = req;
++ } else {
++fallback:
++ io_queue_sqe_fallback(req);
++ }
++ return 0;
++ }
++
++ io_queue_sqe(req);
++ return 0;
++}
++
++/*
++ * Batched submission is done, ensure local IO is flushed out.
++ */
++static void io_submit_state_end(struct io_ring_ctx *ctx)
++{
++ struct io_submit_state *state = &ctx->submit_state;
++
++ if (unlikely(state->link.head))
++ io_queue_sqe_fallback(state->link.head);
++ /* flush only after queuing links as they can generate completions */
++ io_submit_flush_completions(ctx);
++ if (state->plug_started)
++ blk_finish_plug(&state->plug);
++}
++
++/*
++ * Start submission side cache.
++ */
++static void io_submit_state_start(struct io_submit_state *state,
++ unsigned int max_ios)
++{
++ state->plug_started = false;
++ state->need_plug = max_ios > 2;
++ state->submit_nr = max_ios;
++ /* set only head, no need to init link_last in advance */
++ state->link.head = NULL;
++}
++
++static void io_commit_sqring(struct io_ring_ctx *ctx)
++{
++ struct io_rings *rings = ctx->rings;
++
++ /*
++ * Ensure any loads from the SQEs are done at this point,
++ * since once we write the new head, the application could
++ * write new data to them.
++ */
++ smp_store_release(&rings->sq.head, ctx->cached_sq_head);
++}
++
++/*
++ * Fetch an sqe, if one is available. Note this returns a pointer to memory
++ * that is mapped by userspace. This means that care needs to be taken to
++ * ensure that reads are stable, as we cannot rely on userspace always
++ * being a good citizen. If members of the sqe are validated and then later
++ * used, it's important that those reads are done through READ_ONCE() to
++ * prevent a re-load down the line.
++ */
++static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
++{
++ unsigned head, mask = ctx->sq_entries - 1;
++ unsigned sq_idx = ctx->cached_sq_head++ & mask;
++
++ /*
++ * The cached sq head (or cq tail) serves two purposes:
++ *
++ * 1) allows us to batch the cost of updating the user-visible
++ * head.
++ * 2) allows the kernel side to track the head on its own, even
++ * though the application is the one updating it.
++ */
++ head = READ_ONCE(ctx->sq_array[sq_idx]);
++ if (likely(head < ctx->sq_entries)) {
++ /* double index for 128-byte SQEs, twice as long */
++ if (ctx->flags & IORING_SETUP_SQE128)
++ head <<= 1;
++ return &ctx->sq_sqes[head];
++ }
++
++ /* drop invalid entries */
++ ctx->cq_extra--;
++ WRITE_ONCE(ctx->rings->sq_dropped,
++ READ_ONCE(ctx->rings->sq_dropped) + 1);
++ return NULL;
++}
++
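++/*
++ * Pull up to @nr SQEs off the SQ ring and submit them. Returns the number
++ * of requests submitted, or -EAGAIN if nothing was submitted and no
++ * request could be allocated.
++ */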
++static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
++ __must_hold(&ctx->uring_lock)
++{
++ unsigned int entries = io_sqring_entries(ctx);
++ unsigned int left;
++ int ret;
++
++ if (unlikely(!entries))
++ return 0;
++ /* make sure SQ entry isn't read before tail */
++ ret = left = min3(nr, ctx->sq_entries, entries);
++ io_get_task_refs(left);
++ io_submit_state_start(&ctx->submit_state, left);
++
++ do {
++ const struct io_uring_sqe *sqe;
++ struct io_kiocb *req;
++
++ if (unlikely(!io_alloc_req_refill(ctx)))
++ break;
++ req = io_alloc_req(ctx);
++ sqe = io_get_sqe(ctx);
++ if (unlikely(!sqe)) {
++ io_req_add_to_cache(req, ctx);
++ break;
++ }
++
++ /*
++ * Continue submitting even for sqe failure if the
++ * ring was set up with IORING_SETUP_SUBMIT_ALL
++ */
++ if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
++ !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
++ left--;
++ break;
++ }
++ } while (--left);
++
++ if (unlikely(left)) {
++ ret -= left;
++ /* try again if it submitted nothing and can't allocate a req */
++ if (!ret && io_req_cache_empty(ctx))
++ ret = -EAGAIN;
++ current->io_uring->cached_refs += left;
++ }
++
++ io_submit_state_end(ctx);
++ /* Commit SQ ring head once we've consumed and submitted all SQEs */
++ io_commit_sqring(ctx);
++ return ret;
++}
++
++static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
++{
++ return READ_ONCE(sqd->state);
++}
++
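++/*
++ * One SQPOLL iteration for a single ctx: reap pending iopoll completions
++ * and submit queued SQEs, capping the submit count for fairness when the
++ * thread is servicing multiple rings.
++ */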
++static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
++{
++ unsigned int to_submit;
++ int ret = 0;
++
++ to_submit = io_sqring_entries(ctx);
++ /* if we're handling multiple rings, cap submit size for fairness */
++ if (cap_entries && to_submit > IORING_SQPOLL_CAP_ENTRIES_VALUE)
++ to_submit = IORING_SQPOLL_CAP_ENTRIES_VALUE;
++
++ if (!wq_list_empty(&ctx->iopoll_list) || to_submit) {
++ const struct cred *creds = NULL;
++
++ if (ctx->sq_creds != current_cred())
++ creds = override_creds(ctx->sq_creds);
++
++ mutex_lock(&ctx->uring_lock);
++ if (!wq_list_empty(&ctx->iopoll_list))
++ io_do_iopoll(ctx, true);
++
++ /*
++ * Don't submit if refs are dying. This is good for io_uring_register(),
++ * but io_ring_exit_work() relies on it as well.
++ */
++ if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
++ !(ctx->flags & IORING_SETUP_R_DISABLED))
++ ret = io_submit_sqes(ctx, to_submit);
++ mutex_unlock(&ctx->uring_lock);
++
++ if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
++ wake_up(&ctx->sqo_sq_wait);
++ if (creds)
++ revert_creds(creds);
++ }
++
++ return ret;
++}
++
++static __cold void io_sqd_update_thread_idle(struct io_sq_data *sqd)
++{
++ struct io_ring_ctx *ctx;
++ unsigned sq_thread_idle = 0;
++
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++ sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
++ sqd->sq_thread_idle = sq_thread_idle;
++}
++
++static bool io_sqd_handle_event(struct io_sq_data *sqd)
++{
++ bool did_sig = false;
++ struct ksignal ksig;
++
++ if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
++ signal_pending(current)) {
++ mutex_unlock(&sqd->lock);
++ if (signal_pending(current))
++ did_sig = get_signal(&ksig);
++ cond_resched();
++ mutex_lock(&sqd->lock);
++ }
++ return did_sig || test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
++}
++
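++/*
++ * Main loop of the SQPOLL kernel thread. It spins while submissions or
++ * iopoll work keep arriving, handles park/stop requests, and once the ring
++ * has been idle for sq_thread_idle jiffies it sets IORING_SQ_NEED_WAKEUP
++ * and sleeps until userspace wakes it again.
++ */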
++static int io_sq_thread(void *data)
++{
++ struct io_sq_data *sqd = data;
++ struct io_ring_ctx *ctx;
++ unsigned long timeout = 0;
++ char buf[TASK_COMM_LEN];
++ DEFINE_WAIT(wait);
++
++ snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
++ set_task_comm(current, buf);
++
++ if (sqd->sq_cpu != -1)
++ set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
++ else
++ set_cpus_allowed_ptr(current, cpu_online_mask);
++ current->flags |= PF_NO_SETAFFINITY;
++
++ audit_alloc_kernel(current);
++
++ mutex_lock(&sqd->lock);
++ while (1) {
++ bool cap_entries, sqt_spin = false;
++
++ if (io_sqd_events_pending(sqd) || signal_pending(current)) {
++ if (io_sqd_handle_event(sqd))
++ break;
++ timeout = jiffies + sqd->sq_thread_idle;
++ }
++
++ cap_entries = !list_is_singular(&sqd->ctx_list);
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
++ int ret = __io_sq_thread(ctx, cap_entries);
++
++ if (!sqt_spin && (ret > 0 || !wq_list_empty(&ctx->iopoll_list)))
++ sqt_spin = true;
++ }
++ if (io_run_task_work())
++ sqt_spin = true;
++
++ if (sqt_spin || !time_after(jiffies, timeout)) {
++ cond_resched();
++ if (sqt_spin)
++ timeout = jiffies + sqd->sq_thread_idle;
++ continue;
++ }
++
++ prepare_to_wait(&sqd->wait, &wait, TASK_INTERRUPTIBLE);
++ if (!io_sqd_events_pending(sqd) && !task_work_pending(current)) {
++ bool needs_sched = true;
++
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
++ atomic_or(IORING_SQ_NEED_WAKEUP,
++ &ctx->rings->sq_flags);
++ if ((ctx->flags & IORING_SETUP_IOPOLL) &&
++ !wq_list_empty(&ctx->iopoll_list)) {
++ needs_sched = false;
++ break;
++ }
++
++ /*
++ * Ensure the store of the wakeup flag is not
++ * reordered with the load of the SQ tail
++ */
++ smp_mb__after_atomic();
++
++ if (io_sqring_entries(ctx)) {
++ needs_sched = false;
++ break;
++ }
++ }
++
++ if (needs_sched) {
++ mutex_unlock(&sqd->lock);
++ schedule();
++ mutex_lock(&sqd->lock);
++ }
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++ atomic_andnot(IORING_SQ_NEED_WAKEUP,
++ &ctx->rings->sq_flags);
++ }
++
++ finish_wait(&sqd->wait, &wait);
++ timeout = jiffies + sqd->sq_thread_idle;
++ }
++
++ io_uring_cancel_generic(true, sqd);
++ sqd->thread = NULL;
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++ atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
++ io_run_task_work();
++ mutex_unlock(&sqd->lock);
++
++ audit_free(current);
++
++ complete(&sqd->exited);
++ do_exit(0);
++}
++
++struct io_wait_queue {
++ struct wait_queue_entry wq;
++ struct io_ring_ctx *ctx;
++ unsigned cq_tail;
++ unsigned nr_timeouts;
++};
++
++static inline bool io_should_wake(struct io_wait_queue *iowq)
++{
++ struct io_ring_ctx *ctx = iowq->ctx;
++ int dist = ctx->cached_cq_tail - (int) iowq->cq_tail;
++
++ /*
++ * Wake up if we have enough events, or if a timeout occurred since we
++ * started waiting. For timeouts, we always want to return to userspace,
++ * regardless of event count.
++ */
++ return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
++}
++
++static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
++ int wake_flags, void *key)
++{
++ struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
++ wq);
++
++ /*
++ * Cannot safely flush overflowed CQEs from here, ensure we wake up
++ * the task, and the next invocation will do it.
++ */
++ if (io_should_wake(iowq) ||
++ test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &iowq->ctx->check_cq))
++ return autoremove_wake_function(curr, mode, wake_flags, key);
++ return -1;
++}
++
++static int io_run_task_work_sig(void)
++{
++ if (io_run_task_work())
++ return 1;
++ if (test_thread_flag(TIF_NOTIFY_SIGNAL))
++ return -ERESTARTSYS;
++ if (task_sigpending(current))
++ return -EINTR;
++ return 0;
++}
++
++/* when returns >0, the caller should retry */
++static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
++ struct io_wait_queue *iowq,
++ ktime_t timeout)
++{
++ int ret;
++ unsigned long check_cq;
++
++ /* make sure we run task_work before checking for signals */
++ ret = io_run_task_work_sig();
++ if (ret || io_should_wake(iowq))
++ return ret;
++ check_cq = READ_ONCE(ctx->check_cq);
++ /* let the caller flush overflows, retry */
++ if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
++ return 1;
++ if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
++ return -EBADR;
++ if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
++ return -ETIME;
++ return 1;
++}
++
++/*
++ * Wait until events become available, if we don't already have some. The
++ * application must reap them itself, as they reside on the shared cq ring.
++ */
++static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
++ const sigset_t __user *sig, size_t sigsz,
++ struct __kernel_timespec __user *uts)
++{
++ struct io_wait_queue iowq;
++ struct io_rings *rings = ctx->rings;
++ ktime_t timeout = KTIME_MAX;
++ int ret;
++
++ do {
++ io_cqring_overflow_flush(ctx);
++ if (io_cqring_events(ctx) >= min_events)
++ return 0;
++ if (!io_run_task_work())
++ break;
++ } while (1);
++
++ if (sig) {
++#ifdef CONFIG_COMPAT
++ if (in_compat_syscall())
++ ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
++ sigsz);
++ else
++#endif
++ ret = set_user_sigmask(sig, sigsz);
++
++ if (ret)
++ return ret;
++ }
++
++ if (uts) {
++ struct timespec64 ts;
++
++ if (get_timespec64(&ts, uts))
++ return -EFAULT;
++ timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
++ }
++
++ init_waitqueue_func_entry(&iowq.wq, io_wake_function);
++ iowq.wq.private = current;
++ INIT_LIST_HEAD(&iowq.wq.entry);
++ iowq.ctx = ctx;
++ iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
++ iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
++
++ trace_io_uring_cqring_wait(ctx, min_events);
++ do {
++ /* if we can't even flush overflow, don't wait for more */
++ if (!io_cqring_overflow_flush(ctx)) {
++ ret = -EBUSY;
++ break;
++ }
++ prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
++ TASK_INTERRUPTIBLE);
++ ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
++ cond_resched();
++ } while (ret > 0);
++
++ finish_wait(&ctx->cq_wait, &iowq.wq);
++ restore_saved_sigmask_unless(ret == -EINTR);
++
++ return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
++}
++
++static void io_free_page_table(void **table, size_t size)
++{
++ unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
++
++ for (i = 0; i < nr_tables; i++)
++ kfree(table[i]);
++ kfree(table);
++}
++
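++/*
++ * Allocate @size bytes as a two-level table of page-sized chunks, so that
++ * large tables don't require a contiguous allocation. Freed with
++ * io_free_page_table().
++ */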
++static __cold void **io_alloc_page_table(size_t size)
++{
++ unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
++ size_t init_size = size;
++ void **table;
++
++ table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
++ if (!table)
++ return NULL;
++
++ for (i = 0; i < nr_tables; i++) {
++ unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
++
++ table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
++ if (!table[i]) {
++ io_free_page_table(table, init_size);
++ return NULL;
++ }
++ size -= this_size;
++ }
++ return table;
++}
++
++static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
++{
++ percpu_ref_exit(&ref_node->refs);
++ kfree(ref_node);
++}
++
++static __cold void io_rsrc_node_ref_zero(struct percpu_ref *ref)
++{
++ struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
++ struct io_ring_ctx *ctx = node->rsrc_data->ctx;
++ unsigned long flags;
++ bool first_add = false;
++ unsigned long delay = HZ;
++
++ spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
++ node->done = true;
++
++ /* if we are mid-quiesce then do not delay */
++ if (node->rsrc_data->quiesce)
++ delay = 0;
++
++ while (!list_empty(&ctx->rsrc_ref_list)) {
++ node = list_first_entry(&ctx->rsrc_ref_list,
++ struct io_rsrc_node, node);
++ /* recycle ref nodes in order */
++ if (!node->done)
++ break;
++ list_del(&node->node);
++ first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
++ }
++ spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
++
++ if (first_add)
++ mod_delayed_work(system_wq, &ctx->rsrc_put_work, delay);
++}
++
++static struct io_rsrc_node *io_rsrc_node_alloc(void)
++{
++ struct io_rsrc_node *ref_node;
++
++ ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
++ if (!ref_node)
++ return NULL;
++
++ if (percpu_ref_init(&ref_node->refs, io_rsrc_node_ref_zero,
++ 0, GFP_KERNEL)) {
++ kfree(ref_node);
++ return NULL;
++ }
++ INIT_LIST_HEAD(&ref_node->node);
++ INIT_LIST_HEAD(&ref_node->rsrc_list);
++ ref_node->done = false;
++ return ref_node;
++}
++
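++/*
++ * Retire the current rsrc node and install the pre-allocated backup node
++ * in its place. If @data_to_kill is set, the old node is put on the ref
++ * list and its percpu ref is killed; once it drops to zero, the node's
++ * queued resources are released.
++ */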
++static void io_rsrc_node_switch(struct io_ring_ctx *ctx,
++ struct io_rsrc_data *data_to_kill)
++ __must_hold(&ctx->uring_lock)
++{
++ WARN_ON_ONCE(!ctx->rsrc_backup_node);
++ WARN_ON_ONCE(data_to_kill && !ctx->rsrc_node);
++
++ io_rsrc_refs_drop(ctx);
++
++ if (data_to_kill) {
++ struct io_rsrc_node *rsrc_node = ctx->rsrc_node;
++
++ rsrc_node->rsrc_data = data_to_kill;
++ spin_lock_irq(&ctx->rsrc_ref_lock);
++ list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
++ spin_unlock_irq(&ctx->rsrc_ref_lock);
++
++ atomic_inc(&data_to_kill->refs);
++ percpu_ref_kill(&rsrc_node->refs);
++ ctx->rsrc_node = NULL;
++ }
++
++ if (!ctx->rsrc_node) {
++ ctx->rsrc_node = ctx->rsrc_backup_node;
++ ctx->rsrc_backup_node = NULL;
++ }
++}
++
++static int io_rsrc_node_switch_start(struct io_ring_ctx *ctx)
++{
++ if (ctx->rsrc_backup_node)
++ return 0;
++ ctx->rsrc_backup_node = io_rsrc_node_alloc();
++ return ctx->rsrc_backup_node ? 0 : -ENOMEM;
++}
++
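++/*
++ * Wait for all references to @data to be dropped. May drop and retake
++ * ->uring_lock while waiting, running task_work in between so that the
++ * completions holding the remaining refs can make progress.
++ */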
++static __cold int io_rsrc_ref_quiesce(struct io_rsrc_data *data,
++ struct io_ring_ctx *ctx)
++{
++ int ret;
++
++ /* As we may drop ->uring_lock, another task may have started quiesce */
++ if (data->quiesce)
++ return -ENXIO;
++
++ data->quiesce = true;
++ do {
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ break;
++ io_rsrc_node_switch(ctx, data);
++
++ /* kill initial ref, already quiesced if zero */
++ if (atomic_dec_and_test(&data->refs))
++ break;
++ mutex_unlock(&ctx->uring_lock);
++ flush_delayed_work(&ctx->rsrc_put_work);
++ ret = wait_for_completion_interruptible(&data->done);
++ if (!ret) {
++ mutex_lock(&ctx->uring_lock);
++ if (atomic_read(&data->refs) > 0) {
++ /*
++ * it has been revived by another thread while
++ * we were unlocked
++ */
++ mutex_unlock(&ctx->uring_lock);
++ } else {
++ break;
++ }
++ }
++
++ atomic_inc(&data->refs);
++ /* wait for all work items potentially completing data->done */
++ flush_delayed_work(&ctx->rsrc_put_work);
++ reinit_completion(&data->done);
++
++ ret = io_run_task_work_sig();
++ mutex_lock(&ctx->uring_lock);
++ } while (ret >= 0);
++ data->quiesce = false;
++
++ return ret;
++}
++
++static u64 *io_get_tag_slot(struct io_rsrc_data *data, unsigned int idx)
++{
++ unsigned int off = idx & IO_RSRC_TAG_TABLE_MASK;
++ unsigned int table_idx = idx >> IO_RSRC_TAG_TABLE_SHIFT;
++
++ return &data->tags[table_idx][off];
++}
++
++static void io_rsrc_data_free(struct io_rsrc_data *data)
++{
++ size_t size = data->nr * sizeof(data->tags[0][0]);
++
++ if (data->tags)
++ io_free_page_table((void **)data->tags, size);
++ kfree(data);
++}
++
++static __cold int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
++ u64 __user *utags, unsigned nr,
++ struct io_rsrc_data **pdata)
++{
++ struct io_rsrc_data *data;
++ int ret = -ENOMEM;
++ unsigned i;
++
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++ data->tags = (u64 **)io_alloc_page_table(nr * sizeof(data->tags[0][0]));
++ if (!data->tags) {
++ kfree(data);
++ return -ENOMEM;
++ }
++
++ data->nr = nr;
++ data->ctx = ctx;
++ data->do_put = do_put;
++ if (utags) {
++ ret = -EFAULT;
++ for (i = 0; i < nr; i++) {
++ u64 *tag_slot = io_get_tag_slot(data, i);
++
++ if (copy_from_user(tag_slot, &utags[i],
++ sizeof(*tag_slot)))
++ goto fail;
++ }
++ }
++
++ atomic_set(&data->refs, 1);
++ init_completion(&data->done);
++ *pdata = data;
++ return 0;
++fail:
++ io_rsrc_data_free(data);
++ return ret;
++}
++
++static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
++{
++ table->files = kvcalloc(nr_files, sizeof(table->files[0]),
++ GFP_KERNEL_ACCOUNT);
++ if (unlikely(!table->files))
++ return false;
++
++ table->bitmap = bitmap_zalloc(nr_files, GFP_KERNEL_ACCOUNT);
++ if (unlikely(!table->bitmap)) {
++ kvfree(table->files);
++ return false;
++ }
++
++ return true;
++}
++
++static void io_free_file_tables(struct io_file_table *table)
++{
++ kvfree(table->files);
++ bitmap_free(table->bitmap);
++ table->files = NULL;
++ table->bitmap = NULL;
++}
++
++static inline void io_file_bitmap_set(struct io_file_table *table, int bit)
++{
++ WARN_ON_ONCE(test_bit(bit, table->bitmap));
++ __set_bit(bit, table->bitmap);
++ table->alloc_hint = bit + 1;
++}
++
++static inline void io_file_bitmap_clear(struct io_file_table *table, int bit)
++{
++ __clear_bit(bit, table->bitmap);
++ table->alloc_hint = bit;
++}
++
++static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
++{
++#if !defined(IO_URING_SCM_ALL)
++ int i;
++
++ for (i = 0; i < ctx->nr_user_files; i++) {
++ struct file *file = io_file_from_index(ctx, i);
++
++ if (!file)
++ continue;
++ if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM)
++ continue;
++ io_file_bitmap_clear(&ctx->file_table, i);
++ fput(file);
++ }
++#endif
++
++#if defined(CONFIG_UNIX)
++ if (ctx->ring_sock) {
++ struct sock *sock = ctx->ring_sock->sk;
++ struct sk_buff *skb;
++
++ while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
++ kfree_skb(skb);
++ }
++#endif
++ io_free_file_tables(&ctx->file_table);
++ io_rsrc_data_free(ctx->file_data);
++ ctx->file_data = NULL;
++ ctx->nr_user_files = 0;
++}
++
++static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
++{
++ unsigned nr = ctx->nr_user_files;
++ int ret;
++
++ if (!ctx->file_data)
++ return -ENXIO;
++
++ /*
++ * Quiesce may unlock ->uring_lock; while it's not held, prevent
++ * new requests from using the table.
++ */
++ ctx->nr_user_files = 0;
++ ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
++ ctx->nr_user_files = nr;
++ if (!ret)
++ __io_sqe_files_unregister(ctx);
++ return ret;
++}
++
++static void io_sq_thread_unpark(struct io_sq_data *sqd)
++ __releases(&sqd->lock)
++{
++ WARN_ON_ONCE(sqd->thread == current);
++
++ /*
++ * Do the dance rather than a conditional clear_bit(), because that would
++ * race with other threads incrementing park_pending and setting the bit.
++ */
++ clear_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++ if (atomic_dec_return(&sqd->park_pending))
++ set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++ mutex_unlock(&sqd->lock);
++}
++
++static void io_sq_thread_park(struct io_sq_data *sqd)
++ __acquires(&sqd->lock)
++{
++ WARN_ON_ONCE(sqd->thread == current);
++
++ atomic_inc(&sqd->park_pending);
++ set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
++ mutex_lock(&sqd->lock);
++ if (sqd->thread)
++ wake_up_process(sqd->thread);
++}
++
++static void io_sq_thread_stop(struct io_sq_data *sqd)
++{
++ WARN_ON_ONCE(sqd->thread == current);
++ WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
++
++ set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
++ mutex_lock(&sqd->lock);
++ if (sqd->thread)
++ wake_up_process(sqd->thread);
++ mutex_unlock(&sqd->lock);
++ wait_for_completion(&sqd->exited);
++}
++
++static void io_put_sq_data(struct io_sq_data *sqd)
++{
++ if (refcount_dec_and_test(&sqd->refs)) {
++ WARN_ON_ONCE(atomic_read(&sqd->park_pending));
++
++ io_sq_thread_stop(sqd);
++ kfree(sqd);
++ }
++}
++
++static void io_sq_thread_finish(struct io_ring_ctx *ctx)
++{
++ struct io_sq_data *sqd = ctx->sq_data;
++
++ if (sqd) {
++ io_sq_thread_park(sqd);
++ list_del_init(&ctx->sqd_list);
++ io_sqd_update_thread_idle(sqd);
++ io_sq_thread_unpark(sqd);
++
++ io_put_sq_data(sqd);
++ ctx->sq_data = NULL;
++ }
++}
++
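++/*
++ * Attach to the io_sq_data of an existing ring identified by p->wq_fd,
++ * after verifying that the fd is an io_uring instance belonging to the
++ * same thread group.
++ */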
++static struct io_sq_data *io_attach_sq_data(struct io_uring_params *p)
++{
++ struct io_ring_ctx *ctx_attach;
++ struct io_sq_data *sqd;
++ struct fd f;
++
++ f = fdget(p->wq_fd);
++ if (!f.file)
++ return ERR_PTR(-ENXIO);
++ if (f.file->f_op != &io_uring_fops) {
++ fdput(f);
++ return ERR_PTR(-EINVAL);
++ }
++
++ ctx_attach = f.file->private_data;
++ sqd = ctx_attach->sq_data;
++ if (!sqd) {
++ fdput(f);
++ return ERR_PTR(-EINVAL);
++ }
++ if (sqd->task_tgid != current->tgid) {
++ fdput(f);
++ return ERR_PTR(-EPERM);
++ }
++
++ refcount_inc(&sqd->refs);
++ fdput(f);
++ return sqd;
++}
++
++static struct io_sq_data *io_get_sq_data(struct io_uring_params *p,
++ bool *attached)
++{
++ struct io_sq_data *sqd;
++
++ *attached = false;
++ if (p->flags & IORING_SETUP_ATTACH_WQ) {
++ sqd = io_attach_sq_data(p);
++ if (!IS_ERR(sqd)) {
++ *attached = true;
++ return sqd;
++ }
++ /* fall through for EPERM case, setup new sqd/task */
++ if (PTR_ERR(sqd) != -EPERM)
++ return sqd;
++ }
++
++ sqd = kzalloc(sizeof(*sqd), GFP_KERNEL);
++ if (!sqd)
++ return ERR_PTR(-ENOMEM);
++
++ atomic_set(&sqd->park_pending, 0);
++ refcount_set(&sqd->refs, 1);
++ INIT_LIST_HEAD(&sqd->ctx_list);
++ mutex_init(&sqd->lock);
++ init_waitqueue_head(&sqd->wait);
++ init_completion(&sqd->exited);
++ return sqd;
++}
++
++/*
++ * Ensure the UNIX gc is aware of our file set, so we are certain that
++ * the io_uring can be safely unregistered on process exit, even if we have
++ * reference loops among the files. We account only files that can hold other
++ * files because otherwise they can't form a loop and so are not interesting
++ * for GC.
++ */
++static int io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
++{
++#if defined(CONFIG_UNIX)
++ struct sock *sk = ctx->ring_sock->sk;
++ struct sk_buff_head *head = &sk->sk_receive_queue;
++ struct scm_fp_list *fpl;
++ struct sk_buff *skb;
++
++ if (likely(!io_file_need_scm(file)))
++ return 0;
++
++ /*
++ * See if we can merge this file into an existing skb SCM_RIGHTS
++ * file set. If there's no room, fall back to allocating a new skb
++ * and filling it in.
++ */
++ spin_lock_irq(&head->lock);
++ skb = skb_peek(head);
++ if (skb && UNIXCB(skb).fp->count < SCM_MAX_FD)
++ __skb_unlink(skb, head);
++ else
++ skb = NULL;
++ spin_unlock_irq(&head->lock);
++
++ if (!skb) {
++ fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
++ if (!fpl)
++ return -ENOMEM;
++
++ skb = alloc_skb(0, GFP_KERNEL);
++ if (!skb) {
++ kfree(fpl);
++ return -ENOMEM;
++ }
++
++ fpl->user = get_uid(current_user());
++ fpl->max = SCM_MAX_FD;
++ fpl->count = 0;
++
++ UNIXCB(skb).fp = fpl;
++ skb->sk = sk;
++ skb->destructor = unix_destruct_scm;
++ refcount_add(skb->truesize, &sk->sk_wmem_alloc);
++ }
++
++ fpl = UNIXCB(skb).fp;
++ fpl->fp[fpl->count++] = get_file(file);
++ unix_inflight(fpl->user, file);
++ skb_queue_head(head, skb);
++ fput(file);
++#endif
++ return 0;
++}
++
++static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
++{
++ struct file *file = prsrc->file;
++#if defined(CONFIG_UNIX)
++ struct sock *sock = ctx->ring_sock->sk;
++ struct sk_buff_head list, *head = &sock->sk_receive_queue;
++ struct sk_buff *skb;
++ int i;
++
++ if (!io_file_need_scm(file)) {
++ fput(file);
++ return;
++ }
++
++ __skb_queue_head_init(&list);
++
++ /*
++ * Find the skb that holds this file in its SCM_RIGHTS. When found,
++ * remove this entry and rearrange the file array.
++ */
++ skb = skb_dequeue(head);
++ while (skb) {
++ struct scm_fp_list *fp;
++
++ fp = UNIXCB(skb).fp;
++ for (i = 0; i < fp->count; i++) {
++ int left;
++
++ if (fp->fp[i] != file)
++ continue;
++
++ unix_notinflight(fp->user, fp->fp[i]);
++ left = fp->count - 1 - i;
++ if (left) {
++ memmove(&fp->fp[i], &fp->fp[i + 1],
++ left * sizeof(struct file *));
++ }
++ fp->count--;
++ if (!fp->count) {
++ kfree_skb(skb);
++ skb = NULL;
++ } else {
++ __skb_queue_tail(&list, skb);
++ }
++ fput(file);
++ file = NULL;
++ break;
++ }
++
++ if (!file)
++ break;
++
++ __skb_queue_tail(&list, skb);
++
++ skb = skb_dequeue(head);
++ }
++
++ if (skb_peek(&list)) {
++ spin_lock_irq(&head->lock);
++ while ((skb = __skb_dequeue(&list)) != NULL)
++ __skb_queue_tail(head, skb);
++ spin_unlock_irq(&head->lock);
++ }
++#else
++ fput(file);
++#endif
++}
++
++static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
++{
++ struct io_rsrc_data *rsrc_data = ref_node->rsrc_data;
++ struct io_ring_ctx *ctx = rsrc_data->ctx;
++ struct io_rsrc_put *prsrc, *tmp;
++
++ list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
++ list_del(&prsrc->list);
++
++ if (prsrc->tag) {
++ if (ctx->flags & IORING_SETUP_IOPOLL)
++ mutex_lock(&ctx->uring_lock);
++
++ spin_lock(&ctx->completion_lock);
++ io_fill_cqe_aux(ctx, prsrc->tag, 0, 0);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ io_cqring_ev_posted(ctx);
++
++ if (ctx->flags & IORING_SETUP_IOPOLL)
++ mutex_unlock(&ctx->uring_lock);
++ }
++
++ rsrc_data->do_put(ctx, prsrc);
++ kfree(prsrc);
++ }
++
++ io_rsrc_node_destroy(ref_node);
++ if (atomic_dec_and_test(&rsrc_data->refs))
++ complete(&rsrc_data->done);
++}
++
++static void io_rsrc_put_work(struct work_struct *work)
++{
++ struct io_ring_ctx *ctx;
++ struct llist_node *node;
++
++ ctx = container_of(work, struct io_ring_ctx, rsrc_put_work.work);
++ node = llist_del_all(&ctx->rsrc_put_llist);
++
++ while (node) {
++ struct io_rsrc_node *ref_node;
++ struct llist_node *next = node->next;
++
++ ref_node = llist_entry(node, struct io_rsrc_node, llist);
++ __io_rsrc_put_work(ref_node);
++ node = next;
++ }
++}
++
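++/*
++ * Register a fixed file set from a user array of fds. Sparse sets are
++ * allowed via fd == -1; every real file is accounted with the UNIX gc as
++ * needed and installed in the ctx file table.
++ */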
++static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned nr_args, u64 __user *tags)
++{
++ __s32 __user *fds = (__s32 __user *) arg;
++ struct file *file;
++ int fd, ret;
++ unsigned i;
++
++ if (ctx->file_data)
++ return -EBUSY;
++ if (!nr_args)
++ return -EINVAL;
++ if (nr_args > IORING_MAX_FIXED_FILES)
++ return -EMFILE;
++ if (nr_args > rlimit(RLIMIT_NOFILE))
++ return -EMFILE;
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ return ret;
++ ret = io_rsrc_data_alloc(ctx, io_rsrc_file_put, tags, nr_args,
++ &ctx->file_data);
++ if (ret)
++ return ret;
++
++ if (!io_alloc_file_tables(&ctx->file_table, nr_args)) {
++ io_rsrc_data_free(ctx->file_data);
++ ctx->file_data = NULL;
++ return -ENOMEM;
++ }
++
++ for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
++ struct io_fixed_file *file_slot;
++
++ if (fds && copy_from_user(&fd, &fds[i], sizeof(fd))) {
++ ret = -EFAULT;
++ goto fail;
++ }
++ /* allow sparse sets */
++ if (!fds || fd == -1) {
++ ret = -EINVAL;
++ if (unlikely(*io_get_tag_slot(ctx->file_data, i)))
++ goto fail;
++ continue;
++ }
++
++ file = fget(fd);
++ ret = -EBADF;
++ if (unlikely(!file))
++ goto fail;
++
++ /*
++ * Don't allow io_uring instances to be registered. If UNIX
++ * isn't enabled, then this causes a reference cycle and this
++ * instance can never get freed. If UNIX is enabled we'll
++ * handle it just fine, but there's still no point in allowing
++ * a ring fd as it doesn't support regular read/write anyway.
++ */
++ if (file->f_op == &io_uring_fops) {
++ fput(file);
++ goto fail;
++ }
++ ret = io_scm_file_account(ctx, file);
++ if (ret) {
++ fput(file);
++ goto fail;
++ }
++ file_slot = io_fixed_file_slot(&ctx->file_table, i);
++ io_fixed_file_set(file_slot, file);
++ io_file_bitmap_set(&ctx->file_table, i);
++ }
++
++ io_rsrc_node_switch(ctx, NULL);
++ return 0;
++fail:
++ __io_sqe_files_unregister(ctx);
++ return ret;
++}
++
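++/*
++ * Queue a resource (file or buffer) for a deferred put once the given
++ * rsrc node's references drain, transferring the slot's tag so a CQE can
++ * be posted when the put happens.
++ */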
++static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
++ struct io_rsrc_node *node, void *rsrc)
++{
++ u64 *tag_slot = io_get_tag_slot(data, idx);
++ struct io_rsrc_put *prsrc;
++
++ prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
++ if (!prsrc)
++ return -ENOMEM;
++
++ prsrc->tag = *tag_slot;
++ *tag_slot = 0;
++ prsrc->rsrc = rsrc;
++ list_add(&prsrc->list, &node->rsrc_list);
++ return 0;
++}
++
++static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
++ unsigned int issue_flags, u32 slot_index)
++ __must_hold(&req->ctx->uring_lock)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ bool needs_switch = false;
++ struct io_fixed_file *file_slot;
++ int ret;
++
++ if (file->f_op == &io_uring_fops)
++ return -EBADF;
++ if (!ctx->file_data)
++ return -ENXIO;
++ if (slot_index >= ctx->nr_user_files)
++ return -EINVAL;
++
++ slot_index = array_index_nospec(slot_index, ctx->nr_user_files);
++ file_slot = io_fixed_file_slot(&ctx->file_table, slot_index);
++
++ if (file_slot->file_ptr) {
++ struct file *old_file;
++
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ goto err;
++
++ old_file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++ ret = io_queue_rsrc_removal(ctx->file_data, slot_index,
++ ctx->rsrc_node, old_file);
++ if (ret)
++ goto err;
++ file_slot->file_ptr = 0;
++ io_file_bitmap_clear(&ctx->file_table, slot_index);
++ needs_switch = true;
++ }
++
++ ret = io_scm_file_account(ctx, file);
++ if (!ret) {
++ *io_get_tag_slot(ctx->file_data, slot_index) = 0;
++ io_fixed_file_set(file_slot, file);
++ io_file_bitmap_set(&ctx->file_table, slot_index);
++ }
++err:
++ if (needs_switch)
++ io_rsrc_node_switch(ctx, ctx->file_data);
++ if (ret)
++ fput(file);
++ return ret;
++}
++
++static int __io_close_fixed(struct io_kiocb *req, unsigned int issue_flags,
++ unsigned int offset)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ struct io_fixed_file *file_slot;
++ struct file *file;
++ int ret;
++
++ io_ring_submit_lock(ctx, issue_flags);
++ ret = -ENXIO;
++ if (unlikely(!ctx->file_data))
++ goto out;
++ ret = -EINVAL;
++ if (offset >= ctx->nr_user_files)
++ goto out;
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ goto out;
++
++ offset = array_index_nospec(offset, ctx->nr_user_files);
++ file_slot = io_fixed_file_slot(&ctx->file_table, offset);
++ ret = -EBADF;
++ if (!file_slot->file_ptr)
++ goto out;
++
++ file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++ ret = io_queue_rsrc_removal(ctx->file_data, offset, ctx->rsrc_node, file);
++ if (ret)
++ goto out;
++
++ file_slot->file_ptr = 0;
++ io_file_bitmap_clear(&ctx->file_table, offset);
++ io_rsrc_node_switch(ctx, ctx->file_data);
++ ret = 0;
++out:
++ io_ring_submit_unlock(ctx, issue_flags);
++ return ret;
++}
++
++static inline int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
++{
++ return __io_close_fixed(req, issue_flags, req->close.file_slot - 1);
++}
++
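++/*
++ * Update entries of a registered file set: queue removal of whatever file
++ * currently occupies each slot, then install the new fd, or leave the
++ * slot empty for fd == -1. Returns the number of entries processed, or an
++ * error if nothing was updated.
++ */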
++static int __io_sqe_files_update(struct io_ring_ctx *ctx,
++ struct io_uring_rsrc_update2 *up,
++ unsigned nr_args)
++{
++ u64 __user *tags = u64_to_user_ptr(up->tags);
++ __s32 __user *fds = u64_to_user_ptr(up->data);
++ struct io_rsrc_data *data = ctx->file_data;
++ struct io_fixed_file *file_slot;
++ struct file *file;
++ int fd, i, err = 0;
++ unsigned int done;
++ bool needs_switch = false;
++
++ if (!ctx->file_data)
++ return -ENXIO;
++ if (up->offset + nr_args > ctx->nr_user_files)
++ return -EINVAL;
++
++ for (done = 0; done < nr_args; done++) {
++ u64 tag = 0;
++
++ if ((tags && copy_from_user(&tag, &tags[done], sizeof(tag))) ||
++ copy_from_user(&fd, &fds[done], sizeof(fd))) {
++ err = -EFAULT;
++ break;
++ }
++ if ((fd == IORING_REGISTER_FILES_SKIP || fd == -1) && tag) {
++ err = -EINVAL;
++ break;
++ }
++ if (fd == IORING_REGISTER_FILES_SKIP)
++ continue;
++
++ i = array_index_nospec(up->offset + done, ctx->nr_user_files);
++ file_slot = io_fixed_file_slot(&ctx->file_table, i);
++
++ if (file_slot->file_ptr) {
++ file = (struct file *)(file_slot->file_ptr & FFS_MASK);
++ err = io_queue_rsrc_removal(data, i, ctx->rsrc_node, file);
++ if (err)
++ break;
++ file_slot->file_ptr = 0;
++ io_file_bitmap_clear(&ctx->file_table, i);
++ needs_switch = true;
++ }
++ if (fd != -1) {
++ file = fget(fd);
++ if (!file) {
++ err = -EBADF;
++ break;
++ }
++ /*
++ * Don't allow io_uring instances to be registered. If
++ * UNIX isn't enabled, then this causes a reference
++ * cycle and this instance can never get freed. If UNIX
++ * is enabled we'll handle it just fine, but there's
++ * still no point in allowing a ring fd as it doesn't
++ * support regular read/write anyway.
++ */
++ if (file->f_op == &io_uring_fops) {
++ fput(file);
++ err = -EBADF;
++ break;
++ }
++ err = io_scm_file_account(ctx, file);
++ if (err) {
++ fput(file);
++ break;
++ }
++ *io_get_tag_slot(data, i) = tag;
++ io_fixed_file_set(file_slot, file);
++ io_file_bitmap_set(&ctx->file_table, i);
++ }
++ }
++
++ if (needs_switch)
++ io_rsrc_node_switch(ctx, data);
++ return done ? done : err;
++}
++
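++/*
++ * Create the io-wq instance used to offload this ring's async work,
++ * reusing the ctx's hash map (which serializes hashed work items) if one
++ * already exists.
++ */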
++static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
++ struct task_struct *task)
++{
++ struct io_wq_hash *hash;
++ struct io_wq_data data;
++ unsigned int concurrency;
++
++ mutex_lock(&ctx->uring_lock);
++ hash = ctx->hash_map;
++ if (!hash) {
++ hash = kzalloc(sizeof(*hash), GFP_KERNEL);
++ if (!hash) {
++ mutex_unlock(&ctx->uring_lock);
++ return ERR_PTR(-ENOMEM);
++ }
++ refcount_set(&hash->refs, 1);
++ init_waitqueue_head(&hash->wait);
++ ctx->hash_map = hash;
++ }
++ mutex_unlock(&ctx->uring_lock);
++
++ data.hash = hash;
++ data.task = task;
++ data.free_work = io_wq_free_work;
++ data.do_work = io_wq_submit_work;
++
++ /* Do QD, or 4 * CPUS, whatever is smallest */
++ concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
++
++ return io_wq_create(concurrency, &data);
++}
++
++static __cold int io_uring_alloc_task_context(struct task_struct *task,
++ struct io_ring_ctx *ctx)
++{
++ struct io_uring_task *tctx;
++ int ret;
++
++ tctx = kzalloc(sizeof(*tctx), GFP_KERNEL);
++ if (unlikely(!tctx))
++ return -ENOMEM;
++
++ tctx->registered_rings = kcalloc(IO_RINGFD_REG_MAX,
++ sizeof(struct file *), GFP_KERNEL);
++ if (unlikely(!tctx->registered_rings)) {
++ kfree(tctx);
++ return -ENOMEM;
++ }
++
++ ret = percpu_counter_init(&tctx->inflight, 0, GFP_KERNEL);
++ if (unlikely(ret)) {
++ kfree(tctx->registered_rings);
++ kfree(tctx);
++ return ret;
++ }
++
++ tctx->io_wq = io_init_wq_offload(ctx, task);
++ if (IS_ERR(tctx->io_wq)) {
++ ret = PTR_ERR(tctx->io_wq);
++ percpu_counter_destroy(&tctx->inflight);
++ kfree(tctx->registered_rings);
++ kfree(tctx);
++ return ret;
++ }
++
++ xa_init(&tctx->xa);
++ init_waitqueue_head(&tctx->wait);
++ atomic_set(&tctx->in_idle, 0);
++ atomic_set(&tctx->inflight_tracked, 0);
++ task->io_uring = tctx;
++ spin_lock_init(&tctx->task_lock);
++ INIT_WQ_LIST(&tctx->task_list);
++ INIT_WQ_LIST(&tctx->prio_task_list);
++ init_task_work(&tctx->task_work, tctx_task_work);
++ return 0;
++}
++
++void __io_uring_free(struct task_struct *tsk)
++{
++ struct io_uring_task *tctx = tsk->io_uring;
++
++ WARN_ON_ONCE(!xa_empty(&tctx->xa));
++ WARN_ON_ONCE(tctx->io_wq);
++ WARN_ON_ONCE(tctx->cached_refs);
++
++ kfree(tctx->registered_rings);
++ percpu_counter_destroy(&tctx->inflight);
++ kfree(tctx);
++ tsk->io_uring = NULL;
++}
++
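++/*
++ * Set up SQPOLL for a ring: find or allocate the io_sq_data, apply CPU
++ * affinity if IORING_SETUP_SQ_AFF was requested, and spawn the SQPOLL
++ * thread unless we attached to an existing one.
++ */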
++static __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
++ struct io_uring_params *p)
++{
++ int ret;
++
++ /* Retain compatibility with failing for an invalid attach attempt */
++ if ((ctx->flags & (IORING_SETUP_ATTACH_WQ | IORING_SETUP_SQPOLL)) ==
++ IORING_SETUP_ATTACH_WQ) {
++ struct fd f;
++
++ f = fdget(p->wq_fd);
++ if (!f.file)
++ return -ENXIO;
++ if (f.file->f_op != &io_uring_fops) {
++ fdput(f);
++ return -EINVAL;
++ }
++ fdput(f);
++ }
++ if (ctx->flags & IORING_SETUP_SQPOLL) {
++ struct task_struct *tsk;
++ struct io_sq_data *sqd;
++ bool attached;
++
++ ret = security_uring_sqpoll();
++ if (ret)
++ return ret;
++
++ sqd = io_get_sq_data(p, &attached);
++ if (IS_ERR(sqd)) {
++ ret = PTR_ERR(sqd);
++ goto err;
++ }
++
++ ctx->sq_creds = get_current_cred();
++ ctx->sq_data = sqd;
++ ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
++ if (!ctx->sq_thread_idle)
++ ctx->sq_thread_idle = HZ;
++
++ io_sq_thread_park(sqd);
++ list_add(&ctx->sqd_list, &sqd->ctx_list);
++ io_sqd_update_thread_idle(sqd);
++ /* don't attach to a dying SQPOLL thread, would be racy */
++ ret = (attached && !sqd->thread) ? -ENXIO : 0;
++ io_sq_thread_unpark(sqd);
++
++ if (ret < 0)
++ goto err;
++ if (attached)
++ return 0;
++
++ if (p->flags & IORING_SETUP_SQ_AFF) {
++ int cpu = p->sq_thread_cpu;
++
++ ret = -EINVAL;
++ if (cpu >= nr_cpu_ids || !cpu_online(cpu))
++ goto err_sqpoll;
++ sqd->sq_cpu = cpu;
++ } else {
++ sqd->sq_cpu = -1;
++ }
++
++ sqd->task_pid = current->pid;
++ sqd->task_tgid = current->tgid;
++ tsk = create_io_thread(io_sq_thread, sqd, NUMA_NO_NODE);
++ if (IS_ERR(tsk)) {
++ ret = PTR_ERR(tsk);
++ goto err_sqpoll;
++ }
++
++ sqd->thread = tsk;
++ ret = io_uring_alloc_task_context(tsk, ctx);
++ wake_up_new_task(tsk);
++ if (ret)
++ goto err;
++ } else if (p->flags & IORING_SETUP_SQ_AFF) {
++ /* Can't have SQ_AFF without SQPOLL */
++ ret = -EINVAL;
++ goto err;
++ }
++
++ return 0;
++err_sqpoll:
++ complete(&ctx->sq_data->exited);
++err:
++ io_sq_thread_finish(ctx);
++ return ret;
++}
++
++static inline void __io_unaccount_mem(struct user_struct *user,
++ unsigned long nr_pages)
++{
++ atomic_long_sub(nr_pages, &user->locked_vm);
++}
++
++static inline int __io_account_mem(struct user_struct *user,
++ unsigned long nr_pages)
++{
++ unsigned long page_limit, cur_pages, new_pages;
++
++ /* Don't allow more pages than we can safely lock */
++ page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
++
++ do {
++ cur_pages = atomic_long_read(&user->locked_vm);
++ new_pages = cur_pages + nr_pages;
++ if (new_pages > page_limit)
++ return -ENOMEM;
++ } while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
++ new_pages) != cur_pages);
++
++ return 0;
++}
++
++static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++{
++ if (ctx->user)
++ __io_unaccount_mem(ctx->user, nr_pages);
++
++ if (ctx->mm_account)
++ atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
++}
++
++static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++{
++ int ret;
++
++ if (ctx->user) {
++ ret = __io_account_mem(ctx->user, nr_pages);
++ if (ret)
++ return ret;
++ }
++
++ if (ctx->mm_account)
++ atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
++
++ return 0;
++}
++
++static void io_mem_free(void *ptr)
++{
++ struct page *page;
++
++ if (!ptr)
++ return;
++
++ page = virt_to_head_page(ptr);
++ if (put_page_testzero(page))
++ free_compound_page(page);
++}
++
++static void *io_mem_alloc(size_t size)
++{
++ gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
++
++ return (void *) __get_free_pages(gfp, get_order(size));
++}
++
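++/*
++ * Compute the allocation size for the shared rings: the io_rings header
++ * and CQE array (doubled for IORING_SETUP_CQE32), followed by the SQ
++ * index array whose offset is returned in *sq_offset. Returns SIZE_MAX
++ * on overflow.
++ */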
++static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
++ unsigned int cq_entries, size_t *sq_offset)
++{
++ struct io_rings *rings;
++ size_t off, sq_array_size;
++
++ off = struct_size(rings, cqes, cq_entries);
++ if (off == SIZE_MAX)
++ return SIZE_MAX;
++ if (ctx->flags & IORING_SETUP_CQE32) {
++ if (check_shl_overflow(off, 1, &off))
++ return SIZE_MAX;
++ }
++
++#ifdef CONFIG_SMP
++ off = ALIGN(off, SMP_CACHE_BYTES);
++ if (off == 0)
++ return SIZE_MAX;
++#endif
++
++ if (sq_offset)
++ *sq_offset = off;
++
++ sq_array_size = array_size(sizeof(u32), sq_entries);
++ if (sq_array_size == SIZE_MAX)
++ return SIZE_MAX;
++
++ if (check_add_overflow(off, sq_array_size, &off))
++ return SIZE_MAX;
++
++ return off;
++}
++
++static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slot)
++{
++ struct io_mapped_ubuf *imu = *slot;
++ unsigned int i;
++
++ if (imu != ctx->dummy_ubuf) {
++ for (i = 0; i < imu->nr_bvecs; i++)
++ unpin_user_page(imu->bvec[i].bv_page);
++ if (imu->acct_pages)
++ io_unaccount_mem(ctx, imu->acct_pages);
++ kvfree(imu);
++ }
++ *slot = NULL;
++}
++
++static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
++{
++ io_buffer_unmap(ctx, &prsrc->buf);
++ prsrc->buf = NULL;
++}
++
++static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
++{
++ unsigned int i;
++
++ for (i = 0; i < ctx->nr_user_bufs; i++)
++ io_buffer_unmap(ctx, &ctx->user_bufs[i]);
++ kfree(ctx->user_bufs);
++ io_rsrc_data_free(ctx->buf_data);
++ ctx->user_bufs = NULL;
++ ctx->buf_data = NULL;
++ ctx->nr_user_bufs = 0;
++}
++
++static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
++{
++ unsigned nr = ctx->nr_user_bufs;
++ int ret;
++
++ if (!ctx->buf_data)
++ return -ENXIO;
++
++ /*
++ * Quiesce may unlock ->uring_lock; while it's not held, prevent
++ * new requests from using the table.
++ */
++ ctx->nr_user_bufs = 0;
++ ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
++ ctx->nr_user_bufs = nr;
++ if (!ret)
++ __io_sqe_buffers_unregister(ctx);
++ return ret;
++}
++
++static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
++ void __user *arg, unsigned index)
++{
++ struct iovec __user *src;
++
++#ifdef CONFIG_COMPAT
++ if (ctx->compat) {
++ struct compat_iovec __user *ciovs;
++ struct compat_iovec ciov;
++
++ ciovs = (struct compat_iovec __user *) arg;
++ if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
++ return -EFAULT;
++
++ dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
++ dst->iov_len = ciov.iov_len;
++ return 0;
++ }
++#endif
++ src = (struct iovec __user *) arg;
++ if (copy_from_user(dst, &src[index], sizeof(*dst)))
++ return -EFAULT;
++ return 0;
++}
++
++/*
++ * Not super efficient, but this is only done at registration time. And we do cache
++ * the last compound head, so generally we'll only do a full search if we don't
++ * match that one.
++ *
++ * We check if the given compound head page has already been accounted, to
++ * avoid double accounting it. This allows us to account the full size of the
++ * page, not just the constituent pages of a huge page.
++ */
++static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
++ int nr_pages, struct page *hpage)
++{
++ int i, j;
++
++ /* check current page array */
++ for (i = 0; i < nr_pages; i++) {
++ if (!PageCompound(pages[i]))
++ continue;
++ if (compound_head(pages[i]) == hpage)
++ return true;
++ }
++
++ /* check previously registered pages */
++ for (i = 0; i < ctx->nr_user_bufs; i++) {
++ struct io_mapped_ubuf *imu = ctx->user_bufs[i];
++
++ for (j = 0; j < imu->nr_bvecs; j++) {
++ if (!PageCompound(imu->bvec[j].bv_page))
++ continue;
++ if (compound_head(imu->bvec[j].bv_page) == hpage)
++ return true;
++ }
++ }
++
++ return false;
++}
++
++static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
++ int nr_pages, struct io_mapped_ubuf *imu,
++ struct page **last_hpage)
++{
++ int i, ret;
++
++ imu->acct_pages = 0;
++ for (i = 0; i < nr_pages; i++) {
++ if (!PageCompound(pages[i])) {
++ imu->acct_pages++;
++ } else {
++ struct page *hpage;
++
++ hpage = compound_head(pages[i]);
++ if (hpage == *last_hpage)
++ continue;
++ *last_hpage = hpage;
++ if (headpage_already_acct(ctx, pages, i, hpage))
++ continue;
++ imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
++ }
++ }
++
++ if (!imu->acct_pages)
++ return 0;
++
++ ret = io_account_mem(ctx, imu->acct_pages);
++ if (ret)
++ imu->acct_pages = 0;
++ return ret;
++}
++
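++/*
++ * Pin the user pages backing [ubuf, ubuf + len) for long-term use. File
++ * backed mappings other than shmem and hugetlb are rejected, and on any
++ * failure every page pinned so far is released again.
++ */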
++static struct page **io_pin_pages(unsigned long ubuf, unsigned long len,
++ int *npages)
++{
++ unsigned long start, end, nr_pages;
++ struct vm_area_struct **vmas = NULL;
++ struct page **pages = NULL;
++ int i, pret, ret = -ENOMEM;
++
++ end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++ start = ubuf >> PAGE_SHIFT;
++ nr_pages = end - start;
++
++ pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
++ if (!pages)
++ goto done;
++
++ vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
++ GFP_KERNEL);
++ if (!vmas)
++ goto done;
++
++ ret = 0;
++ mmap_read_lock(current->mm);
++ pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
++ pages, vmas);
++ if (pret == nr_pages) {
++ /* don't support file backed memory */
++ for (i = 0; i < nr_pages; i++) {
++ struct vm_area_struct *vma = vmas[i];
++
++ if (vma_is_shmem(vma))
++ continue;
++ if (vma->vm_file &&
++ !is_file_hugepages(vma->vm_file)) {
++ ret = -EOPNOTSUPP;
++ break;
++ }
++ }
++ *npages = nr_pages;
++ } else {
++ ret = pret < 0 ? pret : -EFAULT;
++ }
++ mmap_read_unlock(current->mm);
++ if (ret) {
++ /*
++ * if we did partial map, or found file backed vmas,
++ * release any pages we did get
++ */
++ if (pret > 0)
++ unpin_user_pages(pages, pret);
++ goto done;
++ }
++ ret = 0;
++done:
++ kvfree(vmas);
++ if (ret < 0) {
++ kvfree(pages);
++ pages = ERR_PTR(ret);
++ }
++ return pages;
++}
++
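++/*
++ * Register a single fixed buffer: pin its pages, charge them against the
++ * memlock limit, and build the bvec array describing the mapping. A NULL
++ * iov_base installs the dummy buffer so the slot stays sparse.
++ */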
++static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
++ struct io_mapped_ubuf **pimu,
++ struct page **last_hpage)
++{
++ struct io_mapped_ubuf *imu = NULL;
++ struct page **pages = NULL;
++ unsigned long off;
++ size_t size;
++ int ret, nr_pages, i;
++
++ if (!iov->iov_base) {
++ *pimu = ctx->dummy_ubuf;
++ return 0;
++ }
++
++ *pimu = NULL;
++ ret = -ENOMEM;
++
++ pages = io_pin_pages((unsigned long) iov->iov_base, iov->iov_len,
++ &nr_pages);
++ if (IS_ERR(pages)) {
++ ret = PTR_ERR(pages);
++ pages = NULL;
++ goto done;
++ }
++
++ imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
++ if (!imu)
++ goto done;
++
++ ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
++ if (ret) {
++ unpin_user_pages(pages, nr_pages);
++ goto done;
++ }
++
++ off = (unsigned long) iov->iov_base & ~PAGE_MASK;
++ size = iov->iov_len;
++ for (i = 0; i < nr_pages; i++) {
++ size_t vec_len;
++
++ vec_len = min_t(size_t, size, PAGE_SIZE - off);
++ imu->bvec[i].bv_page = pages[i];
++ imu->bvec[i].bv_len = vec_len;
++ imu->bvec[i].bv_offset = off;
++ off = 0;
++ size -= vec_len;
++ }
++ /* store original address for later verification */
++ imu->ubuf = (unsigned long) iov->iov_base;
++ imu->ubuf_end = imu->ubuf + iov->iov_len;
++ imu->nr_bvecs = nr_pages;
++ *pimu = imu;
++ ret = 0;
++done:
++ if (ret)
++ kvfree(imu);
++ kvfree(pages);
++ return ret;
++}
++
++static int io_buffers_map_alloc(struct io_ring_ctx *ctx, unsigned int nr_args)
++{
++ ctx->user_bufs = kcalloc(nr_args, sizeof(*ctx->user_bufs), GFP_KERNEL);
++ return ctx->user_bufs ? 0 : -ENOMEM;
++}
++
++static int io_buffer_validate(struct iovec *iov)
++{
++ unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
++
++ /*
++ * Don't impose further limits on the size and buffer
++ * constraints here; we'll return -EINVAL later when IO is
++ * submitted if they are wrong.
++ */
++ if (!iov->iov_base)
++ return iov->iov_len ? -EFAULT : 0;
++ if (!iov->iov_len)
++ return -EFAULT;
++
++ /* arbitrary limit, but we need something */
++ if (iov->iov_len > SZ_1G)
++ return -EFAULT;
++
++ if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
++ return -EOVERFLOW;
++
++ return 0;
++}
++
++static int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned int nr_args, u64 __user *tags)
++{
++ struct page *last_hpage = NULL;
++ struct io_rsrc_data *data;
++ int i, ret;
++ struct iovec iov;
++
++ if (ctx->user_bufs)
++ return -EBUSY;
++ if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS)
++ return -EINVAL;
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ return ret;
++ ret = io_rsrc_data_alloc(ctx, io_rsrc_buf_put, tags, nr_args, &data);
++ if (ret)
++ return ret;
++ ret = io_buffers_map_alloc(ctx, nr_args);
++ if (ret) {
++ io_rsrc_data_free(data);
++ return ret;
++ }
++
++ for (i = 0; i < nr_args; i++, ctx->nr_user_bufs++) {
++ if (arg) {
++ ret = io_copy_iov(ctx, &iov, arg, i);
++ if (ret)
++ break;
++ ret = io_buffer_validate(&iov);
++ if (ret)
++ break;
++ } else {
++ memset(&iov, 0, sizeof(iov));
++ }
++
++ if (!iov.iov_base && *io_get_tag_slot(data, i)) {
++ ret = -EINVAL;
++ break;
++ }
++
++ ret = io_sqe_buffer_register(ctx, &iov, &ctx->user_bufs[i],
++ &last_hpage);
++ if (ret)
++ break;
++ }
++
++ WARN_ON_ONCE(ctx->buf_data);
++
++ ctx->buf_data = data;
++ if (ret)
++ __io_sqe_buffers_unregister(ctx);
++ else
++ io_rsrc_node_switch(ctx, NULL);
++ return ret;
++}
++
++static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
++ struct io_uring_rsrc_update2 *up,
++ unsigned int nr_args)
++{
++ u64 __user *tags = u64_to_user_ptr(up->tags);
++ struct iovec iov, __user *iovs = u64_to_user_ptr(up->data);
++ struct page *last_hpage = NULL;
++ bool needs_switch = false;
++ __u32 done;
++ int i, err;
++
++ if (!ctx->buf_data)
++ return -ENXIO;
++ if (up->offset + nr_args > ctx->nr_user_bufs)
++ return -EINVAL;
++
++ for (done = 0; done < nr_args; done++) {
++ struct io_mapped_ubuf *imu;
++ int offset = up->offset + done;
++ u64 tag = 0;
++
++ err = io_copy_iov(ctx, &iov, iovs, done);
++ if (err)
++ break;
++ if (tags && copy_from_user(&tag, &tags[done], sizeof(tag))) {
++ err = -EFAULT;
++ break;
++ }
++ err = io_buffer_validate(&iov);
++ if (err)
++ break;
++ if (!iov.iov_base && tag) {
++ err = -EINVAL;
++ break;
++ }
++ err = io_sqe_buffer_register(ctx, &iov, &imu, &last_hpage);
++ if (err)
++ break;
++
++ i = array_index_nospec(offset, ctx->nr_user_bufs);
++ if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
++ err = io_queue_rsrc_removal(ctx->buf_data, i,
++ ctx->rsrc_node, ctx->user_bufs[i]);
++ if (unlikely(err)) {
++ io_buffer_unmap(ctx, &imu);
++ break;
++ }
++ ctx->user_bufs[i] = NULL;
++ needs_switch = true;
++ }
++
++ ctx->user_bufs[i] = imu;
++ *io_get_tag_slot(ctx->buf_data, offset) = tag;
++ }
++
++ if (needs_switch)
++ io_rsrc_node_switch(ctx, ctx->buf_data);
++ return done ? done : err;
++}
++
++static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned int eventfd_async)
++{
++ struct io_ev_fd *ev_fd;
++ __s32 __user *fds = arg;
++ int fd;
++
++ ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
++ lockdep_is_held(&ctx->uring_lock));
++ if (ev_fd)
++ return -EBUSY;
++
++ if (copy_from_user(&fd, fds, sizeof(*fds)))
++ return -EFAULT;
++
++ ev_fd = kmalloc(sizeof(*ev_fd), GFP_KERNEL);
++ if (!ev_fd)
++ return -ENOMEM;
++
++ ev_fd->cq_ev_fd = eventfd_ctx_fdget(fd);
++ if (IS_ERR(ev_fd->cq_ev_fd)) {
++ int ret = PTR_ERR(ev_fd->cq_ev_fd);
++ kfree(ev_fd);
++ return ret;
++ }
++ ev_fd->eventfd_async = eventfd_async;
++ ctx->has_evfd = true;
++ rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
++ return 0;
++}
++
++static void io_eventfd_put(struct rcu_head *rcu)
++{
++ struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
++
++ eventfd_ctx_put(ev_fd->cq_ev_fd);
++ kfree(ev_fd);
++}
++
++static int io_eventfd_unregister(struct io_ring_ctx *ctx)
++{
++ struct io_ev_fd *ev_fd;
++
++ ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
++ lockdep_is_held(&ctx->uring_lock));
++ if (ev_fd) {
++ ctx->has_evfd = false;
++ rcu_assign_pointer(ctx->io_ev_fd, NULL);
++ call_rcu(&ev_fd->rcu, io_eventfd_put);
++ return 0;
++ }
++
++ return -ENXIO;
++}
++
++static void io_destroy_buffers(struct io_ring_ctx *ctx)
++{
++ struct io_buffer_list *bl;
++ unsigned long index;
++ int i;
++
++ for (i = 0; i < BGID_ARRAY; i++) {
++ if (!ctx->io_bl)
++ break;
++ __io_remove_buffers(ctx, &ctx->io_bl[i], -1U);
++ }
++
++ xa_for_each(&ctx->io_bl_xa, index, bl) {
++ xa_erase(&ctx->io_bl_xa, bl->bgid);
++ __io_remove_buffers(ctx, bl, -1U);
++ kfree(bl);
++ }
++
++ while (!list_empty(&ctx->io_buffers_pages)) {
++ struct page *page;
++
++ page = list_first_entry(&ctx->io_buffers_pages, struct page, lru);
++ list_del_init(&page->lru);
++ __free_page(page);
++ }
++}
++
++static void io_req_caches_free(struct io_ring_ctx *ctx)
++{
++ struct io_submit_state *state = &ctx->submit_state;
++ int nr = 0;
++
++ mutex_lock(&ctx->uring_lock);
++ io_flush_cached_locked_reqs(ctx, state);
++
++ while (!io_req_cache_empty(ctx)) {
++ struct io_wq_work_node *node;
++ struct io_kiocb *req;
++
++ node = wq_stack_extract(&state->free_list);
++ req = container_of(node, struct io_kiocb, comp_list);
++ kmem_cache_free(req_cachep, req);
++ nr++;
++ }
++ if (nr)
++ percpu_ref_put_many(&ctx->refs, nr);
++ mutex_unlock(&ctx->uring_lock);
++}
++
++static void io_wait_rsrc_data(struct io_rsrc_data *data)
++{
++ if (data && !atomic_dec_and_test(&data->refs))
++ wait_for_completion(&data->done);
++}
++
++static void io_flush_apoll_cache(struct io_ring_ctx *ctx)
++{
++ struct async_poll *apoll;
++
++ while (!list_empty(&ctx->apoll_cache)) {
++ apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
++ poll.wait.entry);
++ list_del(&apoll->poll.wait.entry);
++ kfree(apoll);
++ }
++}
++
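++/*
++ * Final teardown of a ring ctx: stop SQPOLL, unregister buffers, files
++ * and any eventfd, flush pending rsrc puts, and free the rings, caches
++ * and the ctx itself.
++ */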
++static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
++{
++ io_sq_thread_finish(ctx);
++
++ if (ctx->mm_account) {
++ mmdrop(ctx->mm_account);
++ ctx->mm_account = NULL;
++ }
++
++ io_rsrc_refs_drop(ctx);
++ /* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
++ io_wait_rsrc_data(ctx->buf_data);
++ io_wait_rsrc_data(ctx->file_data);
++
++ mutex_lock(&ctx->uring_lock);
++ if (ctx->buf_data)
++ __io_sqe_buffers_unregister(ctx);
++ if (ctx->file_data)
++ __io_sqe_files_unregister(ctx);
++ if (ctx->rings)
++ __io_cqring_overflow_flush(ctx, true);
++ io_eventfd_unregister(ctx);
++ io_flush_apoll_cache(ctx);
++ mutex_unlock(&ctx->uring_lock);
++ io_destroy_buffers(ctx);
++ if (ctx->sq_creds)
++ put_cred(ctx->sq_creds);
++
++ /* there are no registered resources left, nobody uses it */
++ if (ctx->rsrc_node)
++ io_rsrc_node_destroy(ctx->rsrc_node);
++ if (ctx->rsrc_backup_node)
++ io_rsrc_node_destroy(ctx->rsrc_backup_node);
++ flush_delayed_work(&ctx->rsrc_put_work);
++ flush_delayed_work(&ctx->fallback_work);
++
++ WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
++ WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
++
++#if defined(CONFIG_UNIX)
++ if (ctx->ring_sock) {
++ ctx->ring_sock->file = NULL; /* so that iput() is called */
++ sock_release(ctx->ring_sock);
++ }
++#endif
++ WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
++
++ io_mem_free(ctx->rings);
++ io_mem_free(ctx->sq_sqes);
++
++ percpu_ref_exit(&ctx->refs);
++ free_uid(ctx->user);
++ io_req_caches_free(ctx);
++ if (ctx->hash_map)
++ io_wq_put_hash(ctx->hash_map);
++ kfree(ctx->cancel_hash);
++ kfree(ctx->dummy_ubuf);
++ kfree(ctx->io_bl);
++ xa_destroy(&ctx->io_bl_xa);
++ kfree(ctx);
++}
++
++static __poll_t io_uring_poll(struct file *file, poll_table *wait)
++{
++ struct io_ring_ctx *ctx = file->private_data;
++ __poll_t mask = 0;
++
++ poll_wait(file, &ctx->cq_wait, wait);
++ /*
++ * synchronizes with barrier from wq_has_sleeper call in
++ * io_commit_cqring
++ */
++ smp_rmb();
++ if (!io_sqring_full(ctx))
++ mask |= EPOLLOUT | EPOLLWRNORM;
++
++ /*
++ * Don't flush cqring overflow list here, just do a simple check.
++ * Otherwise there could possibly be an ABBA deadlock:
++ *       CPU0                    CPU1
++ *       ----                    ----
++ * lock(&ctx->uring_lock);
++ *                               lock(&ep->mtx);
++ *                               lock(&ctx->uring_lock);
++ * lock(&ep->mtx);
++ *
++ * Users may get EPOLLIN while seeing nothing in the cqring, which
++ * pushes them to do the flush.
++ */
++ if (io_cqring_events(ctx) ||
++ test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
++ mask |= EPOLLIN | EPOLLRDNORM;
++
++ return mask;
++}
++
++static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
++{
++ const struct cred *creds;
++
++ creds = xa_erase(&ctx->personalities, id);
++ if (creds) {
++ put_cred(creds);
++ return 0;
++ }
++
++ return -EINVAL;
++}
++
++struct io_tctx_exit {
++ struct callback_head task_work;
++ struct completion completion;
++ struct io_ring_ctx *ctx;
++};
++
++static __cold void io_tctx_exit_cb(struct callback_head *cb)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ struct io_tctx_exit *work;
++
++ work = container_of(cb, struct io_tctx_exit, task_work);
++ /*
++ * When @in_idle, we're in cancellation and it's racy to remove the
++ * node. It'll be removed by the end of cancellation, just ignore it.
++ */
++ if (!atomic_read(&tctx->in_idle))
++ io_uring_del_tctx_node((unsigned long)work->ctx);
++ complete(&work->completion);
++}
++
++static __cold bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
++{
++ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++
++ return req->ctx == data;
++}
++
++static __cold void io_ring_exit_work(struct work_struct *work)
++{
++ struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
++ unsigned long timeout = jiffies + HZ * 60 * 5;
++ unsigned long interval = HZ / 20;
++ struct io_tctx_exit exit;
++ struct io_tctx_node *node;
++ int ret;
++
++ /*
++ * If we're doing polled IO and end up having requests being
++ * submitted async (out-of-line), then completions can come in while
++ * we're waiting for refs to drop. We need to reap these manually,
++ * as nobody else will be looking for them.
++ */
++ do {
++ io_uring_try_cancel_requests(ctx, NULL, true);
++ if (ctx->sq_data) {
++ struct io_sq_data *sqd = ctx->sq_data;
++ struct task_struct *tsk;
++
++ io_sq_thread_park(sqd);
++ tsk = sqd->thread;
++ if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
++ io_wq_cancel_cb(tsk->io_uring->io_wq,
++ io_cancel_ctx_cb, ctx, true);
++ io_sq_thread_unpark(sqd);
++ }
++
++ io_req_caches_free(ctx);
++
++ if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
++ /* there is little hope left, don't run it too often */
++ interval = HZ * 60;
++ }
++ } while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
++
++ init_completion(&exit.completion);
++ init_task_work(&exit.task_work, io_tctx_exit_cb);
++ exit.ctx = ctx;
++ /*
++ * Some may use context even when all refs and requests have been put,
++ * and they are free to do so while still holding uring_lock or
++ * completion_lock, see io_req_task_submit(). Apart from other work,
++ * this lock/unlock section also waits for them to finish.
++ */
++ mutex_lock(&ctx->uring_lock);
++ while (!list_empty(&ctx->tctx_list)) {
++ WARN_ON_ONCE(time_after(jiffies, timeout));
++
++ node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
++ ctx_node);
++ /* don't spin on a single task if cancellation failed */
++ list_rotate_left(&ctx->tctx_list);
++ ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
++ if (WARN_ON_ONCE(ret))
++ continue;
++
++ mutex_unlock(&ctx->uring_lock);
++ wait_for_completion(&exit.completion);
++ mutex_lock(&ctx->uring_lock);
++ }
++ mutex_unlock(&ctx->uring_lock);
++ spin_lock(&ctx->completion_lock);
++ spin_unlock(&ctx->completion_lock);
++
++ io_ring_ctx_free(ctx);
++}
++
++/* Returns true if we found and killed one or more timeouts */
++static __cold bool io_kill_timeouts(struct io_ring_ctx *ctx,
++ struct task_struct *tsk, bool cancel_all)
++{
++ struct io_kiocb *req, *tmp;
++ int canceled = 0;
++
++ spin_lock(&ctx->completion_lock);
++ spin_lock_irq(&ctx->timeout_lock);
++ list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
++ if (io_match_task(req, tsk, cancel_all)) {
++ io_kill_timeout(req, -ECANCELED);
++ canceled++;
++ }
++ }
++ spin_unlock_irq(&ctx->timeout_lock);
++ io_commit_cqring(ctx);
++ spin_unlock(&ctx->completion_lock);
++ if (canceled != 0)
++ io_cqring_ev_posted(ctx);
++ return canceled != 0;
++}
++
++static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
++{
++ unsigned long index;
++ struct creds *creds;
++
++ mutex_lock(&ctx->uring_lock);
++ percpu_ref_kill(&ctx->refs);
++ if (ctx->rings)
++ __io_cqring_overflow_flush(ctx, true);
++ xa_for_each(&ctx->personalities, index, creds)
++ io_unregister_personality(ctx, index);
++ mutex_unlock(&ctx->uring_lock);
++
++ /* if ring init failed, no requests could have been issued, nothing to reap */
++ if (ctx->rings) {
++ io_kill_timeouts(ctx, NULL, true);
++ io_poll_remove_all(ctx, NULL, true);
++ /* if we failed setting up the ctx, we might not have any rings */
++ io_iopoll_try_reap_events(ctx);
++ }
++
++ INIT_WORK(&ctx->exit_work, io_ring_exit_work);
++ /*
++ * Use system_unbound_wq to avoid spawning tons of event kworkers
++ * if we're exiting a ton of rings at the same time. It just adds
++ * noise and overhead, there's no discernible change in runtime
++ * over using system_wq.
++ */
++ queue_work(system_unbound_wq, &ctx->exit_work);
++}
++
++static int io_uring_release(struct inode *inode, struct file *file)
++{
++ struct io_ring_ctx *ctx = file->private_data;
++
++ file->private_data = NULL;
++ io_ring_ctx_wait_and_kill(ctx);
++ return 0;
++}
++
++struct io_task_cancel {
++ struct task_struct *task;
++ bool all;
++};
++
++static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
++{
++ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++ struct io_task_cancel *cancel = data;
++
++ return io_match_task_safe(req, cancel->task, cancel->all);
++}
++
++static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
++ struct task_struct *task,
++ bool cancel_all)
++{
++ struct io_defer_entry *de;
++ LIST_HEAD(list);
++
++ spin_lock(&ctx->completion_lock);
++ list_for_each_entry_reverse(de, &ctx->defer_list, list) {
++ if (io_match_task_safe(de->req, task, cancel_all)) {
++ list_cut_position(&list, &ctx->defer_list, &de->list);
++ break;
++ }
++ }
++ spin_unlock(&ctx->completion_lock);
++ if (list_empty(&list))
++ return false;
++
++ while (!list_empty(&list)) {
++ de = list_first_entry(&list, struct io_defer_entry, list);
++ list_del_init(&de->list);
++ io_req_complete_failed(de->req, -ECANCELED);
++ kfree(de);
++ }
++ return true;
++}
++
++static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
++{
++ struct io_tctx_node *node;
++ enum io_wq_cancel cret;
++ bool ret = false;
++
++ mutex_lock(&ctx->uring_lock);
++ list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++ struct io_uring_task *tctx = node->task->io_uring;
++
++ /*
++ * io_wq will stay alive while we hold uring_lock, because it's
++ * killed after ctx nodes, which requires taking the lock.
++ */
++ if (!tctx || !tctx->io_wq)
++ continue;
++ cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
++ ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
++ }
++ mutex_unlock(&ctx->uring_lock);
++
++ return ret;
++}
++
++static __cold void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
++ struct task_struct *task,
++ bool cancel_all)
++{
++ struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
++ struct io_uring_task *tctx = task ? task->io_uring : NULL;
++
++ /* failed during ring init, it couldn't have issued any requests */
++ if (!ctx->rings)
++ return;
++
++ while (1) {
++ enum io_wq_cancel cret;
++ bool ret = false;
++
++ if (!task) {
++ ret |= io_uring_try_cancel_iowq(ctx);
++ } else if (tctx && tctx->io_wq) {
++ /*
++ * Cancels requests of all rings, not only @ctx, but
++ * it's fine as the task is in exit/exec.
++ */
++ cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
++ &cancel, true);
++ ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
++ }
++
++ /* SQPOLL thread does its own polling */
++ if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
++ (ctx->sq_data && ctx->sq_data->thread == current)) {
++ while (!wq_list_empty(&ctx->iopoll_list)) {
++ io_iopoll_try_reap_events(ctx);
++ ret = true;
++ }
++ }
++
++ ret |= io_cancel_defer_files(ctx, task, cancel_all);
++ ret |= io_poll_remove_all(ctx, task, cancel_all);
++ ret |= io_kill_timeouts(ctx, task, cancel_all);
++ if (task)
++ ret |= io_run_task_work();
++ if (!ret)
++ break;
++ cond_resched();
++ }
++}
++
++static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ struct io_tctx_node *node;
++ int ret;
++
++ if (unlikely(!tctx)) {
++ ret = io_uring_alloc_task_context(current, ctx);
++ if (unlikely(ret))
++ return ret;
++
++ tctx = current->io_uring;
++ if (ctx->iowq_limits_set) {
++ unsigned int limits[2] = { ctx->iowq_limits[0],
++ ctx->iowq_limits[1], };
++
++ ret = io_wq_max_workers(tctx->io_wq, limits);
++ if (ret)
++ return ret;
++ }
++ }
++ if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
++ node = kmalloc(sizeof(*node), GFP_KERNEL);
++ if (!node)
++ return -ENOMEM;
++ node->ctx = ctx;
++ node->task = current;
++
++ ret = xa_err(xa_store(&tctx->xa, (unsigned long)ctx,
++ node, GFP_KERNEL));
++ if (ret) {
++ kfree(node);
++ return ret;
++ }
++
++ mutex_lock(&ctx->uring_lock);
++ list_add(&node->ctx_node, &ctx->tctx_list);
++ mutex_unlock(&ctx->uring_lock);
++ }
++ tctx->last = ctx;
++ return 0;
++}
++
++/*
++ * Note that this task has used io_uring. We use it for cancelation purposes.
++ */
++static inline int io_uring_add_tctx_node(struct io_ring_ctx *ctx)
++{
++ struct io_uring_task *tctx = current->io_uring;
++
++ if (likely(tctx && tctx->last == ctx))
++ return 0;
++ return __io_uring_add_tctx_node(ctx);
++}
++
++/*
++ * Remove this io_uring_file -> task mapping.
++ */
++static __cold void io_uring_del_tctx_node(unsigned long index)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ struct io_tctx_node *node;
++
++ if (!tctx)
++ return;
++ node = xa_erase(&tctx->xa, index);
++ if (!node)
++ return;
++
++ WARN_ON_ONCE(current != node->task);
++ WARN_ON_ONCE(list_empty(&node->ctx_node));
++
++ mutex_lock(&node->ctx->uring_lock);
++ list_del(&node->ctx_node);
++ mutex_unlock(&node->ctx->uring_lock);
++
++ if (tctx->last == node->ctx)
++ tctx->last = NULL;
++ kfree(node);
++}
++
++static __cold void io_uring_clean_tctx(struct io_uring_task *tctx)
++{
++ struct io_wq *wq = tctx->io_wq;
++ struct io_tctx_node *node;
++ unsigned long index;
++
++ xa_for_each(&tctx->xa, index, node) {
++ io_uring_del_tctx_node(index);
++ cond_resched();
++ }
++ if (wq) {
++ /*
++ * Must be after io_uring_del_tctx_node() (removes nodes under
++ * uring_lock) to avoid race with io_uring_try_cancel_iowq().
++ */
++ io_wq_put_and_exit(wq);
++ tctx->io_wq = NULL;
++ }
++}
++
++static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
++{
++ if (tracked)
++ return atomic_read(&tctx->inflight_tracked);
++ return percpu_counter_sum(&tctx->inflight);
++}
++
++/*
++ * Find any io_uring ctx that this task has registered or done IO on, and cancel
++ * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
++ */
++static __cold void io_uring_cancel_generic(bool cancel_all,
++ struct io_sq_data *sqd)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ struct io_ring_ctx *ctx;
++ s64 inflight;
++ DEFINE_WAIT(wait);
++
++ WARN_ON_ONCE(sqd && sqd->thread != current);
++
++ if (!current->io_uring)
++ return;
++ if (tctx->io_wq)
++ io_wq_exit_start(tctx->io_wq);
++
++ atomic_inc(&tctx->in_idle);
++ do {
++ io_uring_drop_tctx_refs(current);
++ /* read completions before cancelations */
++ inflight = tctx_inflight(tctx, !cancel_all);
++ if (!inflight)
++ break;
++
++ if (!sqd) {
++ struct io_tctx_node *node;
++ unsigned long index;
++
++ xa_for_each(&tctx->xa, index, node) {
++ /* sqpoll task will cancel all its requests */
++ if (node->ctx->sq_data)
++ continue;
++ io_uring_try_cancel_requests(node->ctx, current,
++ cancel_all);
++ }
++ } else {
++ list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
++ io_uring_try_cancel_requests(ctx, current,
++ cancel_all);
++ }
++
++ prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
++ io_run_task_work();
++ io_uring_drop_tctx_refs(current);
++
++ /*
++ * If we've seen completions, retry without waiting. This
++ * avoids a race where a completion comes in before we did
++ * prepare_to_wait().
++ */
++ if (inflight == tctx_inflight(tctx, !cancel_all))
++ schedule();
++ finish_wait(&tctx->wait, &wait);
++ } while (1);
++
++ io_uring_clean_tctx(tctx);
++ if (cancel_all) {
++ /*
++ * We shouldn't run task_works after cancel, so just leave
++ * ->in_idle set for normal exit.
++ */
++ atomic_dec(&tctx->in_idle);
++ /* for exec all current's requests should be gone, kill tctx */
++ __io_uring_free(current);
++ }
++}
++
++void __io_uring_cancel(bool cancel_all)
++{
++ io_uring_cancel_generic(cancel_all, NULL);
++}
++
++void io_uring_unreg_ringfd(void)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ int i;
++
++ for (i = 0; i < IO_RINGFD_REG_MAX; i++) {
++ if (tctx->registered_rings[i]) {
++ fput(tctx->registered_rings[i]);
++ tctx->registered_rings[i] = NULL;
++ }
++ }
++}
++
++static int io_ring_add_registered_fd(struct io_uring_task *tctx, int fd,
++ int start, int end)
++{
++ struct file *file;
++ int offset;
++
++ for (offset = start; offset < end; offset++) {
++ offset = array_index_nospec(offset, IO_RINGFD_REG_MAX);
++ if (tctx->registered_rings[offset])
++ continue;
++
++ file = fget(fd);
++ if (!file) {
++ return -EBADF;
++ } else if (file->f_op != &io_uring_fops) {
++ fput(file);
++ return -EOPNOTSUPP;
++ }
++ tctx->registered_rings[offset] = file;
++ return offset;
++ }
++
++ return -EBUSY;
++}
++
++/*
++ * Register a ring fd to avoid fdget/fdput for each io_uring_enter()
++ * invocation. User passes in an array of struct io_uring_rsrc_update
++ * with ->data set to the ring_fd, and ->offset given for the desired
++ * index. If no index is desired, application may set ->offset == -1U
++ * and we'll find an available index. Returns number of entries
++ * successfully processed, or < 0 on error if none were processed.
++ */
++static int io_ringfd_register(struct io_ring_ctx *ctx, void __user *__arg,
++ unsigned nr_args)
++{
++ struct io_uring_rsrc_update __user *arg = __arg;
++ struct io_uring_rsrc_update reg;
++ struct io_uring_task *tctx;
++ int ret, i;
++
++ if (!nr_args || nr_args > IO_RINGFD_REG_MAX)
++ return -EINVAL;
++
++ mutex_unlock(&ctx->uring_lock);
++ ret = io_uring_add_tctx_node(ctx);
++ mutex_lock(&ctx->uring_lock);
++ if (ret)
++ return ret;
++
++ tctx = current->io_uring;
++ for (i = 0; i < nr_args; i++) {
++ int start, end;
++
++ if (copy_from_user(&reg, &arg[i], sizeof(reg))) {
++ ret = -EFAULT;
++ break;
++ }
++
++ if (reg.resv) {
++ ret = -EINVAL;
++ break;
++ }
++
++ if (reg.offset == -1U) {
++ start = 0;
++ end = IO_RINGFD_REG_MAX;
++ } else {
++ if (reg.offset >= IO_RINGFD_REG_MAX) {
++ ret = -EINVAL;
++ break;
++ }
++ start = reg.offset;
++ end = start + 1;
++ }
++
++ ret = io_ring_add_registered_fd(tctx, reg.data, start, end);
++ if (ret < 0)
++ break;
++
++ reg.offset = ret;
++ if (copy_to_user(&arg[i], &reg, sizeof(reg))) {
++ fput(tctx->registered_rings[reg.offset]);
++ tctx->registered_rings[reg.offset] = NULL;
++ ret = -EFAULT;
++ break;
++ }
++ }
++
++ return i ? i : ret;
++}
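++
++/*
++ * Illustrative userspace use of the scheme described above; a sketch,
++ * not part of this patch, with ring_fd assumed to come from
++ * io_uring_setup():
++ *
++ *   struct io_uring_rsrc_update upd = {
++ *           .offset = -1U,  // let the kernel pick a free slot
++ *           .data   = ring_fd,
++ *   };
++ *   int idx = syscall(__NR_io_uring_register, ring_fd,
++ *                     IORING_REGISTER_RING_FDS, &upd, 1);
++ *   // enter via the registered slot, skipping fdget/fdput:
++ *   syscall(__NR_io_uring_enter, idx, to_submit, 0,
++ *           IORING_ENTER_REGISTERED_RING, NULL, 0);
++ */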
++
++static int io_ringfd_unregister(struct io_ring_ctx *ctx, void __user *__arg,
++ unsigned nr_args)
++{
++ struct io_uring_rsrc_update __user *arg = __arg;
++ struct io_uring_task *tctx = current->io_uring;
++ struct io_uring_rsrc_update reg;
++ int ret = 0, i;
++
++ if (!nr_args || nr_args > IO_RINGFD_REG_MAX)
++ return -EINVAL;
++ if (!tctx)
++ return 0;
++
++ for (i = 0; i < nr_args; i++) {
++ if (copy_from_user(&reg, &arg[i], sizeof(reg))) {
++ ret = -EFAULT;
++ break;
++ }
++ if (reg.resv || reg.data || reg.offset >= IO_RINGFD_REG_MAX) {
++ ret = -EINVAL;
++ break;
++ }
++
++ reg.offset = array_index_nospec(reg.offset, IO_RINGFD_REG_MAX);
++ if (tctx->registered_rings[reg.offset]) {
++ fput(tctx->registered_rings[reg.offset]);
++ tctx->registered_rings[reg.offset] = NULL;
++ }
++ }
++
++ return i ? i : ret;
++}
++
++static void *io_uring_validate_mmap_request(struct file *file,
++ loff_t pgoff, size_t sz)
++{
++ struct io_ring_ctx *ctx = file->private_data;
++ loff_t offset = pgoff << PAGE_SHIFT;
++ struct page *page;
++ void *ptr;
++
++ switch (offset) {
++ case IORING_OFF_SQ_RING:
++ case IORING_OFF_CQ_RING:
++ ptr = ctx->rings;
++ break;
++ case IORING_OFF_SQES:
++ ptr = ctx->sq_sqes;
++ break;
++ default:
++ return ERR_PTR(-EINVAL);
++ }
++
++ page = virt_to_head_page(ptr);
++ if (sz > page_size(page))
++ return ERR_PTR(-EINVAL);
++
++ return ptr;
++}
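++
++/*
++ * The offsets validated above are what userspace hands to mmap(); a
++ * minimal sketch, assuming IORING_FEAT_SINGLE_MMAP (SQ and CQ rings
++ * share one mapping) and p being the io_uring_params filled in by
++ * io_uring_setup():
++ *
++ *   size_t sz = p.sq_off.array + p.sq_entries * sizeof(__u32);
++ *   void *rings = mmap(NULL, sz, PROT_READ | PROT_WRITE,
++ *                      MAP_SHARED | MAP_POPULATE, ring_fd,
++ *                      IORING_OFF_SQ_RING);
++ *   struct io_uring_sqe *sqes = mmap(NULL,
++ *                      p.sq_entries * sizeof(struct io_uring_sqe),
++ *                      PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
++ *                      ring_fd, IORING_OFF_SQES);
++ */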
++
++#ifdef CONFIG_MMU
++
++static __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
++{
++ size_t sz = vma->vm_end - vma->vm_start;
++ unsigned long pfn;
++ void *ptr;
++
++ ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
++ if (IS_ERR(ptr))
++ return PTR_ERR(ptr);
++
++ pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
++ return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
++}
++
++#else /* !CONFIG_MMU */
++
++static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
++{
++ return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
++}
++
++static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
++{
++ return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
++}
++
++static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
++ unsigned long addr, unsigned long len,
++ unsigned long pgoff, unsigned long flags)
++{
++ void *ptr;
++
++ ptr = io_uring_validate_mmap_request(file, pgoff, len);
++ if (IS_ERR(ptr))
++ return PTR_ERR(ptr);
++
++ return (unsigned long) ptr;
++}
++
++#endif /* !CONFIG_MMU */
++
++static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
++{
++ DEFINE_WAIT(wait);
++
++ do {
++ if (!io_sqring_full(ctx))
++ break;
++ prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
++
++ if (!io_sqring_full(ctx))
++ break;
++ schedule();
++ } while (!signal_pending(current));
++
++ finish_wait(&ctx->sqo_sq_wait, &wait);
++ return 0;
++}
++
++static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t argsz)
++{
++ if (flags & IORING_ENTER_EXT_ARG) {
++ struct io_uring_getevents_arg arg;
++
++ if (argsz != sizeof(arg))
++ return -EINVAL;
++ if (copy_from_user(&arg, argp, sizeof(arg)))
++ return -EFAULT;
++ }
++ return 0;
++}
++
++static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
++ struct __kernel_timespec __user **ts,
++ const sigset_t __user **sig)
++{
++ struct io_uring_getevents_arg arg;
++
++ /*
++ * If EXT_ARG isn't set, then we have no timespec and the argp pointer
++ * is just a pointer to the sigset_t.
++ */
++ if (!(flags & IORING_ENTER_EXT_ARG)) {
++ *sig = (const sigset_t __user *) argp;
++ *ts = NULL;
++ return 0;
++ }
++
++ /*
++ * EXT_ARG is set - ensure we agree on the size of it and copy in our
++ * timespec and sigset_t pointers if good.
++ */
++ if (*argsz != sizeof(arg))
++ return -EINVAL;
++ if (copy_from_user(&arg, argp, sizeof(arg)))
++ return -EFAULT;
++ if (arg.pad)
++ return -EINVAL;
++ *sig = u64_to_user_ptr(arg.sigmask);
++ *argsz = arg.sigmask_sz;
++ *ts = u64_to_user_ptr(arg.ts);
++ return 0;
++}
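++
++/*
++ * Sketch of the EXT_ARG convention parsed above (illustrative; ts and
++ * any sigmask must stay valid across the syscall):
++ *
++ *   struct __kernel_timespec ts = { .tv_sec = 1 };
++ *   struct io_uring_getevents_arg arg = {
++ *           .sigmask    = 0,          // or a u64-cast sigset_t pointer
++ *           .sigmask_sz = _NSIG / 8,  // kernel-side sigset size
++ *           .ts         = (__u64)(uintptr_t)&ts,
++ *   };
++ *   syscall(__NR_io_uring_enter, ring_fd, 0, 1,
++ *           IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
++ *           &arg, sizeof(arg));
++ */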
++
++SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
++ u32, min_complete, u32, flags, const void __user *, argp,
++ size_t, argsz)
++{
++ struct io_ring_ctx *ctx;
++ struct fd f;
++ long ret;
++
++ io_run_task_work();
++
++ if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
++ IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
++ IORING_ENTER_REGISTERED_RING)))
++ return -EINVAL;
++
++ /*
++ * If the ring fd has been registered via IORING_REGISTER_RING_FDS, we
++ * need only dereference our task-private array to find it.
++ */
++ if (flags & IORING_ENTER_REGISTERED_RING) {
++ struct io_uring_task *tctx = current->io_uring;
++
++ if (!tctx || fd >= IO_RINGFD_REG_MAX)
++ return -EINVAL;
++ fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
++ f.file = tctx->registered_rings[fd];
++ f.flags = 0;
++ } else {
++ f = fdget(fd);
++ }
++
++ if (unlikely(!f.file))
++ return -EBADF;
++
++ ret = -EOPNOTSUPP;
++ if (unlikely(f.file->f_op != &io_uring_fops))
++ goto out_fput;
++
++ ret = -ENXIO;
++ ctx = f.file->private_data;
++ if (unlikely(!percpu_ref_tryget(&ctx->refs)))
++ goto out_fput;
++
++ ret = -EBADFD;
++ if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
++ goto out;
++
++ /*
++ * For SQ polling, the thread will do all submissions and completions.
++ * Just return the requested submit count, and wake the thread if
++ * we were asked to.
++ */
++ ret = 0;
++ if (ctx->flags & IORING_SETUP_SQPOLL) {
++ io_cqring_overflow_flush(ctx);
++
++ if (unlikely(ctx->sq_data->thread == NULL)) {
++ ret = -EOWNERDEAD;
++ goto out;
++ }
++ if (flags & IORING_ENTER_SQ_WAKEUP)
++ wake_up(&ctx->sq_data->wait);
++ if (flags & IORING_ENTER_SQ_WAIT) {
++ ret = io_sqpoll_wait_sq(ctx);
++ if (ret)
++ goto out;
++ }
++ ret = to_submit;
++ } else if (to_submit) {
++ ret = io_uring_add_tctx_node(ctx);
++ if (unlikely(ret))
++ goto out;
++
++ mutex_lock(&ctx->uring_lock);
++ ret = io_submit_sqes(ctx, to_submit);
++ if (ret != to_submit) {
++ mutex_unlock(&ctx->uring_lock);
++ goto out;
++ }
++ if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
++ goto iopoll_locked;
++ mutex_unlock(&ctx->uring_lock);
++ }
++ if (flags & IORING_ENTER_GETEVENTS) {
++ int ret2;
++ if (ctx->syscall_iopoll) {
++ /*
++ * We disallow the app entering submit/complete with
++ * polling, but we still need to lock the ring to
++ * prevent racing with polled issue that got punted to
++ * a workqueue.
++ */
++ mutex_lock(&ctx->uring_lock);
++iopoll_locked:
++ ret2 = io_validate_ext_arg(flags, argp, argsz);
++ if (likely(!ret2)) {
++ min_complete = min(min_complete,
++ ctx->cq_entries);
++ ret2 = io_iopoll_check(ctx, min_complete);
++ }
++ mutex_unlock(&ctx->uring_lock);
++ } else {
++ const sigset_t __user *sig;
++ struct __kernel_timespec __user *ts;
++
++ ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
++ if (likely(!ret2)) {
++ min_complete = min(min_complete,
++ ctx->cq_entries);
++ ret2 = io_cqring_wait(ctx, min_complete, sig,
++ argsz, ts);
++ }
++ }
++
++ if (!ret) {
++ ret = ret2;
++
++ /*
++ * EBADR indicates that one or more CQE were dropped.
++ * Once the user has been informed we can clear the bit
++ * as they are obviously ok with those drops.
++ */
++ if (unlikely(ret2 == -EBADR))
++ clear_bit(IO_CHECK_CQ_DROPPED_BIT,
++ &ctx->check_cq);
++ }
++ }
++
++out:
++ percpu_ref_put(&ctx->refs);
++out_fput:
++ fdput(f);
++ return ret;
++}
++
++#ifdef CONFIG_PROC_FS
++static __cold int io_uring_show_cred(struct seq_file *m, unsigned int id,
++ const struct cred *cred)
++{
++ struct user_namespace *uns = seq_user_ns(m);
++ struct group_info *gi;
++ kernel_cap_t cap;
++ unsigned __capi;
++ int g;
++
++ seq_printf(m, "%5d\n", id);
++ seq_put_decimal_ull(m, "\tUid:\t", from_kuid_munged(uns, cred->uid));
++ seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->euid));
++ seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->suid));
++ seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->fsuid));
++ seq_put_decimal_ull(m, "\n\tGid:\t", from_kgid_munged(uns, cred->gid));
++ seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->egid));
++ seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->sgid));
++ seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->fsgid));
++ seq_puts(m, "\n\tGroups:\t");
++ gi = cred->group_info;
++ for (g = 0; g < gi->ngroups; g++) {
++ seq_put_decimal_ull(m, g ? " " : "",
++ from_kgid_munged(uns, gi->gid[g]));
++ }
++ seq_puts(m, "\n\tCapEff:\t");
++ cap = cred->cap_effective;
++ CAP_FOR_EACH_U32(__capi)
++ seq_put_hex_ll(m, NULL, cap.cap[CAP_LAST_U32 - __capi], 8);
++ seq_putc(m, '\n');
++ return 0;
++}
++
++static __cold void __io_uring_show_fdinfo(struct io_ring_ctx *ctx,
++ struct seq_file *m)
++{
++ struct io_sq_data *sq = NULL;
++ struct io_overflow_cqe *ocqe;
++ struct io_rings *r = ctx->rings;
++ unsigned int sq_mask = ctx->sq_entries - 1, cq_mask = ctx->cq_entries - 1;
++ unsigned int sq_head = READ_ONCE(r->sq.head);
++ unsigned int sq_tail = READ_ONCE(r->sq.tail);
++ unsigned int cq_head = READ_ONCE(r->cq.head);
++ unsigned int cq_tail = READ_ONCE(r->cq.tail);
++ unsigned int cq_shift = 0;
++ unsigned int sq_entries, cq_entries;
++ bool has_lock;
++ bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
++ unsigned int i;
++
++ if (is_cqe32)
++ cq_shift = 1;
++
++ /*
++ * We may get imprecise sqe and cqe info if the ring is actively running,
++ * since we read cached_sq_head and cached_cq_tail without uring_lock,
++ * and sq_tail and cq_head are changed by userspace. But that's OK, since
++ * we mostly use this info when the ring is stuck.
++ */
++ seq_printf(m, "SqMask:\t0x%x\n", sq_mask);
++ seq_printf(m, "SqHead:\t%u\n", sq_head);
++ seq_printf(m, "SqTail:\t%u\n", sq_tail);
++ seq_printf(m, "CachedSqHead:\t%u\n", ctx->cached_sq_head);
++ seq_printf(m, "CqMask:\t0x%x\n", cq_mask);
++ seq_printf(m, "CqHead:\t%u\n", cq_head);
++ seq_printf(m, "CqTail:\t%u\n", cq_tail);
++ seq_printf(m, "CachedCqTail:\t%u\n", ctx->cached_cq_tail);
++ seq_printf(m, "SQEs:\t%u\n", sq_tail - ctx->cached_sq_head);
++ sq_entries = min(sq_tail - sq_head, ctx->sq_entries);
++ for (i = 0; i < sq_entries; i++) {
++ unsigned int entry = i + sq_head;
++ unsigned int sq_idx = READ_ONCE(ctx->sq_array[entry & sq_mask]);
++ struct io_uring_sqe *sqe;
++
++ if (sq_idx > sq_mask)
++ continue;
++ sqe = &ctx->sq_sqes[sq_idx];
++ seq_printf(m, "%5u: opcode:%d, fd:%d, flags:%x, user_data:%llu\n",
++ sq_idx, sqe->opcode, sqe->fd, sqe->flags,
++ sqe->user_data);
++ }
++ seq_printf(m, "CQEs:\t%u\n", cq_tail - cq_head);
++ cq_entries = min(cq_tail - cq_head, ctx->cq_entries);
++ for (i = 0; i < cq_entries; i++) {
++ unsigned int entry = i + cq_head;
++ struct io_uring_cqe *cqe = &r->cqes[(entry & cq_mask) << cq_shift];
++
++ if (!is_cqe32) {
++ seq_printf(m, "%5u: user_data:%llu, res:%d, flag:%x\n",
++ entry & cq_mask, cqe->user_data, cqe->res,
++ cqe->flags);
++ } else {
++ seq_printf(m, "%5u: user_data:%llu, res:%d, flag:%x, "
++ "extra1:%llu, extra2:%llu\n",
++ entry & cq_mask, cqe->user_data, cqe->res,
++ cqe->flags, cqe->big_cqe[0], cqe->big_cqe[1]);
++ }
++ }
++
++ /*
++ * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
++ * since fdinfo case grabs it in the opposite direction of normal use
++ * cases. If we fail to get the lock, we just don't iterate any
++ * structures that could be going away outside the io_uring mutex.
++ */
++ has_lock = mutex_trylock(&ctx->uring_lock);
++
++ if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
++ sq = ctx->sq_data;
++ if (!sq->thread)
++ sq = NULL;
++ }
++
++ seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
++ seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
++ seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
++ for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
++ struct file *f = io_file_from_index(ctx, i);
++
++ if (f)
++ seq_printf(m, "%5u: %s\n", i, file_dentry(f)->d_iname);
++ else
++ seq_printf(m, "%5u: <none>\n", i);
++ }
++ seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
++ for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
++ struct io_mapped_ubuf *buf = ctx->user_bufs[i];
++ unsigned int len = buf->ubuf_end - buf->ubuf;
++
++ seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, len);
++ }
++ if (has_lock && !xa_empty(&ctx->personalities)) {
++ unsigned long index;
++ const struct cred *cred;
++
++ seq_printf(m, "Personalities:\n");
++ xa_for_each(&ctx->personalities, index, cred)
++ io_uring_show_cred(m, index, cred);
++ }
++ if (has_lock)
++ mutex_unlock(&ctx->uring_lock);
++
++ seq_puts(m, "PollList:\n");
++ spin_lock(&ctx->completion_lock);
++ for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++ struct hlist_head *list = &ctx->cancel_hash[i];
++ struct io_kiocb *req;
++
++ hlist_for_each_entry(req, list, hash_node)
++ seq_printf(m, " op=%d, task_works=%d\n", req->opcode,
++ task_work_pending(req->task));
++ }
++
++ seq_puts(m, "CqOverflowList:\n");
++ list_for_each_entry(ocqe, &ctx->cq_overflow_list, list) {
++ struct io_uring_cqe *cqe = &ocqe->cqe;
++
++ seq_printf(m, " user_data=%llu, res=%d, flags=%x\n",
++ cqe->user_data, cqe->res, cqe->flags);
++
++ }
++
++ spin_unlock(&ctx->completion_lock);
++}
++
++static __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
++{
++ struct io_ring_ctx *ctx = f->private_data;
++
++ if (percpu_ref_tryget(&ctx->refs)) {
++ __io_uring_show_fdinfo(ctx, m);
++ percpu_ref_put(&ctx->refs);
++ }
++}
++#endif
++
++static const struct file_operations io_uring_fops = {
++ .release = io_uring_release,
++ .mmap = io_uring_mmap,
++#ifndef CONFIG_MMU
++ .get_unmapped_area = io_uring_nommu_get_unmapped_area,
++ .mmap_capabilities = io_uring_nommu_mmap_capabilities,
++#endif
++ .poll = io_uring_poll,
++#ifdef CONFIG_PROC_FS
++ .show_fdinfo = io_uring_show_fdinfo,
++#endif
++};
++
++static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
++ struct io_uring_params *p)
++{
++ struct io_rings *rings;
++ size_t size, sq_array_offset;
++
++ /* make sure these are sane, as we already accounted them */
++ ctx->sq_entries = p->sq_entries;
++ ctx->cq_entries = p->cq_entries;
++
++ size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
++ if (size == SIZE_MAX)
++ return -EOVERFLOW;
++
++ rings = io_mem_alloc(size);
++ if (!rings)
++ return -ENOMEM;
++
++ ctx->rings = rings;
++ ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
++ rings->sq_ring_mask = p->sq_entries - 1;
++ rings->cq_ring_mask = p->cq_entries - 1;
++ rings->sq_ring_entries = p->sq_entries;
++ rings->cq_ring_entries = p->cq_entries;
++
++ if (p->flags & IORING_SETUP_SQE128)
++ size = array_size(2 * sizeof(struct io_uring_sqe), p->sq_entries);
++ else
++ size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
++ if (size == SIZE_MAX) {
++ io_mem_free(ctx->rings);
++ ctx->rings = NULL;
++ return -EOVERFLOW;
++ }
++
++ ctx->sq_sqes = io_mem_alloc(size);
++ if (!ctx->sq_sqes) {
++ io_mem_free(ctx->rings);
++ ctx->rings = NULL;
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
++static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
++{
++ int ret, fd;
++
++ fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
++ if (fd < 0)
++ return fd;
++
++ ret = io_uring_add_tctx_node(ctx);
++ if (ret) {
++ put_unused_fd(fd);
++ return ret;
++ }
++ fd_install(fd, file);
++ return fd;
++}
++
++/*
++ * Allocate an anonymous fd; this is what constitutes the application
++ * visible backing of an io_uring instance. The application mmaps this
++ * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
++ * we have to tie this fd to a socket for file garbage collection purposes.
++ */
++static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
++{
++ struct file *file;
++#if defined(CONFIG_UNIX)
++ int ret;
++
++ ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
++ &ctx->ring_sock);
++ if (ret)
++ return ERR_PTR(ret);
++#endif
++
++ file = anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
++ O_RDWR | O_CLOEXEC, NULL);
++#if defined(CONFIG_UNIX)
++ if (IS_ERR(file)) {
++ sock_release(ctx->ring_sock);
++ ctx->ring_sock = NULL;
++ } else {
++ ctx->ring_sock->file = file;
++ }
++#endif
++ return file;
++}
++
++static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
++ struct io_uring_params __user *params)
++{
++ struct io_ring_ctx *ctx;
++ struct file *file;
++ int ret;
++
++ if (!entries)
++ return -EINVAL;
++ if (entries > IORING_MAX_ENTRIES) {
++ if (!(p->flags & IORING_SETUP_CLAMP))
++ return -EINVAL;
++ entries = IORING_MAX_ENTRIES;
++ }
++
++ /*
++ * Use twice as many entries for the CQ ring. It's possible for the
++ * application to drive a higher depth than the size of the SQ ring,
++ * since the sqes are only used at submission time. This allows for
++ * some flexibility in overcommitting a bit. If the application has
++ * set IORING_SETUP_CQSIZE, it will have passed in the desired number
++ * of CQ ring entries manually.
++ */
++ p->sq_entries = roundup_pow_of_two(entries);
++ if (p->flags & IORING_SETUP_CQSIZE) {
++ /*
++ * If IORING_SETUP_CQSIZE is set, we do the same roundup
++ * to a power-of-two, if it isn't already. We do NOT impose
++ * any cq vs sq ring sizing.
++ */
++ if (!p->cq_entries)
++ return -EINVAL;
++ if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
++ if (!(p->flags & IORING_SETUP_CLAMP))
++ return -EINVAL;
++ p->cq_entries = IORING_MAX_CQ_ENTRIES;
++ }
++ p->cq_entries = roundup_pow_of_two(p->cq_entries);
++ if (p->cq_entries < p->sq_entries)
++ return -EINVAL;
++ } else {
++ p->cq_entries = 2 * p->sq_entries;
++ }
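++ /*
++ * Worked example: entries == 100 rounds up to sq_entries == 128 and,
++ * without IORING_SETUP_CQSIZE, cq_entries == 2 * 128 == 256.
++ */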
++
++ ctx = io_ring_ctx_alloc(p);
++ if (!ctx)
++ return -ENOMEM;
++
++ /*
++ * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
++ * space applications don't need to do io completion events
++ * polling again, they can rely on io_sq_thread to do polling
++ * work, which can reduce cpu usage and uring_lock contention.
++ */
++ if (ctx->flags & IORING_SETUP_IOPOLL &&
++ !(ctx->flags & IORING_SETUP_SQPOLL))
++ ctx->syscall_iopoll = 1;
++
++ ctx->compat = in_compat_syscall();
++ if (!capable(CAP_IPC_LOCK))
++ ctx->user = get_uid(current_user());
++
++ /*
++ * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
++ * COOP_TASKRUN is set, then IPIs are never needed by the app.
++ */
++ ret = -EINVAL;
++ if (ctx->flags & IORING_SETUP_SQPOLL) {
++ /* IPI related flags don't make sense with SQPOLL */
++ if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
++ IORING_SETUP_TASKRUN_FLAG))
++ goto err;
++ ctx->notify_method = TWA_SIGNAL_NO_IPI;
++ } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
++ ctx->notify_method = TWA_SIGNAL_NO_IPI;
++ } else {
++ if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
++ goto err;
++ ctx->notify_method = TWA_SIGNAL;
++ }
++
++ /*
++ * This is just grabbed for accounting purposes. When a process exits,
++ * the mm is exited and dropped before the files, hence we need to hang
++ * on to this mm purely for the purposes of being able to unaccount
++ * memory (locked/pinned vm). It's not used for anything else.
++ */
++ mmgrab(current->mm);
++ ctx->mm_account = current->mm;
++
++ ret = io_allocate_scq_urings(ctx, p);
++ if (ret)
++ goto err;
++
++ ret = io_sq_offload_create(ctx, p);
++ if (ret)
++ goto err;
++ /* always set a rsrc node */
++ ret = io_rsrc_node_switch_start(ctx);
++ if (ret)
++ goto err;
++ io_rsrc_node_switch(ctx, NULL);
++
++ memset(&p->sq_off, 0, sizeof(p->sq_off));
++ p->sq_off.head = offsetof(struct io_rings, sq.head);
++ p->sq_off.tail = offsetof(struct io_rings, sq.tail);
++ p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
++ p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
++ p->sq_off.flags = offsetof(struct io_rings, sq_flags);
++ p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
++ p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
++
++ memset(&p->cq_off, 0, sizeof(p->cq_off));
++ p->cq_off.head = offsetof(struct io_rings, cq.head);
++ p->cq_off.tail = offsetof(struct io_rings, cq.tail);
++ p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
++ p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
++ p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
++ p->cq_off.cqes = offsetof(struct io_rings, cqes);
++ p->cq_off.flags = offsetof(struct io_rings, cq_flags);
++
++ p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
++ IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
++ IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
++ IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
++ IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
++ IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
++ IORING_FEAT_LINKED_FILE;
++
++ if (copy_to_user(params, p, sizeof(*p))) {
++ ret = -EFAULT;
++ goto err;
++ }
++
++ file = io_uring_get_file(ctx);
++ if (IS_ERR(file)) {
++ ret = PTR_ERR(file);
++ goto err;
++ }
++
++ /*
++ * Install ring fd as the very last thing, so we don't risk someone
++ * having closed it before we finish setup
++ */
++ ret = io_uring_install_fd(ctx, file);
++ if (ret < 0) {
++ /* fput will clean it up */
++ fput(file);
++ return ret;
++ }
++
++ trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
++ return ret;
++err:
++ io_ring_ctx_wait_and_kill(ctx);
++ return ret;
++}
++
++/*
++ * Sets up an io_uring context and returns the fd. The application asks for a
++ * ring size, we return the actual sq/cq ring sizes (among other things) in the
++ * params structure passed in.
++ */
++static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
++{
++ struct io_uring_params p;
++ int i;
++
++ if (copy_from_user(&p, params, sizeof(p)))
++ return -EFAULT;
++ for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
++ if (p.resv[i])
++ return -EINVAL;
++ }
++
++ if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
++ IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
++ IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
++ IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
++ IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
++ IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
++ return -EINVAL;
++
++ return io_uring_create(entries, &p, params);
++}
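++
++/*
++ * Minimal userspace sketch of this setup path (illustrative; error
++ * handling omitted):
++ *
++ *   struct io_uring_params p = { 0 };
++ *   int ring_fd = syscall(__NR_io_uring_setup, 256, &p);
++ *   // p.sq_entries and p.cq_entries now hold the rounded-up sizes,
++ *   // p.sq_off/p.cq_off the mmap layout described earlier
++ */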
++
++SYSCALL_DEFINE2(io_uring_setup, u32, entries,
++ struct io_uring_params __user *, params)
++{
++ return io_uring_setup(entries, params);
++}
++
++static __cold int io_probe(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned nr_args)
++{
++ struct io_uring_probe *p;
++ size_t size;
++ int i, ret;
++
++ size = struct_size(p, ops, nr_args);
++ if (size == SIZE_MAX)
++ return -EOVERFLOW;
++ p = kzalloc(size, GFP_KERNEL);
++ if (!p)
++ return -ENOMEM;
++
++ ret = -EFAULT;
++ if (copy_from_user(p, arg, size))
++ goto out;
++ ret = -EINVAL;
++ if (memchr_inv(p, 0, size))
++ goto out;
++
++ p->last_op = IORING_OP_LAST - 1;
++ if (nr_args > IORING_OP_LAST)
++ nr_args = IORING_OP_LAST;
++
++ for (i = 0; i < nr_args; i++) {
++ p->ops[i].op = i;
++ if (!io_op_defs[i].not_supported)
++ p->ops[i].flags = IO_URING_OP_SUPPORTED;
++ }
++ p->ops_len = i;
++
++ ret = 0;
++ if (copy_to_user(arg, p, size))
++ ret = -EFAULT;
++out:
++ kfree(p);
++ return ret;
++}
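++
++/*
++ * Probing opcode support from userspace, sketched (the probe buffer
++ * must be zeroed, hence calloc()):
++ *
++ *   struct io_uring_probe *p = calloc(1, sizeof(*p) +
++ *                   256 * sizeof(struct io_uring_probe_op));
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_PROBE, p, 256);
++ *   if (p->ops[IORING_OP_READV].flags & IO_URING_OP_SUPPORTED)
++ *           puts("READV supported");
++ */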
++
++static int io_register_personality(struct io_ring_ctx *ctx)
++{
++ const struct cred *creds;
++ u32 id;
++ int ret;
++
++ creds = get_current_cred();
++
++ ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
++ XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
++ if (ret < 0) {
++ put_cred(creds);
++ return ret;
++ }
++ return id;
++}
++
++static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
++ void __user *arg, unsigned int nr_args)
++{
++ struct io_uring_restriction *res;
++ size_t size;
++ int i, ret;
++
++ /* Restrictions allowed only if rings started disabled */
++ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
++ return -EBADFD;
++
++ /* We allow only a single restrictions registration */
++ if (ctx->restrictions.registered)
++ return -EBUSY;
++
++ if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
++ return -EINVAL;
++
++ size = array_size(nr_args, sizeof(*res));
++ if (size == SIZE_MAX)
++ return -EOVERFLOW;
++
++ res = memdup_user(arg, size);
++ if (IS_ERR(res))
++ return PTR_ERR(res);
++
++ ret = 0;
++
++ for (i = 0; i < nr_args; i++) {
++ switch (res[i].opcode) {
++ case IORING_RESTRICTION_REGISTER_OP:
++ if (res[i].register_op >= IORING_REGISTER_LAST) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __set_bit(res[i].register_op,
++ ctx->restrictions.register_op);
++ break;
++ case IORING_RESTRICTION_SQE_OP:
++ if (res[i].sqe_op >= IORING_OP_LAST) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
++ break;
++ case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
++ ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
++ break;
++ case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
++ ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
++ break;
++ default:
++ ret = -EINVAL;
++ goto out;
++ }
++ }
++
++out:
++ /* Reset all restrictions if an error happened */
++ if (ret != 0)
++ memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
++ else
++ ctx->restrictions.registered = true;
++
++ kfree(res);
++ return ret;
++}
++
++static int io_register_enable_rings(struct io_ring_ctx *ctx)
++{
++ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
++ return -EBADFD;
++
++ if (ctx->restrictions.registered)
++ ctx->restricted = 1;
++
++ ctx->flags &= ~IORING_SETUP_R_DISABLED;
++ if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
++ wake_up(&ctx->sq_data->wait);
++ return 0;
++}
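++
++/*
++ * Typical restricted-ring flow, sketched: create the ring with
++ * IORING_SETUP_R_DISABLED, register the restrictions, then enable it.
++ * Illustrative only; error handling omitted.
++ *
++ *   struct io_uring_restriction res = {
++ *           .opcode = IORING_RESTRICTION_SQE_OP,
++ *           .sqe_op = IORING_OP_READV,
++ *   };
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_RESTRICTIONS, &res, 1);
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_ENABLE_RINGS, NULL, 0);
++ */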
++
++static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
++ struct io_uring_rsrc_update2 *up,
++ unsigned nr_args)
++{
++ __u32 tmp;
++ int err;
++
++ if (check_add_overflow(up->offset, nr_args, &tmp))
++ return -EOVERFLOW;
++ err = io_rsrc_node_switch_start(ctx);
++ if (err)
++ return err;
++
++ switch (type) {
++ case IORING_RSRC_FILE:
++ return __io_sqe_files_update(ctx, up, nr_args);
++ case IORING_RSRC_BUFFER:
++ return __io_sqe_buffers_update(ctx, up, nr_args);
++ }
++ return -EINVAL;
++}
++
++static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned nr_args)
++{
++ struct io_uring_rsrc_update2 up;
++
++ if (!nr_args)
++ return -EINVAL;
++ memset(&up, 0, sizeof(up));
++ if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
++ return -EFAULT;
++ if (up.resv || up.resv2)
++ return -EINVAL;
++ return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
++}
++
++static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned size, unsigned type)
++{
++ struct io_uring_rsrc_update2 up;
++
++ if (size != sizeof(up))
++ return -EINVAL;
++ if (copy_from_user(&up, arg, sizeof(up)))
++ return -EFAULT;
++ if (!up.nr || up.resv || up.resv2)
++ return -EINVAL;
++ return __io_register_rsrc_update(ctx, type, &up, up.nr);
++}
++
++static __cold int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
++ unsigned int size, unsigned int type)
++{
++ struct io_uring_rsrc_register rr;
++
++ /* keep it extendible */
++ if (size != sizeof(rr))
++ return -EINVAL;
++
++ memset(&rr, 0, sizeof(rr));
++ if (copy_from_user(&rr, arg, size))
++ return -EFAULT;
++ if (!rr.nr || rr.resv2)
++ return -EINVAL;
++ if (rr.flags & ~IORING_RSRC_REGISTER_SPARSE)
++ return -EINVAL;
++
++ switch (type) {
++ case IORING_RSRC_FILE:
++ if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
++ break;
++ return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),
++ rr.nr, u64_to_user_ptr(rr.tags));
++ case IORING_RSRC_BUFFER:
++ if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
++ break;
++ return io_sqe_buffers_register(ctx, u64_to_user_ptr(rr.data),
++ rr.nr, u64_to_user_ptr(rr.tags));
++ }
++ return -EINVAL;
++}
++
++static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
++ void __user *arg, unsigned len)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ cpumask_var_t new_mask;
++ int ret;
++
++ if (!tctx || !tctx->io_wq)
++ return -EINVAL;
++
++ if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++ return -ENOMEM;
++
++ cpumask_clear(new_mask);
++ if (len > cpumask_size())
++ len = cpumask_size();
++
++ if (in_compat_syscall()) {
++ ret = compat_get_bitmap(cpumask_bits(new_mask),
++ (const compat_ulong_t __user *)arg,
++ len * 8 /* CHAR_BIT */);
++ } else {
++ ret = copy_from_user(new_mask, arg, len);
++ }
++
++ if (ret) {
++ free_cpumask_var(new_mask);
++ return -EFAULT;
++ }
++
++ ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
++ free_cpumask_var(new_mask);
++ return ret;
++}
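++
++/*
++ * Pinning the io-wq workers from userspace, sketched; nr_args is the
++ * mask length in bytes:
++ *
++ *   cpu_set_t mask;
++ *   CPU_ZERO(&mask);
++ *   CPU_SET(0, &mask);
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_IOWQ_AFF, &mask, sizeof(mask));
++ */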
++
++static __cold int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
++{
++ struct io_uring_task *tctx = current->io_uring;
++
++ if (!tctx || !tctx->io_wq)
++ return -EINVAL;
++
++ return io_wq_cpu_affinity(tctx->io_wq, NULL);
++}
++
++static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
++ void __user *arg)
++ __must_hold(&ctx->uring_lock)
++{
++ struct io_tctx_node *node;
++ struct io_uring_task *tctx = NULL;
++ struct io_sq_data *sqd = NULL;
++ __u32 new_count[2];
++ int i, ret;
++
++ if (copy_from_user(new_count, arg, sizeof(new_count)))
++ return -EFAULT;
++ for (i = 0; i < ARRAY_SIZE(new_count); i++)
++ if (new_count[i] > INT_MAX)
++ return -EINVAL;
++
++ if (ctx->flags & IORING_SETUP_SQPOLL) {
++ sqd = ctx->sq_data;
++ if (sqd) {
++ /*
++ * Observe the correct sqd->lock -> ctx->uring_lock
++ * ordering. Fine to drop uring_lock here, we hold
++ * a ref to the ctx.
++ */
++ refcount_inc(&sqd->refs);
++ mutex_unlock(&ctx->uring_lock);
++ mutex_lock(&sqd->lock);
++ mutex_lock(&ctx->uring_lock);
++ if (sqd->thread)
++ tctx = sqd->thread->io_uring;
++ }
++ } else {
++ tctx = current->io_uring;
++ }
++
++ BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
++
++ for (i = 0; i < ARRAY_SIZE(new_count); i++)
++ if (new_count[i])
++ ctx->iowq_limits[i] = new_count[i];
++ ctx->iowq_limits_set = true;
++
++ if (tctx && tctx->io_wq) {
++ ret = io_wq_max_workers(tctx->io_wq, new_count);
++ if (ret)
++ goto err;
++ } else {
++ memset(new_count, 0, sizeof(new_count));
++ }
++
++ if (sqd) {
++ mutex_unlock(&sqd->lock);
++ io_put_sq_data(sqd);
++ }
++
++ if (copy_to_user(arg, new_count, sizeof(new_count)))
++ return -EFAULT;
++
++ /* that's it for SQPOLL, only the SQPOLL task creates requests */
++ if (sqd)
++ return 0;
++
++ /* now propagate the restriction to all registered users */
++ list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
++ struct io_uring_task *tctx = node->task->io_uring;
++
++ if (WARN_ON_ONCE(!tctx->io_wq))
++ continue;
++
++ for (i = 0; i < ARRAY_SIZE(new_count); i++)
++ new_count[i] = ctx->iowq_limits[i];
++ /* ignore errors, it always returns zero anyway */
++ (void)io_wq_max_workers(tctx->io_wq, new_count);
++ }
++ return 0;
++err:
++ if (sqd) {
++ mutex_unlock(&sqd->lock);
++ io_put_sq_data(sqd);
++ }
++ return ret;
++}
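++
++/*
++ * Capping io-wq worker counts from userspace, sketched. Index 0 is the
++ * bounded pool, index 1 the unbounded pool; a zero leaves that limit
++ * unchanged, and the old limits are written back into the array:
++ *
++ *   __u32 counts[2] = { 4, 16 };
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2);
++ */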
++
++static int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
++{
++ struct io_uring_buf_ring *br;
++ struct io_uring_buf_reg reg;
++ struct io_buffer_list *bl, *free_bl = NULL;
++ struct page **pages;
++ int nr_pages;
++
++ if (copy_from_user(&reg, arg, sizeof(reg)))
++ return -EFAULT;
++
++ if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
++ return -EINVAL;
++ if (!reg.ring_addr)
++ return -EFAULT;
++ if (reg.ring_addr & ~PAGE_MASK)
++ return -EINVAL;
++ if (!is_power_of_2(reg.ring_entries))
++ return -EINVAL;
++
++ /* cannot disambiguate full vs empty due to head/tail size */
++ if (reg.ring_entries >= 65536)
++ return -EINVAL;
++
++ if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {
++ int ret = io_init_bl_list(ctx);
++ if (ret)
++ return ret;
++ }
++
++ bl = io_buffer_get_list(ctx, reg.bgid);
++ if (bl) {
++ /* if mapped buffer ring OR classic exists, don't allow */
++ if (bl->buf_nr_pages || !list_empty(&bl->buf_list))
++ return -EEXIST;
++ } else {
++ free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
++ if (!bl)
++ return -ENOMEM;
++ }
++
++ pages = io_pin_pages(reg.ring_addr,
++ struct_size(br, bufs, reg.ring_entries),
++ &nr_pages);
++ if (IS_ERR(pages)) {
++ kfree(free_bl);
++ return PTR_ERR(pages);
++ }
++
++ br = page_address(pages[0]);
++ bl->buf_pages = pages;
++ bl->buf_nr_pages = nr_pages;
++ bl->nr_entries = reg.ring_entries;
++ bl->buf_ring = br;
++ bl->mask = reg.ring_entries - 1;
++ io_buffer_add_list(ctx, bl, reg.bgid);
++ return 0;
++}
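++
++/*
++ * Registering a provided-buffer ring from userspace, sketched; the ring
++ * must be page-aligned with a power-of-2 entry count below 65536
++ * (ring_entries and bgid are caller-chosen):
++ *
++ *   struct io_uring_buf_ring *br;
++ *   posix_memalign((void **)&br, 4096,
++ *                  ring_entries * sizeof(struct io_uring_buf));
++ *   struct io_uring_buf_reg reg = {
++ *           .ring_addr    = (__u64)(uintptr_t)br,
++ *           .ring_entries = ring_entries,
++ *           .bgid         = bgid,
++ *   };
++ *   syscall(__NR_io_uring_register, ring_fd,
++ *           IORING_REGISTER_PBUF_RING, &reg, 1);
++ */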
++
++static int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
++{
++ struct io_uring_buf_reg reg;
++ struct io_buffer_list *bl;
++
++ if (copy_from_user(&reg, arg, sizeof(reg)))
++ return -EFAULT;
++ if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
++ return -EINVAL;
++
++ bl = io_buffer_get_list(ctx, reg.bgid);
++ if (!bl)
++ return -ENOENT;
++ if (!bl->buf_nr_pages)
++ return -EINVAL;
++
++ __io_remove_buffers(ctx, bl, -1U);
++ if (bl->bgid >= BGID_ARRAY) {
++ xa_erase(&ctx->io_bl_xa, bl->bgid);
++ kfree(bl);
++ }
++ return 0;
++}
++
++static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
++ void __user *arg, unsigned nr_args)
++ __releases(ctx->uring_lock)
++ __acquires(ctx->uring_lock)
++{
++ int ret;
++
++ /*
++ * We're inside the ring mutex; if the ref is already dying, then
++ * someone else killed the ctx or is already going through
++ * io_uring_register().
++ */
++ if (percpu_ref_is_dying(&ctx->refs))
++ return -ENXIO;
++
++ if (ctx->restricted) {
++ if (opcode >= IORING_REGISTER_LAST)
++ return -EINVAL;
++ opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
++ if (!test_bit(opcode, ctx->restrictions.register_op))
++ return -EACCES;
++ }
++
++ switch (opcode) {
++ case IORING_REGISTER_BUFFERS:
++ ret = -EFAULT;
++ if (!arg)
++ break;
++ ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
++ break;
++ case IORING_UNREGISTER_BUFFERS:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_sqe_buffers_unregister(ctx);
++ break;
++ case IORING_REGISTER_FILES:
++ ret = -EFAULT;
++ if (!arg)
++ break;
++ ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
++ break;
++ case IORING_UNREGISTER_FILES:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_sqe_files_unregister(ctx);
++ break;
++ case IORING_REGISTER_FILES_UPDATE:
++ ret = io_register_files_update(ctx, arg, nr_args);
++ break;
++ case IORING_REGISTER_EVENTFD:
++ ret = -EINVAL;
++ if (nr_args != 1)
++ break;
++ ret = io_eventfd_register(ctx, arg, 0);
++ break;
++ case IORING_REGISTER_EVENTFD_ASYNC:
++ ret = -EINVAL;
++ if (nr_args != 1)
++ break;
++ ret = io_eventfd_register(ctx, arg, 1);
++ break;
++ case IORING_UNREGISTER_EVENTFD:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_eventfd_unregister(ctx);
++ break;
++ case IORING_REGISTER_PROBE:
++ ret = -EINVAL;
++ if (!arg || nr_args > 256)
++ break;
++ ret = io_probe(ctx, arg, nr_args);
++ break;
++ case IORING_REGISTER_PERSONALITY:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_register_personality(ctx);
++ break;
++ case IORING_UNREGISTER_PERSONALITY:
++ ret = -EINVAL;
++ if (arg)
++ break;
++ ret = io_unregister_personality(ctx, nr_args);
++ break;
++ case IORING_REGISTER_ENABLE_RINGS:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_register_enable_rings(ctx);
++ break;
++ case IORING_REGISTER_RESTRICTIONS:
++ ret = io_register_restrictions(ctx, arg, nr_args);
++ break;
++ case IORING_REGISTER_FILES2:
++ ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
++ break;
++ case IORING_REGISTER_FILES_UPDATE2:
++ ret = io_register_rsrc_update(ctx, arg, nr_args,
++ IORING_RSRC_FILE);
++ break;
++ case IORING_REGISTER_BUFFERS2:
++ ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
++ break;
++ case IORING_REGISTER_BUFFERS_UPDATE:
++ ret = io_register_rsrc_update(ctx, arg, nr_args,
++ IORING_RSRC_BUFFER);
++ break;
++ case IORING_REGISTER_IOWQ_AFF:
++ ret = -EINVAL;
++ if (!arg || !nr_args)
++ break;
++ ret = io_register_iowq_aff(ctx, arg, nr_args);
++ break;
++ case IORING_UNREGISTER_IOWQ_AFF:
++ ret = -EINVAL;
++ if (arg || nr_args)
++ break;
++ ret = io_unregister_iowq_aff(ctx);
++ break;
++ case IORING_REGISTER_IOWQ_MAX_WORKERS:
++ ret = -EINVAL;
++ if (!arg || nr_args != 2)
++ break;
++ ret = io_register_iowq_max_workers(ctx, arg);
++ break;
++ case IORING_REGISTER_RING_FDS:
++ ret = io_ringfd_register(ctx, arg, nr_args);
++ break;
++ case IORING_UNREGISTER_RING_FDS:
++ ret = io_ringfd_unregister(ctx, arg, nr_args);
++ break;
++ case IORING_REGISTER_PBUF_RING:
++ ret = -EINVAL;
++ if (!arg || nr_args != 1)
++ break;
++ ret = io_register_pbuf_ring(ctx, arg);
++ break;
++ case IORING_UNREGISTER_PBUF_RING:
++ ret = -EINVAL;
++ if (!arg || nr_args != 1)
++ break;
++ ret = io_unregister_pbuf_ring(ctx, arg);
++ break;
++ default:
++ ret = -EINVAL;
++ break;
++ }
++
++ return ret;
++}
++
++SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
++ void __user *, arg, unsigned int, nr_args)
++{
++ struct io_ring_ctx *ctx;
++ long ret = -EBADF;
++ struct fd f;
++
++ f = fdget(fd);
++ if (!f.file)
++ return -EBADF;
++
++ ret = -EOPNOTSUPP;
++ if (f.file->f_op != &io_uring_fops)
++ goto out_fput;
++
++ ctx = f.file->private_data;
++
++ io_run_task_work();
++
++ mutex_lock(&ctx->uring_lock);
++ ret = __io_uring_register(ctx, opcode, arg, nr_args);
++ mutex_unlock(&ctx->uring_lock);
++ trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
++out_fput:
++ fdput(f);
++ return ret;
++}
++
++static int io_no_issue(struct io_kiocb *req, unsigned int issue_flags)
++{
++ WARN_ON_ONCE(1);
++ return -ECANCELED;
++}
++
++static const struct io_op_def io_op_defs[] = {
++ [IORING_OP_NOP] = {
++ .audit_skip = 1,
++ .iopoll = 1,
++ .prep = io_nop_prep,
++ .issue = io_nop,
++ },
++ [IORING_OP_READV] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .buffer_select = 1,
++ .needs_async_setup = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_read,
++ },
++ [IORING_OP_WRITEV] = {
++ .needs_file = 1,
++ .hash_reg_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .needs_async_setup = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_write,
++ },
++ [IORING_OP_FSYNC] = {
++ .needs_file = 1,
++ .audit_skip = 1,
++ .prep = io_fsync_prep,
++ .issue = io_fsync,
++ },
++ [IORING_OP_READ_FIXED] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_read,
++ },
++ [IORING_OP_WRITE_FIXED] = {
++ .needs_file = 1,
++ .hash_reg_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_write,
++ },
++ [IORING_OP_POLL_ADD] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .audit_skip = 1,
++ .prep = io_poll_add_prep,
++ .issue = io_poll_add,
++ },
++ [IORING_OP_POLL_REMOVE] = {
++ .audit_skip = 1,
++ .prep = io_poll_remove_prep,
++ .issue = io_poll_remove,
++ },
++ [IORING_OP_SYNC_FILE_RANGE] = {
++ .needs_file = 1,
++ .audit_skip = 1,
++ .prep = io_sfr_prep,
++ .issue = io_sync_file_range,
++ },
++ [IORING_OP_SENDMSG] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .needs_async_setup = 1,
++ .ioprio = 1,
++ .async_size = sizeof(struct io_async_msghdr),
++ .prep = io_sendmsg_prep,
++ .issue = io_sendmsg,
++ },
++ [IORING_OP_RECVMSG] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .buffer_select = 1,
++ .needs_async_setup = 1,
++ .ioprio = 1,
++ .async_size = sizeof(struct io_async_msghdr),
++ .prep = io_recvmsg_prep,
++ .issue = io_recvmsg,
++ },
++ [IORING_OP_TIMEOUT] = {
++ .audit_skip = 1,
++ .async_size = sizeof(struct io_timeout_data),
++ .prep = io_timeout_prep,
++ .issue = io_timeout,
++ },
++ [IORING_OP_TIMEOUT_REMOVE] = {
++ /* used by timeout updates' prep() */
++ .audit_skip = 1,
++ .prep = io_timeout_remove_prep,
++ .issue = io_timeout_remove,
++ },
++ [IORING_OP_ACCEPT] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .poll_exclusive = 1,
++ .ioprio = 1, /* used for flags */
++ .prep = io_accept_prep,
++ .issue = io_accept,
++ },
++ [IORING_OP_ASYNC_CANCEL] = {
++ .audit_skip = 1,
++ .prep = io_async_cancel_prep,
++ .issue = io_async_cancel,
++ },
++ [IORING_OP_LINK_TIMEOUT] = {
++ .audit_skip = 1,
++ .async_size = sizeof(struct io_timeout_data),
++ .prep = io_link_timeout_prep,
++ .issue = io_no_issue,
++ },
++ [IORING_OP_CONNECT] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .needs_async_setup = 1,
++ .async_size = sizeof(struct io_async_connect),
++ .prep = io_connect_prep,
++ .issue = io_connect,
++ },
++ [IORING_OP_FALLOCATE] = {
++ .needs_file = 1,
++ .prep = io_fallocate_prep,
++ .issue = io_fallocate,
++ },
++ [IORING_OP_OPENAT] = {
++ .prep = io_openat_prep,
++ .issue = io_openat,
++ },
++ [IORING_OP_CLOSE] = {
++ .prep = io_close_prep,
++ .issue = io_close,
++ },
++ [IORING_OP_FILES_UPDATE] = {
++ .audit_skip = 1,
++ .iopoll = 1,
++ .prep = io_files_update_prep,
++ .issue = io_files_update,
++ },
++ [IORING_OP_STATX] = {
++ .audit_skip = 1,
++ .prep = io_statx_prep,
++ .issue = io_statx,
++ },
++ [IORING_OP_READ] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .buffer_select = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_read,
++ },
++ [IORING_OP_WRITE] = {
++ .needs_file = 1,
++ .hash_reg_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .plug = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .iopoll = 1,
++ .async_size = sizeof(struct io_async_rw),
++ .prep = io_prep_rw,
++ .issue = io_write,
++ },
++ [IORING_OP_FADVISE] = {
++ .needs_file = 1,
++ .audit_skip = 1,
++ .prep = io_fadvise_prep,
++ .issue = io_fadvise,
++ },
++ [IORING_OP_MADVISE] = {
++ .prep = io_madvise_prep,
++ .issue = io_madvise,
++ },
++ [IORING_OP_SEND] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollout = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .prep = io_sendmsg_prep,
++ .issue = io_send,
++ },
++ [IORING_OP_RECV] = {
++ .needs_file = 1,
++ .unbound_nonreg_file = 1,
++ .pollin = 1,
++ .buffer_select = 1,
++ .audit_skip = 1,
++ .ioprio = 1,
++ .prep = io_recvmsg_prep,
++ .issue = io_recv,
++ },
++ [IORING_OP_OPENAT2] = {
++ .prep = io_openat2_prep,
++ .issue = io_openat2,
++ },
++ [IORING_OP_EPOLL_CTL] = {
++ .unbound_nonreg_file = 1,
++ .audit_skip = 1,
++ .prep = io_epoll_ctl_prep,
++ .issue = io_epoll_ctl,
++ },
++ [IORING_OP_SPLICE] = {
++ .needs_file = 1,
++ .hash_reg_file = 1,
++ .unbound_nonreg_file = 1,
++ .audit_skip = 1,
++ .prep = io_splice_prep,
++ .issue = io_splice,
++ },
++ [IORING_OP_PROVIDE_BUFFERS] = {
++ .audit_skip = 1,
++ .iopoll = 1,
++ .prep = io_provide_buffers_prep,
++ .issue = io_provide_buffers,
++ },
++ [IORING_OP_REMOVE_BUFFERS] = {
++ .audit_skip = 1,
++ .iopoll = 1,
++ .prep = io_remove_buffers_prep,
++ .issue = io_remove_buffers,
++ },
++ [IORING_OP_TEE] = {
++ .needs_file = 1,
++ .hash_reg_file = 1,
++ .unbound_nonreg_file = 1,
++ .audit_skip = 1,
++ .prep = io_tee_prep,
++ .issue = io_tee,
++ },
++ [IORING_OP_SHUTDOWN] = {
++ .needs_file = 1,
++ .prep = io_shutdown_prep,
++ .issue = io_shutdown,
++ },
++ [IORING_OP_RENAMEAT] = {
++ .prep = io_renameat_prep,
++ .issue = io_renameat,
++ },
++ [IORING_OP_UNLINKAT] = {
++ .prep = io_unlinkat_prep,
++ .issue = io_unlinkat,
++ },
++ [IORING_OP_MKDIRAT] = {
++ .prep = io_mkdirat_prep,
++ .issue = io_mkdirat,
++ },
++ [IORING_OP_SYMLINKAT] = {
++ .prep = io_symlinkat_prep,
++ .issue = io_symlinkat,
++ },
++ [IORING_OP_LINKAT] = {
++ .prep = io_linkat_prep,
++ .issue = io_linkat,
++ },
++ [IORING_OP_MSG_RING] = {
++ .needs_file = 1,
++ .iopoll = 1,
++ .prep = io_msg_ring_prep,
++ .issue = io_msg_ring,
++ },
++ [IORING_OP_FSETXATTR] = {
++ .needs_file = 1,
++ .prep = io_fsetxattr_prep,
++ .issue = io_fsetxattr,
++ },
++ [IORING_OP_SETXATTR] = {
++ .prep = io_setxattr_prep,
++ .issue = io_setxattr,
++ },
++ [IORING_OP_FGETXATTR] = {
++ .needs_file = 1,
++ .prep = io_fgetxattr_prep,
++ .issue = io_fgetxattr,
++ },
++ [IORING_OP_GETXATTR] = {
++ .prep = io_getxattr_prep,
++ .issue = io_getxattr,
++ },
++ [IORING_OP_SOCKET] = {
++ .audit_skip = 1,
++ .prep = io_socket_prep,
++ .issue = io_socket,
++ },
++ [IORING_OP_URING_CMD] = {
++ .needs_file = 1,
++ .plug = 1,
++ .needs_async_setup = 1,
++ .async_size = uring_cmd_pdu_size(1),
++ .prep = io_uring_cmd_prep,
++ .issue = io_uring_cmd,
++ },
++};
++
++static int __init io_uring_init(void)
++{
++ int i;
++
++#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
++ BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
++ BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
++} while (0)
++
++#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
++ __BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
++ BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
++ BUILD_BUG_SQE_ELEM(0, __u8, opcode);
++ BUILD_BUG_SQE_ELEM(1, __u8, flags);
++ BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
++ BUILD_BUG_SQE_ELEM(4, __s32, fd);
++ BUILD_BUG_SQE_ELEM(8, __u64, off);
++ BUILD_BUG_SQE_ELEM(8, __u64, addr2);
++ BUILD_BUG_SQE_ELEM(16, __u64, addr);
++ BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
++ BUILD_BUG_SQE_ELEM(24, __u32, len);
++ BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
++ BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
++ BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
++ BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
++ BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
++ BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
++ BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
++ BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
++ BUILD_BUG_SQE_ELEM(32, __u64, user_data);
++ BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
++ BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
++ BUILD_BUG_SQE_ELEM(42, __u16, personality);
++ BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
++ BUILD_BUG_SQE_ELEM(44, __u32, file_index);
++ BUILD_BUG_SQE_ELEM(48, __u64, addr3);
++
++ BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
++ sizeof(struct io_uring_rsrc_update));
++ BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
++ sizeof(struct io_uring_rsrc_update2));
++
++ /* ->buf_index is u16 */
++ BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
++ BUILD_BUG_ON(BGID_ARRAY * sizeof(struct io_buffer_list) > PAGE_SIZE);
++ BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 0);
++ BUILD_BUG_ON(offsetof(struct io_uring_buf, resv) !=
++ offsetof(struct io_uring_buf_ring, tail));
++
++ /* should fit into one byte */
++ BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
++ BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
++ BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);
++
++ BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
++ BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));
++
++ BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));
++
++ BUILD_BUG_ON(sizeof(struct io_uring_cmd) > 64);
++
++ for (i = 0; i < ARRAY_SIZE(io_op_defs); i++) {
++ BUG_ON(!io_op_defs[i].prep);
++ BUG_ON(!io_op_defs[i].issue);
++ }
++
++ req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
++ SLAB_ACCOUNT);
++ return 0;
++};
++__initcall(io_uring_init);
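
A side note on the BUILD_BUG_SQE_ELEM() block above: io_uring pins its
userspace ABI at compile time, so an accidental reshuffle of struct
io_uring_sqe breaks the build instead of corrupting submissions at
runtime. A minimal standalone sketch of the same technique, using C11
_Static_assert in place of the kernel's BUILD_BUG_ON; struct sqe_like
is an invented example, not the real SQE:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-layout record shared across an ABI boundary. */
struct sqe_like {
	uint8_t  opcode;	/* byte 0 */
	uint8_t  flags;		/* byte 1 */
	uint16_t ioprio;	/* bytes 2-3 */
	int32_t  fd;		/* bytes 4-7 */
	uint64_t off;		/* bytes 8-15 */
};

/* Same idea as BUILD_BUG_SQE_ELEM(): fail the build, not the runtime. */
#define VERIFY_ELEM(member, etype, eoffset)				\
	_Static_assert(offsetof(struct sqe_like, member) == (eoffset),	\
		       #member ": offset changed");			\
	_Static_assert(sizeof(etype) ==					\
		       sizeof(((struct sqe_like *)0)->member),		\
		       #member ": size changed")

VERIFY_ELEM(opcode, uint8_t, 0);
VERIFY_ELEM(ioprio, uint16_t, 2);
VERIFY_ELEM(off, uint64_t, 8);
_Static_assert(sizeof(struct sqe_like) == 16, "record size changed");

int main(void) { return 0; }

Move a member and the compiler names the field that drifted.
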
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index fe40d3b9458f0..1d05d63e6fa5a 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -156,6 +156,11 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ return &array->map;
+ }
+
++static void *array_map_elem_ptr(struct bpf_array* array, u32 index)
++{
++ return array->value + (u64)array->elem_size * index;
++}
++
+ /* Called from syscall or from eBPF program */
+ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
+ {
+@@ -165,7 +170,7 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
+ if (unlikely(index >= array->map.max_entries))
+ return NULL;
+
+- return array->value + array->elem_size * (index & array->index_mask);
++ return array->value + (u64)array->elem_size * (index & array->index_mask);
+ }
+
+ static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
+@@ -339,7 +344,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
+ value, map->value_size);
+ } else {
+ val = array->value +
+- array->elem_size * (index & array->index_mask);
++ (u64)array->elem_size * (index & array->index_mask);
+ if (map_flags & BPF_F_LOCK)
+ copy_map_value_locked(map, val, value, false);
+ else
+@@ -408,8 +413,7 @@ static void array_map_free_timers(struct bpf_map *map)
+ return;
+
+ for (i = 0; i < array->map.max_entries; i++)
+- bpf_timer_cancel_and_free(array->value + array->elem_size * i +
+- map->timer_off);
++ bpf_timer_cancel_and_free(array_map_elem_ptr(array, i) + map->timer_off);
+ }
+
+ /* Called when map->refcnt goes to zero, either from workqueue or from syscall */
+@@ -420,7 +424,7 @@ static void array_map_free(struct bpf_map *map)
+
+ if (map_value_has_kptrs(map)) {
+ for (i = 0; i < array->map.max_entries; i++)
+- bpf_map_free_kptrs(map, array->value + array->elem_size * i);
++ bpf_map_free_kptrs(map, array_map_elem_ptr(array, i));
+ bpf_map_free_kptr_off_tab(map);
+ }
+
+@@ -556,7 +560,7 @@ static void *bpf_array_map_seq_start(struct seq_file *seq, loff_t *pos)
+ index = info->index & array->index_mask;
+ if (info->percpu_value_buf)
+ return array->pptrs[index];
+- return array->value + array->elem_size * index;
++ return array_map_elem_ptr(array, index);
+ }
+
+ static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+@@ -575,7 +579,7 @@ static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ index = info->index & array->index_mask;
+ if (info->percpu_value_buf)
+ return array->pptrs[index];
+- return array->value + array->elem_size * index;
++ return array_map_elem_ptr(array, index);
+ }
+
+ static int __bpf_array_map_seq_show(struct seq_file *seq, void *v)
+@@ -690,7 +694,7 @@ static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_
+ if (is_percpu)
+ val = this_cpu_ptr(array->pptrs[i]);
+ else
+- val = array->value + array->elem_size * i;
++ val = array_map_elem_ptr(array, i);
+ num_elems++;
+ key = i;
+ ret = callback_fn((u64)(long)map, (u64)(long)&key,
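
The arraymap.c hunks above all make one fix: elem_size and index are
both 32-bit, so elem_size * index is evaluated in 32-bit arithmetic
and can wrap before the result is added to the base pointer. Casting
one operand to u64 first, as the patch does, forces a 64-bit multiply.
A standalone illustration of the wrap (the sizes are invented):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t elem_size = 1u << 20;	/* 1 MiB per element */
	uint32_t index = 1u << 13;	/* element 8192 */

	/* 32-bit multiply wraps: 2^33 mod 2^32 == 0 */
	uint64_t bad = elem_size * index;
	/* widen first, as in (u64)array->elem_size * index */
	uint64_t good = (uint64_t)elem_size * index;

	printf("wrapped: %llu  correct: %llu\n",
	       (unsigned long long)bad, (unsigned long long)good);
	return 0;
}
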
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index afb414b26d01d..7a394f7c205c4 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -720,6 +720,60 @@ static struct bpf_prog_list *find_detach_entry(struct list_head *progs,
+ return ERR_PTR(-ENOENT);
+ }
+
++/**
++ * purge_effective_progs() - After compute_effective_progs fails to alloc new
++ * cgrp->bpf.inactive table we can recover by
++ * recomputing the array in place.
++ *
++ * @cgrp: The cgroup whose descendants to traverse
++ * @prog: A program to detach or NULL
++ * @link: A link to detach or NULL
++ * @atype: Type of detach operation
++ */
++static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
++ struct bpf_cgroup_link *link,
++ enum cgroup_bpf_attach_type atype)
++{
++ struct cgroup_subsys_state *css;
++ struct bpf_prog_array *progs;
++ struct bpf_prog_list *pl;
++ struct list_head *head;
++ struct cgroup *cg;
++ int pos;
++
++ /* recompute effective prog array in place */
++ css_for_each_descendant_pre(css, &cgrp->self) {
++ struct cgroup *desc = container_of(css, struct cgroup, self);
++
++ if (percpu_ref_is_zero(&desc->bpf.refcnt))
++ continue;
++
++ /* find position of link or prog in effective progs array */
++ for (pos = 0, cg = desc; cg; cg = cgroup_parent(cg)) {
++ if (pos && !(cg->bpf.flags[atype] & BPF_F_ALLOW_MULTI))
++ continue;
++
++ head = &cg->bpf.progs[atype];
++ list_for_each_entry(pl, head, node) {
++ if (!prog_list_prog(pl))
++ continue;
++ if (pl->prog == prog && pl->link == link)
++ goto found;
++ pos++;
++ }
++ }
++found:
++ BUG_ON(!cg);
++ progs = rcu_dereference_protected(
++ desc->bpf.effective[atype],
++ lockdep_is_held(&cgroup_mutex));
++
++ /* Remove the program from the array */
++ WARN_ONCE(bpf_prog_array_delete_safe_at(progs, pos),
++ "Failed to purge a prog from array at index %d", pos);
++ }
++}
++
+ /**
+ * __cgroup_bpf_detach() - Detach the program or link from a cgroup, and
+ * propagate the change to descendants
+@@ -739,7 +793,6 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ struct bpf_prog_list *pl;
+ struct list_head *progs;
+ u32 flags;
+- int err;
+
+ atype = to_cgroup_bpf_attach_type(type);
+ if (atype < 0)
+@@ -761,9 +814,12 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ pl->prog = NULL;
+ pl->link = NULL;
+
+- err = update_effective_progs(cgrp, atype);
+- if (err)
+- goto cleanup;
++ if (update_effective_progs(cgrp, atype)) {
++		/* if updating the effective array failed, replace the prog with a dummy prog */
++ pl->prog = old_prog;
++ pl->link = link;
++ purge_effective_progs(cgrp, old_prog, link, atype);
++ }
+
+ /* now can actually delete it from this cgroup list */
+ list_del(&pl->node);
+@@ -775,12 +831,6 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ bpf_prog_put(old_prog);
+ static_branch_dec(&cgroup_bpf_enabled_key[atype]);
+ return 0;
+-
+-cleanup:
+- /* restore back prog or link */
+- pl->prog = old_prog;
+- pl->link = link;
+- return err;
+ }
+
+ static int cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
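
The detach rework above turns an error return into a recovery path:
detach must not fail merely because compute_effective_progs() cannot
allocate a fresh array, so on allocation failure the stale entry is
purged from the effective arrays in place. A hedged userspace sketch
of that fallback shape (names are illustrative and pos is assumed to
be in range):

#include <stdlib.h>
#include <string.h>

/* Prefer a freshly built array; if the allocation fails, degrade to
 * editing the old array in place so the caller never sees an error. */
static void detach_at(int **arr, size_t *len, size_t pos)
{
	size_t tail = *len - pos - 1;
	int *fresh = malloc((*len - 1) * sizeof(**arr));

	if (fresh) {
		memcpy(fresh, *arr, pos * sizeof(**arr));
		memcpy(fresh + pos, *arr + pos + 1, tail * sizeof(**arr));
		free(*arr);
		*arr = fresh;
	} else {
		/* the purge_effective_progs() analogue: shift in place */
		memmove(*arr + pos, *arr + pos + 1, tail * sizeof(**arr));
	}
	(*len)--;
}
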
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index e7961508a47d9..fb6bd57228a84 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -649,12 +649,6 @@ static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp)
+ return fp->jited && !bpf_prog_was_classic(fp);
+ }
+
+-static bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp)
+-{
+- return list_empty(&fp->aux->ksym.lnode) ||
+- fp->aux->ksym.lnode.prev == LIST_POISON2;
+-}
+-
+ void bpf_prog_kallsyms_add(struct bpf_prog *fp)
+ {
+ if (!bpf_prog_kallsyms_candidate(fp) ||
+@@ -1152,7 +1146,6 @@ int bpf_jit_binary_pack_finalize(struct bpf_prog *prog,
+ bpf_prog_pack_free(ro_header);
+ return PTR_ERR(ptr);
+ }
+- prog->aux->use_bpf_prog_pack = true;
+ return 0;
+ }
+
+@@ -1176,17 +1169,23 @@ void bpf_jit_binary_pack_free(struct bpf_binary_header *ro_header,
+ bpf_jit_uncharge_modmem(size);
+ }
+
++struct bpf_binary_header *
++bpf_jit_binary_pack_hdr(const struct bpf_prog *fp)
++{
++ unsigned long real_start = (unsigned long)fp->bpf_func;
++ unsigned long addr;
++
++ addr = real_start & BPF_PROG_CHUNK_MASK;
++ return (void *)addr;
++}
++
+ static inline struct bpf_binary_header *
+ bpf_jit_binary_hdr(const struct bpf_prog *fp)
+ {
+ unsigned long real_start = (unsigned long)fp->bpf_func;
+ unsigned long addr;
+
+- if (fp->aux->use_bpf_prog_pack)
+- addr = real_start & BPF_PROG_CHUNK_MASK;
+- else
+- addr = real_start & PAGE_MASK;
+-
++ addr = real_start & PAGE_MASK;
+ return (void *)addr;
+ }
+
+@@ -1199,11 +1198,7 @@ void __weak bpf_jit_free(struct bpf_prog *fp)
+ if (fp->jited) {
+ struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp);
+
+- if (fp->aux->use_bpf_prog_pack)
+- bpf_jit_binary_pack_free(hdr, NULL /* rw_buffer */);
+- else
+- bpf_jit_binary_free(hdr);
+-
++ bpf_jit_binary_free(hdr);
+ WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(fp));
+ }
+
+@@ -2716,6 +2711,12 @@ bool __weak bpf_jit_needs_zext(void)
+ return false;
+ }
+
++/* Return TRUE if the JIT backend supports mixing bpf2bpf and tailcalls. */
++bool __weak bpf_jit_supports_subprog_tailcalls(void)
++{
++ return false;
++}
++
+ bool __weak bpf_jit_supports_kfunc_call(void)
+ {
+ return false;
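
bpf_jit_supports_subprog_tailcalls(), added above, follows the
kernel's usual feature-probe shape: a __weak default returns the
conservative answer, and a JIT backend that supports the feature
provides a strong definition that the linker prefers. Outside the
kernel, __weak is just the GCC/Clang weak attribute; a two-file
sketch with an invented function name:

/* feature.c -- the conservative default */
__attribute__((weak)) int jit_supports_subprog_tailcalls(void)
{
	return 0;
}

/* x86_jit.c -- a backend that supports the feature ships the strong
 * definition; at link time it overrides the weak default above */
int jit_supports_subprog_tailcalls(void)
{
	return 1;
}

The kexec_file.c hunks further down apply the inverse cleanup: where
every architecture ended up using the same default, the __weak
indirection is dropped entirely.
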
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 0efbac0fd1264..e91d2faef1605 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6143,7 +6143,8 @@ static bool may_update_sockmap(struct bpf_verifier_env *env, int func_id)
+
+ static bool allow_tail_call_in_subprogs(struct bpf_verifier_env *env)
+ {
+- return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
++ return env->prog->jit_requested &&
++ bpf_jit_supports_subprog_tailcalls();
+ }
+
+ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+@@ -13525,6 +13526,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ /* Below members will be freed only at prog->aux */
+ func[i]->aux->btf = prog->aux->btf;
+ func[i]->aux->func_info = prog->aux->func_info;
++ func[i]->aux->func_info_cnt = prog->aux->func_info_cnt;
+ func[i]->aux->poke_tab = prog->aux->poke_tab;
+ func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
+
+@@ -13537,9 +13539,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ poke->aux = func[i]->aux;
+ }
+
+- /* Use bpf_prog_F_tag to indicate functions in stack traces.
+- * Long term would need debug info to populate names
+- */
+ func[i]->aux->name[0] = 'F';
+ func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
+ func[i]->jit_requested = 1;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 71a418858a5e0..58aadfda9b8b3 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2239,7 +2239,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ goto out_unlock;
+
+ cgroup_taskset_for_each(task, css, tset) {
+- ret = task_can_attach(task, cs->cpus_allowed);
++ ret = task_can_attach(task, cs->effective_cpus);
+ if (ret)
+ goto out_unlock;
+ ret = security_task_setscheduler(task);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index cb50f8d383606..5830dce6081b3 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -580,7 +580,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ int index;
+ phys_addr_t tlb_addr;
+
+- if (!mem)
++ if (!mem || !mem->nslabs)
+ panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
+
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
+diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
+index 10929eda98258..fc760d064a653 100644
+--- a/kernel/irq/Kconfig
++++ b/kernel/irq/Kconfig
+@@ -82,6 +82,7 @@ config IRQ_FASTEOI_HIERARCHY_HANDLERS
+ # Generic IRQ IPI support
+ config GENERIC_IRQ_IPI
+ bool
++ depends on SMP
+ select IRQ_DOMAIN_HIERARCHY
+
+ # Generic MSI interrupt support
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 886789dcee435..c19040530789f 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -1516,7 +1516,8 @@ int irq_chip_request_resources_parent(struct irq_data *data)
+ if (data->chip->irq_request_resources)
+ return data->chip->irq_request_resources(data);
+
+- return -ENOSYS;
++ /* no error on missing optional irq_chip::irq_request_resources */
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(irq_chip_request_resources_parent);
+
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index d5ce965105493..481abb885d61f 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -910,6 +910,8 @@ struct irq_desc *__irq_resolve_mapping(struct irq_domain *domain,
+ data = irq_domain_get_irq_data(domain, hwirq);
+ if (data && data->hwirq == hwirq)
+ desc = irq_data_to_desc(data);
++ if (irq && desc)
++ *irq = hwirq;
+ }
+
+ return desc;
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index f9261c07b0487..6dc1294c90fcf 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -62,14 +62,7 @@ int kexec_image_probe_default(struct kimage *image, void *buf,
+ return ret;
+ }
+
+-/* Architectures can provide this probe function */
+-int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+- unsigned long buf_len)
+-{
+- return kexec_image_probe_default(image, buf, buf_len);
+-}
+-
+-static void *kexec_image_load_default(struct kimage *image)
++void *kexec_image_load_default(struct kimage *image)
+ {
+ if (!image->fops || !image->fops->load)
+ return ERR_PTR(-ENOEXEC);
+@@ -80,11 +73,6 @@ static void *kexec_image_load_default(struct kimage *image)
+ image->cmdline_buf_len);
+ }
+
+-void * __weak arch_kexec_kernel_image_load(struct kimage *image)
+-{
+- return kexec_image_load_default(image);
+-}
+-
+ int kexec_image_post_load_cleanup_default(struct kimage *image)
+ {
+ if (!image->fops || !image->fops->cleanup)
+@@ -93,30 +81,6 @@ int kexec_image_post_load_cleanup_default(struct kimage *image)
+ return image->fops->cleanup(image->image_loader_data);
+ }
+
+-int __weak arch_kimage_file_post_load_cleanup(struct kimage *image)
+-{
+- return kexec_image_post_load_cleanup_default(image);
+-}
+-
+-#ifdef CONFIG_KEXEC_SIG
+-static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
+- unsigned long buf_len)
+-{
+- if (!image->fops || !image->fops->verify_sig) {
+- pr_debug("kernel loader does not support signature verification.\n");
+- return -EKEYREJECTED;
+- }
+-
+- return image->fops->verify_sig(buf, buf_len);
+-}
+-
+-int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+- unsigned long buf_len)
+-{
+- return kexec_image_verify_sig_default(image, buf, buf_len);
+-}
+-#endif
+-
+ /*
+ * Free up memory used by kernel, initrd, and command line. This is temporary
+ * memory allocation which is not needed any more after these buffers have
+@@ -159,13 +123,24 @@ void kimage_file_post_load_cleanup(struct kimage *image)
+ }
+
+ #ifdef CONFIG_KEXEC_SIG
++static int kexec_image_verify_sig(struct kimage *image, void *buf,
++ unsigned long buf_len)
++{
++ if (!image->fops || !image->fops->verify_sig) {
++ pr_debug("kernel loader does not support signature verification.\n");
++ return -EKEYREJECTED;
++ }
++
++ return image->fops->verify_sig(buf, buf_len);
++}
++
+ static int
+ kimage_validate_signature(struct kimage *image)
+ {
+ int ret;
+
+- ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf,
+- image->kernel_buf_len);
++ ret = kexec_image_verify_sig(image, image->kernel_buf,
++ image->kernel_buf_len);
+ if (ret) {
+
+ if (sig_enforce) {
+@@ -621,19 +596,6 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
+ return ret == 1 ? 0 : -EADDRNOTAVAIL;
+ }
+
+-/**
+- * arch_kexec_locate_mem_hole - Find free memory to place the segments.
+- * @kbuf: Parameters for the memory search.
+- *
+- * On success, kbuf->mem will have the start address of the memory region found.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
+-{
+- return kexec_locate_mem_hole(kbuf);
+-}
+-
+ /**
+ * kexec_add_buffer - place a buffer in a kexec segment
+ * @kbuf: Buffer contents and memory parameters.
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f214f8c088ede..80697e5e03e49 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1560,7 +1560,8 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ preempt_disable();
+
+ /* Ensure it is not in reserved area nor out of text */
+- if (!kernel_text_address((unsigned long) p->addr) ||
++ if (!(core_kernel_text((unsigned long) p->addr) ||
++ is_module_text_address((unsigned long) p->addr)) ||
+ within_kprobe_blacklist((unsigned long) p->addr) ||
+ jump_label_text_reserved(p->addr, p->addr) ||
+ static_call_text_reserved(p->addr, p->addr) ||
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index f06b91ca6482d..e2f179491b086 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -5238,9 +5238,10 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
+ return 0;
+ }
+
+- lockdep_init_map_waits(lock, name, key, 0,
+- lock->wait_type_inner,
+- lock->wait_type_outer);
++ lockdep_init_map_type(lock, name, key, 0,
++ lock->wait_type_inner,
++ lock->wait_type_outer,
++ lock->lock_type);
+ class = register_lock_class(lock, subclass, 0);
+ hlock->class_idx = class - lock_classes;
+
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 6c373f2960e71..f82111837b8d1 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -145,7 +145,7 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+
+ /*
+ * The power returned by active_state() is expected to be
+- * positive and to fit into 16 bits.
++ * positive and be in range.
+ */
+ if (!power || power > EM_MAX_POWER) {
+ dev_err(dev, "EM: invalid power: %lu\n",
+@@ -170,7 +170,7 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ goto free_ps_table;
+ }
+ } else {
+- power_res = em_scale_power(table[i].power);
++ power_res = table[i].power;
+ cost = div64_u64(fmax * power_res, table[i].frequency);
+ }
+
+@@ -201,9 +201,17 @@ static int em_create_pd(struct device *dev, int nr_states,
+ {
+ struct em_perf_domain *pd;
+ struct device *cpu_dev;
+- int cpu, ret;
++ int cpu, ret, num_cpus;
+
+ if (_is_cpu_device(dev)) {
++ num_cpus = cpumask_weight(cpus);
++
++		/* Prevent the max possible energy calculation from overflowing */
++ if (num_cpus > EM_MAX_NUM_CPUS) {
++ dev_err(dev, "EM: too many CPUs, overflow possible\n");
++ return -EINVAL;
++ }
++
+ pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL);
+ if (!pd)
+ return -ENOMEM;
+@@ -314,13 +322,13 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
+ * @cpus : Pointer to cpumask_t, which in case of a CPU device is
+ * obligatory. It can be taken from i.e. 'policy->cpus'. For other
+ * type of devices this should be set to NULL.
+- * @milliwatts : Flag indicating that the power values are in milliWatts or
++ * @microwatts : Flag indicating that the power values are in micro-Watts or
+ * in some other scale. It must be set properly.
+ *
+ * Create Energy Model tables for a performance domain using the callbacks
+ * defined in cb.
+ *
+- * The @milliwatts is important to set with correct value. Some kernel
++ * The @microwatts is important to set with correct value. Some kernel
+ * sub-systems might rely on this flag and check if all devices in the EM are
+ * using the same scale.
+ *
+@@ -331,7 +339,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
+ */
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ struct em_data_callback *cb, cpumask_t *cpus,
+- bool milliwatts)
++ bool microwatts)
+ {
+ unsigned long cap, prev_cap = 0;
+ unsigned long flags = 0;
+@@ -381,8 +389,8 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ }
+ }
+
+- if (milliwatts)
+- flags |= EM_PERF_DOMAIN_MILLIWATTS;
++ if (microwatts)
++ flags |= EM_PERF_DOMAIN_MICROWATTS;
+ else if (cb->get_cost)
+ flags |= EM_PERF_DOMAIN_ARTIFICIAL;
+
+diff --git a/kernel/power/user.c b/kernel/power/user.c
+index ad241b4ff64c5..d43c2aa583b26 100644
+--- a/kernel/power/user.c
++++ b/kernel/power/user.c
+@@ -26,6 +26,7 @@
+
+ #include "power.h"
+
++static bool need_wait;
+
+ static struct snapshot_data {
+ struct snapshot_handle handle;
+@@ -78,7 +79,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
+ * Resuming. We may need to wait for the image device to
+ * appear.
+ */
+- wait_for_device_probe();
++ need_wait = true;
+
+ data->swap = -1;
+ data->mode = O_WRONLY;
+@@ -168,6 +169,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
+ ssize_t res;
+ loff_t pg_offp = *offp & ~PAGE_MASK;
+
++ if (need_wait) {
++ wait_for_device_probe();
++ need_wait = false;
++ }
++
+ lock_system_sleep();
+
+ data = filp->private_data;
+@@ -244,6 +250,11 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
+ loff_t size;
+ sector_t offset;
+
++ if (need_wait) {
++ wait_for_device_probe();
++ need_wait = false;
++ }
++
+ if (_IOC_TYPE(cmd) != SNAPSHOT_IOC_MAGIC)
+ return -ENOTTY;
+ if (_IOC_NR(cmd) > SNAPSHOT_IOC_MAXNR)
+diff --git a/kernel/profile.c b/kernel/profile.c
+index 37640a0bd8a3c..ae82ddfc6a684 100644
+--- a/kernel/profile.c
++++ b/kernel/profile.c
+@@ -109,6 +109,13 @@ int __ref profile_init(void)
+
+ /* only text is profiled */
+ prof_len = (_etext - _stext) >> prof_shift;
++
++ if (!prof_len) {
++ pr_warn("profiling shift: %u too large\n", prof_shift);
++ prof_on = 0;
++ return -EINVAL;
++ }
++
+ buffer_bytes = prof_len*sizeof(atomic_t);
+
+ if (!alloc_cpumask_var(&prof_cpu_mask, GFP_KERNEL))
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 7120165a93426..7c72ee97455f8 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2075,6 +2075,19 @@ static int rcutorture_booster_init(unsigned int cpu)
+ if (boost_tasks[cpu] != NULL)
+ return 0; /* Already created, nothing more to do. */
+
++ // Testing RCU priority boosting requires rcutorture do
++ // some serious abuse. Counter this by running ksoftirqd
++ // at higher priority.
++ if (IS_BUILTIN(CONFIG_RCU_TORTURE_TEST)) {
++ struct sched_param sp;
++ struct task_struct *t;
++
++ t = per_cpu(ksoftirqd, cpu);
++ WARN_ON_ONCE(!t);
++ sp.sched_priority = 2;
++ sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
++ }
++
+ /* Don't allow time recalculation while creating a new task. */
+ mutex_lock(&boost_mutex);
+ rcu_torture_disable_rt_throttle();
+@@ -3329,21 +3342,6 @@ rcu_torture_init(void)
+ rcutor_hp = firsterr;
+ if (torture_init_error(firsterr))
+ goto unwind;
+-
+- // Testing RCU priority boosting requires rcutorture do
+- // some serious abuse. Counter this by running ksoftirqd
+- // at higher priority.
+- if (IS_BUILTIN(CONFIG_RCU_TORTURE_TEST)) {
+- for_each_online_cpu(cpu) {
+- struct sched_param sp;
+- struct task_struct *t;
+-
+- t = per_cpu(ksoftirqd, cpu);
+- WARN_ON_ONCE(!t);
+- sp.sched_priority = 2;
+- sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
+- }
+- }
+ }
+ shutdown_jiffies = jiffies + shutdown_secs * HZ;
+ firsterr = torture_shutdown_init(shutdown_secs, rcu_torture_cleanup);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index da0bf6fe9ecdc..d4af56927a4da 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -91,7 +91,7 @@
+ #include "stats.h"
+
+ #include "../workqueue_internal.h"
+-#include "../../fs/io-wq.h"
++#include "../../io_uring/io-wq.h"
+ #include "../smpboot.h"
+
+ /*
+@@ -3808,7 +3808,7 @@ bool cpus_share_cache(int this_cpu, int that_cpu)
+ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
+ }
+
+-static inline bool ttwu_queue_cond(int cpu, int wake_flags)
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
+ {
+ /*
+ * Do not complicate things with the async wake_list while the CPU is
+@@ -3817,6 +3817,10 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
+ if (!cpu_active(cpu))
+ return false;
+
++ /* Ensure the task will still be allowed to run on the CPU. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
+ /*
+ * If the CPU does not share cache, then queue the task on the
+ * remote rqs wakelist to avoid accessing remote data.
+@@ -3824,13 +3828,21 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
+ if (!cpus_share_cache(smp_processor_id(), cpu))
+ return true;
+
++ if (cpu == smp_processor_id())
++ return false;
++
+ /*
+- * If the task is descheduling and the only running task on the
+- * CPU then use the wakelist to offload the task activation to
+- * the soon-to-be-idle CPU as the current CPU is likely busy.
+- * nr_running is checked to avoid unnecessary task stacking.
++ * If the wakee cpu is idle, or the task is descheduling and the
++ * only running task on the CPU, then use the wakelist to offload
++ * the task activation to the idle (or soon-to-be-idle) CPU as
++ * the current CPU is likely busy. nr_running is checked to
++ * avoid unnecessary task stacking.
++ *
++ * Note that we can only get here with (wakee) p->on_rq=0,
++ * p->on_cpu can be whatever, we've done the dequeue, so
++ * the wakee has been accounted out of ->nr_running.
+ */
+- if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
++ if (!cpu_rq(cpu)->nr_running)
+ return true;
+
+ return false;
+@@ -3838,10 +3850,7 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
+
+ static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
+ {
+- if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
+- if (WARN_ON_ONCE(cpu == smp_processor_id()))
+- return false;
+-
++ if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
+ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
+ __ttwu_queue_wakelist(p, cpu, wake_flags);
+ return true;
+@@ -4163,7 +4172,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
+ * scheduling.
+ */
+ if (smp_load_acquire(&p->on_cpu) &&
+- ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
+ goto unlock;
+
+ /*
+@@ -4753,7 +4762,8 @@ static inline void prepare_task(struct task_struct *next)
+ * Claim the task as running, we do this before switching to it
+ * such that any running task will have this set.
+ *
+- * See the ttwu() WF_ON_CPU case and its ordering comment.
++ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++ * its ordering comment.
+ */
+ WRITE_ONCE(next->on_cpu, 1);
+ #endif
+@@ -6500,8 +6510,12 @@ static inline void sched_submit_work(struct task_struct *tsk)
+ io_wq_worker_sleeping(tsk);
+ }
+
+- if (tsk_is_pi_blocked(tsk))
+- return;
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
+
+ /*
+ * If we are going to sleep and we have plugged IO queued,
+@@ -6998,17 +7012,29 @@ out_unlock:
+ EXPORT_SYMBOL(set_user_nice);
+
+ /*
+- * can_nice - check if a task can reduce its nice value
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
+ * @p: task
+ * @nice: nice value
+ */
+-int can_nice(const struct task_struct *p, const int nice)
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
+ {
+ /* Convert nice value [19,-20] to rlimit style value [1,40]: */
+ int nice_rlim = nice_to_rlimit(nice);
+
+- return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) ||
+- capable(CAP_SYS_NICE));
++ return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++ return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
+ }
+
+ #ifdef __ARCH_WANT_SYS_NICE
+@@ -7287,6 +7313,69 @@ static bool check_same_owner(struct task_struct *p)
+ return match;
+ }
+
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++ const struct sched_attr *attr,
++ int policy, int reset_on_fork)
++{
++ if (fair_policy(policy)) {
++ if (attr->sched_nice < task_nice(p) &&
++ !is_nice_reduction(p, attr->sched_nice))
++ goto req_priv;
++ }
++
++ if (rt_policy(policy)) {
++ unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++ /* Can't set/change the rt policy: */
++ if (policy != p->policy && !rlim_rtprio)
++ goto req_priv;
++
++ /* Can't increase priority: */
++ if (attr->sched_priority > p->rt_priority &&
++ attr->sched_priority > rlim_rtprio)
++ goto req_priv;
++ }
++
++ /*
++ * Can't set/change SCHED_DEADLINE policy at all for now
++ * (safest behavior); in the future we would like to allow
++ * unprivileged DL tasks to increase their relative deadline
++ * or reduce their runtime (both ways reducing utilization)
++ */
++ if (dl_policy(policy))
++ goto req_priv;
++
++ /*
++ * Treat SCHED_IDLE as nice 20. Only allow a switch to
++ * SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
++ */
++ if (task_has_idle_policy(p) && !idle_policy(policy)) {
++ if (!is_nice_reduction(p, task_nice(p)))
++ goto req_priv;
++ }
++
++ /* Can't change other user's priorities: */
++ if (!check_same_owner(p))
++ goto req_priv;
++
++ /* Normal users shall not reset the sched_reset_on_fork flag: */
++ if (p->sched_reset_on_fork && !reset_on_fork)
++ goto req_priv;
++
++ return 0;
++
++req_priv:
++ if (!capable(CAP_SYS_NICE))
++ return -EPERM;
++
++ return 0;
++}
++
+ static int __sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ bool user, bool pi)
+@@ -7328,58 +7417,11 @@ recheck:
+ (rt_policy(policy) != (attr->sched_priority != 0)))
+ return -EINVAL;
+
+- /*
+- * Allow unprivileged RT tasks to decrease priority:
+- */
+- if (user && !capable(CAP_SYS_NICE)) {
+- if (fair_policy(policy)) {
+- if (attr->sched_nice < task_nice(p) &&
+- !can_nice(p, attr->sched_nice))
+- return -EPERM;
+- }
+-
+- if (rt_policy(policy)) {
+- unsigned long rlim_rtprio =
+- task_rlimit(p, RLIMIT_RTPRIO);
+-
+- /* Can't set/change the rt policy: */
+- if (policy != p->policy && !rlim_rtprio)
+- return -EPERM;
+-
+- /* Can't increase priority: */
+- if (attr->sched_priority > p->rt_priority &&
+- attr->sched_priority > rlim_rtprio)
+- return -EPERM;
+- }
+-
+- /*
+- * Can't set/change SCHED_DEADLINE policy at all for now
+- * (safest behavior); in the future we would like to allow
+- * unprivileged DL tasks to increase their relative deadline
+- * or reduce their runtime (both ways reducing utilization)
+- */
+- if (dl_policy(policy))
+- return -EPERM;
+-
+- /*
+- * Treat SCHED_IDLE as nice 20. Only allow a switch to
+- * SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
+- */
+- if (task_has_idle_policy(p) && !idle_policy(policy)) {
+- if (!can_nice(p, task_nice(p)))
+- return -EPERM;
+- }
+-
+- /* Can't change other user's priorities: */
+- if (!check_same_owner(p))
+- return -EPERM;
+-
+- /* Normal users shall not reset the sched_reset_on_fork flag: */
+- if (p->sched_reset_on_fork && !reset_on_fork)
+- return -EPERM;
+- }
+-
+ if (user) {
++ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++ if (retval)
++ return retval;
++
+ if (attr->sched_flags & SCHED_FLAG_SUGOV)
+ return -EINVAL;
+
+@@ -8947,7 +8989,7 @@ int cpuset_cpumask_can_shrink(const struct cpumask *cur,
+ }
+
+ int task_can_attach(struct task_struct *p,
+- const struct cpumask *cs_cpus_allowed)
++ const struct cpumask *cs_effective_cpus)
+ {
+ int ret = 0;
+
+@@ -8966,9 +9008,11 @@ int task_can_attach(struct task_struct *p,
+ }
+
+ if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
+- cs_cpus_allowed)) {
+- int cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
++ cs_effective_cpus)) {
++ int cpu = cpumask_any_and(cpu_active_mask, cs_effective_cpus);
+
++ if (unlikely(cpu >= nr_cpu_ids))
++ return -EINVAL;
+ ret = dl_cpu_busy(cpu, p);
+ }
+
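
The __sched_setscheduler() refactor above exists so that
capable(CAP_SYS_NICE), which emits an audit event, is evaluated at
most once and only when one of the permission conditions actually
requires privilege. A hedged userspace sketch of that control flow
(privileged() is a stand-in, not a kernel API):

#include <stdbool.h>

static bool privileged(void)
{
	return false;	/* stub: pretend the caller is unprivileged */
}

/* Funnel every "needs privilege" case to one label so the expensive
 * check runs once, and not at all on permitted operations. */
static int user_check(int cur_prio, int new_prio, bool same_owner)
{
	if (new_prio > cur_prio)	/* can't raise priority */
		goto req_priv;
	if (!same_owner)		/* can't touch other users' tasks */
		goto req_priv;
	return 0;

req_priv:
	return privileged() ? 0 : -1;	/* -1 stands in for -EPERM */
}
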
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 77b2048a93262..0cba1b2e295be 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -2885,6 +2885,7 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
+ p->node_stamp = 0;
+ p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
+ p->numa_scan_period = sysctl_numa_balancing_scan_delay;
++ p->numa_migrate_retry = 0;
+ /* Protect against double add, see task_tick_numa and task_numa_work */
+ p->numa_work.next = &p->numa_work;
+ p->numa_faults = NULL;
+@@ -6336,6 +6337,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
+ {
+ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+ int i, cpu, idle_cpu = -1, nr = INT_MAX;
++ struct sched_domain_shared *sd_share;
+ struct rq *this_rq = this_rq();
+ int this = smp_processor_id();
+ struct sched_domain *this_sd;
+@@ -6375,6 +6377,17 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
+ time = cpu_clock(this);
+ }
+
++ if (sched_feat(SIS_UTIL)) {
++ sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
++ if (sd_share) {
++			/* because !--nr is the condition to stop the scan */
++ nr = READ_ONCE(sd_share->nr_idle_scan) + 1;
++ /* overloaded LLC is unlikely to have idle cpu/core */
++ if (nr == 1)
++ return -1;
++ }
++ }
++
+ for_each_cpu_wrap(cpu, cpus, target + 1) {
+ if (has_idle_core) {
+ i = select_idle_core(p, cpu, cpus, &idle_cpu);
+@@ -7585,8 +7598,8 @@ enum group_type {
+ */
+ group_fully_busy,
+ /*
+- * SD_ASYM_CPUCAPACITY only: One task doesn't fit with CPU's capacity
+- * and must be migrated to a more powerful CPU.
++ * One task doesn't fit with CPU's capacity and must be migrated to a
++ * more powerful CPU.
+ */
+ group_misfit_task,
+ /*
+@@ -8669,6 +8682,19 @@ sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs
+ return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+ }
+
++static inline bool
++sched_reduced_capacity(struct rq *rq, struct sched_domain *sd)
++{
++ /*
++ * When there is more than 1 task, the group_overloaded case already
++	 * takes care of CPUs with reduced capacity
++ */
++ if (rq->cfs.h_nr_running != 1)
++ return false;
++
++ return check_cpu_capacity(rq, sd);
++}
++
+ /**
+ * update_sg_lb_stats - Update sched_group's statistics for load balancing.
+ * @env: The load balancing environment.
+@@ -8691,8 +8717,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
+
+ for_each_cpu_and(i, sched_group_span(group), env->cpus) {
+ struct rq *rq = cpu_rq(i);
++ unsigned long load = cpu_load(rq);
+
+- sgs->group_load += cpu_load(rq);
++ sgs->group_load += load;
+ sgs->group_util += cpu_util_cfs(i);
+ sgs->group_runnable += cpu_runnable(rq);
+ sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+@@ -8722,11 +8749,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
+ if (local_group)
+ continue;
+
+- /* Check for a misfit task on the cpu */
+- if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
+- sgs->group_misfit_task_load < rq->misfit_task_load) {
+- sgs->group_misfit_task_load = rq->misfit_task_load;
+- *sg_status |= SG_OVERLOAD;
++ if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
++ /* Check for a misfit task on the cpu */
++ if (sgs->group_misfit_task_load < rq->misfit_task_load) {
++ sgs->group_misfit_task_load = rq->misfit_task_load;
++ *sg_status |= SG_OVERLOAD;
++ }
++ } else if ((env->idle != CPU_NOT_IDLE) &&
++ sched_reduced_capacity(rq, env->sd)) {
++ /* Check for a task running on a CPU with reduced capacity */
++ if (sgs->group_misfit_task_load < load)
++ sgs->group_misfit_task_load = load;
+ }
+ }
+
+@@ -8779,7 +8812,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
+ * CPUs in the group should either be possible to resolve
+ * internally or be covered by avg_load imbalance (eventually).
+ */
+- if (sgs->group_type == group_misfit_task &&
++ if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
++ (sgs->group_type == group_misfit_task) &&
+ (!capacity_greater(capacity_of(env->dst_cpu), sg->sgc->max_capacity) ||
+ sds->local_stat.group_type != group_has_spare))
+ return false;
+@@ -9222,6 +9256,77 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
+ return idlest;
+ }
+
++static void update_idle_cpu_scan(struct lb_env *env,
++ unsigned long sum_util)
++{
++ struct sched_domain_shared *sd_share;
++ int llc_weight, pct;
++ u64 x, y, tmp;
++ /*
++ * Update the number of CPUs to scan in LLC domain, which could
++ * be used as a hint in select_idle_cpu(). The update of sd_share
++ * could be expensive because it is within a shared cache line.
++ * So the write of this hint only occurs during periodic load
++ * balancing, rather than CPU_NEWLY_IDLE, because the latter
++ * can fire way more frequently than the former.
++ */
++ if (!sched_feat(SIS_UTIL) || env->idle == CPU_NEWLY_IDLE)
++ return;
++
++ llc_weight = per_cpu(sd_llc_size, env->dst_cpu);
++ if (env->sd->span_weight != llc_weight)
++ return;
++
++ sd_share = rcu_dereference(per_cpu(sd_llc_shared, env->dst_cpu));
++ if (!sd_share)
++ return;
++
++ /*
++ * The number of CPUs to search drops as sum_util increases, when
++ * sum_util hits 85% or above, the scan stops.
++ * The reason to choose 85% as the threshold is because this is the
++	 * imbalance_pct(117) when an LLC sched group is overloaded.
++ *
++ * let y = SCHED_CAPACITY_SCALE - p * x^2 [1]
++ * and y'= y / SCHED_CAPACITY_SCALE
++ *
++ * x is the ratio of sum_util compared to the CPU capacity:
++ * x = sum_util / (llc_weight * SCHED_CAPACITY_SCALE)
++ * y' is the ratio of CPUs to be scanned in the LLC domain,
++ * and the number of CPUs to scan is calculated by:
++ *
++ * nr_scan = llc_weight * y' [2]
++ *
++	 * When x hits the overloaded threshold, i.e. when
++ * x = 100 / pct, y drops to 0. According to [1],
++ * p should be SCHED_CAPACITY_SCALE * pct^2 / 10000
++ *
++ * Scale x by SCHED_CAPACITY_SCALE:
++ * x' = sum_util / llc_weight; [3]
++ *
++ * and finally [1] becomes:
++ * y = SCHED_CAPACITY_SCALE -
++ * x'^2 * pct^2 / (10000 * SCHED_CAPACITY_SCALE) [4]
++ *
++ */
++ /* equation [3] */
++ x = sum_util;
++ do_div(x, llc_weight);
++
++ /* equation [4] */
++ pct = env->sd->imbalance_pct;
++ tmp = x * x * pct * pct;
++ do_div(tmp, 10000 * SCHED_CAPACITY_SCALE);
++ tmp = min_t(long, tmp, SCHED_CAPACITY_SCALE);
++ y = SCHED_CAPACITY_SCALE - tmp;
++
++ /* equation [2] */
++ y *= llc_weight;
++ do_div(y, SCHED_CAPACITY_SCALE);
++ if ((int)y != sd_share->nr_idle_scan)
++ WRITE_ONCE(sd_share->nr_idle_scan, (int)y);
++}
++
+ /**
+ * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
+ * @env: The load balancing environment.
+@@ -9234,6 +9339,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
+ struct sched_group *sg = env->sd->groups;
+ struct sg_lb_stats *local = &sds->local_stat;
+ struct sg_lb_stats tmp_sgs;
++ unsigned long sum_util = 0;
+ int sg_status = 0;
+
+ do {
+@@ -9266,6 +9372,7 @@ next_group:
+ sds->total_load += sgs->group_load;
+ sds->total_capacity += sgs->group_capacity;
+
++ sum_util += sgs->group_util;
+ sg = sg->next;
+ } while (sg != env->sd->groups);
+
+@@ -9291,6 +9398,8 @@ next_group:
+ WRITE_ONCE(rd->overutilized, SG_OVERUTILIZED);
+ trace_sched_overutilized_tp(rd, SG_OVERUTILIZED);
+ }
++
++ update_idle_cpu_scan(env, sum_util);
+ }
+
+ #define NUMA_IMBALANCE_MIN 2
+@@ -9325,9 +9434,18 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
+ busiest = &sds->busiest_stat;
+
+ if (busiest->group_type == group_misfit_task) {
+- /* Set imbalance to allow misfit tasks to be balanced. */
+- env->migration_type = migrate_misfit;
+- env->imbalance = 1;
++ if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
++ /* Set imbalance to allow misfit tasks to be balanced. */
++ env->migration_type = migrate_misfit;
++ env->imbalance = 1;
++ } else {
++ /*
++			 * Set load imbalance to allow moving a task from a
++			 * CPU with reduced capacity.
++ */
++ env->migration_type = migrate_load;
++ env->imbalance = busiest->group_misfit_task_load;
++ }
+ return;
+ }
+
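
The block comment in update_idle_cpu_scan() above fully specifies the
fixed-point math. Folding equations [2]-[4] together gives
nr_scan = llc_weight * y / SCHED_CAPACITY_SCALE, with
y = SCHED_CAPACITY_SCALE - x'^2 * pct^2 / (10000 * SCHED_CAPACITY_SCALE)
and x' = sum_util / llc_weight. A standalone rendering of that
arithmetic (the sample numbers are invented; SCHED_CAPACITY_SCALE is
1024 as in the kernel):

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024

/* equations [2]-[4] from the update_idle_cpu_scan() comment */
static int nr_idle_scan(uint64_t sum_util, uint64_t llc_weight, uint64_t pct)
{
	uint64_t x = sum_util / llc_weight;			/* [3] */
	uint64_t tmp = x * x * pct * pct /
		       (10000ull * SCHED_CAPACITY_SCALE);
	uint64_t y;

	if (tmp > SCHED_CAPACITY_SCALE)
		tmp = SCHED_CAPACITY_SCALE;
	y = SCHED_CAPACITY_SCALE - tmp;				/* [4] */
	return (int)(y * llc_weight / SCHED_CAPACITY_SCALE);	/* [2] */
}

int main(void)
{
	/* a 16-CPU LLC with imbalance_pct = 117 */
	for (int busy = 0; busy <= 16; busy += 4)
		printf("util %2d/16 -> scan %2d CPUs\n", busy,
		       nr_idle_scan((uint64_t)busy * SCHED_CAPACITY_SCALE,
				    16, 117));
	return 0;
}

At zero utilization the whole LLC is scanned; the depth falls off
quadratically and reaches zero by roughly 85% utilization, matching
the threshold stated in the comment.
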
+diff --git a/kernel/sched/features.h b/kernel/sched/features.h
+index 1cf435bbcd9ca..ee7f23c76bd33 100644
+--- a/kernel/sched/features.h
++++ b/kernel/sched/features.h
+@@ -60,7 +60,8 @@ SCHED_FEAT(TTWU_QUEUE, true)
+ /*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+ */
+-SCHED_FEAT(SIS_PROP, true)
++SCHED_FEAT(SIS_PROP, false)
++SCHED_FEAT(SIS_UTIL, true)
+
+ /*
+ * Issue a WARN when we do multiple update_rq_clock() calls
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 8c9ed96648409..55f39c8f42032 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -480,7 +480,7 @@ static inline void rt_queue_push_tasks(struct rq *rq)
+ #endif /* CONFIG_SMP */
+
+ static void enqueue_top_rt_rq(struct rt_rq *rt_rq);
+-static void dequeue_top_rt_rq(struct rt_rq *rt_rq);
++static void dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count);
+
+ static inline int on_rt_rq(struct sched_rt_entity *rt_se)
+ {
+@@ -601,7 +601,7 @@ static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
+ rt_se = rt_rq->tg->rt_se[cpu];
+
+ if (!rt_se) {
+- dequeue_top_rt_rq(rt_rq);
++ dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
+ /* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+ cpufreq_update_util(rq_of_rt_rq(rt_rq), 0);
+ }
+@@ -687,7 +687,7 @@ static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
+
+ static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
+ {
+- dequeue_top_rt_rq(rt_rq);
++ dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
+ }
+
+ static inline int rt_rq_throttled(struct rt_rq *rt_rq)
+@@ -1089,7 +1089,7 @@ static void update_curr_rt(struct rq *rq)
+ }
+
+ static void
+-dequeue_top_rt_rq(struct rt_rq *rt_rq)
++dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count)
+ {
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+
+@@ -1100,7 +1100,7 @@ dequeue_top_rt_rq(struct rt_rq *rt_rq)
+
+ BUG_ON(!rq->nr_running);
+
+- sub_nr_running(rq, rt_rq->rt_nr_running);
++ sub_nr_running(rq, count);
+ rt_rq->rt_queued = 0;
+
+ }
+@@ -1486,18 +1486,21 @@ static void __dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flag
+ static void dequeue_rt_stack(struct sched_rt_entity *rt_se, unsigned int flags)
+ {
+ struct sched_rt_entity *back = NULL;
++ unsigned int rt_nr_running;
+
+ for_each_sched_rt_entity(rt_se) {
+ rt_se->back = back;
+ back = rt_se;
+ }
+
+- dequeue_top_rt_rq(rt_rq_of_se(back));
++ rt_nr_running = rt_rq_of_se(back)->rt_nr_running;
+
+ for (rt_se = back; rt_se; rt_se = rt_se->back) {
+ if (on_rt_rq(rt_se))
+ __dequeue_rt_entity(rt_se, flags);
+ }
++
++ dequeue_top_rt_rq(rt_rq_of_se(back), rt_nr_running);
+ }
+
+ static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 47b89a0fc6e55..7b19a72408b15 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2044,7 +2044,6 @@ static inline int task_on_rq_migrating(struct task_struct *p)
+
+ #define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
+ #define WF_MIGRATED 0x20 /* Internal use, task got migrated */
+-#define WF_ON_CPU 0x40 /* Wakee is on_cpu */
+
+ #ifdef CONFIG_SMP
+ static_assert(WF_EXEC == SD_BALANCE_EXEC);
+diff --git a/kernel/smp.c b/kernel/smp.c
+index dd215f4394264..650810a6f29b3 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -174,9 +174,9 @@ static int __init csdlock_debug(char *str)
+ if (val)
+ static_branch_enable(&csdlock_debug_enabled);
+
+- return 0;
++ return 1;
+ }
+-early_param("csdlock_debug", csdlock_debug);
++__setup("csdlock_debug=", csdlock_debug);
+
+ static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
+ static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 0ea8702eb5163..23af5eca11b14 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2311,6 +2311,7 @@ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
+
+ return !t.task ? 0 : -EINTR;
+ }
++EXPORT_SYMBOL_GPL(schedule_hrtimeout_range_clock);
+
+ /**
+ * schedule_hrtimeout_range - sleep until timeout
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 8e4b3c32fcf9d..f72b9f1de178e 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -23,6 +23,7 @@
+ #include <linux/pvclock_gtod.h>
+ #include <linux/compiler.h>
+ #include <linux/audit.h>
++#include <linux/random.h>
+
+ #include "tick-internal.h"
+ #include "ntp_internal.h"
+@@ -1343,8 +1344,10 @@ out:
+ /* Signal hrtimers about time change */
+ clock_was_set(CLOCK_SET_WALL);
+
+- if (!ret)
++ if (!ret) {
+ audit_tk_injoffset(ts_delta);
++ add_device_randomness(ts, sizeof(*ts));
++ }
+
+ return ret;
+ }
+@@ -2430,6 +2433,7 @@ int do_adjtimex(struct __kernel_timex *txc)
+ ret = timekeeping_validate_timex(txc);
+ if (ret)
+ return ret;
++ add_device_randomness(txc, sizeof(*txc));
+
+ if (txc->modes & ADJ_SETOFFSET) {
+ struct timespec64 delta;
+@@ -2447,6 +2451,7 @@ int do_adjtimex(struct __kernel_timex *txc)
+ audit_ntp_init(&ad);
+
+ ktime_get_real_ts64(&ts);
++ add_device_randomness(&ts, sizeof(ts));
+
+ raw_spin_lock_irqsave(&timekeeper_lock, flags);
+ write_seqcount_begin(&tk_core.seq);
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index fe04c6f96ca5d..b334c033cee71 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1058,7 +1058,7 @@ static void blk_add_trace_rq_remap(void *ignore, struct request *rq, dev_t dev,
+ r.sector_from = cpu_to_be64(from);
+
+ __blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
+- rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
++ req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
+ sizeof(r), &r, blk_trace_request_get_cgid(rq));
+ rcu_read_unlock();
+ }
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index b18e31ea6e664..e903e13c62e11 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -1564,7 +1564,7 @@ void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits)
+
+ /* Clear tail bits in the last element of array beyond nbits. */
+ if (nbits % 64)
+- buf[-1] &= GENMASK_ULL(nbits % 64, 0);
++ buf[-1] &= GENMASK_ULL((nbits - 1) % 64, 0);
+ }
+ EXPORT_SYMBOL(bitmap_to_arr64);
+ #endif
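
The bitmap_to_arr64() fix above is an off-by-one in the tail mask:
GENMASK_ULL(h, 0) sets h + 1 bits, so clearing everything past nbits
needs h = (nbits - 1) % 64, not nbits % 64. A self-contained
demonstration (this GENMASK_ULL is functionally equivalent to the one
in include/linux/bits.h):

#include <stdio.h>

#define GENMASK_ULL(h, l) \
	(((~0ull) << (l)) & (~0ull >> (63 - (h))))

int main(void)
{
	unsigned int nbits = 65;	/* one valid bit in the last u64 */

	/* old: GENMASK_ULL(65 % 64, 0) == 0x3 keeps a stray bit */
	printf("old mask: %#llx\n", GENMASK_ULL(nbits % 64, 0));
	/* new: GENMASK_ULL((65 - 1) % 64, 0) == 0x1 */
	printf("new mask: %#llx\n", GENMASK_ULL((nbits - 1) % 64, 0));
	return 0;
}
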
+diff --git a/lib/crypto/blake2s-selftest.c b/lib/crypto/blake2s-selftest.c
+index 409e4b7287704..7d77dea155873 100644
+--- a/lib/crypto/blake2s-selftest.c
++++ b/lib/crypto/blake2s-selftest.c
+@@ -4,6 +4,8 @@
+ */
+
+ #include <crypto/internal/blake2s.h>
++#include <linux/kernel.h>
++#include <linux/random.h>
+ #include <linux/string.h>
+
+ /*
+@@ -587,5 +589,44 @@ bool __init blake2s_selftest(void)
+ }
+ }
+
++ for (i = 0; i < 32; ++i) {
++ enum { TEST_ALIGNMENT = 16 };
++ u8 unaligned_block[BLAKE2S_BLOCK_SIZE + TEST_ALIGNMENT - 1]
++ __aligned(TEST_ALIGNMENT);
++ u8 blocks[BLAKE2S_BLOCK_SIZE * 2];
++ struct blake2s_state state1, state2;
++
++ get_random_bytes(blocks, sizeof(blocks));
++ get_random_bytes(&state, sizeof(state));
++
++#if defined(CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC) && \
++ defined(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S)
++ memcpy(&state1, &state, sizeof(state1));
++ memcpy(&state2, &state, sizeof(state2));
++ blake2s_compress(&state1, blocks, 2, BLAKE2S_BLOCK_SIZE);
++ blake2s_compress_generic(&state2, blocks, 2, BLAKE2S_BLOCK_SIZE);
++ if (memcmp(&state1, &state2, sizeof(state1))) {
++ pr_err("blake2s random compress self-test %d: FAIL\n",
++ i + 1);
++ success = false;
++ }
++#endif
++
++ memcpy(&state1, &state, sizeof(state1));
++ blake2s_compress(&state1, blocks, 1, BLAKE2S_BLOCK_SIZE);
++ for (l = 1; l < TEST_ALIGNMENT; ++l) {
++ memcpy(unaligned_block + l, blocks,
++ BLAKE2S_BLOCK_SIZE);
++ memcpy(&state2, &state, sizeof(state2));
++ blake2s_compress(&state2, unaligned_block + l, 1,
++ BLAKE2S_BLOCK_SIZE);
++ if (memcmp(&state1, &state2, sizeof(state1))) {
++ pr_err("blake2s random compress align %d self-test %d: FAIL\n",
++ l, i + 1);
++ success = false;
++ }
++ }
++ }
++
+ return success;
+ }
+diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c
+index c71c09621c09c..98e688c6d8910 100644
+--- a/lib/crypto/blake2s.c
++++ b/lib/crypto/blake2s.c
+@@ -16,16 +16,44 @@
+ #include <linux/init.h>
+ #include <linux/bug.h>
+
++static inline void blake2s_set_lastblock(struct blake2s_state *state)
++{
++ state->f[0] = -1;
++}
++
+ void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen)
+ {
+- __blake2s_update(state, in, inlen, false);
++ const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
++
++ if (unlikely(!inlen))
++ return;
++ if (inlen > fill) {
++ memcpy(state->buf + state->buflen, in, fill);
++ blake2s_compress(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
++ state->buflen = 0;
++ in += fill;
++ inlen -= fill;
++ }
++ if (inlen > BLAKE2S_BLOCK_SIZE) {
++ const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
++ blake2s_compress(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
++ in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
++ inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
++ }
++ memcpy(state->buf + state->buflen, in, inlen);
++ state->buflen += inlen;
+ }
+ EXPORT_SYMBOL(blake2s_update);
+
+ void blake2s_final(struct blake2s_state *state, u8 *out)
+ {
+ WARN_ON(IS_ENABLED(DEBUG) && !out);
+- __blake2s_final(state, out, false);
++ blake2s_set_lastblock(state);
++ memset(state->buf + state->buflen, 0,
++ BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
++ blake2s_compress(state, state->buf, 1, state->buflen);
++ cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
++ memcpy(out, state->h, state->outlen);
+ memzero_explicit(state, sizeof(*state));
+ }
+ EXPORT_SYMBOL(blake2s_final);
+@@ -38,12 +66,7 @@ static int __init blake2s_mod_init(void)
+ return 0;
+ }
+
+-static void __exit blake2s_mod_exit(void)
+-{
+-}
+-
+ module_init(blake2s_mod_init);
+-module_exit(blake2s_mod_exit);
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("BLAKE2s hash function");
+ MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
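
The blake2s.c hunk open-codes the former __blake2s_update() helper. The shape is the standard streaming-hash update: top up and flush the internal block buffer, bulk-compress whole blocks straight from the input, and always leave at least one byte buffered so finalization has a last block to pad. A self-contained sketch of that buffering logic with a stub compress function (block size and names illustrative):

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 64

    struct stream_state {
        unsigned char buf[BLOCK_SIZE];
        size_t buflen;
        unsigned long blocks_compressed; /* stands in for the real hash state */
    };

    /* Stub for blake2s_compress(): just counts blocks for illustration. */
    static void compress_blocks(struct stream_state *s, const unsigned char *in,
                                size_t nblocks)
    {
        (void)in;
        s->blocks_compressed += nblocks;
    }

    /*
     * Same shape as the open-coded blake2s_update() above: fill and flush
     * the buffer, compress whole blocks from the input, and always keep
     * at least one byte buffered for final-block padding.
     */
    static void stream_update(struct stream_state *s,
                              const unsigned char *in, size_t inlen)
    {
        const size_t fill = BLOCK_SIZE - s->buflen;

        if (!inlen)
            return;
        if (inlen > fill) {
            memcpy(s->buf + s->buflen, in, fill);
            compress_blocks(s, s->buf, 1);
            s->buflen = 0;
            in += fill;
            inlen -= fill;
        }
        if (inlen > BLOCK_SIZE) {
            const size_t nblocks = (inlen + BLOCK_SIZE - 1) / BLOCK_SIZE;

            compress_blocks(s, in, nblocks - 1);
            in += BLOCK_SIZE * (nblocks - 1);
            inlen -= BLOCK_SIZE * (nblocks - 1);
        }
        memcpy(s->buf + s->buflen, in, inlen);
        s->buflen += inlen;
    }

    int main(void)
    {
        struct stream_state s = { .buflen = 0 };
        unsigned char data[300] = { 0 };

        stream_update(&s, data, sizeof(data)); /* 4 blocks compressed, 44 buffered */
        printf("compressed %lu blocks, %zu bytes buffered\n",
               s.blocks_compressed, s.buflen);
        return 0;
    }

Note that the `inlen > BLOCK_SIZE` test (strict) and the `nblocks - 1` compress are what guarantee the last block is never consumed early.
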
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 0b64695ab632f..2bf20b48a04ad 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -689,6 +689,7 @@ static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
+ struct pipe_inode_info *pipe = i->pipe;
+ unsigned int p_mask = pipe->ring_size - 1;
+ unsigned int i_head;
++ unsigned int valid = pipe->head;
+ size_t n, off, xfer = 0;
+
+ if (!sanity(i))
+@@ -702,11 +703,17 @@ static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
+ rem = copy_mc_to_kernel(p + off, addr + xfer, chunk);
+ chunk -= rem;
+ kunmap_local(p);
+- i->head = i_head;
+- i->iov_offset = off + chunk;
+- xfer += chunk;
+- if (rem)
++ if (chunk) {
++ i->head = i_head;
++ i->iov_offset = off + chunk;
++ xfer += chunk;
++ valid = i_head + 1;
++ }
++ if (rem) {
++ pipe->bufs[i_head & p_mask].len -= rem;
++ pipe_discard_from(pipe, valid);
+ break;
++ }
+ n -= chunk;
+ off = 0;
+ i_head++;
+diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
+index 96f96e42ce062..16fb88c0aca31 100644
+--- a/lib/kunit/executor.c
++++ b/lib/kunit/executor.c
+@@ -76,8 +76,10 @@ kunit_filter_tests(struct kunit_suite *const suite, const char *test_glob)
+ memcpy(copy, suite, sizeof(*copy));
+
+ filtered = kcalloc(n + 1, sizeof(*filtered), GFP_KERNEL);
+- if (!filtered)
++ if (!filtered) {
++ kfree(copy);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ n = 0;
+ kunit_suite_for_each_test_case(suite, test_case) {
+diff --git a/lib/livepatch/test_klp_callbacks_busy.c b/lib/livepatch/test_klp_callbacks_busy.c
+index 7ac845f65be56..133929e0ce8ff 100644
+--- a/lib/livepatch/test_klp_callbacks_busy.c
++++ b/lib/livepatch/test_klp_callbacks_busy.c
+@@ -16,10 +16,12 @@ MODULE_PARM_DESC(block_transition, "block_transition (default=false)");
+
+ static void busymod_work_func(struct work_struct *work);
+ static DECLARE_WORK(work, busymod_work_func);
++static DECLARE_COMPLETION(busymod_work_started);
+
+ static void busymod_work_func(struct work_struct *work)
+ {
+ pr_info("%s enter\n", __func__);
++ complete(&busymod_work_started);
+
+ while (READ_ONCE(block_transition)) {
+ /*
+@@ -37,6 +39,12 @@ static int test_klp_callbacks_busy_init(void)
+ pr_info("%s\n", __func__);
+ schedule_work(&work);
+
++ /*
++ * To synchronize kernel messages, hold the init function from
++ * exiting until the work function's entry message has printed.
++ */
++ wait_for_completion(&busymod_work_started);
++
+ if (!block_transition) {
+ /*
+ * Serialize output: print all messages from the work
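
The test_klp_callbacks_busy fix above is the usual completion-based ordering idiom: the init path blocks in wait_for_completion() until the work function has called complete(), so the work's entry message is guaranteed to print first. A rough userspace analogue built on pthreads (the kernel's struct completion is not literally this, but the semantics match; build with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Minimal userspace analogue of a kernel completion. */
    struct completion {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        int done;
    };

    static struct completion work_started = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
    };

    static void complete(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        c->done = 1;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
    }

    static void wait_for_completion(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->done)
            pthread_cond_wait(&c->cond, &c->lock);
        pthread_mutex_unlock(&c->lock);
    }

    static void *work_func(void *arg)
    {
        (void)arg;
        printf("work: enter\n");            /* guaranteed to print first */
        complete(&work_started);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, work_func, NULL);
        wait_for_completion(&work_started); /* init blocks until the work ran */
        printf("init: work has started\n");
        pthread_join(t, NULL);
        return 0;
    }
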
+diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
+index 475f0c064bf65..7e3e43679b73c 100644
+--- a/lib/overflow_kunit.c
++++ b/lib/overflow_kunit.c
+@@ -91,6 +91,7 @@ DEFINE_TEST_ARRAY(u32) = {
+ {-4U, 5U, 1U, -9U, -20U, true, false, true},
+ };
+
++#if BITS_PER_LONG == 64
+ DEFINE_TEST_ARRAY(u64) = {
+ {0, 0, 0, 0, 0, false, false, false},
+ {1, 1, 2, 0, 1, false, false, false},
+@@ -114,6 +115,7 @@ DEFINE_TEST_ARRAY(u64) = {
+ false, true, false},
+ {-15ULL, 10ULL, -5ULL, -25ULL, -150ULL, false, false, true},
+ };
++#endif
+
+ DEFINE_TEST_ARRAY(s8) = {
+ {0, 0, 0, 0, 0, false, false, false},
+@@ -188,6 +190,8 @@ DEFINE_TEST_ARRAY(s32) = {
+ {S32_MIN, S32_MIN, 0, 0, 0, true, false, true},
+ {S32_MAX, S32_MAX, -2, 0, 1, true, false, true},
+ };
++
++#if BITS_PER_LONG == 64
+ DEFINE_TEST_ARRAY(s64) = {
+ {0, 0, 0, 0, 0, false, false, false},
+
+@@ -216,6 +220,7 @@ DEFINE_TEST_ARRAY(s64) = {
+ {-128, -1, -129, -127, 128, false, false, false},
+ {0, -S64_MAX, -S64_MAX, S64_MAX, 0, false, false, false},
+ };
++#endif
+
+ #define check_one_op(t, fmt, op, sym, a, b, r, of) do { \
+ t _r; \
+@@ -650,6 +655,7 @@ static struct kunit_case overflow_test_cases[] = {
+ KUNIT_CASE(s16_overflow_test),
+ KUNIT_CASE(u32_overflow_test),
+ KUNIT_CASE(s32_overflow_test),
++/* Clang 13 and earlier generate unwanted libcalls on 32-bit. */
+ #if BITS_PER_LONG == 64
+ KUNIT_CASE(u64_overflow_test),
+ KUNIT_CASE(s64_overflow_test),
+diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
+index 046ac6297c781..a2bb7738c373c 100644
+--- a/lib/smp_processor_id.c
++++ b/lib/smp_processor_id.c
+@@ -47,9 +47,9 @@ unsigned int check_preemption_disabled(const char *what1, const char *what2)
+
+ printk("caller is %pS\n", __builtin_return_address(0));
+ dump_stack();
+- instrumentation_end();
+
+ out_enable:
++ instrumentation_end();
+ preempt_enable_no_resched_notrace();
+ out:
+ return this_cpu;
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index 2a7836e115b4e..5820704165a64 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -14733,9 +14733,9 @@ static struct skb_segment_test skb_segment_tests[] __initconst = {
+ .build_skb = build_test_skb_linear_no_head_frag,
+ .features = NETIF_F_SG | NETIF_F_FRAGLIST |
+ NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_GSO |
+- NETIF_F_LLTX_BIT | NETIF_F_GRO |
++ NETIF_F_LLTX | NETIF_F_GRO |
+ NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |
+- NETIF_F_HW_VLAN_STAG_TX_BIT
++ NETIF_F_HW_VLAN_STAG_TX
+ }
+ };
+
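
The test_bpf fix above corrects a classic flags mix-up: NETIF_F_*_BIT names are bit *indices*, while the plain NETIF_F_* names are the corresponding *masks*, so OR-ing an index into a feature mask sets unrelated low bits. Illustrative stand-ins (the real kernel index values differ):

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative stand-ins: *_BIT is an index, the plain name is a mask. */
    enum { FEAT_GRO_BIT = 14, FEAT_LLTX_BIT = 12 };
    #define FEAT_GRO  (1ULL << FEAT_GRO_BIT)
    #define FEAT_LLTX (1ULL << FEAT_LLTX_BIT)

    int main(void)
    {
        /* Buggy: ORs the index 12 (binary 1100) in, i.e. sets bits 2 and 3. */
        uint64_t buggy = FEAT_GRO | FEAT_LLTX_BIT;
        /* Fixed: ORs the mask in, i.e. sets bit 12 as intended. */
        uint64_t fixed = FEAT_GRO | FEAT_LLTX;

        printf("buggy=%#llx fixed=%#llx\n",
               (unsigned long long)buggy, (unsigned long long)fixed);
        return 0;
    }
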
+diff --git a/lib/test_hmm.c b/lib/test_hmm.c
+index cfe6320478391..f2c3015c5c82c 100644
+--- a/lib/test_hmm.c
++++ b/lib/test_hmm.c
+@@ -732,7 +732,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
+
+ mmap_read_lock(mm);
+ for (addr = start; addr < end; addr = next) {
+- unsigned long mapped;
++ unsigned long mapped = 0;
+ int i;
+
+ if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
+@@ -741,7 +741,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
+ next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
+
+ ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
+- mapped = dmirror_atomic_map(addr, next, pages, dmirror);
++ /*
++ * Do dmirror_atomic_map() iff all pages are marked for
++ * exclusive access to avoid accessing uninitialized
++ * fields of pages.
++ */
++ if (ret == (next - addr) >> PAGE_SHIFT)
++ mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+ for (i = 0; i < ret; i++) {
+ if (pages[i]) {
+ unlock_page(pages[i]);
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index c233b1a4e9849..58c1b01ccfe20 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -131,6 +131,7 @@ static void kmalloc_oob_right(struct kunit *test)
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ /*
+ * An unaligned access past the requested kmalloc size.
+ * Only generic KASAN can precisely detect these.
+@@ -159,6 +160,7 @@ static void kmalloc_oob_left(struct kunit *test)
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
+ kfree(ptr);
+ }
+@@ -171,6 +173,7 @@ static void kmalloc_node_oob_right(struct kunit *test)
+ ptr = kmalloc_node(size, GFP_KERNEL, 0);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+ kfree(ptr);
+ }
+@@ -191,6 +194,7 @@ static void kmalloc_pagealloc_oob_right(struct kunit *test)
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + OOB_TAG_OFF] = 0);
+
+ kfree(ptr);
+@@ -271,6 +275,7 @@ static void kmalloc_large_oob_right(struct kunit *test)
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0);
+ kfree(ptr);
+ }
+@@ -410,6 +415,8 @@ static void kmalloc_oob_16(struct kunit *test)
+ ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
+
++ OPTIMIZER_HIDE_VAR(ptr1);
++ OPTIMIZER_HIDE_VAR(ptr2);
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
+ kfree(ptr1);
+ kfree(ptr2);
+@@ -756,6 +763,8 @@ static void ksize_unpoisons_memory(struct kunit *test)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ real_size = ksize(ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
++
+ /* This access shouldn't trigger a KASAN report. */
+ ptr[size] = 'x';
+
+@@ -778,6 +787,7 @@ static void ksize_uaf(struct kunit *test)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ kfree(ptr);
+
++ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
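
All the test_kasan hunks above insert OPTIMIZER_HIDE_VAR(ptr) before intentional bad accesses. The kernel macro is essentially an empty asm that launders the variable, so the compiler can no longer prove the access is out of bounds and delete or fold it away, which would make the test pass vacuously. A compilable sketch of the same trick (macro re-defined locally, GCC/Clang asm syntax):

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * An empty asm that "rewrites" the variable: the optimizer loses track
     * of where the pointer came from and must emit the bad access as-is.
     */
    #define OPTIMIZER_HIDE_VAR(var) \
        __asm__ volatile("" : "=r"(var) : "0"(var))

    int main(void)
    {
        volatile char sink;
        char *ptr = malloc(16);

        if (!ptr)
            return 1;
        OPTIMIZER_HIDE_VAR(ptr);
        sink = ptr[16]; /* out-of-bounds read a sanitizer should now catch */
        (void)sink;
        free(ptr);
        return 0;
    }
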
+diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
+index 4b07c29effe97..0b3c7396cb90a 100644
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -441,8 +441,10 @@ static int __init damon_reclaim_init(void)
+ if (!ctx)
+ return -ENOMEM;
+
+- if (damon_select_ops(ctx, DAMON_OPS_PADDR))
++ if (damon_select_ops(ctx, DAMON_OPS_PADDR)) {
++ damon_destroy_ctx(ctx);
+ return -EINVAL;
++ }
+
+ ctx->callback.after_wmarks_check = damon_reclaim_after_wmarks_check;
+ ctx->callback.after_aggregation = damon_reclaim_after_aggregation;
+diff --git a/mm/gup.c b/mm/gup.c
+index e2a39e30756d5..fd3262ae92fc2 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1900,7 +1900,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
+ * Try to move out any movable page before pinning the range.
+ */
+ if (folio_test_hugetlb(folio)) {
+- if (!isolate_huge_page(&folio->page,
++ if (isolate_hugetlb(&folio->page,
+ &movable_page_list))
+ isolation_error_count++;
+ continue;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 834f288b37690..15965084816d3 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -18,6 +18,7 @@
+ #include <linux/shrinker.h>
+ #include <linux/mm_inline.h>
+ #include <linux/swapops.h>
++#include <linux/backing-dev.h>
+ #include <linux/dax.h>
+ #include <linux/khugepaged.h>
+ #include <linux/freezer.h>
+@@ -2440,11 +2441,15 @@ static void __split_huge_page(struct page *page, struct list_head *list,
+ __split_huge_page_tail(head, i, lruvec, list);
+ /* Some pages can be beyond EOF: drop them from page cache */
+ if (head[i].index >= end) {
+- ClearPageDirty(head + i);
+- __delete_from_page_cache(head + i, NULL);
++ struct folio *tail = page_folio(head + i);
++
+ if (shmem_mapping(head->mapping))
+ shmem_uncharge(head->mapping->host, 1);
+- put_page(head + i);
++ else if (folio_test_clear_dirty(tail))
++ folio_account_cleaned(tail,
++ inode_to_wb(folio->mapping->host));
++ __filemap_remove_folio(tail, NULL);
++ folio_put(tail);
+ } else if (!PageAnon(page)) {
+ __xa_store(&head->mapping->i_pages, head[i].index,
+ head + i, 0);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a18c071c294e3..474bfbe9929e1 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2766,8 +2766,7 @@ retry:
+ * Fail with -EBUSY if not possible.
+ */
+ spin_unlock_irq(&hugetlb_lock);
+- if (!isolate_huge_page(old_page, list))
+- ret = -EBUSY;
++ ret = isolate_hugetlb(old_page, list);
+ spin_lock_irq(&hugetlb_lock);
+ goto free_new;
+ } else if (!HPageFreed(old_page)) {
+@@ -2843,7 +2842,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
+ if (hstate_is_gigantic(h))
+ return -ENOMEM;
+
+- if (page_count(head) && isolate_huge_page(head, list))
++ if (page_count(head) && !isolate_hugetlb(head, list))
+ ret = 0;
+ else if (!page_count(head))
+ ret = alloc_and_dissolve_huge_page(h, head, list);
+@@ -5708,7 +5707,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ */
+ entry = huge_ptep_get(ptep);
+ if (unlikely(is_hugetlb_entry_migration(entry))) {
+- migration_entry_wait_huge(vma, mm, ptep);
++ migration_entry_wait_huge(vma, ptep);
+ return 0;
+ } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+ return VM_FAULT_HWPOISON_LARGE |
+@@ -6934,7 +6933,7 @@ retry:
+ } else {
+ if (is_hugetlb_entry_migration(pte)) {
+ spin_unlock(ptl);
+- __migration_entry_wait(mm, (pte_t *)pmd, ptl);
++ __migration_entry_wait_huge((pte_t *)pmd, ptl);
+ goto retry;
+ }
+ /*
+@@ -6966,15 +6965,15 @@ follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int fla
+ return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
+ }
+
+-bool isolate_huge_page(struct page *page, struct list_head *list)
++int isolate_hugetlb(struct page *page, struct list_head *list)
+ {
+- bool ret = true;
++ int ret = 0;
+
+ spin_lock_irq(&hugetlb_lock);
+ if (!PageHeadHuge(page) ||
+ !HPageMigratable(page) ||
+ !get_page_unless_zero(page)) {
+- ret = false;
++ ret = -EBUSY;
+ goto unlock;
+ }
+ ClearHPageMigratable(page);
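
The isolate_huge_page() -> isolate_hugetlb() rename above also flips the return convention from bool-true-on-success to the kernel's usual 0-on-success/-EBUSY, which is why every call site in the surrounding hunks inverts its truth test. A toy illustration of that flip (hypothetical helpers, not the mm code):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Old convention: true on success. */
    static bool isolate_old(bool can_isolate) { return can_isolate; }
    /* New convention: 0 on success, -EBUSY on failure. */
    static int isolate_new(bool can_isolate) { return can_isolate ? 0 : -EBUSY; }

    int main(void)
    {
        bool can_isolate = false; /* simulate a page that cannot be isolated */

        if (!isolate_old(can_isolate))    /* failure test, old style */
            printf("old API: isolation failed\n");
        if (isolate_new(can_isolate))     /* failure test, new style */
            printf("new API: isolation failed (%d)\n", isolate_new(can_isolate));
        return 0;
    }
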
+diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
+index f9942841df18b..c86691c431fd7 100644
+--- a/mm/hugetlb_cgroup.c
++++ b/mm/hugetlb_cgroup.c
+@@ -772,6 +772,7 @@ static void __init __hugetlb_cgroup_file_dfl_init(int idx)
+ /* Add the numa stat file */
+ cft = &h->cgroup_files_dfl[6];
+ snprintf(cft->name, MAX_CFTYPE_NAME, "%s.numa_stat", buf);
++ cft->private = MEMFILE_PRIVATE(idx, 0);
+ cft->seq_show = hugetlb_cgroup_read_numa_stat;
+ cft->flags = CFTYPE_NOT_ON_ROOT;
+
+diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
+index 9e1b6544bfa8e..9ad8eff71b28d 100644
+--- a/mm/kasan/hw_tags.c
++++ b/mm/kasan/hw_tags.c
+@@ -257,27 +257,37 @@ static void unpoison_vmalloc_pages(const void *addr, u8 tag)
+ }
+ }
+
++static void init_vmalloc_pages(const void *start, unsigned long size)
++{
++ const void *addr;
++
++ for (addr = start; addr < start + size; addr += PAGE_SIZE) {
++ struct page *page = virt_to_page(addr);
++
++ clear_highpage_kasan_tagged(page);
++ }
++}
++
+ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ kasan_vmalloc_flags_t flags)
+ {
+ u8 tag;
+ unsigned long redzone_start, redzone_size;
+
+- if (!kasan_vmalloc_enabled())
+- return (void *)start;
+-
+- if (!is_vmalloc_or_module_addr(start))
++ if (!kasan_vmalloc_enabled() || !is_vmalloc_or_module_addr(start)) {
++ if (flags & KASAN_VMALLOC_INIT)
++ init_vmalloc_pages(start, size);
+ return (void *)start;
++ }
+
+ /*
+- * Skip unpoisoning and assigning a pointer tag for non-VM_ALLOC
+- * mappings as:
++ * Don't tag non-VM_ALLOC mappings, as:
+ *
+ * 1. Unlike the software KASAN modes, hardware tag-based KASAN only
+ * supports tagging physical memory. Therefore, it can only tag a
+ * single mapping of normal physical pages.
+ * 2. Hardware tag-based KASAN can only tag memory mapped with special
+- * mapping protection bits, see arch_vmalloc_pgprot_modify().
++ * mapping protection bits, see arch_vmap_pgprot_tagged().
+ * As non-VM_ALLOC mappings can be mapped outside of vmalloc code,
+ * providing these bits would require tracking all non-VM_ALLOC
+ * mappers.
+@@ -289,15 +299,19 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ *
+ * For non-VM_ALLOC allocations, page_alloc memory is tagged as usual.
+ */
+- if (!(flags & KASAN_VMALLOC_VM_ALLOC))
++ if (!(flags & KASAN_VMALLOC_VM_ALLOC)) {
++ WARN_ON(flags & KASAN_VMALLOC_INIT);
+ return (void *)start;
++ }
+
+ /*
+ * Don't tag executable memory.
+ * The kernel doesn't tolerate having the PC register tagged.
+ */
+- if (!(flags & KASAN_VMALLOC_PROT_NORMAL))
++ if (!(flags & KASAN_VMALLOC_PROT_NORMAL)) {
++ WARN_ON(flags & KASAN_VMALLOC_INIT);
+ return (void *)start;
++ }
+
+ tag = kasan_random_tag();
+ start = set_tag(start, tag);
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index da39ec8afca85..845369f839e19 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -2178,7 +2178,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
+ bool lru = PageLRU(page);
+
+ if (PageHuge(page)) {
+- isolated = isolate_huge_page(page, pagelist);
++ isolated = !isolate_hugetlb(page, pagelist);
+ } else {
+ if (lru)
+ isolated = !isolate_lru_page(page);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 1213d0c67a535..649a50ed90f3d 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1643,7 +1643,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+
+ if (PageHuge(page)) {
+ pfn = page_to_pfn(head) + compound_nr(head) - 1;
+- isolate_huge_page(head, &source);
++ isolate_hugetlb(head, &source);
+ continue;
+ } else if (PageTransHuge(page))
+ pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index d39b01fd52fe4..f4cd963550c1c 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -602,7 +602,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+ /* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
+ if (flags & (MPOL_MF_MOVE_ALL) ||
+ (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
+- if (!isolate_huge_page(page, qp->pagelist) &&
++ if (isolate_hugetlb(page, qp->pagelist) &&
+ (flags & MPOL_MF_STRICT))
+ /*
+ * Failed to isolate page but allow migrating pages
+@@ -1388,7 +1388,7 @@ static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
+ unsigned long bits = min_t(unsigned long, maxnode, BITS_PER_LONG);
+ unsigned long t;
+
+- if (get_bitmap(&t, &nmask[maxnode / BITS_PER_LONG], bits))
++ if (get_bitmap(&t, &nmask[(maxnode - 1) / BITS_PER_LONG], bits))
+ return -EFAULT;
+
+ if (maxnode - bits >= MAX_NUMNODES) {
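
The get_nodes() change above is an off-by-one in word indexing: a user bitmap of maxnode bits spans ceil(maxnode / BITS_PER_LONG) words, so its last word sits at index (maxnode - 1) / BITS_PER_LONG. Whenever maxnode is an exact multiple of BITS_PER_LONG, the old expression indexed one word past the buffer. Worked numbers:

    #include <stdio.h>

    #define BITS_PER_LONG 64

    int main(void)
    {
        /* A user bitmap of maxnode bits occupies ceil(maxnode / 64) words. */
        unsigned long maxnode = 128;
        unsigned long nwords = (maxnode + BITS_PER_LONG - 1) / BITS_PER_LONG;

        unsigned long old_idx = maxnode / BITS_PER_LONG;       /* 2: past the end */
        unsigned long new_idx = (maxnode - 1) / BITS_PER_LONG; /* 1: the last word */

        printf("words=%lu old_idx=%lu new_idx=%lu\n", nwords, old_idx, new_idx);
        return 0;
    }
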
+diff --git a/mm/memremap.c b/mm/memremap.c
+index 745eea0f99c39..2bdb668548320 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -141,10 +141,10 @@ void memunmap_pages(struct dev_pagemap *pgmap)
+ for (i = 0; i < pgmap->nr_range; i++)
+ percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
+ wait_for_completion(&pgmap->done);
+- percpu_ref_exit(&pgmap->ref);
+
+ for (i = 0; i < pgmap->nr_range; i++)
+ pageunmap_range(pgmap, i);
++ percpu_ref_exit(&pgmap->ref);
+
+ WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
+ devmap_managed_enable_put(pgmap);
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 6c1ea61f39d80..a480f54016b33 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -133,7 +133,7 @@ static void putback_movable_page(struct page *page)
+ *
+ * This function shall be used whenever the isolated pageset has been
+ * built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
+- * and isolate_huge_page().
++ * and isolate_hugetlb().
+ */
+ void putback_movable_pages(struct list_head *l)
+ {
+@@ -315,13 +315,28 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+ __migration_entry_wait(mm, ptep, ptl);
+ }
+
+-void migration_entry_wait_huge(struct vm_area_struct *vma,
+- struct mm_struct *mm, pte_t *pte)
++#ifdef CONFIG_HUGETLB_PAGE
++void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl)
+ {
+- spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), mm, pte);
+- __migration_entry_wait(mm, pte, ptl);
++ pte_t pte;
++
++ spin_lock(ptl);
++ pte = huge_ptep_get(ptep);
++
++ if (unlikely(!is_hugetlb_entry_migration(pte)))
++ spin_unlock(ptl);
++ else
++ migration_entry_wait_on_locked(pte_to_swp_entry(pte), NULL, ptl);
+ }
+
++void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte)
++{
++ spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, pte);
++
++ __migration_entry_wait_huge(pte, ptl);
++}
++#endif
++
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
+ {
+@@ -1633,8 +1648,9 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
+
+ if (PageHuge(page)) {
+ if (PageHead(page)) {
+- isolate_huge_page(page, pagelist);
+- err = 1;
++ err = isolate_hugetlb(page, pagelist);
++ if (!err)
++ err = 1;
+ }
+ } else {
+ struct page *head;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 61e6135c54ef6..7c59ec73acc34 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1894,7 +1894,6 @@ unmap_and_free_vma:
+
+ /* Undo any partial mapping done by a device driver. */
+ unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
+- charged = 0;
+ if (vm_flags & VM_SHARED)
+ mapping_unmap_writable(file->f_mapping);
+ free_vma:
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index b5b14b78c4fd4..cdf0e7d707c37 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1302,12 +1302,8 @@ static void kernel_init_free_pages(struct page *page, int numpages)
+
+ /* s390's use of memset() could override KASAN redzones. */
+ kasan_disable_current();
+- for (i = 0; i < numpages; i++) {
+- u8 tag = page_kasan_tag(page + i);
+- page_kasan_tag_reset(page + i);
+- clear_highpage(page + i);
+- page_kasan_tag_set(page + i, tag);
+- }
++ for (i = 0; i < numpages; i++)
++ clear_highpage_kasan_tagged(page + i);
+ kasan_enable_current();
+ }
+
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 3633eeefaa0db..27697b2429c2e 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -3104,7 +3104,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
+ goto out_free_areas;
+ }
+ /* kmemleak tracks the percpu allocations separately */
+- kmemleak_free(ptr);
++ kmemleak_ignore_phys(__pa(ptr));
+ areas[group] = ptr;
+
+ base = min(ptr, base);
+@@ -3304,7 +3304,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
+ goto enomem;
+ }
+ /* kmemleak tracks the percpu allocations separately */
+- kmemleak_free(ptr);
++ kmemleak_ignore_phys(__pa(ptr));
+ pages[j++] = virt_to_page(ptr);
+ }
+ }
+@@ -3417,7 +3417,7 @@ void __init setup_per_cpu_areas(void)
+ if (!ai || !fc)
+ panic("Failed to allocate memory for percpu areas.");
+ /* kmemleak tracks the percpu allocations separately */
+- kmemleak_free(fc);
++ kmemleak_ignore_phys(__pa(fc));
+
+ ai->dyn_size = unit_size;
+ ai->unit_size = unit_size;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index effd1ff6a4b41..a1ab9b472571c 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3168,15 +3168,15 @@ again:
+
+ /*
+ * Mark the pages as accessible, now that they are mapped.
+- * The init condition should match the one in post_alloc_hook()
+- * (except for the should_skip_init() check) to make sure that memory
+- * is initialized under the same conditions regardless of the enabled
+- * KASAN mode.
++ * The condition for setting KASAN_VMALLOC_INIT should complement the
++ * one in post_alloc_hook() with regards to the __GFP_SKIP_ZERO check
++ * to make sure that memory is initialized under the same conditions.
+ * Tag-based KASAN modes only assign tags to normal non-executable
+ * allocations, see __kasan_unpoison_vmalloc().
+ */
+ kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
+- if (!want_init_on_free() && want_init_on_alloc(gfp_mask))
++ if (!want_init_on_free() && want_init_on_alloc(gfp_mask) &&
++ (gfp_mask & __GFP_SKIP_ZERO))
+ kasan_flags |= KASAN_VMALLOC_INIT;
+ /* KASAN_VMALLOC_PROT_NORMAL already set if required. */
+ area->addr = kasan_unpoison_vmalloc(area->addr, real_size, kasan_flags);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 8bba0d9cf9754..87cde948f628e 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -305,7 +305,7 @@ p9_tag_alloc(struct p9_client *c, int8_t type, unsigned int max_size)
+ * callback), so p9_client_cb eats the second ref there
+ * as the pointer is duplicated directly by virtqueue_add_sgs()
+ */
+- refcount_set(&req->refcount.refcount, 2);
++ refcount_set(&req->refcount, 2);
+
+ return req;
+
+@@ -341,7 +341,7 @@ again:
+ if (!p9_req_try_get(req))
+ goto again;
+ if (req->tc.tag != tag) {
+- p9_req_put(req);
++ p9_req_put(c, req);
+ goto again;
+ }
+ }
+@@ -367,21 +367,18 @@ static int p9_tag_remove(struct p9_client *c, struct p9_req_t *r)
+ spin_lock_irqsave(&c->lock, flags);
+ idr_remove(&c->reqs, tag);
+ spin_unlock_irqrestore(&c->lock, flags);
+- return p9_req_put(r);
++ return p9_req_put(c, r);
+ }
+
+-static void p9_req_free(struct kref *ref)
++int p9_req_put(struct p9_client *c, struct p9_req_t *r)
+ {
+- struct p9_req_t *r = container_of(ref, struct p9_req_t, refcount);
+-
+- p9_fcall_fini(&r->tc);
+- p9_fcall_fini(&r->rc);
+- kmem_cache_free(p9_req_cache, r);
+-}
+-
+-int p9_req_put(struct p9_req_t *r)
+-{
+- return kref_put(&r->refcount, p9_req_free);
++ if (refcount_dec_and_test(&r->refcount)) {
++ p9_fcall_fini(&r->tc);
++ p9_fcall_fini(&r->rc);
++ kmem_cache_free(p9_req_cache, r);
++ return 1;
++ }
++ return 0;
+ }
+ EXPORT_SYMBOL(p9_req_put);
+
+@@ -426,7 +423,7 @@ void p9_client_cb(struct p9_client *c, struct p9_req_t *req, int status)
+
+ wake_up(&req->wq);
+ p9_debug(P9_DEBUG_MUX, "wakeup: %d\n", req->tc.tag);
+- p9_req_put(req);
++ p9_req_put(c, req);
+ }
+ EXPORT_SYMBOL(p9_client_cb);
+
+@@ -709,7 +706,7 @@ static struct p9_req_t *p9_client_prepare_req(struct p9_client *c,
+ reterr:
+ p9_tag_remove(c, req);
+ /* We have to put also the 2nd reference as it won't be used */
+- p9_req_put(req);
++ p9_req_put(c, req);
+ return ERR_PTR(err);
+ }
+
+@@ -746,7 +743,7 @@ p9_client_rpc(struct p9_client *c, int8_t type, const char *fmt, ...)
+ err = c->trans_mod->request(c, req);
+ if (err < 0) {
+ /* write won't happen */
+- p9_req_put(req);
++ p9_req_put(c, req);
+ if (err != -ERESTARTSYS && err != -EFAULT)
+ c->status = Disconnected;
+ goto recalc_sigpending;
+@@ -889,16 +886,13 @@ static struct p9_fid *p9_fid_create(struct p9_client *clnt)
+ struct p9_fid *fid;
+
+ p9_debug(P9_DEBUG_FID, "clnt %p\n", clnt);
+- fid = kmalloc(sizeof(*fid), GFP_KERNEL);
++ fid = kzalloc(sizeof(*fid), GFP_KERNEL);
+ if (!fid)
+ return NULL;
+
+- memset(&fid->qid, 0, sizeof(fid->qid));
+ fid->mode = -1;
+ fid->uid = current_fsuid();
+ fid->clnt = clnt;
+- fid->rdir = NULL;
+- fid->fid = 0;
+ refcount_set(&fid->count, 1);
+
+ idr_preload(GFP_KERNEL);
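
The 9p client rework above drops struct kref for a bare refcount_t because the free path needs the client pointer, which a kref_put() release callback cannot be handed; p9_req_put() therefore open-codes refcount_dec_and_test() and frees inline. A userspace sketch of the resulting shape using C11 atomics (names illustrative):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct client { const char *name; };

    struct req { atomic_int refcount; };

    /*
     * Open-coded put, mirroring the p9_req_put() rework: unlike a kref_put()
     * release callback, it can take extra context (the client) directly.
     */
    static int req_put(struct client *c, struct req *r)
    {
        if (atomic_fetch_sub(&r->refcount, 1) == 1) { /* dropped to zero */
            printf("freeing request owned by client %s\n", c->name);
            free(r);
            return 1;
        }
        return 0;
    }

    int main(void)
    {
        struct client c = { "demo" };
        struct req *r = malloc(sizeof(*r));

        if (!r)
            return 1;
        atomic_init(&r->refcount, 2); /* two owners, as in p9_tag_alloc() */
        req_put(&c, r);               /* 2 -> 1, not freed */
        req_put(&c, r);               /* 1 -> 0, freed with context in hand */
        return 0;
    }

The p9_fid_create() hunk in the same file is the related kmalloc()+piecewise-memset() to kzalloc() cleanup: zero the whole object once, then set only the nonzero fields.
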
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 8f8f95e39b03a..e758978b44bee 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -343,6 +343,7 @@ static void p9_read_work(struct work_struct *work)
+ p9_debug(P9_DEBUG_ERROR,
+ "No recv fcall for tag %d (req %p), disconnecting!\n",
+ m->rc.tag, m->rreq);
++ p9_req_put(m->client, m->rreq);
+ m->rreq = NULL;
+ err = -EIO;
+ goto error;
+@@ -378,7 +379,7 @@ static void p9_read_work(struct work_struct *work)
+ m->rc.sdata = NULL;
+ m->rc.offset = 0;
+ m->rc.capacity = 0;
+- p9_req_put(m->rreq);
++ p9_req_put(m->client, m->rreq);
+ m->rreq = NULL;
+ }
+
+@@ -492,7 +493,7 @@ static void p9_write_work(struct work_struct *work)
+ m->wpos += err;
+ if (m->wpos == m->wsize) {
+ m->wpos = m->wsize = 0;
+- p9_req_put(m->wreq);
++ p9_req_put(m->client, m->wreq);
+ m->wreq = NULL;
+ }
+
+@@ -695,7 +696,7 @@ static int p9_fd_cancel(struct p9_client *client, struct p9_req_t *req)
+ if (req->status == REQ_STATUS_UNSENT) {
+ list_del(&req->req_list);
+ req->status = REQ_STATUS_FLSHD;
+- p9_req_put(req);
++ p9_req_put(client, req);
+ ret = 0;
+ }
+ spin_unlock(&client->lock);
+@@ -722,7 +723,7 @@ static int p9_fd_cancelled(struct p9_client *client, struct p9_req_t *req)
+ list_del(&req->req_list);
+ req->status = REQ_STATUS_FLSHD;
+ spin_unlock(&client->lock);
+- p9_req_put(req);
++ p9_req_put(client, req);
+
+ return 0;
+ }
+@@ -883,12 +884,12 @@ static void p9_conn_destroy(struct p9_conn *m)
+ p9_mux_poll_stop(m);
+ cancel_work_sync(&m->rq);
+ if (m->rreq) {
+- p9_req_put(m->rreq);
++ p9_req_put(m->client, m->rreq);
+ m->rreq = NULL;
+ }
+ cancel_work_sync(&m->wq);
+ if (m->wreq) {
+- p9_req_put(m->wreq);
++ p9_req_put(m->client, m->wreq);
+ m->wreq = NULL;
+ }
+
+diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
+index 88e5638266743..d817d3745238b 100644
+--- a/net/9p/trans_rdma.c
++++ b/net/9p/trans_rdma.c
+@@ -350,7 +350,7 @@ send_done(struct ib_cq *cq, struct ib_wc *wc)
+ c->busa, c->req->tc.size,
+ DMA_TO_DEVICE);
+ up(&rdma->sq_sem);
+- p9_req_put(c->req);
++ p9_req_put(client, c->req);
+ kfree(c);
+ }
+
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index b24a4fb0f0a23..147972bf2e797 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -199,7 +199,7 @@ static int p9_virtio_cancel(struct p9_client *client, struct p9_req_t *req)
+ /* Reply won't come, so drop req ref */
+ static int p9_virtio_cancelled(struct p9_client *client, struct p9_req_t *req)
+ {
+- p9_req_put(req);
++ p9_req_put(client, req);
+ return 0;
+ }
+
+@@ -523,7 +523,7 @@ err_out:
+ kvfree(out_pages);
+ if (!kicked) {
+ /* reply won't come */
+- p9_req_put(req);
++ p9_req_put(client, req);
+ }
+ return err;
+ }
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 833cd3792c51c..227f89cc7237c 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -163,7 +163,7 @@ again:
+ ring->intf->out_prod = prod;
+ spin_unlock_irqrestore(&ring->lock, flags);
+ notify_remote_via_irq(ring->irq);
+- p9_req_put(p9_req);
++ p9_req_put(client, p9_req);
+
+ return 0;
+ }
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 4c7030ed8d331..5b5363c99ed50 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1065,7 +1065,7 @@ static int ax25_release(struct socket *sock)
+ del_timer_sync(&ax25->t3timer);
+ del_timer_sync(&ax25->idletimer);
+ }
+- dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
++ dev_put_track(ax25_dev->dev, &ax25->dev_tracker);
+ ax25_dev_put(ax25_dev);
+ }
+
+@@ -1146,7 +1146,7 @@ static int ax25_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+
+ if (ax25_dev) {
+ ax25_fillin_cb(ax25, ax25_dev);
+- dev_hold_track(ax25_dev->dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
++ dev_hold_track(ax25_dev->dev, &ax25->dev_tracker, GFP_ATOMIC);
+ }
+
+ done:
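
Both ax25 hunks above fix a mismatched reference tracker: the tracker cookie passed to dev_put_track() must be the same one used for the matching dev_hold_track(), and here per-socket holds were being recorded against the shared per-device tracker. A toy model of the pairing rule (not the real netdevice_tracker API):

    #include <stdio.h>

    /* Toy model: one tracker slot per acquirer, flagged while a ref is held. */
    struct tracker { int live; };
    struct device { int refcnt; };

    static void dev_hold_track(struct device *d, struct tracker *t)
    {
        d->refcnt++;
        t->live = 1;
    }

    static void dev_put_track(struct device *d, struct tracker *t)
    {
        if (!t->live)
            fprintf(stderr, "BUG: put without a matching hold on this tracker\n");
        t->live = 0;
        d->refcnt--;
    }

    int main(void)
    {
        struct device dev = { 0 };
        struct tracker sock_a = { 0 }, sock_b = { 0 };

        /* Each socket pairs hold/put on its own tracker, as in the fix. */
        dev_hold_track(&dev, &sock_a);
        dev_hold_track(&dev, &sock_b);
        dev_put_track(&dev, &sock_a);
        dev_put_track(&dev, &sock_b);
        printf("refcnt=%d\n", dev.refcnt);
        return 0;
    }
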
+diff --git a/net/batman-adv/trace.h b/net/batman-adv/trace.h
+index d673ebdd04267..31c8f922651d5 100644
+--- a/net/batman-adv/trace.h
++++ b/net/batman-adv/trace.h
+@@ -28,8 +28,6 @@
+
+ #endif /* CONFIG_BATMAN_ADV_TRACING */
+
+-#define BATADV_MAX_MSG_LEN 256
+-
+ TRACE_EVENT(batadv_dbg,
+
+ TP_PROTO(struct batadv_priv *bat_priv,
+@@ -40,16 +38,13 @@ TRACE_EVENT(batadv_dbg,
+ TP_STRUCT__entry(
+ __string(device, bat_priv->soft_iface->name)
+ __string(driver, KBUILD_MODNAME)
+- __dynamic_array(char, msg, BATADV_MAX_MSG_LEN)
++ __vstring(msg, vaf->fmt, vaf->va)
+ ),
+
+ TP_fast_assign(
+ __assign_str(device, bat_priv->soft_iface->name);
+ __assign_str(driver, KBUILD_MODNAME);
+- WARN_ON_ONCE(vsnprintf(__get_dynamic_array(msg),
+- BATADV_MAX_MSG_LEN,
+- vaf->fmt,
+- *vaf->va) >= BATADV_MAX_MSG_LEN);
++ __assign_vstr(msg, vaf->fmt, vaf->va);
+ ),
+
+ TP_printk(
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index a0f99baafd357..6a53bcc5cfbb1 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -594,6 +594,11 @@ static int hci_dev_do_reset(struct hci_dev *hdev)
+ skb_queue_purge(&hdev->rx_q);
+ skb_queue_purge(&hdev->cmd_q);
+
++ /* Cancel these to avoid queueing non-chained pending work */
++ hci_dev_set_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
++ cancel_delayed_work(&hdev->cmd_timer);
++ cancel_delayed_work(&hdev->ncmd_timer);
++
+ /* Avoid potential lockdep warnings from the *_flush() calls by
+ * ensuring the workqueue is empty up front.
+ */
+@@ -607,6 +612,8 @@ static int hci_dev_do_reset(struct hci_dev *hdev)
+ if (hdev->flush)
+ hdev->flush(hdev);
+
++ hci_dev_clear_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
++
+ atomic_set(&hdev->cmd_cnt, 1);
+ hdev->acl_cnt = 0; hdev->sco_cnt = 0; hdev->le_cnt = 0;
+
+@@ -3864,7 +3871,8 @@ static void hci_cmd_work(struct work_struct *work)
+ if (res < 0)
+ __hci_cmd_sync_cancel(hdev, -res);
+
+- if (test_bit(HCI_RESET, &hdev->flags))
++ if (test_bit(HCI_RESET, &hdev->flags) ||
++ hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
+ cancel_delayed_work(&hdev->cmd_timer);
+ else
+ schedule_delayed_work(&hdev->cmd_timer,
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index af17dfb20e017..7cb956d3abb26 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3768,8 +3768,9 @@ static inline void handle_cmd_cnt_and_timer(struct hci_dev *hdev, u8 ncmd)
+ cancel_delayed_work(&hdev->ncmd_timer);
+ atomic_set(&hdev->cmd_cnt, 1);
+ } else {
+- schedule_delayed_work(&hdev->ncmd_timer,
+- HCI_NCMD_TIMEOUT);
++ if (!hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
++ schedule_delayed_work(&hdev->ncmd_timer,
++ HCI_NCMD_TIMEOUT);
+ }
+ }
+ }
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index c17021642234b..b5e7d4b8ab24a 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1612,6 +1612,9 @@ static int hci_le_add_resolve_list_sync(struct hci_dev *hdev,
+ bacpy(&cp.bdaddr, ¶ms->addr);
+ memcpy(cp.peer_irk, irk->val, 16);
+
++ /* Default privacy mode is always Network */
++ params->privacy_mode = HCI_NETWORK_PRIVACY;
++
+ done:
+ if (hci_dev_test_flag(hdev, HCI_PRIVACY))
+ memcpy(cp.local_irk, hdev->irk, 16);
+@@ -5039,13 +5042,13 @@ static int hci_resume_scan_sync(struct hci_dev *hdev)
+ if (!hdev->scanning_paused)
+ return 0;
+
++ hdev->scanning_paused = false;
++
+ hci_update_scan_sync(hdev);
+
+ /* Reset passive scanning to normal */
+ hci_update_passive_scan_sync(hdev);
+
+- hdev->scanning_paused = false;
+-
+ return 0;
+ }
+
+@@ -5064,7 +5067,6 @@ int hci_resume_sync(struct hci_dev *hdev)
+ return 0;
+
+ hdev->suspended = false;
+- hdev->scanning_paused = false;
+
+ /* Restore event mask */
+ hci_set_event_mask_sync(hdev);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 52668662ae8de..f18d0c72713f1 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1969,11 +1969,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ bdaddr_t *dst,
+ u8 link_type)
+ {
+- struct l2cap_chan *c, *c1 = NULL;
++ struct l2cap_chan *c, *tmp, *c1 = NULL;
+
+ read_lock(&chan_list_lock);
+
+- list_for_each_entry(c, &chan_list, global_l) {
++ list_for_each_entry_safe(c, tmp, &chan_list, global_l) {
+ if (state && c->state != state)
+ continue;
+
+@@ -1992,11 +1992,10 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ dst_match = !bacmp(&c->dst, dst);
+ if (src_match && dst_match) {
+ c = l2cap_chan_hold_unless_zero(c);
+- if (!c)
+- continue;
+-
+- read_unlock(&chan_list_lock);
+- return c;
++ if (c) {
++ read_unlock(&chan_list_lock);
++ return c;
++ }
+ }
+
+ /* Closest match */
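
The l2cap_global_chan_by_psm() change above switches to list_for_each_entry_safe() because l2cap_chan_hold_unless_zero() can fail on a dying channel whose final put may free it mid-walk; the _safe variant caches the successor before the loop body runs. The underlying idiom, in a minimal singly-linked sketch:

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int val;
        struct node *next;
    };

    /* Iteration that tolerates freeing the current entry: cache next first. */
    #define for_each_node_safe(pos, n, head) \
        for (pos = (head); pos && (n = pos->next, 1); pos = n)

    int main(void)
    {
        struct node *head = NULL, *pos, *n;

        for (int i = 0; i < 5; i++) {
            struct node *e = malloc(sizeof(*e));

            if (!e)
                return 1;
            e->val = i;
            e->next = head;
            head = e;
        }

        /* Single pass; the list is not reused afterwards in this sketch. */
        for_each_node_safe(pos, n, head) {
            if (pos->val % 2 == 0) {
                free(pos); /* safe: the successor is already cached in n */
                continue;
            }
            printf("kept %d\n", pos->val);
        }
        return 0;
    }
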
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 2f91a8c2b6780..cbdf0e2bc5ae0 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -6820,11 +6820,14 @@ static int get_conn_info(struct sock *sk, struct hci_dev *hdev, void *data,
+
+ cmd = mgmt_pending_new(sk, MGMT_OP_GET_CONN_INFO, hdev, data,
+ len);
+- if (!cmd)
++ if (!cmd) {
+ err = -ENOMEM;
+- else
++ } else {
++ hci_conn_hold(conn);
++ cmd->user_data = hci_conn_get(conn);
+ err = hci_cmd_sync_queue(hdev, get_conn_info_sync,
+ cmd, get_conn_info_complete);
++ }
+
+ if (err < 0) {
+ mgmt_cmd_complete(sk, hdev->id, MGMT_OP_GET_CONN_INFO,
+@@ -6836,9 +6839,6 @@ static int get_conn_info(struct sock *sk, struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
+- hci_conn_hold(conn);
+- cmd->user_data = hci_conn_get(conn);
+-
+ conn->conn_info_timestamp = jiffies;
+ } else {
+ /* Cache is valid, just reply with values cached in hci_conn */
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 7950f75207658..74f05ed6aff29 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3918,7 +3918,7 @@ static void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
+ offset -= frag_size;
+ }
+ out:
+- return offset + len < size ? addr + offset : NULL;
++ return offset + len <= size ? addr + offset : NULL;
+ }
+
+ BPF_CALL_4(bpf_xdp_load_bytes, struct xdp_buff *, xdp, u32, offset,
+@@ -4653,6 +4653,7 @@ BPF_CALL_4(bpf_skb_set_tunnel_key, struct sk_buff *, skb,
+ } else {
+ info->key.u.ipv4.dst = cpu_to_be32(from->remote_ipv4);
+ info->key.u.ipv4.src = cpu_to_be32(from->local_ipv4);
++ info->key.flow_flags = FLOWI_FLAG_ANYSRC;
+ }
+
+ return 0;
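
The bpf_xdp_pointer() fix in the first hunk above is the textbook bounds-check off-by-one: reading len bytes at offset from a size-byte buffer is valid exactly when offset + len <= size, so the old strict `<` rejected accesses ending flush with the buffer end. For example (overflow of off + len ignored for brevity):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Valid iff the access ends at or before the end of the buffer. */
    static bool in_bounds_old(uint32_t off, uint32_t len, uint32_t size)
    {
        return off + len < size; /* wrongly rejects accesses ending at the edge */
    }

    static bool in_bounds_new(uint32_t off, uint32_t len, uint32_t size)
    {
        return off + len <= size;
    }

    int main(void)
    {
        /* The last 4 bytes of a 4096-byte frame: valid, old check fails. */
        printf("old=%d new=%d\n",
               in_bounds_old(4092, 4, 4096), in_bounds_new(4092, 4, 4096));
        return 0;
    }
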
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index b0fcd0200e84a..a8dbea559c7f6 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -462,7 +462,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+
+ if (copied == len)
+ break;
+- } while (i != msg_rx->sg.end);
++ } while (!sg_is_last(sge));
+
+ if (unlikely(peek)) {
+ msg_rx = sk_psock_next_msg(psock, msg_rx);
+@@ -472,7 +472,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ }
+
+ msg_rx->sg.start = i;
+- if (!sge->length && msg_rx->sg.start == msg_rx->sg.end) {
++ if (!sge->length && sg_is_last(sge)) {
+ msg_rx = sk_psock_dequeue_msg(psock);
+ kfree_sk_msg(msg_rx);
+ }
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index eb8e128e43e8b..e13641c65f88e 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -736,11 +736,6 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ lock_sock(sk);
+
+- if (dccp_qpolicy_full(sk)) {
+- rc = -EAGAIN;
+- goto out_release;
+- }
+-
+ timeo = sock_sndtimeo(sk, noblock);
+
+ /*
+@@ -759,6 +754,11 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ if (skb == NULL)
+ goto out_release;
+
++ if (dccp_qpolicy_full(sk)) {
++ rc = -EAGAIN;
++ goto out_discard;
++ }
++
+ if (sk->sk_state == DCCP_CLOSED) {
+ rc = -ENOTCONN;
+ goto out_discard;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 252c8bceaba42..6f5556cb0d970 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1919,6 +1919,8 @@ static int __init inet_init(void)
+
+ sock_skb_cb_check_size(sizeof(struct inet_skb_parm));
+
++ raw_hashinfo_init(&raw_v4_hashinfo);
++
+ rc = proto_register(&tcp_prot, 1);
+ if (rc)
+ goto out;
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 3c6101def7d6b..b83c2bd9d7223 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -50,7 +50,7 @@
+
+ struct ping_table {
+ struct hlist_nulls_head hash[PING_HTABLE_SIZE];
+- rwlock_t lock;
++ spinlock_t lock;
+ };
+
+ static struct ping_table ping_table;
+@@ -82,7 +82,7 @@ int ping_get_port(struct sock *sk, unsigned short ident)
+ struct sock *sk2 = NULL;
+
+ isk = inet_sk(sk);
+- write_lock_bh(&ping_table.lock);
++ spin_lock(&ping_table.lock);
+ if (ident == 0) {
+ u32 i;
+ u16 result = ping_port_rover + 1;
+@@ -128,14 +128,15 @@ next_port:
+ if (sk_unhashed(sk)) {
+ pr_debug("was not hashed\n");
+ sock_hold(sk);
+- hlist_nulls_add_head(&sk->sk_nulls_node, hlist);
++ sock_set_flag(sk, SOCK_RCU_FREE);
++ hlist_nulls_add_head_rcu(&sk->sk_nulls_node, hlist);
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ }
+- write_unlock_bh(&ping_table.lock);
++ spin_unlock(&ping_table.lock);
+ return 0;
+
+ fail:
+- write_unlock_bh(&ping_table.lock);
++ spin_unlock(&ping_table.lock);
+ return 1;
+ }
+ EXPORT_SYMBOL_GPL(ping_get_port);
+@@ -153,19 +154,19 @@ void ping_unhash(struct sock *sk)
+ struct inet_sock *isk = inet_sk(sk);
+
+ pr_debug("ping_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
+- write_lock_bh(&ping_table.lock);
++ spin_lock(&ping_table.lock);
+ if (sk_hashed(sk)) {
+- hlist_nulls_del(&sk->sk_nulls_node);
+- sk_nulls_node_init(&sk->sk_nulls_node);
++ hlist_nulls_del_init_rcu(&sk->sk_nulls_node);
+ sock_put(sk);
+ isk->inet_num = 0;
+ isk->inet_sport = 0;
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+ }
+- write_unlock_bh(&ping_table.lock);
++ spin_unlock(&ping_table.lock);
+ }
+ EXPORT_SYMBOL_GPL(ping_unhash);
+
++/* Called under rcu_read_lock() */
+ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ {
+ struct hlist_nulls_head *hslot = ping_hashslot(&ping_table, net, ident);
+@@ -190,8 +191,6 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ return NULL;
+ }
+
+- read_lock_bh(&ping_table.lock);
+-
+ ping_portaddr_for_each_entry(sk, hnode, hslot) {
+ isk = inet_sk(sk);
+
+@@ -230,13 +229,11 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ sk->sk_bound_dev_if != sdif)
+ continue;
+
+- sock_hold(sk);
+ goto exit;
+ }
+
+ sk = NULL;
+ exit:
+- read_unlock_bh(&ping_table.lock);
+
+ return sk;
+ }
+@@ -592,7 +589,7 @@ void ping_err(struct sk_buff *skb, int offset, u32 info)
+ sk->sk_err = err;
+ sk_error_report(sk);
+ out:
+- sock_put(sk);
++ return;
+ }
+ EXPORT_SYMBOL_GPL(ping_err);
+
+@@ -998,7 +995,6 @@ enum skb_drop_reason ping_rcv(struct sk_buff *skb)
+ reason = __ping_queue_rcv_skb(sk, skb2);
+ else
+ reason = SKB_DROP_REASON_NOMEM;
+- sock_put(sk);
+ }
+
+ if (reason)
+@@ -1084,13 +1080,13 @@ static struct sock *ping_get_idx(struct seq_file *seq, loff_t pos)
+ }
+
+ void *ping_seq_start(struct seq_file *seq, loff_t *pos, sa_family_t family)
+- __acquires(ping_table.lock)
++ __acquires(RCU)
+ {
+ struct ping_iter_state *state = seq->private;
+ state->bucket = 0;
+ state->family = family;
+
+- read_lock_bh(&ping_table.lock);
++ rcu_read_lock();
+
+ return *pos ? ping_get_idx(seq, *pos-1) : SEQ_START_TOKEN;
+ }
+@@ -1116,9 +1112,9 @@ void *ping_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ EXPORT_SYMBOL_GPL(ping_seq_next);
+
+ void ping_seq_stop(struct seq_file *seq, void *v)
+- __releases(ping_table.lock)
++ __releases(RCU)
+ {
+- read_unlock_bh(&ping_table.lock);
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(ping_seq_stop);
+
+@@ -1202,5 +1198,5 @@ void __init ping_init(void)
+
+ for (i = 0; i < PING_HTABLE_SIZE; i++)
+ INIT_HLIST_NULLS_HEAD(&ping_table.hash[i], i);
+- rwlock_init(&ping_table.lock);
++ spin_lock_init(&ping_table.lock);
+ }
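
The ping.c conversion above (and the raw.c one that follows) moves lookups from a bh-disabled rwlock to lockless RCU: writers serialize on a plain spinlock, sockets are freed only after a grace period via SOCK_RCU_FREE, and a reader that must keep a socket past the read-side section takes a reference with refcount_inc_not_zero(), as raw_sock_get() does further down. That last idiom, sketched with C11 atomics:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /*
     * An object found under rcu_read_lock() may be concurrently dying, so
     * a reference is taken only if the count is still nonzero; this is
     * what refcount_inc_not_zero() does in the kernel.
     */
    static bool get_unless_zero(atomic_int *refcnt)
    {
        int old = atomic_load(refcnt);

        do {
            if (old == 0)
                return false; /* already on its way to being freed: skip */
        } while (!atomic_compare_exchange_weak(refcnt, &old, old + 1));
        return true;
    }

    int main(void)
    {
        atomic_int live = 1, dying = 0;

        printf("live: %d, dying: %d\n",
               get_unless_zero(&live), get_unless_zero(&dying));
        return 0;
    }
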
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index bbd717805b103..57ce7bd646f67 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -85,20 +85,19 @@ struct raw_frag_vec {
+ int hlen;
+ };
+
+-struct raw_hashinfo raw_v4_hashinfo = {
+- .lock = __RW_LOCK_UNLOCKED(raw_v4_hashinfo.lock),
+-};
++struct raw_hashinfo raw_v4_hashinfo;
+ EXPORT_SYMBOL_GPL(raw_v4_hashinfo);
+
+ int raw_hash_sk(struct sock *sk)
+ {
+ struct raw_hashinfo *h = sk->sk_prot->h.raw_hash;
+- struct hlist_head *head;
++ struct hlist_nulls_head *hlist;
+
+- head = &h->ht[inet_sk(sk)->inet_num & (RAW_HTABLE_SIZE - 1)];
++ hlist = &h->ht[inet_sk(sk)->inet_num & (RAW_HTABLE_SIZE - 1)];
+
+ write_lock_bh(&h->lock);
+- sk_add_node(sk, head);
++ hlist_nulls_add_head_rcu(&sk->sk_nulls_node, hlist);
++ sock_set_flag(sk, SOCK_RCU_FREE);
+ write_unlock_bh(&h->lock);
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+
+@@ -111,30 +110,25 @@ void raw_unhash_sk(struct sock *sk)
+ struct raw_hashinfo *h = sk->sk_prot->h.raw_hash;
+
+ write_lock_bh(&h->lock);
+- if (sk_del_node_init(sk))
++ if (__sk_nulls_del_node_init_rcu(sk))
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+ write_unlock_bh(&h->lock);
+ }
+ EXPORT_SYMBOL_GPL(raw_unhash_sk);
+
+-struct sock *__raw_v4_lookup(struct net *net, struct sock *sk,
+- unsigned short num, __be32 raddr, __be32 laddr,
+- int dif, int sdif)
++bool raw_v4_match(struct net *net, struct sock *sk, unsigned short num,
++ __be32 raddr, __be32 laddr, int dif, int sdif)
+ {
+- sk_for_each_from(sk) {
+- struct inet_sock *inet = inet_sk(sk);
+-
+- if (net_eq(sock_net(sk), net) && inet->inet_num == num &&
+- !(inet->inet_daddr && inet->inet_daddr != raddr) &&
+- !(inet->inet_rcv_saddr && inet->inet_rcv_saddr != laddr) &&
+- raw_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+- goto found; /* gotcha */
+- }
+- sk = NULL;
+-found:
+- return sk;
++ struct inet_sock *inet = inet_sk(sk);
++
++ if (net_eq(sock_net(sk), net) && inet->inet_num == num &&
++ !(inet->inet_daddr && inet->inet_daddr != raddr) &&
++ !(inet->inet_rcv_saddr && inet->inet_rcv_saddr != laddr) &&
++ raw_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
++ return true;
++ return false;
+ }
+-EXPORT_SYMBOL_GPL(__raw_v4_lookup);
++EXPORT_SYMBOL_GPL(raw_v4_match);
+
+ /*
+ * 0 - deliver
+@@ -168,23 +162,20 @@ static int icmp_filter(const struct sock *sk, const struct sk_buff *skb)
+ */
+ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ {
++ struct net *net = dev_net(skb->dev);
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
+ int sdif = inet_sdif(skb);
+ int dif = inet_iif(skb);
+- struct sock *sk;
+- struct hlist_head *head;
+ int delivered = 0;
+- struct net *net;
+-
+- read_lock(&raw_v4_hashinfo.lock);
+- head = &raw_v4_hashinfo.ht[hash];
+- if (hlist_empty(head))
+- goto out;
+-
+- net = dev_net(skb->dev);
+- sk = __raw_v4_lookup(net, __sk_head(head), iph->protocol,
+- iph->saddr, iph->daddr, dif, sdif);
++ struct sock *sk;
+
+- while (sk) {
++ hlist = &raw_v4_hashinfo.ht[hash];
++ rcu_read_lock();
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
++ if (!raw_v4_match(net, sk, iph->protocol,
++ iph->saddr, iph->daddr, dif, sdif))
++ continue;
+ delivered = 1;
+ if ((iph->protocol != IPPROTO_ICMP || !icmp_filter(sk, skb)) &&
+ ip_mc_sf_allow(sk, iph->daddr, iph->saddr,
+@@ -195,31 +186,16 @@ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ if (clone)
+ raw_rcv(sk, clone);
+ }
+- sk = __raw_v4_lookup(net, sk_next(sk), iph->protocol,
+- iph->saddr, iph->daddr,
+- dif, sdif);
+ }
+-out:
+- read_unlock(&raw_v4_hashinfo.lock);
++ rcu_read_unlock();
+ return delivered;
+ }
+
+ int raw_local_deliver(struct sk_buff *skb, int protocol)
+ {
+- int hash;
+- struct sock *raw_sk;
+-
+- hash = protocol & (RAW_HTABLE_SIZE - 1);
+- raw_sk = sk_head(&raw_v4_hashinfo.ht[hash]);
+-
+- /* If there maybe a raw socket we must check - if not we
+- * don't care less
+- */
+- if (raw_sk && !raw_v4_input(skb, ip_hdr(skb), hash))
+- raw_sk = NULL;
+-
+- return raw_sk != NULL;
++ int hash = protocol & (RAW_HTABLE_SIZE - 1);
+
++ return raw_v4_input(skb, ip_hdr(skb), hash);
+ }
+
+ static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info)
+@@ -286,31 +262,27 @@ static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info)
+
+ void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info)
+ {
+- int hash;
+- struct sock *raw_sk;
++ struct net *net = dev_net(skb->dev);
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
++ int dif = skb->dev->ifindex;
++ int sdif = inet_sdif(skb);
+ const struct iphdr *iph;
+- struct net *net;
++ struct sock *sk;
++ int hash;
+
+ hash = protocol & (RAW_HTABLE_SIZE - 1);
++ hlist = &raw_v4_hashinfo.ht[hash];
+
+- read_lock(&raw_v4_hashinfo.lock);
+- raw_sk = sk_head(&raw_v4_hashinfo.ht[hash]);
+- if (raw_sk) {
+- int dif = skb->dev->ifindex;
+- int sdif = inet_sdif(skb);
+-
++ rcu_read_lock();
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
+ iph = (const struct iphdr *)skb->data;
+- net = dev_net(skb->dev);
+-
+- while ((raw_sk = __raw_v4_lookup(net, raw_sk, protocol,
+- iph->daddr, iph->saddr,
+- dif, sdif)) != NULL) {
+- raw_err(raw_sk, skb, info);
+- raw_sk = sk_next(raw_sk);
+- iph = (const struct iphdr *)skb->data;
+- }
++ if (!raw_v4_match(net, sk, iph->protocol,
++ iph->daddr, iph->saddr, dif, sdif))
++ continue;
++ raw_err(sk, skb, info);
+ }
+- read_unlock(&raw_v4_hashinfo.lock);
++ rcu_read_unlock();
+ }
+
+ static int raw_rcv_skb(struct sock *sk, struct sk_buff *skb)
+@@ -971,44 +943,41 @@ struct proto raw_prot = {
+ };
+
+ #ifdef CONFIG_PROC_FS
+-static struct sock *raw_get_first(struct seq_file *seq)
++static struct sock *raw_get_first(struct seq_file *seq, int bucket)
+ {
+- struct sock *sk;
+ struct raw_hashinfo *h = pde_data(file_inode(seq->file));
+ struct raw_iter_state *state = raw_seq_private(seq);
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
++ struct sock *sk;
+
+- for (state->bucket = 0; state->bucket < RAW_HTABLE_SIZE;
++ for (state->bucket = bucket; state->bucket < RAW_HTABLE_SIZE;
+ ++state->bucket) {
+- sk_for_each(sk, &h->ht[state->bucket])
++ hlist = &h->ht[state->bucket];
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
+ if (sock_net(sk) == seq_file_net(seq))
+- goto found;
++ return sk;
++ }
+ }
+- sk = NULL;
+-found:
+- return sk;
++ return NULL;
+ }
+
+ static struct sock *raw_get_next(struct seq_file *seq, struct sock *sk)
+ {
+- struct raw_hashinfo *h = pde_data(file_inode(seq->file));
+ struct raw_iter_state *state = raw_seq_private(seq);
+
+ do {
+- sk = sk_next(sk);
+-try_again:
+- ;
++ sk = sk_nulls_next(sk);
+ } while (sk && sock_net(sk) != seq_file_net(seq));
+
+- if (!sk && ++state->bucket < RAW_HTABLE_SIZE) {
+- sk = sk_head(&h->ht[state->bucket]);
+- goto try_again;
+- }
++ if (!sk)
++ return raw_get_first(seq, state->bucket + 1);
+ return sk;
+ }
+
+ static struct sock *raw_get_idx(struct seq_file *seq, loff_t pos)
+ {
+- struct sock *sk = raw_get_first(seq);
++ struct sock *sk = raw_get_first(seq, 0);
+
+ if (sk)
+ while (pos && (sk = raw_get_next(seq, sk)) != NULL)
+@@ -1017,11 +986,9 @@ static struct sock *raw_get_idx(struct seq_file *seq, loff_t pos)
+ }
+
+ void *raw_seq_start(struct seq_file *seq, loff_t *pos)
+- __acquires(&h->lock)
++ __acquires(RCU)
+ {
+- struct raw_hashinfo *h = pde_data(file_inode(seq->file));
+-
+- read_lock(&h->lock);
++ rcu_read_lock();
+ return *pos ? raw_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
+ }
+ EXPORT_SYMBOL_GPL(raw_seq_start);
+@@ -1031,7 +998,7 @@ void *raw_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ struct sock *sk;
+
+ if (v == SEQ_START_TOKEN)
+- sk = raw_get_first(seq);
++ sk = raw_get_first(seq, 0);
+ else
+ sk = raw_get_next(seq, v);
+ ++*pos;
+@@ -1040,11 +1007,9 @@ void *raw_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ EXPORT_SYMBOL_GPL(raw_seq_next);
+
+ void raw_seq_stop(struct seq_file *seq, void *v)
+- __releases(&h->lock)
++ __releases(RCU)
+ {
+- struct raw_hashinfo *h = pde_data(file_inode(seq->file));
+-
+- read_unlock(&h->lock);
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(raw_seq_stop);
+
+@@ -1106,6 +1071,7 @@ static __net_initdata struct pernet_operations raw_net_ops = {
+
+ int __init raw_proc_init(void)
+ {
++
+ return register_pernet_subsys(&raw_net_ops);
+ }
+
+diff --git a/net/ipv4/raw_diag.c b/net/ipv4/raw_diag.c
+index ccacbde30a2c5..5f208e840d859 100644
+--- a/net/ipv4/raw_diag.c
++++ b/net/ipv4/raw_diag.c
+@@ -34,57 +34,57 @@ raw_get_hashinfo(const struct inet_diag_req_v2 *r)
+ * use helper to figure it out.
+ */
+
+-static struct sock *raw_lookup(struct net *net, struct sock *from,
+- const struct inet_diag_req_v2 *req)
++static bool raw_lookup(struct net *net, struct sock *sk,
++ const struct inet_diag_req_v2 *req)
+ {
+ struct inet_diag_req_raw *r = (void *)req;
+- struct sock *sk = NULL;
+
+ if (r->sdiag_family == AF_INET)
+- sk = __raw_v4_lookup(net, from, r->sdiag_raw_protocol,
+- r->id.idiag_dst[0],
+- r->id.idiag_src[0],
+- r->id.idiag_if, 0);
++ return raw_v4_match(net, sk, r->sdiag_raw_protocol,
++ r->id.idiag_dst[0],
++ r->id.idiag_src[0],
++ r->id.idiag_if, 0);
+ #if IS_ENABLED(CONFIG_IPV6)
+ else
+- sk = __raw_v6_lookup(net, from, r->sdiag_raw_protocol,
+- (const struct in6_addr *)r->id.idiag_src,
+- (const struct in6_addr *)r->id.idiag_dst,
+- r->id.idiag_if, 0);
++ return raw_v6_match(net, sk, r->sdiag_raw_protocol,
++ (const struct in6_addr *)r->id.idiag_src,
++ (const struct in6_addr *)r->id.idiag_dst,
++ r->id.idiag_if, 0);
+ #endif
+- return sk;
++ return false;
+ }
+
+ static struct sock *raw_sock_get(struct net *net, const struct inet_diag_req_v2 *r)
+ {
+ struct raw_hashinfo *hashinfo = raw_get_hashinfo(r);
+- struct sock *sk = NULL, *s;
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
++ struct sock *sk;
+ int slot;
+
+ if (IS_ERR(hashinfo))
+ return ERR_CAST(hashinfo);
+
+- read_lock(&hashinfo->lock);
++ rcu_read_lock();
+ for (slot = 0; slot < RAW_HTABLE_SIZE; slot++) {
+- sk_for_each(s, &hashinfo->ht[slot]) {
+- sk = raw_lookup(net, s, r);
+- if (sk) {
++ hlist = &hashinfo->ht[slot];
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
++ if (raw_lookup(net, sk, r)) {
+ /*
+ * Grab it and keep until we fill
+- * diag meaage to be reported, so
++ * diag message to be reported, so
+ * caller should call sock_put then.
+- * We can do that because we're keeping
+- * hashinfo->lock here.
+ */
+- sock_hold(sk);
+- goto out_unlock;
++ if (refcount_inc_not_zero(&sk->sk_refcnt))
++ goto out_unlock;
+ }
+ }
+ }
++ sk = ERR_PTR(-ENOENT);
+ out_unlock:
+- read_unlock(&hashinfo->lock);
++ rcu_read_unlock();
+
+- return sk ? sk : ERR_PTR(-ENOENT);
++ return sk;
+ }
+
+ static int raw_diag_dump_one(struct netlink_callback *cb,
+@@ -142,6 +142,8 @@ static void raw_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ struct raw_hashinfo *hashinfo = raw_get_hashinfo(r);
+ struct net *net = sock_net(skb->sk);
+ struct inet_diag_dump_data *cb_data;
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
+ int num, s_num, slot, s_slot;
+ struct sock *sk = NULL;
+ struct nlattr *bc;
+@@ -158,7 +160,8 @@ static void raw_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ for (slot = s_slot; slot < RAW_HTABLE_SIZE; s_num = 0, slot++) {
+ num = 0;
+
+- sk_for_each(sk, &hashinfo->ht[slot]) {
++ hlist = &hashinfo->ht[slot];
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
+ struct inet_sock *inet = inet_sk(sk);
+
+ if (!net_eq(sock_net(sk), net))
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 766881775abb7..3ae2ea0488838 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -952,6 +952,23 @@ static int tcp_downgrade_zcopy_pure(struct sock *sk, struct sk_buff *skb)
+ return 0;
+ }
+
++static int tcp_wmem_schedule(struct sock *sk, int copy)
++{
++ int left;
++
++ if (likely(sk_wmem_schedule(sk, copy)))
++ return copy;
++
++ /* We could be in trouble if we have nothing queued.
++ * Use whatever is left in sk->sk_forward_alloc and tcp_wmem[0]
++ * to guarantee some progress.
++ */
++ left = sock_net(sk)->ipv4.sysctl_tcp_wmem[0] - sk->sk_wmem_queued;
++ if (left > 0)
++ sk_forced_mem_schedule(sk, min(left, copy));
++ return min(copy, sk->sk_forward_alloc);
++}
++
+ static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
+ struct page *page, int offset, size_t *size)
+ {
+@@ -987,7 +1004,11 @@ new_segment:
+ tcp_mark_push(tp, skb);
+ goto new_segment;
+ }
+- if (tcp_downgrade_zcopy_pure(sk, skb) || !sk_wmem_schedule(sk, copy))
++ if (tcp_downgrade_zcopy_pure(sk, skb))
++ return NULL;
++
++ copy = tcp_wmem_schedule(sk, copy);
++ if (!copy)
+ return NULL;
+
+ if (can_coalesce) {
+@@ -1336,8 +1357,11 @@ new_segment:
+
+ copy = min_t(int, copy, pfrag->size - pfrag->offset);
+
+- if (tcp_downgrade_zcopy_pure(sk, skb) ||
+- !sk_wmem_schedule(sk, copy))
++ if (tcp_downgrade_zcopy_pure(sk, skb))
++ goto wait_for_space;
++
++ copy = tcp_wmem_schedule(sk, copy);
++ if (!copy)
+ goto wait_for_space;
+
+ err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
+@@ -1364,7 +1388,8 @@ new_segment:
+ skb_shinfo(skb)->flags |= SKBFL_PURE_ZEROCOPY;
+
+ if (!skb_zcopy_pure(skb)) {
+- if (!sk_wmem_schedule(sk, copy))
++ copy = tcp_wmem_schedule(sk, copy);
++ if (!copy)
+ goto wait_for_space;
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 4c376b6d87649..aed0c5f828bef 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3142,7 +3142,7 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ struct tcp_sock *tp = tcp_sk(sk);
+ unsigned int cur_mss;
+ int diff, len, err;
+-
++ int avail_wnd;
+
+ /* Inconclusive MTU probe */
+ if (icsk->icsk_mtup.probe_size)
+@@ -3164,17 +3164,25 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ return -EHOSTUNREACH; /* Routing failure or similar. */
+
+ cur_mss = tcp_current_mss(sk);
++ avail_wnd = tcp_wnd_end(tp) - TCP_SKB_CB(skb)->seq;
+
+ /* If receiver has shrunk his window, and skb is out of
+ * new window, do not retransmit it. The exception is the
+ * case, when window is shrunk to zero. In this case
+- * our retransmit serves as a zero window probe.
++ * our retransmit of one segment serves as a zero window probe.
+ */
+- if (!before(TCP_SKB_CB(skb)->seq, tcp_wnd_end(tp)) &&
+- TCP_SKB_CB(skb)->seq != tp->snd_una)
+- return -EAGAIN;
++ if (avail_wnd <= 0) {
++ if (TCP_SKB_CB(skb)->seq != tp->snd_una)
++ return -EAGAIN;
++ avail_wnd = cur_mss;
++ }
+
+ len = cur_mss * segs;
++ if (len > avail_wnd) {
++ len = rounddown(avail_wnd, cur_mss);
++ if (!len)
++ len = avail_wnd;
++ }
+ if (skb->len > len) {
+ if (tcp_fragment(sk, TCP_FRAG_IN_RTX_QUEUE, skb, len,
+ cur_mss, GFP_ATOMIC))
+@@ -3188,8 +3196,9 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
+ diff -= tcp_skb_pcount(skb);
+ if (diff)
+ tcp_adjust_pcount(sk, skb, diff);
+- if (skb->len < cur_mss)
+- tcp_retrans_try_collapse(sk, skb, cur_mss);
++ avail_wnd = min_t(int, avail_wnd, cur_mss);
++ if (skb->len < avail_wnd)
++ tcp_retrans_try_collapse(sk, skb, avail_wnd);
+ }
+
+ /* RFC3168, section 6.1.1.1. ECN fallback */
+@@ -3360,11 +3369,12 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
+ */
+ void sk_forced_mem_schedule(struct sock *sk, int size)
+ {
+- int amt;
++ int delta, amt;
+
+- if (size <= sk->sk_forward_alloc)
++ delta = size - sk->sk_forward_alloc;
++ if (delta <= 0)
+ return;
+- amt = sk_mem_pages(size);
++ amt = sk_mem_pages(delta);
+ sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
+ sk_memory_allocated_add(sk, amt);
+
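Two fixes land in tcp_output.c above. First, __tcp_retransmit_skb() now clamps the retransmit length to the receiver's remaining window, rounding down to whole MSS-sized segments, while a window shrunk to zero still allows one segment as a zero-window probe. Second, sk_forced_mem_schedule() charges only the shortfall beyond sk_forward_alloc instead of the full size. A rough sketch of the clamping arithmetic (simplified; the -EAGAIN path for out-of-window data not at snd_una is omitted):

    /* avail_wnd models tcp_wnd_end(tp) - TCP_SKB_CB(skb)->seq. */
    static int clamp_retrans_len(int segs, int cur_mss, int avail_wnd)
    {
        if (avail_wnd <= 0)
            avail_wnd = cur_mss;    /* retransmit one MSS as a probe */

        int len = cur_mss * segs;
        if (len > avail_wnd) {
            len = (avail_wnd / cur_mss) * cur_mss;   /* rounddown() */
            if (!len)               /* window smaller than one MSS */
                len = avail_wnd;
        }
        return len;
    }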
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 6f354f8be2c57..9f6f4a41245d4 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -63,6 +63,7 @@
+ #include <net/compat.h>
+ #include <net/xfrm.h>
+ #include <net/ioam6.h>
++#include <net/rawv6.h>
+
+ #include <linux/uaccess.h>
+ #include <linux/mroute6.h>
+@@ -1073,6 +1074,8 @@ static int __init inet6_init(void)
+ goto out;
+ }
+
++ raw_hashinfo_init(&raw_v6_hashinfo);
++
+ err = proto_register(&tcpv6_prot, 1);
+ if (err)
+ goto out;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 3b7cbd522b548..1af93856f876b 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -61,46 +61,30 @@
+
+ #define ICMPV6_HDRLEN 4 /* ICMPv6 header, RFC 4443 Section 2.1 */
+
+-struct raw_hashinfo raw_v6_hashinfo = {
+- .lock = __RW_LOCK_UNLOCKED(raw_v6_hashinfo.lock),
+-};
++struct raw_hashinfo raw_v6_hashinfo;
+ EXPORT_SYMBOL_GPL(raw_v6_hashinfo);
+
+-struct sock *__raw_v6_lookup(struct net *net, struct sock *sk,
+- unsigned short num, const struct in6_addr *loc_addr,
+- const struct in6_addr *rmt_addr, int dif, int sdif)
++bool raw_v6_match(struct net *net, struct sock *sk, unsigned short num,
++ const struct in6_addr *loc_addr,
++ const struct in6_addr *rmt_addr, int dif, int sdif)
+ {
+- bool is_multicast = ipv6_addr_is_multicast(loc_addr);
+-
+- sk_for_each_from(sk)
+- if (inet_sk(sk)->inet_num == num) {
+-
+- if (!net_eq(sock_net(sk), net))
+- continue;
+-
+- if (!ipv6_addr_any(&sk->sk_v6_daddr) &&
+- !ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr))
+- continue;
+-
+- if (!raw_sk_bound_dev_eq(net, sk->sk_bound_dev_if,
+- dif, sdif))
+- continue;
+-
+- if (!ipv6_addr_any(&sk->sk_v6_rcv_saddr)) {
+- if (ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr))
+- goto found;
+- if (is_multicast &&
+- inet6_mc_check(sk, loc_addr, rmt_addr))
+- goto found;
+- continue;
+- }
+- goto found;
+- }
+- sk = NULL;
+-found:
+- return sk;
++ if (inet_sk(sk)->inet_num != num ||
++ !net_eq(sock_net(sk), net) ||
++ (!ipv6_addr_any(&sk->sk_v6_daddr) &&
++ !ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr)) ||
++ !raw_sk_bound_dev_eq(net, sk->sk_bound_dev_if,
++ dif, sdif))
++ return false;
++
++ if (ipv6_addr_any(&sk->sk_v6_rcv_saddr) ||
++ ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr) ||
++ (ipv6_addr_is_multicast(loc_addr) &&
++ inet6_mc_check(sk, loc_addr, rmt_addr)))
++ return true;
++
++ return false;
+ }
+-EXPORT_SYMBOL_GPL(__raw_v6_lookup);
++EXPORT_SYMBOL_GPL(raw_v6_match);
+
+ /*
+ * 0 - deliver
+@@ -156,31 +140,27 @@ EXPORT_SYMBOL(rawv6_mh_filter_unregister);
+ */
+ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
+ {
++ struct net *net = dev_net(skb->dev);
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
+ const struct in6_addr *saddr;
+ const struct in6_addr *daddr;
+ struct sock *sk;
+ bool delivered = false;
+ __u8 hash;
+- struct net *net;
+
+ saddr = &ipv6_hdr(skb)->saddr;
+ daddr = saddr + 1;
+
+ hash = nexthdr & (RAW_HTABLE_SIZE - 1);
+-
+- read_lock(&raw_v6_hashinfo.lock);
+- sk = sk_head(&raw_v6_hashinfo.ht[hash]);
+-
+- if (!sk)
+- goto out;
+-
+- net = dev_net(skb->dev);
+- sk = __raw_v6_lookup(net, sk, nexthdr, daddr, saddr,
+- inet6_iif(skb), inet6_sdif(skb));
+-
+- while (sk) {
++ hlist = &raw_v6_hashinfo.ht[hash];
++ rcu_read_lock();
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
+ int filtered;
+
++ if (!raw_v6_match(net, sk, nexthdr, daddr, saddr,
++ inet6_iif(skb), inet6_sdif(skb)))
++ continue;
+ delivered = true;
+ switch (nexthdr) {
+ case IPPROTO_ICMPV6:
+@@ -219,23 +199,14 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
+ rawv6_rcv(sk, clone);
+ }
+ }
+- sk = __raw_v6_lookup(net, sk_next(sk), nexthdr, daddr, saddr,
+- inet6_iif(skb), inet6_sdif(skb));
+ }
+-out:
+- read_unlock(&raw_v6_hashinfo.lock);
++ rcu_read_unlock();
+ return delivered;
+ }
+
+ bool raw6_local_deliver(struct sk_buff *skb, int nexthdr)
+ {
+- struct sock *raw_sk;
+-
+- raw_sk = sk_head(&raw_v6_hashinfo.ht[nexthdr & (RAW_HTABLE_SIZE - 1)]);
+- if (raw_sk && !ipv6_raw_deliver(skb, nexthdr))
+- raw_sk = NULL;
+-
+- return raw_sk != NULL;
++ return ipv6_raw_deliver(skb, nexthdr);
+ }
+
+ /* This cleans up af_inet6 a bit. -DaveM */
+@@ -361,30 +332,25 @@ static void rawv6_err(struct sock *sk, struct sk_buff *skb,
+ void raw6_icmp_error(struct sk_buff *skb, int nexthdr,
+ u8 type, u8 code, int inner_offset, __be32 info)
+ {
++ struct net *net = dev_net(skb->dev);
++ struct hlist_nulls_head *hlist;
++ struct hlist_nulls_node *hnode;
+ struct sock *sk;
+ int hash;
+- const struct in6_addr *saddr, *daddr;
+- struct net *net;
+
+ hash = nexthdr & (RAW_HTABLE_SIZE - 1);
+-
+- read_lock(&raw_v6_hashinfo.lock);
+- sk = sk_head(&raw_v6_hashinfo.ht[hash]);
+- if (sk) {
++ hlist = &raw_v6_hashinfo.ht[hash];
++ rcu_read_lock();
++ hlist_nulls_for_each_entry(sk, hnode, hlist, sk_nulls_node) {
+ /* Note: ipv6_hdr(skb) != skb->data */
+ const struct ipv6hdr *ip6h = (const struct ipv6hdr *)skb->data;
+- saddr = &ip6h->saddr;
+- daddr = &ip6h->daddr;
+- net = dev_net(skb->dev);
+-
+- while ((sk = __raw_v6_lookup(net, sk, nexthdr, saddr, daddr,
+- inet6_iif(skb), inet6_iif(skb)))) {
+- rawv6_err(sk, skb, NULL, type, code,
+- inner_offset, info);
+- sk = sk_next(sk);
+- }
++
++ if (!raw_v6_match(net, sk, nexthdr, &ip6h->saddr, &ip6h->daddr,
++ inet6_iif(skb), inet6_iif(skb)))
++ continue;
++ rawv6_err(sk, skb, NULL, type, code, inner_offset, info);
+ }
+- read_unlock(&raw_v6_hashinfo.lock);
++ rcu_read_unlock();
+ }
+
+ static inline int rawv6_rcv_skb(struct sock *sk, struct sk_buff *skb)
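The raw.c rewrite above, paired with the raw_hashinfo_init() call added to inet6_init() earlier in this patch, replaces the rwlock-protected sk_for_each chain walk with RCU iteration over an hlist_nulls list, and flattens the old goto-based lookup into the raw_v6_match() predicate. The shape of that predicate can be sketched in plain C roughly as follows (simplified: the netns, bound-device and multicast-membership checks are omitted, and the structs are stand-ins for the kernel's):

    #include <stdbool.h>
    #include <string.h>

    struct addr6 { unsigned char b[16]; };

    static bool addr_any(const struct addr6 *a)
    {
        static const struct addr6 zero;
        return memcmp(a, &zero, sizeof(zero)) == 0;
    }

    static bool addr_eq(const struct addr6 *a, const struct addr6 *b)
    {
        return memcmp(a, b, sizeof(*a)) == 0;
    }

    struct raw_sock_model {
        unsigned short num;            /* bound protocol number */
        struct addr6 daddr, rcv_saddr;
    };

    /* One boolean test per socket replaces the old goto chain. */
    static bool raw_match_model(const struct raw_sock_model *sk,
                                unsigned short num,
                                const struct addr6 *loc,
                                const struct addr6 *rmt)
    {
        if (sk->num != num)
            return false;
        if (!addr_any(&sk->daddr) && !addr_eq(&sk->daddr, rmt))
            return false;
        return addr_any(&sk->rcv_saddr) ||
               addr_eq(&sk->rcv_saddr, loc);
    }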
+diff --git a/net/mac80211/airtime.c b/net/mac80211/airtime.c
+index 4bab1683652d7..2e66598fac791 100644
+--- a/net/mac80211/airtime.c
++++ b/net/mac80211/airtime.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: ISC
+ /*
+ * Copyright (C) 2019 Felix Fietkau <nbd@nbd.name>
+- * Copyright (C) 2021 Intel Corporation
++ * Copyright (C) 2021-2022 Intel Corporation
+ */
+
+ #include <net/mac80211.h>
+@@ -637,7 +637,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
+
+ len += 38; /* Ethernet header length */
+
+- conf = rcu_dereference(vif->chanctx_conf);
++ conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (conf) {
+ band = conf->def.chan->band;
+ shift = ieee80211_chandef_get_shift(&conf->def);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 4ddf297f40f2e..9ca25ae503b04 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -53,7 +53,7 @@ static void ieee80211_set_mu_mimo_follow(struct ieee80211_sub_if_data *sdata,
+ params->vht_mumimo_follow_addr);
+ }
+
+- sdata->vif.mu_mimo_owner = mu_mimo_groups || mu_mimo_follow;
++ sdata->vif.bss_conf.mu_mimo_owner = mu_mimo_groups || mu_mimo_follow;
+ }
+
+ static int ieee80211_set_mon_options(struct ieee80211_sub_if_data *sdata,
+@@ -1326,7 +1326,7 @@ static int ieee80211_change_beacon(struct wiphy *wiphy, struct net_device *dev,
+ /* don't allow changing the beacon while a countdown is in place - offset
+ * of channel switch counter may change
+ */
+- if (sdata->vif.csa_active || sdata->vif.color_change_active)
++ if (sdata->vif.bss_conf.csa_active || sdata->vif.bss_conf.color_change_active)
+ return -EBUSY;
+
+ old = sdata_dereference(sdata->u.ap.beacon, sdata);
+@@ -1358,7 +1358,8 @@ static void ieee80211_free_next_beacon(struct ieee80211_sub_if_data *sdata)
+ sdata->u.ap.next_beacon = NULL;
+ }
+
+-static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev)
++static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
++ unsigned int link_id)
+ {
+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+ struct ieee80211_sub_if_data *vlan;
+@@ -1383,7 +1384,7 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev)
+
+ /* abort any running channel switch */
+ mutex_lock(&local->mtx);
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+ if (sdata->csa_block_tx) {
+ ieee80211_wake_vif_queues(local, sdata,
+ IEEE80211_QUEUE_STOP_REASON_CSA);
+@@ -3065,6 +3066,7 @@ static int ieee80211_set_cqm_rssi_range_config(struct wiphy *wiphy,
+
+ static int ieee80211_set_bitrate_mask(struct wiphy *wiphy,
+ struct net_device *dev,
++ unsigned int link_id,
+ const u8 *addr,
+ const struct cfg80211_bitrate_mask *mask)
+ {
+@@ -3081,7 +3083,7 @@ static int ieee80211_set_bitrate_mask(struct wiphy *wiphy,
+ * to send something, and if we're an AP we have to be able to do
+ * so at a basic rate so that all clients can receive it.
+ */
+- if (rcu_access_pointer(sdata->vif.chanctx_conf) &&
++ if (rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) &&
+ sdata->vif.bss_conf.chandef.chan) {
+ u32 basic_rates = sdata->vif.bss_conf.basic_rates;
+ enum nl80211_band band = sdata->vif.bss_conf.chandef.chan->band;
+@@ -3388,7 +3390,7 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+ &sdata->csa_chandef))
+ return -EINVAL;
+
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+
+ err = ieee80211_set_after_csa_beacon(sdata, &changed);
+ if (err)
+@@ -3406,7 +3408,7 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+ if (err)
+ return err;
+
+- cfg80211_ch_switch_notify(sdata->dev, &sdata->csa_chandef);
++ cfg80211_ch_switch_notify(sdata->dev, &sdata->csa_chandef, 0);
+
+ return 0;
+ }
+@@ -3432,7 +3434,7 @@ void ieee80211_csa_finalize_work(struct work_struct *work)
+ mutex_lock(&local->chanctx_mtx);
+
+ /* AP might have been stopped while waiting for the lock. */
+- if (!sdata->vif.csa_active)
++ if (!sdata->vif.bss_conf.csa_active)
+ goto unlock;
+
+ if (!ieee80211_sdata_running(sdata))
+@@ -3584,7 +3586,7 @@ static int ieee80211_set_csa_beacon(struct ieee80211_sub_if_data *sdata,
+
+ static void ieee80211_color_change_abort(struct ieee80211_sub_if_data *sdata)
+ {
+- sdata->vif.color_change_active = false;
++ sdata->vif.bss_conf.color_change_active = false;
+
+ ieee80211_free_next_beacon(sdata);
+
+@@ -3617,11 +3619,11 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+ return -EINVAL;
+
+ /* don't allow another channel switch if one is already active. */
+- if (sdata->vif.csa_active)
++ if (sdata->vif.bss_conf.csa_active)
+ return -EBUSY;
+
+ mutex_lock(&local->chanctx_mtx);
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (!conf) {
+ err = -EBUSY;
+@@ -3660,7 +3662,7 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+ }
+
+ /* if there is a color change in progress, abort it */
+- if (sdata->vif.color_change_active)
++ if (sdata->vif.bss_conf.color_change_active)
+ ieee80211_color_change_abort(sdata);
+
+ err = ieee80211_set_csa_beacon(sdata, params, &changed);
+@@ -3671,7 +3673,7 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+
+ sdata->csa_chandef = params->chandef;
+ sdata->csa_block_tx = params->block_tx;
+- sdata->vif.csa_active = true;
++ sdata->vif.bss_conf.csa_active = true;
+
+ if (sdata->csa_block_tx)
+ ieee80211_stop_vif_queues(local, sdata,
+@@ -3840,7 +3842,7 @@ static int ieee80211_probe_client(struct wiphy *wiphy, struct net_device *dev,
+ mutex_lock(&local->mtx);
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ ret = -EINVAL;
+ goto unlock;
+@@ -3914,6 +3916,7 @@ unlock:
+
+ static int ieee80211_cfg_get_channel(struct wiphy *wiphy,
+ struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
+@@ -3922,7 +3925,7 @@ static int ieee80211_cfg_get_channel(struct wiphy *wiphy,
+ int ret = -ENODATA;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (chanctx_conf) {
+ *chandef = sdata->vif.bss_conf.chandef;
+ ret = 0;
+@@ -3974,6 +3977,7 @@ static int ieee80211_set_qos_map(struct wiphy *wiphy,
+
+ static int ieee80211_set_ap_chanwidth(struct wiphy *wiphy,
+ struct net_device *dev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+@@ -4417,7 +4421,7 @@ static int ieee80211_color_change_finalize(struct ieee80211_sub_if_data *sdata)
+ sdata_assert_lock(sdata);
+ lockdep_assert_held(&local->mtx);
+
+- sdata->vif.color_change_active = false;
++ sdata->vif.bss_conf.color_change_active = false;
+
+ err = ieee80211_set_after_color_change_beacon(sdata, &changed);
+ if (err) {
+@@ -4426,7 +4430,7 @@ static int ieee80211_color_change_finalize(struct ieee80211_sub_if_data *sdata)
+ }
+
+ ieee80211_color_change_bss_config_notify(sdata,
+- sdata->vif.color_change_color,
++ sdata->vif.bss_conf.color_change_color,
+ 1, changed);
+ cfg80211_color_change_notify(sdata->dev);
+
+@@ -4444,7 +4448,7 @@ void ieee80211_color_change_finalize_work(struct work_struct *work)
+ mutex_lock(&local->mtx);
+
+ /* AP might have been stopped while waiting for the lock. */
+- if (!sdata->vif.color_change_active)
++ if (!sdata->vif.bss_conf.color_change_active)
+ goto unlock;
+
+ if (!ieee80211_sdata_running(sdata))
+@@ -4472,7 +4476,7 @@ ieeee80211_obss_color_collision_notify(struct ieee80211_vif *vif,
+ {
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
+
+- if (sdata->vif.color_change_active || sdata->vif.csa_active)
++ if (sdata->vif.bss_conf.color_change_active || sdata->vif.bss_conf.csa_active)
+ return;
+
+ cfg80211_obss_color_collision_notify(sdata->dev, color_bitmap, gfp);
+@@ -4498,7 +4502,7 @@ ieee80211_color_change(struct wiphy *wiphy, struct net_device *dev,
+ /* don't allow another color change if one is already active or if csa
+ * is active
+ */
+- if (sdata->vif.color_change_active || sdata->vif.csa_active) {
++ if (sdata->vif.bss_conf.color_change_active || sdata->vif.bss_conf.csa_active) {
+ err = -EBUSY;
+ goto out;
+ }
+@@ -4507,8 +4511,8 @@ ieee80211_color_change(struct wiphy *wiphy, struct net_device *dev,
+ if (err)
+ goto out;
+
+- sdata->vif.color_change_active = true;
+- sdata->vif.color_change_color = params->color;
++ sdata->vif.bss_conf.color_change_active = true;
++ sdata->vif.bss_conf.color_change_color = params->color;
+
+ cfg80211_color_change_started_notify(sdata->dev, params->count);
+
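The bulk of the cfg.c churn above is a mechanical relocation: per-link interface state (csa_active, color_change_active, mu_mimo_owner, the chanctx_conf pointer) moves from struct ieee80211_vif into its embedded bss_conf, and several cfg80211 ops grow a link_id parameter, in preparation for multi-link operation where each link carries its own BSS configuration. Schematically (field names abridged, not the full kernel structs):

    /* Before: flat per-vif fields.  After: grouped per-link, so MLO
     * can later hold one bss_conf per link, indexed by link_id. */
    struct bss_conf_model {
        int  csa_active;
        int  color_change_active;
        int  mu_mimo_owner;
        void *chanctx_conf;    /* an RCU-managed pointer in the kernel */
    };

    struct vif_model {
        struct bss_conf_model bss_conf;   /* default-link config */
    };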
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index d8246e00a10b1..eea4009021332 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+ * mac80211 - channel management
+- * Copyright 2020 - 2021 Intel Corporation
++ * Copyright 2020 - 2022 Intel Corporation
+ */
+
+ #include <linux/nl80211.h>
+@@ -72,7 +72,7 @@ ieee80211_vif_get_chanctx(struct ieee80211_sub_if_data *sdata)
+ struct ieee80211_local *local __maybe_unused = sdata->local;
+ struct ieee80211_chanctx_conf *conf;
+
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (!conf)
+ return NULL;
+@@ -260,7 +260,7 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+ if (!ieee80211_sdata_running(sdata))
+ continue;
+
+- if (rcu_access_pointer(sdata->vif.chanctx_conf) != conf)
++ if (rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) != conf)
+ continue;
+
+ switch (vif->type) {
+@@ -298,7 +298,7 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
+
+ /* use the configured bandwidth in case of monitor interface */
+ sdata = rcu_dereference(local->monitor_sdata);
+- if (sdata && rcu_access_pointer(sdata->vif.chanctx_conf) == conf)
++ if (sdata && rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) == conf)
+ max_bw = max(max_bw, conf->def.width);
+
+ rcu_read_unlock();
+@@ -368,7 +368,7 @@ static void ieee80211_chan_bw_change(struct ieee80211_local *local,
+ if (!ieee80211_sdata_running(sta->sdata))
+ continue;
+
+- if (rcu_access_pointer(sta->sdata->vif.chanctx_conf) !=
++ if (rcu_access_pointer(sta->sdata->vif.bss_conf.chanctx_conf) !=
+ &ctx->conf)
+ continue;
+
+@@ -533,7 +533,7 @@ ieee80211_chanctx_radar_required(struct ieee80211_local *local,
+ list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+ if (!ieee80211_sdata_running(sdata))
+ continue;
+- if (rcu_access_pointer(sdata->vif.chanctx_conf) != conf)
++ if (rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) != conf)
+ continue;
+ if (!sdata->radar_required)
+ continue;
+@@ -689,7 +689,7 @@ void ieee80211_recalc_chanctx_chantype(struct ieee80211_local *local,
+
+ if (!ieee80211_sdata_running(sdata))
+ continue;
+- if (rcu_access_pointer(sdata->vif.chanctx_conf) != conf)
++ if (rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) != conf)
+ continue;
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ continue;
+@@ -759,7 +759,7 @@ static int ieee80211_assign_vif_chanctx(struct ieee80211_sub_if_data *sdata,
+ if (WARN_ON(sdata->vif.type == NL80211_IFTYPE_NAN))
+ return -ENOTSUPP;
+
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+
+ if (conf) {
+@@ -781,7 +781,7 @@ static int ieee80211_assign_vif_chanctx(struct ieee80211_sub_if_data *sdata,
+ }
+
+ out:
+- rcu_assign_pointer(sdata->vif.chanctx_conf, conf);
++ rcu_assign_pointer(sdata->vif.bss_conf.chanctx_conf, conf);
+
+ sdata->vif.bss_conf.idle = !conf;
+
+@@ -825,7 +825,7 @@ void ieee80211_recalc_smps_chanctx(struct ieee80211_local *local,
+ if (!ieee80211_sdata_running(sdata))
+ continue;
+
+- if (rcu_access_pointer(sdata->vif.chanctx_conf) !=
++ if (rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) !=
+ &chanctx->conf)
+ continue;
+
+@@ -874,7 +874,7 @@ void ieee80211_recalc_smps_chanctx(struct ieee80211_local *local,
+ /* Disable SMPS for the monitor interface */
+ sdata = rcu_dereference(local->monitor_sdata);
+ if (sdata &&
+- rcu_access_pointer(sdata->vif.chanctx_conf) == &chanctx->conf)
++ rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf) == &chanctx->conf)
+ rx_chains_dynamic = rx_chains_static = local->rx_chains;
+
+ rcu_read_unlock();
+@@ -917,7 +917,7 @@ __ieee80211_vif_copy_chanctx_to_vlans(struct ieee80211_sub_if_data *sdata,
+ * channel context pointer for a while, possibly pointing
+ * to a channel context that has already been freed.
+ */
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ WARN_ON(!conf);
+
+@@ -925,7 +925,7 @@ __ieee80211_vif_copy_chanctx_to_vlans(struct ieee80211_sub_if_data *sdata,
+ conf = NULL;
+
+ list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
+- rcu_assign_pointer(vlan->vif.chanctx_conf, conf);
++ rcu_assign_pointer(vlan->vif.bss_conf.chanctx_conf, conf);
+ }
+
+ void ieee80211_vif_copy_chanctx_to_vlans(struct ieee80211_sub_if_data *sdata,
+@@ -1173,7 +1173,7 @@ ieee80211_vif_use_reserved_reassign(struct ieee80211_sub_if_data *sdata)
+ }
+
+ list_move(&sdata->assigned_chanctx_list, &new_ctx->assigned_vifs);
+- rcu_assign_pointer(sdata->vif.chanctx_conf, &new_ctx->conf);
++ rcu_assign_pointer(sdata->vif.bss_conf.chanctx_conf, &new_ctx->conf);
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+ __ieee80211_vif_copy_chanctx_to_vlans(sdata, false);
+@@ -1515,7 +1515,8 @@ static int ieee80211_vif_use_reserved_switch(struct ieee80211_local *local)
+ if (!ieee80211_vif_has_in_place_reservation(sdata))
+ continue;
+
+- rcu_assign_pointer(sdata->vif.chanctx_conf, &ctx->conf);
++ rcu_assign_pointer(sdata->vif.bss_conf.chanctx_conf,
++ &ctx->conf);
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+ __ieee80211_vif_copy_chanctx_to_vlans(sdata,
+@@ -1634,7 +1635,7 @@ static void __ieee80211_vif_release_channel(struct ieee80211_sub_if_data *sdata)
+
+ lockdep_assert_held(&local->chanctx_mtx);
+
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (!conf)
+ return;
+@@ -1809,7 +1810,7 @@ int ieee80211_vif_change_bandwidth(struct ieee80211_sub_if_data *sdata,
+ goto out;
+ }
+
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (!conf) {
+ ret = -EINVAL;
+@@ -1879,9 +1880,9 @@ void ieee80211_vif_vlan_copy_chanctx(struct ieee80211_sub_if_data *sdata)
+
+ mutex_lock(&local->chanctx_mtx);
+
+- conf = rcu_dereference_protected(ap->vif.chanctx_conf,
++ conf = rcu_dereference_protected(ap->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+- rcu_assign_pointer(sdata->vif.chanctx_conf, conf);
++ rcu_assign_pointer(sdata->vif.bss_conf.chanctx_conf, conf);
+ mutex_unlock(&local->chanctx_mtx);
+ }
+
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index 4e2fc1a08681c..fd2882348211c 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -165,7 +165,7 @@ static inline void drv_bss_info_changed(struct ieee80211_local *local,
+ if (WARN_ON_ONCE(sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE ||
+ sdata->vif.type == NL80211_IFTYPE_NAN ||
+ (sdata->vif.type == NL80211_IFTYPE_MONITOR &&
+- !sdata->vif.mu_mimo_owner &&
++ !sdata->vif.bss_conf.mu_mimo_owner &&
+ !(changed & BSS_CHANGED_TXPOWER))))
+ return;
+
+diff --git a/net/mac80211/ethtool.c b/net/mac80211/ethtool.c
+index 31cd3c1ac07f6..6e1fc8788101d 100644
+--- a/net/mac80211/ethtool.c
++++ b/net/mac80211/ethtool.c
+@@ -5,7 +5,7 @@
+ * Copied from cfg.c - originally
+ * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2014 Intel Corporation (Author: Johannes Berg)
+- * Copyright (C) 2018 Intel Corporation
++ * Copyright (C) 2018, 2022 Intel Corporation
+ */
+ #include <linux/types.h>
+ #include <net/cfg80211.h>
+@@ -150,7 +150,7 @@ do_survey:
+ survey.filled = 0;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (chanctx_conf)
+ channel = chanctx_conf->def.chan;
+ else
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 14c04fd48b7a1..8ff547ff351ed 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -9,7 +9,7 @@
+ * Copyright 2009, Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 Intel Deutschland GmbH
+- * Copyright(c) 2018-2021 Intel Corporation
++ * Copyright(c) 2018-2022 Intel Corporation
+ */
+
+ #include <linux/delay.h>
+@@ -622,7 +622,7 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, const u8 *bssid,
+ }
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON_ONCE(!chanctx_conf))
+ return NULL;
+ band = chanctx_conf->def.chan->band;
+@@ -923,7 +923,7 @@ ieee80211_rx_mgmt_spectrum_mgmt(struct ieee80211_sub_if_data *sdata,
+ if (len < required_len)
+ return;
+
+- if (!sdata->vif.csa_active)
++ if (!sdata->vif.bss_conf.csa_active)
+ ieee80211_ibss_process_chanswitch(sdata, elems, false);
+ }
+
+@@ -1143,7 +1143,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
+ goto put_bss;
+
+ /* process channel switch */
+- if (sdata->vif.csa_active ||
++ if (sdata->vif.bss_conf.csa_active ||
+ ieee80211_ibss_process_chanswitch(sdata, elems, true))
+ goto put_bss;
+
+@@ -1220,7 +1220,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
+ return;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON_ONCE(!chanctx_conf)) {
+ rcu_read_unlock();
+ return;
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 86ef0a46a68ce..48fbccbf2a545 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1077,7 +1077,7 @@ ieee80211_vif_get_shift(struct ieee80211_vif *vif)
+ int shift = 0;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(vif->chanctx_conf);
++ chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (chanctx_conf)
+ shift = ieee80211_chandef_get_shift(&chanctx_conf->def);
+ rcu_read_unlock();
+@@ -1528,7 +1528,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata)
+ enum nl80211_band band;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+
+ if (!chanctx_conf) {
+ rcu_read_unlock();
+@@ -2225,7 +2225,7 @@ static inline void ieee80211_tx_skb_tid(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_chanctx_conf *chanctx_conf;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ kfree_skb(skb);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 1a9ada4118793..5d29a4ca048a1 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -51,7 +51,7 @@ bool __ieee80211_recalc_txpower(struct ieee80211_sub_if_data *sdata)
+ int power;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ rcu_read_unlock();
+ return false;
+@@ -275,7 +275,7 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata,
+ * will not add another interface while any channel
+ * switch is active.
+ */
+- if (nsdata->vif.csa_active)
++ if (nsdata->vif.bss_conf.csa_active)
+ return -EBUSY;
+
+ /*
+@@ -451,7 +451,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ cancel_work_sync(&sdata->recalc_smps);
+ sdata_lock(sdata);
+ mutex_lock(&local->mtx);
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+ if (sdata->vif.type == NL80211_IFTYPE_STATION)
+ sdata->u.mgd.csa_waiting_bcn = false;
+ if (sdata->csa_block_tx) {
+@@ -503,7 +503,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ mutex_lock(&local->mtx);
+ list_del(&sdata->u.vlan.list);
+ mutex_unlock(&local->mtx);
+- RCU_INIT_POINTER(sdata->vif.chanctx_conf, NULL);
++ RCU_INIT_POINTER(sdata->vif.bss_conf.chanctx_conf, NULL);
+ /* see comment in the default case below */
+ ieee80211_free_keys(sdata, true);
+ /* no need to tell driver */
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index 0fcf8aebedc4e..047a06b857c9e 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -433,13 +433,25 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
+ int idx;
+ int ret = 0;
+ bool defunikey, defmultikey, defmgmtkey, defbeaconkey;
++ bool is_wep;
+
+ /* caller must provide at least one old/new */
+ if (WARN_ON(!new && !old))
+ return 0;
+
+- if (new)
++ if (new) {
++ idx = new->conf.keyidx;
+ list_add_tail_rcu(&new->list, &sdata->key_list);
++ is_wep = new->conf.cipher == WLAN_CIPHER_SUITE_WEP40 ||
++ new->conf.cipher == WLAN_CIPHER_SUITE_WEP104;
++ } else {
++ idx = old->conf.keyidx;
++ is_wep = old->conf.cipher == WLAN_CIPHER_SUITE_WEP40 ||
++ old->conf.cipher == WLAN_CIPHER_SUITE_WEP104;
++ }
++
++ if ((is_wep || pairwise) && idx >= NUM_DEFAULT_KEYS)
++ return -EINVAL;
+
+ WARN_ON(new && old && new->conf.keyidx != old->conf.keyidx);
+
+@@ -451,8 +463,6 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
+ }
+
+ if (old) {
+- idx = old->conf.keyidx;
+-
+ if (old->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) {
+ ieee80211_key_disable_hw_accel(old);
+
+@@ -460,8 +470,6 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
+ ret = ieee80211_key_enable_hw_accel(new);
+ }
+ } else {
+- /* new must be provided in case old is not */
+- idx = new->conf.keyidx;
+ if (!new->local->wowlan)
+ ret = ieee80211_key_enable_hw_accel(new);
+ }
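The key.c hunk above pulls the key-index computation to the top of ieee80211_key_replace() and adds an early bounds check: WEP and pairwise keys must land in one of the default-key slots. A compact model of the added guard (NUM_DEFAULT_KEYS is 4 in mac80211; the error constant is simplified here):

    #include <stdbool.h>

    #define NUM_DEFAULT_KEYS 4

    /* Returns 0 if the (idx, cipher, pairwise) combination is valid,
     * mirroring the new -EINVAL path in ieee80211_key_replace(). */
    static int key_idx_check(int idx, bool is_wep, bool pairwise)
    {
        if ((is_wep || pairwise) && idx >= NUM_DEFAULT_KEYS)
            return -1;
        return 0;
    }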
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 5a385d4146b9b..6d638eb05310e 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -147,7 +147,7 @@ static u32 ieee80211_hw_conf_chan(struct ieee80211_local *local)
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+- if (!rcu_access_pointer(sdata->vif.chanctx_conf))
++ if (!rcu_access_pointer(sdata->vif.bss_conf.chanctx_conf))
+ continue;
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ continue;
+@@ -284,7 +284,7 @@ static void ieee80211_restart_work(struct work_struct *work)
+ * Then we can have a race...
+ */
+ cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work);
+- if (sdata->vif.csa_active) {
++ if (sdata->vif.bss_conf.csa_active) {
+ sdata_lock(sdata);
+ ieee80211_sta_connection_lost(sdata,
+ WLAN_REASON_UNSPECIFIED,
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 5275f4f32a785..f60e257cba958 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+ * Copyright (c) 2008, 2009 open80211s Ltd.
+- * Copyright (C) 2018 - 2021 Intel Corporation
++ * Copyright (C) 2018 - 2022 Intel Corporation
+ * Authors: Luis Carlos Cobo <luisca@cozybit.com>
+ * Javier Cardona <javier@cozybit.com>
+ */
+@@ -399,7 +399,7 @@ static int mesh_add_ds_params_ie(struct ieee80211_sub_if_data *sdata,
+ return -ENOMEM;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return -EINVAL;
+@@ -455,7 +455,7 @@ int mesh_add_ht_oper_ie(struct ieee80211_sub_if_data *sdata,
+ u8 *pos;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return -EINVAL;
+@@ -527,7 +527,7 @@ int mesh_add_vht_oper_ie(struct ieee80211_sub_if_data *sdata,
+ u8 *pos;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return -EINVAL;
+@@ -820,7 +820,7 @@ ieee80211_mesh_build_beacon(struct ieee80211_if_mesh *ifmsh)
+
+ sdata = container_of(ifmsh, struct ieee80211_sub_if_data, u.mesh);
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ band = chanctx_conf->def.chan->band;
+ rcu_read_unlock();
+
+@@ -1357,7 +1357,7 @@ static void ieee80211_mesh_rx_bcn_presp(struct ieee80211_sub_if_data *sdata,
+ rx_status);
+
+ if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT &&
+- !sdata->vif.csa_active)
++ !sdata->vif.bss_conf.csa_active)
+ ieee80211_mesh_process_chnswitch(sdata, elems, true);
+ }
+
+@@ -1488,7 +1488,7 @@ static void mesh_rx_csa_frame(struct ieee80211_sub_if_data *sdata,
+
+ ifmsh->pre_value = pre_value;
+
+- if (!sdata->vif.csa_active &&
++ if (!sdata->vif.bss_conf.csa_active &&
+ !ieee80211_mesh_process_chnswitch(sdata, elems, false)) {
+ mcsa_dbg(sdata, "Failed to process CSA action frame");
+ goto free;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 58d48dcae0303..181aee459d7b1 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -624,7 +624,7 @@ static void ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_sub_if_data *other;
+
+ list_for_each_entry_rcu(other, &local->interfaces, list) {
+- if (other->vif.mu_mimo_owner) {
++ if (other->vif.bss_conf.mu_mimo_owner) {
+ disable_mu_mimo = true;
+ break;
+ }
+@@ -632,7 +632,7 @@ static void ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ if (disable_mu_mimo)
+ cap &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
+ else
+- sdata->vif.mu_mimo_owner = true;
++ sdata->vif.bss_conf.mu_mimo_owner = true;
+ }
+
+ mask = IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+@@ -664,7 +664,7 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ bool reg_cap = false;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!WARN_ON_ONCE(!chanctx_conf))
+ reg_cap = cfg80211_chandef_usable(sdata->wdev.wiphy,
+ &chanctx_conf->def,
+@@ -705,7 +705,7 @@ static void ieee80211_add_eht_ie(struct ieee80211_sub_if_data *sdata,
+ bool reg_cap = false;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!WARN_ON_ONCE(!chanctx_conf))
+ reg_cap = cfg80211_chandef_usable(sdata->wdev.wiphy,
+ &chanctx_conf->def,
+@@ -766,7 +766,7 @@ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ sdata_assert_lock(sdata);
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return -EINVAL;
+@@ -1229,7 +1229,7 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ if (!ifmgd->associated)
+ goto out;
+
+- if (!sdata->vif.csa_active)
++ if (!sdata->vif.bss_conf.csa_active)
+ goto out;
+
+ /*
+@@ -1289,7 +1289,7 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
+
+ sdata_assert_lock(sdata);
+
+- WARN_ON(!sdata->vif.csa_active);
++ WARN_ON(!sdata->vif.bss_conf.csa_active);
+
+ if (sdata->csa_block_tx) {
+ ieee80211_wake_vif_queues(local, sdata,
+@@ -1297,7 +1297,7 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
+ sdata->csa_block_tx = false;
+ }
+
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+ ifmgd->csa_waiting_bcn = false;
+ /*
+ * If the CSA IE is still present on the beacon after the switch,
+@@ -1314,7 +1314,7 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
+ return;
+ }
+
+- cfg80211_ch_switch_notify(sdata->dev, &sdata->reserved_chandef);
++ cfg80211_ch_switch_notify(sdata->dev, &sdata->reserved_chandef, 0);
+ }
+
+ void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success)
+@@ -1361,7 +1361,7 @@ ieee80211_sta_abort_chanswitch(struct ieee80211_sub_if_data *sdata)
+ IEEE80211_QUEUE_STOP_REASON_CSA);
+
+ sdata->csa_block_tx = false;
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+
+ mutex_unlock(&local->mtx);
+
+@@ -1412,13 +1412,13 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ if (res < 0)
+ goto lock_and_drop_connection;
+
+- if (beacon && sdata->vif.csa_active && !ifmgd->csa_waiting_bcn) {
++ if (beacon && sdata->vif.bss_conf.csa_active && !ifmgd->csa_waiting_bcn) {
+ if (res)
+ ieee80211_sta_abort_chanswitch(sdata);
+ else
+ drv_channel_switch_rx_beacon(sdata, &ch_switch);
+ return;
+- } else if (sdata->vif.csa_active || res) {
++ } else if (sdata->vif.bss_conf.csa_active || res) {
+ /* disregard subsequent announcements if already processing */
+ return;
+ }
+@@ -1471,7 +1471,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+
+ mutex_lock(&local->mtx);
+ mutex_lock(&local->chanctx_mtx);
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (!conf) {
+ sdata_info(sdata,
+@@ -1504,7 +1504,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ }
+ mutex_unlock(&local->chanctx_mtx);
+
+- sdata->vif.csa_active = true;
++ sdata->vif.bss_conf.csa_active = true;
+ sdata->csa_chandef = csa_ie.chandef;
+ sdata->csa_block_tx = csa_ie.mode;
+ ifmgd->csa_ignored_same_chan = false;
+@@ -1543,7 +1543,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ * send a deauthentication frame. Those two fields will be
+ * reset when the disconnection worker runs.
+ */
+- sdata->vif.csa_active = true;
++ sdata->vif.bss_conf.csa_active = true;
+ sdata->csa_block_tx = csa_ie.mode;
+
+ ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work);
+@@ -2447,7 +2447,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ memset(sdata->vif.bss_conf.mu_group.position, 0,
+ sizeof(sdata->vif.bss_conf.mu_group.position));
+ changed |= BSS_CHANGED_MU_GROUPS;
+- sdata->vif.mu_mimo_owner = false;
++ sdata->vif.bss_conf.mu_mimo_owner = false;
+
+ sdata->ap_power_level = IEEE80211_UNSET_POWER_LEVEL;
+
+@@ -2482,7 +2482,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ mutex_lock(&local->mtx);
+ ieee80211_vif_release_channel(sdata);
+
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+ ifmgd->csa_waiting_bcn = false;
+ ifmgd->csa_ignored_same_chan = false;
+ if (sdata->csa_block_tx) {
+@@ -2810,7 +2810,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
+ tx, frame_buf);
+ mutex_lock(&local->mtx);
+- sdata->vif.csa_active = false;
++ sdata->vif.bss_conf.csa_active = false;
+ ifmgd->csa_waiting_bcn = false;
+ if (sdata->csa_block_tx) {
+ ieee80211_wake_vif_queues(local, sdata,
+@@ -2950,7 +2950,7 @@ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
+ eth_zero_addr(sdata->u.mgd.bssid);
+ ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BSSID);
+ sdata->u.mgd.flags = 0;
+- sdata->vif.mu_mimo_owner = false;
++ sdata->vif.bss_conf.mu_mimo_owner = false;
+
+ mutex_lock(&sdata->local->mtx);
+ ieee80211_vif_release_channel(sdata);
+@@ -4136,7 +4136,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+ return;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ rcu_read_unlock();
+ return;
+@@ -4805,7 +4805,7 @@ static void ieee80211_sta_bcn_mon_timer(struct timer_list *t)
+ from_timer(sdata, t, u.mgd.bcn_mon_timer);
+ struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+
+- if (sdata->vif.csa_active && !ifmgd->csa_waiting_bcn)
++ if (sdata->vif.bss_conf.csa_active && !ifmgd->csa_waiting_bcn)
+ return;
+
+ if (sdata->vif.driver_flags & IEEE80211_VIF_BEACON_FILTER)
+@@ -4825,7 +4825,7 @@ static void ieee80211_sta_conn_mon_timer(struct timer_list *t)
+ struct sta_info *sta;
+ unsigned long timeout;
+
+- if (sdata->vif.csa_active && !ifmgd->csa_waiting_bcn)
++ if (sdata->vif.bss_conf.csa_active && !ifmgd->csa_waiting_bcn)
+ return;
+
+ sta = sta_info_get(sdata, ifmgd->bssid);
+diff --git a/net/mac80211/ocb.c b/net/mac80211/ocb.c
+index f97cb4c453d3f..d0f0d96b09489 100644
+--- a/net/mac80211/ocb.c
++++ b/net/mac80211/ocb.c
+@@ -4,6 +4,7 @@
+ *
+ * Copyright: (c) 2014 Czech Technical University in Prague
+ * (c) 2014 Volkswagen Group Research
++ * Copyright (C) 2022 Intel Corporation
+ * Author: Rostislav Lisovy <rostislav.lisovy@fel.cvut.cz>
+ * Funded by: Volkswagen Group Research
+ */
+@@ -59,7 +60,7 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata,
+ ocb_dbg(sdata, "Adding new OCB station %pM\n", addr);
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON_ONCE(!chanctx_conf)) {
+ rcu_read_unlock();
+ return;
+diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c
+index c5d2ab9df1e70..a1fbd562cac14 100644
+--- a/net/mac80211/offchannel.c
++++ b/net/mac80211/offchannel.c
+@@ -8,7 +8,7 @@
+ * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+ * Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
+- * Copyright (C) 2019 Intel Corporation
++ * Copyright (C) 2019, 2022 Intel Corporation
+ */
+ #include <linux/export.h>
+ #include <net/mac80211.h>
+@@ -845,7 +845,7 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ struct ieee80211_chanctx_conf *chanctx_conf;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+
+ if (chanctx_conf) {
+ need_offchan = params->chan &&
+@@ -876,7 +876,7 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ data = skb_put_data(skb, params->buf, params->len);
+
+ /* Update CSA counters */
+- if (sdata->vif.csa_active &&
++ if (sdata->vif.bss_conf.csa_active &&
+ (sdata->vif.type == NL80211_IFTYPE_AP ||
+ sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+ sdata->vif.type == NL80211_IFTYPE_ADHOC) &&
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index ae9700e0a1a5b..f223811279488 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -4,6 +4,7 @@
+ * Copyright 2005-2006, Devicescape Software, Inc.
+ * Copyright (c) 2006 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2017 Intel Deutschland GmbH
++ * Copyright (C) 2022 Intel Corporation
+ */
+
+ #include <linux/kernel.h>
+@@ -43,7 +44,7 @@ void rate_control_rate_init(struct sta_info *sta)
+
+ rcu_read_lock();
+
+- chanctx_conf = rcu_dereference(sta->sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sta->sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return;
+@@ -100,7 +101,7 @@ void rate_control_rate_update(struct ieee80211_local *local,
+ if (ref && ref->ops->rate_update) {
+ rcu_read_lock();
+
+- chanctx_conf = rcu_dereference(sta->sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sta->sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ return;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1675f8cb87f15..b938806a5184a 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -3192,7 +3192,7 @@ ieee80211_rx_check_bss_color_collision(struct ieee80211_rx_data *rx)
+ if (ieee80211_hw_check(&rx->local->hw, DETECTS_COLOR_COLLISION))
+ return;
+
+- if (rx->sdata->vif.csa_active)
++ if (rx->sdata->vif.bss_conf.csa_active)
+ return;
+
+ baselen = mgmt->u.beacon.variable - rx->skb->data;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index e04a0905e9418..c0b2ce70e101c 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -373,6 +373,8 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
+
+ memcpy(sta->addr, addr, ETH_ALEN);
+ memcpy(sta->sta.addr, addr, ETH_ALEN);
++ memcpy(sta->deflink.addr, addr, ETH_ALEN);
++ memcpy(sta->sta.deflink.addr, addr, ETH_ALEN);
+ sta->sta.max_rx_aggregation_subframes =
+ local->hw.max_rx_aggregation_subframes;
+
+@@ -1467,7 +1469,7 @@ static void ieee80211_send_null_response(struct sta_info *sta, int tid,
+ skb->dev = sdata->dev;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (WARN_ON(!chanctx_conf)) {
+ rcu_read_unlock();
+ kfree_skb(skb);
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 4e2d22e47429a..fa04021d4c0fb 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -6,7 +6,7 @@
+ * Copyright 2014, Intel Corporation
+ * Copyright 2014 Intel Mobile Communications GmbH
+ * Copyright 2015 - 2016 Intel Deutschland GmbH
+- * Copyright (C) 2019, 2021 Intel Corporation
++ * Copyright (C) 2019, 2021-2022 Intel Corporation
+ */
+
+ #include <linux/ieee80211.h>
+@@ -1254,7 +1254,7 @@ static void iee80211_tdls_recalc_chanctx(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_supported_band *sband;
+
+ mutex_lock(&local->chanctx_mtx);
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (conf) {
+ width = conf->def.width;
+@@ -1372,7 +1372,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+
+ switch (oper) {
+ case NL80211_TDLS_ENABLE_LINK:
+- if (sdata->vif.csa_active) {
++ if (sdata->vif.bss_conf.csa_active) {
+ tdls_dbg(sdata, "TDLS: disallow link during CSA\n");
+ ret = -EBUSY;
+ break;
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index c425f4fb7c2e8..3cd24d8170d32 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -57,7 +57,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
+ return 0;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(tx->sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(tx->sdata->vif.bss_conf.chanctx_conf);
+ if (chanctx_conf) {
+ shift = ieee80211_chandef_get_shift(&chanctx_conf->def);
+ rate_flags = ieee80211_chandef_rate_flags(&chanctx_conf->def);
+@@ -2347,12 +2347,12 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb,
+ }
+ }
+
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ tmp_sdata = rcu_dereference(local->monitor_sdata);
+ if (tmp_sdata)
+ chanctx_conf =
+- rcu_dereference(tmp_sdata->vif.chanctx_conf);
++ rcu_dereference(tmp_sdata->vif.bss_conf.chanctx_conf);
+ }
+
+ if (chanctx_conf)
+@@ -2601,7 +2601,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ }
+ ap_sdata = container_of(sdata->bss, struct ieee80211_sub_if_data,
+ u.ap);
+- chanctx_conf = rcu_dereference(ap_sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(ap_sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2612,7 +2612,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ fallthrough;
+ case NL80211_IFTYPE_AP:
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2691,7 +2691,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ skb->data + ETH_ALEN);
+
+ }
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2734,7 +2734,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ memcpy(hdr.addr3, skb->data, ETH_ALEN);
+ hdrlen = 24;
+ }
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2747,7 +2747,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ memcpy(hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN);
+ eth_broadcast_addr(hdr.addr3);
+ hdrlen = 24;
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2760,7 +2760,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ memcpy(hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN);
+ memcpy(hdr.addr3, sdata->u.ibss.bssid, ETH_ALEN);
+ hdrlen = 24;
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ ret = -ENOTCONN;
+ goto free;
+@@ -2974,7 +2974,7 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
+ goto out;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (!chanctx_conf) {
+ rcu_read_unlock();
+ goto out;
+@@ -4605,7 +4605,7 @@ static bool ieee80211_tx_pending_skb(struct ieee80211_local *local,
+ sdata = vif_to_sdata(info->control.vif);
+
+ if (info->control.flags & IEEE80211_TX_INTCFL_NEED_TXPROCESSING) {
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (unlikely(!chanctx_conf)) {
+ dev_kfree_skb(skb);
+ return true;
+@@ -4809,7 +4809,7 @@ static void ieee80211_set_beacon_cntdwn(struct ieee80211_sub_if_data *sdata,
+
+ bcn_offsets = beacon->cntdwn_counter_offsets;
+ count = beacon->cntdwn_current_counter;
+- if (sdata->vif.csa_active)
++ if (sdata->vif.bss_conf.csa_active)
+ max_count = IEEE80211_MAX_CNTDWN_COUNTERS_NUM;
+
+ for (i = 0; i < max_count; ++i) {
+@@ -5120,7 +5120,7 @@ __ieee80211_beacon_get(struct ieee80211_hw *hw,
+ rcu_read_lock();
+
+ sdata = vif_to_sdata(vif);
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+
+ if (!ieee80211_sdata_running(sdata) || !chanctx_conf)
+ goto out;
+@@ -5537,7 +5537,7 @@ ieee80211_get_buffered_bc(struct ieee80211_hw *hw,
+ sdata = vif_to_sdata(vif);
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+
+ if (!chanctx_conf)
+ goto out;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index dad42d42aa84c..b58df3e63a86a 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1569,7 +1569,7 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ return;
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ if (chanctx_conf)
+ center_freq = chanctx_conf->def.chan->center_freq;
+
+@@ -1616,7 +1616,7 @@ void ieee80211_set_wmm_default(struct ieee80211_sub_if_data *sdata,
+ memset(&qparam, 0, sizeof(qparam));
+
+ rcu_read_lock();
+- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
++ chanctx_conf = rcu_dereference(sdata->vif.bss_conf.chanctx_conf);
+ use_11b = (chanctx_conf &&
+ chanctx_conf->def.chan->band == NL80211_BAND_2GHZ) &&
+ !(sdata->flags & IEEE80211_SDATA_OPERATING_GMODE);
+@@ -2267,7 +2267,7 @@ static void ieee80211_assign_chanctx(struct ieee80211_local *local,
+ return;
+
+ mutex_lock(&local->chanctx_mtx);
+- conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
++ conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
+ lockdep_is_held(&local->chanctx_mtx));
+ if (conf) {
+ ctx = container_of(conf, struct ieee80211_chanctx, conf);
+@@ -2526,7 +2526,7 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ BSS_CHANGED_TXPOWER |
+ BSS_CHANGED_MCAST_RATE;
+
+- if (sdata->vif.mu_mimo_owner)
++ if (sdata->vif.bss_conf.mu_mimo_owner)
+ changed |= BSS_CHANGED_MU_GROUPS;
+
+ switch (sdata->vif.type) {
+@@ -2809,8 +2809,8 @@ void ieee80211_recalc_smps(struct ieee80211_sub_if_data *sdata)
+
+ mutex_lock(&local->chanctx_mtx);
+
+- chanctx_conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
+- lockdep_is_held(&local->chanctx_mtx));
++ chanctx_conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
++ lockdep_is_held(&local->chanctx_mtx));
+
+ /*
+ * This function can be called from a work, thus it may be possible
+@@ -2835,8 +2835,8 @@ void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata)
+
+ mutex_lock(&local->chanctx_mtx);
+
+- chanctx_conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
+- lockdep_is_held(&local->chanctx_mtx));
++ chanctx_conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
++ lockdep_is_held(&local->chanctx_mtx));
+
+ if (WARN_ON_ONCE(!chanctx_conf))
+ goto unlock;
+diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c
+index ff26e0c4787b0..ac97584b3a0be 100644
+--- a/net/mac80211/vht.c
++++ b/net/mac80211/vht.c
+@@ -4,7 +4,7 @@
+ *
+ * Portions of this file
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2021 Intel Corporation
++ * Copyright (C) 2018 - 2022 Intel Corporation
+ */
+
+ #include <linux/ieee80211.h>
+@@ -649,7 +649,7 @@ void ieee80211_process_mu_groups(struct ieee80211_sub_if_data *sdata,
+ {
+ struct ieee80211_bss_conf *bss_conf = &sdata->vif.bss_conf;
+
+- if (!sdata->vif.mu_mimo_owner)
++ if (!sdata->vif.bss_conf.mu_mimo_owner)
+ return;
+
+ if (!memcmp(mgmt->u.action.u.vht_group_notif.position,
+@@ -673,7 +673,7 @@ void ieee80211_update_mu_groups(struct ieee80211_vif *vif,
+ {
+ struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+
+- if (WARN_ON_ONCE(!vif->mu_mimo_owner))
++ if (WARN_ON_ONCE(!vif->bss_conf.mu_mimo_owner))
+ return;
+
+ memcpy(bss_conf->mu_group.membership, membership, WLAN_MEMBERSHIP_LEN);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 7e1518bb6115d..8ffb8aabd3244 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -323,9 +323,10 @@ static bool mptcp_rmem_schedule(struct sock *sk, struct sock *ssk, int size)
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ int amt, amount;
+
+- if (size < msk->rmem_fwd_alloc)
++ if (size <= msk->rmem_fwd_alloc)
+ return true;
+
++ size -= msk->rmem_fwd_alloc;
+ amt = sk_mem_pages(size);
+ amount = amt << SK_MEM_QUANTUM_SHIFT;
+ msk->rmem_fwd_alloc += amount;
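The mptcp_rmem_schedule() fix above has two parts: the comparison becomes inclusive (size <= rmem_fwd_alloc already fits), and when more memory is needed only the shortfall is rounded up to pages and charged, rather than the whole request on top of what was already reserved. In sketch form (SK_MEM_QUANTUM taken as a 4 KiB page and the global scheduling check dropped; a simplification):

    #include <stdbool.h>

    #define QUANTUM 4096   /* stand-in for SK_MEM_QUANTUM */

    static bool rmem_schedule_model(int size, int *rmem_fwd_alloc)
    {
        if (size <= *rmem_fwd_alloc)
            return true;                 /* already reserved */

        size -= *rmem_fwd_alloc;         /* charge only the delta */
        int pages = (size + QUANTUM - 1) / QUANTUM;
        *rmem_fwd_alloc += pages * QUANTUM;
        return true;
    }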
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 9f976b11d8967..f4d2a5f277952 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -153,6 +153,7 @@ static struct nft_trans *nft_trans_alloc_gfp(const struct nft_ctx *ctx,
+ if (trans == NULL)
+ return NULL;
+
++ INIT_LIST_HEAD(&trans->list);
+ trans->msg_type = msg_type;
+ trans->ctx = *ctx;
+
+@@ -2472,6 +2473,7 @@ err:
+ }
+
+ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
++ const struct nft_table *table,
+ const struct nlattr *nla)
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -2482,6 +2484,7 @@ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+ struct nft_chain *chain = trans->ctx.chain;
+
+ if (trans->msg_type == NFT_MSG_NEWCHAIN &&
++ chain->table == table &&
+ id == nft_trans_chain_id(trans))
+ return chain;
+ }
+@@ -3371,6 +3374,7 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
+ }
+
+ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
++ const struct nft_chain *chain,
+ const struct nlattr *nla);
+
+ #define NFT_RULE_MAXEXPRS 128
+@@ -3417,7 +3421,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ return -EOPNOTSUPP;
+
+ } else if (nla[NFTA_RULE_CHAIN_ID]) {
+- chain = nft_chain_lookup_byid(net, nla[NFTA_RULE_CHAIN_ID]);
++ chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
+ if (IS_ERR(chain)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
+ return PTR_ERR(chain);
+@@ -3459,7 +3463,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ return PTR_ERR(old_rule);
+ }
+ } else if (nla[NFTA_RULE_POSITION_ID]) {
+- old_rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_POSITION_ID]);
++ old_rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_POSITION_ID]);
+ if (IS_ERR(old_rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION_ID]);
+ return PTR_ERR(old_rule);
+@@ -3604,6 +3608,7 @@ err_release_expr:
+ }
+
+ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
++ const struct nft_chain *chain,
+ const struct nlattr *nla)
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -3614,6 +3619,7 @@ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+ struct nft_rule *rule = nft_trans_rule(trans);
+
+ if (trans->msg_type == NFT_MSG_NEWRULE &&
++ trans->ctx.chain == chain &&
+ id == nft_trans_rule_id(trans))
+ return rule;
+ }
+@@ -3663,7 +3669,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ err = nft_delrule(&ctx, rule);
+ } else if (nla[NFTA_RULE_ID]) {
+- rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_ID]);
++ rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_ID]);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_ID]);
+ return PTR_ERR(rule);
+@@ -3842,6 +3848,7 @@ static struct nft_set *nft_set_lookup_byhandle(const struct nft_table *table,
+ }
+
+ static struct nft_set *nft_set_lookup_byid(const struct net *net,
++ const struct nft_table *table,
+ const struct nlattr *nla, u8 genmask)
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -3853,6 +3860,7 @@ static struct nft_set *nft_set_lookup_byid(const struct net *net,
+ struct nft_set *set = nft_trans_set(trans);
+
+ if (id == nft_trans_set_id(trans) &&
++ set->table == table &&
+ nft_active_genmask(set, genmask))
+ return set;
+ }
+@@ -3873,7 +3881,7 @@ struct nft_set *nft_set_lookup_global(const struct net *net,
+ if (!nla_set_id)
+ return set;
+
+- set = nft_set_lookup_byid(net, nla_set_id, genmask);
++ set = nft_set_lookup_byid(net, table, nla_set_id, genmask);
+ }
+ return set;
+ }
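
These byid helpers resolve objects that only exist in the current transaction batch by walking the per-netns commit list; the extra parameters make each match conditional on the owning table (or chain), so a transaction ID allocated in one table can no longer resolve to an object in another one. A simplified sketch of the scoped walk, following the chain case above:

	/* simplified; the real helper handles the genmask/msg_type details
	 * exactly as shown in the hunks above */
	list_for_each_entry(trans, &nft_net->commit_list, list) {
		if (trans->msg_type != NFT_MSG_NEWCHAIN)
			continue;
		if (trans->ctx.chain->table != table)	/* the new scoping check */
			continue;
		if (id == nft_trans_chain_id(trans))
			return trans->ctx.chain;
	}
	return ERR_PTR(-ENOENT);
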
+@@ -5195,19 +5203,13 @@ static int nft_setelem_parse_flags(const struct nft_set *set,
+ static int nft_setelem_parse_key(struct nft_ctx *ctx, struct nft_set *set,
+ struct nft_data *key, struct nlattr *attr)
+ {
+- struct nft_data_desc desc;
+- int err;
+-
+- err = nft_data_init(ctx, key, NFT_DATA_VALUE_MAXLEN, &desc, attr);
+- if (err < 0)
+- return err;
+-
+- if (desc.type != NFT_DATA_VALUE || desc.len != set->klen) {
+- nft_data_release(key, desc.type);
+- return -EINVAL;
+- }
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = NFT_DATA_VALUE_MAXLEN,
++ .len = set->klen,
++ };
+
+- return 0;
++ return nft_data_init(ctx, key, &desc, attr);
+ }
+
+ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
+@@ -5216,24 +5218,18 @@ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
+ struct nlattr *attr)
+ {
+ u32 dtype;
+- int err;
+-
+- err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr);
+- if (err < 0)
+- return err;
+
+ if (set->dtype == NFT_DATA_VERDICT)
+ dtype = NFT_DATA_VERDICT;
+ else
+ dtype = NFT_DATA_VALUE;
+
+- if (dtype != desc->type ||
+- set->dlen != desc->len) {
+- nft_data_release(data, desc->type);
+- return -EINVAL;
+- }
++ desc->type = dtype;
++ desc->size = NFT_DATA_VALUE_MAXLEN;
++ desc->len = set->dlen;
++ desc->flags = NFT_DATA_DESC_SETELEM;
+
+- return 0;
++ return nft_data_init(ctx, data, desc, attr);
+ }
+
+ static void *nft_setelem_catchall_get(const struct net *net,
+@@ -9605,7 +9601,7 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ tb[NFTA_VERDICT_CHAIN],
+ genmask);
+ } else if (tb[NFTA_VERDICT_CHAIN_ID]) {
+- chain = nft_chain_lookup_byid(ctx->net,
++ chain = nft_chain_lookup_byid(ctx->net, ctx->table,
+ tb[NFTA_VERDICT_CHAIN_ID]);
+ if (IS_ERR(chain))
+ return PTR_ERR(chain);
+@@ -9617,6 +9613,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ return PTR_ERR(chain);
+ if (nft_is_base_chain(chain))
+ return -EOPNOTSUPP;
++ if (desc->flags & NFT_DATA_DESC_SETELEM &&
++ chain->flags & NFT_CHAIN_BINDING)
++ return -EINVAL;
+
+ chain->use++;
+ data->verdict.chain = chain;
+@@ -9624,7 +9623,7 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ }
+
+ desc->len = sizeof(data->verdict);
+- desc->type = NFT_DATA_VERDICT;
++
+ return 0;
+ }
+
+@@ -9677,20 +9676,25 @@ nla_put_failure:
+ }
+
+ static int nft_value_init(const struct nft_ctx *ctx,
+- struct nft_data *data, unsigned int size,
+- struct nft_data_desc *desc, const struct nlattr *nla)
++ struct nft_data *data, struct nft_data_desc *desc,
++ const struct nlattr *nla)
+ {
+ unsigned int len;
+
+ len = nla_len(nla);
+ if (len == 0)
+ return -EINVAL;
+- if (len > size)
++ if (len > desc->size)
+ return -EOVERFLOW;
++ if (desc->len) {
++ if (len != desc->len)
++ return -EINVAL;
++ } else {
++ desc->len = len;
++ }
+
+ nla_memcpy(data->data, nla, len);
+- desc->type = NFT_DATA_VALUE;
+- desc->len = len;
++
+ return 0;
+ }
+
+@@ -9710,7 +9714,6 @@ static const struct nla_policy nft_data_policy[NFTA_DATA_MAX + 1] = {
+ *
+ * @ctx: context of the expression using the data
+ * @data: destination struct nft_data
+- * @size: maximum data length
+ * @desc: data description
+ * @nla: netlink attribute containing data
+ *
+@@ -9720,24 +9723,35 @@ static const struct nla_policy nft_data_policy[NFTA_DATA_MAX + 1] = {
+ * The caller can indicate that it only wants to accept data of type
+ * NFT_DATA_VALUE by passing NULL for the ctx argument.
+ */
+-int nft_data_init(const struct nft_ctx *ctx,
+- struct nft_data *data, unsigned int size,
++int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data,
+ struct nft_data_desc *desc, const struct nlattr *nla)
+ {
+ struct nlattr *tb[NFTA_DATA_MAX + 1];
+ int err;
+
++ if (WARN_ON_ONCE(!desc->size))
++ return -EINVAL;
++
+ err = nla_parse_nested_deprecated(tb, NFTA_DATA_MAX, nla,
+ nft_data_policy, NULL);
+ if (err < 0)
+ return err;
+
+- if (tb[NFTA_DATA_VALUE])
+- return nft_value_init(ctx, data, size, desc,
+- tb[NFTA_DATA_VALUE]);
+- if (tb[NFTA_DATA_VERDICT] && ctx != NULL)
+- return nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
+- return -EINVAL;
++ if (tb[NFTA_DATA_VALUE]) {
++ if (desc->type != NFT_DATA_VALUE)
++ return -EINVAL;
++
++ err = nft_value_init(ctx, data, desc, tb[NFTA_DATA_VALUE]);
++ } else if (tb[NFTA_DATA_VERDICT] && ctx != NULL) {
++ if (desc->type != NFT_DATA_VERDICT)
++ return -EINVAL;
++
++ err = nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
++ } else {
++ err = -EINVAL;
++ }
++
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(nft_data_init);
+
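
The nft_data_init() rework above inverts the validation contract: instead of parsing first and checking desc.type/desc.len afterwards (and releasing the data again on mismatch), callers now declare the expected type, the destination buffer size and, optionally, an exact length up front, and nft_data_init() rejects anything else before the data is ever materialized. A minimal caller sketch, assuming priv carries a destination buffer and an expected length as in the converted expressions that follow:

	struct nft_data_desc desc = {
		.type = NFT_DATA_VALUE,		/* refuse verdicts outright */
		.size = sizeof(priv->data),	/* hard bound on the copy */
		.len  = priv->len,		/* 0 would accept any length */
	};
	int err;

	err = nft_data_init(NULL, &priv->data, &desc, nla);
	if (err < 0)
		return err;	/* nothing to release on the error path */
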
+diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c
+index 83590afe3768e..e6e402b247d09 100644
+--- a/net/netfilter/nft_bitwise.c
++++ b/net/netfilter/nft_bitwise.c
+@@ -93,7 +93,16 @@ static const struct nla_policy nft_bitwise_policy[NFTA_BITWISE_MAX + 1] = {
+ static int nft_bitwise_init_bool(struct nft_bitwise *priv,
+ const struct nlattr *const tb[])
+ {
+- struct nft_data_desc mask, xor;
++ struct nft_data_desc mask = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->mask),
++ .len = priv->len,
++ };
++ struct nft_data_desc xor = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->xor),
++ .len = priv->len,
++ };
+ int err;
+
+ if (tb[NFTA_BITWISE_DATA])
+@@ -103,37 +112,30 @@ static int nft_bitwise_init_bool(struct nft_bitwise *priv,
+ !tb[NFTA_BITWISE_XOR])
+ return -EINVAL;
+
+- err = nft_data_init(NULL, &priv->mask, sizeof(priv->mask), &mask,
+- tb[NFTA_BITWISE_MASK]);
++ err = nft_data_init(NULL, &priv->mask, &mask, tb[NFTA_BITWISE_MASK]);
+ if (err < 0)
+ return err;
+- if (mask.type != NFT_DATA_VALUE || mask.len != priv->len) {
+- err = -EINVAL;
+- goto err_mask_release;
+- }
+
+- err = nft_data_init(NULL, &priv->xor, sizeof(priv->xor), &xor,
+- tb[NFTA_BITWISE_XOR]);
++ err = nft_data_init(NULL, &priv->xor, &xor, tb[NFTA_BITWISE_XOR]);
+ if (err < 0)
+- goto err_mask_release;
+- if (xor.type != NFT_DATA_VALUE || xor.len != priv->len) {
+- err = -EINVAL;
+- goto err_xor_release;
+- }
++ goto err_xor_err;
+
+ return 0;
+
+-err_xor_release:
+- nft_data_release(&priv->xor, xor.type);
+-err_mask_release:
++err_xor_err:
+ nft_data_release(&priv->mask, mask.type);
++
+ return err;
+ }
+
+ static int nft_bitwise_init_shift(struct nft_bitwise *priv,
+ const struct nlattr *const tb[])
+ {
+- struct nft_data_desc d;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->data),
++ .len = sizeof(u32),
++ };
+ int err;
+
+ if (tb[NFTA_BITWISE_MASK] ||
+@@ -143,13 +145,12 @@ static int nft_bitwise_init_shift(struct nft_bitwise *priv,
+ if (!tb[NFTA_BITWISE_DATA])
+ return -EINVAL;
+
+- err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &d,
+- tb[NFTA_BITWISE_DATA]);
++ err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_BITWISE_DATA]);
+ if (err < 0)
+ return err;
+- if (d.type != NFT_DATA_VALUE || d.len != sizeof(u32) ||
+- priv->data.data[0] >= BITS_PER_TYPE(u32)) {
+- nft_data_release(&priv->data, d.type);
++
++ if (priv->data.data[0] >= BITS_PER_TYPE(u32)) {
++ nft_data_release(&priv->data, desc.type);
+ return -EINVAL;
+ }
+
+@@ -339,22 +340,21 @@ static const struct nft_expr_ops nft_bitwise_ops = {
+ static int
+ nft_bitwise_extract_u32_data(const struct nlattr * const tb, u32 *out)
+ {
+- struct nft_data_desc desc;
+ struct nft_data data;
+- int err = 0;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(data),
++ .len = sizeof(u32),
++ };
++ int err;
+
+- err = nft_data_init(NULL, &data, sizeof(data), &desc, tb);
++ err = nft_data_init(NULL, &data, &desc, tb);
+ if (err < 0)
+ return err;
+
+- if (desc.type != NFT_DATA_VALUE || desc.len != sizeof(u32)) {
+- err = -EINVAL;
+- goto err;
+- }
+ *out = data.data[0];
+-err:
+- nft_data_release(&data, desc.type);
+- return err;
++
++ return 0;
+ }
+
+ static int nft_bitwise_fast_init(const struct nft_ctx *ctx,
+diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
+index 6528f76ca29ec..8481e72269d77 100644
+--- a/net/netfilter/nft_cmp.c
++++ b/net/netfilter/nft_cmp.c
+@@ -73,20 +73,16 @@ static int nft_cmp_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ const struct nlattr * const tb[])
+ {
+ struct nft_cmp_expr *priv = nft_expr_priv(expr);
+- struct nft_data_desc desc;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->data),
++ };
+ int err;
+
+- err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &desc,
+- tb[NFTA_CMP_DATA]);
++ err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
+ if (err < 0)
+ return err;
+
+- if (desc.type != NFT_DATA_VALUE) {
+- err = -EINVAL;
+- nft_data_release(&priv->data, desc.type);
+- return err;
+- }
+-
+ err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
+ if (err < 0)
+ return err;
+@@ -202,12 +198,14 @@ static int nft_cmp_fast_init(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+ struct nft_cmp_fast_expr *priv = nft_expr_priv(expr);
+- struct nft_data_desc desc;
+ struct nft_data data;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(data),
++ };
+ int err;
+
+- err = nft_data_init(NULL, &data, sizeof(data), &desc,
+- tb[NFTA_CMP_DATA]);
++ err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
+ if (err < 0)
+ return err;
+
+@@ -301,11 +299,13 @@ static int nft_cmp16_fast_init(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+ struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
+- struct nft_data_desc desc;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->data),
++ };
+ int err;
+
+- err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &desc,
+- tb[NFTA_CMP_DATA]);
++ err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
+ if (err < 0)
+ return err;
+
+@@ -368,8 +368,11 @@ const struct nft_expr_ops nft_cmp16_fast_ops = {
+ static const struct nft_expr_ops *
+ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
+ {
+- struct nft_data_desc desc;
+ struct nft_data data;
++ struct nft_data_desc desc = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(data),
++ };
+ enum nft_cmp_ops op;
+ u8 sreg;
+ int err;
+@@ -392,14 +395,10 @@ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
+ return ERR_PTR(-EINVAL);
+ }
+
+- err = nft_data_init(NULL, &data, sizeof(data), &desc,
+- tb[NFTA_CMP_DATA]);
++ err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
+ if (err < 0)
+ return ERR_PTR(err);
+
+- if (desc.type != NFT_DATA_VALUE)
+- goto err1;
+-
+ sreg = ntohl(nla_get_be32(tb[NFTA_CMP_SREG]));
+
+ if (op == NFT_CMP_EQ || op == NFT_CMP_NEQ) {
+@@ -411,9 +410,6 @@ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
+ return &nft_cmp16_fast_ops;
+ }
+ return &nft_cmp_ops;
+-err1:
+- nft_data_release(&data, desc.type);
+- return ERR_PTR(-EINVAL);
+ }
+
+ struct nft_expr_type nft_cmp_type __read_mostly = {
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index b80f7b5073495..5f28b21abc7df 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -29,20 +29,36 @@ static const struct nla_policy nft_immediate_policy[NFTA_IMMEDIATE_MAX + 1] = {
+ [NFTA_IMMEDIATE_DATA] = { .type = NLA_NESTED },
+ };
+
++static enum nft_data_types nft_reg_to_type(const struct nlattr *nla)
++{
++ enum nft_data_types type;
++ u8 reg;
++
++ reg = ntohl(nla_get_be32(nla));
++ if (reg == NFT_REG_VERDICT)
++ type = NFT_DATA_VERDICT;
++ else
++ type = NFT_DATA_VALUE;
++
++ return type;
++}
++
+ static int nft_immediate_init(const struct nft_ctx *ctx,
+ const struct nft_expr *expr,
+ const struct nlattr * const tb[])
+ {
+ struct nft_immediate_expr *priv = nft_expr_priv(expr);
+- struct nft_data_desc desc;
++ struct nft_data_desc desc = {
++ .size = sizeof(priv->data),
++ };
+ int err;
+
+ if (tb[NFTA_IMMEDIATE_DREG] == NULL ||
+ tb[NFTA_IMMEDIATE_DATA] == NULL)
+ return -EINVAL;
+
+- err = nft_data_init(ctx, &priv->data, sizeof(priv->data), &desc,
+- tb[NFTA_IMMEDIATE_DATA]);
++ desc.type = nft_reg_to_type(tb[NFTA_IMMEDIATE_DREG]);
++ err = nft_data_init(ctx, &priv->data, &desc, tb[NFTA_IMMEDIATE_DATA]);
+ if (err < 0)
+ return err;
+
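
nft_immediate is the one converted caller that may legitimately load either a value or a verdict, so it cannot hard-code desc.type; the nft_reg_to_type() helper above derives it from the destination register instead, NFT_REG_VERDICT being the only register that may receive NFT_DATA_VERDICT. In sketch form:

	desc.size = sizeof(priv->data);
	desc.type = nft_reg_to_type(tb[NFTA_IMMEDIATE_DREG]);
	/* NFT_DATA_VERDICT for NFT_REG_VERDICT, NFT_DATA_VALUE otherwise */
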
+diff --git a/net/netfilter/nft_range.c b/net/netfilter/nft_range.c
+index 66f77484c2279..832f0d725a9e2 100644
+--- a/net/netfilter/nft_range.c
++++ b/net/netfilter/nft_range.c
+@@ -51,7 +51,14 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr
+ const struct nlattr * const tb[])
+ {
+ struct nft_range_expr *priv = nft_expr_priv(expr);
+- struct nft_data_desc desc_from, desc_to;
++ struct nft_data_desc desc_from = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->data_from),
++ };
++ struct nft_data_desc desc_to = {
++ .type = NFT_DATA_VALUE,
++ .size = sizeof(priv->data_to),
++ };
+ int err;
+ u32 op;
+
+@@ -61,26 +68,16 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr
+ !tb[NFTA_RANGE_TO_DATA])
+ return -EINVAL;
+
+- err = nft_data_init(NULL, &priv->data_from, sizeof(priv->data_from),
+- &desc_from, tb[NFTA_RANGE_FROM_DATA]);
++ err = nft_data_init(NULL, &priv->data_from, &desc_from,
++ tb[NFTA_RANGE_FROM_DATA]);
+ if (err < 0)
+ return err;
+
+- if (desc_from.type != NFT_DATA_VALUE) {
+- err = -EINVAL;
+- goto err1;
+- }
+-
+- err = nft_data_init(NULL, &priv->data_to, sizeof(priv->data_to),
+- &desc_to, tb[NFTA_RANGE_TO_DATA]);
++ err = nft_data_init(NULL, &priv->data_to, &desc_to,
++ tb[NFTA_RANGE_TO_DATA]);
+ if (err < 0)
+ goto err1;
+
+- if (desc_to.type != NFT_DATA_VALUE) {
+- err = -EINVAL;
+- goto err2;
+- }
+-
+ if (desc_from.len != desc_to.len) {
+ err = -EINVAL;
+ goto err2;
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index bf2d986a6bc39..a8e3ec800a9c8 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -192,6 +192,7 @@ static void rose_kill_by_device(struct net_device *dev)
+ rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+ if (rose->neighbour)
+ rose->neighbour->use--;
++ dev_put(rose->device);
+ rose->device = NULL;
+ }
+ }
+@@ -592,6 +593,8 @@ static struct sock *rose_make_new(struct sock *osk)
+ rose->idle = orose->idle;
+ rose->defer = orose->defer;
+ rose->device = orose->device;
++ if (rose->device)
++ dev_hold(rose->device);
+ rose->qbitincl = orose->qbitincl;
+
+ return sk;
+@@ -645,6 +648,7 @@ static int rose_release(struct socket *sock)
+ break;
+ }
+
++ dev_put(rose->device);
+ sock->sk = NULL;
+ release_sock(sk);
+ sock_put(sk);
+@@ -721,7 +725,6 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ struct rose_sock *rose = rose_sk(sk);
+ struct sockaddr_rose *addr = (struct sockaddr_rose *)uaddr;
+ unsigned char cause, diagnostic;
+- struct net_device *dev;
+ ax25_uid_assoc *user;
+ int n, err = 0;
+
+@@ -778,9 +781,12 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ }
+
+ if (sock_flag(sk, SOCK_ZAPPED)) { /* Must bind first - autobinding in this may or may not work */
++ struct net_device *dev;
++
+ sock_reset_flag(sk, SOCK_ZAPPED);
+
+- if ((dev = rose_dev_first()) == NULL) {
++ dev = rose_dev_first();
++ if (!dev) {
+ err = -ENETUNREACH;
+ goto out_release;
+ }
+@@ -788,6 +794,7 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ user = ax25_findbyuid(current_euid());
+ if (!user) {
+ err = -EINVAL;
++ dev_put(dev);
+ goto out_release;
+ }
+
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index eb0b8197ac825..fee772b4637c8 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -615,6 +615,8 @@ struct net_device *rose_dev_first(void)
+ if (first == NULL || strncmp(dev->name, first->name, 3) < 0)
+ first = dev;
+ }
++ if (first)
++ dev_hold(first);
+ rcu_read_unlock();
+
+ return first;
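
The rose hunks all enforce a single ownership rule: rose_dev_first() now returns a device with a held reference, and rose->device owns a reference for as long as it is set, so duplicating it (rose_make_new) takes another hold and clearing it (release, device death) drops one. The resulting pairing, with error handling trimmed:

	struct net_device *dev;

	dev = rose_dev_first();		/* returns a held reference, or NULL */
	if (!dev)
		return -ENETUNREACH;
	rose->device = dev;		/* the socket now owns that reference */

	/* ... later, on rose_release() or rose_kill_by_device() ... */
	dev_put(rose->device);		/* dev_put(NULL) is a no-op in 5.19 */
	rose->device = NULL;
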
+diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
+index a35ab8c27866e..3f935cbbaff66 100644
+--- a/net/sched/cls_route.c
++++ b/net/sched/cls_route.c
+@@ -526,7 +526,7 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
+ rcu_assign_pointer(f->next, f1);
+ rcu_assign_pointer(*fp, f);
+
+- if (fold && fold->handle && f->handle != fold->handle) {
++ if (fold) {
+ th = to_hash(fold->handle);
+ h = from_hash(fold->handle >> 16);
+ b = rtnl_dereference(head->table[th]);
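
The one-line condition change above is the entire cls_route fix: the replaced filter must be unlinked from its old hash bucket on every change, not only when the handles differ, otherwise a stale entry survives in head->table[]. The bucket is recovered from the old handle as in the context lines:

	if (fold) {	/* was: fold && fold->handle && f->handle != fold->handle */
		th = to_hash(fold->handle);		/* index into head->table[] */
		h  = from_hash(fold->handle >> 16);	/* bucket within that table */
		/* ... unlink fold from bucket h of table th ... */
	}
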
+diff --git a/net/wireless/ap.c b/net/wireless/ap.c
+index 550ac9d827fe7..e68923200018b 100644
+--- a/net/wireless/ap.c
++++ b/net/wireless/ap.c
+@@ -1,4 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
++/*
++ * Parts of this file are
++ * Copyright (C) 2022 Intel Corporation
++ */
+ #include <linux/ieee80211.h>
+ #include <linux/export.h>
+ #include <net/cfg80211.h>
+@@ -7,8 +11,9 @@
+ #include "rdev-ops.h"
+
+
+-int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, bool notify)
++static int ___cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
++ struct net_device *dev, unsigned int link_id,
++ bool notify)
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ int err;
+@@ -22,15 +27,16 @@ int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+ dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO)
+ return -EOPNOTSUPP;
+
+- if (!wdev->beacon_interval)
++ if (!wdev->links[link_id].ap.beacon_interval)
+ return -ENOENT;
+
+- err = rdev_stop_ap(rdev, dev);
++ err = rdev_stop_ap(rdev, dev, link_id);
+ if (!err) {
+ wdev->conn_owner_nlportid = 0;
+- wdev->beacon_interval = 0;
+- memset(&wdev->chandef, 0, sizeof(wdev->chandef));
+- wdev->ssid_len = 0;
++ wdev->links[link_id].ap.beacon_interval = 0;
++ memset(&wdev->links[link_id].ap.chandef, 0,
++ sizeof(wdev->links[link_id].ap.chandef));
++ wdev->u.ap.ssid_len = 0;
+ rdev_set_qos_map(rdev, dev, NULL);
+ if (notify)
+ nl80211_send_ap_stopped(wdev);
+@@ -46,14 +52,36 @@ int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+ return err;
+ }
+
++int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
++ struct net_device *dev, int link_id,
++ bool notify)
++{
++ unsigned int link;
++ int ret = 0;
++
++ if (link_id >= 0)
++ return ___cfg80211_stop_ap(rdev, dev, link_id, notify);
++
++ for_each_valid_link(dev->ieee80211_ptr, link) {
++ int ret1 = ___cfg80211_stop_ap(rdev, dev, link, notify);
++
++ if (ret1)
++ ret = ret1;
++ /* try the next one also if one errored */
++ }
++
++ return ret;
++}
++
+ int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, bool notify)
++ struct net_device *dev, int link_id,
++ bool notify)
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ int err;
+
+ wdev_lock(wdev);
+- err = __cfg80211_stop_ap(rdev, dev, notify);
++ err = __cfg80211_stop_ap(rdev, dev, link_id, notify);
+ wdev_unlock(wdev);
+
+ return err;
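
The split above gives __cfg80211_stop_ap() callers one convention: a non-negative link_id stops that single link, while -1 means all of them, the loop attempting every valid link even when one fails and reporting the last error. Hedged usage sketch:

	/* full teardown: stop the AP on every valid link */
	err = cfg80211_stop_ap(rdev, dev, -1, true);

	/* stop only link 1 of an MLD AP */
	err = cfg80211_stop_ap(rdev, dev, 1, true);
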
+diff --git a/net/wireless/chan.c b/net/wireless/chan.c
+index f74f176e0d9dc..0e5835cd8c618 100644
+--- a/net/wireless/chan.c
++++ b/net/wireless/chan.c
+@@ -672,14 +672,21 @@ bool cfg80211_chandef_dfs_usable(struct wiphy *wiphy,
+ * range of chandef.
+ */
+ bool cfg80211_is_sub_chan(struct cfg80211_chan_def *chandef,
+- struct ieee80211_channel *chan)
++ struct ieee80211_channel *chan,
++ bool primary_only)
+ {
+ int width;
+ u32 freq;
+
++ if (!chandef->chan)
++ return false;
++
+ if (chandef->chan->center_freq == chan->center_freq)
+ return true;
+
++ if (primary_only)
++ return false;
++
+ width = cfg80211_chandef_get_width(chandef);
+ if (width <= 20)
+ return false;
+@@ -704,23 +711,25 @@ bool cfg80211_is_sub_chan(struct cfg80211_chan_def *chandef,
+
+ bool cfg80211_beaconing_iface_active(struct wireless_dev *wdev)
+ {
+- bool active = false;
++ unsigned int link;
+
+ ASSERT_WDEV_LOCK(wdev);
+
+- if (!wdev->chandef.chan)
+- return false;
+-
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- active = wdev->beacon_interval != 0;
++ for_each_valid_link(wdev, link) {
++ if (wdev->links[link].ap.beacon_interval)
++ return true;
++ }
+ break;
+ case NL80211_IFTYPE_ADHOC:
+- active = wdev->ssid_len != 0;
++ if (wdev->u.ibss.ssid_len)
++ return true;
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+- active = wdev->mesh_id_len != 0;
++ if (wdev->u.mesh.id_len)
++ return true;
+ break;
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_OCB:
+@@ -737,7 +746,35 @@ bool cfg80211_beaconing_iface_active(struct wireless_dev *wdev)
+ WARN_ON(1);
+ }
+
+- return active;
++ return false;
++}
++
++bool cfg80211_wdev_on_sub_chan(struct wireless_dev *wdev,
++ struct ieee80211_channel *chan,
++ bool primary_only)
++{
++ unsigned int link;
++
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ for_each_valid_link(wdev, link) {
++ if (cfg80211_is_sub_chan(&wdev->links[link].ap.chandef,
++ chan, primary_only))
++ return true;
++ }
++ break;
++ case NL80211_IFTYPE_ADHOC:
++ return cfg80211_is_sub_chan(&wdev->u.ibss.chandef, chan,
++ primary_only);
++ case NL80211_IFTYPE_MESH_POINT:
++ return cfg80211_is_sub_chan(&wdev->u.mesh.chandef, chan,
++ primary_only);
++ default:
++ break;
++ }
++
++ return false;
+ }
+
+ static bool cfg80211_is_wiphy_oper_chan(struct wiphy *wiphy,
+@@ -752,7 +789,7 @@ static bool cfg80211_is_wiphy_oper_chan(struct wiphy *wiphy,
+ continue;
+ }
+
+- if (cfg80211_is_sub_chan(&wdev->chandef, chan)) {
++ if (cfg80211_wdev_on_sub_chan(wdev, chan, false)) {
+ wdev_unlock(wdev);
+ return true;
+ }
+@@ -772,7 +809,8 @@ cfg80211_offchan_chain_is_active(struct cfg80211_registered_device *rdev,
+ if (!cfg80211_chandef_valid(&rdev->background_radar_chandef))
+ return false;
+
+- return cfg80211_is_sub_chan(&rdev->background_radar_chandef, channel);
++ return cfg80211_is_sub_chan(&rdev->background_radar_chandef, channel,
++ false);
+ }
+
+ bool cfg80211_any_wiphy_oper_chan(struct wiphy *wiphy,
+@@ -1176,6 +1214,68 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(cfg80211_chandef_usable);
+
++static bool cfg80211_ir_permissive_check_wdev(enum nl80211_iftype iftype,
++ struct wireless_dev *wdev,
++ struct ieee80211_channel *chan)
++{
++ struct ieee80211_channel *other_chan = NULL;
++ unsigned int link_id;
++ int r1, r2;
++
++ for_each_valid_link(wdev, link_id) {
++ if (wdev->iftype == NL80211_IFTYPE_STATION &&
++ wdev->links[link_id].client.current_bss)
++ other_chan = wdev->links[link_id].client.current_bss->pub.channel;
++
++ /*
++ * If a GO already operates on the same GO_CONCURRENT channel,
++ * this one (maybe the same one) can beacon as well. We allow
++ * the operation even if the station we relied on with
++ * GO_CONCURRENT is disconnected now. But then we must make sure
++ * we're not outdoor on an indoor-only channel.
++ */
++ if (iftype == NL80211_IFTYPE_P2P_GO &&
++ wdev->iftype == NL80211_IFTYPE_P2P_GO &&
++ wdev->links[link_id].ap.beacon_interval &&
++ !(chan->flags & IEEE80211_CHAN_INDOOR_ONLY))
++ other_chan = wdev->links[link_id].ap.chandef.chan;
++
++ if (!other_chan)
++ continue;
++
++ if (chan == other_chan)
++ return true;
++
++ if (chan->band != NL80211_BAND_5GHZ &&
++ chan->band != NL80211_BAND_6GHZ)
++ continue;
++
++ r1 = cfg80211_get_unii(chan->center_freq);
++ r2 = cfg80211_get_unii(other_chan->center_freq);
++
++ if (r1 != -EINVAL && r1 == r2) {
++ /*
++ * At some locations channels 149-165 are considered a
++ * bundle, but at other locations, e.g., Indonesia,
++ * channels 149-161 are considered a bundle while
++ * channel 165 is left out and considered to be in a
++ * different bundle. Thus, in case that there is a
++ * station interface connected to an AP on channel 165,
++ * it is assumed that channels 149-161 are allowed for
++ * GO operations. However, having a station interface
++ * connected to an AP on channels 149-161, does not
++ * allow GO operation on channel 165.
++ */
++ if (chan->center_freq == 5825 &&
++ other_chan->center_freq != 5825)
++ continue;
++ return true;
++ }
++ }
++
++ return false;
++}
++
+ /*
+ * Check if the channel can be used under permissive conditions mandated by
+ * some regulatory bodies, i.e., the channel is marked with
+@@ -1219,59 +1319,14 @@ static bool cfg80211_ir_permissive_chan(struct wiphy *wiphy,
+ * the current registered device.
+ */
+ list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
+- struct ieee80211_channel *other_chan = NULL;
+- int r1, r2;
++ bool ret;
+
+ wdev_lock(wdev);
+- if (wdev->iftype == NL80211_IFTYPE_STATION &&
+- wdev->current_bss)
+- other_chan = wdev->current_bss->pub.channel;
+-
+- /*
+- * If a GO already operates on the same GO_CONCURRENT channel,
+- * this one (maybe the same one) can beacon as well. We allow
+- * the operation even if the station we relied on with
+- * GO_CONCURRENT is disconnected now. But then we must make sure
+- * we're not outdoor on an indoor-only channel.
+- */
+- if (iftype == NL80211_IFTYPE_P2P_GO &&
+- wdev->iftype == NL80211_IFTYPE_P2P_GO &&
+- wdev->beacon_interval &&
+- !(chan->flags & IEEE80211_CHAN_INDOOR_ONLY))
+- other_chan = wdev->chandef.chan;
++ ret = cfg80211_ir_permissive_check_wdev(iftype, wdev, chan);
+ wdev_unlock(wdev);
+
+- if (!other_chan)
+- continue;
+-
+- if (chan == other_chan)
+- return true;
+-
+- if (chan->band != NL80211_BAND_5GHZ &&
+- chan->band != NL80211_BAND_6GHZ)
+- continue;
+-
+- r1 = cfg80211_get_unii(chan->center_freq);
+- r2 = cfg80211_get_unii(other_chan->center_freq);
+-
+- if (r1 != -EINVAL && r1 == r2) {
+- /*
+- * At some locations channels 149-165 are considered a
+- * bundle, but at other locations, e.g., Indonesia,
+- * channels 149-161 are considered a bundle while
+- * channel 165 is left out and considered to be in a
+- * different bundle. Thus, in case that there is a
+- * station interface connected to an AP on channel 165,
+- * it is assumed that channels 149-161 are allowed for
+- * GO operations. However, having a station interface
+- * connected to an AP on channels 149-161, does not
+- * allow GO operation on channel 165.
+- */
+- if (chan->center_freq == 5825 &&
+- other_chan->center_freq != 5825)
+- continue;
+- return true;
+- }
++ if (ret)
++ return ret;
+ }
+
+ return false;
+@@ -1374,3 +1429,34 @@ bool cfg80211_any_usable_channels(struct wiphy *wiphy,
+ return false;
+ }
+ EXPORT_SYMBOL(cfg80211_any_usable_channels);
++
++struct cfg80211_chan_def *wdev_chandef(struct wireless_dev *wdev,
++ unsigned int link_id)
++{
++ /*
++ * We need to sort out the locking here - in some cases
++ * where we get here we really just don't care (yet)
++ * about the valid links, but in others we do. But we
++ * get here with various driver cases, so we cannot
++ * easily require the wdev mutex.
++ */
++ if (link_id || wdev->valid_links & BIT(0)) {
++ ASSERT_WDEV_LOCK(wdev);
++ WARN_ON(!(wdev->valid_links & BIT(link_id)));
++ }
++
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_MESH_POINT:
++ return &wdev->u.mesh.chandef;
++ case NL80211_IFTYPE_ADHOC:
++ return &wdev->u.ibss.chandef;
++ case NL80211_IFTYPE_OCB:
++ return &wdev->u.ocb.chandef;
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ return &wdev->links[link_id].ap.chandef;
++ default:
++ return NULL;
++ }
++}
++EXPORT_SYMBOL(wdev_chandef);
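
wdev_chandef() only makes sense against the reworked struct wireless_dev these patches assume: state that exists once per interface type moves into a union u, while AP and client state that exists once per MLO link moves into a links[] array gated by the valid_links bitmap. The authoritative definition lives in include/net/cfg80211.h; the outline below is only inferred from the fields the hunks in this patch touch:

	struct wireless_dev {
		/* ... */
		u16 valid_links;	/* bitmap of configured link IDs */
		bool connected;		/* client association state */
		struct {
			struct {
				struct cfg80211_chan_def chandef;
				u32 beacon_interval;
			} ap;
			struct {
				struct cfg80211_internal_bss *current_bss;
			} client;
		} links[IEEE80211_MLD_MAX_NUM_LINKS];
		union {
			struct { /* ssid, ssid_len, preset_chandef, ... */ } ap;
			struct { /* ssid, ssid_len, chandef, current_bss */ } ibss;
			struct { /* id, id_len, id_up_len, chandef, ... */ } mesh;
			struct { /* ssid, ssid_len, connected_addr */ } client;
			struct { /* chandef */ } ocb;
		} u;
	};
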
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index f08d4b3bb148b..3e5d120407269 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1118,6 +1118,7 @@ static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
+ bool unregister_netdev)
+ {
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
++ unsigned int link_id;
+
+ ASSERT_RTNL();
+ lockdep_assert_held(&rdev->wiphy.mtx);
+@@ -1167,11 +1168,22 @@ static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
+ */
+ cfg80211_process_wdev_events(wdev);
+
+- if (WARN_ON(wdev->current_bss)) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
+- wdev->current_bss = NULL;
++ if (wdev->iftype == NL80211_IFTYPE_STATION ||
++ wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) {
++ for (link_id = 0; link_id < ARRAY_SIZE(wdev->links); link_id++) {
++ struct cfg80211_internal_bss *curbss;
++
++ curbss = wdev->links[link_id].client.current_bss;
++
++ if (WARN_ON(curbss)) {
++ cfg80211_unhold_bss(curbss);
++ cfg80211_put_bss(wdev->wiphy, &curbss->pub);
++ wdev->links[link_id].client.current_bss = NULL;
++ }
++ }
+ }
++
++ wdev->connected = false;
+ }
+
+ void cfg80211_unregister_wdev(struct wireless_dev *wdev)
+@@ -1233,7 +1245,7 @@ void __cfg80211_leave(struct cfg80211_registered_device *rdev,
+ break;
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- __cfg80211_stop_ap(rdev, dev, true);
++ __cfg80211_stop_ap(rdev, dev, -1, true);
+ break;
+ case NL80211_IFTYPE_OCB:
+ __cfg80211_leave_ocb(rdev, dev);
+@@ -1463,9 +1475,9 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
+ memcpy(&setup, &default_mesh_setup,
+ sizeof(setup));
+ /* back compat only needed for mesh_id */
+- setup.mesh_id = wdev->ssid;
+- setup.mesh_id_len = wdev->mesh_id_up_len;
+- if (wdev->mesh_id_up_len)
++ setup.mesh_id = wdev->u.mesh.id;
++ setup.mesh_id_len = wdev->u.mesh.id_up_len;
++ if (wdev->u.mesh.id_up_len)
+ __cfg80211_join_mesh(rdev, dev,
+ &setup,
+ &default_mesh_config);
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index 5436ada91b1a4..2c195067ddff6 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -307,6 +307,7 @@ void cfg80211_bss_expire(struct cfg80211_registered_device *rdev);
+ void cfg80211_bss_age(struct cfg80211_registered_device *rdev,
+ unsigned long age_secs);
+ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
++ unsigned int link,
+ struct ieee80211_channel *channel);
+
+ /* IBSS */
+@@ -353,9 +354,11 @@ int cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
+
+ /* AP */
+ int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, bool notify);
++ struct net_device *dev, int link,
++ bool notify);
+ int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, bool notify);
++ struct net_device *dev, int link,
++ bool notify);
+
+ /* MLME */
+ int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
+@@ -507,7 +510,11 @@ bool cfg80211_any_wiphy_oper_chan(struct wiphy *wiphy,
+ bool cfg80211_beaconing_iface_active(struct wireless_dev *wdev);
+
+ bool cfg80211_is_sub_chan(struct cfg80211_chan_def *chandef,
+- struct ieee80211_channel *chan);
++ struct ieee80211_channel *chan,
++ bool primary_only);
++bool cfg80211_wdev_on_sub_chan(struct wireless_dev *wdev,
++ struct ieee80211_channel *chan,
++ bool primary_only);
+
+ static inline unsigned int elapsed_jiffies_msecs(unsigned long start)
+ {
+diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c
+index 5d89eec2869a1..4935f94d1acc8 100644
+--- a/net/wireless/ibss.c
++++ b/net/wireless/ibss.c
+@@ -28,7 +28,7 @@ void __cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid,
+ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_ADHOC))
+ return;
+
+- if (!wdev->ssid_len)
++ if (!wdev->u.ibss.ssid_len)
+ return;
+
+ bss = cfg80211_get_bss(wdev->wiphy, channel, bssid, NULL, 0,
+@@ -37,13 +37,13 @@ void __cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid,
+ if (WARN_ON(!bss))
+ return;
+
+- if (wdev->current_bss) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
++ if (wdev->u.ibss.current_bss) {
++ cfg80211_unhold_bss(wdev->u.ibss.current_bss);
++ cfg80211_put_bss(wdev->wiphy, &wdev->u.ibss.current_bss->pub);
+ }
+
+ cfg80211_hold_bss(bss_from_pub(bss));
+- wdev->current_bss = bss_from_pub(bss);
++ wdev->u.ibss.current_bss = bss_from_pub(bss);
+
+ if (!(wdev->wiphy->flags & WIPHY_FLAG_HAS_STATIC_WEP))
+ cfg80211_upload_connect_keys(wdev);
+@@ -96,7 +96,7 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
+ lockdep_assert_held(&rdev->wiphy.mtx);
+ ASSERT_WDEV_LOCK(wdev);
+
+- if (wdev->ssid_len)
++ if (wdev->u.ibss.ssid_len)
+ return -EALREADY;
+
+ if (!params->basic_rates) {
+@@ -131,7 +131,7 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
+ kfree_sensitive(wdev->connect_keys);
+ wdev->connect_keys = connkeys;
+
+- wdev->chandef = params->chandef;
++ wdev->u.ibss.chandef = params->chandef;
+ if (connkeys) {
+ params->wep_keys = connkeys->params;
+ params->wep_tx_key = connkeys->def;
+@@ -146,8 +146,8 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
+ return err;
+ }
+
+- memcpy(wdev->ssid, params->ssid, params->ssid_len);
+- wdev->ssid_len = params->ssid_len;
++ memcpy(wdev->u.ibss.ssid, params->ssid, params->ssid_len);
++ wdev->u.ibss.ssid_len = params->ssid_len;
+
+ return 0;
+ }
+@@ -173,14 +173,14 @@ static void __cfg80211_clear_ibss(struct net_device *dev, bool nowext)
+ for (i = 0; i < 6; i++)
+ rdev_del_key(rdev, dev, i, false, NULL);
+
+- if (wdev->current_bss) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
++ if (wdev->u.ibss.current_bss) {
++ cfg80211_unhold_bss(wdev->u.ibss.current_bss);
++ cfg80211_put_bss(wdev->wiphy, &wdev->u.ibss.current_bss->pub);
+ }
+
+- wdev->current_bss = NULL;
+- wdev->ssid_len = 0;
+- memset(&wdev->chandef, 0, sizeof(wdev->chandef));
++ wdev->u.ibss.current_bss = NULL;
++ wdev->u.ibss.ssid_len = 0;
++ memset(&wdev->u.ibss.chandef, 0, sizeof(wdev->u.ibss.chandef));
+ #ifdef CONFIG_CFG80211_WEXT
+ if (!nowext)
+ wdev->wext.ibss.ssid_len = 0;
+@@ -205,7 +205,7 @@ int __cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
+
+ ASSERT_WDEV_LOCK(wdev);
+
+- if (!wdev->ssid_len)
++ if (!wdev->u.ibss.ssid_len)
+ return -ENOLINK;
+
+ err = rdev_leave_ibss(rdev, dev);
+@@ -339,7 +339,7 @@ int cfg80211_ibss_wext_siwfreq(struct net_device *dev,
+
+ wdev_lock(wdev);
+ err = 0;
+- if (wdev->ssid_len)
++ if (wdev->u.ibss.ssid_len)
+ err = __cfg80211_leave_ibss(rdev, dev, true);
+ wdev_unlock(wdev);
+
+@@ -374,8 +374,8 @@ int cfg80211_ibss_wext_giwfreq(struct net_device *dev,
+ return -EINVAL;
+
+ wdev_lock(wdev);
+- if (wdev->current_bss)
+- chan = wdev->current_bss->pub.channel;
++ if (wdev->u.ibss.current_bss)
++ chan = wdev->u.ibss.current_bss->pub.channel;
+ else if (wdev->wext.ibss.chandef.chan)
+ chan = wdev->wext.ibss.chandef.chan;
+ wdev_unlock(wdev);
+@@ -408,7 +408,7 @@ int cfg80211_ibss_wext_siwessid(struct net_device *dev,
+
+ wdev_lock(wdev);
+ err = 0;
+- if (wdev->ssid_len)
++ if (wdev->u.ibss.ssid_len)
+ err = __cfg80211_leave_ibss(rdev, dev, true);
+ wdev_unlock(wdev);
+
+@@ -419,8 +419,8 @@ int cfg80211_ibss_wext_siwessid(struct net_device *dev,
+ if (len > 0 && ssid[len - 1] == '\0')
+ len--;
+
+- memcpy(wdev->ssid, ssid, len);
+- wdev->wext.ibss.ssid = wdev->ssid;
++ memcpy(wdev->u.ibss.ssid, ssid, len);
++ wdev->wext.ibss.ssid = wdev->u.ibss.ssid;
+ wdev->wext.ibss.ssid_len = len;
+
+ wdev_lock(wdev);
+@@ -443,10 +443,10 @@ int cfg80211_ibss_wext_giwessid(struct net_device *dev,
+ data->flags = 0;
+
+ wdev_lock(wdev);
+- if (wdev->ssid_len) {
++ if (wdev->u.ibss.ssid_len) {
+ data->flags = 1;
+- data->length = wdev->ssid_len;
+- memcpy(ssid, wdev->ssid, data->length);
++ data->length = wdev->u.ibss.ssid_len;
++ memcpy(ssid, wdev->u.ibss.ssid, data->length);
+ } else if (wdev->wext.ibss.ssid && wdev->wext.ibss.ssid_len) {
+ data->flags = 1;
+ data->length = wdev->wext.ibss.ssid_len;
+@@ -494,7 +494,7 @@ int cfg80211_ibss_wext_siwap(struct net_device *dev,
+
+ wdev_lock(wdev);
+ err = 0;
+- if (wdev->ssid_len)
++ if (wdev->u.ibss.ssid_len)
+ err = __cfg80211_leave_ibss(rdev, dev, true);
+ wdev_unlock(wdev);
+
+@@ -527,8 +527,9 @@ int cfg80211_ibss_wext_giwap(struct net_device *dev,
+ ap_addr->sa_family = ARPHRD_ETHER;
+
+ wdev_lock(wdev);
+- if (wdev->current_bss)
+- memcpy(ap_addr->sa_data, wdev->current_bss->pub.bssid, ETH_ALEN);
++ if (wdev->u.ibss.current_bss)
++ memcpy(ap_addr->sa_data, wdev->u.ibss.current_bss->pub.bssid,
++ ETH_ALEN);
+ else if (wdev->wext.ibss.bssid)
+ memcpy(ap_addr->sa_data, wdev->wext.ibss.bssid, ETH_ALEN);
+ else
+diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c
+index e4e363138279c..59a3c5c092b1b 100644
+--- a/net/wireless/mesh.c
++++ b/net/wireless/mesh.c
+@@ -1,4 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
++/*
++ * Portions
++ * Copyright (C) 2022 Intel Corporation
++ */
+ #include <linux/ieee80211.h>
+ #include <linux/export.h>
+ #include <net/cfg80211.h>
+@@ -114,7 +118,7 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
+ setup->is_secure)
+ return -EOPNOTSUPP;
+
+- if (wdev->mesh_id_len)
++ if (wdev->u.mesh.id_len)
+ return -EALREADY;
+
+ if (!setup->mesh_id_len)
+@@ -125,7 +129,7 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
+
+ if (!setup->chandef.chan) {
+ /* if no channel explicitly given, use preset channel */
+- setup->chandef = wdev->preset_chandef;
++ setup->chandef = wdev->u.mesh.preset_chandef;
+ }
+
+ if (!setup->chandef.chan) {
+@@ -209,10 +213,10 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
+
+ err = rdev_join_mesh(rdev, dev, conf, setup);
+ if (!err) {
+- memcpy(wdev->ssid, setup->mesh_id, setup->mesh_id_len);
+- wdev->mesh_id_len = setup->mesh_id_len;
+- wdev->chandef = setup->chandef;
+- wdev->beacon_interval = setup->beacon_interval;
++ memcpy(wdev->u.mesh.id, setup->mesh_id, setup->mesh_id_len);
++ wdev->u.mesh.id_len = setup->mesh_id_len;
++ wdev->u.mesh.chandef = setup->chandef;
++ wdev->u.mesh.beacon_interval = setup->beacon_interval;
+ }
+
+ return err;
+@@ -241,15 +245,15 @@ int cfg80211_set_mesh_channel(struct cfg80211_registered_device *rdev,
+ err = rdev_libertas_set_mesh_channel(rdev, wdev->netdev,
+ chandef->chan);
+ if (!err)
+- wdev->chandef = *chandef;
++ wdev->u.mesh.chandef = *chandef;
+
+ return err;
+ }
+
+- if (wdev->mesh_id_len)
++ if (wdev->u.mesh.id_len)
+ return -EBUSY;
+
+- wdev->preset_chandef = *chandef;
++ wdev->u.mesh.preset_chandef = *chandef;
+ return 0;
+ }
+
+@@ -267,15 +271,16 @@ int __cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
+ if (!rdev->ops->leave_mesh)
+ return -EOPNOTSUPP;
+
+- if (!wdev->mesh_id_len)
++ if (!wdev->u.mesh.id_len)
+ return -ENOTCONN;
+
+ err = rdev_leave_mesh(rdev, dev);
+ if (!err) {
+ wdev->conn_owner_nlportid = 0;
+- wdev->mesh_id_len = 0;
+- wdev->beacon_interval = 0;
+- memset(&wdev->chandef, 0, sizeof(wdev->chandef));
++ wdev->u.mesh.id_len = 0;
++ wdev->u.mesh.beacon_interval = 0;
++ memset(&wdev->u.mesh.chandef, 0,
++ sizeof(wdev->u.mesh.chandef));
+ rdev_set_qos_map(rdev, dev, NULL);
+ cfg80211_sched_dfs_chan_update(rdev);
+ }
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index c8155a483ec24..b9204d0f1e55b 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -92,8 +92,7 @@ static void cfg80211_process_deauth(struct wireless_dev *wdev,
+
+ nl80211_send_deauth(rdev, wdev->netdev, buf, len, reconnect, GFP_KERNEL);
+
+- if (!wdev->current_bss ||
+- !ether_addr_equal(wdev->current_bss->pub.bssid, bssid))
++ if (!wdev->connected || !ether_addr_equal(wdev->u.client.connected_addr, bssid))
+ return;
+
+ __cfg80211_disconnected(wdev->netdev, NULL, 0, reason_code, from_ap);
+@@ -113,8 +112,8 @@ static void cfg80211_process_disassoc(struct wireless_dev *wdev,
+ nl80211_send_disassoc(rdev, wdev->netdev, buf, len, reconnect,
+ GFP_KERNEL);
+
+- if (WARN_ON(!wdev->current_bss ||
+- !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
++ if (WARN_ON(!wdev->connected ||
++ !ether_addr_equal(wdev->u.client.connected_addr, bssid)))
+ return;
+
+ __cfg80211_disconnected(wdev->netdev, NULL, 0, reason_code, from_ap);
+@@ -260,8 +259,8 @@ int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
+ if (!key || !key_len || key_idx < 0 || key_idx > 3)
+ return -EINVAL;
+
+- if (wdev->current_bss &&
+- ether_addr_equal(bssid, wdev->current_bss->pub.bssid))
++ if (wdev->connected &&
++ ether_addr_equal(bssid, wdev->u.client.connected_addr))
+ return -EALREADY;
+
+ req.bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len,
+@@ -322,9 +321,9 @@ int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
+
+ ASSERT_WDEV_LOCK(wdev);
+
+- if (wdev->current_bss &&
+- (!req->prev_bssid || !ether_addr_equal(wdev->current_bss->pub.bssid,
+- req->prev_bssid)))
++ if (wdev->connected &&
++ (!req->prev_bssid ||
++ !ether_addr_equal(wdev->u.client.connected_addr, req->prev_bssid)))
+ return -EALREADY;
+
+ cfg80211_oper_and_ht_capa(&req->ht_capa_mask,
+@@ -364,13 +363,13 @@ int cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
+ ASSERT_WDEV_LOCK(wdev);
+
+ if (local_state_change &&
+- (!wdev->current_bss ||
+- !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
++ (!wdev->connected ||
++ !ether_addr_equal(wdev->u.client.connected_addr, bssid)))
+ return 0;
+
+ if (ether_addr_equal(wdev->disconnect_bssid, bssid) ||
+- (wdev->current_bss &&
+- ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
++ (wdev->connected &&
++ ether_addr_equal(wdev->u.client.connected_addr, bssid)))
+ wdev->conn_owner_nlportid = 0;
+
+ return rdev_deauth(rdev, dev, &req);
+@@ -392,11 +391,12 @@ int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
+
+ ASSERT_WDEV_LOCK(wdev);
+
+- if (!wdev->current_bss)
++ if (!wdev->connected)
+ return -ENOTCONN;
+
+- if (ether_addr_equal(wdev->current_bss->pub.bssid, bssid))
+- req.bss = &wdev->current_bss->pub;
++ if (ether_addr_equal(wdev->links[0].client.current_bss->pub.bssid,
++ bssid))
++ req.bss = &wdev->links[0].client.current_bss->pub;
+ else
+ return -ENOTCONN;
+
+@@ -405,7 +405,7 @@ int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
+ return err;
+
+ /* driver should have reported the disassoc */
+- WARN_ON(wdev->current_bss);
++ WARN_ON(wdev->connected);
+ return 0;
+ }
+
+@@ -420,10 +420,10 @@ void cfg80211_mlme_down(struct cfg80211_registered_device *rdev,
+ if (!rdev->ops->deauth)
+ return;
+
+- if (!wdev->current_bss)
++ if (!wdev->connected)
+ return;
+
+- memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
++ memcpy(bssid, wdev->u.client.connected_addr, ETH_ALEN);
+ cfg80211_mlme_deauth(rdev, dev, bssid, NULL, 0,
+ WLAN_REASON_DEAUTH_LEAVING, false);
+ }
+@@ -676,28 +676,34 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
+
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_ADHOC:
++ /*
++ * check for IBSS DA must be done by driver as
++ * cfg80211 doesn't track the stations
++ */
++ if (!wdev->u.ibss.current_bss ||
++ !ether_addr_equal(wdev->u.ibss.current_bss->pub.bssid,
++ mgmt->bssid)) {
++ err = -ENOTCONN;
++ break;
++ }
++ break;
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+- if (!wdev->current_bss) {
++ if (!wdev->connected) {
+ err = -ENOTCONN;
+ break;
+ }
+
+- if (!ether_addr_equal(wdev->current_bss->pub.bssid,
++ /* FIXME: MLD may address this differently */
++
++ if (!ether_addr_equal(wdev->u.client.connected_addr,
+ mgmt->bssid)) {
+ err = -ENOTCONN;
+ break;
+ }
+
+- /*
+- * check for IBSS DA must be done by driver as
+- * cfg80211 doesn't track the stations
+- */
+- if (wdev->iftype == NL80211_IFTYPE_ADHOC)
+- break;
+-
+ /* for station, check that DA is the AP */
+- if (!ether_addr_equal(wdev->current_bss->pub.bssid,
++ if (!ether_addr_equal(wdev->u.client.connected_addr,
+ mgmt->da)) {
+ err = -ENOTCONN;
+ break;
+@@ -743,12 +749,12 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
+ if (!ieee80211_is_action(mgmt->frame_control) ||
+ mgmt->u.action.category != WLAN_CATEGORY_PUBLIC)
+ return -EINVAL;
+- if (!wdev->current_bss &&
++ if (!wdev->connected &&
+ !wiphy_ext_feature_isset(
+ &rdev->wiphy,
+ NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA))
+ return -EINVAL;
+- if (wdev->current_bss &&
++ if (wdev->connected &&
+ !wiphy_ext_feature_isset(
+ &rdev->wiphy,
+ NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA_CONNECTED))
+@@ -940,14 +946,15 @@ void cfg80211_cac_event(struct net_device *netdev,
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ unsigned long timeout;
+
++ /* not yet supported */
++ if (wdev->valid_links)
++ return;
++
+ trace_cfg80211_cac_event(netdev, event);
+
+ if (WARN_ON(!wdev->cac_started && event != NL80211_RADAR_CAC_STARTED))
+ return;
+
+- if (WARN_ON(!wdev->chandef.chan))
+- return;
+-
+ switch (event) {
+ case NL80211_RADAR_CAC_FINISHED:
+ timeout = wdev->cac_start_time +
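
Every mlme.c hunk applies the same mechanical translation: a check against the single wdev->current_bss pointer becomes a check of the connection flag plus the recorded peer address, which stays meaningful once an association can span several per-link BSSes. Before and after, in sketch form:

	/* before (single link): */
	if (!wdev->current_bss ||
	    !ether_addr_equal(wdev->current_bss->pub.bssid, bssid))
		return;

	/* after (MLO-aware): */
	if (!wdev->connected ||
	    !ether_addr_equal(wdev->u.client.connected_addr, bssid))
		return;
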
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 740b29481bc6f..fc1166819b768 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -792,6 +792,10 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ NL80211_EHT_MIN_CAPABILITY_LEN,
+ NL80211_EHT_MAX_CAPABILITY_LEN),
+ [NL80211_ATTR_DISABLE_EHT] = { .type = NLA_FLAG },
++ [NL80211_ATTR_MLO_LINKS] =
++ NLA_POLICY_NESTED_ARRAY(nl80211_policy),
++ [NL80211_ATTR_MLO_LINK_ID] =
++ NLA_POLICY_RANGE(NLA_U8, 0, IEEE80211_MLD_MAX_NUM_LINKS),
+ };
+
+ /* policy for the key attributes */
+@@ -1225,6 +1229,37 @@ static bool nl80211_put_txq_stats(struct sk_buff *msg,
+
+ /* netlink command implementations */
+
++/**
++ * nl80211_link_id - return link ID
++ * @attrs: attributes to look at
++ *
++ * Returns: the link ID or 0 if not given
++ *
++ * Note this function doesn't do any validation of the link
++ * ID validity wrt. links that were actually added, so it must
++ * be called only from ops with %NL80211_FLAG_MLO_VALID_LINK_ID
++ * or if additional validation is done.
++ */
++static unsigned int nl80211_link_id(struct nlattr **attrs)
++{
++ struct nlattr *linkid = attrs[NL80211_ATTR_MLO_LINK_ID];
++
++ if (!linkid)
++ return 0;
++
++ return nla_get_u8(linkid);
++}
++
++static int nl80211_link_id_or_invalid(struct nlattr **attrs)
++{
++ struct nlattr *linkid = attrs[NL80211_ATTR_MLO_LINK_ID];
++
++ if (!linkid)
++ return -1;
++
++ return nla_get_u8(linkid);
++}
++
+ struct key_parse {
+ struct key_params p;
+ int idx;
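
The two link-ID helpers above differ only in their default for a missing attribute, and that difference carries the MLO semantics: nl80211_link_id() folds "not given" into link 0, which is correct for non-MLD interfaces, while nl80211_link_id_or_invalid() returns -1 so callers can refuse ambiguous requests on an MLD. The channel-setting path further down uses the latter, roughly:

	int link_id = nl80211_link_id_or_invalid(info->attrs);

	if (link_id < 0) {
		if (wdev && wdev->valid_links)
			return -EINVAL;	/* an MLD needs an explicit link */
		link_id = 0;		/* legacy single-link interface */
	}
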
+@@ -1496,11 +1531,15 @@ static int nl80211_key_allowed(struct wireless_dev *wdev)
+ case NL80211_IFTYPE_MESH_POINT:
+ break;
+ case NL80211_IFTYPE_ADHOC:
++ if (wdev->u.ibss.current_bss)
++ return 0;
++ return -ENOLINK;
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+- if (!wdev->current_bss)
+- return -ENOLINK;
+- break;
++ /* for MLO, require driver validation of the link ID */
++ if (wdev->connected)
++ return 0;
++ return -ENOLINK;
+ case NL80211_IFTYPE_UNSPECIFIED:
+ case NL80211_IFTYPE_OCB:
+ case NL80211_IFTYPE_MONITOR:
+@@ -3232,12 +3271,14 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev,
+
+ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
+ struct net_device *dev,
+- struct genl_info *info)
++ struct genl_info *info,
++ int _link_id)
+ {
+ struct cfg80211_chan_def chandef;
+ int result;
+ enum nl80211_iftype iftype = NL80211_IFTYPE_MONITOR;
+ struct wireless_dev *wdev = NULL;
++ int link_id = _link_id;
+
+ if (dev)
+ wdev = dev->ieee80211_ptr;
+@@ -3246,6 +3287,12 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
+ if (wdev)
+ iftype = wdev->iftype;
+
++ if (link_id < 0) {
++ if (wdev && wdev->valid_links)
++ return -EINVAL;
++ link_id = 0;
++ }
++
+ result = nl80211_parse_chandef(rdev, info, &chandef);
+ if (result)
+ return result;
+@@ -3254,49 +3301,48 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+ if (!cfg80211_reg_can_beacon_relax(&rdev->wiphy, &chandef,
+- iftype)) {
+- result = -EINVAL;
+- break;
+- }
+- if (wdev->beacon_interval) {
++ iftype))
++ return -EINVAL;
++ if (wdev->links[link_id].ap.beacon_interval) {
++ struct ieee80211_channel *cur_chan;
++
+ if (!dev || !rdev->ops->set_ap_chanwidth ||
+ !(rdev->wiphy.features &
+- NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE)) {
+- result = -EBUSY;
+- break;
+- }
++ NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE))
++ return -EBUSY;
+
+ /* Only allow dynamic channel width changes */
+- if (chandef.chan != wdev->preset_chandef.chan) {
+- result = -EBUSY;
+- break;
+- }
+- result = rdev_set_ap_chanwidth(rdev, dev, &chandef);
++ cur_chan = wdev->links[link_id].ap.chandef.chan;
++ if (chandef.chan != cur_chan)
++ return -EBUSY;
++
++ result = rdev_set_ap_chanwidth(rdev, dev, link_id,
++ &chandef);
+ if (result)
+- break;
++ return result;
++ wdev->links[link_id].ap.chandef = chandef;
++ } else {
++ wdev->u.ap.preset_chandef = chandef;
+ }
+- wdev->preset_chandef = chandef;
+- result = 0;
+- break;
++ return 0;
+ case NL80211_IFTYPE_MESH_POINT:
+- result = cfg80211_set_mesh_channel(rdev, wdev, &chandef);
+- break;
++ return cfg80211_set_mesh_channel(rdev, wdev, &chandef);
+ case NL80211_IFTYPE_MONITOR:
+- result = cfg80211_set_monitor_channel(rdev, &chandef);
+- break;
++ return cfg80211_set_monitor_channel(rdev, &chandef);
+ default:
+- result = -EINVAL;
++ break;
+ }
+
+- return result;
++ return -EINVAL;
+ }
+
+ static int nl80211_set_channel(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ int link_id = nl80211_link_id_or_invalid(info->attrs);
+ struct net_device *netdev = info->user_ptr[1];
+
+- return __nl80211_set_channel(rdev, netdev, info);
++ return __nl80211_set_channel(rdev, netdev, info, link_id);
+ }
+
+ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
+@@ -3411,7 +3457,7 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
+ result = __nl80211_set_channel(
+ rdev,
+ nl80211_can_set_dev_channel(wdev) ? netdev : NULL,
+- info);
++ info, -1);
+ if (result)
+ goto out;
+ }
+@@ -3696,15 +3742,13 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ nla_put_u8(msg, NL80211_ATTR_4ADDR, wdev->use_4addr))
+ goto nla_put_failure;
+
+- if (rdev->ops->get_channel) {
+- int ret;
++ if (rdev->ops->get_channel && !wdev->valid_links) {
+ struct cfg80211_chan_def chandef = {};
++ int ret;
+
+- ret = rdev_get_channel(rdev, wdev, &chandef);
+- if (ret == 0) {
+- if (nl80211_send_chandef(msg, &chandef))
+- goto nla_put_failure;
+- }
++ ret = rdev_get_channel(rdev, wdev, 0, &chandef);
++ if (ret == 0 && nl80211_send_chandef(msg, &chandef))
++ goto nla_put_failure;
+ }
+
+ if (rdev->ops->get_tx_power) {
+@@ -3721,27 +3765,24 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- if (wdev->ssid_len &&
+- nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid))
++ if (wdev->u.ap.ssid_len &&
++ nla_put(msg, NL80211_ATTR_SSID, wdev->u.ap.ssid_len,
++ wdev->u.ap.ssid))
+ goto nla_put_failure_locked;
+ break;
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+- case NL80211_IFTYPE_ADHOC: {
+- const struct element *ssid_elem;
+-
+- if (!wdev->current_bss)
+- break;
+- rcu_read_lock();
+- ssid_elem = ieee80211_bss_get_elem(&wdev->current_bss->pub,
+- WLAN_EID_SSID);
+- if (ssid_elem &&
+- nla_put(msg, NL80211_ATTR_SSID, ssid_elem->datalen,
+- ssid_elem->data))
+- goto nla_put_failure_rcu_locked;
+- rcu_read_unlock();
++ if (wdev->u.client.ssid_len &&
++ nla_put(msg, NL80211_ATTR_SSID, wdev->u.client.ssid_len,
++ wdev->u.client.ssid))
++ goto nla_put_failure_locked;
++ break;
++ case NL80211_IFTYPE_ADHOC:
++ if (wdev->u.ibss.ssid_len &&
++ nla_put(msg, NL80211_ATTR_SSID, wdev->u.ibss.ssid_len,
++ wdev->u.ibss.ssid))
++ goto nla_put_failure_locked;
+ break;
+- }
+ default:
+ /* nothing */
+ break;
+@@ -3761,8 +3802,6 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ genlmsg_end(msg, hdr);
+ return 0;
+
+- nla_put_failure_rcu_locked:
+- rcu_read_unlock();
+ nla_put_failure_locked:
+ wdev_unlock(wdev);
+ nla_put_failure:
+@@ -4014,10 +4053,11 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
+ wdev_lock(wdev);
+ BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN !=
+ IEEE80211_MAX_MESH_ID_LEN);
+- wdev->mesh_id_up_len =
++ wdev->u.mesh.id_up_len =
+ nla_len(info->attrs[NL80211_ATTR_MESH_ID]);
+- memcpy(wdev->ssid, nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
+- wdev->mesh_id_up_len);
++ memcpy(wdev->u.mesh.id,
++ nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
++ wdev->u.mesh.id_up_len);
+ wdev_unlock(wdev);
+ }
+
+@@ -4122,10 +4162,11 @@ static int _nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
+ wdev_lock(wdev);
+ BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN !=
+ IEEE80211_MAX_MESH_ID_LEN);
+- wdev->mesh_id_up_len =
++ wdev->u.mesh.id_up_len =
+ nla_len(info->attrs[NL80211_ATTR_MESH_ID]);
+- memcpy(wdev->ssid, nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
+- wdev->mesh_id_up_len);
++ memcpy(wdev->u.mesh.id,
++ nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
++ wdev->u.mesh.id_up_len);
+ wdev_unlock(wdev);
+ break;
+ case NL80211_IFTYPE_NAN:
+@@ -4662,7 +4703,7 @@ static int nl80211_set_mac_acl(struct sk_buff *skb, struct genl_info *info)
+ dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO)
+ return -EOPNOTSUPP;
+
+- if (!dev->ieee80211_ptr->beacon_interval)
++ if (!dev->ieee80211_ptr->links[0].ap.beacon_interval)
+ return -EINVAL;
+
+ acl = parse_acl_data(&rdev->wiphy, info);
+@@ -4818,14 +4859,24 @@ static void he_build_mcs_mask(u16 he_mcs_map,
+ }
+ }
+
+-static u16 he_get_txmcsmap(struct genl_info *info,
++static u16 he_get_txmcsmap(struct genl_info *info, unsigned int link_id,
+ const struct ieee80211_sta_he_cap *he_cap)
+ {
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+- __le16 tx_mcs;
++ struct cfg80211_chan_def *chandef;
++ __le16 tx_mcs;
+
+- switch (wdev->chandef.width) {
++ chandef = wdev_chandef(wdev, link_id);
++ if (!chandef) {
++ /*
++ * This is probably broken, but we never maintained
++ * a chandef in these cases, so it always was.
++ */
++ return le16_to_cpu(he_cap->he_mcs_nss_supp.tx_mcs_80);
++ }
++
++ switch (chandef->width) {
+ case NL80211_CHAN_WIDTH_80P80:
+ tx_mcs = he_cap->he_mcs_nss_supp.tx_mcs_80p80;
+ break;
+@@ -4836,6 +4887,7 @@ static u16 he_get_txmcsmap(struct genl_info *info,
+ tx_mcs = he_cap->he_mcs_nss_supp.tx_mcs_80;
+ break;
+ }
++
+ return le16_to_cpu(tx_mcs);
+ }
+
+@@ -4843,7 +4895,8 @@ static bool he_set_mcs_mask(struct genl_info *info,
+ struct wireless_dev *wdev,
+ struct ieee80211_supported_band *sband,
+ struct nl80211_txrate_he *txrate,
+- u16 mcs[NL80211_HE_NSS_MAX])
++ u16 mcs[NL80211_HE_NSS_MAX],
++ unsigned int link_id)
+ {
+ const struct ieee80211_sta_he_cap *he_cap;
+ u16 tx_mcs_mask[NL80211_HE_NSS_MAX] = {};
+@@ -4856,7 +4909,7 @@ static bool he_set_mcs_mask(struct genl_info *info,
+
+ memset(mcs, 0, sizeof(u16) * NL80211_HE_NSS_MAX);
+
+- tx_mcs_map = he_get_txmcsmap(info, he_cap);
++ tx_mcs_map = he_get_txmcsmap(info, link_id, he_cap);
+
+ /* Build he_mcs_mask from HE capabilities */
+ he_build_mcs_mask(tx_mcs_map, tx_mcs_mask);
+@@ -4876,7 +4929,8 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info,
+ enum nl80211_attrs attr,
+ struct cfg80211_bitrate_mask *mask,
+ struct net_device *dev,
+- bool default_all_enabled)
++ bool default_all_enabled,
++ unsigned int link_id)
+ {
+ struct nlattr *tb[NL80211_TXRATE_MAX + 1];
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+@@ -4913,7 +4967,7 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info,
+ if (!he_cap)
+ continue;
+
+- he_tx_mcs_map = he_get_txmcsmap(info, he_cap);
++ he_tx_mcs_map = he_get_txmcsmap(info, link_id, he_cap);
+ he_build_mcs_mask(he_tx_mcs_map, mask->control[i].he_mcs);
+
+ mask->control[i].he_gi = 0xFF;
+@@ -4978,7 +5032,8 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info,
+ if (tb[NL80211_TXRATE_HE] &&
+ !he_set_mcs_mask(info, wdev, sband,
+ nla_data(tb[NL80211_TXRATE_HE]),
+- mask->control[band].he_mcs))
++ mask->control[band].he_mcs,
++ link_id))
+ return -EINVAL;
+
+ if (tb[NL80211_TXRATE_HE_GI])
+@@ -5215,6 +5270,8 @@ static int nl80211_parse_beacon(struct cfg80211_registered_device *rdev,
+
+ memset(bcn, 0, sizeof(*bcn));
+
++ bcn->link_id = nl80211_link_id(attrs);
++
+ if (attrs[NL80211_ATTR_BEACON_HEAD]) {
+ bcn->head = nla_data(attrs[NL80211_ATTR_BEACON_HEAD]);
+ bcn->head_len = nla_len(attrs[NL80211_ATTR_BEACON_HEAD]);
+@@ -5468,22 +5525,20 @@ static bool nl80211_get_ap_channel(struct cfg80211_registered_device *rdev,
+ struct cfg80211_ap_settings *params)
+ {
+ struct wireless_dev *wdev;
+- bool ret = false;
+
+ list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
+ if (wdev->iftype != NL80211_IFTYPE_AP &&
+ wdev->iftype != NL80211_IFTYPE_P2P_GO)
+ continue;
+
+- if (!wdev->preset_chandef.chan)
++ if (!wdev->u.ap.preset_chandef.chan)
+ continue;
+
+- params->chandef = wdev->preset_chandef;
+- ret = true;
+- break;
++ params->chandef = wdev->u.ap.preset_chandef;
++ return true;
+ }
+
+- return ret;
++ return false;
+ }
+
+ static bool nl80211_valid_auth_type(struct cfg80211_registered_device *rdev,
+@@ -5541,6 +5596,7 @@ static bool nl80211_valid_auth_type(struct cfg80211_registered_device *rdev,
+ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct cfg80211_ap_settings *params;
+@@ -5553,7 +5609,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ if (!rdev->ops->start_ap)
+ return -EOPNOTSUPP;
+
+- if (wdev->beacon_interval)
++ if (wdev->links[link_id].ap.beacon_interval)
+ return -EALREADY;
+
+ /* these are required for START_AP */
+@@ -5595,6 +5651,18 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ err = -EINVAL;
+ goto out;
+ }
++
++ if (wdev->u.ap.ssid_len &&
++ (wdev->u.ap.ssid_len != params->ssid_len ||
++ memcmp(wdev->u.ap.ssid, params->ssid, params->ssid_len))) {
++ /* require identical SSID for MLO */
++ err = -EINVAL;
++ goto out;
++ }
++ } else if (wdev->valid_links) {
++ /* require SSID for MLO */
++ err = -EINVAL;
++ goto out;
+ }
+
+ if (info->attrs[NL80211_ATTR_HIDDEN_SSID])
+@@ -5662,8 +5730,12 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ err = nl80211_parse_chandef(rdev, info, &params->chandef);
+ if (err)
+ goto out;
+- } else if (wdev->preset_chandef.chan) {
+- params->chandef = wdev->preset_chandef;
++ } else if (wdev->valid_links) {
++ /* with MLD need to specify the channel configuration */
++ err = -EINVAL;
++ goto out;
++ } else if (wdev->u.ap.preset_chandef.chan) {
++ params->chandef = wdev->u.ap.preset_chandef;
+ } else if (!nl80211_get_ap_channel(rdev, params)) {
+ err = -EINVAL;
+ goto out;
+@@ -5675,18 +5747,20 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ goto out;
+ }
+
++ wdev_lock(wdev);
++
+ if (info->attrs[NL80211_ATTR_TX_RATES]) {
+ err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
+ NL80211_ATTR_TX_RATES,
+ &params->beacon_rate,
+- dev, false);
++ dev, false, link_id);
+ if (err)
+- goto out;
++ goto out_unlock;
+
+ err = validate_beacon_tx_rate(rdev, params->chandef.chan->band,
+ &params->beacon_rate);
+ if (err)
+- goto out;
++ goto out_unlock;
+ }
+
+ if (info->attrs[NL80211_ATTR_SMPS_MODE]) {
+@@ -5699,19 +5773,19 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_STATIC_SMPS)) {
+ err = -EINVAL;
+- goto out;
++ goto out_unlock;
+ }
+ break;
+ case NL80211_SMPS_DYNAMIC:
+ if (!(rdev->wiphy.features &
+ NL80211_FEATURE_DYNAMIC_SMPS)) {
+ err = -EINVAL;
+- goto out;
++ goto out_unlock;
+ }
+ break;
+ default:
+ err = -EINVAL;
+- goto out;
++ goto out_unlock;
+ }
+ } else {
+ params->smps_mode = NL80211_SMPS_OFF;
+@@ -5720,7 +5794,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ params->pbss = nla_get_flag(info->attrs[NL80211_ATTR_PBSS]);
+ if (params->pbss && !rdev->wiphy.bands[NL80211_BAND_60GHZ]) {
+ err = -EOPNOTSUPP;
+- goto out;
++ goto out_unlock;
+ }
+
+ if (info->attrs[NL80211_ATTR_ACL_POLICY]) {
+@@ -5728,7 +5802,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ if (IS_ERR(params->acl)) {
+ err = PTR_ERR(params->acl);
+ params->acl = NULL;
+- goto out;
++ goto out_unlock;
+ }
+ }
+
+@@ -5740,7 +5814,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ info->attrs[NL80211_ATTR_HE_OBSS_PD],
+ &params->he_obss_pd);
+ if (err)
+- goto out;
++ goto out_unlock;
+ }
+
+ if (info->attrs[NL80211_ATTR_FILS_DISCOVERY]) {
+@@ -5748,7 +5822,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ info->attrs[NL80211_ATTR_FILS_DISCOVERY],
+ params);
+ if (err)
+- goto out;
++ goto out_unlock;
+ }
+
+ if (info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP]) {
+@@ -5756,7 +5830,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP],
+ params);
+ if (err)
+- goto out;
++ goto out_unlock;
+ }
+
+ if (info->attrs[NL80211_ATTR_MBSSID_CONFIG]) {
+@@ -5767,7 +5841,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ params->beacon.mbssid_ies->cnt :
+ 0);
+ if (err)
+- goto out;
++ goto out_unlock;
+ }
+
+ nl80211_calculate_ap_params(params);
+@@ -5778,20 +5852,28 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ else if (info->attrs[NL80211_ATTR_EXTERNAL_AUTH_SUPPORT])
+ params->flags |= NL80211_AP_SETTINGS_EXTERNAL_AUTH_SUPPORT;
+
+- wdev_lock(wdev);
++ if (wdev->conn_owner_nlportid &&
++ info->attrs[NL80211_ATTR_SOCKET_OWNER] &&
++ wdev->conn_owner_nlportid != info->snd_portid) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++
++ /* FIXME: validate MLO/link-id against driver capabilities */
++
+ err = rdev_start_ap(rdev, dev, params);
+ if (!err) {
+- wdev->preset_chandef = params->chandef;
+- wdev->beacon_interval = params->beacon_interval;
+- wdev->chandef = params->chandef;
+- wdev->ssid_len = params->ssid_len;
+- memcpy(wdev->ssid, params->ssid, wdev->ssid_len);
++ wdev->links[link_id].ap.beacon_interval = params->beacon_interval;
++ wdev->links[link_id].ap.chandef = params->chandef;
++ wdev->u.ap.ssid_len = params->ssid_len;
++ memcpy(wdev->u.ap.ssid, params->ssid,
++ params->ssid_len);
+
+ if (info->attrs[NL80211_ATTR_SOCKET_OWNER])
+ wdev->conn_owner_nlportid = info->snd_portid;
+ }
++out_unlock:
+ wdev_unlock(wdev);
+-
+ out:
+ kfree(params->acl);
+ kfree(params->beacon.mbssid_ies);
+@@ -5807,6 +5889,7 @@ out:
+ static int nl80211_set_beacon(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct cfg80211_beacon_data params;
+@@ -5819,7 +5902,7 @@ static int nl80211_set_beacon(struct sk_buff *skb, struct genl_info *info)
+ if (!rdev->ops->change_beacon)
+ return -EOPNOTSUPP;
+
+- if (!wdev->beacon_interval)
++ if (!wdev->links[link_id].ap.beacon_interval)
+ return -EINVAL;
+
+ err = nl80211_parse_beacon(rdev, info->attrs, &params);
+@@ -5838,9 +5921,10 @@ out:
+ static int nl80211_stop_ap(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct net_device *dev = info->user_ptr[1];
+
+- return cfg80211_stop_ap(rdev, dev, false);
++ return cfg80211_stop_ap(rdev, dev, link_id, false);
+ }
+
+ static const struct nla_policy sta_flags_policy[NL80211_STA_FLAG_MAX + 1] = {
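The link_id argument added to the stop-AP path follows a convention best read
off the call sites in this patch; in summary (the -1 form appears in the
util.c and sme.c hunks further down):

	/* stop a single AP link; link_id was validated by pre_doit */
	cfg80211_stop_ap(rdev, dev, link_id, false);

	/* tear down every link, e.g. on interface-type change or owner
	 * auto-disconnect; -1 selects all valid links */
	__cfg80211_stop_ap(rdev, wdev->netdev, -1, false);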
+@@ -7590,7 +7674,7 @@ static int nl80211_get_mesh_config(struct sk_buff *skb,
+
+ wdev_lock(wdev);
+ /* If not connected, get default parameters */
+- if (!wdev->mesh_id_len)
++ if (!wdev->u.mesh.id_len)
+ memcpy(&cur_params, &default_mesh_config, sizeof(cur_params));
+ else
+ err = rdev_get_mesh_config(rdev, dev, &cur_params);
+@@ -7971,7 +8055,7 @@ static int nl80211_update_mesh_config(struct sk_buff *skb,
+ return err;
+
+ wdev_lock(wdev);
+- if (!wdev->mesh_id_len)
++ if (!wdev->u.mesh.id_len)
+ err = -ENOLINK;
+
+ if (!err)
+@@ -8463,14 +8547,44 @@ int nl80211_parse_random_mac(struct nlattr **attrs,
+ return 0;
+ }
+
+-static bool cfg80211_off_channel_oper_allowed(struct wireless_dev *wdev)
++static bool cfg80211_off_channel_oper_allowed(struct wireless_dev *wdev,
++ struct ieee80211_channel *chan)
+ {
++ unsigned int link_id;
++ bool all_ok = true;
++
+ ASSERT_WDEV_LOCK(wdev);
+
+ if (!cfg80211_beaconing_iface_active(wdev))
+ return true;
+
+- if (!(wdev->chandef.chan->flags & IEEE80211_CHAN_RADAR))
++ /*
++ * FIXME: check if we have a free HW resource/link for chan
++ *
++ * This, as well as the FIXME below, requires knowing the link
++ * capabilities of the hardware.
++ */
++
++ /* we cannot leave radar channels */
++ for_each_valid_link(wdev, link_id) {
++ struct cfg80211_chan_def *chandef;
++
++ chandef = wdev_chandef(wdev, link_id);
++ if (!chandef)
++ continue;
++
++ /*
++ * FIXME: don't require all_ok, but rather check only the
++ * correct HW resource/link onto which 'chan' falls,
++ * as only that link leaves the channel for doing
++ * the off-channel operation.
++ */
++
++ if (chandef->chan->flags & IEEE80211_CHAN_RADAR)
++ all_ok = false;
++ }
++
++ if (all_ok)
+ return true;
+
+ return regulatory_pre_cac_allowed(wdev->wiphy);
+@@ -8553,7 +8667,7 @@ nl80211_check_scan_flags(struct wiphy *wiphy, struct wireless_dev *wdev,
+ int err;
+
+ if (!(wiphy->features & randomness_flag) ||
+- (wdev && wdev->current_bss))
++ (wdev && wdev->connected))
+ return -EOPNOTSUPP;
+
+ err = nl80211_parse_random_mac(attrs, mac_addr, mac_addr_mask);
+@@ -8690,17 +8804,14 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info)
+ request->n_channels = i;
+
+ wdev_lock(wdev);
+- if (!cfg80211_off_channel_oper_allowed(wdev)) {
+- struct ieee80211_channel *chan;
++ for (i = 0; i < request->n_channels; i++) {
++ struct ieee80211_channel *chan = request->channels[i];
+
+- if (request->n_channels != 1) {
+- wdev_unlock(wdev);
+- err = -EBUSY;
+- goto out_free;
+- }
++ /* if we can go off-channel to the target channel we're good */
++ if (cfg80211_off_channel_oper_allowed(wdev, chan))
++ continue;
+
+- chan = request->channels[0];
+- if (chan->center_freq != wdev->chandef.chan->center_freq) {
++ if (!cfg80211_wdev_on_sub_chan(wdev, chan, true)) {
+ wdev_unlock(wdev);
+ err = -EBUSY;
+ goto out_free;
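The scan restriction changes shape here: instead of permitting at most one
channel equal to the operating channel, every requested channel is now checked
individually. Condensed (cfg80211_wdev_on_sub_chan() presumably tests whether
chan lies within one of the wdev's operating chandefs):

	/* for each requested channel chan:
	 *	ok = cfg80211_off_channel_oper_allowed(wdev, chan) ||
	 *	     cfg80211_wdev_on_sub_chan(wdev, chan, true);
	 * one channel failing both aborts the whole scan with -EBUSY */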
+@@ -9445,7 +9556,7 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+
+ err = rdev_start_radar_detection(rdev, dev, &chandef, cac_time_ms);
+ if (!err) {
+- wdev->chandef = chandef;
++ wdev->links[0].ap.chandef = chandef;
+ wdev->cac_started = true;
+ wdev->cac_start_time = jiffies;
+ wdev->cac_time_ms = cac_time_ms;
+@@ -9513,6 +9624,7 @@ static int nl80211_notify_radar_detection(struct sk_buff *skb,
+ static int nl80211_channel_switch(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct cfg80211_csa_settings params;
+@@ -9539,15 +9651,15 @@ static int nl80211_channel_switch(struct sk_buff *skb, struct genl_info *info)
+ need_handle_dfs_flag = false;
+
+ /* useless if AP is not running */
+- if (!wdev->beacon_interval)
++ if (!wdev->links[link_id].ap.beacon_interval)
+ return -ENOTCONN;
+ break;
+ case NL80211_IFTYPE_ADHOC:
+- if (!wdev->ssid_len)
++ if (!wdev->u.ibss.ssid_len)
+ return -ENOTCONN;
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+- if (!wdev->mesh_id_len)
++ if (!wdev->u.mesh.id_len)
+ return -ENOTCONN;
+ break;
+ default:
+@@ -9718,6 +9830,7 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
+ {
+ struct cfg80211_bss *res = &intbss->pub;
+ const struct cfg80211_bss_ies *ies;
++ unsigned int link_id;
+ void *hdr;
+ struct nlattr *bss;
+
+@@ -9822,13 +9935,15 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_P2P_CLIENT:
+ case NL80211_IFTYPE_STATION:
+- if (intbss == wdev->current_bss &&
+- nla_put_u32(msg, NL80211_BSS_STATUS,
+- NL80211_BSS_STATUS_ASSOCIATED))
+- goto nla_put_failure;
++ for_each_valid_link(wdev, link_id) {
++ if (intbss == wdev->links[link_id].client.current_bss &&
++ nla_put_u32(msg, NL80211_BSS_STATUS,
++ NL80211_BSS_STATUS_ASSOCIATED))
++ goto nla_put_failure;
++ }
+ break;
+ case NL80211_IFTYPE_ADHOC:
+- if (intbss == wdev->current_bss &&
++ if (intbss == wdev->u.ibss.current_bss &&
+ nla_put_u32(msg, NL80211_BSS_STATUS,
+ NL80211_BSS_STATUS_IBSS_JOINED))
+ goto nla_put_failure;
+@@ -10012,7 +10127,9 @@ static int nl80211_dump_survey(struct sk_buff *skb, struct netlink_callback *cb)
+ }
+
+ while (1) {
++ wdev_lock(wdev);
+ res = rdev_dump_survey(rdev, wdev->netdev, survey_idx, &survey);
++ wdev_unlock(wdev);
+ if (res == -ENOENT)
+ break;
+ if (res)
+@@ -11362,7 +11479,7 @@ static int nl80211_update_connect_params(struct sk_buff *skb,
+ }
+
+ wdev_lock(dev->ieee80211_ptr);
+- if (!wdev->current_bss)
++ if (!wdev->connected)
+ ret = -ENOLINK;
+ else
+ ret = rdev_update_connect_params(rdev, dev, &connect, changed);
+@@ -11575,9 +11692,9 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
+ struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct wireless_dev *wdev = info->user_ptr[1];
+ struct cfg80211_chan_def chandef;
+- const struct cfg80211_chan_def *compat_chandef;
+ struct sk_buff *msg;
+ void *hdr;
+ u64 cookie;
+@@ -11607,10 +11724,22 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
+ return err;
+
+ wdev_lock(wdev);
+- if (!cfg80211_off_channel_oper_allowed(wdev) &&
+- !cfg80211_chandef_identical(&wdev->chandef, &chandef)) {
+- compat_chandef = cfg80211_chandef_compatible(&wdev->chandef,
+- &chandef);
++ if (!cfg80211_off_channel_oper_allowed(wdev, chandef.chan)) {
++ const struct cfg80211_chan_def *oper_chandef, *compat_chandef;
++
++ oper_chandef = wdev_chandef(wdev, link_id);
++
++ if (WARN_ON(!oper_chandef)) {
++ /* cannot happen since we must beacon to get here */
++ WARN_ON(1);
++ wdev_unlock(wdev);
++ return -EBUSY;
++ }
++
++ /* note: returns first one if identical chandefs */
++ compat_chandef = cfg80211_chandef_compatible(&chandef,
++ oper_chandef);
++
+ if (compat_chandef != &chandef) {
+ wdev_unlock(wdev);
+ return -EBUSY;
+@@ -11672,6 +11801,7 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb,
+ struct genl_info *info)
+ {
+ struct cfg80211_bitrate_mask mask;
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+@@ -11683,11 +11813,11 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb,
+ wdev_lock(wdev);
+ err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
+ NL80211_ATTR_TX_RATES, &mask,
+- dev, true);
++ dev, true, link_id);
+ if (err)
+ goto out;
+
+- err = rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++ err = rdev_set_bitrate_mask(rdev, dev, link_id, NULL, &mask);
+ out:
+ wdev_unlock(wdev);
+ return err;
+@@ -11812,7 +11942,8 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
+ return -EINVAL;
+
+ wdev_lock(wdev);
+- if (params.offchan && !cfg80211_off_channel_oper_allowed(wdev)) {
++ if (params.offchan &&
++ !cfg80211_off_channel_oper_allowed(wdev, chandef.chan)) {
+ wdev_unlock(wdev);
+ return -EBUSY;
+ }
+@@ -12030,12 +12161,13 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ * connection is established and enough beacons received to calculate
+ * the average.
+ */
+- if (!wdev->cqm_config->last_rssi_event_value && wdev->current_bss &&
++ if (!wdev->cqm_config->last_rssi_event_value &&
++ wdev->links[0].client.current_bss &&
+ rdev->ops->get_station) {
+ struct station_info sinfo = {};
+ u8 *mac_addr;
+
+- mac_addr = wdev->current_bss->pub.bssid;
++ mac_addr = wdev->links[0].client.current_bss->pub.bssid;
+
+ err = rdev_get_station(rdev, dev, mac_addr, &sinfo);
+ if (err)
+@@ -12298,7 +12430,7 @@ static int nl80211_join_mesh(struct sk_buff *skb, struct genl_info *info)
+ err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
+ NL80211_ATTR_TX_RATES,
+ &setup.beacon_rate,
+- dev, false);
++ dev, false, 0);
+ if (err)
+ return err;
+
+@@ -13268,7 +13400,7 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info)
+ rekey_data.akm = nla_get_u32(tb[NL80211_REKEY_DATA_AKM]);
+
+ wdev_lock(wdev);
+- if (!wdev->current_bss) {
++ if (!wdev->connected) {
+ err = -ENOTCONN;
+ goto out;
+ }
+@@ -14537,7 +14669,7 @@ static int nl80211_add_tx_ts(struct sk_buff *skb, struct genl_info *info)
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+- if (wdev->current_bss)
++ if (wdev->connected)
+ break;
+ err = -ENOTCONN;
+ goto out;
+@@ -14710,13 +14842,13 @@ static int nl80211_set_pmk(struct sk_buff *skb, struct genl_info *info)
+ return -EINVAL;
+
+ wdev_lock(wdev);
+- if (!wdev->current_bss) {
++ if (!wdev->connected) {
+ ret = -ENOTCONN;
+ goto out;
+ }
+
+ pmk_conf.aa = nla_data(info->attrs[NL80211_ATTR_MAC]);
+- if (memcmp(pmk_conf.aa, wdev->current_bss->pub.bssid, ETH_ALEN)) {
++ if (memcmp(pmk_conf.aa, wdev->u.client.connected_addr, ETH_ALEN)) {
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -14844,9 +14976,13 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info)
+ case NL80211_IFTYPE_MESH_POINT:
+ break;
+ case NL80211_IFTYPE_ADHOC:
++ if (wdev->u.ibss.current_bss)
++ break;
++ err = -ENOTCONN;
++ goto out;
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_P2P_CLIENT:
+- if (wdev->current_bss)
++ if (wdev->connected)
+ break;
+ err = -ENOTCONN;
+ goto out;
+@@ -14882,12 +15018,14 @@ static int nl80211_get_ftm_responder_stats(struct sk_buff *skb,
+ struct net_device *dev = info->user_ptr[1];
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct cfg80211_ftm_responder_stats ftm_stats = {};
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct sk_buff *msg;
+ void *hdr;
+ struct nlattr *ftm_stats_attr;
+ int err;
+
+- if (wdev->iftype != NL80211_IFTYPE_AP || !wdev->beacon_interval)
++ if (wdev->iftype != NL80211_IFTYPE_AP ||
++ !wdev->links[link_id].ap.beacon_interval)
+ return -EOPNOTSUPP;
+
+ err = rdev_get_ftm_responder_stats(rdev, dev, &ftm_stats);
+@@ -15017,7 +15155,8 @@ static int nl80211_probe_mesh_link(struct sk_buff *skb, struct genl_info *info)
+ static int parse_tid_conf(struct cfg80211_registered_device *rdev,
+ struct nlattr *attrs[], struct net_device *dev,
+ struct cfg80211_tid_cfg *tid_conf,
+- struct genl_info *info, const u8 *peer)
++ struct genl_info *info, const u8 *peer,
++ unsigned int link_id)
+ {
+ struct netlink_ext_ack *extack = info->extack;
+ u64 mask;
+@@ -15092,7 +15231,7 @@ static int parse_tid_conf(struct cfg80211_registered_device *rdev,
+ attr = NL80211_TID_CONFIG_ATTR_TX_RATE;
+ err = nl80211_parse_tx_bitrate_mask(info, attrs, attr,
+ &tid_conf->txrate_mask, dev,
+- true);
++ true, link_id);
+ if (err)
+ return err;
+
+@@ -15119,6 +15258,7 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
+ {
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct nlattr *attrs[NL80211_TID_CONFIG_ATTR_MAX + 1];
++ unsigned int link_id = nl80211_link_id(info->attrs);
+ struct net_device *dev = info->user_ptr[1];
+ struct cfg80211_tid_config *tid_config;
+ struct nlattr *tid;
+@@ -15146,6 +15286,8 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
+ if (info->attrs[NL80211_ATTR_MAC])
+ tid_config->peer = nla_data(info->attrs[NL80211_ATTR_MAC]);
+
++ wdev_lock(dev->ieee80211_ptr);
++
+ nla_for_each_nested(tid, info->attrs[NL80211_ATTR_TID_CONFIG],
+ rem_conf) {
+ ret = nla_parse_nested(attrs, NL80211_TID_CONFIG_ATTR_MAX,
+@@ -15156,7 +15298,7 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
+
+ ret = parse_tid_conf(rdev, attrs, dev,
+ &tid_config->tid_conf[conf_idx],
+- info, tid_config->peer);
++ info, tid_config->peer, link_id);
+ if (ret)
+ goto bad_tid_conf;
+
+@@ -15167,6 +15309,7 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
+
+ bad_tid_conf:
+ kfree(tid_config);
++ wdev_unlock(dev->ieee80211_ptr);
+ return ret;
+ }
+
+@@ -15295,6 +15438,62 @@ static int nl80211_set_fils_aad(struct sk_buff *skb,
+ return rdev_set_fils_aad(rdev, dev, &fils_aad);
+ }
+
++static int nl80211_add_link(struct sk_buff *skb, struct genl_info *info)
++{
++ unsigned int link_id = nl80211_link_id(info->attrs);
++ struct net_device *dev = info->user_ptr[1];
++ struct wireless_dev *wdev = dev->ieee80211_ptr;
++
++ if (!(wdev->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO))
++ return -EINVAL;
++
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_AP:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ if (!info->attrs[NL80211_ATTR_MAC] ||
++ !is_valid_ether_addr(nla_data(info->attrs[NL80211_ATTR_MAC])))
++ return -EINVAL;
++
++ wdev_lock(wdev);
++ wdev->valid_links |= BIT(link_id);
++ ether_addr_copy(wdev->links[link_id].addr,
++ nla_data(info->attrs[NL80211_ATTR_MAC]));
++ wdev_unlock(wdev);
++
++ return 0;
++}
++
++static int nl80211_remove_link(struct sk_buff *skb, struct genl_info *info)
++{
++ unsigned int link_id = nl80211_link_id(info->attrs);
++ struct net_device *dev = info->user_ptr[1];
++ struct wireless_dev *wdev = dev->ieee80211_ptr;
++
++ /* cannot remove if there's no link */
++ if (!info->attrs[NL80211_ATTR_MLO_LINK_ID])
++ return -EINVAL;
++
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_AP:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ /* FIXME: stop the link operations first */
++
++ wdev_lock(wdev);
++ wdev->valid_links &= ~BIT(link_id);
++ eth_zero_addr(wdev->links[link_id].addr);
++ wdev_unlock(wdev);
++
++ return 0;
++}
++
+ #define NL80211_FLAG_NEED_WIPHY 0x01
+ #define NL80211_FLAG_NEED_NETDEV 0x02
+ #define NL80211_FLAG_NEED_RTNL 0x04
+@@ -15307,6 +15506,8 @@ static int nl80211_set_fils_aad(struct sk_buff *skb,
+ NL80211_FLAG_CHECK_NETDEV_UP)
+ #define NL80211_FLAG_CLEAR_SKB 0x20
+ #define NL80211_FLAG_NO_WIPHY_MTX 0x40
++#define NL80211_FLAG_MLO_VALID_LINK_ID 0x80
++#define NL80211_FLAG_MLO_UNSUPPORTED 0x100
+
+ #define INTERNAL_FLAG_SELECTORS(__sel) \
+ SELECTOR(__sel, NONE, 0) /* must be first */ \
+@@ -15316,6 +15517,12 @@ static int nl80211_set_fils_aad(struct sk_buff *skb,
+ NL80211_FLAG_NEED_WDEV) \
+ SELECTOR(__sel, NETDEV, \
+ NL80211_FLAG_NEED_NETDEV) \
++ SELECTOR(__sel, NETDEV_LINK, \
++ NL80211_FLAG_NEED_NETDEV | \
++ NL80211_FLAG_MLO_VALID_LINK_ID) \
++ SELECTOR(__sel, NETDEV_NO_MLO, \
++ NL80211_FLAG_NEED_NETDEV | \
++ NL80211_FLAG_MLO_UNSUPPORTED) \
+ SELECTOR(__sel, WIPHY_RTNL, \
+ NL80211_FLAG_NEED_WIPHY | \
+ NL80211_FLAG_NEED_RTNL) \
+@@ -15331,14 +15538,31 @@ static int nl80211_set_fils_aad(struct sk_buff *skb,
+ NL80211_FLAG_NEED_RTNL) \
+ SELECTOR(__sel, NETDEV_UP, \
+ NL80211_FLAG_NEED_NETDEV_UP) \
++ SELECTOR(__sel, NETDEV_UP_LINK, \
++ NL80211_FLAG_NEED_NETDEV_UP | \
++ NL80211_FLAG_MLO_VALID_LINK_ID) \
++ SELECTOR(__sel, NETDEV_UP_NO_MLO, \
++ NL80211_FLAG_NEED_NETDEV_UP | \
++ NL80211_FLAG_MLO_UNSUPPORTED) \
++ SELECTOR(__sel, NETDEV_UP_NO_MLO_CLEAR, \
++ NL80211_FLAG_NEED_NETDEV_UP | \
++ NL80211_FLAG_CLEAR_SKB | \
++ NL80211_FLAG_MLO_UNSUPPORTED) \
+ SELECTOR(__sel, NETDEV_UP_NOTMX, \
+ NL80211_FLAG_NEED_NETDEV_UP | \
+ NL80211_FLAG_NO_WIPHY_MTX) \
++ SELECTOR(__sel, NETDEV_UP_NOTMX_NOMLO, \
++ NL80211_FLAG_NEED_NETDEV_UP | \
++ NL80211_FLAG_NO_WIPHY_MTX | \
++ NL80211_FLAG_MLO_UNSUPPORTED) \
+ SELECTOR(__sel, NETDEV_UP_CLEAR, \
+ NL80211_FLAG_NEED_NETDEV_UP | \
+ NL80211_FLAG_CLEAR_SKB) \
+ SELECTOR(__sel, WDEV_UP, \
+ NL80211_FLAG_NEED_WDEV_UP) \
++ SELECTOR(__sel, WDEV_UP_LINK, \
++ NL80211_FLAG_NEED_WDEV_UP | \
++ NL80211_FLAG_MLO_VALID_LINK_ID) \
+ SELECTOR(__sel, WDEV_UP_RTNL, \
+ NL80211_FLAG_NEED_WDEV_UP | \
+ NL80211_FLAG_NEED_RTNL) \
+@@ -15362,9 +15586,10 @@ static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+ {
+ struct cfg80211_registered_device *rdev = NULL;
+- struct wireless_dev *wdev;
+- struct net_device *dev;
++ struct wireless_dev *wdev = NULL;
++ struct net_device *dev = NULL;
+ u32 internal_flags;
++ int err;
+
+ if (WARN_ON(ops->internal_flags >= ARRAY_SIZE(nl80211_internal_flags)))
+ return -EINVAL;
+@@ -15375,8 +15600,8 @@ static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
+ if (internal_flags & NL80211_FLAG_NEED_WIPHY) {
+ rdev = cfg80211_get_dev_from_info(genl_info_net(info), info);
+ if (IS_ERR(rdev)) {
+- rtnl_unlock();
+- return PTR_ERR(rdev);
++ err = PTR_ERR(rdev);
++ goto out_unlock;
+ }
+ info->user_ptr[0] = rdev;
+ } else if (internal_flags & NL80211_FLAG_NEED_NETDEV ||
+@@ -15384,17 +15609,18 @@ static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
+ wdev = __cfg80211_wdev_from_attrs(NULL, genl_info_net(info),
+ info->attrs);
+ if (IS_ERR(wdev)) {
+- rtnl_unlock();
+- return PTR_ERR(wdev);
++ err = PTR_ERR(wdev);
++ goto out_unlock;
+ }
+
+ dev = wdev->netdev;
++ dev_hold(dev);
+ rdev = wiphy_to_rdev(wdev->wiphy);
+
+ if (internal_flags & NL80211_FLAG_NEED_NETDEV) {
+ if (!dev) {
+- rtnl_unlock();
+- return -EINVAL;
++ err = -EINVAL;
++ goto out_unlock;
+ }
+
+ info->user_ptr[1] = dev;
+@@ -15404,14 +15630,44 @@ static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
+
+ if (internal_flags & NL80211_FLAG_CHECK_NETDEV_UP &&
+ !wdev_running(wdev)) {
+- rtnl_unlock();
+- return -ENETDOWN;
++ err = -ENETDOWN;
++ goto out_unlock;
+ }
+
+- dev_hold(dev);
+ info->user_ptr[0] = rdev;
+ }
+
++ if (internal_flags & NL80211_FLAG_MLO_VALID_LINK_ID) {
++ struct nlattr *link_id = info->attrs[NL80211_ATTR_MLO_LINK_ID];
++
++ if (!wdev) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++
++ /* MLO -> require valid link ID */
++ if (wdev->valid_links &&
++ (!link_id ||
++ !(wdev->valid_links & BIT(nla_get_u16(link_id))))) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++
++ /* non-MLO -> no link ID attribute accepted */
++ if (!wdev->valid_links && link_id) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++ }
++
++ if (internal_flags & NL80211_FLAG_MLO_UNSUPPORTED) {
++ if (info->attrs[NL80211_ATTR_MLO_LINK_ID] ||
++ (wdev && wdev->valid_links)) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++ }
++
+ if (rdev && !(internal_flags & NL80211_FLAG_NO_WIPHY_MTX)) {
+ wiphy_lock(&rdev->wiphy);
+ /* we keep the mutex locked until post_doit */
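The two new MLO flags gate command dispatch: an MLO interface must name one of
its valid links, while a non-MLO interface must not pass a link ID at all. A
stand-alone model of that check (hypothetical helper in plain userspace C,
runnable as-is):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n) (1U << (n))

	/* mirrors the NL80211_FLAG_MLO_VALID_LINK_ID logic above */
	static bool link_id_ok(uint16_t valid_links, bool have_attr,
			       unsigned int link_id)
	{
		if (valid_links)	/* MLO: attribute required and valid */
			return have_attr && (valid_links & BIT(link_id));
		return !have_attr;	/* non-MLO: attribute rejected */
	}

	int main(void)
	{
		uint16_t links = BIT(0) | BIT(2);	/* links 0 and 2 live */

		printf("%d\n", link_id_ok(links, true, 2));	/* 1: accepted */
		printf("%d\n", link_id_ok(links, true, 1));	/* 0: unknown link */
		printf("%d\n", link_id_ok(0, true, 0));		/* 0: non-MLO + attr */
		printf("%d\n", link_id_ok(0, false, 0));	/* 1: legacy path */
		return 0;
	}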
+@@ -15421,6 +15677,10 @@ static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,
+ rtnl_unlock();
+
+ return 0;
++out_unlock:
++ rtnl_unlock();
++ dev_put(dev);
++ return err;
+ }
+
+ static void nl80211_post_doit(const struct genl_ops *ops, struct sk_buff *skb,
+@@ -15636,6 +15896,7 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_set_key,
+ .flags = GENL_UNS_ADMIN_PERM,
++ /* cannot use NL80211_FLAG_MLO_VALID_LINK_ID, depends on key */
+ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
+ NL80211_FLAG_CLEAR_SKB),
+ },
+@@ -15659,21 +15920,24 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .flags = GENL_UNS_ADMIN_PERM,
+ .doit = nl80211_set_beacon,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_START_AP,
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .flags = GENL_UNS_ADMIN_PERM,
+ .doit = nl80211_start_ap,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_STOP_AP,
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .flags = GENL_UNS_ADMIN_PERM,
+ .doit = nl80211_stop_ap,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_GET_STATION,
+@@ -15939,7 +16203,9 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_remain_on_channel,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_WDEV_UP),
++ /* FIXME: requiring a link ID here is probably not good */
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_WDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_CANCEL_REMAIN_ON_CHANNEL,
+@@ -15953,7 +16219,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_set_tx_bitrate_mask,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_REGISTER_FRAME,
+@@ -16002,7 +16269,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_set_channel,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_JOIN_MESH,
+@@ -16163,7 +16431,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_set_mac_acl,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV |
++ NL80211_FLAG_MLO_UNSUPPORTED),
+ },
+ {
+ .cmd = NL80211_CMD_RADAR_DETECT,
+@@ -16171,7 +16440,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .doit = nl80211_start_radar_detection,
+ .flags = GENL_UNS_ADMIN_PERM,
+ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
+- NL80211_FLAG_NO_WIPHY_MTX),
++ NL80211_FLAG_NO_WIPHY_MTX |
++ NL80211_FLAG_MLO_UNSUPPORTED),
+ },
+ {
+ .cmd = NL80211_CMD_GET_PROTOCOL_FEATURES,
+@@ -16217,7 +16487,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_channel_switch,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_VENDOR,
+@@ -16240,7 +16511,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_add_tx_ts,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_UNSUPPORTED),
+ },
+ {
+ .cmd = NL80211_CMD_DEL_TX_TS,
+@@ -16301,7 +16573,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .cmd = NL80211_CMD_GET_FTM_RESPONDER_STATS,
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_get_ftm_responder_stats,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_PEER_MEASUREMENT_START,
+@@ -16333,7 +16606,8 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .cmd = NL80211_CMD_SET_TID_CONFIG,
+ .doit = nl80211_set_tid_config,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV),
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
+ },
+ {
+ .cmd = NL80211_CMD_SET_SAR_SPECS,
+@@ -16357,6 +16631,19 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .flags = GENL_UNS_ADMIN_PERM,
+ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
+ },
++ {
++ .cmd = NL80211_CMD_ADD_LINK,
++ .doit = nl80211_add_link,
++ .flags = GENL_UNS_ADMIN_PERM,
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
++ },
++ {
++ .cmd = NL80211_CMD_REMOVE_LINK,
++ .doit = nl80211_remove_link,
++ .flags = GENL_UNS_ADMIN_PERM,
++ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
++ NL80211_FLAG_MLO_VALID_LINK_ID),
++ },
+ };
+
+ static struct genl_family nl80211_fam __ro_after_init = {
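The two new commands registered above are deliberately thin at this stage;
their contract, summarized from the handlers earlier in this file:

	/* NL80211_CMD_ADD_LINK:    AP iftype only, wiphy must set
	 *	WIPHY_FLAG_SUPPORTS_MLO; needs a valid NL80211_ATTR_MAC plus
	 *	a link ID (no MLO_VALID_LINK_ID flag -- the link does not
	 *	exist yet); sets BIT(link_id) in wdev->valid_links.
	 * NL80211_CMD_REMOVE_LINK: the link ID must already be valid
	 *	(NL80211_FLAG_MLO_VALID_LINK_ID); clears the bit and zeroes
	 *	wdev->links[link_id].addr. Stopping the link's operations
	 *	first is still marked FIXME in the handler. */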
+@@ -17984,23 +18271,40 @@ static void nl80211_ch_switch_notify(struct cfg80211_registered_device *rdev,
+ }
+
+ void cfg80211_ch_switch_notify(struct net_device *dev,
+- struct cfg80211_chan_def *chandef)
++ struct cfg80211_chan_def *chandef,
++ unsigned int link_id)
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct wiphy *wiphy = wdev->wiphy;
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+
+ ASSERT_WDEV_LOCK(wdev);
++ WARN_INVALID_LINK_ID(wdev, link_id);
+
+- trace_cfg80211_ch_switch_notify(dev, chandef);
++ trace_cfg80211_ch_switch_notify(dev, chandef, link_id);
+
+- wdev->chandef = *chandef;
+- wdev->preset_chandef = *chandef;
+-
+- if ((wdev->iftype == NL80211_IFTYPE_STATION ||
+- wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) &&
+- !WARN_ON(!wdev->current_bss))
+- cfg80211_update_assoc_bss_entry(wdev, chandef->chan);
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_STATION:
++ case NL80211_IFTYPE_P2P_CLIENT:
++ if (!WARN_ON(!wdev->links[link_id].client.current_bss))
++ cfg80211_update_assoc_bss_entry(wdev, link_id,
++ chandef->chan);
++ break;
++ case NL80211_IFTYPE_MESH_POINT:
++ wdev->u.mesh.chandef = *chandef;
++ wdev->u.mesh.preset_chandef = *chandef;
++ break;
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ wdev->links[link_id].ap.chandef = *chandef;
++ break;
++ case NL80211_IFTYPE_ADHOC:
++ wdev->u.ibss.chandef = *chandef;
++ break;
++ default:
++ WARN_ON(1);
++ break;
++ }
+
+ cfg80211_sched_dfs_chan_update(rdev);
+
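All of the wdev->ssid -> wdev->u.X.ssid and wdev->chandef -> wdev->links[id]
conversions in this file target one reworked wireless_dev layout. A
reconstruction from the accesses in this patch -- field types, sizes and
ordering are guesses, only the names come from the hunks:

	struct wireless_dev {
		/* ... unchanged members elided ... */
		bool connected;		/* client: replaces current_bss tests */
		u16 valid_links;	/* bitmap of live MLO link IDs */
		struct {
			u8 addr[ETH_ALEN];	/* per-link address (ADD_LINK) */
			union {
				struct {
					struct cfg80211_internal_bss *current_bss;
				} client;
				struct {
					u32 beacon_interval;
					struct cfg80211_chan_def chandef;
				} ap;
			};
		} links[16];		/* array size is a guess */
		union {
			struct {	/* AP: link-independent settings */
				u8 ssid[IEEE80211_MAX_SSID_LEN], ssid_len;
				struct cfg80211_chan_def preset_chandef;
			} ap;
			struct {	/* station/P2P client */
				u8 ssid[IEEE80211_MAX_SSID_LEN], ssid_len;
				u8 connected_addr[ETH_ALEN];
			} client;
			struct {
				u8 id[IEEE80211_MAX_MESH_ID_LEN], id_len;
				u32 beacon_interval;
				struct cfg80211_chan_def chandef, preset_chandef;
			} mesh;
			struct {
				u8 ssid_len;
				struct cfg80211_chan_def chandef;
				struct cfg80211_internal_bss *current_bss;
			} ibss;
			struct {
				struct cfg80211_chan_def chandef;
			} ocb;
		} u;			/* per-iftype, link-independent data */
	};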
+diff --git a/net/wireless/ocb.c b/net/wireless/ocb.c
+index 2d26a6d980bf2..27a1732264f95 100644
+--- a/net/wireless/ocb.c
++++ b/net/wireless/ocb.c
+@@ -4,6 +4,7 @@
+ *
+ * Copyright: (c) 2014 Czech Technical University in Prague
+ * (c) 2014 Volkswagen Group Research
++ * Copyright (C) 2022 Intel Corporation
+ * Author: Rostislav Lisovy <rostislav.lisovy@fel.cvut.cz>
+ * Funded by: Volkswagen Group Research
+ */
+@@ -34,7 +35,7 @@ int __cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
+
+ err = rdev_join_ocb(rdev, dev, setup);
+ if (!err)
+- wdev->chandef = setup->chandef;
++ wdev->u.ocb.chandef = setup->chandef;
+
+ return err;
+ }
+@@ -69,7 +70,7 @@ int __cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
+
+ err = rdev_leave_ocb(rdev, dev);
+ if (!err)
+- memset(&wdev->chandef, 0, sizeof(wdev->chandef));
++ memset(&wdev->u.ocb.chandef, 0, sizeof(wdev->u.ocb.chandef));
+
+ return err;
+ }
+diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h
+index 439bcf52369c7..d2300eff03ae1 100644
+--- a/net/wireless/rdev-ops.h
++++ b/net/wireless/rdev-ops.h
+@@ -1,4 +1,9 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Portions of this file
++ * Copyright(c) 2016-2017 Intel Deutschland GmbH
++ * Copyright (C) 2018, 2021-2022 Intel Corporation
++ */
+ #ifndef __CFG80211_RDEV_OPS
+ #define __CFG80211_RDEV_OPS
+
+@@ -172,11 +177,11 @@ static inline int rdev_change_beacon(struct cfg80211_registered_device *rdev,
+ }
+
+ static inline int rdev_stop_ap(struct cfg80211_registered_device *rdev,
+- struct net_device *dev)
++ struct net_device *dev, unsigned int link_id)
+ {
+ int ret;
+- trace_rdev_stop_ap(&rdev->wiphy, dev);
+- ret = rdev->ops->stop_ap(&rdev->wiphy, dev);
++ trace_rdev_stop_ap(&rdev->wiphy, dev, link_id);
++ ret = rdev->ops->stop_ap(&rdev->wiphy, dev, link_id);
+ trace_rdev_return_int(&rdev->wiphy, ret);
+ return ret;
+ }
+@@ -651,12 +656,14 @@ static inline int rdev_testmode_dump(struct cfg80211_registered_device *rdev,
+
+ static inline int
+ rdev_set_bitrate_mask(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, const u8 *peer,
++ struct net_device *dev, unsigned int link_id,
++ const u8 *peer,
+ const struct cfg80211_bitrate_mask *mask)
+ {
+ int ret;
+- trace_rdev_set_bitrate_mask(&rdev->wiphy, dev, peer, mask);
+- ret = rdev->ops->set_bitrate_mask(&rdev->wiphy, dev, peer, mask);
++ trace_rdev_set_bitrate_mask(&rdev->wiphy, dev, link_id, peer, mask);
++ ret = rdev->ops->set_bitrate_mask(&rdev->wiphy, dev, link_id,
++ peer, mask);
+ trace_rdev_return_int(&rdev->wiphy, ret);
+ return ret;
+ }
+@@ -944,12 +951,13 @@ static inline int rdev_set_noack_map(struct cfg80211_registered_device *rdev,
+ static inline int
+ rdev_get_channel(struct cfg80211_registered_device *rdev,
+ struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef)
+ {
+ int ret;
+
+- trace_rdev_get_channel(&rdev->wiphy, wdev);
+- ret = rdev->ops->get_channel(&rdev->wiphy, wdev, chandef);
++ trace_rdev_get_channel(&rdev->wiphy, wdev, link_id);
++ ret = rdev->ops->get_channel(&rdev->wiphy, wdev, link_id, chandef);
+ trace_rdev_return_chandef(&rdev->wiphy, ret, chandef);
+
+ return ret;
+@@ -1107,12 +1115,14 @@ static inline int rdev_set_qos_map(struct cfg80211_registered_device *rdev,
+
+ static inline int
+ rdev_set_ap_chanwidth(struct cfg80211_registered_device *rdev,
+- struct net_device *dev, struct cfg80211_chan_def *chandef)
++ struct net_device *dev,
++ unsigned int link_id,
++ struct cfg80211_chan_def *chandef)
+ {
+ int ret;
+
+- trace_rdev_set_ap_chanwidth(&rdev->wiphy, dev, chandef);
+- ret = rdev->ops->set_ap_chanwidth(&rdev->wiphy, dev, chandef);
++ trace_rdev_set_ap_chanwidth(&rdev->wiphy, dev, link_id, chandef);
++ ret = rdev->ops->set_ap_chanwidth(&rdev->wiphy, dev, link_id, chandef);
+ trace_rdev_return_int(&rdev->wiphy, ret);
+
+ return ret;
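Every wrapper in rdev-ops.h follows the same trace/call/trace shape, which is
why threading link_id through is mechanical; the pattern, shown with a
hypothetical op name:

	static inline int
	rdev_example_op(struct cfg80211_registered_device *rdev,
			struct net_device *dev, unsigned int link_id)
	{
		int ret;

		trace_rdev_example_op(&rdev->wiphy, dev, link_id);
		ret = rdev->ops->example_op(&rdev->wiphy, dev, link_id);
		trace_rdev_return_int(&rdev->wiphy, ret);
		return ret;
	}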
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 58e83ce642ad2..c7383ede794fc 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -5,7 +5,7 @@
+ * Copyright 2008-2011 Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2021 Intel Corporation
++ * Copyright (C) 2018 - 2022 Intel Corporation
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+@@ -2370,6 +2370,7 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ enum nl80211_iftype iftype;
+ bool ret;
++ int link;
+
+ wdev_lock(wdev);
+ iftype = wdev->iftype;
+@@ -2378,62 +2379,83 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ if (!wdev->netdev || !netif_running(wdev->netdev))
+ goto wdev_inactive_unlock;
+
+- switch (iftype) {
+- case NL80211_IFTYPE_AP:
+- case NL80211_IFTYPE_P2P_GO:
+- case NL80211_IFTYPE_MESH_POINT:
+- if (!wdev->beacon_interval)
+- goto wdev_inactive_unlock;
+- chandef = wdev->chandef;
+- break;
+- case NL80211_IFTYPE_ADHOC:
+- if (!wdev->ssid_len)
+- goto wdev_inactive_unlock;
+- chandef = wdev->chandef;
+- break;
+- case NL80211_IFTYPE_STATION:
+- case NL80211_IFTYPE_P2P_CLIENT:
+- if (!wdev->current_bss ||
+- !wdev->current_bss->pub.channel)
+- goto wdev_inactive_unlock;
+-
+- if (!rdev->ops->get_channel ||
+- rdev_get_channel(rdev, wdev, &chandef))
+- cfg80211_chandef_create(&chandef,
+- wdev->current_bss->pub.channel,
+- NL80211_CHAN_NO_HT);
+- break;
+- case NL80211_IFTYPE_MONITOR:
+- case NL80211_IFTYPE_AP_VLAN:
+- case NL80211_IFTYPE_P2P_DEVICE:
+- /* no enforcement required */
+- break;
+- default:
+- /* others not implemented for now */
+- WARN_ON(1);
+- break;
+- }
++ for (link = 0; link < ARRAY_SIZE(wdev->links); link++) {
++ struct ieee80211_channel *chan;
+
+- wdev_unlock(wdev);
++ if (!wdev->valid_links && link > 0)
++ break;
++ if (!(wdev->valid_links & BIT(link)))
++ continue;
++ switch (iftype) {
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ case NL80211_IFTYPE_MESH_POINT:
++ if (!wdev->u.mesh.beacon_interval)
++ continue;
++ chandef = wdev->u.mesh.chandef;
++ break;
++ case NL80211_IFTYPE_ADHOC:
++ if (!wdev->u.ibss.ssid_len)
++ continue;
++ chandef = wdev->u.ibss.chandef;
++ break;
++ case NL80211_IFTYPE_STATION:
++ case NL80211_IFTYPE_P2P_CLIENT:
++ /* Maybe we could consider disabling that link only? */
++ if (!wdev->links[link].client.current_bss)
++ continue;
+
+- switch (iftype) {
+- case NL80211_IFTYPE_AP:
+- case NL80211_IFTYPE_P2P_GO:
+- case NL80211_IFTYPE_ADHOC:
+- case NL80211_IFTYPE_MESH_POINT:
+- wiphy_lock(wiphy);
+- ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype);
+- wiphy_unlock(wiphy);
++ chan = wdev->links[link].client.current_bss->pub.channel;
++ if (!chan)
++ continue;
+
+- return ret;
+- case NL80211_IFTYPE_STATION:
+- case NL80211_IFTYPE_P2P_CLIENT:
+- return cfg80211_chandef_usable(wiphy, &chandef,
+- IEEE80211_CHAN_DISABLED);
+- default:
+- break;
++ if (!rdev->ops->get_channel ||
++ rdev_get_channel(rdev, wdev, link, &chandef))
++ cfg80211_chandef_create(&chandef, chan,
++ NL80211_CHAN_NO_HT);
++ break;
++ case NL80211_IFTYPE_MONITOR:
++ case NL80211_IFTYPE_AP_VLAN:
++ case NL80211_IFTYPE_P2P_DEVICE:
++ /* no enforcement required */
++ break;
++ default:
++ /* others not implemented for now */
++ WARN_ON(1);
++ break;
++ }
++
++ wdev_unlock(wdev);
++
++ switch (iftype) {
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ case NL80211_IFTYPE_ADHOC:
++ case NL80211_IFTYPE_MESH_POINT:
++ wiphy_lock(wiphy);
++ ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef,
++ iftype);
++ wiphy_unlock(wiphy);
++
++ if (!ret)
++ return ret;
++ break;
++ case NL80211_IFTYPE_STATION:
++ case NL80211_IFTYPE_P2P_CLIENT:
++ ret = cfg80211_chandef_usable(wiphy, &chandef,
++ IEEE80211_CHAN_DISABLED);
++ if (!ret)
++ return ret;
++ break;
++ default:
++ break;
++ }
++
++ wdev_lock(wdev);
+ }
+
++ wdev_unlock(wdev);
++
+ return true;
+
+ wdev_inactive_unlock:
+@@ -4215,8 +4237,17 @@ static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
+ * In both cases we should end the CAC on the wdev.
+ */
+ list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
+- if (wdev->cac_started &&
+- !cfg80211_chandef_dfs_usable(&rdev->wiphy, &wdev->chandef))
++ struct cfg80211_chan_def *chandef;
++
++ if (!wdev->cac_started)
++ continue;
++
++ /* FIXME: radar detection is tied to link 0 for now */
++ chandef = wdev_chandef(wdev, 0);
++ if (!chandef)
++ continue;
++
++ if (!cfg80211_chandef_dfs_usable(&rdev->wiphy, chandef))
+ rdev_end_cac(rdev, wdev->netdev);
+ }
+ }
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 6d82bd9eaf8c7..0134e5d5c81a4 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -5,7 +5,7 @@
+ * Copyright 2008 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright 2016 Intel Deutschland GmbH
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+ */
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -2617,7 +2617,8 @@ void cfg80211_bss_iter(struct wiphy *wiphy,
+ spin_lock_bh(&rdev->bss_lock);
+
+ list_for_each_entry(bss, &rdev->bss_list, list) {
+- if (!chandef || cfg80211_is_sub_chan(chandef, bss->pub.channel))
++ if (!chandef || cfg80211_is_sub_chan(chandef, bss->pub.channel,
++ false))
+ iter(wiphy, &bss->pub, iter_data);
+ }
+
+@@ -2626,11 +2627,12 @@ void cfg80211_bss_iter(struct wiphy *wiphy,
+ EXPORT_SYMBOL(cfg80211_bss_iter);
+
+ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
++ unsigned int link_id,
+ struct ieee80211_channel *chan)
+ {
+ struct wiphy *wiphy = wdev->wiphy;
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+- struct cfg80211_internal_bss *cbss = wdev->current_bss;
++ struct cfg80211_internal_bss *cbss = wdev->links[link_id].client.current_bss;
+ struct cfg80211_internal_bss *new = NULL;
+ struct cfg80211_internal_bss *bss;
+ struct cfg80211_bss *nontrans_bss;
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 607a689110471..ca674649d7875 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -5,7 +5,7 @@
+ * (for nl80211's connect() and wext)
+ *
+ * Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
+- * Copyright (C) 2009, 2020 Intel Corporation. All rights reserved.
++ * Copyright (C) 2009, 2020, 2022 Intel Corporation. All rights reserved.
+ * Copyright 2017 Intel Deutschland GmbH
+ */
+
+@@ -454,6 +454,20 @@ void cfg80211_sme_abandon_assoc(struct wireless_dev *wdev)
+ schedule_work(&rdev->conn_work);
+ }
+
++static void cfg80211_wdev_release_bsses(struct wireless_dev *wdev)
++{
++ unsigned int link;
++
++ for_each_valid_link(wdev, link) {
++ if (!wdev->links[link].client.current_bss)
++ continue;
++ cfg80211_unhold_bss(wdev->links[link].client.current_bss);
++ cfg80211_put_bss(wdev->wiphy,
++ &wdev->links[link].client.current_bss->pub);
++ wdev->links[link].client.current_bss = NULL;
++ }
++}
++
+ static int cfg80211_sme_get_conn_ies(struct wireless_dev *wdev,
+ const u8 *ies, size_t ies_len,
+ const u8 **out_ies, size_t *out_ies_len)
+@@ -521,12 +535,11 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ if (!rdev->ops->auth || !rdev->ops->assoc)
+ return -EOPNOTSUPP;
+
+- if (wdev->current_bss) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
+- wdev->current_bss = NULL;
++ cfg80211_wdev_release_bsses(wdev);
+
++ if (wdev->connected) {
+ cfg80211_sme_free(wdev);
++ wdev->connected = false;
+ }
+
+ if (wdev->conn)
+@@ -563,8 +576,8 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ wdev->conn->auto_auth = false;
+ }
+
+- wdev->conn->params.ssid = wdev->ssid;
+- wdev->conn->params.ssid_len = wdev->ssid_len;
++ wdev->conn->params.ssid = wdev->u.client.ssid;
++ wdev->conn->params.ssid_len = wdev->u.client.ssid_len;
+
+ /* see if we have the bss already */
+ bss = cfg80211_get_conn_bss(wdev);
+@@ -648,7 +661,7 @@ static bool cfg80211_is_all_idle(void)
+ list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
+ wdev_lock(wdev);
+- if (wdev->conn || wdev->current_bss ||
++ if (wdev->conn || wdev->connected ||
+ cfg80211_beaconing_iface_active(wdev))
+ is_all_idle = false;
+ wdev_unlock(wdev);
+@@ -668,7 +681,6 @@ static void disconnect_work(struct work_struct *work)
+
+ DECLARE_WORK(cfg80211_disconnect_work, disconnect_work);
+
+-
+ /*
+ * API calls for drivers implementing connect/disconnect and
+ * SME event handling
+@@ -729,23 +741,19 @@ void __cfg80211_connect_result(struct net_device *dev,
+ if (!cr->bss && (cr->status == WLAN_STATUS_SUCCESS)) {
+ WARN_ON_ONCE(!wiphy_to_rdev(wdev->wiphy)->ops->connect);
+ cr->bss = cfg80211_get_bss(wdev->wiphy, NULL, cr->bssid,
+- wdev->ssid, wdev->ssid_len,
++ wdev->u.client.ssid, wdev->u.client.ssid_len,
+ wdev->conn_bss_type,
+ IEEE80211_PRIVACY_ANY);
+ if (cr->bss)
+ cfg80211_hold_bss(bss_from_pub(cr->bss));
+ }
+
+- if (wdev->current_bss) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
+- wdev->current_bss = NULL;
+- }
++ cfg80211_wdev_release_bsses(wdev);
+
+ if (cr->status != WLAN_STATUS_SUCCESS) {
+ kfree_sensitive(wdev->connect_keys);
+ wdev->connect_keys = NULL;
+- wdev->ssid_len = 0;
++ wdev->u.client.ssid_len = 0;
+ wdev->conn_owner_nlportid = 0;
+ if (cr->bss) {
+ cfg80211_unhold_bss(bss_from_pub(cr->bss));
+@@ -758,7 +766,9 @@ void __cfg80211_connect_result(struct net_device *dev,
+ if (WARN_ON(!cr->bss))
+ return;
+
+- wdev->current_bss = bss_from_pub(cr->bss);
++ wdev->links[0].client.current_bss = bss_from_pub(cr->bss);
++ wdev->connected = true;
++ ether_addr_copy(wdev->u.client.connected_addr, cr->bss->bssid);
+
+ if (!(wdev->wiphy->flags & WIPHY_FLAG_HAS_STATIC_WEP))
+ cfg80211_upload_connect_keys(wdev);
+@@ -801,7 +811,7 @@ void cfg80211_connect_done(struct net_device *dev,
+
+ found = cfg80211_get_bss(wdev->wiphy, NULL,
+ params->bss->bssid,
+- wdev->ssid, wdev->ssid_len,
++ wdev->u.client.ssid, wdev->u.client.ssid_len,
+ wdev->conn_bss_type,
+ IEEE80211_PRIVACY_ANY);
+ if (found) {
+@@ -906,18 +916,17 @@ void __cfg80211_roamed(struct wireless_dev *wdev,
+ wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
+ goto out;
+
+- if (WARN_ON(!wdev->current_bss))
++ if (WARN_ON(!wdev->connected))
+ goto out;
+
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
+- wdev->current_bss = NULL;
++ cfg80211_wdev_release_bsses(wdev);
+
+ if (WARN_ON(!info->bss))
+ return;
+
+ cfg80211_hold_bss(bss_from_pub(info->bss));
+- wdev->current_bss = bss_from_pub(info->bss);
++ wdev->links[0].client.current_bss = bss_from_pub(info->bss);
++ ether_addr_copy(wdev->u.client.connected_addr, info->bss->bssid);
+
+ wdev->unprot_beacon_reported = 0;
+ nl80211_send_roamed(wiphy_to_rdev(wdev->wiphy),
+@@ -963,8 +972,8 @@ void cfg80211_roamed(struct net_device *dev, struct cfg80211_roam_info *info,
+
+ if (!info->bss) {
+ info->bss = cfg80211_get_bss(wdev->wiphy, info->channel,
+- info->bssid, wdev->ssid,
+- wdev->ssid_len,
++ info->bssid, wdev->u.client.ssid,
++ wdev->u.client.ssid_len,
+ wdev->conn_bss_type,
+ IEEE80211_PRIVACY_ANY);
+ }
+@@ -1035,8 +1044,8 @@ void __cfg80211_port_authorized(struct wireless_dev *wdev, const u8 *bssid)
+ wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
+ return;
+
+- if (WARN_ON(!wdev->current_bss) ||
+- WARN_ON(!ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
++ if (WARN_ON(!wdev->connected) ||
++ WARN_ON(!ether_addr_equal(wdev->u.client.connected_addr, bssid)))
+ return;
+
+ nl80211_send_port_authorized(wiphy_to_rdev(wdev->wiphy), wdev->netdev,
+@@ -1088,13 +1097,9 @@ void __cfg80211_disconnected(struct net_device *dev, const u8 *ie,
+ wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
+ return;
+
+- if (wdev->current_bss) {
+- cfg80211_unhold_bss(wdev->current_bss);
+- cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub);
+- }
+-
+- wdev->current_bss = NULL;
+- wdev->ssid_len = 0;
++ cfg80211_wdev_release_bsses(wdev);
++ wdev->connected = false;
++ wdev->u.client.ssid_len = 0;
+ wdev->conn_owner_nlportid = 0;
+ kfree_sensitive(wdev->connect_keys);
+ wdev->connect_keys = NULL;
+@@ -1183,19 +1188,20 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ * already connected, so reject a new SSID unless it's the
+ * same (which is the case for re-association.)
+ */
+- if (wdev->ssid_len &&
+- (wdev->ssid_len != connect->ssid_len ||
+- memcmp(wdev->ssid, connect->ssid, wdev->ssid_len)))
++ if (wdev->u.client.ssid_len &&
++ (wdev->u.client.ssid_len != connect->ssid_len ||
++ memcmp(wdev->u.client.ssid, connect->ssid, wdev->u.client.ssid_len)))
+ return -EALREADY;
+
+ /*
+ * If connected, reject (re-)association unless prev_bssid
+ * matches the current BSSID.
+ */
+- if (wdev->current_bss) {
++ if (wdev->connected) {
+ if (!prev_bssid)
+ return -EALREADY;
+- if (!ether_addr_equal(prev_bssid, wdev->current_bss->pub.bssid))
++ if (!ether_addr_equal(prev_bssid,
++ wdev->u.client.connected_addr))
+ return -ENOTCONN;
+ }
+
+@@ -1246,8 +1252,8 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ }
+
+ wdev->connect_keys = connkeys;
+- memcpy(wdev->ssid, connect->ssid, connect->ssid_len);
+- wdev->ssid_len = connect->ssid_len;
++ memcpy(wdev->u.client.ssid, connect->ssid, connect->ssid_len);
++ wdev->u.client.ssid_len = connect->ssid_len;
+
+ wdev->conn_bss_type = connect->pbss ? IEEE80211_BSS_TYPE_PBSS :
+ IEEE80211_BSS_TYPE_ESS;
+@@ -1263,8 +1269,8 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ * This could be reassoc getting refused, don't clear
+ * ssid_len in that case.
+ */
+- if (!wdev->current_bss)
+- wdev->ssid_len = 0;
++ if (!wdev->connected)
++ wdev->u.client.ssid_len = 0;
+ return err;
+ }
+
+@@ -1288,7 +1294,7 @@ int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
+ err = cfg80211_sme_disconnect(wdev, reason);
+ else if (!rdev->ops->disconnect)
+ cfg80211_mlme_down(rdev, dev);
+- else if (wdev->ssid_len)
++ else if (wdev->u.client.ssid_len)
+ err = rdev_disconnect(rdev, dev, reason);
+
+ /*
+@@ -1296,8 +1302,8 @@ int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
+ * in which case cfg80211_disconnected() will take care of
+ * this later.
+ */
+- if (!wdev->current_bss)
+- wdev->ssid_len = 0;
++ if (!wdev->connected)
++ wdev->u.client.ssid_len = 0;
+
+ return err;
+ }
+@@ -1321,7 +1327,7 @@ void cfg80211_autodisconnect_wk(struct work_struct *work)
+ break;
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- __cfg80211_stop_ap(rdev, wdev->netdev, false);
++ __cfg80211_stop_ap(rdev, wdev->netdev, -1, false);
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ __cfg80211_leave_mesh(rdev, wdev->netdev);
+@@ -1333,7 +1339,7 @@ void cfg80211_autodisconnect_wk(struct work_struct *work)
+ * ops->disconnect not implemented. Otherwise we can
+ * use cfg80211_disconnect.
+ */
+- if (rdev->ops->disconnect || wdev->current_bss)
++ if (rdev->ops->disconnect || wdev->connected)
+ cfg80211_disconnect(rdev, wdev->netdev,
+ WLAN_REASON_DEAUTH_LEAVING,
+ true);
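For the converted SME paths, connection state is now split across three
fields; how the hunks above use them (names from the patch itself):

	/* wdev->connected                   - boolean "associated" flag,
	 *	replacing most former current_bss tests
	 * wdev->u.client.connected_addr     - BSSID of the current AP
	 * wdev->links[0].client.current_bss - the held BSS entry; the
	 *	non-MLO SME keeps it on link 0, and
	 *	cfg80211_wdev_release_bsses() drops it for every link */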
+diff --git a/net/wireless/trace.h b/net/wireless/trace.h
+index 228079d7690a4..3b2c956b8d783 100644
+--- a/net/wireless/trace.h
++++ b/net/wireless/trace.h
+@@ -569,6 +569,7 @@ TRACE_EVENT(rdev_start_ap,
+ __field(bool, privacy)
+ __field(enum nl80211_auth_type, auth_type)
+ __field(int, inactivity_timeout)
++ __field(unsigned int, link_id)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+@@ -583,16 +584,17 @@ TRACE_EVENT(rdev_start_ap,
+ __entry->inactivity_timeout = settings->inactivity_timeout;
+ memset(__entry->ssid, 0, IEEE80211_MAX_SSID_LEN + 1);
+ memcpy(__entry->ssid, settings->ssid, settings->ssid_len);
++ __entry->link_id = settings->beacon.link_id;
+ ),
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", AP settings - ssid: %s, "
+ CHAN_DEF_PR_FMT ", beacon interval: %d, dtim period: %d, "
+ "hidden ssid: %d, wpa versions: %u, privacy: %s, "
+- "auth type: %d, inactivity timeout: %d",
++ "auth type: %d, inactivity timeout: %d, link_id: %d",
+ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->ssid, CHAN_DEF_PR_ARG,
+ __entry->beacon_interval, __entry->dtim_period,
+ __entry->hidden_ssid, __entry->wpa_ver,
+ BOOL_TO_STR(__entry->privacy), __entry->auth_type,
+- __entry->inactivity_timeout)
++ __entry->inactivity_timeout, __entry->link_id)
+ );
+
+ TRACE_EVENT(rdev_change_beacon,
+@@ -602,6 +604,7 @@ TRACE_EVENT(rdev_change_beacon,
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
++ __field(int, link_id)
+ __dynamic_array(u8, head, info ? info->head_len : 0)
+ __dynamic_array(u8, tail, info ? info->tail_len : 0)
+ __dynamic_array(u8, beacon_ies, info ? info->beacon_ies_len : 0)
+@@ -615,6 +618,7 @@ TRACE_EVENT(rdev_change_beacon,
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ if (info) {
++ __entry->link_id = info->link_id;
+ if (info->head)
+ memcpy(__get_dynamic_array(head), info->head,
+ info->head_len);
+@@ -635,9 +639,30 @@ TRACE_EVENT(rdev_change_beacon,
+ if (info->probe_resp)
+ memcpy(__get_dynamic_array(probe_resp),
+ info->probe_resp, info->probe_resp_len);
++ } else {
++ __entry->link_id = -1;
+ }
+ ),
+- TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT, WIPHY_PR_ARG, NETDEV_PR_ARG)
++ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", link_id:%d",
++ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->link_id)
++);
++
++TRACE_EVENT(rdev_stop_ap,
++ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
++ unsigned int link_id),
++ TP_ARGS(wiphy, netdev, link_id),
++ TP_STRUCT__entry(
++ WIPHY_ENTRY
++ NETDEV_ENTRY
++ __field(unsigned int, link_id)
++ ),
++ TP_fast_assign(
++ WIPHY_ASSIGN;
++ NETDEV_ASSIGN;
++ __entry->link_id = link_id;
++ ),
++ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", link_id: %d",
++ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->link_id)
+ );
+
+ DECLARE_EVENT_CLASS(wiphy_netdev_evt,
+@@ -654,11 +679,6 @@ DECLARE_EVENT_CLASS(wiphy_netdev_evt,
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT, WIPHY_PR_ARG, NETDEV_PR_ARG)
+ );
+
+-DEFINE_EVENT(wiphy_netdev_evt, rdev_stop_ap,
+- TP_PROTO(struct wiphy *wiphy, struct net_device *netdev),
+- TP_ARGS(wiphy, netdev)
+-);
+-
+ DEFINE_EVENT(wiphy_netdev_evt, rdev_set_rekey_data,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev),
+ TP_ARGS(wiphy, netdev)
+@@ -1619,20 +1639,24 @@ TRACE_EVENT(rdev_testmode_dump,
+
+ TRACE_EVENT(rdev_set_bitrate_mask,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
++ unsigned int link_id,
+ const u8 *peer, const struct cfg80211_bitrate_mask *mask),
+- TP_ARGS(wiphy, netdev, peer, mask),
++ TP_ARGS(wiphy, netdev, link_id, peer, mask),
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
++ __field(unsigned int, link_id)
+ MAC_ENTRY(peer)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
++ __entry->link_id = link_id;
+ MAC_ASSIGN(peer, peer);
+ ),
+- TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", peer: " MAC_PR_FMT,
+- WIPHY_PR_ARG, NETDEV_PR_ARG, MAC_PR_ARG(peer))
++ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", link_id: %d, peer: " MAC_PR_FMT,
++ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->link_id,
++ MAC_PR_ARG(peer))
+ );
+
+ TRACE_EVENT(rdev_update_mgmt_frame_registrations,
+@@ -2040,9 +2064,22 @@ TRACE_EVENT(rdev_set_noack_map,
+ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->noack_map)
+ );
+
+-DEFINE_EVENT(wiphy_wdev_evt, rdev_get_channel,
+- TP_PROTO(struct wiphy *wiphy, struct wireless_dev *wdev),
+- TP_ARGS(wiphy, wdev)
++TRACE_EVENT(rdev_get_channel,
++ TP_PROTO(struct wiphy *wiphy, struct wireless_dev *wdev,
++ unsigned int link_id),
++ TP_ARGS(wiphy, wdev, link_id),
++ TP_STRUCT__entry(
++ WIPHY_ENTRY
++ WDEV_ENTRY
++ __field(unsigned int, link_id)
++ ),
++ TP_fast_assign(
++ WIPHY_ASSIGN;
++ WDEV_ASSIGN;
++ __entry->link_id = link_id;
++ ),
++ TP_printk(WIPHY_PR_FMT ", " WDEV_PR_FMT ", link_id: %u",
++ WIPHY_PR_ARG, WDEV_PR_ARG, __entry->link_id)
+ );
+
+ TRACE_EVENT(rdev_return_chandef,
+@@ -2296,20 +2333,24 @@ TRACE_EVENT(rdev_set_qos_map,
+
+ TRACE_EVENT(rdev_set_ap_chanwidth,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
++ unsigned int link_id,
+ struct cfg80211_chan_def *chandef),
+- TP_ARGS(wiphy, netdev, chandef),
++ TP_ARGS(wiphy, netdev, link_id, chandef),
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
+ CHAN_DEF_ENTRY
++ __field(unsigned int, link_id)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ CHAN_DEF_ASSIGN(chandef);
++ __entry->link_id = link_id;
+ ),
+- TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", " CHAN_DEF_PR_FMT,
+- WIPHY_PR_ARG, NETDEV_PR_ARG, CHAN_DEF_PR_ARG)
++ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", " CHAN_DEF_PR_FMT ", link:%d",
++ WIPHY_PR_ARG, NETDEV_PR_ARG, CHAN_DEF_PR_ARG,
++ __entry->link_id)
+ );
+
+ TRACE_EVENT(rdev_add_tx_ts,
+@@ -3022,18 +3063,21 @@ TRACE_EVENT(cfg80211_chandef_dfs_required,
+
+ TRACE_EVENT(cfg80211_ch_switch_notify,
+ TP_PROTO(struct net_device *netdev,
+- struct cfg80211_chan_def *chandef),
+- TP_ARGS(netdev, chandef),
++ struct cfg80211_chan_def *chandef,
++ unsigned int link_id),
++ TP_ARGS(netdev, chandef, link_id),
+ TP_STRUCT__entry(
+ NETDEV_ENTRY
+ CHAN_DEF_ENTRY
++ __field(unsigned int, link_id)
+ ),
+ TP_fast_assign(
+ NETDEV_ASSIGN;
+ CHAN_DEF_ASSIGN(chandef);
++ __entry->link_id = link_id;
+ ),
+- TP_printk(NETDEV_PR_FMT ", " CHAN_DEF_PR_FMT,
+- NETDEV_PR_ARG, CHAN_DEF_PR_ARG)
++ TP_printk(NETDEV_PR_FMT ", " CHAN_DEF_PR_FMT ", link:%d",
++ NETDEV_PR_ARG, CHAN_DEF_PR_ARG, __entry->link_id)
+ );
+
+ TRACE_EVENT(cfg80211_ch_switch_started_notify,
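
The trace.h hunks above all follow one pattern: each event gains a link_id field for MLO (multi-link operation), with -1 used as a sentinel when no beacon info is available. A minimal userspace sketch of that sentinel convention (printf standing in for the kernel's TRACE_EVENT machinery; names are hypothetical):

#include <stdio.h>

struct beacon_info { int link_id; };

/* Log a beacon change; info may be NULL, in which case the
 * link_id is recorded as -1, matching the sentinel the patch
 * assigns in rdev_change_beacon when info is absent. */
static void trace_change_beacon(const char *dev, const struct beacon_info *info)
{
    int link_id = info ? info->link_id : -1;
    printf("%s: link_id:%d\n", dev, link_id);
}

int main(void)
{
    struct beacon_info bi = { .link_id = 1 };
    trace_change_beacon("wlan0", &bi);   /* wlan0: link_id:1 */
    trace_change_beacon("wlan0", NULL);  /* wlan0: link_id:-1 */
    return 0;
}
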
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index a60d7d638e72b..b7257862e0fe6 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -5,7 +5,7 @@
+ * Copyright 2007-2009 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+ */
+ #include <linux/export.h>
+ #include <linux/bitops.h>
+@@ -1041,7 +1041,6 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ return -EBUSY;
+
+ dev->ieee80211_ptr->use_4addr = false;
+- dev->ieee80211_ptr->mesh_id_up_len = 0;
+ wdev_lock(dev->ieee80211_ptr);
+ rdev_set_qos_map(rdev, dev, NULL);
+ wdev_unlock(dev->ieee80211_ptr);
+@@ -1049,7 +1048,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ switch (otype) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- cfg80211_stop_ap(rdev, dev, true);
++ cfg80211_stop_ap(rdev, dev, -1, true);
+ break;
+ case NL80211_IFTYPE_ADHOC:
+ cfg80211_leave_ibss(rdev, dev, false);
+@@ -1073,6 +1072,11 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+
+ cfg80211_process_rdev_events(rdev);
+ cfg80211_mlme_purge_registrations(dev->ieee80211_ptr);
++
++ memset(&dev->ieee80211_ptr->u, 0,
++ sizeof(dev->ieee80211_ptr->u));
++ memset(&dev->ieee80211_ptr->links, 0,
++ sizeof(dev->ieee80211_ptr->links));
+ }
+
+ err = rdev_change_virtual_intf(rdev, dev, ntype, params);
+@@ -1930,6 +1934,24 @@ bool ieee80211_chandef_to_operating_class(struct cfg80211_chan_def *chandef,
+ }
+ EXPORT_SYMBOL(ieee80211_chandef_to_operating_class);
+
++static int cfg80211_wdev_bi(struct wireless_dev *wdev)
++{
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ WARN_ON(wdev->valid_links);
++ return wdev->links[0].ap.beacon_interval;
++ case NL80211_IFTYPE_MESH_POINT:
++ return wdev->u.mesh.beacon_interval;
++ case NL80211_IFTYPE_ADHOC:
++ return wdev->u.ibss.beacon_interval;
++ default:
++ break;
++ }
++
++ return 0;
++}
++
+ static void cfg80211_calculate_bi_data(struct wiphy *wiphy, u32 new_beacon_int,
+ u32 *beacon_int_gcd,
+ bool *beacon_int_different)
+@@ -1940,19 +1962,27 @@ static void cfg80211_calculate_bi_data(struct wiphy *wiphy, u32 new_beacon_int,
+ *beacon_int_different = false;
+
+ list_for_each_entry(wdev, &wiphy->wdev_list, list) {
+- if (!wdev->beacon_interval)
++ int wdev_bi;
++
++ /* this feature isn't supported with MLO */
++ if (wdev->valid_links)
++ continue;
++
++ wdev_bi = cfg80211_wdev_bi(wdev);
++
++ if (!wdev_bi)
+ continue;
+
+ if (!*beacon_int_gcd) {
+- *beacon_int_gcd = wdev->beacon_interval;
++ *beacon_int_gcd = wdev_bi;
+ continue;
+ }
+
+- if (wdev->beacon_interval == *beacon_int_gcd)
++ if (wdev_bi == *beacon_int_gcd)
+ continue;
+
+ *beacon_int_different = true;
+- *beacon_int_gcd = gcd(*beacon_int_gcd, wdev->beacon_interval);
++ *beacon_int_gcd = gcd(*beacon_int_gcd, wdev_bi);
+ }
+
+ if (new_beacon_int && *beacon_int_gcd != new_beacon_int) {
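
In the util.c hunk above, cfg80211_calculate_bi_data() now reads each wdev's beacon interval through cfg80211_wdev_bi(), which picks the right per-iftype field, and skips MLO interfaces entirely. The arithmetic is unchanged: a running GCD over all non-zero intervals, plus a flag when any two differ. A standalone sketch of that reduction:

#include <stdio.h>

static unsigned int gcd(unsigned int a, unsigned int b)
{
    while (b) {
        unsigned int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Reduce a set of beacon intervals (0 = interface not beaconing,
 * skipped) to their GCD, noting whether any two differ. */
static unsigned int beacon_int_gcd(const unsigned int *bi, int n, int *different)
{
    unsigned int g = 0;
    *different = 0;
    for (int i = 0; i < n; i++) {
        if (!bi[i])
            continue;
        if (!g) {
            g = bi[i];
        } else if (bi[i] != g) {
            *different = 1;
            g = gcd(g, bi[i]);
        }
    }
    return g;
}

int main(void)
{
    unsigned int bi[] = { 100, 0, 150 };
    int diff;
    printf("gcd=%u different=%d\n", beacon_int_gcd(bi, 3, &diff), diff);
    /* gcd=50 different=1 */
    return 0;
}
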
+diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
+index a32065d600a1c..a9767bfe73300 100644
+--- a/net/wireless/wext-compat.c
++++ b/net/wireless/wext-compat.c
+@@ -7,7 +7,7 @@
+ * we directly assign the wireless handlers of wireless interfaces.
+ *
+ * Copyright 2008-2009 Johannes Berg <johannes@sipsolutions.net>
+- * Copyright (C) 2019-2021 Intel Corporation
++ * Copyright (C) 2019-2022 Intel Corporation
+ */
+
+ #include <linux/export.h>
+@@ -415,6 +415,9 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+ int err, i;
+ bool rejoin = false;
+
++ if (wdev->valid_links)
++ return -EINVAL;
++
+ if (pairwise && !addr)
+ return -EINVAL;
+
+@@ -437,7 +440,7 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+ return -EOPNOTSUPP;
+
+ if (params->cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
+- if (!wdev->current_bss)
++ if (!wdev->connected)
+ return -ENOLINK;
+
+ if (!rdev->ops->set_default_mgmt_key)
+@@ -450,7 +453,9 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+
+ if (remove) {
+ err = 0;
+- if (wdev->current_bss) {
++ if (wdev->connected ||
++ (wdev->iftype == NL80211_IFTYPE_ADHOC &&
++ wdev->u.ibss.current_bss)) {
+ /*
+ * If removing the current TX key, we will need to
+ * join a new IBSS without the privacy bit clear.
+@@ -501,7 +506,9 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+ return -EINVAL;
+
+ err = 0;
+- if (wdev->current_bss)
++ if (wdev->connected ||
++ (wdev->iftype == NL80211_IFTYPE_ADHOC &&
++ wdev->u.ibss.current_bss))
+ err = rdev_add_key(rdev, dev, idx, pairwise, addr, params);
+ else if (params->cipher != WLAN_CIPHER_SUITE_WEP40 &&
+ params->cipher != WLAN_CIPHER_SUITE_WEP104)
+@@ -526,7 +533,9 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+ if ((params->cipher == WLAN_CIPHER_SUITE_WEP40 ||
+ params->cipher == WLAN_CIPHER_SUITE_WEP104) &&
+ (tx_key || (!addr && wdev->wext.default_key == -1))) {
+- if (wdev->current_bss) {
++ if (wdev->connected ||
++ (wdev->iftype == NL80211_IFTYPE_ADHOC &&
++ wdev->u.ibss.current_bss)) {
+ /*
+ * If we are getting a new TX key from not having
+ * had one before we need to join a new IBSS with
+@@ -549,7 +558,9 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+
+ if (params->cipher == WLAN_CIPHER_SUITE_AES_CMAC &&
+ (tx_key || (!addr && wdev->wext.default_mgmt_key == -1))) {
+- if (wdev->current_bss)
++ if (wdev->connected ||
++ (wdev->iftype == NL80211_IFTYPE_ADHOC &&
++ wdev->u.ibss.current_bss))
+ err = rdev_set_default_mgmt_key(rdev, dev, idx);
+ if (!err)
+ wdev->wext.default_mgmt_key = idx;
+@@ -595,6 +606,11 @@ static int cfg80211_wext_siwencode(struct net_device *dev,
+ return -EOPNOTSUPP;
+
+ wiphy_lock(&rdev->wiphy);
++ if (wdev->valid_links) {
++ err = -EOPNOTSUPP;
++ goto out;
++ }
++
+ idx = erq->flags & IW_ENCODE_INDEX;
+ if (idx == 0) {
+ idx = wdev->wext.default_key;
+@@ -613,7 +629,9 @@ static int cfg80211_wext_siwencode(struct net_device *dev,
+ /* No key data - just set the default TX key index */
+ err = 0;
+ wdev_lock(wdev);
+- if (wdev->current_bss)
++ if (wdev->connected ||
++ (wdev->iftype == NL80211_IFTYPE_ADHOC &&
++ wdev->u.ibss.current_bss))
+ err = rdev_set_default_key(rdev, dev, idx, true,
+ true);
+ if (!err)
+@@ -865,7 +883,7 @@ static int cfg80211_wext_giwfreq(struct net_device *dev,
+ break;
+ }
+
+- ret = rdev_get_channel(rdev, wdev, &chandef);
++ ret = rdev_get_channel(rdev, wdev, 0, &chandef);
+ if (ret)
+ break;
+ freq->m = chandef.chan->center_freq;
+@@ -1270,7 +1288,10 @@ static int cfg80211_wext_siwrate(struct net_device *dev,
+ return -EINVAL;
+
+ wiphy_lock(&rdev->wiphy);
+- ret = rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++ if (dev->ieee80211_ptr->valid_links)
++ ret = -EOPNOTSUPP;
++ else
++ ret = rdev_set_bitrate_mask(rdev, dev, 0, NULL, &mask);
+ wiphy_unlock(&rdev->wiphy);
+
+ return ret;
+@@ -1294,8 +1315,9 @@ static int cfg80211_wext_giwrate(struct net_device *dev,
+
+ err = 0;
+ wdev_lock(wdev);
+- if (wdev->current_bss)
+- memcpy(addr, wdev->current_bss->pub.bssid, ETH_ALEN);
++ if (!wdev->valid_links && wdev->links[0].client.current_bss)
++ memcpy(addr, wdev->links[0].client.current_bss->pub.bssid,
++ ETH_ALEN);
+ else
+ err = -EOPNOTSUPP;
+ wdev_unlock(wdev);
+@@ -1339,11 +1361,11 @@ static struct iw_statistics *cfg80211_wireless_stats(struct net_device *dev)
+
+ /* Grab BSSID of current BSS, if any */
+ wdev_lock(wdev);
+- if (!wdev->current_bss) {
++ if (wdev->valid_links || !wdev->links[0].client.current_bss) {
+ wdev_unlock(wdev);
+ return NULL;
+ }
+- memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
++ memcpy(bssid, wdev->links[0].client.current_bss->pub.bssid, ETH_ALEN);
+ wdev_unlock(wdev);
+
+ memset(&sinfo, 0, sizeof(sinfo));
+diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c
+index cd09a9042261f..68f45afc352de 100644
+--- a/net/wireless/wext-sme.c
++++ b/net/wireless/wext-sme.c
+@@ -3,7 +3,7 @@
+ * cfg80211 wext compat for managed mode.
+ *
+ * Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
+- * Copyright (C) 2009, 2020-2021 Intel Corporation.
++ * Copyright (C) 2009, 2020-2022 Intel Corporation
+ */
+
+ #include <linux/export.h>
+@@ -124,9 +124,12 @@ int cfg80211_mgd_wext_giwfreq(struct net_device *dev,
+ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION))
+ return -EINVAL;
+
++ if (wdev->valid_links)
++ return -EOPNOTSUPP;
++
+ wdev_lock(wdev);
+- if (wdev->current_bss)
+- chan = wdev->current_bss->pub.channel;
++ if (wdev->links[0].client.current_bss)
++ chan = wdev->links[0].client.current_bss->pub.channel;
+ else if (wdev->wext.connect.channel)
+ chan = wdev->wext.connect.channel;
+ wdev_unlock(wdev);
+@@ -208,15 +211,19 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
+ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION))
+ return -EINVAL;
+
++ if (wdev->valid_links)
++ return -EINVAL;
++
+ data->flags = 0;
+
+ wdev_lock(wdev);
+- if (wdev->current_bss) {
++ if (wdev->links[0].client.current_bss) {
+ const struct element *ssid_elem;
+
+ rcu_read_lock();
+- ssid_elem = ieee80211_bss_get_elem(&wdev->current_bss->pub,
+- WLAN_EID_SSID);
++ ssid_elem = ieee80211_bss_get_elem(
++ &wdev->links[0].client.current_bss->pub,
++ WLAN_EID_SSID);
+ if (ssid_elem) {
+ data->flags = 1;
+ data->length = ssid_elem->datalen;
+@@ -300,8 +307,14 @@ int cfg80211_mgd_wext_giwap(struct net_device *dev,
+ ap_addr->sa_family = ARPHRD_ETHER;
+
+ wdev_lock(wdev);
+- if (wdev->current_bss)
+- memcpy(ap_addr->sa_data, wdev->current_bss->pub.bssid, ETH_ALEN);
++ if (wdev->valid_links) {
++ wdev_unlock(wdev);
++ return -EOPNOTSUPP;
++ }
++ if (wdev->links[0].client.current_bss)
++ memcpy(ap_addr->sa_data,
++ wdev->links[0].client.current_bss->pub.bssid,
++ ETH_ALEN);
+ else
+ eth_zero_addr(ap_addr->sa_data);
+ wdev_unlock(wdev);
+diff --git a/samples/bpf/xdp_router_ipv4.bpf.c b/samples/bpf/xdp_router_ipv4.bpf.c
+index 248119ca79387..0643330d1d2e5 100644
+--- a/samples/bpf/xdp_router_ipv4.bpf.c
++++ b/samples/bpf/xdp_router_ipv4.bpf.c
+@@ -150,6 +150,15 @@ int xdp_router_ipv4_prog(struct xdp_md *ctx)
+
+ dest_mac = bpf_map_lookup_elem(&arp_table,
+ &prefix_value->gw);
++ if (!dest_mac) {
++ /* Forward the packet to the kernel in
++ * order to trigger ARP discovery for
++ * the default gw.
++ */
++ if (rec)
++ NO_TEAR_INC(rec->xdp_pass);
++ return XDP_PASS;
++ }
+ }
+ }
+
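
The xdp_router_ipv4 fix above is a classic BPF-map rule: bpf_map_lookup_elem() can return NULL, and the right response when the gateway has no ARP entry yet is XDP_PASS, so the kernel stack generates the ARP request. The control flow reduced to plain C, with a stub lookup standing in for the map helper:

#include <stdio.h>
#include <stddef.h>

enum { XDP_PASS, XDP_TX };

/* Stand-in for bpf_map_lookup_elem(&arp_table, &gw): NULL when
 * the gateway has no ARP entry yet. */
static const unsigned char *arp_lookup(unsigned int gw)
{
    (void)gw;
    return NULL;
}

static int route_packet(unsigned int gw)
{
    const unsigned char *dest_mac = arp_lookup(gw);

    if (!dest_mac) {
        /* Let the kernel see the packet so it triggers ARP
         * discovery for the gateway, as the patch comments. */
        return XDP_PASS;
    }
    return XDP_TX;
}

int main(void)
{
    printf("%s\n", route_packet(0x0a000001) == XDP_PASS ? "pass" : "tx");
    return 0;
}
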
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 94ed98dd899f3..57099687e5e1d 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -112,7 +112,9 @@ __faddr2line() {
+ # section offsets.
+ local file_type=$(${READELF} --file-header $objfile |
+ ${AWK} '$1 == "Type:" { print $2; exit }')
+- [[ $file_type = "EXEC" ]] && is_vmlinux=1
++ if [[ $file_type = "EXEC" ]] || [[ $file_type == "DYN" ]]; then
++ is_vmlinux=1
++ fi
+
+ # Go through each of the object's symbols which match the func name.
+ # In rare cases there might be duplicates, in which case we print all
+diff --git a/scripts/gdb/linux/dmesg.py b/scripts/gdb/linux/dmesg.py
+index d5983cf3db7d0..c771831eb077d 100644
+--- a/scripts/gdb/linux/dmesg.py
++++ b/scripts/gdb/linux/dmesg.py
+@@ -22,7 +22,6 @@ prb_desc_type = utils.CachedType("struct prb_desc")
+ prb_desc_ring_type = utils.CachedType("struct prb_desc_ring")
+ prb_data_ring_type = utils.CachedType("struct prb_data_ring")
+ printk_ringbuffer_type = utils.CachedType("struct printk_ringbuffer")
+-atomic_long_type = utils.CachedType("atomic_long_t")
+
+ class LxDmesg(gdb.Command):
+ """Print Linux kernel log buffer."""
+@@ -68,8 +67,6 @@ class LxDmesg(gdb.Command):
+ off = prb_data_ring_type.get_type()['data'].bitpos // 8
+ text_data_addr = utils.read_ulong(text_data_ring, off)
+
+- counter_off = atomic_long_type.get_type()['counter'].bitpos // 8
+-
+ sv_off = prb_desc_type.get_type()['state_var'].bitpos // 8
+
+ off = prb_desc_type.get_type()['text_blk_lpos'].bitpos // 8
+@@ -89,9 +86,9 @@ class LxDmesg(gdb.Command):
+
+ # read in tail and head descriptor ids
+ off = prb_desc_ring_type.get_type()['tail_id'].bitpos // 8
+- tail_id = utils.read_u64(desc_ring, off + counter_off)
++ tail_id = utils.read_atomic_long(desc_ring, off)
+ off = prb_desc_ring_type.get_type()['head_id'].bitpos // 8
+- head_id = utils.read_u64(desc_ring, off + counter_off)
++ head_id = utils.read_atomic_long(desc_ring, off)
+
+ did = tail_id
+ while True:
+@@ -102,7 +99,7 @@ class LxDmesg(gdb.Command):
+ desc = utils.read_memoryview(inf, desc_addr + desc_off, desc_sz).tobytes()
+
+ # skip non-committed record
+- state = 3 & (utils.read_u64(desc, sv_off + counter_off) >> desc_flags_shift)
++ state = 3 & (utils.read_atomic_long(desc, sv_off) >> desc_flags_shift)
+ if state != desc_committed and state != desc_finalized:
+ if did == head_id:
+ break
+diff --git a/scripts/gdb/linux/utils.py b/scripts/gdb/linux/utils.py
+index ff7c1799d588f..1553f68716cc2 100644
+--- a/scripts/gdb/linux/utils.py
++++ b/scripts/gdb/linux/utils.py
+@@ -35,13 +35,12 @@ class CachedType:
+
+
+ long_type = CachedType("long")
+-
++atomic_long_type = CachedType("atomic_long_t")
+
+ def get_long_type():
+ global long_type
+ return long_type.get_type()
+
+-
+ def offset_of(typeobj, field):
+ element = gdb.Value(0).cast(typeobj)
+ return int(str(element[field].address).split()[0], 16)
+@@ -129,6 +128,17 @@ def read_ulong(buffer, offset):
+ else:
+ return read_u32(buffer, offset)
+
++atomic_long_counter_offset = atomic_long_type.get_type()['counter'].bitpos
++atomic_long_counter_sizeof = atomic_long_type.get_type()['counter'].type.sizeof
++
++def read_atomic_long(buffer, offset):
++ global atomic_long_counter_offset
++ global atomic_long_counter_sizeof
++
++ if atomic_long_counter_sizeof == 8:
++ return read_u64(buffer, offset + atomic_long_counter_offset)
++ else:
++ return read_u32(buffer, offset + atomic_long_counter_offset)
+
+ target_arch = None
+
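
The gdb script change above centralizes atomic_long_t reads: the old dmesg.py hardcoded a u64 read of the 'counter' member, which misparses dumps from 32-bit kernels where counter is 4 bytes. A C analogue of the new read_atomic_long() helper (host-endian, as a sketch):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Read an atomic_long_t through its 'counter' member, whose width
 * depends on the target: 4 bytes on 32-bit kernels, 8 on 64-bit. */
static uint64_t read_atomic_long(const uint8_t *buf, size_t off,
                                 size_t counter_off, size_t counter_size)
{
    if (counter_size == 8) {
        uint64_t v;
        memcpy(&v, buf + off + counter_off, sizeof(v));
        return v;
    } else {
        uint32_t v;
        memcpy(&v, buf + off + counter_off, sizeof(v));
        return v;
    }
}

int main(void)
{
    uint8_t buf[8] = { 0x2a, 0, 0, 0, 0, 0, 0, 0 };
    printf("%llu\n", (unsigned long long)read_atomic_long(buf, 0, 0, 4));
    return 0;
}
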
+diff --git a/security/selinux/ss/policydb.h b/security/selinux/ss/policydb.h
+index c24d4e1063ea0..ffc4e7bad2054 100644
+--- a/security/selinux/ss/policydb.h
++++ b/security/selinux/ss/policydb.h
+@@ -370,6 +370,8 @@ static inline int put_entry(const void *buf, size_t bytes, int num, struct polic
+ {
+ size_t len = bytes * num;
+
++ if (len > fp->len)
++ return -EINVAL;
+ memcpy(fp->data, buf, len);
+ fp->data += len;
+ fp->len -= len;
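
put_entry() previously trusted the caller: the memcpy could overrun the buffer and the size_t subtraction that follows would wrap on underflow. The one-line check turns an overrun into -EINVAL. The same guard in a standalone buffer writer:

#include <stdio.h>
#include <string.h>
#include <errno.h>

struct policy_file {
    char *data;   /* write cursor */
    size_t len;   /* bytes remaining */
};

static int put_entry(const void *buf, size_t bytes, size_t num,
                     struct policy_file *fp)
{
    size_t len = bytes * num;

    /* Refuse writes that do not fit; without this the memcpy
     * overruns and the size_t subtraction below wraps around. */
    if (len > fp->len)
        return -EINVAL;
    memcpy(fp->data, buf, len);
    fp->data += len;
    fp->len -= len;
    return 0;
}

int main(void)
{
    char out[4];
    struct policy_file fp = { out, sizeof(out) };
    printf("%d\n", put_entry("abcdefgh", 1, 8, &fp)); /* -22, rejected */
    return 0;
}
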
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 69b2734311a69..fe5fcf571c564 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -4048,6 +4048,7 @@ int security_read_policy(struct selinux_state *state,
+ int security_read_state_kernel(struct selinux_state *state,
+ void **data, size_t *len)
+ {
++ int err;
+ struct selinux_policy *policy;
+
+ policy = rcu_dereference_protected(
+@@ -4060,5 +4061,11 @@ int security_read_state_kernel(struct selinux_state *state,
+ if (!*data)
+ return -ENOMEM;
+
+- return __security_read_policy(policy, *data, len);
++ err = __security_read_policy(policy, *data, len);
++ if (err) {
++ vfree(*data);
++ *data = NULL;
++ *len = 0;
++ }
++ return err;
+ }
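
The services.c hunk makes security_read_state_kernel() own its error path: when __security_read_policy() fails, the vmalloc'd buffer is freed and the out-parameters are cleared instead of being handed back half-initialized. The shape of that fix, with malloc standing in for vmalloc and a hypothetical fill callback:

#include <stdio.h>
#include <stdlib.h>

static int fill_policy(void *data, size_t *len)
{
    (void)data; (void)len;
    return -1; /* simulate a failed read */
}

static int read_policy(void **data, size_t *len)
{
    int err;

    *len = 64;
    *data = malloc(*len);
    if (!*data)
        return -1;

    err = fill_policy(*data, len);
    if (err) {
        /* On failure, release the buffer and zero the outputs so
         * the caller never sees a dangling pointer or stale length. */
        free(*data);
        *data = NULL;
        *len = 0;
    }
    return err;
}

int main(void)
{
    void *data;
    size_t len;
    int err = read_policy(&data, &len);

    printf("err=%d data=%p len=%zu\n", err, data, len);
    return 0;
}
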
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 678fbcaf2a3bc..6807b4708a176 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -395,6 +395,7 @@ static const struct snd_pci_quirk cs420x_fixup_tbl[] = {
+
+ /* codec SSID */
+ SND_PCI_QUIRK(0x106b, 0x0600, "iMac 14,1", CS420X_IMAC27_122),
++ SND_PCI_QUIRK(0x106b, 0x0900, "iMac 12,1", CS420X_IMAC27_122),
+ SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81),
+ SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122),
+ SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101),
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 83ae21a01bbf9..7b1a30a551f64 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -222,6 +222,7 @@ enum {
+ CXT_PINCFG_LEMOTE_A1205,
+ CXT_PINCFG_COMPAQ_CQ60,
+ CXT_FIXUP_STEREO_DMIC,
++ CXT_PINCFG_LENOVO_NOTEBOOK,
+ CXT_FIXUP_INC_MIC_BOOST,
+ CXT_FIXUP_HEADPHONE_MIC_PIN,
+ CXT_FIXUP_HEADPHONE_MIC,
+@@ -772,6 +773,14 @@ static const struct hda_fixup cxt_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt_fixup_stereo_dmic,
+ },
++ [CXT_PINCFG_LENOVO_NOTEBOOK] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1a, 0x05d71030 },
++ { }
++ },
++ .chain_id = CXT_FIXUP_STEREO_DMIC,
++ },
+ [CXT_FIXUP_INC_MIC_BOOST] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt5066_increase_mic_boost,
+@@ -971,7 +980,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+- SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
++ SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_PINCFG_LENOVO_NOTEBOOK),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2f55bc43bfa9c..619e6025ba97c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6787,6 +6787,43 @@ static void alc_fixup_dell4_mic_no_presence_quiet(struct hda_codec *codec,
+ }
+ }
+
++static void alc287_fixup_yoga9_14iap7_bass_spk_pin(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ /*
++ * The Pin Complex 0x17 for the bass speakers is wrongly reported as
++ * unconnected.
++ */
++ static const struct hda_pintbl pincfgs[] = {
++ { 0x17, 0x90170121 },
++ { }
++ };
++ /*
++ * Avoid DAC 0x06 and 0x08, as they have no volume controls.
++ * DAC 0x02 and 0x03 would be fine.
++ */
++ static const hda_nid_t conn[] = { 0x02, 0x03 };
++ /*
++ * Prefer both speakerbar (0x14) and bass speakers (0x17) connected to DAC 0x02.
++ * Headphones (0x21) are connected to DAC 0x03.
++ */
++ static const hda_nid_t preferred_pairs[] = {
++ 0x14, 0x02,
++ 0x17, 0x02,
++ 0x21, 0x03,
++ 0
++ };
++ struct alc_spec *spec = codec->spec;
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++ spec->gen.preferred_dacs = preferred_pairs;
++ break;
++ }
++}
++
+ enum {
+ ALC269_FIXUP_GPIO2,
+ ALC269_FIXUP_SONY_VAIO,
+@@ -6842,6 +6879,7 @@ enum {
+ ALC269_FIXUP_LIMIT_INT_MIC_BOOST,
+ ALC269VB_FIXUP_ASUS_ZENBOOK,
+ ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A,
++ ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED,
+ ALC269VB_FIXUP_ORDISSIMO_EVE2,
+ ALC283_FIXUP_CHROME_BOOK,
+@@ -7023,6 +7061,8 @@ enum {
+ ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED,
+ ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED,
+ ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE,
++ ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK,
++ ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN,
+ };
+
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -7427,6 +7467,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269VB_FIXUP_ASUS_ZENBOOK,
+ },
++ [ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x18, 0x01a110f0 }, /* use as headset mic */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
++ },
+ [ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+@@ -8865,6 +8914,74 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ },
++ [ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ // enable left speaker
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xf },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x10 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x40 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ // enable right speaker
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x46 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xf },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x46 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x10 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x44 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++ { },
++ },
++ },
++ [ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc287_fixup_yoga9_14iap7_bass_spk_pin,
++ .chained = true,
++ .chain_id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -9044,6 +9161,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
++ SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++ SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+@@ -9059,6 +9178,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ ALC285_FIXUP_HP_GPIO_AMP_INIT),
++ SND_PCI_QUIRK(0x103c, 0x8786, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+@@ -9128,6 +9248,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+@@ -9203,6 +9324,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x4041, "Clevo NV4[15]PZ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x40a1, "Clevo NL40GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x40c1, "Clevo NL40[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x40d1, "Clevo NL41DU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -9315,6 +9437,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++ SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
+@@ -9560,6 +9683,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
+ {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
++ {.id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, .name = "alc287-yoga9-bass-spk-pin"},
+ {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
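
The three HDA codec diffs above all work the same way: match a PCI subsystem vendor/device ID against a quirk table and apply the named fixup (new Yoga 9 bass-speaker entries, an extra iMac 12,1 SSID, a Lenovo U310 pin override). A toy version of that table lookup, using one real ID from the patch (the rest of the names are hypothetical):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

enum { FIXUP_NONE, FIXUP_YOGA9_BASS_SPK_PIN };

struct quirk {
    uint16_t vendor, device;
    const char *name;
    int fixup;
};

/* The kernel scans its quirk tables linearly, exactly like this. */
static const struct quirk quirks[] = {
    { 0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", FIXUP_YOGA9_BASS_SPK_PIN },
};

static int lookup_fixup(uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
        if (quirks[i].vendor == vendor && quirks[i].device == device)
            return quirks[i].fixup;
    return FIXUP_NONE;
}

int main(void)
{
    printf("%d\n", lookup_fixup(0x17aa, 0x3801)); /* 1 */
    return 0;
}
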
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index f06e6c1a77992..ecfe7a7907901 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -105,28 +105,14 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21AW"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21CM"),
+ }
+ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21AX"),
+- }
+- },
+- {
+- .driver_data = &acp6x_card,
+- .matches = {
+- DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21BN"),
+- }
+- },
+- {
+- .driver_data = &acp6x_card,
+- .matches = {
+- DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21BQ"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21CN"),
+ }
+ },
+ {
+@@ -157,20 +143,6 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CL"),
+ }
+ },
+- {
+- .driver_data = &acp6x_card,
+- .matches = {
+- DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21D8"),
+- }
+- },
+- {
+- .driver_data = &acp6x_card,
+- .matches = {
+- DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "21D9"),
+- }
+- },
+ {}
+ };
+
+diff --git a/sound/soc/amd/yc/pci-acp6x.c b/sound/soc/amd/yc/pci-acp6x.c
+index 20f7a99783f20..77c5fa1f7af14 100644
+--- a/sound/soc/amd/yc/pci-acp6x.c
++++ b/sound/soc/amd/yc/pci-acp6x.c
+@@ -159,7 +159,7 @@ static int snd_acp6x_probe(struct pci_dev *pci,
+ case 0x6f:
+ break;
+ default:
+- dev_err(&pci->dev, "acp6x pci device not found\n");
++ dev_dbg(&pci->dev, "acp6x pci device not found\n");
+ return -ENODEV;
+ }
+ if (pci_enable_device(pci)) {
+diff --git a/sound/soc/atmel/mchp-spdifrx.c b/sound/soc/atmel/mchp-spdifrx.c
+index 5fc968483f2c8..a7baa0385ec58 100644
+--- a/sound/soc/atmel/mchp-spdifrx.c
++++ b/sound/soc/atmel/mchp-spdifrx.c
+@@ -288,15 +288,17 @@ static void mchp_spdifrx_isr_blockend_en(struct mchp_spdifrx_dev *dev)
+ spin_unlock_irqrestore(&dev->blockend_lock, flags);
+ }
+
+-/* called from atomic context only */
++/* called from atomic/non-atomic context */
+ static void mchp_spdifrx_isr_blockend_dis(struct mchp_spdifrx_dev *dev)
+ {
+- spin_lock(&dev->blockend_lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&dev->blockend_lock, flags);
+ dev->blockend_refcount--;
+ /* don't enable BLOCKEND interrupt if it's already enabled */
+ if (dev->blockend_refcount == 0)
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+- spin_unlock(&dev->blockend_lock);
++ spin_unlock_irqrestore(&dev->blockend_lock, flags);
+ }
+
+ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+@@ -575,6 +577,7 @@ static int mchp_spdifrx_subcode_ch_get(struct mchp_spdifrx_dev *dev,
+ if (ret <= 0) {
+ dev_dbg(dev->dev, "user data for channel %d timeout\n",
+ channel);
++ mchp_spdifrx_isr_blockend_dis(dev);
+ return ret;
+ }
+
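
mchp_spdifrx_isr_blockend_dis() is now callable from both interrupt and thread context, so the plain spin_lock becomes spin_lock_irqsave, and the timeout path in subcode_ch_get() drops its reference so the refcount stays balanced. The refcounted enable/disable idea, sketched with a pthread mutex standing in for the spinlock (plain userspace C cannot show IRQ masking itself; build with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int refcount;

static void blockend_enable(void)
{
    pthread_mutex_lock(&lock);
    /* Only the 0 -> 1 transition touches the hardware register. */
    if (refcount++ == 0)
        puts("IRQ enabled");
    pthread_mutex_unlock(&lock);
}

static void blockend_disable(void)
{
    pthread_mutex_lock(&lock);
    /* Only the 1 -> 0 transition disables; every enable must pair
     * with exactly one disable, including on timeout paths. */
    if (--refcount == 0)
        puts("IRQ disabled");
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    blockend_enable();
    blockend_enable();
    blockend_disable();
    blockend_disable(); /* prints "IRQ disabled" only here */
    return 0;
}
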
+diff --git a/sound/soc/codecs/cros_ec_codec.c b/sound/soc/codecs/cros_ec_codec.c
+index 8b0a9c788a264..11e7b3f6d410b 100644
+--- a/sound/soc/codecs/cros_ec_codec.c
++++ b/sound/soc/codecs/cros_ec_codec.c
+@@ -995,6 +995,7 @@ static int cros_ec_codec_platform_probe(struct platform_device *pdev)
+ dev_dbg(dev, "ap_shm_phys_addr=%#llx len=%#x\n",
+ priv->ap_shm_phys_addr, priv->ap_shm_len);
+ }
++ of_node_put(node);
+ }
+ #endif
+
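
This of_node_put() is one instance of a leak pattern fixed repeatedly through the rest of the series (mt6359-accdet, mt6359, the audio-graph cards, the MediaTek and Qualcomm machine drivers): of_parse_phandle()/of_get_parent() return a device_node with an elevated refcount, and every exit path, success or error, must drop it. A generic get/put sketch of the rule:

#include <stdio.h>

struct node { int refcount; };

static struct node *node_get(struct node *n)
{
    if (n)
        n->refcount++;
    return n;
}

static void node_put(struct node *n)
{
    if (n && --n->refcount == 0)
        puts("node freed");
}

/* Every lookup that takes a reference pairs with exactly one put,
 * on the error path as well as the success path. */
static int probe(struct node *dt_root)
{
    struct node *np = node_get(dt_root); /* like of_parse_phandle() */
    int err = -1;                        /* simulate a probe failure */

    if (err) {
        node_put(np);                    /* the fix: no leak on error */
        return err;
    }
    node_put(np);
    return 0;
}

int main(void)
{
    struct node root = { .refcount = 1 };
    probe(&root);
    node_put(&root); /* prints "node freed" */
    return 0;
}
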
+diff --git a/sound/soc/codecs/cs35l45.c b/sound/soc/codecs/cs35l45.c
+index 2367c1a4c10eb..145051390471b 100644
+--- a/sound/soc/codecs/cs35l45.c
++++ b/sound/soc/codecs/cs35l45.c
+@@ -500,6 +500,8 @@ static const struct snd_soc_component_driver cs35l45_component = {
+ .num_controls = ARRAY_SIZE(cs35l45_controls),
+
+ .name = "cs35l45",
++
++ .endianness = 1,
+ };
+
+ static int __maybe_unused cs35l45_runtime_suspend(struct device *dev)
+diff --git a/sound/soc/codecs/da7210.c b/sound/soc/codecs/da7210.c
+index 3fa3042e44242..76a21976ccddb 100644
+--- a/sound/soc/codecs/da7210.c
++++ b/sound/soc/codecs/da7210.c
+@@ -1335,6 +1335,8 @@ static int __init da7210_modinit(void)
+ int ret = 0;
+ #if IS_ENABLED(CONFIG_I2C)
+ ret = i2c_add_driver(&da7210_i2c_driver);
++ if (ret)
++ return ret;
+ #endif
+ #if defined(CONFIG_SPI_MASTER)
+ ret = spi_register_driver(&da7210_spi_driver);
+diff --git a/sound/soc/codecs/max98390.c b/sound/soc/codecs/max98390.c
+index 2a6b1648c8844..d83f81d9ff4ea 100644
+--- a/sound/soc/codecs/max98390.c
++++ b/sound/soc/codecs/max98390.c
+@@ -10,7 +10,7 @@
+ #include <linux/cdev.h>
+ #include <linux/dmi.h>
+ #include <linux/firmware.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/of_gpio.h>
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 20a07c92b2fc2..098a58990f07d 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -328,8 +328,8 @@ static const struct snd_kcontrol_new rx1_mix2_inp1_mux = SOC_DAPM_ENUM(
+ static const struct snd_kcontrol_new rx2_mix2_inp1_mux = SOC_DAPM_ENUM(
+ "RX2 MIX2 INP1 Mux", rx2_mix2_inp1_chain_enum);
+
+-/* Digital Gain control -38.4 dB to +38.4 dB in 0.3 dB steps */
+-static const DECLARE_TLV_DB_SCALE(digital_gain, -3840, 30, 0);
++/* Digital Gain control -84 dB to +40 dB in 1 dB steps */
++static const DECLARE_TLV_DB_SCALE(digital_gain, -8400, 100, -8400);
+
+ /* Cutoff Freq for High Pass Filter at -3dB */
+ static const char * const hpf_cutoff_text[] = {
+@@ -510,15 +510,15 @@ static int wcd_iir_filter_info(struct snd_kcontrol *kcontrol,
+
+ static const struct snd_kcontrol_new msm8916_wcd_digital_snd_controls[] = {
+ SOC_SINGLE_S8_TLV("RX1 Digital Volume", LPASS_CDC_RX1_VOL_CTL_B2_CTL,
+- -128, 127, digital_gain),
++ -84, 40, digital_gain),
+ SOC_SINGLE_S8_TLV("RX2 Digital Volume", LPASS_CDC_RX2_VOL_CTL_B2_CTL,
+- -128, 127, digital_gain),
++ -84, 40, digital_gain),
+ SOC_SINGLE_S8_TLV("RX3 Digital Volume", LPASS_CDC_RX3_VOL_CTL_B2_CTL,
+- -128, 127, digital_gain),
++ -84, 40, digital_gain),
+ SOC_SINGLE_S8_TLV("TX1 Digital Volume", LPASS_CDC_TX1_VOL_CTL_GAIN,
+- -128, 127, digital_gain),
++ -84, 40, digital_gain),
+ SOC_SINGLE_S8_TLV("TX2 Digital Volume", LPASS_CDC_TX2_VOL_CTL_GAIN,
+- -128, 127, digital_gain),
++ -84, 40, digital_gain),
+ SOC_ENUM("TX1 HPF Cutoff", tx1_hpf_cutoff_enum),
+ SOC_ENUM("TX2 HPF Cutoff", tx2_hpf_cutoff_enum),
+ SOC_SINGLE("TX1 HPF Switch", LPASS_CDC_TX1_MUX_CTL, 3, 1, 0),
+@@ -553,22 +553,22 @@ static const struct snd_kcontrol_new msm8916_wcd_digital_snd_controls[] = {
+ WCD_IIR_FILTER_CTL("IIR2 Band3", IIR2, BAND3),
+ WCD_IIR_FILTER_CTL("IIR2 Band4", IIR2, BAND4),
+ WCD_IIR_FILTER_CTL("IIR2 Band5", IIR2, BAND5),
+- SOC_SINGLE_SX_TLV("IIR1 INP1 Volume", LPASS_CDC_IIR1_GAIN_B1_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR1 INP2 Volume", LPASS_CDC_IIR1_GAIN_B2_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR1 INP3 Volume", LPASS_CDC_IIR1_GAIN_B3_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR1 INP4 Volume", LPASS_CDC_IIR1_GAIN_B4_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR2 INP1 Volume", LPASS_CDC_IIR2_GAIN_B1_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR2 INP2 Volume", LPASS_CDC_IIR2_GAIN_B2_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR2 INP3 Volume", LPASS_CDC_IIR2_GAIN_B3_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("IIR2 INP4 Volume", LPASS_CDC_IIR2_GAIN_B4_CTL,
+- 0, -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR1 INP1 Volume", LPASS_CDC_IIR1_GAIN_B1_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR1 INP2 Volume", LPASS_CDC_IIR1_GAIN_B2_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR1 INP3 Volume", LPASS_CDC_IIR1_GAIN_B3_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR1 INP4 Volume", LPASS_CDC_IIR1_GAIN_B4_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR2 INP1 Volume", LPASS_CDC_IIR2_GAIN_B1_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR2 INP2 Volume", LPASS_CDC_IIR2_GAIN_B2_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR2 INP3 Volume", LPASS_CDC_IIR2_GAIN_B3_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("IIR2 INP4 Volume", LPASS_CDC_IIR2_GAIN_B4_CTL,
++ -84, 40, digital_gain),
+
+ };
+
+diff --git a/sound/soc/codecs/mt6359-accdet.c b/sound/soc/codecs/mt6359-accdet.c
+index 6d3d170144a0a..c190628e29056 100644
+--- a/sound/soc/codecs/mt6359-accdet.c
++++ b/sound/soc/codecs/mt6359-accdet.c
+@@ -675,6 +675,7 @@ static int mt6359_accdet_parse_dt(struct mt6359_accdet *priv)
+ sizeof(struct three_key_threshold));
+ }
+
++ of_node_put(node);
+ dev_warn(priv->dev, "accdet caps=%x\n", priv->caps);
+
+ return 0;
+diff --git a/sound/soc/codecs/mt6359.c b/sound/soc/codecs/mt6359.c
+index 23709b180409c..c9a453ce8a2a8 100644
+--- a/sound/soc/codecs/mt6359.c
++++ b/sound/soc/codecs/mt6359.c
+@@ -2778,6 +2778,7 @@ static int mt6359_parse_dt(struct mt6359_priv *priv)
+
+ ret = of_property_read_u32(np, "mediatek,mic-type-2",
+ &priv->mux_select[MUX_MIC_TYPE_2]);
++ of_node_put(np);
+ if (ret) {
+ dev_info(priv->dev,
+ "%s() failed to read mic-type-2, use default (%d)\n",
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 3cb7a3eab8c74..541ef1cd3b74e 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -2264,51 +2264,42 @@ static int wcd9335_rx_hph_mode_put(struct snd_kcontrol *kc,
+
+ static const struct snd_kcontrol_new wcd9335_snd_controls[] = {
+ /* -84dB min - 40dB max */
+- SOC_SINGLE_SX_TLV("RX0 Digital Volume", WCD9335_CDC_RX0_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX1 Digital Volume", WCD9335_CDC_RX1_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX2 Digital Volume", WCD9335_CDC_RX2_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX3 Digital Volume", WCD9335_CDC_RX3_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX4 Digital Volume", WCD9335_CDC_RX4_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX5 Digital Volume", WCD9335_CDC_RX5_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX6 Digital Volume", WCD9335_CDC_RX6_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX7 Digital Volume", WCD9335_CDC_RX7_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX8 Digital Volume", WCD9335_CDC_RX8_RX_VOL_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX0 Mix Digital Volume",
+- WCD9335_CDC_RX0_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX1 Mix Digital Volume",
+- WCD9335_CDC_RX1_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX2 Mix Digital Volume",
+- WCD9335_CDC_RX2_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX3 Mix Digital Volume",
+- WCD9335_CDC_RX3_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX4 Mix Digital Volume",
+- WCD9335_CDC_RX4_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX5 Mix Digital Volume",
+- WCD9335_CDC_RX5_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX6 Mix Digital Volume",
+- WCD9335_CDC_RX6_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX7 Mix Digital Volume",
+- WCD9335_CDC_RX7_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
+- SOC_SINGLE_SX_TLV("RX8 Mix Digital Volume",
+- WCD9335_CDC_RX8_RX_VOL_MIX_CTL,
+- 0, -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX0 Digital Volume", WCD9335_CDC_RX0_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX1 Digital Volume", WCD9335_CDC_RX1_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX2 Digital Volume", WCD9335_CDC_RX2_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX3 Digital Volume", WCD9335_CDC_RX3_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX4 Digital Volume", WCD9335_CDC_RX4_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX5 Digital Volume", WCD9335_CDC_RX5_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX6 Digital Volume", WCD9335_CDC_RX6_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX7 Digital Volume", WCD9335_CDC_RX7_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX8 Digital Volume", WCD9335_CDC_RX8_RX_VOL_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX0 Mix Digital Volume", WCD9335_CDC_RX0_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX1 Mix Digital Volume", WCD9335_CDC_RX1_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX2 Mix Digital Volume", WCD9335_CDC_RX2_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX3 Mix Digital Volume", WCD9335_CDC_RX3_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX4 Mix Digital Volume", WCD9335_CDC_RX4_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX5 Mix Digital Volume", WCD9335_CDC_RX5_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX6 Mix Digital Volume", WCD9335_CDC_RX6_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX7 Mix Digital Volume", WCD9335_CDC_RX7_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
++ SOC_SINGLE_S8_TLV("RX8 Mix Digital Volume", WCD9335_CDC_RX8_RX_VOL_MIX_CTL,
++ -84, 40, digital_gain),
+ SOC_ENUM("RX INT0_1 HPF cut off", cf_int0_1_enum),
+ SOC_ENUM("RX INT0_2 HPF cut off", cf_int0_2_enum),
+ SOC_ENUM("RX INT1_1 HPF cut off", cf_int1_1_enum),
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index f3a56f3ce4871..02a438c6c4c7a 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -1175,11 +1175,17 @@ static int __maybe_unused wsa881x_runtime_resume(struct device *dev)
+ struct sdw_slave *slave = dev_to_sdw_dev(dev);
+ struct regmap *regmap = dev_get_regmap(dev, NULL);
+ struct wsa881x_priv *wsa881x = dev_get_drvdata(dev);
++ unsigned long time;
+
+ gpiod_direction_output(wsa881x->sd_n, 1);
+
+- wait_for_completion_timeout(&slave->initialization_complete,
+- msecs_to_jiffies(WSA881X_PROBE_TIMEOUT));
++ time = wait_for_completion_timeout(&slave->initialization_complete,
++ msecs_to_jiffies(WSA881X_PROBE_TIMEOUT));
++ if (!time) {
++ dev_err(dev, "Initialization not complete, timed out\n");
++ gpiod_direction_output(wsa881x->sd_n, 0);
++ return -ETIMEDOUT;
++ }
+
+ regcache_cache_only(regmap, false);
+ regcache_sync(regmap);
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index d9a0d4768c4d5..c836848ef0a65 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -537,6 +537,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ struct device *codec_dev = NULL;
+ const char *codec_dai_name;
+ const char *codec_dev_name;
++ u32 asrc_fmt = 0;
+ u32 width;
+ int ret;
+
+@@ -829,8 +830,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ goto asrc_fail;
+ }
+
+- ret = of_property_read_u32(asrc_np, "fsl,asrc-format",
+- &priv->asrc_format);
++ ret = of_property_read_u32(asrc_np, "fsl,asrc-format", &asrc_fmt);
++ priv->asrc_format = (__force snd_pcm_format_t)asrc_fmt;
+ if (ret) {
+ /* Fallback to old binding; translate to asrc_format */
+ ret = of_property_read_u32(asrc_np, "fsl,asrc-width",
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 20a9f8e924b33..aa5edf32d9889 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -1066,6 +1066,7 @@ static int fsl_asrc_probe(struct platform_device *pdev)
+ struct resource *res;
+ void __iomem *regs;
+ int irq, ret, i;
++ u32 asrc_fmt = 0;
+ u32 map_idx;
+ char tmp[16];
+ u32 width;
+@@ -1174,7 +1175,8 @@ static int fsl_asrc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = of_property_read_u32(np, "fsl,asrc-format", &asrc->asrc_format);
++ ret = of_property_read_u32(np, "fsl,asrc-format", &asrc_fmt);
++ asrc->asrc_format = (__force snd_pcm_format_t)asrc_fmt;
+ if (ret) {
+ ret = of_property_read_u32(np, "fsl,asrc-width", &width);
+ if (ret) {
+@@ -1197,7 +1199,7 @@ static int fsl_asrc_probe(struct platform_device *pdev)
+ }
+ }
+
+- if (!(FSL_ASRC_FORMATS & (1ULL << asrc->asrc_format))) {
++ if (!(FSL_ASRC_FORMATS & pcm_format_to_bits(asrc->asrc_format))) {
+ dev_warn(&pdev->dev, "unsupported width, use default S24_LE\n");
+ asrc->asrc_format = SNDRV_PCM_FORMAT_S24_LE;
+ }
+diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
+index be14f84796cb4..cf0e10d17dbe3 100644
+--- a/sound/soc/fsl/fsl_easrc.c
++++ b/sound/soc/fsl/fsl_easrc.c
+@@ -476,7 +476,8 @@ static int fsl_easrc_prefilter_config(struct fsl_asrc *easrc,
+ struct fsl_asrc_pair *ctx;
+ struct device *dev;
+ u32 inrate, outrate, offset = 0;
+- u32 in_s_rate, out_s_rate, in_s_fmt, out_s_fmt;
++ u32 in_s_rate, out_s_rate;
++ snd_pcm_format_t in_s_fmt, out_s_fmt;
+ int ret, i;
+
+ if (!easrc)
+@@ -1873,6 +1874,7 @@ static int fsl_easrc_probe(struct platform_device *pdev)
+ struct resource *res;
+ struct device_node *np;
+ void __iomem *regs;
++ u32 asrc_fmt = 0;
+ int ret, irq;
+
+ easrc = devm_kzalloc(dev, sizeof(*easrc), GFP_KERNEL);
+@@ -1933,13 +1935,14 @@ static int fsl_easrc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = of_property_read_u32(np, "fsl,asrc-format", &easrc->asrc_format);
++ ret = of_property_read_u32(np, "fsl,asrc-format", &asrc_fmt);
++ easrc->asrc_format = (__force snd_pcm_format_t)asrc_fmt;
+ if (ret) {
+ dev_err(dev, "failed to asrc format\n");
+ return ret;
+ }
+
+- if (!(FSL_EASRC_FORMATS & (1ULL << easrc->asrc_format))) {
++ if (!(FSL_EASRC_FORMATS & (pcm_format_to_bits(easrc->asrc_format)))) {
+ dev_warn(dev, "unsupported format, switching to S24_LE\n");
+ easrc->asrc_format = SNDRV_PCM_FORMAT_S24_LE;
+ }
+diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
+index 86d5c360d4f53..7c70dac527137 100644
+--- a/sound/soc/fsl/fsl_easrc.h
++++ b/sound/soc/fsl/fsl_easrc.h
+@@ -569,7 +569,7 @@ struct fsl_easrc_io_params {
+ unsigned int access_len;
+ unsigned int fifo_wtmk;
+ unsigned int sample_rate;
+- unsigned int sample_format;
++ snd_pcm_format_t sample_format;
+ unsigned int norm_rate;
+ };
+
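
The fsl_asrc/fsl_easrc hunks above (and the imx-card one below) are mostly type hygiene: "fsl,asrc-format" is read from the device tree into a plain u32 and only then cast, with __force, into snd_pcm_format_t, and "1ULL << fmt" becomes pcm_format_to_bits(), so sparse can flag accidental mixing of raw integers and format values. A userspace sketch of the same newtype discipline, with hypothetical names:

#include <stdint.h>
#include <stdio.h>

/* A single-member struct makes the format a distinct type, so a
 * bare u32 from the device tree cannot be used by accident. */
typedef struct { int v; } pcm_format_t;

static pcm_format_t format_from_dt(uint32_t raw)
{
    return (pcm_format_t){ (int)raw };  /* the one sanctioned cast */
}

static uint64_t pcm_format_to_bits(pcm_format_t fmt)
{
    return 1ULL << fmt.v;
}

int main(void)
{
    uint32_t dt_value = 6;              /* toy format index */
    pcm_format_t fmt = format_from_dt(dt_value);
    uint64_t supported = (1ULL << 2) | (1ULL << 6);

    printf("supported=%d\n", (pcm_format_to_bits(fmt) & supported) != 0);
    return 0;
}
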
+diff --git a/sound/soc/fsl/imx-audmux.c b/sound/soc/fsl/imx-audmux.c
+index dfa05d40b2764..a8e5e0f57faf9 100644
+--- a/sound/soc/fsl/imx-audmux.c
++++ b/sound/soc/fsl/imx-audmux.c
+@@ -298,7 +298,7 @@ static int imx_audmux_probe(struct platform_device *pdev)
+ audmux_clk = NULL;
+ }
+
+- audmux_type = (enum imx_audmux_type)of_device_get_match_data(&pdev->dev);
++ audmux_type = (uintptr_t)of_device_get_match_data(&pdev->dev);
+
+ switch (audmux_type) {
+ case IMX31_AUDMUX:
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 6f8efd838fcc8..4a8609b0d700d 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -17,6 +17,9 @@
+
+ #include "fsl_sai.h"
+
++#define IMX_CARD_MCLK_22P5792MHZ 22579200
++#define IMX_CARD_MCLK_24P576MHZ 24576000
++
+ enum codec_type {
+ CODEC_DUMMY = 0,
+ CODEC_AK5558 = 1,
+@@ -115,7 +118,7 @@ struct imx_card_data {
+ struct snd_soc_card card;
+ int num_dapm_routes;
+ u32 asrc_rate;
+- u32 asrc_format;
++ snd_pcm_format_t asrc_format;
+ };
+
+ static struct imx_akcodec_fs_mul ak4458_fs_mul[] = {
+@@ -353,9 +356,14 @@ static int imx_aif_hw_params(struct snd_pcm_substream *substream,
+ mclk_freq = akcodec_get_mclk_rate(substream, params, slots, slot_width);
+ else
+ mclk_freq = params_rate(params) * slots * slot_width;
+- /* Use the maximum freq from DSD512 (512*44100 = 22579200) */
+- if (format_is_dsd(params))
+- mclk_freq = 22579200;
++
++ if (format_is_dsd(params)) {
++ /* Use the maximum freq from DSD512 (512*44100 = 22579200) */
++ if (!(params_rate(params) % 11025))
++ mclk_freq = IMX_CARD_MCLK_22P5792MHZ;
++ else
++ mclk_freq = IMX_CARD_MCLK_24P576MHZ;
++ }
+
+ ret = snd_soc_dai_set_sysclk(cpu_dai, link_data->cpu_sysclk_id, mclk_freq,
+ SND_SOC_CLOCK_OUT);
+@@ -466,7 +474,7 @@ static int be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+
+ mask = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
+ snd_mask_none(mask);
+- snd_mask_set(mask, data->asrc_format);
++ snd_mask_set(mask, (__force unsigned int)data->asrc_format);
+
+ return 0;
+ }
+@@ -485,6 +493,7 @@ static int imx_card_parse_of(struct imx_card_data *data)
+ struct dai_link_data *link_data;
+ struct of_phandle_args args;
+ int ret, num_links;
++ u32 asrc_fmt = 0;
+ u32 width;
+
+ ret = snd_soc_of_parse_card_name(card, "model");
+@@ -631,7 +640,8 @@ static int imx_card_parse_of(struct imx_card_data *data)
+ goto err;
+ }
+
+- ret = of_property_read_u32(args.np, "fsl,asrc-format", &data->asrc_format);
++ ret = of_property_read_u32(args.np, "fsl,asrc-format", &asrc_fmt);
++ data->asrc_format = (__force snd_pcm_format_t)asrc_fmt;
+ if (ret) {
+ /* Fallback to old binding; translate to asrc_format */
+ ret = of_property_read_u32(args.np, "fsl,asrc-width", &width);
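
The imx-card MCLK hunk above stops hardcoding 22.5792 MHz for DSD and instead picks the master clock by sample-rate family: rates divisible by 11025 (44.1 kHz multiples) get 22579200 Hz, everything else (48 kHz multiples) gets 24576000 Hz. The selection in isolation:

#include <stdio.h>

#define MCLK_22P5792MHZ 22579200 /* 512 * 44100 */
#define MCLK_24P576MHZ  24576000 /* 512 * 48000 */

/* Choose the DSD master clock from the sample-rate family. */
static unsigned int dsd_mclk(unsigned int rate)
{
    return (rate % 11025) ? MCLK_24P576MHZ : MCLK_22P5792MHZ;
}

int main(void)
{
    printf("%u %u\n", dsd_mclk(44100), dsd_mclk(48000));
    /* 22579200 24576000 */
    return 0;
}
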
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 2b598af8feef8..b327372f2e4ae 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -158,8 +158,10 @@ static int asoc_simple_parse_dai(struct device_node *ep,
+ * if he unbinded CPU or Codec.
+ */
+ ret = snd_soc_get_dai_name(&args, &dlc->dai_name);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(node);
+ return ret;
++ }
+
+ dlc->of_node = node;
+
+diff --git a/sound/soc/generic/audio-graph-card2.c b/sound/soc/generic/audio-graph-card2.c
+index d34b29a49268e..a7144defb8fbe 100644
+--- a/sound/soc/generic/audio-graph-card2.c
++++ b/sound/soc/generic/audio-graph-card2.c
+@@ -229,7 +229,8 @@ enum graph_type {
+
+ static enum graph_type __graph_get_type(struct device_node *lnk)
+ {
+- struct device_node *np;
++ struct device_node *np, *parent_np;
++ enum graph_type ret;
+
+ /*
+ * target {
+@@ -240,19 +241,33 @@ static enum graph_type __graph_get_type(struct device_node *lnk)
+ * };
+ */
+ np = of_get_parent(lnk);
+- if (of_node_name_eq(np, "ports"))
+- np = of_get_parent(np);
++ if (of_node_name_eq(np, "ports")) {
++ parent_np = of_get_parent(np);
++ of_node_put(np);
++ np = parent_np;
++ }
++
++ if (of_node_name_eq(np, GRAPH_NODENAME_MULTI)) {
++ ret = GRAPH_MULTI;
++ goto out_put;
++ }
++
++ if (of_node_name_eq(np, GRAPH_NODENAME_DPCM)) {
++ ret = GRAPH_DPCM;
++ goto out_put;
++ }
+
+- if (of_node_name_eq(np, GRAPH_NODENAME_MULTI))
+- return GRAPH_MULTI;
++ if (of_node_name_eq(np, GRAPH_NODENAME_C2C)) {
++ ret = GRAPH_C2C;
++ goto out_put;
++ }
+
+- if (of_node_name_eq(np, GRAPH_NODENAME_DPCM))
+- return GRAPH_DPCM;
++ ret = GRAPH_NORMAL;
+
+- if (of_node_name_eq(np, GRAPH_NODENAME_C2C))
+- return GRAPH_C2C;
++out_put:
++ of_node_put(np);
++ return ret;
+
+- return GRAPH_NORMAL;
+ }
+
+ static enum graph_type graph_get_type(struct asoc_simple_priv *priv,
+@@ -430,8 +445,10 @@ static int asoc_simple_parse_dai(struct device_node *ep,
+ * if he unbinded CPU or Codec.
+ */
+ ret = snd_soc_get_dai_name(&args, &dlc->dai_name);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(node);
+ return ret;
++ }
+
+ dlc->of_node = node;
+
+@@ -856,7 +873,7 @@ int audio_graph2_link_c2c(struct asoc_simple_priv *priv,
+ struct device_node *port0, *port1, *ports;
+ struct device_node *codec0_port, *codec1_port;
+ struct device_node *ep0, *ep1;
+- u32 val;
++ u32 val = 0;
+ int ret = -EINVAL;
+
+ /*
+@@ -880,7 +897,8 @@ int audio_graph2_link_c2c(struct asoc_simple_priv *priv,
+ ports = of_get_parent(port0);
+ port1 = of_get_next_child(ports, lnk);
+
+- if (!of_get_property(ports, "rate", &val)) {
++ of_property_read_u32(ports, "rate", &val);
++ if (!val) {
+ struct device *dev = simple_priv_to_dev(priv);
+
+ dev_err(dev, "Codec2Codec needs rate settings\n");
+diff --git a/sound/soc/intel/avs/path.c b/sound/soc/intel/avs/path.c
+index 3d46dd5e5bc49..ce157a8d65520 100644
+--- a/sound/soc/intel/avs/path.c
++++ b/sound/soc/intel/avs/path.c
+@@ -449,35 +449,39 @@ static int avs_modext_create(struct avs_dev *adev, struct avs_path_module *mod)
+ return ret;
+ }
+
++static int avs_probe_create(struct avs_dev *adev, struct avs_path_module *mod)
++{
++ dev_err(adev->dev, "Probe module can't be instantiated by topology");
++ return -EINVAL;
++}
++
++struct avs_module_create {
++ guid_t *guid;
++ int (*create)(struct avs_dev *adev, struct avs_path_module *mod);
++};
++
++static struct avs_module_create avs_module_create[] = {
++ { &AVS_MIXIN_MOD_UUID, avs_modbase_create },
++ { &AVS_MIXOUT_MOD_UUID, avs_modbase_create },
++ { &AVS_KPBUFF_MOD_UUID, avs_modbase_create },
++ { &AVS_COPIER_MOD_UUID, avs_copier_create },
++ { &AVS_MICSEL_MOD_UUID, avs_micsel_create },
++ { &AVS_MUX_MOD_UUID, avs_mux_create },
++ { &AVS_UPDWMIX_MOD_UUID, avs_updown_mix_create },
++ { &AVS_SRCINTC_MOD_UUID, avs_src_create },
++ { &AVS_AEC_MOD_UUID, avs_aec_create },
++ { &AVS_ASRC_MOD_UUID, avs_asrc_create },
++ { &AVS_INTELWOV_MOD_UUID, avs_wov_create },
++ { &AVS_PROBE_MOD_UUID, avs_probe_create },
++};
++
+ static int avs_path_module_type_create(struct avs_dev *adev, struct avs_path_module *mod)
+ {
+ const guid_t *type = &mod->template->cfg_ext->type;
+
+- if (guid_equal(type, &AVS_MIXIN_MOD_UUID) ||
+- guid_equal(type, &AVS_MIXOUT_MOD_UUID) ||
+- guid_equal(type, &AVS_KPBUFF_MOD_UUID))
+- return avs_modbase_create(adev, mod);
+- if (guid_equal(type, &AVS_COPIER_MOD_UUID))
+- return avs_copier_create(adev, mod);
+- if (guid_equal(type, &AVS_MICSEL_MOD_UUID))
+- return avs_micsel_create(adev, mod);
+- if (guid_equal(type, &AVS_MUX_MOD_UUID))
+- return avs_mux_create(adev, mod);
+- if (guid_equal(type, &AVS_UPDWMIX_MOD_UUID))
+- return avs_updown_mix_create(adev, mod);
+- if (guid_equal(type, &AVS_SRCINTC_MOD_UUID))
+- return avs_src_create(adev, mod);
+- if (guid_equal(type, &AVS_AEC_MOD_UUID))
+- return avs_aec_create(adev, mod);
+- if (guid_equal(type, &AVS_ASRC_MOD_UUID))
+- return avs_asrc_create(adev, mod);
+- if (guid_equal(type, &AVS_INTELWOV_MOD_UUID))
+- return avs_wov_create(adev, mod);
+-
+- if (guid_equal(type, &AVS_PROBE_MOD_UUID)) {
+- dev_err(adev->dev, "Probe module can't be instantiated by topology");
+- return -EINVAL;
+- }
++ for (int i = 0; i < ARRAY_SIZE(avs_module_create); i++)
++ if (guid_equal(type, avs_module_create[i].guid))
++ return avs_module_create[i].create(adev, mod);
+
+ return avs_modext_create(adev, mod);
+ }
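
The avs/path.c rewrite above replaces a guid_equal() if-else ladder with a {guid, constructor} table and a loop, so adding a module type becomes one table row. The dispatch pattern by itself, with integer IDs standing in for GUIDs and stub constructors:

#include <stdio.h>
#include <stddef.h>

struct module;
typedef int (*create_fn)(struct module *mod);

static int modbase_create(struct module *mod) { (void)mod; return 0; }
static int copier_create(struct module *mod)  { (void)mod; return 0; }

static const struct {
    int type;          /* a guid_t in the kernel version */
    create_fn create;
} creators[] = {
    { 1, modbase_create }, /* mixin/mixout/kpbuff share one */
    { 2, modbase_create },
    { 3, copier_create },
};

static int module_create(int type, struct module *mod)
{
    for (size_t i = 0; i < sizeof(creators) / sizeof(creators[0]); i++)
        if (creators[i].type == type)
            return creators[i].create(mod);
    return -1; /* the kernel falls back to avs_modext_create() */
}

int main(void)
{
    printf("%d %d\n", module_create(3, NULL), module_create(9, NULL));
    /* 0 -1 */
    return 0;
}
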
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index 4a90a0a5d8315..cf4d3f059b403 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -434,6 +434,15 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ struct sof_hdmi_pcm *pcm;
+ int err;
+
++ if (sof_rt5682_quirk & SOF_MAX98373_SPEAKER_AMP_PRESENT) {
++ /* Disable Left and Right Spk pin after boot */
++ snd_soc_dapm_disable_pin(dapm, "Left Spk");
++ snd_soc_dapm_disable_pin(dapm, "Right Spk");
++ err = snd_soc_dapm_sync(dapm);
++ if (err < 0)
++ return err;
++ }
++
+ /* HDMI is not supported by SOF on Baytrail/CherryTrail */
+ if (is_legacy_cpu || !ctx->idisp_codec)
+ return 0;
+@@ -464,15 +473,6 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ return err;
+ }
+
+- if (sof_rt5682_quirk & SOF_MAX98373_SPEAKER_AMP_PRESENT) {
+- /* Disable Left and Right Spk pin after boot */
+- snd_soc_dapm_disable_pin(dapm, "Left Spk");
+- snd_soc_dapm_disable_pin(dapm, "Right Spk");
+- err = snd_soc_dapm_sync(dapm);
+- if (err < 0)
+- return err;
+- }
+-
+ return hdac_hdmi_jack_port_init(component, &card->dapm);
+ }
+
+diff --git a/sound/soc/mediatek/mt6797/mt6797-mt6351.c b/sound/soc/mediatek/mt6797/mt6797-mt6351.c
+index 496f32bcfb5e3..d2f6213a6bfcc 100644
+--- a/sound/soc/mediatek/mt6797/mt6797-mt6351.c
++++ b/sound/soc/mediatek/mt6797/mt6797-mt6351.c
+@@ -217,7 +217,8 @@ static int mt6797_mt6351_dev_probe(struct platform_device *pdev)
+ if (!codec_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_platform_node;
+ }
+ for_each_card_prelinks(card, i, dai_link) {
+ if (dai_link->codecs->name)
+@@ -230,6 +231,9 @@ static int mt6797_mt6351_dev_probe(struct platform_device *pdev)
+ dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ __func__, ret);
+
++ of_node_put(codec_node);
++put_platform_node:
++ of_node_put(platform_node);
+ return ret;
+ }
+
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+index 70bf312e855f6..8794720cea3a0 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+@@ -256,14 +256,16 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ if (!mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_node;
+ }
+ mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
+ of_parse_phandle(pdev->dev.of_node, "mediatek,audio-codec", 1);
+ if (!mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_node;
+ }
+ mt8173_rt5650_rt5676_codec_conf[0].dlc.of_node =
+ mt8173_rt5650_rt5676_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node;
+@@ -276,13 +278,15 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ if (!mt8173_rt5650_rt5676_dais[DAI_LINK_HDMI_I2S].codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_node;
+ }
+
+ card->dev = &pdev->dev;
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
++put_node:
+ of_node_put(platform_node);
+ return ret;
+ }
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650.c b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+index d1c94acb45169..e05f2b0231fe8 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+@@ -280,7 +280,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ if (!mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_platform_node;
+ }
+ mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
+ mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node;
+@@ -293,7 +294,7 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ dev_err(&pdev->dev,
+ "%s codec_capture_dai name fail %d\n",
+ __func__, ret);
+- return ret;
++ goto put_platform_node;
+ }
+ mt8173_rt5650_dais[DAI_LINK_CODEC_I2S].codecs[1].dai_name =
+ codec_capture_dai;
+@@ -315,12 +316,14 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ if (!mt8173_rt5650_dais[DAI_LINK_HDMI_I2S].codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_platform_node;
+ }
+ card->dev = &pdev->dev;
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
++put_platform_node:
+ of_node_put(platform_node);
+ return ret;
+ }
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index e6846ad2b5fa4..964eb07f46d6a 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -1090,6 +1090,7 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ dsp_of_node = of_parse_phandle(pdev->dev.of_node, "qcom,adsp", 0);
+ if (dsp_of_node) {
+ dev_err(dev, "DSP exists and holds audio resources\n");
++ of_node_put(dsp_of_node);
+ return -EBUSY;
+ }
+
+diff --git a/sound/soc/qcom/qdsp6/q6adm.c b/sound/soc/qcom/qdsp6/q6adm.c
+index 72c5719f1d253..a0678e8cf20a8 100644
+--- a/sound/soc/qcom/qdsp6/q6adm.c
++++ b/sound/soc/qcom/qdsp6/q6adm.c
+@@ -217,7 +217,7 @@ static struct q6copp *q6adm_alloc_copp(struct q6adm *adm, int port_idx)
+ idx = find_first_zero_bit(&adm->copp_bitmap[port_idx],
+ MAX_COPPS_PER_PORT);
+
+- if (idx > MAX_COPPS_PER_PORT)
++ if (idx >= MAX_COPPS_PER_PORT)
+ return ERR_PTR(-EBUSY);
+
+ c = kzalloc(sizeof(*c), GFP_ATOMIC);
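+
The one-character q6adm fix matters because find_first_zero_bit() returns the bitmap size, not size minus one, when every bit is set, so `idx > MAX_COPPS_PER_PORT` let an out-of-range index through. A self-contained illustration with a local stand-in for the kernel helper:

	#include <stdio.h>

	#define NBITS 8

	/* Stand-in for the kernel's find_first_zero_bit(): returns the index
	 * of the first clear bit, or nbits if the bitmap is completely set. */
	static unsigned int first_zero_bit(unsigned long map, unsigned int nbits)
	{
		for (unsigned int i = 0; i < nbits; i++)
			if (!(map & (1UL << i)))
				return i;
		return nbits;		/* "not found" sentinel equals nbits */
	}

	int main(void)
	{
		unsigned int idx = first_zero_bit(0xFFUL, NBITS);	/* all 8 bits set */

		/* idx == NBITS here, so the correct full-bitmap test is >= */
		printf("idx=%u full=%s\n", idx, idx >= NBITS ? "yes" : "no");
		return 0;
	}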
+diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c
+index bb0cf4244e007..edee02d7f100a 100644
+--- a/sound/soc/samsung/aries_wm8994.c
++++ b/sound/soc/samsung/aries_wm8994.c
+@@ -628,8 +628,10 @@ static int aries_audio_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ codec = of_get_child_by_name(dev->of_node, "codec");
+- if (!codec)
+- return -EINVAL;
++ if (!codec) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ for_each_card_prelinks(card, i, dai_link) {
+ dai_link->codecs->of_node = of_parse_phandle(codec,
+diff --git a/sound/soc/samsung/h1940_uda1380.c b/sound/soc/samsung/h1940_uda1380.c
+index 907266aee839f..fa45a54ab18f9 100644
+--- a/sound/soc/samsung/h1940_uda1380.c
++++ b/sound/soc/samsung/h1940_uda1380.c
+@@ -8,7 +8,7 @@
+ // Based on version from Arnaud Patard <arnaud.patard@rtp-net.org>
+
+ #include <linux/types.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/module.h>
+
+ #include <sound/soc.h>
+diff --git a/sound/soc/samsung/rx1950_uda1380.c b/sound/soc/samsung/rx1950_uda1380.c
+index ff3acc94a454c..abf28321f7d76 100644
+--- a/sound/soc/samsung/rx1950_uda1380.c
++++ b/sound/soc/samsung/rx1950_uda1380.c
+@@ -128,7 +128,7 @@ static int rx1950_startup(struct snd_pcm_substream *substream)
+ &hw_rates);
+ }
+
+-struct gpio_desc *gpiod_speaker_power;
++static struct gpio_desc *gpiod_speaker_power;
+
+ static int rx1950_spk_power(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+@@ -228,7 +228,7 @@ static int rx1950_probe(struct platform_device *pdev)
+ return devm_snd_soc_register_card(dev, &rx1950_asoc);
+ }
+
+-struct platform_driver rx1950_audio = {
++static struct platform_driver rx1950_audio = {
+ .driver = {
+ .name = "rx1950-audio",
+ .pm = &snd_soc_pm_ops,
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 9574f86dd4de2..46f0e8eb79b3f 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3433,26 +3433,26 @@ int snd_soc_of_get_dai_link_cpus(struct device *dev,
+ struct of_phandle_args args;
+ struct snd_soc_dai_link_component *component;
+ char *name;
+- int index, num_codecs, ret;
++ int index, num_cpus, ret;
+
+- /* Count the number of CODECs */
++ /* Count the number of CPUs */
+ name = "sound-dai";
+- num_codecs = of_count_phandle_with_args(of_node, name,
++ num_cpus = of_count_phandle_with_args(of_node, name,
+ "#sound-dai-cells");
+- if (num_codecs <= 0) {
+- if (num_codecs == -ENOENT)
++ if (num_cpus <= 0) {
++ if (num_cpus == -ENOENT)
+ dev_err(dev, "No 'sound-dai' property\n");
+ else
+ dev_err(dev, "Bad phandle in 'sound-dai'\n");
+- return num_codecs;
++ return num_cpus;
+ }
+ component = devm_kcalloc(dev,
+- num_codecs, sizeof(*component),
++ num_cpus, sizeof(*component),
+ GFP_KERNEL);
+ if (!component)
+ return -ENOMEM;
+ dai_link->cpus = component;
+- dai_link->num_cpus = num_codecs;
++ dai_link->num_cpus = num_cpus;
+
+ /* Parse the list */
+ for_each_link_cpus(dai_link, index, component) {
+@@ -3468,7 +3468,7 @@ int snd_soc_of_get_dai_link_cpus(struct device *dev,
+ }
+ return 0;
+ err:
+- snd_soc_of_put_dai_link_codecs(dai_link);
++ snd_soc_of_put_dai_link_cpus(dai_link);
+ dai_link->cpus = NULL;
+ dai_link->num_cpus = 0;
+ return ret;
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index 10740c55294dc..e97f50d5bcba1 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -1628,6 +1628,7 @@ static int sof_ipc3_control_load_bytes(struct snd_sof_dev *sdev, struct snd_sof_
+ return 0;
+ err:
+ kfree(scontrol->ipc_control_data);
++ scontrol->ipc_control_data = NULL;
+ return ret;
+ }
+
+diff --git a/sound/soc/sof/mediatek/mt8195/mt8195-loader.c b/sound/soc/sof/mediatek/mt8195/mt8195-loader.c
+index ed18d6379e922..ef2664c3cd47d 100644
+--- a/sound/soc/sof/mediatek/mt8195/mt8195-loader.c
++++ b/sound/soc/sof/mediatek/mt8195/mt8195-loader.c
+@@ -21,7 +21,7 @@ void sof_hifixdsp_boot_sequence(struct snd_sof_dev *sdev, u32 boot_addr)
+
+ /* pull high StatVectorSel to use AltResetVec (set bit4 to 1) */
+ snd_sof_dsp_update_bits(sdev, DSP_REG_BAR, DSP_RESET_SW,
+- DSP_RESET_SW, DSP_RESET_SW);
++ STATVECTOR_SEL, STATVECTOR_SEL);
+
+ /* toggle DReset & BReset */
+ /* pull high DReset & BReset */
+diff --git a/sound/soc/sof/sof-client-ipc-msg-injector.c b/sound/soc/sof/sof-client-ipc-msg-injector.c
+index 6bdfa527b7f76..752d5320680f1 100644
+--- a/sound/soc/sof/sof-client-ipc-msg-injector.c
++++ b/sound/soc/sof/sof-client-ipc-msg-injector.c
+@@ -181,7 +181,7 @@ static ssize_t sof_msg_inject_ipc4_dfs_write(struct file *file,
+ struct sof_client_dev *cdev = file->private_data;
+ struct sof_msg_inject_priv *priv = cdev->data;
+ struct sof_ipc4_msg *ipc4_msg = priv->tx_buffer;
+- ssize_t size;
++ size_t data_size;
+ int ret;
+
+ if (*ppos)
+@@ -191,25 +191,20 @@ static ssize_t sof_msg_inject_ipc4_dfs_write(struct file *file,
+ return -EINVAL;
+
+ /* copy the header first */
+- size = simple_write_to_buffer(&ipc4_msg->header_u64,
+- sizeof(ipc4_msg->header_u64),
+- ppos, buffer, count);
+- if (size < 0)
+- return size;
+- if (size != sizeof(ipc4_msg->header_u64))
++ if (copy_from_user(&ipc4_msg->header_u64, buffer,
++ sizeof(ipc4_msg->header_u64)))
+ return -EFAULT;
+
+- count -= size;
++ data_size = count - sizeof(ipc4_msg->header_u64);
++ if (data_size > priv->max_msg_size)
++ return -EINVAL;
++
+ /* Copy the payload */
+- size = simple_write_to_buffer(ipc4_msg->data_ptr,
+- priv->max_msg_size, ppos, buffer,
+- count);
+- if (size < 0)
+- return size;
+- if (size != count)
++ if (copy_from_user(ipc4_msg->data_ptr,
++ buffer + sizeof(ipc4_msg->header_u64), data_size))
+ return -EFAULT;
+
+- ipc4_msg->data_size = count;
++ ipc4_msg->data_size = data_size;
+
+ /* Initialize the reply storage */
+ ipc4_msg = priv->rx_buffer;
+@@ -221,9 +216,9 @@ static ssize_t sof_msg_inject_ipc4_dfs_write(struct file *file,
+
+ /* return the error code if test failed */
+ if (ret < 0)
+- size = ret;
++ return ret;
+
+- return size;
++ return count;
+ };
+
+ static int sof_msg_inject_dfs_release(struct inode *inode, struct file *file)
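+
The msg-injector rewrite above replaces two simple_write_to_buffer() calls with explicit copies: read the fixed-size header first, bound the payload length against the buffer before copying it, and report the full count on success. A userspace analogue of that shape (struct layout and limits are illustrative):

	#include <sys/types.h>
	#include <stdint.h>
	#include <string.h>
	#include <errno.h>

	#define MAX_MSG_SIZE 256

	struct msg {
		uint64_t header;
		size_t data_size;
		unsigned char data[MAX_MSG_SIZE];
	};

	/* Reject short or oversized writes up front, then copy header and
	 * payload separately, mirroring the fixed IPC4 write handler. */
	static ssize_t inject_write(struct msg *msg, const unsigned char *buf,
				    size_t count)
	{
		size_t data_size;

		if (count < sizeof(msg->header))
			return -EINVAL;		/* must at least hold a header */

		data_size = count - sizeof(msg->header);
		if (data_size > MAX_MSG_SIZE)
			return -EINVAL;		/* bound the payload before copying */

		memcpy(&msg->header, buf, sizeof(msg->header));
		memcpy(msg->data, buf + sizeof(msg->header), data_size);
		msg->data_size = data_size;

		return (ssize_t)count;		/* full write consumed */
	}

	int main(void)
	{
		struct msg m;
		unsigned char buf[16] = { 0 };	/* 8-byte header + 8-byte payload */

		return inject_write(&m, buf, sizeof(buf)) == 16 ? 0 : 1;
	}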
+diff --git a/sound/soc/sof/sof-priv.h b/sound/soc/sof/sof-priv.h
+index f0f3d72c0da73..f11f575fd1da2 100644
+--- a/sound/soc/sof/sof-priv.h
++++ b/sound/soc/sof/sof-priv.h
+@@ -378,8 +378,8 @@ struct sof_ipc_fw_tracing_ops {
+
+ /**
+ * struct sof_ipc_pm_ops - IPC-specific PM ops
+- * @ctx_save: Function pointer for context save
+- * @ctx_restore: Function pointer for context restore
++ * @ctx_save: Optional function pointer for context save
++ * @ctx_restore: Optional function pointer for context restore
+ */
+ struct sof_ipc_pm_ops {
+ int (*ctx_save)(struct snd_sof_dev *sdev);
+diff --git a/sound/usb/bcd2000/bcd2000.c b/sound/usb/bcd2000/bcd2000.c
+index cd4a0bc6d278f..7aec0a95c609a 100644
+--- a/sound/usb/bcd2000/bcd2000.c
++++ b/sound/usb/bcd2000/bcd2000.c
+@@ -348,7 +348,8 @@ static int bcd2000_init_midi(struct bcd2000 *bcd2k)
+ static void bcd2000_free_usb_related_resources(struct bcd2000 *bcd2k,
+ struct usb_interface *interface)
+ {
+- /* usb_kill_urb not necessary, urb is aborted automatically */
++ usb_kill_urb(bcd2k->midi_out_urb);
++ usb_kill_urb(bcd2k->midi_in_urb);
+
+ usb_free_urb(bcd2k->midi_out_urb);
+ usb_free_urb(bcd2k->midi_in_urb);
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 968d90caeefa0..168fd802d70bd 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1843,6 +1843,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x1397, 0x0507, /* Behringer UMC202HD */
++ QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x1397, 0x0508, /* Behringer UMC204HD */
+ QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x1397, 0x0509, /* Behringer UMC404HD */
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 01ce121c302df..11f9096407fc4 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -233,7 +233,7 @@ struct pt_regs___arm64 {
+ #define __PT_PARM5_REG a4
+ #define __PT_RET_REG ra
+ #define __PT_FP_REG s0
+-#define __PT_RC_REG a5
++#define __PT_RC_REG a0
+ #define __PT_SP_REG sp
+ #define __PT_IP_REG pc
+ /* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */
+diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
+index 927745b080141..23f5c46708f8f 100644
+--- a/tools/lib/bpf/gen_loader.c
++++ b/tools/lib/bpf/gen_loader.c
+@@ -533,7 +533,7 @@ void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
+ gen->attach_kind = kind;
+ ret = snprintf(gen->attach_target, sizeof(gen->attach_target), "%s%s",
+ prefix, attach_name);
+- if (ret == sizeof(gen->attach_target))
++ if (ret >= sizeof(gen->attach_target))
+ gen->error = -ENOSPC;
+ }
+
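The gen_loader fix relies on snprintf() semantics: the return value is the length the formatted string would have had, so truncation shows up as a value greater than or equal to the buffer size, which the old `== sizeof(...)` test could miss. The same correction is applied to the btf selftest later in this patch. A worked example:

	#include <stdio.h>

	int main(void)
	{
		char buf[8];
		int ret = snprintf(buf, sizeof(buf), "%s%s",
				   "fexit/", "very_long_attach_name");

		/* ret is the untruncated length (27 here), not the bytes
		 * written, so only ret >= sizeof(buf) detects truncation. */
		if (ret < 0 || (size_t)ret >= sizeof(buf))
			puts("truncated");
		return 0;
	}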
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index e89cc9c885b3c..266357b1dca13 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2398,6 +2398,37 @@ int parse_btf_map_def(const char *map_name, struct btf *btf,
+ return 0;
+ }
+
++static size_t adjust_ringbuf_sz(size_t sz)
++{
++ __u32 page_sz = sysconf(_SC_PAGE_SIZE);
++ __u32 mul;
++
++ /* if user forgot to set any size, make sure they see error */
++ if (sz == 0)
++ return 0;
++ /* Kernel expects BPF_MAP_TYPE_RINGBUF's max_entries to be
++ * a power-of-2 multiple of kernel's page size. If user diligently
++ * satisfied these conditions, pass the size through.
++ */
++ if ((sz % page_sz) == 0 && is_pow_of_2(sz / page_sz))
++ return sz;
++
++ /* Otherwise find closest (page_sz * power_of_2) product bigger than
++ * user-set size to satisfy both user size request and kernel
++ * requirements and substitute correct max_entries for map creation.
++ */
++ for (mul = 1; mul <= UINT_MAX / page_sz; mul <<= 1) {
++ if (mul * page_sz > sz)
++ return mul * page_sz;
++ }
++
++ /* if it's impossible to satisfy the conditions (i.e., user size is
++ * very close to UINT_MAX but is not a power-of-2 multiple of
++ * page_size) then just return original size and let kernel reject it
++ */
++ return sz;
++}
++
+ static void fill_map_from_def(struct bpf_map *map, const struct btf_map_def *def)
+ {
+ map->def.type = def->map_type;
+@@ -2411,6 +2442,10 @@ static void fill_map_from_def(struct bpf_map *map, const struct btf_map_def *def
+ map->btf_key_type_id = def->key_type_id;
+ map->btf_value_type_id = def->value_type_id;
+
++ /* auto-adjust BPF ringbuf map max_entries to be a multiple of page size */
++ if (map->def.type == BPF_MAP_TYPE_RINGBUF)
++ map->def.max_entries = adjust_ringbuf_sz(map->def.max_entries);
++
+ if (def->parts & MAP_DEF_MAP_TYPE)
+ pr_debug("map '%s': found type = %u.\n", map->name, def->map_type);
+
+@@ -4327,7 +4362,7 @@ int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
+ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ {
+ struct bpf_map_info info = {};
+- __u32 len = sizeof(info);
++ __u32 len = sizeof(info), name_len;
+ int new_fd, err;
+ char *new_name;
+
+@@ -4337,7 +4372,12 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ if (err)
+ return libbpf_err(err);
+
+- new_name = strdup(info.name);
++ name_len = strlen(info.name);
++ if (name_len == BPF_OBJ_NAME_LEN - 1 && strncmp(map->name, info.name, name_len) == 0)
++ new_name = strdup(map->name);
++ else
++ new_name = strdup(info.name);
++
+ if (!new_name)
+ return libbpf_err(-errno);
+
+@@ -4396,9 +4436,15 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
+
+ int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
+ {
+- if (map->fd >= 0)
++ if (map->obj->loaded)
+ return libbpf_err(-EBUSY);
++
+ map->def.max_entries = max_entries;
++
++ /* auto-adjust BPF ringbuf map max_entries to be a multiple of page size */
++ if (map->def.type == BPF_MAP_TYPE_RINGBUF)
++ map->def.max_entries = adjust_ringbuf_sz(map->def.max_entries);
++
+ return 0;
+ }
+
+@@ -4943,42 +4989,6 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+
+ static void bpf_map__destroy(struct bpf_map *map);
+
+-static bool is_pow_of_2(size_t x)
+-{
+- return x && (x & (x - 1));
+-}
+-
+-static size_t adjust_ringbuf_sz(size_t sz)
+-{
+- __u32 page_sz = sysconf(_SC_PAGE_SIZE);
+- __u32 mul;
+-
+- /* if user forgot to set any size, make sure they see error */
+- if (sz == 0)
+- return 0;
+- /* Kernel expects BPF_MAP_TYPE_RINGBUF's max_entries to be
+- * a power-of-2 multiple of kernel's page size. If user diligently
+- * satisified these conditions, pass the size through.
+- */
+- if ((sz % page_sz) == 0 && is_pow_of_2(sz / page_sz))
+- return sz;
+-
+- /* Otherwise find closest (page_sz * power_of_2) product bigger than
+- * user-set size to satisfy both user size request and kernel
+- * requirements and substitute correct max_entries for map creation.
+- */
+- for (mul = 1; mul <= UINT_MAX / page_sz; mul <<= 1) {
+- if (mul * page_sz > sz)
+- return mul * page_sz;
+- }
+-
+- /* if it's impossible to satisfy the conditions (i.e., user size is
+- * very close to UINT_MAX but is not a power-of-2 multiple of
+- * page_size) then just return original size and let kernel reject it
+- */
+- return sz;
+-}
+-
+ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
+ {
+ LIBBPF_OPTS(bpf_map_create_opts, create_attr);
+@@ -5017,9 +5027,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ }
+
+ switch (def->type) {
+- case BPF_MAP_TYPE_RINGBUF:
+- map->def.max_entries = adjust_ringbuf_sz(map->def.max_entries);
+- /* fallthrough */
+ case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
+ case BPF_MAP_TYPE_CGROUP_ARRAY:
+ case BPF_MAP_TYPE_STACK_TRACE:
+@@ -10988,43 +10995,6 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
+ return pfd;
+ }
+
+-/* uprobes deal in relative offsets; subtract the base address associated with
+- * the mapped binary. See Documentation/trace/uprobetracer.rst for more
+- * details.
+- */
+-static long elf_find_relative_offset(const char *filename, Elf *elf, long addr)
+-{
+- size_t n;
+- int i;
+-
+- if (elf_getphdrnum(elf, &n)) {
+- pr_warn("elf: failed to find program headers for '%s': %s\n", filename,
+- elf_errmsg(-1));
+- return -ENOENT;
+- }
+-
+- for (i = 0; i < n; i++) {
+- int seg_start, seg_end, seg_offset;
+- GElf_Phdr phdr;
+-
+- if (!gelf_getphdr(elf, i, &phdr)) {
+- pr_warn("elf: failed to get program header %d from '%s': %s\n", i, filename,
+- elf_errmsg(-1));
+- return -ENOENT;
+- }
+- if (phdr.p_type != PT_LOAD || !(phdr.p_flags & PF_X))
+- continue;
+-
+- seg_start = phdr.p_vaddr;
+- seg_end = seg_start + phdr.p_memsz;
+- seg_offset = phdr.p_offset;
+- if (addr >= seg_start && addr < seg_end)
+- return addr - seg_start + seg_offset;
+- }
+- pr_warn("elf: failed to find prog header containing 0x%lx in '%s'\n", addr, filename);
+- return -ENOENT;
+-}
+-
+ /* Return next ELF section of sh_type after scn, or first of that type if scn is NULL. */
+ static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn)
+ {
+@@ -11111,6 +11081,8 @@ static long elf_find_func_offset(const char *binary_path, const char *name)
+ for (idx = 0; idx < nr_syms; idx++) {
+ int curr_bind;
+ GElf_Sym sym;
++ Elf_Scn *sym_scn;
++ GElf_Shdr sym_sh;
+
+ if (!gelf_getsym(symbols, idx, &sym))
+ continue;
+@@ -11148,12 +11120,28 @@ static long elf_find_func_offset(const char *binary_path, const char *name)
+ continue;
+ }
+ }
+- ret = sym.st_value;
++
++ /* Transform symbol's virtual address (absolute for
++ * binaries and relative for shared libs) into file
++ * offset, which is what kernel is expecting for
++ * uprobe/uretprobe attachment.
++ * See Documentation/trace/uprobetracer.rst for more
++ * details.
++ * This is done by looking up symbol's containing
++ * section's header and using it's virtual address
++ * (sh_addr) and corresponding file offset (sh_offset)
++ * to transform sym.st_value (virtual address) into
++ * desired final file offset.
++ */
++ sym_scn = elf_getscn(elf, sym.st_shndx);
++ if (!sym_scn)
++ continue;
++ if (!gelf_getshdr(sym_scn, &sym_sh))
++ continue;
++
++ ret = sym.st_value - sym_sh.sh_addr + sym_sh.sh_offset;
+ last_bind = curr_bind;
+ }
+- /* For binaries that are not shared libraries, we need relative offset */
+- if (ret > 0 && !is_shared_lib)
+- ret = elf_find_relative_offset(binary_path, elf, ret);
+ if (ret > 0)
+ break;
+ }
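+
The adjust_ringbuf_sz() helper added above (and hoisted out of bpf_object__create_map()) rounds a requested BPF_MAP_TYPE_RINGBUF size up to the nearest power-of-2 multiple of the page size, which is what the kernel demands for max_entries. A standalone version with worked values, assuming 4 KiB pages:

	#include <stdio.h>
	#include <stdint.h>
	#include <limits.h>

	static int is_pow_of_2(size_t x)
	{
		return x && (x & (x - 1)) == 0;
	}

	/* Round sz up to the nearest page_sz * 2^n, mirroring adjust_ringbuf_sz(). */
	static size_t round_ringbuf_sz(size_t sz, uint32_t page_sz)
	{
		if (sz == 0)
			return 0;			/* let the kernel reject it */
		if ((sz % page_sz) == 0 && is_pow_of_2(sz / page_sz))
			return sz;			/* already valid, pass through */
		for (uint32_t mul = 1; mul <= UINT_MAX / page_sz; mul <<= 1)
			if ((size_t)mul * page_sz > sz)
				return (size_t)mul * page_sz;
		return sz;				/* unsatisfiable, kernel rejects */
	}

	int main(void)
	{
		/* 5000 -> 8192 (2 pages); 16384 stays (already 4 pages, pow-of-2) */
		printf("%zu %zu\n", round_ringbuf_sz(5000, 4096),
		       round_ringbuf_sz(16384, 4096));
		return 0;
	}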
+diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
+index 4abdbe2fea9d7..230ac5699c3d9 100644
+--- a/tools/lib/bpf/libbpf_internal.h
++++ b/tools/lib/bpf/libbpf_internal.h
+@@ -109,9 +109,9 @@ static inline bool str_has_sfx(const char *str, const char *sfx)
+ size_t str_len = strlen(str);
+ size_t sfx_len = strlen(sfx);
+
+- if (sfx_len <= str_len)
+- return strcmp(str + str_len - sfx_len, sfx);
+- return false;
++ if (sfx_len > str_len)
++ return false;
++ return strcmp(str + str_len - sfx_len, sfx) == 0;
+ }
+
+ /* Symbol versioning is different between static and shared library.
+@@ -580,4 +580,9 @@ struct bpf_link * usdt_manager_attach_usdt(struct usdt_manager *man,
+ const char *usdt_provider, const char *usdt_name,
+ __u64 usdt_cookie);
+
++static inline bool is_pow_of_2(size_t x)
++{
++ return x && (x & (x - 1)) == 0;
++}
++
+ #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
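+
Two of the helpers touched above are easy to get backwards: x & (x - 1) is zero exactly for powers of two, so the predicate needs the `== 0` that the removed libbpf.c copy lacked, and strcmp() returns 0 on a match, so a suffix test must compare against 0 and first reject suffixes longer than the string. A quick check of both, mirroring the fixed versions:

	#include <stdio.h>
	#include <string.h>
	#include <stdbool.h>

	static bool is_pow_of_2(size_t x)
	{
		return x && (x & (x - 1)) == 0;	/* x-1 flips the lowest set bit */
	}

	static bool str_has_sfx(const char *str, const char *sfx)
	{
		size_t str_len = strlen(str), sfx_len = strlen(sfx);

		if (sfx_len > str_len)
			return false;	/* suffix can't be longer than the string */
		return strcmp(str + str_len - sfx_len, sfx) == 0;
	}

	int main(void)
	{
		printf("%d %d %d\n", is_pow_of_2(8), is_pow_of_2(12), is_pow_of_2(0));
		printf("%d %d\n", str_has_sfx("libbpf.so", ".so"),
		       str_has_sfx(".so", "libbpf.so"));
		return 0;	/* expect: 1 0 0 / 1 0 */
	}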
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 9aa016fb55aa6..85c0fddf55d12 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -697,11 +697,6 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+ return err;
+ }
+
+-static bool is_pow_of_2(size_t x)
+-{
+- return x && (x & (x - 1)) == 0;
+-}
+-
+ static int linker_sanity_check_elf(struct src_obj *obj)
+ {
+ struct src_sec *sec;
+diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
+index f1c9339cfbbc2..5159207cbfd9c 100644
+--- a/tools/lib/bpf/usdt.c
++++ b/tools/lib/bpf/usdt.c
+@@ -441,7 +441,7 @@ static int parse_elf_segs(Elf *elf, const char *path, struct elf_seg **segs, siz
+ return 0;
+ }
+
+-static int parse_lib_segs(int pid, const char *lib_path, struct elf_seg **segs, size_t *seg_cnt)
++static int parse_vma_segs(int pid, const char *lib_path, struct elf_seg **segs, size_t *seg_cnt)
+ {
+ char path[PATH_MAX], line[PATH_MAX], mode[16];
+ size_t seg_start, seg_end, seg_off;
+@@ -531,35 +531,40 @@ err_out:
+ return err;
+ }
+
+-static struct elf_seg *find_elf_seg(struct elf_seg *segs, size_t seg_cnt, long addr, bool relative)
++static struct elf_seg *find_elf_seg(struct elf_seg *segs, size_t seg_cnt, long virtaddr)
+ {
+ struct elf_seg *seg;
+ int i;
+
+- if (relative) {
+- /* for shared libraries, address is relative offset and thus
+- * should be fall within logical offset-based range of
+- * [offset_start, offset_end)
+- */
+- for (i = 0, seg = segs; i < seg_cnt; i++, seg++) {
+- if (seg->offset <= addr && addr < seg->offset + (seg->end - seg->start))
+- return seg;
+- }
+- } else {
+- /* for binaries, address is absolute and thus should be within
+- * absolute address range of [seg_start, seg_end)
+- */
+- for (i = 0, seg = segs; i < seg_cnt; i++, seg++) {
+- if (seg->start <= addr && addr < seg->end)
+- return seg;
+- }
++ /* for ELF binaries (both executables and shared libraries), we are
++ * given a virtual address (absolute for executables, relative for
++ * libraries) which should fall within the address range of [seg_start, seg_end)
++ */
++ for (i = 0, seg = segs; i < seg_cnt; i++, seg++) {
++ if (seg->start <= virtaddr && virtaddr < seg->end)
++ return seg;
+ }
++ return NULL;
++}
+
++static struct elf_seg *find_vma_seg(struct elf_seg *segs, size_t seg_cnt, long offset)
++{
++ struct elf_seg *seg;
++ int i;
++
++ /* for VMA segments from /proc/<pid>/maps file, provided "address" is
++ * actually a file offset, so should be fall within logical
++ * offset-based range of [offset_start, offset_end)
++ */
++ for (i = 0, seg = segs; i < seg_cnt; i++, seg++) {
++ if (seg->offset <= offset && offset < seg->offset + (seg->end - seg->start))
++ return seg;
++ }
+ return NULL;
+ }
+
+-static int parse_usdt_note(Elf *elf, const char *path, long base_addr,
+- GElf_Nhdr *nhdr, const char *data, size_t name_off, size_t desc_off,
++static int parse_usdt_note(Elf *elf, const char *path, GElf_Nhdr *nhdr,
++ const char *data, size_t name_off, size_t desc_off,
+ struct usdt_note *usdt_note);
+
+ static int parse_usdt_spec(struct usdt_spec *spec, const struct usdt_note *note, __u64 usdt_cookie);
+@@ -568,8 +573,8 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ const char *usdt_provider, const char *usdt_name, __u64 usdt_cookie,
+ struct usdt_target **out_targets, size_t *out_target_cnt)
+ {
+- size_t off, name_off, desc_off, seg_cnt = 0, lib_seg_cnt = 0, target_cnt = 0;
+- struct elf_seg *segs = NULL, *lib_segs = NULL;
++ size_t off, name_off, desc_off, seg_cnt = 0, vma_seg_cnt = 0, target_cnt = 0;
++ struct elf_seg *segs = NULL, *vma_segs = NULL;
+ struct usdt_target *targets = NULL, *target;
+ long base_addr = 0;
+ Elf_Scn *notes_scn, *base_scn;
+@@ -613,8 +618,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ struct elf_seg *seg = NULL;
+ void *tmp;
+
+- err = parse_usdt_note(elf, path, base_addr, &nhdr,
+- data->d_buf, name_off, desc_off, ¬e);
++ err = parse_usdt_note(elf, path, &nhdr, data->d_buf, name_off, desc_off, ¬e);
+ if (err)
+ goto err_out;
+
+@@ -654,30 +658,29 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ usdt_rel_ip += base_addr - note.base_addr;
+ }
+
+- if (ehdr.e_type == ET_EXEC) {
+- /* When attaching uprobes (which what USDTs basically
+- * are) kernel expects a relative IP to be specified,
+- * so if we are attaching to an executable ELF binary
+- * (i.e., not a shared library), we need to calculate
+- * proper relative IP based on ELF's load address
+- */
+- seg = find_elf_seg(segs, seg_cnt, usdt_abs_ip, false /* relative */);
+- if (!seg) {
+- err = -ESRCH;
+- pr_warn("usdt: failed to find ELF program segment for '%s:%s' in '%s' at IP 0x%lx\n",
+- usdt_provider, usdt_name, path, usdt_abs_ip);
+- goto err_out;
+- }
+- if (!seg->is_exec) {
+- err = -ESRCH;
+- pr_warn("usdt: matched ELF binary '%s' segment [0x%lx, 0x%lx) for '%s:%s' at IP 0x%lx is not executable\n",
+- path, seg->start, seg->end, usdt_provider, usdt_name,
+- usdt_abs_ip);
+- goto err_out;
+- }
++ /* When attaching uprobes (which is what USDTs basically are)
++ * kernel expects file offset to be specified, not a relative
++ * virtual address, so we need to translate virtual address to
++ * file offset, for both ET_EXEC and ET_DYN binaries.
++ */
++ seg = find_elf_seg(segs, seg_cnt, usdt_abs_ip);
++ if (!seg) {
++ err = -ESRCH;
++ pr_warn("usdt: failed to find ELF program segment for '%s:%s' in '%s' at IP 0x%lx\n",
++ usdt_provider, usdt_name, path, usdt_abs_ip);
++ goto err_out;
++ }
++ if (!seg->is_exec) {
++ err = -ESRCH;
++ pr_warn("usdt: matched ELF binary '%s' segment [0x%lx, 0x%lx) for '%s:%s' at IP 0x%lx is not executable\n",
++ path, seg->start, seg->end, usdt_provider, usdt_name,
++ usdt_abs_ip);
++ goto err_out;
++ }
++ /* translate from virtual address to file offset */
++ usdt_rel_ip = usdt_abs_ip - seg->start + seg->offset;
+
+- usdt_rel_ip = usdt_abs_ip - (seg->start - seg->offset);
+- } else if (!man->has_bpf_cookie) { /* ehdr.e_type == ET_DYN */
++ if (ehdr.e_type == ET_DYN && !man->has_bpf_cookie) {
+ /* If we don't have BPF cookie support but need to
+ * attach to a shared library, we'll need to know and
+ * record absolute addresses of attach points due to
+@@ -697,9 +700,9 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ goto err_out;
+ }
+
+- /* lib_segs are lazily initialized only if necessary */
+- if (lib_seg_cnt == 0) {
+- err = parse_lib_segs(pid, path, &lib_segs, &lib_seg_cnt);
++ /* vma_segs are lazily initialized only if necessary */
++ if (vma_seg_cnt == 0) {
++ err = parse_vma_segs(pid, path, &vma_segs, &vma_seg_cnt);
+ if (err) {
+ pr_warn("usdt: failed to get memory segments in PID %d for shared library '%s': %d\n",
+ pid, path, err);
+@@ -707,7 +710,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ }
+ }
+
+- seg = find_elf_seg(lib_segs, lib_seg_cnt, usdt_rel_ip, true /* relative */);
++ seg = find_vma_seg(vma_segs, vma_seg_cnt, usdt_rel_ip);
+ if (!seg) {
+ err = -ESRCH;
+ pr_warn("usdt: failed to find shared lib memory segment for '%s:%s' in '%s' at relative IP 0x%lx\n",
+@@ -715,7 +718,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ goto err_out;
+ }
+
+- usdt_abs_ip = seg->start + (usdt_rel_ip - seg->offset);
++ usdt_abs_ip = seg->start - seg->offset + usdt_rel_ip;
+ }
+
+ pr_debug("usdt: probe for '%s:%s' in %s '%s': addr 0x%lx base 0x%lx (resolved abs_ip 0x%lx rel_ip 0x%lx) args '%s' in segment [0x%lx, 0x%lx) at offset 0x%lx\n",
+@@ -723,7 +726,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ note.loc_addr, note.base_addr, usdt_abs_ip, usdt_rel_ip, note.args,
+ seg ? seg->start : 0, seg ? seg->end : 0, seg ? seg->offset : 0);
+
+- /* Adjust semaphore address to be a relative offset */
++ /* Adjust semaphore address to be a file offset */
+ if (note.sema_addr) {
+ if (!man->has_sema_refcnt) {
+ pr_warn("usdt: kernel doesn't support USDT semaphore refcounting for '%s:%s' in '%s'\n",
+@@ -732,7 +735,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ goto err_out;
+ }
+
+- seg = find_elf_seg(segs, seg_cnt, note.sema_addr, false /* relative */);
++ seg = find_elf_seg(segs, seg_cnt, note.sema_addr);
+ if (!seg) {
+ err = -ESRCH;
+ pr_warn("usdt: failed to find ELF loadable segment with semaphore of '%s:%s' in '%s' at 0x%lx\n",
+@@ -747,7 +750,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ goto err_out;
+ }
+
+- usdt_sema_off = note.sema_addr - (seg->start - seg->offset);
++ usdt_sema_off = note.sema_addr - seg->start + seg->offset;
+
+ pr_debug("usdt: sema for '%s:%s' in %s '%s': addr 0x%lx base 0x%lx (resolved 0x%lx) in segment [0x%lx, 0x%lx] at offset 0x%lx\n",
+ usdt_provider, usdt_name, ehdr.e_type == ET_EXEC ? "exec" : "lib ",
+@@ -770,7 +773,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ target->rel_ip = usdt_rel_ip;
+ target->sema_off = usdt_sema_off;
+
+- /* notes->args references strings from Elf itself, so they can
++ /* notes.args references strings from Elf itself, so they can
+ * be referenced safely until elf_end() call
+ */
+ target->spec_str = note.args;
+@@ -788,7 +791,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+
+ err_out:
+ free(segs);
+- free(lib_segs);
++ free(vma_segs);
+ if (err < 0)
+ free(targets);
+ return err;
+@@ -1089,8 +1092,8 @@ err_out:
+ /* Parse out USDT ELF note from '.note.stapsdt' section.
+ * Logic inspired by perf's code.
+ */
+-static int parse_usdt_note(Elf *elf, const char *path, long base_addr,
+- GElf_Nhdr *nhdr, const char *data, size_t name_off, size_t desc_off,
++static int parse_usdt_note(Elf *elf, const char *path, GElf_Nhdr *nhdr,
++ const char *data, size_t name_off, size_t desc_off,
+ struct usdt_note *note)
+ {
+ const char *provider, *name, *args;
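+
The usdt.c rework above is built around one translation used in both directions: inside an ELF load segment, file offset = virtual address - segment start + segment file offset, and the VMA lookup inverts it. A tiny sketch of both conversions over an illustrative segment:

	#include <stdio.h>

	struct seg {
		unsigned long start;	/* segment virtual start address */
		unsigned long end;	/* segment virtual end address */
		unsigned long offset;	/* segment offset within the file */
	};

	/* virtual address -> file offset, as done for uprobe attach points */
	static unsigned long vaddr_to_off(const struct seg *s, unsigned long vaddr)
	{
		return vaddr - s->start + s->offset;
	}

	/* file offset -> virtual address, as done with /proc/<pid>/maps segments */
	static unsigned long off_to_vaddr(const struct seg *s, unsigned long off)
	{
		return s->start - s->offset + off;
	}

	int main(void)
	{
		struct seg text = { 0x401000, 0x405000, 0x1000 };	/* illustrative */
		unsigned long ip = 0x401a2c;

		printf("off=0x%lx back=0x%lx\n", vaddr_to_off(&text, ip),
		       off_to_vaddr(&text, vaddr_to_off(&text, ip)));
		return 0;
	}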
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index af136f73b09d0..67dc010e9fe3b 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -1147,8 +1147,6 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
+ goto out_mmap_tx;
+ }
+
+- ctx->prog_fd = -1;
+-
+ if (!(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
+ err = __xsk_setup_xdp_prog(xsk, NULL);
+ if (err)
+@@ -1229,7 +1227,10 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+
+ ctx = xsk->ctx;
+ umem = ctx->umem;
+- if (ctx->prog_fd != -1) {
++
++ xsk_put_ctx(ctx, true);
++
++ if (!ctx->refcount) {
+ xsk_delete_bpf_maps(xsk);
+ close(ctx->prog_fd);
+ if (ctx->has_bpf_link)
+@@ -1248,8 +1249,6 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ }
+ }
+
+- xsk_put_ctx(ctx, true);
+-
+ umem->refcount--;
+ /* Do not close an fd that also has an associated umem connected
+ * to it.
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index d2ecd4d296243..86f838c5661ee 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -1685,12 +1685,6 @@ static int add_default_attributes(void)
+ { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
+ { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },
+
+-};
+- struct perf_event_attr default_sw_attrs[] = {
+- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
+- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
+- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
+- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
+ };
+
+ /*
+@@ -1947,30 +1941,6 @@ setup_metrics:
+ }
+
+ if (!evsel_list->core.nr_entries) {
+- if (perf_pmu__has_hybrid()) {
+- struct parse_events_error errinfo;
+- const char *hybrid_str = "cycles,instructions,branches,branch-misses";
+-
+- if (target__has_cpu(&target))
+- default_sw_attrs[0].config = PERF_COUNT_SW_CPU_CLOCK;
+-
+- if (evlist__add_default_attrs(evsel_list,
+- default_sw_attrs) < 0) {
+- return -1;
+- }
+-
+- parse_events_error__init(&errinfo);
+- err = parse_events(evsel_list, hybrid_str, &errinfo);
+- if (err) {
+- fprintf(stderr,
+- "Cannot set up hybrid events %s: %d\n",
+- hybrid_str, err);
+- parse_events_error__print(&errinfo, hybrid_str);
+- }
+- parse_events_error__exit(&errinfo);
+- return err ? -1 : 0;
+- }
+-
+ if (target__has_cpu(&target))
+ default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
+
+diff --git a/tools/perf/tests/shell/stat+csv_output.sh b/tools/perf/tests/shell/stat+csv_output.sh
+index 38c26f3ef4c15..eb5196f58190e 100755
+--- a/tools/perf/tests/shell/stat+csv_output.sh
++++ b/tools/perf/tests/shell/stat+csv_output.sh
+@@ -8,7 +8,8 @@ set -e
+
+ function commachecker()
+ {
+- local -i cnt=0 exp=0
++ local -i cnt=0
++ local exp=0
+
+ case "$1"
+ in "--no-args") exp=6
+@@ -17,7 +18,7 @@ function commachecker()
+ ;; "--interval") exp=7
+ ;; "--per-thread") exp=7
+ ;; "--system-wide-no-aggr") exp=7
+- [ $(uname -m) = "s390x" ] && exp=6
++ [ $(uname -m) = "s390x" ] && exp='^[6-7]$'
+ ;; "--per-core") exp=8
+ ;; "--per-socket") exp=8
+ ;; "--per-node") exp=8
+@@ -34,7 +35,7 @@ function commachecker()
+ x=$(echo $line | tr -d -c ',')
+ cnt="${#x}"
+ # echo $line $cnt
+- [ "$cnt" -ne "$exp" ] && {
++ [[ ! "$cnt" =~ $exp ]] && {
+ echo "wrong number of fields. expected $exp in $line" 1>&2
+ exit 1;
+ }
+diff --git a/tools/perf/util/dsos.c b/tools/perf/util/dsos.c
+index b97366f77bbf7..2bd23e4cf19ef 100644
+--- a/tools/perf/util/dsos.c
++++ b/tools/perf/util/dsos.c
+@@ -23,8 +23,19 @@ static int __dso_id__cmp(struct dso_id *a, struct dso_id *b)
+ if (a->ino > b->ino) return -1;
+ if (a->ino < b->ino) return 1;
+
+- if (a->ino_generation > b->ino_generation) return -1;
+- if (a->ino_generation < b->ino_generation) return 1;
++ /*
++ * Synthesized MMAP events have zero ino_generation, avoid comparing
++ * them with MMAP events with actual ino_generation.
++ *
++ * I found it harmful because the mismatch resulted in a new
++ * dso that did not have a build ID whereas the original dso did have a
++ * build ID. The build ID was essential because the object was not found
++ * otherwise. - Adrian
++ */
++ if (a->ino_generation && b->ino_generation) {
++ if (a->ino_generation > b->ino_generation) return -1;
++ if (a->ino_generation < b->ino_generation) return 1;
++ }
+
+ return 0;
+ }
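+
The dsos.c comparator now only trusts ino_generation when both sides have one, since synthesized MMAP events carry zero there and would otherwise never match the dso that has a real build ID. A sketch of a comparator field that is skipped when either side lacks it (types simplified):

	/* Compare two ids, treating generation 0 as "unknown": an unknown
	 * generation matches anything, mirroring the __dso_id__cmp() change.
	 * The inverted -1/1 ordering follows the perf comparator above. */
	struct id {
		unsigned long ino;
		unsigned long ino_generation;
	};

	static int id_cmp(const struct id *a, const struct id *b)
	{
		if (a->ino != b->ino)
			return a->ino > b->ino ? -1 : 1;
		if (a->ino_generation && b->ino_generation &&
		    a->ino_generation != b->ino_generation)
			return a->ino_generation > b->ino_generation ? -1 : 1;
		return 0;	/* equal, or generation unknown on one side */
	}

	int main(void)
	{
		struct id real = { 42, 7 }, synth = { 42, 0 };

		return id_cmp(&real, &synth);	/* 0: treated as the same dso */
	}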
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index aed49806a09ba..953338b9e887e 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -30,7 +30,11 @@
+
+ #define BUILD_ID_URANDOM /* different uuid for each run */
+
+-#ifdef HAVE_LIBCRYPTO
++// FIXME, remove this and fix the deprecation warnings before its removed and
++// We'll break for good here...
++#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
++
++#ifdef HAVE_LIBCRYPTO_SUPPORT
+
+ #define BUILD_ID_MD5
+ #undef BUILD_ID_SHA /* does not seem to work well when linked with Java */
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index b3be5b1d9dbb0..75bec32d4f571 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1305,16 +1305,29 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+
+ if (elf_read_program_header(syms_ss->elf,
+ (u64)sym.st_value, &phdr)) {
+- pr_warning("%s: failed to find program header for "
++ pr_debug4("%s: failed to find program header for "
+ "symbol: %s st_value: %#" PRIx64 "\n",
+ __func__, elf_name, (u64)sym.st_value);
+- continue;
++ pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
++ "sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n",
++ __func__, (u64)sym.st_value, (u64)shdr.sh_addr,
++ (u64)shdr.sh_offset);
++ /*
++ * Failed to find a program header: fall back
++ * to shdr.sh_addr and shdr.sh_offset to
++ * compute the symbol's file address. This is
++ * not necessary for a normal C ELF file, but
++ * we still need to handle Java JIT symbols
++ * this way.
++ */
++ sym.st_value -= shdr.sh_addr - shdr.sh_offset;
++ } else {
++ pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
++ "p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
++ __func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
++ (u64)phdr.p_offset);
++ sym.st_value -= phdr.p_vaddr - phdr.p_offset;
+ }
+- pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
+- "p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
+- __func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
+- (u64)phdr.p_offset);
+- sym.st_value -= phdr.p_vaddr - phdr.p_offset;
+ }
+
+ demangled = demangle_sym(dso, kmodule, elf_name);
+diff --git a/tools/power/x86/intel-speed-select/isst-daemon.c b/tools/power/x86/intel-speed-select/isst-daemon.c
+index dd372924bc826..d0400c6684ba9 100644
+--- a/tools/power/x86/intel-speed-select/isst-daemon.c
++++ b/tools/power/x86/intel-speed-select/isst-daemon.c
+@@ -41,7 +41,7 @@ void process_level_change(int cpu)
+ time_t tm;
+ int ret;
+
+- if (pkg_id >= MAX_PACKAGE_COUNT || die_id > MAX_DIE_PER_PACKAGE) {
++ if (pkg_id >= MAX_PACKAGE_COUNT || die_id >= MAX_DIE_PER_PACKAGE) {
+ debug_printf("Invalid package/die info for cpu:%d\n", cpu);
+ return;
+ }
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index ede31a4287a07..2e9a751af260a 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -2035,9 +2035,9 @@ int get_core_throt_cnt(int cpu, unsigned long long *cnt)
+ if (!fp)
+ return -1;
+ ret = fscanf(fp, "%lld", &tmp);
++ fclose(fp);
+ if (ret != 1)
+ return -1;
+- fclose(fp);
+ *cnt = tmp;
+
+ return 0;
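+
The turbostat change is a leak fix: the early return on a failed fscanf() skipped fclose(), dropping a FILE handle on every failed sysfs read. The safe ordering closes the stream as soon as the data has been read, before any error handling. A sketch (the path in main() is illustrative):

	#include <stdio.h>

	/* Read one integer from path; close the stream before acting on
	 * errors so no early return can leak the FILE handle. */
	static int read_ll(const char *path, long long *out)
	{
		FILE *fp = fopen(path, "r");
		int ret;

		if (!fp)
			return -1;

		ret = fscanf(fp, "%lld", out);
		fclose(fp);		/* always reached, success or not */

		return ret == 1 ? 0 : -1;
	}

	int main(void)
	{
		long long v;

		return read_ll("/proc/sys/kernel/pid_max", &v) == 0 ? 0 : 1;
	}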
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 2d3c8c8f558a8..21be936dd1af6 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -168,17 +168,26 @@ $(OUTPUT)/%:%.c
+ $(call msg,BINARY,,$@)
+ $(Q)$(LINK.c) $^ $(LDLIBS) -o $@
+
++# LLVM's ld.lld doesn't support all the architectures, so use it only on x86
++ifeq ($(SRCARCH),x86)
++LLD := lld
++else
++LLD := ld
++endif
++
+ # Filter out -static for liburandom_read.so and its dependent targets so that static builds
+ # do not fail. Static builds leave urandom_read relying on system-wide shared libraries.
+ $(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c
+ $(call msg,LIB,,$@)
+- $(Q)$(CC) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $^ $(LDLIBS) -fPIC -shared -o $@
++ $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $^ $(LDLIBS) \
++ -fuse-ld=$(LLD) -Wl,-znoseparate-code -fPIC -shared -o $@
+
+ $(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_read.so
+ $(call msg,BINARY,,$@)
+- $(Q)$(CC) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
+- liburandom_read.so $(LDLIBS) \
+- -Wl,-rpath=. -Wl,--build-id=sha1 -o $@
++ $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
++ liburandom_read.so $(LDLIBS) \
++ -fuse-ld=$(LLD) -Wl,-znoseparate-code \
++ -Wl,-rpath=. -Wl,--build-id=sha1 -o $@
+
+ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_testmod/*.[ch])
+ $(call msg,MOD,,$@)
+@@ -578,6 +587,8 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
+ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
+ prog_tests/tests.h map_tests/tests.h verifier/tests.h \
+ feature bpftool \
+- $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h no_alu32 bpf_gcc bpf_testmod.ko)
++ $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h \
++ no_alu32 bpf_gcc bpf_testmod.ko \
++ liburandom_read.so)
+
+ .PHONY: docs docs-clean
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index ba5bde53d418f..5af690063af5a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -5324,7 +5324,7 @@ static void do_test_pprint(int test_num)
+ ret = snprintf(pin_path, sizeof(pin_path), "%s/%s",
+ "/sys/fs/bpf", test->map_name);
+
+- if (CHECK(ret == sizeof(pin_path), "pin_path %s/%s is too long",
++ if (CHECK(ret >= sizeof(pin_path), "pin_path %s/%s is too long",
+ "/sys/fs/bpf", test->map_name)) {
+ err = -1;
+ goto done;
+diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
+index a7e74297f15f5..5a7e6011f6bf9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
++++ b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
+@@ -7,11 +7,9 @@
+
+ void serial_test_fexit_stress(void)
+ {
+- char test_skb[128] = {};
+ int fexit_fd[CNT] = {};
+ int link_fd[CNT] = {};
+- char error[4096];
+- int err, i, filter_fd;
++ int err, i;
+
+ const struct bpf_insn trace_program[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+@@ -20,25 +18,9 @@ void serial_test_fexit_stress(void)
+
+ LIBBPF_OPTS(bpf_prog_load_opts, trace_opts,
+ .expected_attach_type = BPF_TRACE_FEXIT,
+- .log_buf = error,
+- .log_size = sizeof(error),
+ );
+
+- const struct bpf_insn skb_program[] = {
+- BPF_MOV64_IMM(BPF_REG_0, 0),
+- BPF_EXIT_INSN(),
+- };
+-
+- LIBBPF_OPTS(bpf_prog_load_opts, skb_opts,
+- .log_buf = error,
+- .log_size = sizeof(error),
+- );
+-
+- LIBBPF_OPTS(bpf_test_run_opts, topts,
+- .data_in = test_skb,
+- .data_size_in = sizeof(test_skb),
+- .repeat = 1,
+- );
++ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ err = libbpf_find_vmlinux_btf_id("bpf_fentry_test1",
+ trace_opts.expected_attach_type);
+@@ -58,15 +40,9 @@ void serial_test_fexit_stress(void)
+ goto out;
+ }
+
+- filter_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL",
+- skb_program, sizeof(skb_program) / sizeof(struct bpf_insn),
+- &skb_opts);
+- if (!ASSERT_GE(filter_fd, 0, "test_program_loaded"))
+- goto out;
++ err = bpf_prog_test_run_opts(fexit_fd[0], &topts);
++ ASSERT_OK(err, "bpf_prog_test_run_opts");
+
+- err = bpf_prog_test_run_opts(filter_fd, &topts);
+- close(filter_fd);
+- CHECK_FAIL(err);
+ out:
+ for (i = 0; i < CNT; i++) {
+ if (link_fd[i])
+diff --git a/tools/testing/selftests/bpf/prog_tests/sock_fields.c b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
+index 9d211b5c22c41..7d23166c77af5 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sock_fields.c
++++ b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
+@@ -394,7 +394,6 @@ void serial_test_sock_fields(void)
+ test();
+
+ done:
+- test_sock_fields__detach(skel);
+ test_sock_fields__destroy(skel);
+ if (child_cg_fd >= 0)
+ close(child_cg_fd);
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index 958dae769c52f..cb6a53b3e023c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -646,7 +646,7 @@ static void test_tcp_clear_dtime(struct test_tc_dtime *skel)
+ __u32 *errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+- test_inet_dtime(AF_INET6, SOCK_STREAM, IP6_DST, 0);
++ test_inet_dtime(AF_INET6, SOCK_STREAM, IP6_DST, 50000 + t);
+
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P100));
+@@ -683,7 +683,7 @@ static void test_tcp_dtime(struct test_tc_dtime *skel, int family, bool bpf_fwd)
+ errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+- test_inet_dtime(family, SOCK_STREAM, addr, 0);
++ test_inet_dtime(family, SOCK_STREAM, addr, 50000 + t);
+
+ /* fwdns_prio100 prog does not read delivery_time_type, so
+ * kernel puts the (rcv) timestamp in __sk_buff->tstamp
+@@ -715,13 +715,13 @@ static void test_udp_dtime(struct test_tc_dtime *skel, int family, bool bpf_fwd)
+ errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+- test_inet_dtime(family, SOCK_DGRAM, addr, 0);
++ test_inet_dtime(family, SOCK_DGRAM, addr, 50000 + t);
+
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P100));
+ /* non mono delivery time is not forwarded */
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P101], 0,
+- dtime_cnt_str(t, INGRESS_FWDNS_P100));
++ dtime_cnt_str(t, INGRESS_FWDNS_P101));
+ for (i = EGRESS_FWDNS_P100; i < SET_DTIME; i++)
+ ASSERT_GT(dtimes[i], 0, dtime_cnt_str(t, i));
+
+diff --git a/tools/testing/selftests/bpf/progs/test_tc_dtime.c b/tools/testing/selftests/bpf/progs/test_tc_dtime.c
+index 06f300d06dbd7..b596479a9ebeb 100644
+--- a/tools/testing/selftests/bpf/progs/test_tc_dtime.c
++++ b/tools/testing/selftests/bpf/progs/test_tc_dtime.c
+@@ -11,6 +11,8 @@
+ #include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
++#include <linux/tcp.h>
++#include <linux/udp.h>
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_endian.h>
+ #include <sys/socket.h>
+@@ -115,6 +117,19 @@ static bool bpf_fwd(void)
+ return test < TCP_IP4_RT_FWD;
+ }
+
++static __u8 get_proto(void)
++{
++ switch (test) {
++ case UDP_IP4:
++ case UDP_IP6:
++ case UDP_IP4_RT_FWD:
++ case UDP_IP6_RT_FWD:
++ return IPPROTO_UDP;
++ default:
++ return IPPROTO_TCP;
++ }
++}
++
+ /* -1: parse error: TC_ACT_SHOT
+ * 0: not testing traffic: TC_ACT_OK
+ * >0: first byte is the inet_proto, second byte has the netns
+@@ -122,11 +137,16 @@ static bool bpf_fwd(void)
+ */
+ static int skb_get_type(struct __sk_buff *skb)
+ {
++ __u16 dst_ns_port = __bpf_htons(50000 + test);
+ void *data_end = ctx_ptr(skb->data_end);
+ void *data = ctx_ptr(skb->data);
+ __u8 inet_proto = 0, ns = 0;
+ struct ipv6hdr *ip6h;
++ __u16 sport, dport;
+ struct iphdr *iph;
++ struct tcphdr *th;
++ struct udphdr *uh;
++ void *trans;
+
+ switch (skb->protocol) {
+ case __bpf_htons(ETH_P_IP):
+@@ -138,6 +158,7 @@ static int skb_get_type(struct __sk_buff *skb)
+ else if (iph->saddr == ip4_dst)
+ ns = DST_NS;
+ inet_proto = iph->protocol;
++ trans = iph + 1;
+ break;
+ case __bpf_htons(ETH_P_IPV6):
+ ip6h = data + sizeof(struct ethhdr);
+@@ -148,15 +169,43 @@ static int skb_get_type(struct __sk_buff *skb)
+ else if (v6_equal(ip6h->saddr, (struct in6_addr)ip6_dst))
+ ns = DST_NS;
+ inet_proto = ip6h->nexthdr;
++ trans = ip6h + 1;
+ break;
+ default:
+ return 0;
+ }
+
+- if ((inet_proto != IPPROTO_TCP && inet_proto != IPPROTO_UDP) || !ns)
++ /* skb is not from src_ns or dst_ns.
++ * skb is not the testing IPPROTO.
++ */
++ if (!ns || inet_proto != get_proto())
+ return 0;
+
+- return (ns << 8 | inet_proto);
++ switch (inet_proto) {
++ case IPPROTO_TCP:
++ th = trans;
++ if (th + 1 > data_end)
++ return -1;
++ sport = th->source;
++ dport = th->dest;
++ break;
++ case IPPROTO_UDP:
++ uh = trans;
++ if (uh + 1 > data_end)
++ return -1;
++ sport = uh->source;
++ dport = uh->dest;
++ break;
++ default:
++ return 0;
++ }
++
++ /* The skb is the testing traffic */
++ if ((ns == SRC_NS && dport == dst_ns_port) ||
++ (ns == DST_NS && sport == dst_ns_port))
++ return (ns << 8 | inet_proto);
++
++ return 0;
+ }
+
+ /* format: direction@iface@netns
+diff --git a/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c b/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c
+index 86b9e611ad871..21c31fe10c1a2 100644
+--- a/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c
++++ b/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c
+@@ -8,8 +8,6 @@
+ #include "test_util.h"
+ #include "kvm_util.h"
+
+-#define VCPU_ID 6
+-
+ #define ICPT_INSTRUCTION 0x04
+ #define IPA0_DIAG 0x8300
+
+@@ -27,14 +25,15 @@ static void guest_code(void)
+ */
+ static uint64_t diag318_handler(void)
+ {
++ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ struct kvm_run *run;
+ uint64_t reg;
+ uint64_t diag318_info;
+
+- vm = vm_create_default(VCPU_ID, 0, guest_code);
+- vcpu_run(vm, VCPU_ID);
+- run = vcpu_state(vm, VCPU_ID);
++ vm = vm_create_with_one_vcpu(&vcpu, guest_code);
++ vcpu_run(vm, vcpu->id);
++ run = vcpu->run;
+
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+ "DIAGNOSE 0x0318 instruction was not intercepted");
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index ead7011ee8f61..5d85e1c021da8 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -1422,7 +1422,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
+
+ asm volatile("vmcall"
+ : "=a"(r)
+- : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
++ : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+ return r;
+ }
+
+diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
+index 15f046e19cb2c..d59918d5cbe2d 100644
+--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
++++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
+@@ -28,8 +28,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
+ }
+
+ struct vcpu_info {
+- struct kvm_vm *vm;
+- uint32_t id;
++ struct kvm_vcpu *vcpu;
+ uint64_t start_gpa;
+ uint64_t end_gpa;
+ };
+@@ -60,12 +59,13 @@ static void run_vcpu(struct kvm_vm *vm, uint32_t vcpu_id)
+
+ static void *vcpu_worker(void *data)
+ {
+- struct vcpu_info *vcpu = data;
++ struct vcpu_info *info = data;
++ struct kvm_vcpu *vcpu = info->vcpu;
+ struct kvm_vm *vm = vcpu->vm;
+ struct kvm_sregs sregs;
+ struct kvm_regs regs;
+
+- vcpu_args_set(vm, vcpu->id, 3, vcpu->start_gpa, vcpu->end_gpa,
++ vcpu_args_set(vm, vcpu->id, 3, info->start_gpa, info->end_gpa,
+ vm_get_page_size(vm));
+
+ /* Snapshot regs before the first run. */
+@@ -89,8 +89,8 @@ static void *vcpu_worker(void *data)
+ return NULL;
+ }
+
+-static pthread_t *spawn_workers(struct kvm_vm *vm, uint64_t start_gpa,
+- uint64_t end_gpa)
++static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
++ uint64_t start_gpa, uint64_t end_gpa)
+ {
+ struct vcpu_info *info;
+ uint64_t gpa, nr_bytes;
+@@ -108,8 +108,7 @@ static pthread_t *spawn_workers(struct kvm_vm *vm, uint64_t start_gpa,
+ TEST_ASSERT(nr_bytes, "C'mon, no way you have %d CPUs", nr_vcpus);
+
+ for (i = 0, gpa = start_gpa; i < nr_vcpus; i++, gpa += nr_bytes) {
+- info[i].vm = vm;
+- info[i].id = i;
++ info[i].vcpu = vcpus[i];
+ info[i].start_gpa = gpa;
+ info[i].end_gpa = gpa + nr_bytes;
+ pthread_create(&threads[i], NULL, vcpu_worker, &info[i]);
+@@ -172,6 +171,7 @@ int main(int argc, char *argv[])
+ uint64_t max_gpa, gpa, slot_size, max_mem, i;
+ int max_slots, slot, opt, fd;
+ bool hugepages = false;
++ struct kvm_vcpu **vcpus;
+ pthread_t *threads;
+ struct kvm_vm *vm;
+ void *mem;
+@@ -215,7 +215,10 @@ int main(int argc, char *argv[])
+ }
+ }
+
+- vm = vm_create_default_with_vcpus(nr_vcpus, 0, 0, guest_code, NULL);
++ vcpus = malloc(nr_vcpus * sizeof(*vcpus));
++ TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
++
++ vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+
+ max_gpa = vm_get_max_gfn(vm) << vm_get_page_shift(vm);
+ TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb ");
+@@ -252,7 +255,10 @@ int main(int argc, char *argv[])
+ }
+
+ atomic_set(&rendezvous, nr_vcpus + 1);
+- threads = spawn_workers(vm, start_gpa, gpa);
++ threads = spawn_workers(vm, vcpus, start_gpa, gpa);
++
++ free(vcpus);
++ vcpus = NULL;
+
+ pr_info("Running with %lugb of guest memory and %u vCPUs\n",
+ (gpa - start_gpa) / size_1gb, nr_vcpus);
+diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
+index bbe3b379927ab..c245476fa29d6 100755
+--- a/tools/testing/selftests/net/fib_rule_tests.sh
++++ b/tools/testing/selftests/net/fib_rule_tests.sh
+@@ -303,6 +303,29 @@ run_fibrule_tests()
+ log_section "IPv6 fib rule"
+ fib_rule6_test
+ }
++################################################################################
++# usage
++
++usage()
++{
++ cat <<EOF
++usage: ${0##*/} OPTS
++
++ -t <test> Test(s) to run (default: all)
++ (options: $TESTS)
++EOF
++}
++
++################################################################################
++# main
++
++while getopts ":t:h" opt; do
++ case $opt in
++ t) TESTS=$OPTARG;;
++ h) usage; exit 0;;
++ *) usage; exit 1;;
++ esac
++done
+
+ if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+diff --git a/tools/testing/selftests/powerpc/math/mma.S b/tools/testing/selftests/powerpc/math/mma.S
+index 8528c98495659..61cc88b1b26bc 100644
+--- a/tools/testing/selftests/powerpc/math/mma.S
++++ b/tools/testing/selftests/powerpc/math/mma.S
+@@ -20,6 +20,9 @@ test_mma:
+ /* xvi16ger2s */
+ .long 0xec042958
+
++ /* Deprime the accumulator - xxmfacc 0 */
++ .long 0x7c000162
++
+ /* Store result in image passed in r5 */
+ stxvw4x 0,0,5
+ addi 5,5,16
+diff --git a/tools/testing/selftests/powerpc/papr_attributes/attr_test.c b/tools/testing/selftests/powerpc/papr_attributes/attr_test.c
+index bab0dc06e90b7..9b655be641c90 100644
+--- a/tools/testing/selftests/powerpc/papr_attributes/attr_test.c
++++ b/tools/testing/selftests/powerpc/papr_attributes/attr_test.c
+@@ -7,6 +7,7 @@
+ * Copyright 2022, Pratik Rajesh Sampat, IBM Corp.
+ */
+
++#include <errno.h>
+ #include <stdio.h>
+ #include <string.h>
+ #include <dirent.h>
+@@ -32,7 +33,7 @@ enum type {
+ NUM_VAL
+ };
+
+-int value_type(int id)
++static int value_type(int id)
+ {
+ int val_type;
+
+@@ -54,15 +55,21 @@ int value_type(int id)
+ return val_type;
+ }
+
+-int verify_energy_info(void)
++static int verify_energy_info(void)
+ {
+ const char *path = "/sys/firmware/papr/energy_scale_info";
+ struct dirent *entry;
+ struct stat s;
+ DIR *dirp;
+
+- if (stat(path, &s) || !S_ISDIR(s.st_mode))
+- return -1;
++ errno = 0;
++ if (stat(path, &s)) {
++ SKIP_IF(errno == ENOENT);
++ FAIL_IF(errno);
++ }
++
++ FAIL_IF(!S_ISDIR(s.st_mode));
++
+ dirp = opendir(path);
+
+ while ((entry = readdir(dirp)) != NULL) {
+@@ -76,25 +83,24 @@ int verify_energy_info(void)
+
+ id = atoi(entry->d_name);
+ attr_type = value_type(id);
+- if (attr_type == INVALID)
+- return -1;
++ FAIL_IF(attr_type == INVALID);
+
+ /* Check if the files exist and have data in them */
+ sprintf(file_name, "%s/%d/desc", path, id);
+ f = fopen(file_name, "r");
+- if (!f || fgetc(f) == EOF)
+- return -1;
++ FAIL_IF(!f);
++ FAIL_IF(fgetc(f) == EOF);
+
+ sprintf(file_name, "%s/%d/value", path, id);
+ f = fopen(file_name, "r");
+- if (!f || fgetc(f) == EOF)
+- return -1;
++ FAIL_IF(!f);
++ FAIL_IF(fgetc(f) == EOF);
+
+ if (attr_type == STR_VAL) {
+ sprintf(file_name, "%s/%d/value_desc", path, id);
+ f = fopen(file_name, "r");
+- if (!f || fgetc(f) == EOF)
+- return -1;
++ FAIL_IF(!f);
++ FAIL_IF(fgetc(f) == EOF);
+ }
+ }
+
+diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh
+index 263e16aeca0e4..6c734818a8757 100755
+--- a/tools/testing/selftests/rcutorture/bin/kvm.sh
++++ b/tools/testing/selftests/rcutorture/bin/kvm.sh
+@@ -164,7 +164,7 @@ do
+ shift
+ ;;
+ --gdb)
+- TORTURE_KCONFIG_GDB_ARG="CONFIG_DEBUG_INFO=y"; export TORTURE_KCONFIG_GDB_ARG
++ TORTURE_KCONFIG_GDB_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y"; export TORTURE_KCONFIG_GDB_ARG
+ TORTURE_BOOT_GDB_ARG="nokaslr"; export TORTURE_BOOT_GDB_ARG
+ TORTURE_QEMU_GDB_ARG="-s -S"; export TORTURE_QEMU_GDB_ARG
+ ;;
+@@ -180,7 +180,7 @@ do
+ shift
+ ;;
+ --kasan)
+- TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
++ TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
+ if test -n "$torture_qemu_mem_default"
+ then
+ TORTURE_QEMU_MEM=2G
+@@ -192,7 +192,7 @@ do
+ shift
+ ;;
+ --kcsan)
+- TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG
++ TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG
+ ;;
+ --kmake-arg|--kmake-args)
+ checkarg --kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$'
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 136df5b76319d..4ae6c89913074 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -809,7 +809,7 @@ void kill_thread_or_group(struct __test_metadata *_metadata,
+ .len = (unsigned short)ARRAY_SIZE(filter_thread),
+ .filter = filter_thread,
+ };
+- int kill = kill_how == KILL_PROCESS ? SECCOMP_RET_KILL_PROCESS : 0xAAAAAAAAA;
++ int kill = kill_how == KILL_PROCESS ? SECCOMP_RET_KILL_PROCESS : 0xAAAAAAAA;
+ struct sock_filter filter_process[] = {
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ offsetof(struct seccomp_data, nr)),
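The one-character fix above drops a ninth hex digit: 0xAAAAAAAAA is a 36-bit literal that cannot be represented in a 32-bit int. A hedged illustration of the truncation (implementation-defined in C, but it wraps on common ABIs):

#include <stdio.h>

int main(void)
{
	long long wide = 0xAAAAAAAAALL; /* nine hex digits: 36 bits */
	int truncated = (int)wide;      /* what the old initializer stored */

	printf("wide      = 0x%llx\n", (unsigned long long)wide); /* 0xaaaaaaaaa */
	printf("truncated = 0x%x\n", (unsigned)truncated);        /* 0xaaaaaaaa */
	return 0;
}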
+diff --git a/tools/testing/selftests/timers/clocksource-switch.c b/tools/testing/selftests/timers/clocksource-switch.c
+index ef8eb3604595e..b57f0a9be4902 100644
+--- a/tools/testing/selftests/timers/clocksource-switch.c
++++ b/tools/testing/selftests/timers/clocksource-switch.c
+@@ -110,10 +110,10 @@ int run_tests(int secs)
+
+ sprintf(buf, "./inconsistency-check -t %i", secs);
+ ret = system(buf);
+- if (ret)
+- return ret;
++ if (WIFEXITED(ret) && WEXITSTATUS(ret))
++ return WEXITSTATUS(ret);
+ ret = system("./nanosleep");
+- return ret;
++ return WIFEXITED(ret) ? WEXITSTATUS(ret) : 0;
+ }
+
+
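For context on the clocksource-switch change above: system() returns a wait status, not an exit code, so the raw value must be decoded with WIFEXITED()/WEXITSTATUS(). A minimal, self-contained sketch of the decoding (illustrative only, not part of the patch):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int run_checked(const char *cmd)
{
	int status = system(cmd);

	if (status == -1)      /* the shell could not be run at all */
		return -1;
	if (WIFEXITED(status)) /* normal termination: extract the code */
		return WEXITSTATUS(status);
	return -1;             /* killed by a signal, etc. */
}

int main(void)
{
	printf("true  -> %d\n", run_checked("true"));  /* expect 0 */
	printf("false -> %d\n", run_checked("false")); /* expect 1 */
	return 0;
}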
+diff --git a/tools/testing/selftests/timers/valid-adjtimex.c b/tools/testing/selftests/timers/valid-adjtimex.c
+index 5397de708d3c2..48b9a803235a8 100644
+--- a/tools/testing/selftests/timers/valid-adjtimex.c
++++ b/tools/testing/selftests/timers/valid-adjtimex.c
+@@ -40,7 +40,7 @@
+ #define ADJ_SETOFFSET 0x0100
+
+ #include <sys/syscall.h>
+-static int clock_adjtime(clockid_t id, struct timex *tx)
++int clock_adjtime(clockid_t id, struct timex *tx)
+ {
+ return syscall(__NR_clock_adjtime, id, tx);
+ }
+diff --git a/tools/testing/selftests/vm/hugepage-mremap.c b/tools/testing/selftests/vm/hugepage-mremap.c
+index 585978f181ed1..e63a0214f6399 100644
+--- a/tools/testing/selftests/vm/hugepage-mremap.c
++++ b/tools/testing/selftests/vm/hugepage-mremap.c
+@@ -107,7 +107,7 @@ static void register_region_with_uffd(char *addr, size_t len)
+
+ int main(int argc, char *argv[])
+ {
+- size_t length;
++ size_t length = 0;
+
+ if (argc != 2 && argc != 3) {
+ printf("Usage: %s [length_in_MB] <hugetlb_file>\n", argv[0]);
+diff --git a/tools/testing/selftests/vm/hugetlb-madvise.c b/tools/testing/selftests/vm/hugetlb-madvise.c
+index 6c6af40f57478..3c9943131881e 100644
+--- a/tools/testing/selftests/vm/hugetlb-madvise.c
++++ b/tools/testing/selftests/vm/hugetlb-madvise.c
+@@ -89,10 +89,11 @@ void write_fault_pages(void *addr, unsigned long nr_pages)
+
+ void read_fault_pages(void *addr, unsigned long nr_pages)
+ {
+- unsigned long i, tmp;
++ unsigned long dummy = 0;
++ unsigned long i;
+
+ for (i = 0; i < nr_pages; i++)
+- tmp += *((unsigned long *)(addr + (i * huge_page_size)));
++ dummy += *((unsigned long *)(addr + (i * huge_page_size)));
+ }
+
+ int main(int argc, char **argv)
+diff --git a/tools/testing/selftests/vm/mrelease_test.c b/tools/testing/selftests/vm/mrelease_test.c
+index 96671c2f7d485..6c62966ab5dbc 100644
+--- a/tools/testing/selftests/vm/mrelease_test.c
++++ b/tools/testing/selftests/vm/mrelease_test.c
+@@ -62,19 +62,22 @@ static int alloc_noexit(unsigned long nr_pages, int pipefd)
+ /* The process_mrelease calls in this test are expected to fail */
+ static void run_negative_tests(int pidfd)
+ {
++ int res;
+ /* Test invalid flags. Expect to fail with EINVAL error code. */
+ if (!syscall(__NR_process_mrelease, pidfd, (unsigned int)-1) ||
+ errno != EINVAL) {
++ res = (errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
+ perror("process_mrelease with wrong flags");
+- exit(errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
++ exit(res);
+ }
+ /*
+ * Test reaping while process is alive with no pending SIGKILL.
+ * Expect to fail with EINVAL error code.
+ */
+ if (!syscall(__NR_process_mrelease, pidfd, 0) || errno != EINVAL) {
++ res = (errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
+ perror("process_mrelease on a live process");
+- exit(errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
++ exit(res);
+ }
+ }
+
+@@ -100,8 +103,9 @@ int main(void)
+
+ /* Test a wrong pidfd */
+ if (!syscall(__NR_process_mrelease, -1, 0) || errno != EBADF) {
++ res = (errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
+ perror("process_mrelease with wrong pidfd");
+- exit(errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
++ exit(res);
+ }
+
+ /* Start the test with 1MB child memory allocation */
+@@ -156,8 +160,9 @@ retry:
+ run_negative_tests(pidfd);
+
+ if (kill(pid, SIGKILL)) {
++ res = (errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
+ perror("kill");
+- exit(errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
++ exit(res);
+ }
+
+ success = (syscall(__NR_process_mrelease, pidfd, 0) == 0);
+@@ -172,9 +177,10 @@ retry:
+ if (errno == ESRCH) {
+ retry = (size <= MAX_SIZE_MB);
+ } else {
++ res = (errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
+ perror("process_mrelease");
+ waitpid(pid, NULL, 0);
+- exit(errno == ENOSYS ? KSFT_SKIP : KSFT_FAIL);
++ exit(res);
+ }
+ }
+
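The mrelease_test changes above all follow one rule: compute the exit code from errno before calling perror(), since any stdio call is allowed to modify errno. A hedged sketch, with a deliberately invalid syscall standing in for process_mrelease() and illustrative exit codes:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	int res;

	if (syscall(-1) == -1) {                 /* deliberately invalid */
		res = (errno == ENOSYS ? 4 : 1); /* snapshot errno first */
		perror("syscall");               /* may clobber errno */
		exit(res);
	}
	return 0;
}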
+diff --git a/tools/testing/selftests/wireguard/qemu/arch/riscv32.config b/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
+index 0bd0e72d95d49..2fc36efb166dc 100644
+--- a/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
++++ b/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
+@@ -1,3 +1,4 @@
++CONFIG_NONPORTABLE=y
+ CONFIG_ARCH_RV32I=y
+ CONFIG_MMU=y
+ CONFIG_FPU=y
+diff --git a/tools/thermal/tmon/sysfs.c b/tools/thermal/tmon/sysfs.c
+index b00b1bfd9d8e7..cb1108bc92498 100644
+--- a/tools/thermal/tmon/sysfs.c
++++ b/tools/thermal/tmon/sysfs.c
+@@ -13,6 +13,7 @@
+ #include <stdint.h>
+ #include <dirent.h>
+ #include <libintl.h>
++#include <limits.h>
+ #include <ctype.h>
+ #include <time.h>
+ #include <syslog.h>
+@@ -33,9 +34,9 @@ int sysfs_set_ulong(char *path, char *filename, unsigned long val)
+ {
+ FILE *fd;
+ int ret = -1;
+- char filepath[256];
++ char filepath[PATH_MAX + 2]; /* NUL and '/' */
+
+- snprintf(filepath, 256, "%s/%s", path, filename);
++ snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+
+ fd = fopen(filepath, "w");
+ if (!fd) {
+@@ -57,9 +58,9 @@ static int sysfs_get_ulong(char *path, char *filename, unsigned long *p_ulong)
+ {
+ FILE *fd;
+ int ret = -1;
+- char filepath[256];
++ char filepath[PATH_MAX + 2]; /* NUL and '/' */
+
+- snprintf(filepath, 256, "%s/%s", path, filename);
++ snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+
+ fd = fopen(filepath, "r");
+ if (!fd) {
+@@ -76,9 +77,9 @@ static int sysfs_get_string(char *path, char *filename, char *str)
+ {
+ FILE *fd;
+ int ret = -1;
+- char filepath[256];
++ char filepath[PATH_MAX + 2]; /* NUL and '/' */
+
+- snprintf(filepath, 256, "%s/%s", path, filename);
++ snprintf(filepath, sizeof(filepath), "%s/%s", path, filename);
+
+ fd = fopen(filepath, "r");
+ if (!fd) {
+@@ -199,8 +200,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ {
+ unsigned long trip_instance = 0;
+ char cdev_name_linked[256];
+- char cdev_name[256];
+- char cdev_trip_name[256];
++ char cdev_name[PATH_MAX];
++ char cdev_trip_name[PATH_MAX];
+ int cdev_id;
+
+ if (nl->d_type == DT_LNK) {
+@@ -213,7 +214,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ return -EINVAL;
+ }
+ /* find the link to real cooling device record binding */
+- snprintf(cdev_name, 256, "%s/%s", tz_name, nl->d_name);
++ snprintf(cdev_name, sizeof(cdev_name) - 2, "%s/%s",
++ tz_name, nl->d_name);
+ memset(cdev_name_linked, 0, sizeof(cdev_name_linked));
+ if (readlink(cdev_name, cdev_name_linked,
+ sizeof(cdev_name_linked) - 1) != -1) {
+@@ -226,8 +228,8 @@ static int find_tzone_cdev(struct dirent *nl, char *tz_name,
+ /* find the trip point to which the cdev is bound
+ * in this tzone
+ */
+- snprintf(cdev_trip_name, 256, "%s%s", nl->d_name,
+- "_trip_point");
++ snprintf(cdev_trip_name, sizeof(cdev_trip_name) - 1,
++ "%s%s", nl->d_name, "_trip_point");
+ sysfs_get_ulong(tz_name, cdev_trip_name,
+ &trip_instance);
+ /* validate trip point range, e.g. trip could return -1
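The tmon changes above replace fixed 256-byte arrays with PATH_MAX-sized ones and bound every snprintf() by sizeof(). A hedged sketch of that pattern, with an added truncation check that the patch itself does not carry:

#include <limits.h>
#include <stdio.h>

static int build_path(char *out, size_t outsz, const char *dir,
		      const char *file)
{
	int n = snprintf(out, outsz, "%s/%s", dir, file);

	/* snprintf() reports the length it wanted; >= outsz is truncation */
	if (n < 0 || (size_t)n >= outsz)
		return -1;
	return 0;
}

int main(void)
{
	char filepath[PATH_MAX + 2]; /* room for the '/' and the NUL */

	if (build_path(filepath, sizeof(filepath),
		       "/sys/class/thermal/thermal_zone0", "temp"))
		return 1;
	printf("%s\n", filepath);
	return 0;
}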
+diff --git a/tools/thermal/tmon/tmon.h b/tools/thermal/tmon/tmon.h
+index c9066ec104ddd..44d16d778f044 100644
+--- a/tools/thermal/tmon/tmon.h
++++ b/tools/thermal/tmon/tmon.h
+@@ -27,6 +27,9 @@
+ #define NR_LINES_TZDATA 1
+ #define TMON_LOG_FILE "/var/tmp/tmon.log"
+
++#include <sys/time.h>
++#include <pthread.h>
++
+ extern unsigned long ticktime;
+ extern double time_elapsed;
+ extern unsigned long target_temp_user;
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 3822f4ea5f495..1bea2d16d4c11 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -1,6 +1,6 @@
+ NAME := rtla
+ # Follow the kernel version
+-VERSION := $(shell cat VERSION 2> /dev/null || make -sC ../../.. kernelversion)
++VERSION := $(shell cat VERSION 2> /dev/null || make -sC ../../.. kernelversion | grep -v make)
+
+ # From libtracefs:
+ # Makefiles suck: This macro sets a default value of $(2) for the
+diff --git a/tools/tracing/rtla/src/trace.c b/tools/tracing/rtla/src/trace.c
+index 5784c9f9e5706..e1ba6d9f42658 100644
+--- a/tools/tracing/rtla/src/trace.c
++++ b/tools/tracing/rtla/src/trace.c
+@@ -134,13 +134,18 @@ void trace_instance_destroy(struct trace_instance *trace)
+ if (trace->inst) {
+ disable_tracer(trace->inst);
+ destroy_instance(trace->inst);
++ trace->inst = NULL;
+ }
+
+- if (trace->seq)
++ if (trace->seq) {
+ free(trace->seq);
++ trace->seq = NULL;
++ }
+
+- if (trace->tep)
++ if (trace->tep) {
+ tep_free(trace->tep);
++ trace->tep = NULL;
++ }
+ }
+
+ /*
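The trace_instance_destroy() hunk above pairs every release with a pointer reset. As a hedged, standalone sketch (the struct below is an illustrative stand-in, not the rtla type), clearing the pointers makes the destroy routine safe to call twice:

#include <stdlib.h>

struct instance { /* illustrative stand-in */
	void *seq;
	void *tep;
};

static void destroy(struct instance *t)
{
	free(t->seq);
	t->seq = NULL; /* a second destroy() now sees NULL and no-ops */

	free(t->tep);
	t->tep = NULL;
}

int main(void)
{
	struct instance t = { malloc(16), malloc(16) };

	destroy(&t);
	destroy(&t); /* safe: free(NULL) is defined to do nothing */
	return 0;
}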
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index 5352167a1e751..5ae2fa96fde1e 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -106,8 +106,9 @@ int parse_cpu_list(char *cpu_list, char **monitored_cpus)
+
+ nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
+
+- mon_cpus = malloc(nr_cpus * sizeof(char));
+- memset(mon_cpus, 0, (nr_cpus * sizeof(char)));
++ mon_cpus = calloc(nr_cpus, sizeof(char));
++ if (!mon_cpus)
++ goto err;
+
+ for (p = cpu_list; *p; ) {
+ cpu = atoi(p);
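The utils.c hunk above swaps malloc()+memset() for calloc() and, crucially, checks the result. A hedged sketch of the same allocation discipline:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
	char *mon_cpus;

	if (nr_cpus < 1)
		return 1;

	/* calloc() zeroes the buffer and checks the count * size multiply */
	mon_cpus = calloc(nr_cpus, sizeof(*mon_cpus));
	if (!mon_cpus) { /* the old malloc()-based code skipped this check */
		perror("calloc");
		return 1;
	}

	printf("%ld cpu slots allocated\n", nr_cpus);
	free(mon_cpus);
	return 0;
}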
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index a49df8988cd6a..98246f3dea87c 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -724,6 +724,15 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ kvm->mn_active_invalidate_count++;
+ spin_unlock(&kvm->mn_invalidate_lock);
+
++ /*
++ * Invalidate pfn caches _before_ invalidating the secondary MMUs, i.e.
++ * before acquiring mmu_lock, to avoid holding mmu_lock while acquiring
++ * each cache's lock. There are relatively few caches in existence at
++ * any given time, and the caches themselves can check for hva overlap,
++ * i.e. don't need to rely on memslot overlap checks for performance.
++ * Because this runs without holding mmu_lock, the pfn caches must use
++ * mn_active_invalidate_count (see above) instead of mmu_notifier_count.
++ */
+ gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
+ hva_range.may_block);
+
+@@ -2844,16 +2853,28 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn)
+ }
+ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
+
++static bool kvm_is_ad_tracked_pfn(kvm_pfn_t pfn)
++{
++ if (!pfn_valid(pfn))
++ return false;
++
++ /*
++ * Per page-flags.h, pages tagged PG_reserved "should in general not be
++ * touched (e.g. set dirty) except by its owner".
++ */
++ return !PageReserved(pfn_to_page(pfn));
++}
++
+ void kvm_set_pfn_dirty(kvm_pfn_t pfn)
+ {
+- if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
++ if (kvm_is_ad_tracked_pfn(pfn))
+ SetPageDirty(pfn_to_page(pfn));
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
+
+ void kvm_set_pfn_accessed(kvm_pfn_t pfn)
+ {
+- if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
++ if (kvm_is_ad_tracked_pfn(pfn))
+ mark_page_accessed(pfn_to_page(pfn));
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
+diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
+index dd84676615f1a..b0b6783673765 100644
+--- a/virt/kvm/pfncache.c
++++ b/virt/kvm/pfncache.c
+@@ -95,7 +95,7 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ }
+ EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+
+-static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva, gpa_t gpa)
++static void gpc_release_pfn_and_khva(struct kvm *kvm, kvm_pfn_t pfn, void *khva)
+ {
+ /* Unmap the old page if it was mapped before, and release it */
+ if (!is_error_noslot_pfn(pfn)) {
+@@ -112,31 +112,122 @@ static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva, gpa_t gpa)
+ }
+ }
+
+-static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, unsigned long uhva)
++static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_seq)
+ {
++ /*
++ * mn_active_invalidate_count acts for all intents and purposes
++ * like mmu_notifier_count here; but the latter cannot be used
++ * here because the invalidation of caches in the mmu_notifier
++ * event occurs _before_ mmu_notifier_count is elevated.
++ *
++ * Note, it does not matter that mn_active_invalidate_count
++ * is not protected by gpc->lock. It is guaranteed to
++ * be elevated before the mmu_notifier acquires gpc->lock, and
++ * isn't dropped until after mmu_notifier_seq is updated.
++ */
++ if (kvm->mn_active_invalidate_count)
++ return true;
++
++ /*
++ * Ensure mn_active_invalidate_count is read before
++ * mmu_notifier_seq. This pairs with the smp_wmb() in
++ * mmu_notifier_invalidate_range_end() to guarantee either the
++ * old (non-zero) value of mn_active_invalidate_count or the
++ * new (incremented) value of mmu_notifier_seq is observed.
++ */
++ smp_rmb();
++ return kvm->mmu_notifier_seq != mmu_seq;
++}
++
++static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
++{
++ /* Note, the new page offset may be different than the old! */
++ void *old_khva = gpc->khva - offset_in_page(gpc->khva);
++ kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
++ void *new_khva = NULL;
+ unsigned long mmu_seq;
+- kvm_pfn_t new_pfn;
+- int retry;
++
++ lockdep_assert_held(&gpc->refresh_lock);
++
++ lockdep_assert_held_write(&gpc->lock);
++
++ /*
++ * Invalidate the cache prior to dropping gpc->lock; the gpa=>uhva
++ * assets have already been updated and so a concurrent check() from a
++ * different task may not fail the gpa/uhva/generation checks.
++ */
++ gpc->valid = false;
+
+ do {
+ mmu_seq = kvm->mmu_notifier_seq;
+ smp_rmb();
+
++ write_unlock_irq(&gpc->lock);
++
++ /*
++ * If the previous iteration "failed" due to an mmu_notifier
++ * event, release the pfn and unmap the kernel virtual address
++ * from the previous attempt. Unmapping might sleep, so this
++ * needs to be done after dropping the lock. Opportunistically
++ * check for resched while the lock isn't held.
++ */
++ if (new_pfn != KVM_PFN_ERR_FAULT) {
++ /*
++ * Keep the mapping if the previous iteration reused
++ * the existing mapping and didn't create a new one.
++ */
++ if (new_khva == old_khva)
++ new_khva = NULL;
++
++ gpc_release_pfn_and_khva(kvm, new_pfn, new_khva);
++
++ cond_resched();
++ }
++
+ /* We always request a writeable mapping */
+- new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
++ new_pfn = hva_to_pfn(gpc->uhva, false, NULL, true, NULL);
+ if (is_error_noslot_pfn(new_pfn))
+- break;
++ goto out_error;
++
++ /*
++ * Obtain a new kernel mapping if KVM itself will access the
++ * pfn. Note, kmap() and memremap() can both sleep, so this
++ * too must be done outside of gpc->lock!
++ */
++ if (gpc->usage & KVM_HOST_USES_PFN) {
++ if (new_pfn == gpc->pfn) {
++ new_khva = old_khva;
++ } else if (pfn_valid(new_pfn)) {
++ new_khva = kmap(pfn_to_page(new_pfn));
++#ifdef CONFIG_HAS_IOMEM
++ } else {
++ new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB);
++#endif
++ }
++ if (!new_khva) {
++ kvm_release_pfn_clean(new_pfn);
++ goto out_error;
++ }
++ }
+
+- KVM_MMU_READ_LOCK(kvm);
+- retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
+- KVM_MMU_READ_UNLOCK(kvm);
+- if (!retry)
+- break;
++ write_lock_irq(&gpc->lock);
+
+- cond_resched();
+- } while (1);
++ /*
++ * Other tasks must wait for _this_ refresh to complete before
++ * attempting to refresh.
++ */
++ WARN_ON_ONCE(gpc->valid);
++ } while (mmu_notifier_retry_cache(kvm, mmu_seq));
+
+- return new_pfn;
++ gpc->valid = true;
++ gpc->pfn = new_pfn;
++ gpc->khva = new_khva + (gpc->gpa & ~PAGE_MASK);
++ return 0;
++
++out_error:
++ write_lock_irq(&gpc->lock);
++
++ return -EFAULT;
+ }
+
+ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+@@ -146,9 +237,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ unsigned long page_offset = gpa & ~PAGE_MASK;
+ kvm_pfn_t old_pfn, new_pfn;
+ unsigned long old_uhva;
+- gpa_t old_gpa;
+ void *old_khva;
+- bool old_valid;
+ int ret = 0;
+
+ /*
+@@ -158,13 +247,18 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ if (page_offset + len > PAGE_SIZE)
+ return -EINVAL;
+
++ /*
++ * If another task is refreshing the cache, wait for it to complete.
++ * There is no guarantee that concurrent refreshes will see the same
++ * gpa, memslots generation, etc..., so they must be fully serialized.
++ */
++ mutex_lock(&gpc->refresh_lock);
++
+ write_lock_irq(&gpc->lock);
+
+- old_gpa = gpc->gpa;
+ old_pfn = gpc->pfn;
+ old_khva = gpc->khva - offset_in_page(gpc->khva);
+ old_uhva = gpc->uhva;
+- old_valid = gpc->valid;
+
+ /* If the userspace HVA is invalid, refresh that first */
+ if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+@@ -177,64 +271,17 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn);
+
+ if (kvm_is_error_hva(gpc->uhva)) {
+- gpc->pfn = KVM_PFN_ERR_FAULT;
+ ret = -EFAULT;
+ goto out;
+ }
+-
+- gpc->uhva += page_offset;
+ }
+
+ /*
+ * If the userspace HVA changed or the PFN was already invalid,
+ * drop the lock and do the HVA to PFN lookup again.
+ */
+- if (!old_valid || old_uhva != gpc->uhva) {
+- unsigned long uhva = gpc->uhva;
+- void *new_khva = NULL;
+-
+- /* Placeholders for "hva is valid but not yet mapped" */
+- gpc->pfn = KVM_PFN_ERR_FAULT;
+- gpc->khva = NULL;
+- gpc->valid = true;
+-
+- write_unlock_irq(&gpc->lock);
+-
+- new_pfn = hva_to_pfn_retry(kvm, uhva);
+- if (is_error_noslot_pfn(new_pfn)) {
+- ret = -EFAULT;
+- goto map_done;
+- }
+-
+- if (gpc->usage & KVM_HOST_USES_PFN) {
+- if (new_pfn == old_pfn) {
+- new_khva = old_khva;
+- old_pfn = KVM_PFN_ERR_FAULT;
+- old_khva = NULL;
+- } else if (pfn_valid(new_pfn)) {
+- new_khva = kmap(pfn_to_page(new_pfn));
+-#ifdef CONFIG_HAS_IOMEM
+- } else {
+- new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB);
+-#endif
+- }
+- if (new_khva)
+- new_khva += page_offset;
+- else
+- ret = -EFAULT;
+- }
+-
+- map_done:
+- write_lock_irq(&gpc->lock);
+- if (ret) {
+- gpc->valid = false;
+- gpc->pfn = KVM_PFN_ERR_FAULT;
+- gpc->khva = NULL;
+- } else {
+- /* At this point, gpc->valid may already have been cleared */
+- gpc->pfn = new_pfn;
+- gpc->khva = new_khva;
+- }
++ if (!gpc->valid || old_uhva != gpc->uhva) {
++ ret = hva_to_pfn_retry(kvm, gpc);
+ } else {
+ /* If the HVA→PFN mapping was already valid, don't unmap it. */
+ old_pfn = KVM_PFN_ERR_FAULT;
+@@ -242,9 +289,26 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ }
+
+ out:
++ /*
++ * Invalidate the cache and purge the pfn/khva if the refresh failed.
++ * Some/all of the uhva, gpa, and memslot generation info may still be
++ * valid, leave it as is.
++ */
++ if (ret) {
++ gpc->valid = false;
++ gpc->pfn = KVM_PFN_ERR_FAULT;
++ gpc->khva = NULL;
++ }
++
++ /* Snapshot the new pfn before dropping the lock! */
++ new_pfn = gpc->pfn;
++
+ write_unlock_irq(&gpc->lock);
+
+- __release_gpc(kvm, old_pfn, old_khva, old_gpa);
++ mutex_unlock(&gpc->refresh_lock);
++
++ if (old_pfn != new_pfn)
++ gpc_release_pfn_and_khva(kvm, old_pfn, old_khva);
+
+ return ret;
+ }
+@@ -254,14 +318,13 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+ {
+ void *old_khva;
+ kvm_pfn_t old_pfn;
+- gpa_t old_gpa;
+
++ mutex_lock(&gpc->refresh_lock);
+ write_lock_irq(&gpc->lock);
+
+ gpc->valid = false;
+
+ old_khva = gpc->khva - offset_in_page(gpc->khva);
+- old_gpa = gpc->gpa;
+ old_pfn = gpc->pfn;
+
+ /*
+@@ -272,8 +335,9 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+ gpc->pfn = KVM_PFN_ERR_FAULT;
+
+ write_unlock_irq(&gpc->lock);
++ mutex_unlock(&gpc->refresh_lock);
+
+- __release_gpc(kvm, old_pfn, old_khva, old_gpa);
++ gpc_release_pfn_and_khva(kvm, old_pfn, old_khva);
+ }
+ EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+
+@@ -286,6 +350,7 @@ int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+
+ if (!gpc->active) {
+ rwlock_init(&gpc->lock);
++ mutex_init(&gpc->refresh_lock);
+
+ gpc->khva = NULL;
+ gpc->pfn = KVM_PFN_ERR_FAULT;
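The rewritten hva_to_pfn_retry() above is an instance of a generation-counter retry loop: sample a sequence number, do the sleepable lookup with the lock dropped, and retry if an invalidation ran in between. A hedged, userspace-flavoured C11 sketch of that shape (all names and the trivial translate function are illustrative; the kernel pairs this with smp_rmb()/smp_wmb() and mn_active_invalidate_count):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong invalidate_seq;     /* bumped after each invalidation */
static atomic_int active_invalidations; /* nonzero while one is running */

/* Stand-in for the sleepable hva->pfn lookup done with locks dropped. */
static unsigned long slow_translate(unsigned long addr)
{
	return addr >> 12;
}

static bool retry_needed(unsigned long seen_seq)
{
	/* An in-flight invalidation forces a retry even before the
	 * sequence number is bumped.
	 */
	if (atomic_load(&active_invalidations))
		return true;
	return atomic_load(&invalidate_seq) != seen_seq;
}

static unsigned long translate_retry(unsigned long addr)
{
	unsigned long seq, pfn;

	do {
		seq = atomic_load(&invalidate_seq);
		pfn = slow_translate(addr); /* lock would be dropped here */
	} while (retry_needed(seq));

	return pfn;
}

int main(void)
{
	printf("pfn = %lu\n", translate_retry(0x5000));
	return 0;
}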
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-19 13:32 Mike Pagano
From: Mike Pagano @ 2022-08-19 13:32 UTC
To: gentoo-commits
commit: bd3abb7ea65a7c0c7a1c12f1dc536c62f65f6840
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 19 13:16:00 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 19 13:16:00 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd3abb7e
Fixes for BMQ, thanks to TK-Glitch
Source: https://github.com/Frogging-Family/linux-tkg
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +-
...Q-and-PDS-io-scheduler-v5.19-r0-linux-tkg.patch | 318 +++++++++++++++++++++
2 files changed, 320 insertions(+), 2 deletions(-)
diff --git a/0000_README b/0000_README
index 8f7da639..d4f51c59 100644
--- a/0000_README
+++ b/0000_README
@@ -87,8 +87,8 @@ Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-Patch: 5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
-From: https://gitlab.com/alfredchen/linux-prjc
+Patch: 5020_BMQ-and-PDS-io-scheduler-v5.19-r0-linux-tkg.patch
+From: https://github.com/Frogging-Family/linux-tkg
Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v5.19-r0-linux-tkg.patch
similarity index 96%
rename from 5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v5.19-r0-linux-tkg.patch
index 610cfe83..25c71a6c 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v5.19-r0.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.19-r0-linux-tkg.patch
@@ -9954,3 +9954,321 @@ index a2d301f58ced..2ccdede8585c 100644
};
struct wakeup_test_data *x = data;
+From 3728c383c5031dce5ae0f5ea53fc47afba71270f Mon Sep 17 00:00:00 2001
+From: Juuso Alasuutari <juuso.alasuutari@gmail.com>
+Date: Sun, 14 Aug 2022 18:19:09 +0300
+Subject: [PATCH 01/10] sched/alt: [Sync] sched/core: Always flush pending
+ blk_plug
+
+---
+ kernel/sched/alt_core.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index 588c7b983e3ba..8a6aa5b7279d3 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -4663,8 +4663,12 @@ static inline void sched_submit_work(struct task_struct *tsk)
+ io_wq_worker_sleeping(tsk);
+ }
+
+- if (tsk_is_pi_blocked(tsk))
+- return;
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
+
+ /*
+ * If we are going to sleep and we have plugged IO queued,
+
+From 379df22366dfa47d021a6bfe149c10a02d39a59e Mon Sep 17 00:00:00 2001
+From: Juuso Alasuutari <juuso.alasuutari@gmail.com>
+Date: Sun, 14 Aug 2022 18:19:09 +0300
+Subject: [PATCH 02/10] sched/alt: [Sync] io_uring: move to separate directory
+
+---
+ kernel/sched/alt_core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index 8a6aa5b7279d3..200d12b0ba6a9 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -43,7 +43,7 @@
+
+ #include "pelt.h"
+
+-#include "../../fs/io-wq.h"
++#include "../../io_uring/io-wq.h"
+ #include "../smpboot.h"
+
+ /*
+
+From 289d4f9619656155c2d467f9ea9fa5258b4aacd0 Mon Sep 17 00:00:00 2001
+From: Juuso Alasuutari <juuso.alasuutari@gmail.com>
+Date: Sun, 14 Aug 2022 18:19:09 +0300
+Subject: [PATCH 03/10] sched/alt: [Sync] sched, cpuset: Fix dl_cpu_busy()
+ panic due to empty cs->cpus_allowed
+
+---
+ kernel/sched/alt_core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index 200d12b0ba6a9..1aeb7a225d9bd 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -6737,7 +6737,7 @@ int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
+ }
+
+ int task_can_attach(struct task_struct *p,
+- const struct cpumask *cs_cpus_allowed)
++ const struct cpumask *cs_effective_cpus)
+ {
+ int ret = 0;
+
+
+From 95e712f92034119e23b4157aba72e8ffb2d74fed Mon Sep 17 00:00:00 2001
+From: Tor Vic <torvic9@mailbox.org>
+Date: Wed, 17 Aug 2022 21:44:18 +0200
+Subject: [PATCH 05/10] sched/alt: Transpose the sched_rq_watermark array
+
+This is not my work.
+All credit goes to Torge Matthies, as in the link below.
+
+Link: https://gitlab.com/alfredchen/linux-prjc/-/merge_requests/11
+---
+ kernel/sched/alt_core.c | 124 +++++++++++++++++++++++++++++++++-------
+ 1 file changed, 104 insertions(+), 20 deletions(-)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index cf71defb0e0be..7929b810ba74f 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -147,7 +147,87 @@ DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+ #ifdef CONFIG_SCHED_SMT
+ static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
+ #endif
+-static cpumask_t sched_rq_watermark[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
++
++#define BITS_PER_ATOMIC_LONG_T BITS_PER_LONG
++typedef struct sched_bitmask {
++ atomic_long_t bits[DIV_ROUND_UP(SCHED_QUEUE_BITS, BITS_PER_ATOMIC_LONG_T)];
++} sched_bitmask_t;
++static sched_bitmask_t sched_rq_watermark[NR_CPUS] ____cacheline_aligned_in_smp;
++
++#define x(p, set, mask) \
++ do { \
++ if (set) \
++ atomic_long_or((mask), (p)); \
++ else \
++ atomic_long_and(~(mask), (p)); \
++ } while (0)
++
++static __always_inline void sched_rq_watermark_fill_downwards(int cpu, unsigned int end,
++ unsigned int start, bool set)
++{
++ unsigned int start_idx, start_bit;
++ unsigned int end_idx, end_bit;
++ atomic_long_t *p;
++
++ if (end == start) {
++ return;
++ }
++
++ start_idx = start / BITS_PER_ATOMIC_LONG_T;
++ start_bit = start % BITS_PER_ATOMIC_LONG_T;
++ end_idx = (end - 1) / BITS_PER_ATOMIC_LONG_T;
++ end_bit = (end - 1) % BITS_PER_ATOMIC_LONG_T;
++ p = &sched_rq_watermark[cpu].bits[end_idx];
++
++ if (end_idx == start_idx) {
++ x(p, set, (~0UL >> (BITS_PER_ATOMIC_LONG_T - 1 - end_bit)) & (~0UL << start_bit));
++ return;
++ }
++
++ if (end_bit != BITS_PER_ATOMIC_LONG_T - 1) {
++ x(p, set, (~0UL >> (BITS_PER_ATOMIC_LONG_T - 1 - end_bit)));
++ p -= 1;
++ end_idx -= 1;
++ }
++
++ while (end_idx != start_idx) {
++ atomic_long_set(p, set ? ~0UL : 0);
++ p -= 1;
++ end_idx -= 1;
++ }
++
++ x(p, set, ~0UL << start_bit);
++}
++
++#undef x
++
++static __always_inline bool sched_rq_watermark_and(cpumask_t *dstp, const cpumask_t *cpus, int prio, bool not)
++{
++ int cpu;
++ bool ret = false;
++ int idx = prio / BITS_PER_ATOMIC_LONG_T;
++ int bit = prio % BITS_PER_ATOMIC_LONG_T;
++
++ cpumask_clear(dstp);
++ for_each_cpu(cpu, cpus)
++ if (test_bit(bit, (long*)&sched_rq_watermark[cpu].bits[idx].counter) == !not) {
++ __cpumask_set_cpu(cpu, dstp);
++ ret = true;
++ }
++ return ret;
++}
++
++static __always_inline bool sched_rq_watermark_test(const cpumask_t *cpus, int prio, bool not)
++{
++ int cpu;
++ int idx = prio / BITS_PER_ATOMIC_LONG_T;
++ int bit = prio % BITS_PER_ATOMIC_LONG_T;
++
++ for_each_cpu(cpu, cpus)
++ if (test_bit(bit, (long*)&sched_rq_watermark[cpu].bits[idx].counter) == !not)
++ return true;
++ return false;
++}
+
+ /* sched_queue related functions */
+ static inline void sched_queue_init(struct sched_queue *q)
+@@ -176,7 +256,6 @@ static inline void update_sched_rq_watermark(struct rq *rq)
+ {
+ unsigned long watermark = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
+ unsigned long last_wm = rq->watermark;
+- unsigned long i;
+ int cpu;
+
+ if (watermark == last_wm)
+@@ -185,28 +264,25 @@ static inline void update_sched_rq_watermark(struct rq *rq)
+ rq->watermark = watermark;
+ cpu = cpu_of(rq);
+ if (watermark < last_wm) {
+- for (i = last_wm; i > watermark; i--)
+- cpumask_clear_cpu(cpu, sched_rq_watermark + SCHED_QUEUE_BITS - i);
++ sched_rq_watermark_fill_downwards(cpu, SCHED_QUEUE_BITS - watermark, SCHED_QUEUE_BITS - last_wm, false);
+ #ifdef CONFIG_SCHED_SMT
+ if (static_branch_likely(&sched_smt_present) &&
+- IDLE_TASK_SCHED_PRIO == last_wm)
++ unlikely(IDLE_TASK_SCHED_PRIO == last_wm))
+ cpumask_andnot(&sched_sg_idle_mask,
+ &sched_sg_idle_mask, cpu_smt_mask(cpu));
+ #endif
+ return;
+ }
+ /* last_wm < watermark */
+- for (i = watermark; i > last_wm; i--)
+- cpumask_set_cpu(cpu, sched_rq_watermark + SCHED_QUEUE_BITS - i);
++ sched_rq_watermark_fill_downwards(cpu, SCHED_QUEUE_BITS - last_wm, SCHED_QUEUE_BITS - watermark, true);
+ #ifdef CONFIG_SCHED_SMT
+ if (static_branch_likely(&sched_smt_present) &&
+- IDLE_TASK_SCHED_PRIO == watermark) {
+- cpumask_t tmp;
++ unlikely(IDLE_TASK_SCHED_PRIO == watermark)) {
++ const cpumask_t *smt_mask = cpu_smt_mask(cpu);
+
+- cpumask_and(&tmp, cpu_smt_mask(cpu), sched_rq_watermark);
+- if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
++ if (!sched_rq_watermark_test(smt_mask, 0, true))
+ cpumask_or(&sched_sg_idle_mask,
+- &sched_sg_idle_mask, cpu_smt_mask(cpu));
++ &sched_sg_idle_mask, smt_mask);
+ }
+ #endif
+ }
+@@ -1903,9 +1979,9 @@ static inline int select_task_rq(struct task_struct *p)
+ #ifdef CONFIG_SCHED_SMT
+ cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
+ #endif
+- cpumask_and(&tmp, &chk_mask, sched_rq_watermark) ||
+- cpumask_and(&tmp, &chk_mask,
+- sched_rq_watermark + SCHED_QUEUE_BITS - 1 - task_sched_prio(p)))
++ sched_rq_watermark_and(&tmp, &chk_mask, 0, false) ||
++ sched_rq_watermark_and(&tmp, &chk_mask,
++ SCHED_QUEUE_BITS - 1 - task_sched_prio(p), false))
+ return best_mask_cpu(task_cpu(p), &tmp);
+
+ return best_mask_cpu(task_cpu(p), &chk_mask);
+@@ -3977,7 +4053,7 @@ static inline void sg_balance(struct rq *rq)
+ * find potential cpus which can migrate the current running task
+ */
+ if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
+- cpumask_andnot(&chk, cpu_online_mask, sched_rq_watermark) &&
++ sched_rq_watermark_and(&chk, cpu_online_mask, 0, true) &&
+ cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
+ int i;
+
+@@ -4285,9 +4361,8 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
+ #ifdef ALT_SCHED_DEBUG
+ void alt_sched_debug(void)
+ {
+- printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++ printk(KERN_INFO "sched: pending: 0x%04lx, sg_idle: 0x%04lx\n",
+ sched_rq_pending_mask.bits[0],
+- sched_rq_watermark[0].bits[0],
+ sched_sg_idle_mask.bits[0]);
+ }
+ #else
+@@ -7285,8 +7360,17 @@ void __init sched_init(void)
+ wait_bit_init();
+
+ #ifdef CONFIG_SMP
+- for (i = 0; i < SCHED_QUEUE_BITS; i++)
+- cpumask_copy(sched_rq_watermark + i, cpu_present_mask);
++ for (i = 0; i < nr_cpu_ids; i++) {
++ long val = cpumask_test_cpu(i, cpu_present_mask) ? -1L : 0;
++ int j;
++ for (j = 0; j < DIV_ROUND_UP(SCHED_QUEUE_BITS, BITS_PER_ATOMIC_LONG_T); j++)
++ atomic_long_set(&sched_rq_watermark[i].bits[j], val);
++ }
++ for (i = nr_cpu_ids; i < NR_CPUS; i++) {
++ int j;
++ for (j = 0; j < DIV_ROUND_UP(SCHED_QUEUE_BITS, BITS_PER_ATOMIC_LONG_T); j++)
++ atomic_long_set(&sched_rq_watermark[i].bits[j], 0);
++ }
+ #endif
+
+ #ifdef CONFIG_CGROUP_SCHED
+
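The transposed-watermark patch above turns per-bit cpumask updates into word-at-a-time fills over an array of atomic longs. A hedged, non-atomic sketch of the index arithmetic (plain ORs/ANDs here for clarity; the patch uses atomic_long_or()/atomic_long_and()):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)

static void apply(unsigned long *word, unsigned long mask, bool set)
{
	if (set)
		*word |= mask;
	else
		*word &= ~mask;
}

/* Set or clear bits [start, end) across an array of unsigned longs. */
static void fill_bits(unsigned long *bits, unsigned int start,
		      unsigned int end, bool set)
{
	unsigned int start_idx, start_bit, end_idx, end_bit, i;

	if (end == start)
		return;

	start_idx = start / WORD_BITS;
	start_bit = start % WORD_BITS;
	end_idx = (end - 1) / WORD_BITS;
	end_bit = (end - 1) % WORD_BITS;

	if (end_idx == start_idx) {
		/* Whole range in one word: mask both ends at once. */
		apply(&bits[start_idx],
		      (~0UL >> (WORD_BITS - 1 - end_bit)) & (~0UL << start_bit),
		      set);
		return;
	}

	/* Partial last word, full middle words, then partial first word. */
	apply(&bits[end_idx], ~0UL >> (WORD_BITS - 1 - end_bit), set);
	for (i = start_idx + 1; i < end_idx; i++)
		bits[i] = set ? ~0UL : 0UL;
	apply(&bits[start_idx], ~0UL << start_bit, set);
}

int main(void)
{
	unsigned long bits[2] = { 0, 0 };

	fill_bits(bits, 4, 70, true); /* bits 4..69 span both words */
	printf("%#lx %#lx\n", bits[1], bits[0]);
	return 0;
}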
+From 5b3b4b3d14c234196c807568905ee2e013565508 Mon Sep 17 00:00:00 2001
+From: Torge Matthies <openglfreak@googlemail.com>
+Date: Tue, 15 Mar 2022 23:08:54 +0100
+Subject: [PATCH 06/10] sched/alt: Add memory barriers around atomics.
+
+---
+ kernel/sched/alt_core.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+index 7929b810ba74f..b0cb6b772d5fa 100644
+--- a/kernel/sched/alt_core.c
++++ b/kernel/sched/alt_core.c
+@@ -156,10 +156,12 @@ static sched_bitmask_t sched_rq_watermark[NR_CPUS] ____cacheline_aligned_in_smp;
+
+ #define x(p, set, mask) \
+ do { \
++ smp_mb__before_atomic(); \
+ if (set) \
+ atomic_long_or((mask), (p)); \
+ else \
+ atomic_long_and(~(mask), (p)); \
++ smp_mb__after_atomic(); \
+ } while (0)
+
+ static __always_inline void sched_rq_watermark_fill_downwards(int cpu, unsigned int end,
+@@ -191,7 +193,9 @@ static __always_inline void sched_rq_watermark_fill_downwards(int cpu, unsigned
+ }
+
+ while (end_idx != start_idx) {
++ smp_mb__before_atomic();
+ atomic_long_set(p, set ? ~0UL : 0);
++ smp_mb__after_atomic();
+ p -= 1;
+ end_idx -= 1;
+ }
+
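The patch above brackets non-value-returning atomics with smp_mb__before_atomic()/smp_mb__after_atomic() to make them fully ordered. A hedged C11 analogue of that upgrade (only an approximation of the kernel's memory model):

#include <stdatomic.h>

static atomic_ulong word;

static void set_mask_fully_ordered(unsigned long mask)
{
	atomic_thread_fence(memory_order_seq_cst); /* ~smp_mb__before_atomic */
	atomic_fetch_or_explicit(&word, mask, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* ~smp_mb__after_atomic */
}

int main(void)
{
	set_mask_fully_ordered(1UL << 3);
	return atomic_load(&word) == (1UL << 3) ? 0 : 1;
}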
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-21 16:55 Mike Pagano
From: Mike Pagano @ 2022-08-21 16:55 UTC
To: gentoo-commits
commit: 1074023988dab5e355af917aa71cc3ced437c37c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 21 16:54:57 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 21 16:54:57 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=10740239
Linux patch 5.19.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-5.19.3.patch | 363 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 367 insertions(+)
diff --git a/0000_README b/0000_README
index d4f51c59..7a9bbb26 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-5.19.2.patch
From: http://www.kernel.org
Desc: Linux 5.19.2
+Patch: 1002_linux-5.19.3.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-5.19.3.patch b/1002_linux-5.19.3.patch
new file mode 100644
index 00000000..221b88b6
--- /dev/null
+++ b/1002_linux-5.19.3.patch
@@ -0,0 +1,363 @@
+diff --git a/Makefile b/Makefile
+index e2edc38ce52c1..8595916561f3f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/kernel/kexec_image.c b/arch/arm64/kernel/kexec_image.c
+index 9ec34690e2551..5ed6a585f21fd 100644
+--- a/arch/arm64/kernel/kexec_image.c
++++ b/arch/arm64/kernel/kexec_image.c
+@@ -14,7 +14,6 @@
+ #include <linux/kexec.h>
+ #include <linux/pe.h>
+ #include <linux/string.h>
+-#include <linux/verification.h>
+ #include <asm/byteorder.h>
+ #include <asm/cpufeature.h>
+ #include <asm/image.h>
+@@ -130,18 +129,10 @@ static void *image_load(struct kimage *image,
+ return NULL;
+ }
+
+-#ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
+-static int image_verify_sig(const char *kernel, unsigned long kernel_len)
+-{
+- return verify_pefile_signature(kernel, kernel_len, NULL,
+- VERIFYING_KEXEC_PE_SIGNATURE);
+-}
+-#endif
+-
+ const struct kexec_file_ops kexec_image_ops = {
+ .probe = image_probe,
+ .load = image_load,
+ #ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
+- .verify_sig = image_verify_sig,
++ .verify_sig = kexec_kernel_verify_pe_sig,
+ #endif
+ };
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 170d0fd68b1f4..f299b48f9c9f0 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -17,7 +17,6 @@
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/efi.h>
+-#include <linux/verification.h>
+
+ #include <asm/bootparam.h>
+ #include <asm/setup.h>
+@@ -528,28 +527,11 @@ static int bzImage64_cleanup(void *loader_data)
+ return 0;
+ }
+
+-#ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
+-static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
+-{
+- int ret;
+-
+- ret = verify_pefile_signature(kernel, kernel_len,
+- VERIFY_USE_SECONDARY_KEYRING,
+- VERIFYING_KEXEC_PE_SIGNATURE);
+- if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING)) {
+- ret = verify_pefile_signature(kernel, kernel_len,
+- VERIFY_USE_PLATFORM_KEYRING,
+- VERIFYING_KEXEC_PE_SIGNATURE);
+- }
+- return ret;
+-}
+-#endif
+-
+ const struct kexec_file_ops kexec_bzImage64_ops = {
+ .probe = bzImage64_probe,
+ .load = bzImage64_load,
+ .cleanup = bzImage64_cleanup,
+ #ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
+- .verify_sig = bzImage64_verify_sig,
++ .verify_sig = kexec_kernel_verify_pe_sig,
+ #endif
+ };
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index f2b1bcefcadd7..1175f3a46859f 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -326,6 +326,9 @@ struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
+ void *ret;
+ int id;
+
++ if (!access_ok((void __user *)addr, length))
++ return ERR_PTR(-EFAULT);
++
+ mutex_lock(&teedev->mutex);
+ id = idr_alloc(&teedev->idr, NULL, 1, 0, GFP_KERNEL);
+ mutex_unlock(&teedev->mutex);
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index 13e0bb0479e63..93975e3d50705 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -403,6 +403,9 @@ static void merge_rbio(struct btrfs_raid_bio *dest,
+ {
+ bio_list_merge(&dest->bio_list, &victim->bio_list);
+ dest->bio_list_bytes += victim->bio_list_bytes;
++ /* Also inherit the bitmaps from @victim. */
++ bitmap_or(dest->dbitmap, victim->dbitmap, dest->dbitmap,
++ dest->stripe_nsectors);
+ dest->generic_bio_cnt += victim->generic_bio_cnt;
+ bio_list_init(&victim->bio_list);
+ }
+@@ -944,6 +947,12 @@ static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err)
+
+ if (rbio->generic_bio_cnt)
+ btrfs_bio_counter_sub(rbio->bioc->fs_info, rbio->generic_bio_cnt);
++ /*
++ * Clear the data bitmap, as the rbio may be cached for later usage.
++ * Do this before unlock_stripe() so there will be no new bio
++ * for this bio.
++ */
++ bitmap_clear(rbio->dbitmap, 0, rbio->stripe_nsectors);
+
+ /*
+ * At this moment, rbio->bio_list is empty, however since rbio does not
+@@ -1294,6 +1303,9 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ else
+ BUG();
+
++ /* We should have at least one data sector. */
++ ASSERT(bitmap_weight(rbio->dbitmap, rbio->stripe_nsectors));
++
+ /* at this point we either have a full stripe,
+ * or we've read the full stripe from the drive.
+ * recalculate the parity and write the new results.
+@@ -1368,6 +1380,10 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) {
+ struct sector_ptr *sector;
+
++ /* This vertical stripe has no data, skip it. */
++ if (!test_bit(sectornr, rbio->dbitmap))
++ continue;
++
+ if (stripe < rbio->nr_data) {
+ sector = sector_in_rbio(rbio, stripe, sectornr, 1);
+ if (!sector)
+@@ -1394,6 +1410,10 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
+ for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) {
+ struct sector_ptr *sector;
+
++ /* This vertical stripe has no data, skip it. */
++ if (!test_bit(sectornr, rbio->dbitmap))
++ continue;
++
+ if (stripe < rbio->nr_data) {
+ sector = sector_in_rbio(rbio, stripe, sectornr, 1);
+ if (!sector)
+@@ -1845,6 +1865,33 @@ static void btrfs_raid_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ run_plug(plug);
+ }
+
++/* Add the original bio into rbio->bio_list, and update rbio::dbitmap. */
++static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio)
++{
++ const struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;
++ const u64 orig_logical = orig_bio->bi_iter.bi_sector << SECTOR_SHIFT;
++ const u64 full_stripe_start = rbio->bioc->raid_map[0];
++ const u32 orig_len = orig_bio->bi_iter.bi_size;
++ const u32 sectorsize = fs_info->sectorsize;
++ u64 cur_logical;
++
++ ASSERT(orig_logical >= full_stripe_start &&
++ orig_logical + orig_len <= full_stripe_start +
++ rbio->nr_data * rbio->stripe_len);
++
++ bio_list_add(&rbio->bio_list, orig_bio);
++ rbio->bio_list_bytes += orig_bio->bi_iter.bi_size;
++
++ /* Update the dbitmap. */
++ for (cur_logical = orig_logical; cur_logical < orig_logical + orig_len;
++ cur_logical += sectorsize) {
++ int bit = ((u32)(cur_logical - full_stripe_start) >>
++ fs_info->sectorsize_bits) % rbio->stripe_nsectors;
++
++ set_bit(bit, rbio->dbitmap);
++ }
++}
++
+ /*
+ * our main entry point for writes from the rest of the FS.
+ */
+@@ -1861,9 +1908,8 @@ int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc, u32 stri
+ btrfs_put_bioc(bioc);
+ return PTR_ERR(rbio);
+ }
+- bio_list_add(&rbio->bio_list, bio);
+- rbio->bio_list_bytes = bio->bi_iter.bi_size;
+ rbio->operation = BTRFS_RBIO_WRITE;
++ rbio_add_bio(rbio, bio);
+
+ btrfs_bio_counter_inc_noblocked(fs_info);
+ rbio->generic_bio_cnt = 1;
+@@ -2172,9 +2218,12 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
+ atomic_set(&rbio->error, 0);
+
+ /*
+- * read everything that hasn't failed. Thanks to the
+- * stripe cache, it is possible that some or all of these
+- * pages are going to be uptodate.
++ * Read everything that hasn't failed. However this time we will
++ * not trust any cached sector.
++ * As we may read out stale data that the higher layer is not going
++ * to re-read.
++ *
++ * So here we always re-read everything in the recovery path.
+ */
+ for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
+ if (rbio->faila == stripe || rbio->failb == stripe) {
+@@ -2185,13 +2234,7 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
+ for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) {
+ struct sector_ptr *sector;
+
+- /*
+- * the rmw code may have already read this
+- * page in
+- */
+ sector = rbio_stripe_sector(rbio, stripe, sectornr);
+- if (sector->uptodate)
+- continue;
+
+ ret = rbio_add_io_sector(rbio, &bio_list, sector,
+ stripe, sectornr, rbio->stripe_len,
+@@ -2268,8 +2311,7 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc,
+ }
+
+ rbio->operation = BTRFS_RBIO_READ_REBUILD;
+- bio_list_add(&rbio->bio_list, bio);
+- rbio->bio_list_bytes = bio->bi_iter.bi_size;
++ rbio_add_bio(rbio, bio);
+
+ rbio->faila = find_logical_bio_stripe(rbio, bio);
+ if (rbio->faila == -1) {
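The rbio_add_bio() helper added above maps each sector-sized slice of a bio to a bit in the vertical-stripe bitmap. A hedged arithmetic walk-through with illustrative geometry (4K sectors, 16 sectors per stripe; not real on-disk values):

#include <stdio.h>

int main(void)
{
	const unsigned long full_stripe_start = 0x100000; /* raid_map[0] */
	const unsigned int sectorsize_bits = 12;          /* 4K sectors */
	const unsigned int stripe_nsectors = 16;
	unsigned long cur, orig = 0x108000, len = 0x3000; /* three sectors */

	for (cur = orig; cur < orig + len; cur += 1UL << sectorsize_bits) {
		int bit = (unsigned int)((cur - full_stripe_start) >>
			  sectorsize_bits) % stripe_nsectors;

		printf("logical %#lx -> dbitmap bit %d\n", cur, bit);
	}
	return 0;
}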
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 6e7510f393680..bf24e7fce1fca 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -19,6 +19,7 @@
+ #include <asm/io.h>
+
+ #include <uapi/linux/kexec.h>
++#include <linux/verification.h>
+
+ /* Location of a reserved region to hold the crash kernel.
+ */
+@@ -212,6 +213,12 @@ static inline void *arch_kexec_kernel_image_load(struct kimage *image)
+ }
+ #endif
+
++#ifdef CONFIG_KEXEC_SIG
++#ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
++int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len);
++#endif
++#endif
++
+ extern int kexec_add_buffer(struct kexec_buf *kbuf);
+ int kexec_locate_mem_hole(struct kexec_buf *kbuf);
+
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 6dc1294c90fcf..a7b411c22f19c 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -123,6 +123,23 @@ void kimage_file_post_load_cleanup(struct kimage *image)
+ }
+
+ #ifdef CONFIG_KEXEC_SIG
++#ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
++int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len)
++{
++ int ret;
++
++ ret = verify_pefile_signature(kernel, kernel_len,
++ VERIFY_USE_SECONDARY_KEYRING,
++ VERIFYING_KEXEC_PE_SIGNATURE);
++ if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING)) {
++ ret = verify_pefile_signature(kernel, kernel_len,
++ VERIFY_USE_PLATFORM_KEYRING,
++ VERIFYING_KEXEC_PE_SIGNATURE);
++ }
++ return ret;
++}
++#endif
++
+ static int kexec_image_verify_sig(struct kimage *image, void *buf,
+ unsigned long buf_len)
+ {
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index 6aff49f6b79ec..4b5e5a3d3a638 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -603,6 +603,14 @@ static unsigned long kfence_init_pool(void)
+ addr += 2 * PAGE_SIZE;
+ }
+
++ /*
++ * The pool is live and will never be deallocated from this point on.
++ * Remove the pool object from the kmemleak object tree, as it would
++ * otherwise overlap with allocations returned by kfence_alloc(), which
++ * are registered with kmemleak through the slab post-alloc hook.
++ */
++ kmemleak_free(__kfence_pool);
++
+ return 0;
+ }
+
+@@ -615,16 +623,8 @@ static bool __init kfence_init_pool_early(void)
+
+ addr = kfence_init_pool();
+
+- if (!addr) {
+- /*
+- * The pool is live and will never be deallocated from this point on.
+- * Ignore the pool object from the kmemleak phys object tree, as it would
+- * otherwise overlap with allocations returned by kfence_alloc(), which
+- * are registered with kmemleak through the slab post-alloc hook.
+- */
+- kmemleak_ignore_phys(__pa(__kfence_pool));
++ if (!addr)
+ return true;
+- }
+
+ /*
+ * Only release unprotected pages, and do not try to go back and change
+diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
+index 3f935cbbaff66..48712bc51bda7 100644
+--- a/net/sched/cls_route.c
++++ b/net/sched/cls_route.c
+@@ -424,6 +424,11 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
+ return -EINVAL;
+ }
+
++ if (!nhandle) {
++ NL_SET_ERR_MSG(extack, "Replacing with handle of 0 is invalid");
++ return -EINVAL;
++ }
++
+ h1 = to_hash(nhandle);
+ b = rtnl_dereference(head->table[h1]);
+ if (!b) {
+@@ -477,6 +482,11 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
+ int err;
+ bool new = true;
+
++ if (!handle) {
++ NL_SET_ERR_MSG(extack, "Creating with handle of 0 is invalid");
++ return -EINVAL;
++ }
++
+ if (opt == NULL)
+ return handle ? -EINVAL : 0;
+
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-25 10:31 Mike Pagano
From: Mike Pagano @ 2022-08-25 10:31 UTC
To: gentoo-commits
commit: fbe75912097fd2d987df503261ed461b94514678
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 25 10:31:43 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 25 10:31:43 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fbe75912
Linux patch 5.19.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-5.19.4.patch | 17636 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17640 insertions(+)
diff --git a/0000_README b/0000_README
index 7a9bbb26..920f6ada 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.19.3.patch
From: http://www.kernel.org
Desc: Linux 5.19.3
+Patch: 1003_linux-5.19.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.19.4.patch b/1003_linux-5.19.4.patch
new file mode 100644
index 00000000..0df49d2c
--- /dev/null
+++ b/1003_linux-5.19.4.patch
@@ -0,0 +1,17636 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index ddccd10774623..9b7fa1baf225b 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -592,6 +592,18 @@ to the guest kernel command line (see
+ Documentation/admin-guide/kernel-parameters.rst).
+
+
++nmi_wd_lpm_factor (PPC only)
++============================
++
++Factor to apply to the NMI watchdog timeout (only when ``nmi_watchdog`` is
++set to 1). This factor represents the percentage added to
++``watchdog_thresh`` when calculating the NMI watchdog timeout during an
++LPM. The soft lockup timeout is not impacted.
++
++A value of 0 means no change. The default value is 200, meaning the NMI
++watchdog is set to 30s (based on ``watchdog_thresh`` equal to 10).
++
++
+ numa_balancing
+ ==============
+
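A quick numeric check of the nmi_wd_lpm_factor text above (the variable names mirror the documentation, not an exported kernel API):

#include <stdio.h>

int main(void)
{
	unsigned int watchdog_thresh = 10;    /* seconds, default */
	unsigned int nmi_wd_lpm_factor = 200; /* percent, default */
	unsigned int timeout;

	/* 200% added to 10s: 10 + 20 = 30 seconds during an LPM */
	timeout = watchdog_thresh +
		  watchdog_thresh * nmi_wd_lpm_factor / 100;
	printf("NMI watchdog timeout during LPM: %us\n", timeout);
	return 0;
}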
+diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
+index 093cdaefdb373..d8b101c97031b 100644
+--- a/Documentation/atomic_bitops.txt
++++ b/Documentation/atomic_bitops.txt
+@@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
+ - RMW operations that have a return value are fully ordered.
+
+ - RMW operations that are conditional are unordered on FAILURE,
+- otherwise the above rules apply. In the case of test_and_{}_bit() operations,
++ otherwise the above rules apply. In the case of test_and_set_bit_lock(),
+ if the bit in memory is unchanged by the operation then it is deemed to have
+ failed.
+
+diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml b/Documentation/devicetree/bindings/arm/qcom.yaml
+index 5c06d1bfc046c..88759f8a3049f 100644
+--- a/Documentation/devicetree/bindings/arm/qcom.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom.yaml
+@@ -153,28 +153,34 @@ properties:
+ - const: qcom,msm8974
+
+ - items:
+- - enum:
+- - alcatel,idol347
+- - const: qcom,msm8916-mtp/1
+ - const: qcom,msm8916-mtp
++ - const: qcom,msm8916-mtp/1
+ - const: qcom,msm8916
+
+ - items:
+ - enum:
+- - longcheer,l8150
++ - alcatel,idol347
+ - samsung,a3u-eur
+ - samsung,a5u-eur
+ - const: qcom,msm8916
+
++ - items:
++ - const: longcheer,l8150
++ - const: qcom,msm8916-v1-qrd/9-v1
++ - const: qcom,msm8916
++
+ - items:
+ - enum:
+ - sony,karin_windy
++ - const: qcom,apq8094
++
++ - items:
++ - enum:
+ - sony,karin-row
+ - sony,satsuki-row
+ - sony,sumire-row
+ - sony,suzuran-row
+- - qcom,msm8994
+- - const: qcom,apq8094
++ - const: qcom,msm8994
+
+ - items:
+ - enum:
+diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+index 5a5b2214f0cae..005e0edd4609a 100644
+--- a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
++++ b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+@@ -22,16 +22,32 @@ properties:
+ const: qcom,gcc-msm8996
+
+ clocks:
++ minItems: 3
+ items:
+ - description: XO source
+ - description: Second XO source
+ - description: Sleep clock source
++ - description: PCIe 0 PIPE clock (optional)
++ - description: PCIe 1 PIPE clock (optional)
++ - description: PCIe 2 PIPE clock (optional)
++ - description: USB3 PIPE clock (optional)
++ - description: UFS RX symbol 0 clock (optional)
++ - description: UFS RX symbol 1 clock (optional)
++ - description: UFS TX symbol 0 clock (optional)
+
+ clock-names:
++ minItems: 3
+ items:
+ - const: cxo
+ - const: cxo2
+ - const: sleep_clk
++ - const: pcie_0_pipe_clk_src
++ - const: pcie_1_pipe_clk_src
++ - const: pcie_2_pipe_clk_src
++ - const: usb3_phy_pipe_clk_src
++ - const: ufs_rx_symbol_0_clk_src
++ - const: ufs_rx_symbol_1_clk_src
++ - const: ufs_tx_symbol_0_clk_src
+
+ '#clock-cells':
+ const: 1
+diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
+index 4a92a4c7dcd70..f8168986a0a9e 100644
+--- a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
++++ b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
+@@ -233,6 +233,7 @@ allOf:
+ - allwinner,sun8i-a83t-tcon-lcd
+ - allwinner,sun8i-v3s-tcon
+ - allwinner,sun9i-a80-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ properties:
+@@ -252,6 +253,7 @@ allOf:
+ - allwinner,sun8i-a83t-tcon-tv
+ - allwinner,sun8i-r40-tcon-tv
+ - allwinner,sun9i-a80-tcon-tv
++ - allwinner,sun20i-d1-tcon-tv
+
+ then:
+ properties:
+@@ -278,6 +280,7 @@ allOf:
+ - allwinner,sun9i-a80-tcon-lcd
+ - allwinner,sun4i-a10-tcon
+ - allwinner,sun8i-a83t-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ required:
+@@ -294,6 +297,7 @@ allOf:
+ - allwinner,sun8i-a23-tcon
+ - allwinner,sun8i-a33-tcon
+ - allwinner,sun8i-a83t-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ properties:
+diff --git a/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml b/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
+index 378da2649e668..980f92ad9eba2 100644
+--- a/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
++++ b/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
+@@ -11,7 +11,11 @@ maintainers:
+
+ properties:
+ compatible:
+- const: xlnx,zynq-gpio-1.0
++ enum:
++ - xlnx,zynq-gpio-1.0
++ - xlnx,zynqmp-gpio-1.0
++ - xlnx,versal-gpio-1.0
++ - xlnx,pmc-gpio-1.0
+
+ reg:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+index a3a1e5a65306b..32d0d51903342 100644
+--- a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
++++ b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+@@ -37,10 +37,6 @@ properties:
+ device is temporarily held in hardware reset prior to initialization if
+ this property is present.
+
+- azoteq,rf-filt-enable:
+- type: boolean
+- description: Enables the device's internal RF filter.
+-
+ azoteq,max-counts:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ enum: [0, 1, 2, 3]
+@@ -537,9 +533,8 @@ patternProperties:
+
+ azoteq,bottom-speed:
+ $ref: /schemas/types.yaml#/definitions/uint32
+- multipleOf: 4
+ minimum: 0
+- maximum: 1020
++ maximum: 255
+ description:
+ Specifies the speed of movement after which coordinate filtering is
+ linearly reduced.
+@@ -616,16 +611,15 @@ patternProperties:
+ azoteq,gpio-select:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+- maxItems: 1
++ maxItems: 3
+ items:
+ minimum: 0
+- maximum: 0
++ maximum: 2
+ description: |
+- Specifies an individual GPIO mapped to a tap, swipe or flick
+- gesture as follows:
++ Specifies one or more GPIO mapped to the event as follows:
+ 0: GPIO0
+- 1: GPIO3 (reserved)
+- 2: GPIO4 (reserved)
++ 1: GPIO3 (IQS7222C only)
++ 2: GPIO4 (IQS7222C only)
+
+ Note that although multiple events can be mapped to a single
+ GPIO, they must all be of the same type (proximity, touch or
+@@ -710,6 +704,14 @@ allOf:
+ multipleOf: 4
+ maximum: 1020
+
++ patternProperties:
++ "^event-(press|tap|(swipe|flick)-(pos|neg))$":
++ properties:
++ azoteq,gpio-select:
++ maxItems: 1
++ items:
++ maximum: 0
++
+ else:
+ patternProperties:
+ "^channel-([0-9]|1[0-9])$":
+@@ -726,8 +728,6 @@ allOf:
+
+ azoteq,gesture-dist: false
+
+- azoteq,gpio-select: false
+-
+ required:
+ - compatible
+ - reg
+diff --git a/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml b/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
+index 30f7b596d609b..59663e897dae9 100644
+--- a/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
++++ b/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
+@@ -98,6 +98,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 0>;
+ operating-points-v2 = <&cluster0_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_0>;
+ L2_0: l2-cache {
+@@ -115,6 +117,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 0>;
+ operating-points-v2 = <&cluster0_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_0>;
+ };
+@@ -128,6 +132,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 1>;
+ operating-points-v2 = <&cluster1_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_1>;
+ L2_1: l2-cache {
+@@ -145,6 +151,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 1>;
+ operating-points-v2 = <&cluster1_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_1>;
+ };
+@@ -182,18 +190,21 @@ examples:
+ opp-microvolt = <905000 905000 1140000>;
+ opp-supported-hw = <0x7>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp1>;
+ };
+ opp-1401600000 {
+ opp-hz = /bits/ 64 <1401600000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x5>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp2>;
+ };
+ opp-1593600000 {
+ opp-hz = /bits/ 64 <1593600000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x1>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp3>;
+ };
+ };
+
+@@ -207,24 +218,28 @@ examples:
+ opp-microvolt = <905000 905000 1140000>;
+ opp-supported-hw = <0x7>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp1>;
+ };
+ opp-1804800000 {
+ opp-hz = /bits/ 64 <1804800000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x6>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp4>;
+ };
+ opp-1900800000 {
+ opp-hz = /bits/ 64 <1900800000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x4>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp5>;
+ };
+ opp-2150400000 {
+ opp-hz = /bits/ 64 <2150400000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x1>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp6>;
+ };
+ };
+
+diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+index 0b69b12b849ee..9b3ebee938e88 100644
+--- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+@@ -614,7 +614,7 @@ allOf:
+ - if:
+ not:
+ properties:
+- compatibles:
++ compatible:
+ contains:
+ enum:
+ - qcom,pcie-msm8996
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
+index 8a2bb86082910..dbc41d1c4c7dc 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
+@@ -105,31 +105,8 @@ patternProperties:
+ drive-strength:
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
+
+ bias-pull-down:
+ oneOf:
+@@ -291,7 +268,7 @@ examples:
+ pinmux = <PINMUX_GPIO127__FUNC_SCL0>,
+ <PINMUX_GPIO128__FUNC_SDA0>;
+ bias-pull-up = <MTK_PULL_SET_RSEL_001>;
+- mediatek,drive-strength-adv = <7>;
++ drive-strength-microamp = <1000>;
+ };
+ };
+ };
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
+index c90a132fbc790..e39f5893bf16d 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
+@@ -80,46 +80,24 @@ patternProperties:
+ dt-bindings/pinctrl/mt65xx.h. It can only support 2/4/6/8/10/12/14/16mA in mt8192.
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
+-
+- mediatek,pull-up-adv:
+- description: |
+- Pull up settings for 2 pull resistors, R0 and R1. User can
+- configure those special pins. Valid arguments are described as below:
+- 0: (R1, R0) = (0, 0) which means R1 disabled and R0 disabled.
+- 1: (R1, R0) = (0, 1) which means R1 disabled and R0 enabled.
+- 2: (R1, R0) = (1, 0) which means R1 enabled and R0 disabled.
+- 3: (R1, R0) = (1, 1) which means R1 enabled and R0 enabled.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3]
+-
+- bias-pull-down: true
+-
+- bias-pull-up: true
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
++
++ bias-pull-down:
++ oneOf:
++ - type: boolean
++ description: normal pull down.
++ - enum: [100, 101, 102, 103]
++ description: PUPD/R1/R0 pull down type. See MTK_PUPD_SET_R1R0_
++ defines in dt-bindings/pinctrl/mt65xx.h.
++
++ bias-pull-up:
++ oneOf:
++ - type: boolean
++ description: normal pull up.
++ - enum: [100, 101, 102, 103]
++ description: PUPD/R1/R0 pull up type. See MTK_PUPD_SET_R1R0_
++ defines in dt-bindings/pinctrl/mt65xx.h.
+
+ bias-disable: true
+
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+index c5b755514c46f..3d8afb3d5695b 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+@@ -49,7 +49,7 @@ properties:
+ description: The interrupt outputs to sysirq.
+ maxItems: 1
+
+- mediatek,rsel_resistance_in_si_unit:
++ mediatek,rsel-resistance-in-si-unit:
+ type: boolean
+ description: |
+ Identifying i2c pins pull up/down type which is RSEL. It can support
+@@ -98,31 +98,8 @@ patternProperties:
+ drive-strength:
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
+
+ bias-pull-down:
+ oneOf:
+@@ -142,7 +119,7 @@ patternProperties:
+ "MTK_PUPD_SET_R1R0_11" define in mt8195.
+ For pull down type is RSEL, it can add RSEL define & resistance
+ value(ohm) to set different resistance by identifying property
+- "mediatek,rsel_resistance_in_si_unit".
++ "mediatek,rsel-resistance-in-si-unit".
+ It can support "MTK_PULL_SET_RSEL_000" & "MTK_PULL_SET_RSEL_001"
+ & "MTK_PULL_SET_RSEL_010" & "MTK_PULL_SET_RSEL_011"
+ & "MTK_PULL_SET_RSEL_100" & "MTK_PULL_SET_RSEL_101"
+@@ -161,7 +138,7 @@ patternProperties:
+ };
+ An example of using si unit resistance value(ohm):
+ &pio {
+- mediatek,rsel_resistance_in_si_unit;
++ mediatek,rsel-resistance-in-si-unit;
+ }
+ pincontroller {
+ i2c0_pin {
+@@ -190,7 +167,7 @@ patternProperties:
+ "MTK_PUPD_SET_R1R0_11" define in mt8195.
+ For pull up type is RSEL, it can add RSEL define & resistance
+ value(ohm) to set different resistance by identifying property
+- "mediatek,rsel_resistance_in_si_unit".
++ "mediatek,rsel-resistance-in-si-unit".
+ It can support "MTK_PULL_SET_RSEL_000" & "MTK_PULL_SET_RSEL_001"
+ & "MTK_PULL_SET_RSEL_010" & "MTK_PULL_SET_RSEL_011"
+ & "MTK_PULL_SET_RSEL_100" & "MTK_PULL_SET_RSEL_101"
+@@ -209,7 +186,7 @@ patternProperties:
+ };
+ An example of using si unit resistance value(ohm):
+ &pio {
+- mediatek,rsel_resistance_in_si_unit;
++ mediatek,rsel-resistance-in-si-unit;
+ }
+ pincontroller {
+ i2c0-pins {
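
The three hunks above rename mediatek,rsel_resistance_in_si_unit to mediatek,rsel-resistance-in-si-unit. Devicetree property names conventionally use hyphens rather than underscores; fixing the binding now, presumably before the property gained in-tree users, avoids carrying a non-conforming name forever.
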
+diff --git a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+index b539781e39aa4..835b53302db80 100644
+--- a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+@@ -47,12 +47,6 @@ properties:
+ description:
+ Properties for single LDO regulator.
+
+- properties:
+- regulator-name:
+- pattern: "^LDO[1-5]$"
+- description:
+- should be "LDO1", ..., "LDO5"
+-
+ unevaluatedProperties: false
+
+ "^BUCK[1-6]$":
+@@ -62,11 +56,6 @@ properties:
+ Properties for single BUCK regulator.
+
+ properties:
+- regulator-name:
+- pattern: "^BUCK[1-6]$"
+- description:
+- should be "BUCK1", ..., "BUCK6"
+-
+ nxp,dvs-run-voltage:
+ $ref: "/schemas/types.yaml#/definitions/uint32"
+ minimum: 600000
+diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
+index 78ceb9d67754f..2e20ca313ec1d 100644
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
+@@ -45,12 +45,15 @@ properties:
+ - const: rx
+
+ interconnects:
+- maxItems: 2
++ minItems: 2
++ maxItems: 3
+
+ interconnect-names:
++ minItems: 2
+ items:
+ - const: qup-core
+ - const: qup-config
++ - const: qup-memory
+
+ interrupts:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/spi/spi-cadence.yaml b/Documentation/devicetree/bindings/spi/spi-cadence.yaml
+index 9787be21318e6..82d0ca5c00f3b 100644
+--- a/Documentation/devicetree/bindings/spi/spi-cadence.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-cadence.yaml
+@@ -49,6 +49,13 @@ properties:
+ enum: [ 0, 1 ]
+ default: 0
+
++required:
++ - compatible
++ - reg
++ - interrupts
++ - clock-names
++ - clocks
++
+ unevaluatedProperties: false
+
+ examples:
+diff --git a/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml b/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
+index ea72c8001256f..fafde1c06be67 100644
+--- a/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
+@@ -30,6 +30,13 @@ properties:
+ clocks:
+ maxItems: 2
+
++required:
++ - compatible
++ - reg
++ - interrupts
++ - clock-names
++ - clocks
++
+ unevaluatedProperties: false
+
+ examples:
+diff --git a/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml b/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
+index 084d7135b2d9f..5603359e7b29e 100644
+--- a/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
++++ b/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
+@@ -57,6 +57,7 @@ properties:
+ - description: optional, wakeup interrupt used to support runtime PM
+
+ interrupt-names:
++ minItems: 1
+ items:
+ - const: host
+ - const: wakeup
+diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
+index 55e2331a64380..d6b61d22f5258 100644
+--- a/Documentation/firmware-guide/acpi/apei/einj.rst
++++ b/Documentation/firmware-guide/acpi/apei/einj.rst
+@@ -168,7 +168,7 @@ An error injection example::
+ 0x00000008 Memory Correctable
+ 0x00000010 Memory Uncorrectable non-fatal
+ # echo 0x12345000 > param1 # Set memory address for injection
+- # echo $((-1 << 12)) > param2 # Mask 0xfffffffffffff000 - anywhere in this page
++ # echo 0xfffffffffffff000 > param2 # Mask - anywhere in this page
+ # echo 0x8 > error_type # Choose correctable memory error
+ # echo 1 > error_inject # Inject now
+
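
The einj.rst fix above trades shell arithmetic for a literal: left-shifting a negative value is undefined behaviour in C, and what $(( -1 << 12 )) yields depends on the shell's integer width, so the explicit 0xfffffffffffff000 is the only portable spelling. For reference, a standalone sketch (not part of the patch) deriving the same 4 KiB page mask without any negative shift:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        /* Everything above the 4 KiB page offset: ~(2^12 - 1). */
        uint64_t page_mask = ~(((uint64_t)1 << 12) - 1);

        printf("0x%016" PRIx64 "\n", page_mask);  /* 0xfffffffffffff000 */
        return 0;
}
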
+diff --git a/Makefile b/Makefile
+index 8595916561f3f..65dc4f93ffdbf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+@@ -1112,13 +1112,11 @@ vmlinux-alldirs := $(sort $(vmlinux-dirs) Documentation \
+ $(patsubst %/,%,$(filter %/, $(core-) \
+ $(drivers-) $(libs-))))
+
+-subdir-modorder := $(addsuffix modules.order,$(filter %/, \
+- $(core-y) $(core-m) $(libs-y) $(libs-m) \
+- $(drivers-y) $(drivers-m)))
+-
+ build-dirs := $(vmlinux-dirs)
+ clean-dirs := $(vmlinux-alldirs)
+
++subdir-modorder := $(addsuffix /modules.order, $(build-dirs))
++
+ # Externally visible symbols (used by link-vmlinux.sh)
+ KBUILD_VMLINUX_OBJS := $(head-y) $(patsubst %/,%/built-in.a, $(core-y))
+ KBUILD_VMLINUX_OBJS += $(addsuffix built-in.a, $(filter %/, $(libs-y)))
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index de32152cea048..7aaf755770962 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -838,6 +838,10 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+ (system_supports_mte() && \
+ test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
+
++#define kvm_supports_32bit_el0() \
++ (system_supports_32bit_el0() && \
++ !static_branch_unlikely(&arm64_mismatched_32bit_el0))
++
+ int kvm_trng_call(struct kvm_vcpu *vcpu);
+ #ifdef CONFIG_KVM
+ extern phys_addr_t hyp_mem_base;
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 83a7f61354d3f..e21b245741182 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -751,8 +751,7 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
+ if (likely(!vcpu_mode_is_32bit(vcpu)))
+ return false;
+
+- return !system_supports_32bit_el0() ||
+- static_branch_unlikely(&arm64_mismatched_32bit_el0);
++ return !kvm_supports_32bit_el0();
+ }
+
+ /**
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 8c607199cad14..f802a3b3f8dbc 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -242,7 +242,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ u64 mode = (*(u64 *)valp) & PSR_AA32_MODE_MASK;
+ switch (mode) {
+ case PSR_AA32_MODE_USR:
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ return -EINVAL;
+ break;
+ case PSR_AA32_MODE_FIQ:
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c06c0477fab52..be7edd21537f9 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -692,7 +692,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+ */
+ val = ((pmcr & ~ARMV8_PMU_PMCR_MASK)
+ | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ val |= ARMV8_PMU_PMCR_LC;
+ __vcpu_sys_reg(vcpu, r->reg) = val;
+ }
+@@ -741,7 +741,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+ val &= ~ARMV8_PMU_PMCR_MASK;
+ val |= p->regval & ARMV8_PMU_PMCR_MASK;
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ val |= ARMV8_PMU_PMCR_LC;
+ __vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+ kvm_pmu_handle_pmcr(vcpu, val);
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 34ba684d5962b..3c6e5c725d814 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -124,6 +124,10 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
+
+ void __kprobes arch_remove_kprobe(struct kprobe *p)
+ {
++ if (p->ainsn.api.insn) {
++ free_insn_slot(p->ainsn.api.insn, 0);
++ p->ainsn.api.insn = NULL;
++ }
+ }
+
+ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
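
The csky hunk above plugs a leak: the slot allocated for the out-of-line copy of the probed instruction was never returned when the probe was removed. Freeing it and clearing the pointer also makes teardown idempotent. A standalone sketch of the same free-and-NULL idiom (hypothetical types and names, not kernel code):

#include <stdlib.h>

struct probe {
        void *slot;        /* grabbed when the probe is armed */
};

/* Free once; repeated calls are harmless because the pointer is cleared. */
static void remove_probe(struct probe *p)
{
        if (p->slot) {
                free(p->slot);
                p->slot = NULL;
        }
}

int main(void)
{
        struct probe p = { .slot = malloc(16) };

        remove_probe(&p);
        remove_probe(&p);        /* no double free */
        return 0;
}
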
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 4218750414bbf..7dab46728aeda 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -581,7 +581,7 @@ static struct platform_device mcf_esdhc = {
+ };
+ #endif /* MCFSDHC_BASE */
+
+-#if IS_ENABLED(CONFIG_CAN_FLEXCAN)
++#ifdef MCFFLEXCAN_SIZE
+
+ #include <linux/can/platform/flexcan.h>
+
+@@ -620,7 +620,7 @@ static struct platform_device mcf_flexcan0 = {
+ .resource = mcf5441x_flexcan0_resource,
+ .dev.platform_data = &mcf5441x_flexcan_info,
+ };
+-#endif /* IS_ENABLED(CONFIG_CAN_FLEXCAN) */
++#endif /* MCFFLEXCAN_SIZE */
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+@@ -657,7 +657,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ #ifdef MCFSDHC_BASE
+ &mcf_esdhc,
+ #endif
+-#if IS_ENABLED(CONFIG_CAN_FLEXCAN)
++#ifdef MCFFLEXCAN_SIZE
+ &mcf_flexcan0,
+ #endif
+ };
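
Switching the guard from IS_ENABLED(CONFIG_CAN_FLEXCAN) to #ifdef MCFFLEXCAN_SIZE ties registration of the platform device to the SoC actually providing a FlexCAN block (the MCFFLEXCAN_* constants only exist on parts that have one) rather than to whether the driver is enabled; this matches the MCFSDHC_BASE guard used for the eSDHC device just above.
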
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index a994022e32c9f..ce05c0dd3acd7 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -86,11 +86,12 @@ static void octeon2_usb_clocks_start(struct device *dev)
+ "refclk-frequency", &clock_rate);
+ if (i) {
+ dev_err(dev, "No UCTL \"refclk-frequency\"\n");
++ of_node_put(uctl_node);
+ goto exit;
+ }
+ i = of_property_read_string(uctl_node,
+ "refclk-type", &clock_type);
+-
++ of_node_put(uctl_node);
+ if (!i && strcmp("crystal", clock_type) == 0)
+ is_crystal_clock = true;
+ }
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 8dbbd99fc7e85..be4d4670d6499 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -626,7 +626,7 @@ static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
+ return;
+ }
+
+- if (cpu_has_rixi && !!_PAGE_NO_EXEC) {
++ if (cpu_has_rixi && _PAGE_NO_EXEC != 0) {
+ if (fill_includes_sw_bits) {
+ UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
+ } else {
+@@ -2565,7 +2565,7 @@ static void check_pabits(void)
+ unsigned long entry;
+ unsigned pabits, fillbits;
+
+- if (!cpu_has_rixi || !_PAGE_NO_EXEC) {
++ if (!cpu_has_rixi || _PAGE_NO_EXEC == 0) {
+ /*
+ * We'll only be making use of the fact that we can rotate bits
+ * into the fill if the CPU supports RIXI, so don't bother
+diff --git a/arch/nios2/include/asm/entry.h b/arch/nios2/include/asm/entry.h
+index cf37f55efbc22..bafb7b2ca59fc 100644
+--- a/arch/nios2/include/asm/entry.h
++++ b/arch/nios2/include/asm/entry.h
+@@ -50,7 +50,8 @@
+ stw r13, PT_R13(sp)
+ stw r14, PT_R14(sp)
+ stw r15, PT_R15(sp)
+- stw r2, PT_ORIG_R2(sp)
++ movi r24, -1
++ stw r24, PT_ORIG_R2(sp)
+ stw r7, PT_ORIG_R7(sp)
+
+ stw ra, PT_RA(sp)
+diff --git a/arch/nios2/include/asm/ptrace.h b/arch/nios2/include/asm/ptrace.h
+index 6424621448728..9da34c3022a27 100644
+--- a/arch/nios2/include/asm/ptrace.h
++++ b/arch/nios2/include/asm/ptrace.h
+@@ -74,6 +74,8 @@ extern void show_regs(struct pt_regs *);
+ ((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
+ - 1)
+
++#define force_successful_syscall_return() (current_pt_regs()->orig_r2 = -1)
++
+ int do_syscall_trace_enter(void);
+ void do_syscall_trace_exit(void);
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/nios2/kernel/entry.S b/arch/nios2/kernel/entry.S
+index 0794cd7803dfe..99f0a65e62347 100644
+--- a/arch/nios2/kernel/entry.S
++++ b/arch/nios2/kernel/entry.S
+@@ -185,6 +185,7 @@ ENTRY(handle_system_call)
+ ldw r5, PT_R5(sp)
+
+ local_restart:
++ stw r2, PT_ORIG_R2(sp)
+ /* Check that the requested system call is within limits */
+ movui r1, __NR_syscalls
+ bgeu r2, r1, ret_invsyscall
+@@ -192,7 +193,6 @@ local_restart:
+ movhi r11, %hiadj(sys_call_table)
+ add r1, r1, r11
+ ldw r1, %lo(sys_call_table)(r1)
+- beq r1, r0, ret_invsyscall
+
+ /* Check if we are being traced */
+ GET_THREAD_INFO r11
+@@ -213,6 +213,9 @@ local_restart:
+ translate_rc_and_ret:
+ movi r1, 0
+ bge r2, zero, 3f
++ ldw r1, PT_ORIG_R2(sp)
++ addi r1, r1, 1
++ beq r1, zero, 3f
+ sub r2, zero, r2
+ movi r1, 1
+ 3:
+@@ -255,9 +258,9 @@ traced_system_call:
+ ldw r6, PT_R6(sp)
+ ldw r7, PT_R7(sp)
+
+- /* Fetch the syscall function, we don't need to check the boundaries
+- * since this is already done.
+- */
++ /* Fetch the syscall function. */
++ movui r1, __NR_syscalls
++ bgeu r2, r1, traced_invsyscall
+ slli r1, r2, 2
+ movhi r11,%hiadj(sys_call_table)
+ add r1, r1, r11
+@@ -276,6 +279,9 @@ traced_system_call:
+ translate_rc_and_ret2:
+ movi r1, 0
+ bge r2, zero, 4f
++ ldw r1, PT_ORIG_R2(sp)
++ addi r1, r1, 1
++ beq r1, zero, 4f
+ sub r2, zero, r2
+ movi r1, 1
+ 4:
+@@ -287,6 +293,11 @@ end_translate_rc_and_ret2:
+ RESTORE_SWITCH_STACK
+ br ret_from_exception
+
++ /* If the syscall number was invalid return ENOSYS */
++traced_invsyscall:
++ movi r2, -ENOSYS
++ br translate_rc_and_ret2
++
+ Luser_return:
+ GET_THREAD_INFO r11 /* get thread_info pointer */
+ ldw r10, TI_FLAGS(r11) /* get thread_info->flags */
+@@ -336,9 +347,6 @@ external_interrupt:
+ /* skip if no interrupt is pending */
+ beq r12, r0, ret_from_interrupt
+
+- movi r24, -1
+- stw r24, PT_ORIG_R2(sp)
+-
+ /*
+ * Process an external hardware interrupt.
+ */
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index cb0b91589cf20..a5b93a30c6eb2 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -242,7 +242,7 @@ static int do_signal(struct pt_regs *regs)
+ /*
+ * If we were from a system call, check for system call restarting...
+ */
+- if (regs->orig_r2 >= 0) {
++ if (regs->orig_r2 >= 0 && regs->r1) {
+ continue_addr = regs->ea;
+ restart_addr = continue_addr - 4;
+ retval = regs->r2;
+@@ -264,6 +264,7 @@ static int do_signal(struct pt_regs *regs)
+ regs->ea = restart_addr;
+ break;
+ }
++ regs->orig_r2 = -1;
+ }
+
+ if (get_signal(&ksig)) {
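
Read together, the nios2 hunks establish a sentinel convention for PT_ORIG_R2: the kernel-entry macro now pre-loads it with -1, and only the syscall path overwrites it with the real syscall number. That lets do_signal() distinguish restartable syscalls (orig_r2 >= 0) from interrupts and exceptions, which is why the explicit -1 store in external_interrupt can go away; it gives force_successful_syscall_return() a sentinel that translate_rc_and_ret honours to skip the error-code translation; and the traced path regains a bounds check, needed once the dispatcher stops testing for NULL entries (see the syscall_table.c hunk just below, which pre-fills the table instead).
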
+diff --git a/arch/nios2/kernel/syscall_table.c b/arch/nios2/kernel/syscall_table.c
+index 6176d63023c1d..c2875a6dd5a4a 100644
+--- a/arch/nios2/kernel/syscall_table.c
++++ b/arch/nios2/kernel/syscall_table.c
+@@ -13,5 +13,6 @@
+ #define __SYSCALL(nr, call) [nr] = (call),
+
+ void *sys_call_table[__NR_syscalls] = {
++ [0 ... __NR_syscalls-1] = sys_ni_syscall,
+ #include <asm/unistd.h>
+ };
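
Pre-filling sys_call_table[] uses GCC's designated range initializer; slots not overridden by the #include become ENOSYS stubs rather than NULL, so the dispatcher needs no NULL check. A standalone illustration of the idiom (hypothetical names; this is a GCC/Clang extension, not ISO C):

#include <stdio.h>

#define NR_SLOTS 8

static int default_handler(void) { return -1; }
static int real_handler(void)    { return 0; }

/* Every slot starts as default_handler; later designators override. */
static int (*table[NR_SLOTS])(void) = {
        [0 ... NR_SLOTS - 1] = default_handler,
        [3] = real_handler,
};

int main(void)
{
        printf("%d %d\n", table[0](), table[3]());  /* prints: -1 0 */
        return 0;
}
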
+diff --git a/arch/openrisc/include/asm/io.h b/arch/openrisc/include/asm/io.h
+index c298061c70a7e..8aa3e78181e9a 100644
+--- a/arch/openrisc/include/asm/io.h
++++ b/arch/openrisc/include/asm/io.h
+@@ -31,7 +31,7 @@
+ void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+ #define iounmap iounmap
+-extern void iounmap(void __iomem *addr);
++extern void iounmap(volatile void __iomem *addr);
+
+ #include <asm-generic/io.h>
+
+diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
+index daae13a76743b..8ec0dafecf257 100644
+--- a/arch/openrisc/mm/ioremap.c
++++ b/arch/openrisc/mm/ioremap.c
+@@ -77,7 +77,7 @@ void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
+ }
+ EXPORT_SYMBOL(ioremap);
+
+-void iounmap(void __iomem *addr)
++void iounmap(volatile void __iomem *addr)
+ {
+ /* If the page is from the fixmap pool then we just clear out
+ * the fixmap mapping.
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index a0cd707120619..d54e1fe035517 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -15,23 +15,6 @@ HAS_BIARCH := $(call cc-option-yn, -m32)
+ # Set default 32 bits cross compilers for vdso and boot wrapper
+ CROSS32_COMPILE ?=
+
+-ifeq ($(HAS_BIARCH),y)
+-ifeq ($(CROSS32_COMPILE),)
+-ifdef CONFIG_PPC32
+-# These options will be overridden by any -mcpu option that the CPU
+-# or platform code sets later on the command line, but they are needed
+-# to set a sane 32-bit cpu target for the 64-bit cross compiler which
+-# may default to the wrong ISA.
+-KBUILD_CFLAGS += -mcpu=powerpc
+-KBUILD_AFLAGS += -mcpu=powerpc
+-endif
+-endif
+-endif
+-
+-ifdef CONFIG_PPC_BOOK3S_32
+-KBUILD_CFLAGS += -mcpu=powerpc
+-endif
+-
+ # If we're on a ppc/ppc64/ppc64le machine use that defconfig, otherwise just use
+ # ppc64_defconfig because we have nothing better to go on.
+ uname := $(shell uname -m)
+@@ -183,6 +166,7 @@ endif
+ endif
+
+ CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
++AFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
+
+ # Altivec option not allowed with e500mc64 in GCC.
+ ifdef CONFIG_ALTIVEC
+@@ -193,14 +177,6 @@ endif
+ CFLAGS-$(CONFIG_E5500_CPU) += $(E5500_CPU)
+ CFLAGS-$(CONFIG_E6500_CPU) += $(call cc-option,-mcpu=e6500,$(E5500_CPU))
+
+-ifdef CONFIG_PPC32
+-ifdef CONFIG_PPC_E500MC
+-CFLAGS-y += $(call cc-option,-mcpu=e500mc,-mcpu=powerpc)
+-else
+-CFLAGS-$(CONFIG_E500) += $(call cc-option,-mcpu=8540 -msoft-float,-mcpu=powerpc)
+-endif
+-endif
+-
+ asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
+
+ KBUILD_CPPFLAGS += -I $(srctree)/arch/$(ARCH) $(asinstr)
+diff --git a/arch/powerpc/include/asm/nmi.h b/arch/powerpc/include/asm/nmi.h
+index ea0e487f87b1f..c3c7adef74de0 100644
+--- a/arch/powerpc/include/asm/nmi.h
++++ b/arch/powerpc/include/asm/nmi.h
+@@ -5,8 +5,10 @@
+ #ifdef CONFIG_PPC_WATCHDOG
+ extern void arch_touch_nmi_watchdog(void);
+ long soft_nmi_interrupt(struct pt_regs *regs);
++void watchdog_nmi_set_timeout_pct(u64 pct);
+ #else
+ static inline void arch_touch_nmi_watchdog(void) {}
++static inline void watchdog_nmi_set_timeout_pct(u64 pct) {}
+ #endif
+
+ #ifdef CONFIG_NMI_IPI
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index 6c739beb938c9..519b606951675 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -418,14 +418,14 @@ InstructionTLBMiss:
+ */
+ /* Get PTE (linux-style) and check access */
+ mfspr r3,SPRN_IMISS
+-#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
++#ifdef CONFIG_MODULES
+ lis r1, TASK_SIZE@h /* check if kernel address */
+ cmplw 0,r1,r3
+ #endif
+ mfspr r2, SPRN_SDR1
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
+ rlwinm r2, r2, 28, 0xfffff000
+-#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
++#ifdef CONFIG_MODULES
+ bgt- 112f
+ lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index c787df126ada2..2df8e1350297e 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -67,10 +67,6 @@ void __init set_pci_dma_ops(const struct dma_map_ops *dma_ops)
+ pci_dma_ops = dma_ops;
+ }
+
+-/*
+- * This function should run under locking protection, specifically
+- * hose_spinlock.
+- */
+ static int get_phb_number(struct device_node *dn)
+ {
+ int ret, phb_id = -1;
+@@ -107,15 +103,20 @@ static int get_phb_number(struct device_node *dn)
+ if (!ret)
+ phb_id = (int)(prop & (MAX_PHBS - 1));
+
++ spin_lock(&hose_spinlock);
++
+ /* We need to be sure to not use the same PHB number twice. */
+ if ((phb_id >= 0) && !test_and_set_bit(phb_id, phb_bitmap))
+- return phb_id;
++ goto out_unlock;
+
+ /* If everything fails then fallback to dynamic PHB numbering. */
+ phb_id = find_first_zero_bit(phb_bitmap, MAX_PHBS);
+ BUG_ON(phb_id >= MAX_PHBS);
+ set_bit(phb_id, phb_bitmap);
+
++out_unlock:
++ spin_unlock(&hose_spinlock);
++
+ return phb_id;
+ }
+
+@@ -126,10 +127,13 @@ struct pci_controller *pcibios_alloc_controller(struct device_node *dev)
+ phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
+ if (phb == NULL)
+ return NULL;
+- spin_lock(&hose_spinlock);
++
+ phb->global_number = get_phb_number(dev);
++
++ spin_lock(&hose_spinlock);
+ list_add_tail(&phb->list_node, &hose_list);
+ spin_unlock(&hose_spinlock);
++
+ phb->dn = dev;
+ phb->is_dynamic = slab_is_available();
+ #ifdef CONFIG_PPC64
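
Moving the hose_spinlock acquisition into get_phb_number() shrinks the critical section to the phb_bitmap manipulation alone. The earlier part of the function (outside these hunks) does device-tree lookups that can sleep, so holding the spinlock across the whole call from pcibios_alloc_controller(), as before, risked sleeping in atomic context.
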
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index feae8509b59c9..b64c3f06c069e 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -751,6 +751,13 @@ void __init early_init_devtree(void *params)
+ early_init_dt_scan_root();
+ early_init_dt_scan_memory_ppc();
+
++ /*
++ * As generic code authors expect to be able to use static keys
++ * in early_param() handlers, we initialize the static keys just
++ * before parsing early params (it's fine to call jump_label_init()
++ * more than once).
++ */
++ jump_label_init();
+ parse_early_param();
+
+ /* make sure we've parsed cmdline for mem= before this */
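
The point of the new comment is the ordering contract: once jump_label_init() has run, an early_param() handler may safely flip static keys, which generic code already assumes. A hedged sketch of that pattern (hypothetical key and parameter names; kernel code, not a standalone program):

#include <linux/init.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(my_early_feature);

/* Hypothetical parameter; safe only because jump_label_init() precedes
 * parse_early_param() after the change above.
 */
static int __init my_feature_setup(char *str)
{
        static_branch_enable(&my_early_feature);
        return 0;
}
early_param("my_feature", my_feature_setup);

Hot paths can then test static_branch_unlikely(&my_early_feature) at effectively zero cost.
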
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index 7d28b95536540..5d903e63f932a 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -91,6 +91,10 @@ static cpumask_t wd_smp_cpus_pending;
+ static cpumask_t wd_smp_cpus_stuck;
+ static u64 wd_smp_last_reset_tb;
+
++#ifdef CONFIG_PPC_PSERIES
++static u64 wd_timeout_pct;
++#endif
++
+ /*
+ * Try to take the exclusive watchdog action / NMI IPI / printing lock.
+ * wd_smp_lock must be held. If this fails, we should return and wait
+@@ -527,7 +531,13 @@ static int stop_watchdog_on_cpu(unsigned int cpu)
+
+ static void watchdog_calc_timeouts(void)
+ {
+- wd_panic_timeout_tb = watchdog_thresh * ppc_tb_freq;
++ u64 threshold = watchdog_thresh;
++
++#ifdef CONFIG_PPC_PSERIES
++ threshold += (READ_ONCE(wd_timeout_pct) * threshold) / 100;
++#endif
++
++ wd_panic_timeout_tb = threshold * ppc_tb_freq;
+
+ /* Have the SMP detector trigger a bit later */
+ wd_smp_panic_timeout_tb = wd_panic_timeout_tb * 3 / 2;
+@@ -570,3 +580,12 @@ int __init watchdog_nmi_probe(void)
+ }
+ return 0;
+ }
++
++#ifdef CONFIG_PPC_PSERIES
++void watchdog_nmi_set_timeout_pct(u64 pct)
++{
++ pr_info("Set the NMI watchdog timeout factor to %llu%%\n", pct);
++ WRITE_ONCE(wd_timeout_pct, pct);
++ lockup_detector_reconfigure();
++}
++#endif
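
The stretch is multiplicative: watchdog_calc_timeouts() computes threshold += pct * threshold / 100. Assuming the usual default watchdog_thresh of 10 seconds, the 200% factor applied during partition migration (see the pseries change below) yields a per-CPU panic timeout of 10 + (200 * 10)/100 = 30 seconds, and the SMP detector, at 3/2 of that, fires after 45 seconds.
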
+diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
+index 112a09b333288..7f88be386b27a 100644
+--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
++++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
+@@ -438,15 +438,6 @@ void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
+ EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs);
+
+ #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
+-static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next)
+-{
+- struct kvmppc_vcore *vc = vcpu->arch.vcore;
+- u64 tb = mftb() - vc->tb_offset_applied;
+-
+- vcpu->arch.cur_activity = next;
+- vcpu->arch.cur_tb_start = tb;
+-}
+-
+ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next)
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+@@ -478,8 +469,8 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator
+ curr->seqcount = seq + 2;
+ }
+
+-#define start_timing(vcpu, next) __start_timing(vcpu, next)
+-#define end_timing(vcpu) __start_timing(vcpu, NULL)
++#define start_timing(vcpu, next) __accumulate_time(vcpu, next)
++#define end_timing(vcpu) __accumulate_time(vcpu, NULL)
+ #define accumulate_time(vcpu, next) __accumulate_time(vcpu, next)
+ #else
+ #define start_timing(vcpu, next) do {} while (0)
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 49a737fbbd189..40029280c3206 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -159,7 +159,10 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
+ {
+ unsigned long done;
+ unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
++ unsigned long size;
+
++ size = roundup_pow_of_two((unsigned long)_einittext - PAGE_OFFSET);
++ setibat(0, PAGE_OFFSET, 0, size, PAGE_KERNEL_X);
+
+ if (debug_pagealloc_enabled_or_kfence() || __map_without_bats) {
+ pr_debug_once("Read-Write memory mapped without BATs\n");
+@@ -245,10 +248,9 @@ void mmu_mark_rodata_ro(void)
+ }
+
+ /*
+- * Set up one of the I/D BAT (block address translation) register pairs.
++ * Set up one of the D BAT (block address translation) register pairs.
+ * The parameters are not checked; in particular size must be a power
+ * of 2 between 128k and 256M.
+- * On 603+, only set IBAT when _PAGE_EXEC is set
+ */
+ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
+ unsigned int size, pgprot_t prot)
+@@ -284,10 +286,6 @@ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
+ /* G bit must be zero in IBATs */
+ flags &= ~_PAGE_EXEC;
+ }
+- if (flags & _PAGE_EXEC)
+- bat[0] = bat[1];
+- else
+- bat[0].batu = bat[0].batl = 0;
+
+ bat_addrs[index].start = virt;
+ bat_addrs[index].limit = virt + ((bl + 1) << 17) - 1;
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 3629fd73083e2..18115cabaedf9 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -136,9 +136,9 @@ config GENERIC_CPU
+ select ARCH_HAS_FAST_MULTIPLIER
+ select PPC_64S_HASH_MMU
+
+-config GENERIC_CPU
++config POWERPC_CPU
+ bool "Generic 32 bits powerpc"
+- depends on PPC32 && !PPC_8xx
++ depends on PPC32 && !PPC_8xx && !PPC_85xx
+
+ config CELL_CPU
+ bool "Cell Broadband Engine"
+@@ -197,11 +197,23 @@ config G4_CPU
+ depends on PPC_BOOK3S_32
+ select ALTIVEC
+
++config E500_CPU
++ bool "e500 (8540)"
++ depends on PPC_85xx && !PPC_E500MC
++
++config E500MC_CPU
++ bool "e500mc"
++ depends on PPC_85xx && PPC_E500MC
++
++config TOOLCHAIN_DEFAULT_CPU
++ bool "Rely on the toolchain's implicit default CPU"
++ depends on PPC32
++
+ endchoice
+
+ config TARGET_CPU_BOOL
+ bool
+- default !GENERIC_CPU
++ default !GENERIC_CPU && !TOOLCHAIN_DEFAULT_CPU
+
+ config TARGET_CPU
+ string
+@@ -216,6 +228,9 @@ config TARGET_CPU
+ default "e300c2" if E300C2_CPU
+ default "e300c3" if E300C3_CPU
+ default "G4" if G4_CPU
++ default "8540" if E500_CPU
++ default "e500mc" if E500MC_CPU
++ default "powerpc" if POWERPC_CPU
+
+ config PPC_BOOK3S
+ def_bool y
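
This pairs with the arch/powerpc/Makefile hunks above: the 32-bit choice entry had been reusing the GENERIC_CPU symbol already defined for 64-bit, so it becomes POWERPC_CPU, and the hand-rolled -mcpu logic deleted from the Makefile is replaced by explicit E500_CPU, E500MC_CPU and TOOLCHAIN_DEFAULT_CPU entries feeding the generic TARGET_CPU machinery. Note that AFLAGS now receives -mcpu as well, covering the assembly files the removed KBUILD_AFLAGS line used to handle.
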
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index c8cf2728031a2..9de9b2fb163d3 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -1609,6 +1609,7 @@ found:
+ tbl->it_ops = &pnv_ioda1_iommu_ops;
+ pe->table_group.tce32_start = tbl->it_offset << tbl->it_page_shift;
+ pe->table_group.tce32_size = tbl->it_size << tbl->it_page_shift;
++ tbl->it_index = (phb->hose->global_number << 16) | pe->pe_number;
+ if (!iommu_init_table(tbl, phb->hose->node, 0, 0))
+ panic("Failed to initialize iommu table");
+
+@@ -1779,6 +1780,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
+ res_end = min(window_size, SZ_4G) >> tbl->it_page_shift;
+ }
+
++ tbl->it_index = (pe->phb->hose->global_number << 16) | pe->pe_number;
+ if (iommu_init_table(tbl, pe->phb->hose->node, res_start, res_end))
+ rc = pnv_pci_ioda2_set_window(&pe->table_group, 0, tbl);
+ else
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index 78f3f74c7056b..cbe0989239bf2 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -48,6 +48,39 @@ struct update_props_workarea {
+ #define MIGRATION_SCOPE (1)
+ #define PRRN_SCOPE -2
+
++#ifdef CONFIG_PPC_WATCHDOG
++static unsigned int nmi_wd_lpm_factor = 200;
++
++#ifdef CONFIG_SYSCTL
++static struct ctl_table nmi_wd_lpm_factor_ctl_table[] = {
++ {
++ .procname = "nmi_wd_lpm_factor",
++ .data = &nmi_wd_lpm_factor,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = proc_douintvec_minmax,
++ },
++ {}
++};
++static struct ctl_table nmi_wd_lpm_factor_sysctl_root[] = {
++ {
++ .procname = "kernel",
++ .mode = 0555,
++ .child = nmi_wd_lpm_factor_ctl_table,
++ },
++ {}
++};
++
++static int __init register_nmi_wd_lpm_factor_sysctl(void)
++{
++ register_sysctl_table(nmi_wd_lpm_factor_sysctl_root);
++
++ return 0;
++}
++device_initcall(register_nmi_wd_lpm_factor_sysctl);
++#endif /* CONFIG_SYSCTL */
++#endif /* CONFIG_PPC_WATCHDOG */
++
+ static int mobility_rtas_call(int token, char *buf, s32 scope)
+ {
+ int rc;
+@@ -665,19 +698,29 @@ static int pseries_suspend(u64 handle)
+ static int pseries_migrate_partition(u64 handle)
+ {
+ int ret;
++ unsigned int factor = 0;
+
++#ifdef CONFIG_PPC_WATCHDOG
++ factor = nmi_wd_lpm_factor;
++#endif
+ ret = wait_for_vasi_session_suspending(handle);
+ if (ret)
+ return ret;
+
+ vas_migration_handler(VAS_SUSPEND);
+
++ if (factor)
++ watchdog_nmi_set_timeout_pct(factor);
++
+ ret = pseries_suspend(handle);
+ if (ret == 0)
+ post_mobility_fixup();
+ else
+ pseries_cancel_migration(handle, ret);
+
++ if (factor)
++ watchdog_nmi_set_timeout_pct(0);
++
+ vas_migration_handler(VAS_RESUME);
+
+ return ret;
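
Since the ctl table hangs off a "kernel" directory entry, the knob should surface as /proc/sys/kernel/nmi_wd_lpm_factor. The default of 200 gives the watchdog three times its usual budget around pseries_suspend(); writing 0 disables the stretch entirely, because pseries_migrate_partition() only calls watchdog_nmi_set_timeout_pct() when the factor is non-zero.
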
+diff --git a/arch/riscv/boot/dts/canaan/k210.dtsi b/arch/riscv/boot/dts/canaan/k210.dtsi
+index 44d3385147612..ec944d1537dc4 100644
+--- a/arch/riscv/boot/dts/canaan/k210.dtsi
++++ b/arch/riscv/boot/dts/canaan/k210.dtsi
+@@ -65,6 +65,18 @@
+ compatible = "riscv,cpu-intc";
+ };
+ };
++
++ cpu-map {
++ cluster0 {
++ core0 {
++ cpu = <&cpu0>;
++ };
++
++ core1 {
++ cpu = <&cpu1>;
++ };
++ };
++ };
+ };
+
+ sram: memory@80000000 {
+diff --git a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+index 7b77c13496d83..43bed6c0a84fe 100644
+--- a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
++++ b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+@@ -134,6 +134,30 @@
+ interrupt-controller;
+ };
+ };
++
++ cpu-map {
++ cluster0 {
++ core0 {
++ cpu = <&cpu0>;
++ };
++
++ core1 {
++ cpu = <&cpu1>;
++ };
++
++ core2 {
++ cpu = <&cpu2>;
++ };
++
++ core3 {
++ cpu = <&cpu3>;
++ };
++
++ core4 {
++ cpu = <&cpu4>;
++ };
++ };
++ };
+ };
+ soc {
+ #address-cells = <2>;
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 9c0194f176fc0..571556bb9261a 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,8 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ return -EINVAL;
+
+- if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
+- if (unlikely(!(prot & PROT_READ)))
+- return -EINVAL;
++ if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
++ return -EINVAL;
+
+ return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ offset >> (PAGE_SHIFT - page_shift_offset));
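
The simplified predicate rejects any PROT_WRITE mapping that lacks PROT_READ, not only write+exec ones. This matches the RISC-V page-table encoding, where a PTE with W set but R clear is a reserved combination, so write-only mappings cannot be expressed and are best refused up front.
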
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index b404265092447..39d0f8bba4b40 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -16,6 +16,7 @@
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/irq.h>
++#include <linux/kexec.h>
+
+ #include <asm/asm-prototypes.h>
+ #include <asm/bug.h>
+@@ -44,6 +45,9 @@ void die(struct pt_regs *regs, const char *str)
+
+ ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+
++ if (regs && kexec_should_crash(current))
++ crash_kexec(regs);
++
+ bust_spinlocks(0);
+ add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+ spin_unlock_irq(&die_lock);
+diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
+index c316c993a9491..b24db6017ded6 100644
+--- a/arch/um/os-Linux/skas/process.c
++++ b/arch/um/os-Linux/skas/process.c
+@@ -5,6 +5,7 @@
+ */
+
+ #include <stdlib.h>
++#include <stdbool.h>
+ #include <unistd.h>
+ #include <sched.h>
+ #include <errno.h>
+@@ -707,10 +708,24 @@ void halt_skas(void)
+ UML_LONGJMP(&initial_jmpbuf, INIT_JMP_HALT);
+ }
+
++static bool noreboot;
++
++static int __init noreboot_cmd_param(char *str, int *add)
++{
++ noreboot = true;
++ return 0;
++}
++
++__uml_setup("noreboot", noreboot_cmd_param,
++"noreboot\n"
++" Rather than rebooting, exit always, akin to QEMU's -no-reboot option.\n"
++" This is useful if you're using CONFIG_PANIC_TIMEOUT in order to catch\n"
++" crashes in CI\n");
++
+ void reboot_skas(void)
+ {
+ block_signals_trace();
+- UML_LONGJMP(&initial_jmpbuf, INIT_JMP_REBOOT);
++ UML_LONGJMP(&initial_jmpbuf, noreboot ? INIT_JMP_HALT : INIT_JMP_REBOOT);
+ }
+
+ void __switch_mm(struct mm_id *mm_idp)
+diff --git a/arch/x86/include/asm/ibt.h b/arch/x86/include/asm/ibt.h
+index 689880eca9bab..9b08082a5d9f5 100644
+--- a/arch/x86/include/asm/ibt.h
++++ b/arch/x86/include/asm/ibt.h
+@@ -31,6 +31,16 @@
+
+ #define __noendbr __attribute__((nocf_check))
+
++/*
++ * Create a dummy function pointer reference to prevent objtool from marking
++ * the function as needing to be "sealed" (i.e. ENDBR converted to NOP by
++ * apply_ibt_endbr()).
++ */
++#define IBT_NOSEAL(fname) \
++ ".pushsection .discard.ibt_endbr_noseal\n\t" \
++ _ASM_PTR fname "\n\t" \
++ ".popsection\n\t"
++
+ static inline __attribute_const__ u32 gen_endbr(void)
+ {
+ u32 endbr;
+@@ -84,6 +94,7 @@ extern __noendbr void ibt_restore(u64 save);
+ #ifndef __ASSEMBLY__
+
+ #define ASM_ENDBR
++#define IBT_NOSEAL(name)
+
+ #define __noendbr
+
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 74167dc5f55ec..4c3c27b6aea3b 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -505,7 +505,7 @@ static void kprobe_emulate_jcc(struct kprobe *p, struct pt_regs *regs)
+ match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
+ ((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
+ if (p->ainsn.jcc.type >= 0xe)
+- match = match && (regs->flags & X86_EFLAGS_ZF);
++ match = match || (regs->flags & X86_EFLAGS_ZF);
+ }
+ __kprobe_emulate_jmp(p, regs, (match && !invert) || (!match && invert));
+ }
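
The one-character fix matters for JLE/JG emulation (condition-code types 0xe and 0xf): the taken condition is ZF=1 OR SF != OF, and the old code AND-ed the two terms, so a single-stepped jle over equal operands was emulated as not taken. A standalone illustration of the intended predicate (flag bit positions from the x86 EFLAGS layout; not the kernel's code):

#include <stdbool.h>
#include <stdio.h>

#define FLAG_ZF (1u << 6)
#define FLAG_SF (1u << 7)
#define FLAG_OF (1u << 11)

/* JLE (types 0xe/0xf before inversion): taken when ZF is set or SF != OF. */
static bool jle_taken(unsigned int flags)
{
        bool sf = flags & FLAG_SF;
        bool of = flags & FLAG_OF;
        bool zf = flags & FLAG_ZF;

        return zf || (sf != of);
}

int main(void)
{
        /* Equal operands: ZF set, SF == OF, so JLE must be taken. */
        printf("%d\n", jle_taken(FLAG_ZF));  /* prints 1 */
        return 0;
}
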
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index aa907cec09187..09fa8a94807bf 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -316,7 +316,8 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ ".align " __stringify(FASTOP_SIZE) " \n\t" \
+ ".type " name ", @function \n\t" \
+ name ":\n\t" \
+- ASM_ENDBR
++ ASM_ENDBR \
++ IBT_NOSEAL(name)
+
+ #define FOP_FUNC(name) \
+ __FOP_FUNC(#name)
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 39c5246964a91..0fe690ebc269b 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -645,7 +645,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
+ pages++;
+ spin_lock(&init_mm.page_table_lock);
+
+- prot = __pgprot(pgprot_val(prot) | __PAGE_KERNEL_LARGE);
++ prot = __pgprot(pgprot_val(prot) | _PAGE_PSE);
+
+ set_pte_init((pte_t *)pud,
+ pfn_pte((paddr & PUD_MASK) >> PAGE_SHIFT,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 93d9d60980fb5..1eb13d57a946f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2568,7 +2568,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
+- blk_mq_request_bypass_insert(rq, false, last);
++ blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_commit_rqs(hctx, &queued, from_schedule);
+ return;
+ default:
+diff --git a/drivers/acpi/pci_mcfg.c b/drivers/acpi/pci_mcfg.c
+index 53cab975f612c..63b98eae5e75e 100644
+--- a/drivers/acpi/pci_mcfg.c
++++ b/drivers/acpi/pci_mcfg.c
+@@ -41,6 +41,8 @@ struct mcfg_fixup {
+ static struct mcfg_fixup mcfg_quirks[] = {
+ /* { OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
+
++#ifdef CONFIG_ARM64
++
+ #define AL_ECAM(table_id, rev, seg, ops) \
+ { "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
+
+@@ -169,6 +171,7 @@ static struct mcfg_fixup mcfg_quirks[] = {
+ ALTRA_ECAM_QUIRK(1, 13),
+ ALTRA_ECAM_QUIRK(1, 14),
+ ALTRA_ECAM_QUIRK(1, 15),
++#endif /* ARM64 */
+ };
+
+ static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index 701f61c01359f..3ad2823eb6f84 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -532,21 +532,37 @@ static int topology_get_acpi_cpu_tag(struct acpi_table_header *table,
+ return -ENOENT;
+ }
+
++
++static struct acpi_table_header *acpi_get_pptt(void)
++{
++ static struct acpi_table_header *pptt;
++ acpi_status status;
++
++ /*
++ * PPTT will be used at runtime on every CPU hotplug in path, so we
++ * don't need to call acpi_put_table() to release the table mapping.
++ */
++ if (!pptt) {
++ status = acpi_get_table(ACPI_SIG_PPTT, 0, &pptt);
++ if (ACPI_FAILURE(status))
++ acpi_pptt_warn_missing();
++ }
++
++ return pptt;
++}
++
+ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ int retval;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
++
+ retval = topology_get_acpi_cpu_tag(table, cpu, level, flag);
+ pr_debug("Topology Setup ACPI CPU %d, level %d ret = %d\n",
+ cpu, level, retval);
+- acpi_put_table(table);
+
+ return retval;
+ }
+@@ -567,16 +583,13 @@ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
+ static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ struct acpi_pptt_processor *cpu_node = NULL;
+ int ret = -ENOENT;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
+- return ret;
+- }
++ table = acpi_get_pptt();
++ if (!table)
++ return -ENOENT;
+
+ if (table->revision >= rev)
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+@@ -584,8 +597,6 @@ static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
+ if (cpu_node)
+ ret = (cpu_node->flags & flag) != 0;
+
+- acpi_put_table(table);
+-
+ return ret;
+ }
+
+@@ -604,18 +615,15 @@ int acpi_find_last_cache_level(unsigned int cpu)
+ u32 acpi_cpu_id;
+ struct acpi_table_header *table;
+ int number_of_levels = 0;
+- acpi_status status;
++
++ table = acpi_get_pptt();
++ if (!table)
++ return -ENOENT;
+
+ pr_debug("Cache Setup find last level CPU=%d\n", cpu);
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
+- } else {
+- number_of_levels = acpi_find_cache_levels(table, acpi_cpu_id);
+- acpi_put_table(table);
+- }
++ number_of_levels = acpi_find_cache_levels(table, acpi_cpu_id);
+ pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
+
+ return number_of_levels;
+@@ -637,20 +645,16 @@ int acpi_find_last_cache_level(unsigned int cpu)
+ int cache_setup_acpi(unsigned int cpu)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+
+- pr_debug("Cache Setup ACPI CPU %d\n", cpu);
+-
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
++
++ pr_debug("Cache Setup ACPI CPU %d\n", cpu);
+
+ cache_setup_acpi_cpu(table, cpu);
+- acpi_put_table(table);
+
+- return status;
++ return 0;
+ }
+
+ /**
+@@ -766,50 +770,38 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
+ int find_acpi_cpu_topology_cluster(unsigned int cpu)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ struct acpi_pptt_processor *cpu_node, *cluster_node;
+ u32 acpi_cpu_id;
+ int retval;
+ int is_thread;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+- if (cpu_node == NULL || !cpu_node->parent) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cpu_node || !cpu_node->parent)
++ return -ENOENT;
+
+ is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+ cluster_node = fetch_pptt_node(table, cpu_node->parent);
+- if (cluster_node == NULL) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node)
++ return -ENOENT;
++
+ if (is_thread) {
+- if (!cluster_node->parent) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node->parent)
++ return -ENOENT;
++
+ cluster_node = fetch_pptt_node(table, cluster_node->parent);
+- if (cluster_node == NULL) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node)
++ return -ENOENT;
+ }
+ if (cluster_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)
+ retval = cluster_node->acpi_processor_id;
+ else
+ retval = ACPI_PTR_DIFF(cluster_node, table);
+
+-put_table:
+- acpi_put_table(table);
+-
+ return retval;
+ }
+
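
Every PPTT consumer now funnels through acpi_get_pptt(), which maps the table once and deliberately never calls acpi_put_table(): the table is consulted on every CPU hotplug, so keeping the mapping for the kernel's lifetime trades one permanently held table mapping for avoiding a map/unmap cycle in that path. The function-local static means only the first caller pays for the lookup, and the error handling in the callers collapses accordingly.
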
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index d3173811614ec..bc9a645f8bb77 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -155,10 +155,10 @@ static bool acpi_nondev_subnode_ok(acpi_handle scope,
+ return acpi_nondev_subnode_data_ok(handle, link, list, parent);
+ }
+
+-static int acpi_add_nondev_subnodes(acpi_handle scope,
+- const union acpi_object *links,
+- struct list_head *list,
+- struct fwnode_handle *parent)
++static bool acpi_add_nondev_subnodes(acpi_handle scope,
++ const union acpi_object *links,
++ struct list_head *list,
++ struct fwnode_handle *parent)
+ {
+ bool ret = false;
+ int i;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 3307ed45fe4d0..ceb0c64cb670a 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2122,6 +2122,7 @@ const char *ata_get_cmd_name(u8 command)
+ { ATA_CMD_WRITE_QUEUED_FUA_EXT, "WRITE DMA QUEUED FUA EXT" },
+ { ATA_CMD_FPDMA_READ, "READ FPDMA QUEUED" },
+ { ATA_CMD_FPDMA_WRITE, "WRITE FPDMA QUEUED" },
++ { ATA_CMD_NCQ_NON_DATA, "NCQ NON-DATA" },
+ { ATA_CMD_FPDMA_SEND, "SEND FPDMA QUEUED" },
+ { ATA_CMD_FPDMA_RECV, "RECEIVE FPDMA QUEUED" },
+ { ATA_CMD_PIO_READ, "READ SECTOR(S)" },
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 81ce81a75fc67..681cb3786794d 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -3752,6 +3752,7 @@ static void __exit idt77252_exit(void)
+ card = idt77252_chain;
+ dev = card->atmdev;
+ idt77252_chain = card->next;
++ del_timer_sync(&card->tst_timer);
+
+ if (dev->phy->stop)
+ dev->phy->stop(dev);
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 6fc7850c2b0a0..d756423e0059a 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -101,6 +101,14 @@ static inline blk_status_t virtblk_result(struct virtblk_req *vbr)
+ }
+ }
+
++static inline struct virtio_blk_vq *get_virtio_blk_vq(struct blk_mq_hw_ctx *hctx)
++{
++ struct virtio_blk *vblk = hctx->queue->queuedata;
++ struct virtio_blk_vq *vq = &vblk->vqs[hctx->queue_num];
++
++ return vq;
++}
++
+ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr)
+ {
+ struct scatterlist hdr, status, *sgs[3];
+@@ -416,7 +424,7 @@ static void virtio_queue_rqs(struct request **rqlist)
+ struct request *requeue_list = NULL;
+
+ rq_list_for_each_safe(rqlist, req, next) {
+- struct virtio_blk_vq *vq = req->mq_hctx->driver_data;
++ struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
+ bool kick;
+
+ if (!virtblk_prep_rq_batch(req)) {
+@@ -837,7 +845,7 @@ static void virtblk_complete_batch(struct io_comp_batch *iob)
+ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ {
+ struct virtio_blk *vblk = hctx->queue->queuedata;
+- struct virtio_blk_vq *vq = hctx->driver_data;
++ struct virtio_blk_vq *vq = get_virtio_blk_vq(hctx);
+ struct virtblk_req *vbr;
+ unsigned long flags;
+ unsigned int len;
+@@ -862,22 +870,10 @@ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ return found;
+ }
+
+-static int virtblk_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+- unsigned int hctx_idx)
+-{
+- struct virtio_blk *vblk = data;
+- struct virtio_blk_vq *vq = &vblk->vqs[hctx_idx];
+-
+- WARN_ON(vblk->tag_set.tags[hctx_idx] != hctx->tags);
+- hctx->driver_data = vq;
+- return 0;
+-}
+-
+ static const struct blk_mq_ops virtio_mq_ops = {
+ .queue_rq = virtio_queue_rq,
+ .queue_rqs = virtio_queue_rqs,
+ .commit_rqs = virtio_commit_rqs,
+- .init_hctx = virtblk_init_hctx,
+ .complete = virtblk_request_done,
+ .map_queues = virtblk_map_queues,
+ .poll = virtblk_poll,
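
Dropping init_hctx means the driver no longer caches a pointer into vblk->vqs in hctx->driver_data; get_virtio_blk_vq() re-derives the vq from hctx->queue_num on each use. Presumably the motivation is that a cached pointer could go stale if the vqs array is freed and reallocated (for instance across a device reset) while the hardware-context object lives on.
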
+diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
+index 052aa3f65514e..0916de952e091 100644
+--- a/drivers/block/zram/zcomp.c
++++ b/drivers/block/zram/zcomp.c
+@@ -63,12 +63,6 @@ static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
+
+ bool zcomp_available_algorithm(const char *comp)
+ {
+- int i;
+-
+- i = sysfs_match_string(backends, comp);
+- if (i >= 0)
+- return true;
+-
+ /*
+ * Crypto does not ignore a trailing new line symbol,
+ * so make sure you don't supply a string containing
+@@ -217,6 +211,11 @@ struct zcomp *zcomp_create(const char *compress)
+ struct zcomp *comp;
+ int error;
+
++ /*
++ * Crypto API will execute /sbin/modprobe if the compression module
++ * is not loaded yet. We must do it here, otherwise we are about to
++ * call /sbin/modprobe under CPU hot-plug lock.
++ */
+ if (!zcomp_available_algorithm(compress))
+ return ERR_PTR(-EINVAL);
+
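
Two related cleanups here: the sysfs_match_string() fast path goes away because the crypto-API lookup later in zcomp_available_algorithm() already answers for the built-in backends, and the new comment records why zcomp_create() must perform the check up front: the crypto API may spawn /sbin/modprobe to autoload the compressor, which must not happen once the CPU hot-plug lock is held.
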
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index 26885bd3971c4..f5c9fa40491c5 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -160,7 +160,7 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_SEMA2_GATE, "sema2", "bus_wakeup_root", 0x8480, },
+ { IMX93_CLK_MU_A_GATE, "mu_a", "bus_aon_root", 0x84c0, },
+ { IMX93_CLK_MU_B_GATE, "mu_b", "bus_aon_root", 0x8500, },
+- { IMX93_CLK_EDMA1_GATE, "edma1", "wakeup_axi_root", 0x8540, },
++ { IMX93_CLK_EDMA1_GATE, "edma1", "m33_root", 0x8540, },
+ { IMX93_CLK_EDMA2_GATE, "edma2", "wakeup_axi_root", 0x8580, },
+ { IMX93_CLK_FLEXSPI1_GATE, "flexspi", "flexspi_root", 0x8640, },
+ { IMX93_CLK_GPIO1_GATE, "gpio1", "m33_root", 0x8880, },
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 4406cf609aae7..288692f0ea390 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1439,7 +1439,7 @@ const struct clk_ops clk_alpha_pll_postdiv_fabia_ops = {
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_postdiv_fabia_ops);
+
+ /**
+- * clk_lucid_pll_configure - configure the lucid pll
++ * clk_trion_pll_configure - configure the trion pll
+ *
+ * @pll: clk alpha pll
+ * @regmap: register map
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 2c2ecfc5e61f5..d6d5defb82c9f 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -662,6 +662,7 @@ static struct clk_branch gcc_sleep_clk_src = {
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
++ .flags = CLK_IS_CRITICAL,
+ },
+ },
+ };
+diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
+index d078e5d73ed94..868bc7af21b0b 100644
+--- a/drivers/clk/ti/clk-44xx.c
++++ b/drivers/clk/ti/clk-44xx.c
+@@ -56,7 +56,7 @@ static const struct omap_clkctrl_bit_data omap4_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_dmic_abe_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0018:26",
++ "abe-clkctrl:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -76,7 +76,7 @@ static const struct omap_clkctrl_bit_data omap4_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcasp_abe_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0020:26",
++ "abe-clkctrl:0020:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -89,7 +89,7 @@ static const struct omap_clkctrl_bit_data omap4_mcasp_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcbsp1_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0028:26",
++ "abe-clkctrl:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -102,7 +102,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp2_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0030:26",
++ "abe-clkctrl:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -115,7 +115,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp3_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0038:26",
++ "abe-clkctrl:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -183,18 +183,18 @@ static const struct omap_clkctrl_bit_data omap4_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap4_abe_clkctrl_regs[] __initconst = {
+ { OMAP4_L4_ABE_CLKCTRL, NULL, 0, "ocp_abe_iclk" },
+- { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++ { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ { OMAP4_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+- { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe_cm:clk:0020:24" },
+- { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+- { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+- { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+- { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0040:8" },
+- { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+- { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+- { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+- { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++ { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++ { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe-clkctrl:0020:24" },
++ { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++ { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++ { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++ { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0040:8" },
++ { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++ { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++ { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++ { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ { OMAP4_WD_TIMER3_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+ };
+@@ -287,7 +287,7 @@ static const struct omap_clkctrl_bit_data omap4_fdif_bit_data[] __initconst = {
+
+ static const struct omap_clkctrl_reg_data omap4_iss_clkctrl_regs[] __initconst = {
+ { OMAP4_ISS_CLKCTRL, omap4_iss_bit_data, CLKF_SW_SUP, "ducati_clk_mux_ck" },
+- { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss_cm:clk:0008:24" },
++ { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss-clkctrl:0008:24" },
+ { 0 },
+ };
+
+@@ -320,7 +320,7 @@ static const struct omap_clkctrl_bit_data omap4_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_dss_clkctrl_regs[] __initconst = {
+- { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3_dss_cm:clk:0000:8" },
++ { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3-dss-clkctrl:0000:8" },
+ { 0 },
+ };
+
+@@ -336,7 +336,7 @@ static const struct omap_clkctrl_bit_data omap4_gpu_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_gfx_clkctrl_regs[] __initconst = {
+- { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3_gfx_cm:clk:0000:24" },
++ { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3-gfx-clkctrl:0000:24" },
+ { 0 },
+ };
+
+@@ -372,12 +372,12 @@ static const struct omap_clkctrl_bit_data omap4_hsi_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3_init_cm:clk:0038:24",
++ "l3-init-clkctrl:0038:24",
+ NULL,
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3_init_cm:clk:0038:25",
++ "l3-init-clkctrl:0038:25",
+ NULL,
+ };
+
+@@ -418,7 +418,7 @@ static const struct omap_clkctrl_bit_data omap4_usb_host_hs_bit_data[] __initcon
+ };
+
+ static const char * const omap4_usb_otg_hs_xclk_parents[] __initconst = {
+- "l3_init_cm:clk:0040:24",
++ "l3-init-clkctrl:0040:24",
+ NULL,
+ };
+
+@@ -452,14 +452,14 @@ static const struct omap_clkctrl_bit_data omap4_ocp2scp_usb_phy_bit_data[] __ini
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_init_clkctrl_regs[] __initconst = {
+- { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0008:24" },
+- { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0010:24" },
+- { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:0018:24" },
++ { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0008:24" },
++ { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0010:24" },
++ { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:0018:24" },
+ { OMAP4_USB_HOST_HS_CLKCTRL, omap4_usb_host_hs_bit_data, CLKF_SW_SUP, "init_60m_fclk" },
+ { OMAP4_USB_OTG_HS_CLKCTRL, omap4_usb_otg_hs_bit_data, CLKF_HW_SUP, "l3_div_ck" },
+ { OMAP4_USB_TLL_HS_CLKCTRL, omap4_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_USB_HOST_FS_CLKCTRL, NULL, CLKF_SW_SUP, "func_48mc_fclk" },
+- { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:00c0:8" },
++ { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:00c0:8" },
+ { 0 },
+ };
+
+@@ -530,7 +530,7 @@ static const struct omap_clkctrl_bit_data omap4_gpio6_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_per_mcbsp4_gfclk_parents[] __initconst = {
+- "l4_per_cm:clk:00c0:26",
++ "l4-per-clkctrl:00c0:26",
+ "pad_clks_ck",
+ NULL,
+ };
+@@ -570,12 +570,12 @@ static const struct omap_clkctrl_bit_data omap4_slimbus2_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initconst = {
+- { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0008:24" },
+- { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0010:24" },
+- { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0018:24" },
+- { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0020:24" },
+- { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0028:24" },
+- { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0030:24" },
++ { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0008:24" },
++ { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0010:24" },
++ { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0018:24" },
++ { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0020:24" },
++ { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0028:24" },
++ { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0030:24" },
+ { OMAP4_ELM_CLKCTRL, NULL, 0, "l4_div_ck" },
+ { OMAP4_GPIO2_CLKCTRL, omap4_gpio2_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_GPIO3_CLKCTRL, omap4_gpio3_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+@@ -588,14 +588,14 @@ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initcons
+ { OMAP4_I2C3_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_I2C4_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_L4_PER_CLKCTRL, NULL, 0, "l4_div_ck" },
+- { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:00c0:24" },
++ { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:00c0:24" },
+ { OMAP4_MCSPI1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+- { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0118:8" },
++ { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0118:8" },
+ { OMAP4_UART1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -630,7 +630,7 @@ static const struct omap_clkctrl_reg_data omap4_l4_wkup_clkctrl_regs[] __initcon
+ { OMAP4_L4_WKUP_CLKCTRL, NULL, 0, "l4_wkup_clk_mux_ck" },
+ { OMAP4_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP4_GPIO1_CLKCTRL, omap4_gpio1_bit_data, CLKF_HW_SUP, "l4_wkup_clk_mux_ck" },
+- { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4_wkup_cm:clk:0020:24" },
++ { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4-wkup-clkctrl:0020:24" },
+ { OMAP4_COUNTER_32K_CLKCTRL, NULL, 0, "sys_32k_ck" },
+ { OMAP4_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -644,7 +644,7 @@ static const char * const omap4_pmd_stm_clock_mux_ck_parents[] __initconst = {
+ };
+
+ static const char * const omap4_trace_clk_div_div_ck_parents[] __initconst = {
+- "emu_sys_cm:clk:0000:22",
++ "emu-sys-clkctrl:0000:22",
+ NULL,
+ };
+
+@@ -662,7 +662,7 @@ static const struct omap_clkctrl_div_data omap4_trace_clk_div_div_ck_data __init
+ };
+
+ static const char * const omap4_stm_clk_div_ck_parents[] __initconst = {
+- "emu_sys_cm:clk:0000:20",
++ "emu-sys-clkctrl:0000:20",
+ NULL,
+ };
+
+@@ -716,73 +716,73 @@ static struct ti_dt_clk omap44xx_clks[] = {
+ * hwmod support. Once hwmod is removed, these can be removed
+ * also.
+ */
+- DT_CLK(NULL, "aess_fclk", "abe_cm:0008:24"),
+- DT_CLK(NULL, "cm2_dm10_mux", "l4_per_cm:0008:24"),
+- DT_CLK(NULL, "cm2_dm11_mux", "l4_per_cm:0010:24"),
+- DT_CLK(NULL, "cm2_dm2_mux", "l4_per_cm:0018:24"),
+- DT_CLK(NULL, "cm2_dm3_mux", "l4_per_cm:0020:24"),
+- DT_CLK(NULL, "cm2_dm4_mux", "l4_per_cm:0028:24"),
+- DT_CLK(NULL, "cm2_dm9_mux", "l4_per_cm:0030:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+- DT_CLK(NULL, "dmt1_clk_mux", "l4_wkup_cm:0020:24"),
+- DT_CLK(NULL, "dss_48mhz_clk", "l3_dss_cm:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "l3_dss_cm:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "l3_dss_cm:0000:10"),
+- DT_CLK(NULL, "dss_tv_clk", "l3_dss_cm:0000:11"),
+- DT_CLK(NULL, "fdif_fck", "iss_cm:0008:24"),
+- DT_CLK(NULL, "func_dmic_abe_gfclk", "abe_cm:0018:24"),
+- DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe_cm:0020:24"),
+- DT_CLK(NULL, "func_mcbsp1_gfclk", "abe_cm:0028:24"),
+- DT_CLK(NULL, "func_mcbsp2_gfclk", "abe_cm:0030:24"),
+- DT_CLK(NULL, "func_mcbsp3_gfclk", "abe_cm:0038:24"),
+- DT_CLK(NULL, "gpio1_dbclk", "l4_wkup_cm:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4_per_cm:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4_per_cm:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4_per_cm:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4_per_cm:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4_per_cm:0060:8"),
+- DT_CLK(NULL, "hsi_fck", "l3_init_cm:0018:24"),
+- DT_CLK(NULL, "hsmmc1_fclk", "l3_init_cm:0008:24"),
+- DT_CLK(NULL, "hsmmc2_fclk", "l3_init_cm:0010:24"),
+- DT_CLK(NULL, "iss_ctrlclk", "iss_cm:0000:8"),
+- DT_CLK(NULL, "mcasp_sync_mux_ck", "abe_cm:0020:26"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+- DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4_per_cm:00c0:26"),
+- DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3_init_cm:00c0:8"),
+- DT_CLK(NULL, "otg_60m_gfclk", "l3_init_cm:0040:24"),
+- DT_CLK(NULL, "per_mcbsp4_gfclk", "l4_per_cm:00c0:24"),
+- DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu_sys_cm:0000:20"),
+- DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu_sys_cm:0000:22"),
+- DT_CLK(NULL, "sgx_clk_mux", "l3_gfx_cm:0000:24"),
+- DT_CLK(NULL, "slimbus1_fclk_0", "abe_cm:0040:8"),
+- DT_CLK(NULL, "slimbus1_fclk_1", "abe_cm:0040:9"),
+- DT_CLK(NULL, "slimbus1_fclk_2", "abe_cm:0040:10"),
+- DT_CLK(NULL, "slimbus1_slimbus_clk", "abe_cm:0040:11"),
+- DT_CLK(NULL, "slimbus2_fclk_0", "l4_per_cm:0118:8"),
+- DT_CLK(NULL, "slimbus2_fclk_1", "l4_per_cm:0118:9"),
+- DT_CLK(NULL, "slimbus2_slimbus_clk", "l4_per_cm:0118:10"),
+- DT_CLK(NULL, "stm_clk_div_ck", "emu_sys_cm:0000:27"),
+- DT_CLK(NULL, "timer5_sync_mux", "abe_cm:0048:24"),
+- DT_CLK(NULL, "timer6_sync_mux", "abe_cm:0050:24"),
+- DT_CLK(NULL, "timer7_sync_mux", "abe_cm:0058:24"),
+- DT_CLK(NULL, "timer8_sync_mux", "abe_cm:0060:24"),
+- DT_CLK(NULL, "trace_clk_div_div_ck", "emu_sys_cm:0000:24"),
+- DT_CLK(NULL, "usb_host_hs_func48mclk", "l3_init_cm:0038:15"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3_init_cm:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3_init_cm:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3_init_cm:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3_init_cm:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3_init_cm:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3_init_cm:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init_cm:0038:10"),
+- DT_CLK(NULL, "usb_otg_hs_xclk", "l3_init_cm:0040:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3_init_cm:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3_init_cm:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3_init_cm:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3_init_cm:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3_init_cm:0038:25"),
++ DT_CLK(NULL, "aess_fclk", "abe-clkctrl:0008:24"),
++ DT_CLK(NULL, "cm2_dm10_mux", "l4-per-clkctrl:0008:24"),
++ DT_CLK(NULL, "cm2_dm11_mux", "l4-per-clkctrl:0010:24"),
++ DT_CLK(NULL, "cm2_dm2_mux", "l4-per-clkctrl:0018:24"),
++ DT_CLK(NULL, "cm2_dm3_mux", "l4-per-clkctrl:0020:24"),
++ DT_CLK(NULL, "cm2_dm4_mux", "l4-per-clkctrl:0028:24"),
++ DT_CLK(NULL, "cm2_dm9_mux", "l4-per-clkctrl:0030:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++ DT_CLK(NULL, "dmt1_clk_mux", "l4-wkup-clkctrl:0020:24"),
++ DT_CLK(NULL, "dss_48mhz_clk", "l3-dss-clkctrl:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "l3-dss-clkctrl:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "l3-dss-clkctrl:0000:10"),
++ DT_CLK(NULL, "dss_tv_clk", "l3-dss-clkctrl:0000:11"),
++ DT_CLK(NULL, "fdif_fck", "iss-clkctrl:0008:24"),
++ DT_CLK(NULL, "func_dmic_abe_gfclk", "abe-clkctrl:0018:24"),
++ DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe-clkctrl:0020:24"),
++ DT_CLK(NULL, "func_mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++ DT_CLK(NULL, "func_mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++ DT_CLK(NULL, "func_mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++ DT_CLK(NULL, "gpio1_dbclk", "l4-wkup-clkctrl:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4-per-clkctrl:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4-per-clkctrl:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4-per-clkctrl:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4-per-clkctrl:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4-per-clkctrl:0060:8"),
++ DT_CLK(NULL, "hsi_fck", "l3-init-clkctrl:0018:24"),
++ DT_CLK(NULL, "hsmmc1_fclk", "l3-init-clkctrl:0008:24"),
++ DT_CLK(NULL, "hsmmc2_fclk", "l3-init-clkctrl:0010:24"),
++ DT_CLK(NULL, "iss_ctrlclk", "iss-clkctrl:0000:8"),
++ DT_CLK(NULL, "mcasp_sync_mux_ck", "abe-clkctrl:0020:26"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++ DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4-per-clkctrl:00c0:26"),
++ DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3-init-clkctrl:00c0:8"),
++ DT_CLK(NULL, "otg_60m_gfclk", "l3-init-clkctrl:0040:24"),
++ DT_CLK(NULL, "per_mcbsp4_gfclk", "l4-per-clkctrl:00c0:24"),
++ DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu-sys-clkctrl:0000:20"),
++ DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu-sys-clkctrl:0000:22"),
++ DT_CLK(NULL, "sgx_clk_mux", "l3-gfx-clkctrl:0000:24"),
++ DT_CLK(NULL, "slimbus1_fclk_0", "abe-clkctrl:0040:8"),
++ DT_CLK(NULL, "slimbus1_fclk_1", "abe-clkctrl:0040:9"),
++ DT_CLK(NULL, "slimbus1_fclk_2", "abe-clkctrl:0040:10"),
++ DT_CLK(NULL, "slimbus1_slimbus_clk", "abe-clkctrl:0040:11"),
++ DT_CLK(NULL, "slimbus2_fclk_0", "l4-per-clkctrl:0118:8"),
++ DT_CLK(NULL, "slimbus2_fclk_1", "l4-per-clkctrl:0118:9"),
++ DT_CLK(NULL, "slimbus2_slimbus_clk", "l4-per-clkctrl:0118:10"),
++ DT_CLK(NULL, "stm_clk_div_ck", "emu-sys-clkctrl:0000:27"),
++ DT_CLK(NULL, "timer5_sync_mux", "abe-clkctrl:0048:24"),
++ DT_CLK(NULL, "timer6_sync_mux", "abe-clkctrl:0050:24"),
++ DT_CLK(NULL, "timer7_sync_mux", "abe-clkctrl:0058:24"),
++ DT_CLK(NULL, "timer8_sync_mux", "abe-clkctrl:0060:24"),
++ DT_CLK(NULL, "trace_clk_div_div_ck", "emu-sys-clkctrl:0000:24"),
++ DT_CLK(NULL, "usb_host_hs_func48mclk", "l3-init-clkctrl:0038:15"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3-init-clkctrl:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3-init-clkctrl:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3-init-clkctrl:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3-init-clkctrl:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3-init-clkctrl:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3-init-clkctrl:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init-clkctrl:0038:10"),
++ DT_CLK(NULL, "usb_otg_hs_xclk", "l3-init-clkctrl:0040:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3-init-clkctrl:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3-init-clkctrl:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3-init-clkctrl:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3-init-clkctrl:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3-init-clkctrl:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
+index 90e0a9ea63515..b4aff76eb3735 100644
+--- a/drivers/clk/ti/clk-54xx.c
++++ b/drivers/clk/ti/clk-54xx.c
+@@ -50,7 +50,7 @@ static const struct omap_clkctrl_bit_data omap5_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_dmic_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0018:26",
++ "abe-clkctrl:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -70,7 +70,7 @@ static const struct omap_clkctrl_bit_data omap5_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mcbsp1_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0028:26",
++ "abe-clkctrl:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -83,7 +83,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp2_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0030:26",
++ "abe-clkctrl:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -96,7 +96,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp3_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0038:26",
++ "abe-clkctrl:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -136,16 +136,16 @@ static const struct omap_clkctrl_bit_data omap5_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap5_abe_clkctrl_regs[] __initconst = {
+ { OMAP5_L4_ABE_CLKCTRL, NULL, 0, "abe_iclk" },
+- { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++ { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ { OMAP5_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+- { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+- { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+- { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+- { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+- { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+- { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+- { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++ { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++ { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++ { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++ { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++ { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++ { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++ { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++ { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ { 0 },
+ };
+
+@@ -268,12 +268,12 @@ static const struct omap_clkctrl_bit_data omap5_gpio8_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l4per_clkctrl_regs[] __initconst = {
+- { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0008:24" },
+- { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0010:24" },
+- { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0018:24" },
+- { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0020:24" },
+- { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0028:24" },
+- { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0030:24" },
++ { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0008:24" },
++ { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0010:24" },
++ { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0018:24" },
++ { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0020:24" },
++ { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0028:24" },
++ { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0030:24" },
+ { OMAP5_GPIO2_CLKCTRL, omap5_gpio2_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO3_CLKCTRL, omap5_gpio3_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO4_CLKCTRL, omap5_gpio4_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+@@ -345,7 +345,7 @@ static const struct omap_clkctrl_bit_data omap5_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_dss_clkctrl_regs[] __initconst = {
+- { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss_cm:clk:0000:8" },
++ { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss-clkctrl:0000:8" },
+ { 0 },
+ };
+
+@@ -378,7 +378,7 @@ static const struct omap_clkctrl_bit_data omap5_gpu_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_gpu_clkctrl_regs[] __initconst = {
+- { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu_cm:clk:0000:24" },
++ { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu-clkctrl:0000:24" },
+ { 0 },
+ };
+
+@@ -389,7 +389,7 @@ static const char * const omap5_mmc1_fclk_mux_parents[] __initconst = {
+ };
+
+ static const char * const omap5_mmc1_fclk_parents[] __initconst = {
+- "l3init_cm:clk:0008:24",
++ "l3init-clkctrl:0008:24",
+ NULL,
+ };
+
+@@ -405,7 +405,7 @@ static const struct omap_clkctrl_bit_data omap5_mmc1_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mmc2_fclk_parents[] __initconst = {
+- "l3init_cm:clk:0010:24",
++ "l3init-clkctrl:0010:24",
+ NULL,
+ };
+
+@@ -430,12 +430,12 @@ static const char * const omap5_usb_host_hs_hsic480m_p3_clk_parents[] __initcons
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3init_cm:clk:0038:24",
++ "l3init-clkctrl:0038:24",
+ NULL,
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3init_cm:clk:0038:25",
++ "l3init-clkctrl:0038:25",
+ NULL,
+ };
+
+@@ -494,8 +494,8 @@ static const struct omap_clkctrl_bit_data omap5_usb_otg_ss_bit_data[] __initcons
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l3init_clkctrl_regs[] __initconst = {
+- { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0008:25" },
+- { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0010:25" },
++ { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0008:25" },
++ { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0010:25" },
+ { OMAP5_USB_HOST_HS_CLKCTRL, omap5_usb_host_hs_bit_data, CLKF_SW_SUP, "l3init_60m_fclk" },
+ { OMAP5_USB_TLL_HS_CLKCTRL, omap5_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_SATA_CLKCTRL, omap5_sata_bit_data, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -519,7 +519,7 @@ static const struct omap_clkctrl_reg_data omap5_wkupaon_clkctrl_regs[] __initcon
+ { OMAP5_L4_WKUP_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP5_GPIO1_CLKCTRL, omap5_gpio1_bit_data, CLKF_HW_SUP, "wkupaon_iclk_mux" },
+- { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon_cm:clk:0020:24" },
++ { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon-clkctrl:0020:24" },
+ { OMAP5_COUNTER_32K_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -549,58 +549,58 @@ const struct omap_clkctrl_data omap5_clkctrl_data[] __initconst = {
+ static struct ti_dt_clk omap54xx_clks[] = {
+ DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
+- DT_CLK(NULL, "dmic_gfclk", "abe_cm:0018:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+- DT_CLK(NULL, "dss_32khz_clk", "dss_cm:0000:11"),
+- DT_CLK(NULL, "dss_48mhz_clk", "dss_cm:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "dss_cm:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "dss_cm:0000:10"),
+- DT_CLK(NULL, "gpio1_dbclk", "wkupaon_cm:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4per_cm:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4per_cm:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4per_cm:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4per_cm:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4per_cm:0060:8"),
+- DT_CLK(NULL, "gpio7_dbclk", "l4per_cm:00f0:8"),
+- DT_CLK(NULL, "gpio8_dbclk", "l4per_cm:00f8:8"),
+- DT_CLK(NULL, "mcbsp1_gfclk", "abe_cm:0028:24"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+- DT_CLK(NULL, "mcbsp2_gfclk", "abe_cm:0030:24"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+- DT_CLK(NULL, "mcbsp3_gfclk", "abe_cm:0038:24"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+- DT_CLK(NULL, "mmc1_32khz_clk", "l3init_cm:0008:8"),
+- DT_CLK(NULL, "mmc1_fclk", "l3init_cm:0008:25"),
+- DT_CLK(NULL, "mmc1_fclk_mux", "l3init_cm:0008:24"),
+- DT_CLK(NULL, "mmc2_fclk", "l3init_cm:0010:25"),
+- DT_CLK(NULL, "mmc2_fclk_mux", "l3init_cm:0010:24"),
+- DT_CLK(NULL, "sata_ref_clk", "l3init_cm:0068:8"),
+- DT_CLK(NULL, "timer10_gfclk_mux", "l4per_cm:0008:24"),
+- DT_CLK(NULL, "timer11_gfclk_mux", "l4per_cm:0010:24"),
+- DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon_cm:0020:24"),
+- DT_CLK(NULL, "timer2_gfclk_mux", "l4per_cm:0018:24"),
+- DT_CLK(NULL, "timer3_gfclk_mux", "l4per_cm:0020:24"),
+- DT_CLK(NULL, "timer4_gfclk_mux", "l4per_cm:0028:24"),
+- DT_CLK(NULL, "timer5_gfclk_mux", "abe_cm:0048:24"),
+- DT_CLK(NULL, "timer6_gfclk_mux", "abe_cm:0050:24"),
+- DT_CLK(NULL, "timer7_gfclk_mux", "abe_cm:0058:24"),
+- DT_CLK(NULL, "timer8_gfclk_mux", "abe_cm:0060:24"),
+- DT_CLK(NULL, "timer9_gfclk_mux", "l4per_cm:0030:24"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init_cm:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init_cm:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init_cm:0038:7"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init_cm:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init_cm:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init_cm:0038:6"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init_cm:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init_cm:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init_cm:0038:10"),
+- DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init_cm:00d0:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init_cm:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init_cm:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init_cm:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3init_cm:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3init_cm:0038:25"),
++ DT_CLK(NULL, "dmic_gfclk", "abe-clkctrl:0018:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++ DT_CLK(NULL, "dss_32khz_clk", "dss-clkctrl:0000:11"),
++ DT_CLK(NULL, "dss_48mhz_clk", "dss-clkctrl:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "dss-clkctrl:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "dss-clkctrl:0000:10"),
++ DT_CLK(NULL, "gpio1_dbclk", "wkupaon-clkctrl:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4per-clkctrl:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4per-clkctrl:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4per-clkctrl:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4per-clkctrl:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4per-clkctrl:0060:8"),
++ DT_CLK(NULL, "gpio7_dbclk", "l4per-clkctrl:00f0:8"),
++ DT_CLK(NULL, "gpio8_dbclk", "l4per-clkctrl:00f8:8"),
++ DT_CLK(NULL, "mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++ DT_CLK(NULL, "mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++ DT_CLK(NULL, "mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++ DT_CLK(NULL, "mmc1_32khz_clk", "l3init-clkctrl:0008:8"),
++ DT_CLK(NULL, "mmc1_fclk", "l3init-clkctrl:0008:25"),
++ DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
++ DT_CLK(NULL, "mmc2_fclk", "l3init-clkctrl:0010:25"),
++ DT_CLK(NULL, "mmc2_fclk_mux", "l3init-clkctrl:0010:24"),
++ DT_CLK(NULL, "sata_ref_clk", "l3init-clkctrl:0068:8"),
++ DT_CLK(NULL, "timer10_gfclk_mux", "l4per-clkctrl:0008:24"),
++ DT_CLK(NULL, "timer11_gfclk_mux", "l4per-clkctrl:0010:24"),
++ DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon-clkctrl:0020:24"),
++ DT_CLK(NULL, "timer2_gfclk_mux", "l4per-clkctrl:0018:24"),
++ DT_CLK(NULL, "timer3_gfclk_mux", "l4per-clkctrl:0020:24"),
++ DT_CLK(NULL, "timer4_gfclk_mux", "l4per-clkctrl:0028:24"),
++ DT_CLK(NULL, "timer5_gfclk_mux", "abe-clkctrl:0048:24"),
++ DT_CLK(NULL, "timer6_gfclk_mux", "abe-clkctrl:0050:24"),
++ DT_CLK(NULL, "timer7_gfclk_mux", "abe-clkctrl:0058:24"),
++ DT_CLK(NULL, "timer8_gfclk_mux", "abe-clkctrl:0060:24"),
++ DT_CLK(NULL, "timer9_gfclk_mux", "l4per-clkctrl:0030:24"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init-clkctrl:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init-clkctrl:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init-clkctrl:0038:7"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init-clkctrl:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init-clkctrl:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init-clkctrl:0038:6"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init-clkctrl:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init-clkctrl:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init-clkctrl:0038:10"),
++ DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init-clkctrl:00d0:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init-clkctrl:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init-clkctrl:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init-clkctrl:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3init-clkctrl:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3init-clkctrl:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 617360e20d86f..e23bf04586320 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -528,10 +528,6 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ char *c;
+ u16 soc_mask = 0;
+
+- if (!(ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) &&
+- of_node_name_eq(node, "clk"))
+- ti_clk_features.flags |= TI_CLK_CLKCTRL_COMPAT;
+-
+ addrp = of_get_address(node, 0, NULL, NULL);
+ addr = (u32)of_translate_address(node, addrp);
+
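The mass rename above and this clkctrl.c hunk belong together: the driver no longer flips into legacy-compat mode when it sees a subnode literally named "clk", so consumer lookup strings must use the new "<node-name>:<offset>:<bit>" spelling derived from the clkctrl node name. A hedged sketch of how such a name can be assembled (the format string is illustrative; the exact one lives in drivers/clk/ti/clkctrl.c):

    #include <linux/of.h>
    #include <linux/slab.h>

    /* Illustrative: "%pOFn" prints the device-tree node name, giving
     * e.g. "l3-init-clkctrl:0038:24" for offset 0x38, bit 24.
     */
    static char *example_clkctrl_name(struct device_node *np, u16 offset, u8 bit)
    {
            return kasprintf(GFP_KERNEL, "%pOFn:%04x:%d", np, offset, bit);
    }
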
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index c741da02b67e9..a183d93bd7e29 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -982,6 +982,11 @@ static int dw_axi_dma_chan_slave_config(struct dma_chan *dchan,
+ static void axi_chan_dump_lli(struct axi_dma_chan *chan,
+ struct axi_dma_hw_desc *desc)
+ {
++ if (!desc->lli) {
++ dev_err(dchan2dev(&chan->vc.chan), "NULL LLI\n");
++ return;
++ }
++
+ dev_err(dchan2dev(&chan->vc.chan),
+ "SAR: 0x%llx DAR: 0x%llx LLP: 0x%llx BTS 0x%x CTL: 0x%x:%08x",
+ le64_to_cpu(desc->lli->sar),
+@@ -1049,6 +1054,11 @@ static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+
+ /* The completed descriptor currently is in the head of vc list */
+ vd = vchan_next_desc(&chan->vc);
++ if (!vd) {
++ dev_err(chan2dev(chan), "BUG: %s, IRQ with no descriptors\n",
++ axi_chan_name(chan));
++ goto out;
++ }
+
+ if (chan->cyclic) {
+ desc = vd_to_axi_desc(vd);
+@@ -1078,6 +1088,7 @@ static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+ axi_chan_start_first_queued(chan);
+ }
+
++out:
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+ }
+
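Both dw-axi-dmac hunks guard against a completion interrupt arriving with nothing queued: the dump helper now refuses a NULL LLI, and the completion path logs and jumps to the new out: label so the channel lock is still released. The shape of that lock-safe early exit, as a sketch:

    /* Illustrative: an error path inside a spin_lock_irqsave() section
     * must still reach the unlock, hence a single exit label instead of
     * an early return.
     */
    spin_lock_irqsave(&chan->vc.lock, flags);

    vd = vchan_next_desc(&chan->vc);
    if (!vd) {
            dev_err(dev, "IRQ with no queued descriptor\n");
            goto out;               /* never return with the lock held */
    }

    /* ... normal completion handling ... */
    out:
    spin_unlock_irqrestore(&chan->vc.lock, flags);
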
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 2138b80435abf..474d3ba8ec9f9 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -1237,11 +1237,8 @@ static int sprd_dma_remove(struct platform_device *pdev)
+ {
+ struct sprd_dma_dev *sdev = platform_get_drvdata(pdev);
+ struct sprd_dma_chn *c, *cn;
+- int ret;
+
+- ret = pm_runtime_get_sync(&pdev->dev);
+- if (ret < 0)
+- return ret;
++ pm_runtime_get_sync(&pdev->dev);
+
+ /* explicitly free the irq */
+ if (sdev->irq > 0)
+diff --git a/drivers/dma/tegra186-gpc-dma.c b/drivers/dma/tegra186-gpc-dma.c
+index 05cd451f541d8..fa9bda4a2bc6f 100644
+--- a/drivers/dma/tegra186-gpc-dma.c
++++ b/drivers/dma/tegra186-gpc-dma.c
+@@ -157,8 +157,8 @@
+ * If any burst is in flight and DMA paused then this is the time to complete
+ * on-flight burst and update DMA status register.
+ */
+-#define TEGRA_GPCDMA_BURST_COMPLETE_TIME 20
+-#define TEGRA_GPCDMA_BURST_COMPLETION_TIMEOUT 100
++#define TEGRA_GPCDMA_BURST_COMPLETE_TIME 10
++#define TEGRA_GPCDMA_BURST_COMPLETION_TIMEOUT 5000 /* 5 msec */
+
+ /* Channel base address offset from GPCDMA base address */
+ #define TEGRA_GPCDMA_CHANNEL_BASE_ADD_OFFSET 0x20000
+@@ -432,6 +432,17 @@ static int tegra_dma_device_resume(struct dma_chan *dc)
+ return 0;
+ }
+
++static inline int tegra_dma_pause_noerr(struct tegra_dma_channel *tdc)
++{
++ /* Return 0 irrespective of PAUSE status.
++ * This is useful to recover channels that can exit out of flush
++ * state when the channel is disabled.
++ */
++
++ tegra_dma_pause(tdc);
++ return 0;
++}
++
+ static void tegra_dma_disable(struct tegra_dma_channel *tdc)
+ {
+ u32 csr, status;
+@@ -1292,6 +1303,14 @@ static const struct tegra_dma_chip_data tegra194_dma_chip_data = {
+ .terminate = tegra_dma_pause,
+ };
+
++static const struct tegra_dma_chip_data tegra234_dma_chip_data = {
++ .nr_channels = 31,
++ .channel_reg_size = SZ_64K,
++ .max_dma_count = SZ_1G,
++ .hw_support_pause = true,
++ .terminate = tegra_dma_pause_noerr,
++};
++
+ static const struct of_device_id tegra_dma_of_match[] = {
+ {
+ .compatible = "nvidia,tegra186-gpcdma",
+@@ -1299,6 +1318,9 @@ static const struct of_device_id tegra_dma_of_match[] = {
+ }, {
+ .compatible = "nvidia,tegra194-gpcdma",
+ .data = &tegra194_dma_chip_data,
++ }, {
++ .compatible = "nvidia,tegra234-gpcdma",
++ .data = &tegra234_dma_chip_data,
+ }, {
+ },
+ };
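Tegra234 support reuses the tegra186 driver wholesale; the new chip data only swaps the .terminate hook for tegra_dma_pause_noerr(), which ignores the PAUSE status so a channel stuck in flush state can still be disabled and recovered. The probe side picks the per-SoC data out of the matched of_device_id entry; a sketch of that retrieval (function names hypothetical):

    #include <linux/of_device.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
            const struct tegra_dma_chip_data *cdata;

            /* returns the .data pointer of the matched of_device_id entry */
            cdata = of_device_get_match_data(&pdev->dev);
            if (!cdata)
                    return -ENODEV;

            /* cdata->terminate is tegra_dma_pause or tegra_dma_pause_noerr */
            return 0;
    }
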
+diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+index c6cc493a54866..2b97b8a96fb49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
++++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+@@ -148,30 +148,22 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ struct amdgpu_reset_context *reset_context)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
++ struct list_head *reset_device_list = reset_context->reset_device_list;
+ struct amdgpu_device *tmp_adev = NULL;
+- struct list_head reset_device_list;
+ int r = 0;
+
+ dev_dbg(adev->dev, "aldebaran perform hw reset\n");
++
++ if (reset_device_list == NULL)
++ return -EINVAL;
++
+ if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
+ reset_context->hive == NULL) {
+ /* Wrong context, return error */
+ return -EINVAL;
+ }
+
+- INIT_LIST_HEAD(&reset_device_list);
+- if (reset_context->hive) {
+- list_for_each_entry (tmp_adev,
+- &reset_context->hive->device_list,
+- gmc.xgmi.head)
+- list_add_tail(&tmp_adev->reset_list,
+- &reset_device_list);
+- } else {
+- list_add_tail(&reset_context->reset_req_dev->reset_list,
+- &reset_device_list);
+- }
+-
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ mutex_lock(&tmp_adev->reset_cntl->reset_lock);
+ tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_MODE2;
+ }
+@@ -179,7 +171,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ * Mode2 reset doesn't need any sync between nodes in XGMI hive, instead launch
+ * them together so that they can be completed asynchronously on multiple nodes
+ */
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ /* For XGMI run all resets in parallel to speed up the process */
+ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
+ if (!queue_work(system_unbound_wq,
+@@ -197,7 +189,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+
+ /* For XGMI wait for all resets to complete before proceed */
+ if (!r) {
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
+ flush_work(&tmp_adev->reset_cntl->reset_work);
+ r = tmp_adev->asic_reset_res;
+@@ -207,7 +199,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ }
+ }
+
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ mutex_unlock(&tmp_adev->reset_cntl->reset_lock);
+ tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_NONE;
+ }
+@@ -339,10 +331,13 @@ static int
+ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ struct amdgpu_reset_context *reset_context)
+ {
++ struct list_head *reset_device_list = reset_context->reset_device_list;
+ struct amdgpu_device *tmp_adev = NULL;
+- struct list_head reset_device_list;
+ int r;
+
++ if (reset_device_list == NULL)
++ return -EINVAL;
++
+ if (reset_context->reset_req_dev->ip_versions[MP1_HWIP][0] ==
+ IP_VERSION(13, 0, 2) &&
+ reset_context->hive == NULL) {
+@@ -350,19 +345,7 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ return -EINVAL;
+ }
+
+- INIT_LIST_HEAD(&reset_device_list);
+- if (reset_context->hive) {
+- list_for_each_entry (tmp_adev,
+- &reset_context->hive->device_list,
+- gmc.xgmi.head)
+- list_add_tail(&tmp_adev->reset_list,
+- &reset_device_list);
+- } else {
+- list_add_tail(&reset_context->reset_req_dev->reset_list,
+- &reset_device_list);
+- }
+-
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ dev_info(tmp_adev->dev,
+ "GPU reset succeeded, trying to resume\n");
+ r = aldebaran_mode2_restore_ip(tmp_adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+index fd8f3731758ed..b81b77a9efa61 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+@@ -314,7 +314,7 @@ amdgpu_atomfirmware_get_vram_info(struct amdgpu_device *adev,
+ mem_channel_number = vram_info->v30.channel_num;
+ mem_channel_width = vram_info->v30.channel_width;
+ if (vram_width)
+- *vram_width = mem_channel_number * mem_channel_width;
++ *vram_width = mem_channel_number * (1 << mem_channel_width);
+ break;
+ default:
+ return -EINVAL;
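The one-line atomfirmware fix treats channel_width in the v3.0 vram_info table as a log2 value, so the effective bus width is channels * 2^field rather than channels * field. A runnable check of the arithmetic (values illustrative):

    #include <stdio.h>

    int main(void)
    {
            unsigned int mem_channel_number = 8, mem_channel_width = 5;
            unsigned int vram_width = mem_channel_number * (1u << mem_channel_width);

            printf("%u bits\n", vram_width);  /* 256 bits, not 8 * 5 = 40 */
            return 0;
    }
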
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index d8f1335bc68f4..b7bae833c804b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -837,16 +837,12 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ continue;
+
+ r = amdgpu_vm_bo_update(adev, bo_va, false);
+- if (r) {
+- mutex_unlock(&p->bo_list->bo_list_mutex);
++ if (r)
+ return r;
+- }
+
+ r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+- if (r) {
+- mutex_unlock(&p->bo_list->bo_list_mutex);
++ if (r)
+ return r;
+- }
+ }
+
+ r = amdgpu_vm_handle_moved(adev, vm);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 58df107e3beba..3adebb63680e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4746,6 +4746,8 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
+ reset_list);
+ amdgpu_reset_reg_dumps(tmp_adev);
++
++ reset_context->reset_device_list = device_list_handle;
+ r = amdgpu_reset_perform_reset(tmp_adev, reset_context);
+ /* If reset handler not implemented, continue; otherwise return */
+ if (r == -ENOSYS)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+index 1949dbe28a865..0c3ad85d84a43 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+@@ -37,6 +37,7 @@ struct amdgpu_reset_context {
+ struct amdgpu_device *reset_req_dev;
+ struct amdgpu_job *job;
+ struct amdgpu_hive_info *hive;
++ struct list_head *reset_device_list;
+ unsigned long flags;
+ };
+
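Taken together, the aldebaran and amdgpu_device hunks move ownership of the reset list to the caller: amdgpu_do_asic_reset() stores its already-built device_list_handle in the new reset_device_list member, and the mode2 handlers simply walk that pointer (failing fast on NULL) instead of each rebuilding the list from the hive. A loose sketch of the split, under the patch's own structures:

    /* Illustrative: the caller owns the list, handlers only iterate it. */
    LIST_HEAD(device_list);
    struct amdgpu_reset_context ctx = { .reset_device_list = &device_list };

    /* caller: populate device_list once (hive members or a single device) */

    /* handler: */
    if (!ctx.reset_device_list)
            return -EINVAL;
    list_for_each_entry(tmp_adev, ctx.reset_device_list, reset_list)
            perform_reset(tmp_adev);        /* perform_reset() hypothetical */
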
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+index 108e8e8a1a367..576849e952964 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+@@ -496,8 +496,7 @@ static int amdgpu_vkms_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = YRES_MAX;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+index 9c964cd3b5d4e..288fce7dc0ed1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+@@ -2796,8 +2796,7 @@ static int dce_v10_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+index e0ad9f27dc3f9..cbe5250b31cb4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+@@ -2914,8 +2914,7 @@ static int dce_v11_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+index 3caf6f386042f..982855e6cf52e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+@@ -2673,8 +2673,7 @@ static int dce_v6_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_width = 16384;
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+ adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+index 7c75df5bffed3..cf44d1b054acf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+@@ -2693,8 +2693,11 @@ static int dce_v8_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ if (adev->asic_type == CHIP_HAWAII)
++ /* disable prefer shadow for now due to hibernation issues */
++ adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ else
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d055d3c7eed6a..0424570c736fa 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3894,8 +3894,11 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ if (adev->asic_type == CHIP_HAWAII)
++ /* disable prefer shadow for now due to hibernation issues */
++ adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ else
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+ /* indicates support for immediate flip */
+ adev_to_drm(adev)->mode_config.async_page_flip = true;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+index 76f863eb86ef2..401ccc676ae9a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+@@ -372,7 +372,7 @@ static struct stream_encoder *dcn303_stream_encoder_create(enum engine_id eng_id
+ int afmt_inst;
+
+ /* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
+- if (eng_id <= ENGINE_ID_DIGE) {
++ if (eng_id <= ENGINE_ID_DIGB) {
+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+ } else
+diff --git a/drivers/gpu/drm/bridge/lvds-codec.c b/drivers/gpu/drm/bridge/lvds-codec.c
+index 702ea803a743c..39e7004de7200 100644
+--- a/drivers/gpu/drm/bridge/lvds-codec.c
++++ b/drivers/gpu/drm/bridge/lvds-codec.c
+@@ -180,7 +180,7 @@ static int lvds_codec_probe(struct platform_device *pdev)
+ of_node_put(bus_node);
+ if (ret == -ENODEV) {
+ dev_warn(dev, "missing 'data-mapping' DT property\n");
+- } else if (ret) {
++ } else if (ret < 0) {
+ dev_err(dev, "invalid 'data-mapping' DT property\n");
+ return ret;
+ } else {
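The lvds-codec fix is a single comparison, but it changes behaviour: the data-mapping helper can return a non-negative media-bus format on success, so `else if (ret)` was sending valid lookups down the error path. For APIs that return either a negative errno or a meaningful non-negative value, only `ret < 0` is an error; a sketch of the pattern:

    ret = lookup_value(node);       /* >= 0: value, < 0: -errno (illustrative) */
    if (ret == -ENODEV)
            dev_warn(dev, "optional property missing\n");
    else if (ret < 0)
            return ret;             /* real error */
    else
            use_value(ret);         /* non-negative result is success */
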
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
+index 06b1b188ce5a4..cbe607cadd7fb 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
+@@ -268,7 +268,7 @@ static void __i915_gem_object_free_mmaps(struct drm_i915_gem_object *obj)
+ */
+ void __i915_gem_object_pages_fini(struct drm_i915_gem_object *obj)
+ {
+- assert_object_held(obj);
++ assert_object_held_shared(obj);
+
+ if (!list_empty(&obj->vma.list)) {
+ struct i915_vma *vma;
+@@ -331,15 +331,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
+ continue;
+ }
+
+- if (!i915_gem_object_trylock(obj, NULL)) {
+- /* busy, toss it back to the pile */
+- if (llist_add(&obj->freed, &i915->mm.free_list))
+- queue_delayed_work(i915->wq, &i915->mm.free_work, msecs_to_jiffies(10));
+- continue;
+- }
+-
+ __i915_gem_object_pages_fini(obj);
+- i915_gem_object_unlock(obj);
+ __i915_gem_free_object(obj);
+
+ /* But keep the pointer alive for RCU-protected lookups */
+@@ -359,7 +351,7 @@ void i915_gem_flush_free_objects(struct drm_i915_private *i915)
+ static void __i915_gem_free_work(struct work_struct *work)
+ {
+ struct drm_i915_private *i915 =
+- container_of(work, struct drm_i915_private, mm.free_work.work);
++ container_of(work, struct drm_i915_private, mm.free_work);
+
+ i915_gem_flush_free_objects(i915);
+ }
+@@ -391,7 +383,7 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
+ */
+
+ if (llist_add(&obj->freed, &i915->mm.free_list))
+- queue_delayed_work(i915->wq, &i915->mm.free_work, 0);
++ queue_work(i915->wq, &i915->mm.free_work);
+ }
+
+ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
+@@ -719,7 +711,7 @@ bool i915_gem_object_placement_possible(struct drm_i915_gem_object *obj,
+
+ void i915_gem_init__objects(struct drm_i915_private *i915)
+ {
+- INIT_DELAYED_WORK(&i915->mm.free_work, __i915_gem_free_work);
++ INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
+ }
+
+ void i915_objects_module_exit(void)
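With __i915_gem_object_pages_fini() now content with a shared object hold, the free worker no longer needs the trylock-or-requeue dance, and the deferred free collapses from a self-rearming delayed work into a plain work item. The underlying llist-plus-work pattern in isolation, as a sketch (types hypothetical):

    /* Illustrative lock-free deferred free: producers push onto an llist;
     * llist_add() returns true only when the list was empty beforehand,
     * so exactly one pusher schedules the drain work.
     */
    static void example_free(struct example_obj *o, struct example_ctx *c)
    {
            if (llist_add(&o->freed, &c->free_list))
                    queue_work(c->wq, &c->free_work);
    }
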
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+index 2c88bdb8ff7cc..4e224ef359406 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+@@ -335,7 +335,6 @@ struct drm_i915_gem_object {
+ #define I915_BO_READONLY BIT(7)
+ #define I915_TILING_QUIRK_BIT 8 /* unknown swizzling; do not release! */
+ #define I915_BO_PROTECTED BIT(9)
+-#define I915_BO_WAS_BOUND_BIT 10
+ /**
+ * @mem_flags - Mutable placement-related flags
+ *
+@@ -598,6 +597,8 @@ struct drm_i915_gem_object {
+ * pages were last acquired.
+ */
+ bool dirty:1;
++
++ u32 tlb;
+ } mm;
+
+ struct {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index 97c820eee115a..8357dbdcab5cb 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -6,14 +6,15 @@
+
+ #include <drm/drm_cache.h>
+
++#include "gt/intel_gt.h"
++#include "gt/intel_gt_pm.h"
++
+ #include "i915_drv.h"
+ #include "i915_gem_object.h"
+ #include "i915_scatterlist.h"
+ #include "i915_gem_lmem.h"
+ #include "i915_gem_mman.h"
+
+-#include "gt/intel_gt.h"
+-
+ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
+ struct sg_table *pages,
+ unsigned int sg_page_sizes)
+@@ -190,6 +191,18 @@ static void unmap_object(struct drm_i915_gem_object *obj, void *ptr)
+ vunmap(ptr);
+ }
+
++static void flush_tlb_invalidate(struct drm_i915_gem_object *obj)
++{
++ struct drm_i915_private *i915 = to_i915(obj->base.dev);
++ struct intel_gt *gt = to_gt(i915);
++
++ if (!obj->mm.tlb)
++ return;
++
++ intel_gt_invalidate_tlb(gt, obj->mm.tlb);
++ obj->mm.tlb = 0;
++}
++
+ struct sg_table *
+ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ {
+@@ -215,13 +228,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ __i915_gem_object_reset_page_iter(obj);
+ obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+
+- if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
+- struct drm_i915_private *i915 = to_i915(obj->base.dev);
+- intel_wakeref_t wakeref;
+-
+- with_intel_runtime_pm_if_active(&i915->runtime_pm, wakeref)
+- intel_gt_invalidate_tlbs(to_gt(i915));
+- }
++ flush_tlb_invalidate(obj);
+
+ return pages;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index 531af6ad70071..a47dcf7663aef 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -10,7 +10,9 @@
+ #include "pxp/intel_pxp.h"
+
+ #include "i915_drv.h"
++#include "i915_perf_oa_regs.h"
+ #include "intel_context.h"
++#include "intel_engine_pm.h"
+ #include "intel_engine_regs.h"
+ #include "intel_gt.h"
+ #include "intel_gt_buffer_pool.h"
+@@ -34,8 +36,6 @@ static void __intel_gt_init_early(struct intel_gt *gt)
+ {
+ spin_lock_init(>->irq_lock);
+
+- mutex_init(>->tlb_invalidate_lock);
+-
+ INIT_LIST_HEAD(>->closed_vma);
+ spin_lock_init(>->closed_lock);
+
+@@ -46,6 +46,8 @@ static void __intel_gt_init_early(struct intel_gt *gt)
+ intel_gt_init_reset(gt);
+ intel_gt_init_requests(gt);
+ intel_gt_init_timelines(gt);
++ mutex_init(>->tlb.invalidate_lock);
++ seqcount_mutex_init(>->tlb.seqno, >->tlb.invalidate_lock);
+ intel_gt_pm_init_early(gt);
+
+ intel_uc_init_early(>->uc);
+@@ -831,6 +833,7 @@ void intel_gt_driver_late_release_all(struct drm_i915_private *i915)
+ intel_gt_fini_requests(gt);
+ intel_gt_fini_reset(gt);
+ intel_gt_fini_timelines(gt);
++ mutex_destroy(>->tlb.invalidate_lock);
+ intel_engines_free(gt);
+ }
+ }
+@@ -1163,7 +1166,7 @@ get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
+ return rb;
+ }
+
+-void intel_gt_invalidate_tlbs(struct intel_gt *gt)
++static void mmio_invalidate_full(struct intel_gt *gt)
+ {
+ static const i915_reg_t gen8_regs[] = {
+ [RENDER_CLASS] = GEN8_RTCR,
+@@ -1181,13 +1184,11 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ struct drm_i915_private *i915 = gt->i915;
+ struct intel_uncore *uncore = gt->uncore;
+ struct intel_engine_cs *engine;
++ intel_engine_mask_t awake, tmp;
+ enum intel_engine_id id;
+ const i915_reg_t *regs;
+ unsigned int num = 0;
+
+- if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
+- return;
+-
+ if (GRAPHICS_VER(i915) == 12) {
+ regs = gen12_regs;
+ num = ARRAY_SIZE(gen12_regs);
+@@ -1202,28 +1203,41 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ "Platform does not implement TLB invalidation!"))
+ return;
+
+- GEM_TRACE("\n");
+-
+- assert_rpm_wakelock_held(&i915->runtime_pm);
+-
+- mutex_lock(>->tlb_invalidate_lock);
+ intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
+
+ spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
+
++ awake = 0;
+ for_each_engine(engine, gt, id) {
+ struct reg_and_bit rb;
+
++ if (!intel_engine_pm_is_awake(engine))
++ continue;
++
+ rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
+ if (!i915_mmio_reg_offset(rb.reg))
+ continue;
+
+ intel_uncore_write_fw(uncore, rb.reg, rb.bit);
++ awake |= engine->mask;
+ }
+
++ GT_TRACE(gt, "invalidated engines %08x\n", awake);
++
++ /* Wa_2207587034:tgl,dg1,rkl,adl-s,adl-p */
++ if (awake &&
++ (IS_TIGERLAKE(i915) ||
++ IS_DG1(i915) ||
++ IS_ROCKETLAKE(i915) ||
++ IS_ALDERLAKE_S(i915) ||
++ IS_ALDERLAKE_P(i915)))
++ intel_uncore_write_fw(uncore, GEN12_OA_TLB_INV_CR, 1);
++
+ spin_unlock_irq(&uncore->lock);
+
+- for_each_engine(engine, gt, id) {
++ for_each_engine_masked(engine, gt, awake, tmp) {
++ struct reg_and_bit rb;
++
+ /*
+ * HW architecture suggest typical invalidation time at 40us,
+ * with pessimistic cases up to 100us and a recommendation to
+@@ -1231,12 +1245,8 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ */
+ const unsigned int timeout_us = 100;
+ const unsigned int timeout_ms = 4;
+- struct reg_and_bit rb;
+
+ rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
+- if (!i915_mmio_reg_offset(rb.reg))
+- continue;
+-
+ if (__intel_wait_for_register_fw(uncore,
+ rb.reg, rb.bit, 0,
+ timeout_us, timeout_ms,
+@@ -1253,5 +1263,38 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ * transitions.
+ */
+ intel_uncore_forcewake_put_delayed(uncore, FORCEWAKE_ALL);
+- mutex_unlock(>->tlb_invalidate_lock);
++}
++
++static bool tlb_seqno_passed(const struct intel_gt *gt, u32 seqno)
++{
++ u32 cur = intel_gt_tlb_seqno(gt);
++
++ /* Only skip if a *full* TLB invalidate barrier has passed */
++ return (s32)(cur - ALIGN(seqno, 2)) > 0;
++}
++
++void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno)
++{
++ intel_wakeref_t wakeref;
++
++ if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
++ return;
++
++ if (intel_gt_is_wedged(gt))
++ return;
++
++ if (tlb_seqno_passed(gt, seqno))
++ return;
++
++ with_intel_gt_pm_if_awake(gt, wakeref) {
++ mutex_lock(&gt->tlb.invalidate_lock);
++ if (tlb_seqno_passed(gt, seqno))
++ goto unlock;
++
++ mmio_invalidate_full(gt);
++
++ write_seqcount_invalidate(&gt->tlb.seqno);
+unlock:
++ mutex_unlock(&gt->tlb.invalidate_lock);
++ }
+ }
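
The batching above works because every full invalidation advances gt->tlb.seqno by two, while intel_gt_next_invalidate_tlb_full() records the counter with its low bit set; tlb_seqno_passed() then rounds the recorded value up to the next even counter, so an unbind only counts as flushed once a full invalidation has completed strictly after the window in which it was recorded. A minimal userspace sketch of that wrap-safe check follows; ALIGN2 and all of the values are illustrative, not taken from i915:

#include <stdint.h>
#include <stdio.h>

/* Round up to even, like the kernel's ALIGN(x, 2). */
#define ALIGN2(x)	(((x) + 1u) & ~1u)

/* True once a *full* invalidation barrier has completed after @seqno
 * was recorded; the signed cast is the usual serial-number trick that
 * keeps the comparison correct across counter wraparound.
 */
static int tlb_seqno_passed(uint32_t cur, uint32_t seqno)
{
	return (int32_t)(cur - ALIGN2(seqno)) > 0;
}

int main(void)
{
	/* Recorded at counter 4 (4 | 1 == 5): no barrier since, not passed. */
	printf("%d\n", tlb_seqno_passed(4, 4 | 1));		/* 0 */
	/* Recorded at counter 0 (0 | 1 == 1): later barriers cover it. */
	printf("%d\n", tlb_seqno_passed(4, 0 | 1));		/* 1 */
	/* The test survives 32-bit wraparound of the counter. */
	printf("%d\n", tlb_seqno_passed(2, 0xfffffffcu | 1));	/* 1 */
	return 0;
}
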
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
+index 44c6cb63ccbc8..d5a2af76d6a52 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt.h
+@@ -123,7 +123,17 @@ void intel_gt_info_print(const struct intel_gt_info *info,
+
+ void intel_gt_watchdog_work(struct work_struct *work);
+
+-void intel_gt_invalidate_tlbs(struct intel_gt *gt);
++static inline u32 intel_gt_tlb_seqno(const struct intel_gt *gt)
++{
++ return seqprop_sequence(&gt->tlb.seqno);
++}
++
++static inline u32 intel_gt_next_invalidate_tlb_full(const struct intel_gt *gt)
++{
++ return intel_gt_tlb_seqno(gt) | 1;
++}
++
++void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno);
+
+ struct resource intel_pci_resource(struct pci_dev *pdev, int bar);
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+index bc898df7a48cc..a334787a4939f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+@@ -55,6 +55,9 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt)
+ for (tmp = 1, intel_gt_pm_get(gt); tmp; \
+ intel_gt_pm_put(gt), tmp = 0)
+
++#define with_intel_gt_pm_if_awake(gt, wf) \
++ for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt), wf = 0)
++
+ static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
+ {
+ return intel_wakeref_wait_for_idle(&gt->wakeref);
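
with_intel_gt_pm_if_awake() above follows the kernel's for-loop scoping idiom: the initializer tries to acquire the reference, the body runs at most once and only if acquisition succeeded, and the increment expression releases the reference and terminates the loop. A self-contained sketch of the same shape, assuming illustrative try_get()/put() helpers rather than the real pm calls:

#include <stdbool.h>
#include <stdio.h>

static bool try_get(int *ref)	{ if (*ref <= 0) return false; ++*ref; return true; }
static void put(int *ref)	{ --*ref; }

/* Body runs once iff try_get() succeeds; put() always pairs with it. */
#define with_resource_if_live(ref, ok) \
	for ((ok) = try_get(ref); (ok); put(ref), (ok) = false)

int main(void)
{
	int ref = 1;	/* "awake": one live reference */
	bool ok;

	with_resource_if_live(&ref, ok)
		printf("held, ref=%d\n", ref);	/* held, ref=2 */

	ref = 0;	/* "asleep": acquisition fails, body is skipped */
	with_resource_if_live(&ref, ok)
		printf("never printed\n");

	printf("final ref=%d\n", ref);		/* final ref=0 */
	return 0;
}

The appeal of the idiom is that the release is attached to the construct itself, so a body written as a plain block cannot leak the reference on its normal exit path; an early return or goto still bypasses it, which is why bodies of such helpers are kept short.
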
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+index edd7a3cf5f5f5..b1120f73dc8c4 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+@@ -11,6 +11,7 @@
+ #include <linux/llist.h>
+ #include <linux/mutex.h>
+ #include <linux/notifier.h>
++#include <linux/seqlock.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ #include <linux/workqueue.h>
+@@ -76,7 +77,22 @@ struct intel_gt {
+ struct intel_uc uc;
+ struct intel_gsc gsc;
+
+- struct mutex tlb_invalidate_lock;
++ struct {
++ /* Serialize global tlb invalidations */
++ struct mutex invalidate_lock;
++
++ /*
++ * Batch TLB invalidations
++ *
++ * After unbinding the PTE, we need to ensure the TLBs
++ * are invalidated prior to releasing the physical pages.
++ * But we only need one such invalidation for all unbinds,
++ * so we track how many TLB invalidations have been
++ * performed since unbinding the PTE and only emit an extra
++ * invalidate if no full barrier has been passed.
++ */
++ seqcount_mutex_t seqno;
++ } tlb;
+
+ struct i915_wa_list wa_list;
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
+index 2c35324b5f68c..2b10b96b17b5b 100644
+--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
++++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
+@@ -708,7 +708,7 @@ intel_context_migrate_copy(struct intel_context *ce,
+ u8 src_access, dst_access;
+ struct i915_request *rq;
+ int src_sz, dst_sz;
+- bool ccs_is_src;
++ bool ccs_is_src, overwrite_ccs;
+ int err;
+
+ GEM_BUG_ON(ce->vm != ce->engine->gt->migrate.context->vm);
+@@ -749,6 +749,8 @@ intel_context_migrate_copy(struct intel_context *ce,
+ get_ccs_sg_sgt(&it_ccs, bytes_to_cpy);
+ }
+
++ overwrite_ccs = HAS_FLAT_CCS(i915) && !ccs_bytes_to_cpy && dst_is_lmem;
++
+ src_offset = 0;
+ dst_offset = CHUNK_SZ;
+ if (HAS_64K_PAGES(ce->engine->i915)) {
+@@ -852,6 +854,25 @@ intel_context_migrate_copy(struct intel_context *ce,
+ if (err)
+ goto out_rq;
+ ccs_bytes_to_cpy -= ccs_sz;
++ } else if (overwrite_ccs) {
++ err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
++ if (err)
++ goto out_rq;
++
++ /*
++ * While we can't always restore/manage the CCS state,
++ * we still need to ensure we don't leak the CCS state
++ * from the previous user, so make sure we overwrite it
++ * with something.
++ */
++ err = emit_copy_ccs(rq, dst_offset, INDIRECT_ACCESS,
++ dst_offset, DIRECT_ACCESS, len);
++ if (err)
++ goto out_rq;
++
++ err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
++ if (err)
++ goto out_rq;
+ }
+
+ /* Arbitration is re-enabled between requests. */
+diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+index d8b94d6385598..6ee8d11270168 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+@@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
+ void ppgtt_unbind_vma(struct i915_address_space *vm,
+ struct i915_vma_resource *vma_res)
+ {
+- if (vma_res->allocated)
+- vm->clear_range(vm, vma_res->start, vma_res->vma_size);
++ if (!vma_res->allocated)
++ return;
++
++ vm->clear_range(vm, vma_res->start, vma_res->vma_size);
++ if (vma_res->tlb)
++ vma_invalidate_tlb(vm, vma_res->tlb);
+ }
+
+ static unsigned long pd_count(u64 size, int shift)
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 00d7eeae33bd3..5184d70d48382 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -254,7 +254,7 @@ struct i915_gem_mm {
+ * List of objects which are pending destruction.
+ */
+ struct llist_head free_list;
+- struct delayed_work free_work;
++ struct work_struct free_work;
+ /**
+ * Count of objects pending destructions. Used to skip needlessly
+ * waiting on an RCU barrier if no objects are waiting to be freed.
+@@ -1415,7 +1415,7 @@ static inline void i915_gem_drain_freed_objects(struct drm_i915_private *i915)
+ * armed the work again.
+ */
+ while (atomic_read(&i915->mm.free_count)) {
+- flush_delayed_work(&i915->mm.free_work);
++ flush_work(&i915->mm.free_work);
+ flush_delayed_work(&i915->bdev.wq);
+ rcu_barrier();
+ }
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 04d12f278f572..16460b169ed21 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -537,8 +537,6 @@ int i915_vma_bind(struct i915_vma *vma,
+ bind_flags);
+ }
+
+- set_bit(I915_BO_WAS_BOUND_BIT, &vma->obj->flags);
+-
+ atomic_or(bind_flags, &vma->flags);
+ return 0;
+ }
+@@ -1301,6 +1299,19 @@ err_unpin:
+ return err;
+ }
+
++void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb)
++{
++ /*
++ * Before we release the pages that were bound by this vma, we
++ * must invalidate all the TLBs that may still have a reference
++ * back to our physical address. It only needs to be done once,
++ * so after updating the PTE to point away from the pages, record
++ * the most recent TLB invalidation seqno, and if we have not yet
++ * flushed the TLBs upon release, perform a full invalidation.
++ */
++ WRITE_ONCE(*tlb, intel_gt_next_invalidate_tlb_full(vm->gt));
++}
++
+ static void __vma_put_pages(struct i915_vma *vma, unsigned int count)
+ {
+ /* We allocate under vma_get_pages, so beware the shrinker */
+@@ -1927,7 +1938,12 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
+ vma->vm->skip_pte_rewrite;
+ trace_i915_vma_unbind(vma);
+
+- unbind_fence = i915_vma_resource_unbind(vma_res);
++ if (async)
++ unbind_fence = i915_vma_resource_unbind(vma_res,
++ &vma->obj->mm.tlb);
++ else
++ unbind_fence = i915_vma_resource_unbind(vma_res, NULL);
++
+ vma->resource = NULL;
+
+ atomic_and(~(I915_VMA_BIND_MASK | I915_VMA_ERROR | I915_VMA_GGTT_WRITE),
+@@ -1935,10 +1951,13 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
+
+ i915_vma_detach(vma);
+
+- if (!async && unbind_fence) {
+- dma_fence_wait(unbind_fence, false);
+- dma_fence_put(unbind_fence);
+- unbind_fence = NULL;
++ if (!async) {
++ if (unbind_fence) {
++ dma_fence_wait(unbind_fence, false);
++ dma_fence_put(unbind_fence);
++ unbind_fence = NULL;
++ }
++ vma_invalidate_tlb(vma->vm, &vma->obj->mm.tlb);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
+index 88ca0bd9c9003..33a58f605d75c 100644
+--- a/drivers/gpu/drm/i915/i915_vma.h
++++ b/drivers/gpu/drm/i915/i915_vma.h
+@@ -213,6 +213,7 @@ bool i915_vma_misplaced(const struct i915_vma *vma,
+ u64 size, u64 alignment, u64 flags);
+ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
+ void i915_vma_revoke_mmap(struct i915_vma *vma);
++void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb);
+ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async);
+ int __i915_vma_unbind(struct i915_vma *vma);
+ int __must_check i915_vma_unbind(struct i915_vma *vma);
+diff --git a/drivers/gpu/drm/i915/i915_vma_resource.c b/drivers/gpu/drm/i915/i915_vma_resource.c
+index 27c55027387a0..5a67995ea5fe2 100644
+--- a/drivers/gpu/drm/i915/i915_vma_resource.c
++++ b/drivers/gpu/drm/i915/i915_vma_resource.c
+@@ -223,10 +223,13 @@ i915_vma_resource_fence_notify(struct i915_sw_fence *fence,
+ * Return: A refcounted pointer to a dma-fence that signals when unbinding is
+ * complete.
+ */
+-struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res)
++struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res,
++ u32 *tlb)
+ {
+ struct i915_address_space *vm = vma_res->vm;
+
++ vma_res->tlb = tlb;
++
+ /* Reference for the sw fence */
+ i915_vma_resource_get(vma_res);
+
+diff --git a/drivers/gpu/drm/i915/i915_vma_resource.h b/drivers/gpu/drm/i915/i915_vma_resource.h
+index 5d8427caa2ba2..06923d1816e7e 100644
+--- a/drivers/gpu/drm/i915/i915_vma_resource.h
++++ b/drivers/gpu/drm/i915/i915_vma_resource.h
+@@ -67,6 +67,7 @@ struct i915_page_sizes {
+ * taken when the unbind is scheduled.
+ * @skip_pte_rewrite: During ggtt suspend and vm takedown pte rewriting
+ * needs to be skipped for unbind.
++ * @tlb: pointer to obj->mm.tlb if async unbind; otherwise NULL
+ *
+ * The lifetime of a struct i915_vma_resource is from a binding request to
+ * the actual possible asynchronous unbind has completed.
+@@ -119,6 +120,8 @@ struct i915_vma_resource {
+ bool immediate_unbind:1;
+ bool needs_wakeref:1;
+ bool skip_pte_rewrite:1;
++
++ u32 *tlb;
+ };
+
+ bool i915_vma_resource_hold(struct i915_vma_resource *vma_res,
+@@ -131,7 +134,8 @@ struct i915_vma_resource *i915_vma_resource_alloc(void);
+
+ void i915_vma_resource_free(struct i915_vma_resource *vma_res);
+
+-struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res);
++struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res,
++ u32 *tlb);
+
+ void __i915_vma_resource_init(struct i915_vma_resource *vma_res);
+
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-kms.c b/drivers/gpu/drm/imx/dcss/dcss-kms.c
+index 9b84df34a6a12..8cf3352d88582 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-kms.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-kms.c
+@@ -142,8 +142,6 @@ struct dcss_kms_dev *dcss_kms_attach(struct dcss_dev *dcss)
+
+ drm_kms_helper_poll_init(drm);
+
+- drm_bridge_connector_enable_hpd(kms->connector);
+-
+ ret = drm_dev_register(drm, 0);
+ if (ret)
+ goto cleanup_crtc;
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 1b70938cfd2c4..bd4ca11d3ff53 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -115,8 +115,11 @@ static bool meson_vpu_has_available_connectors(struct device *dev)
+ for_each_endpoint_of_node(dev->of_node, ep) {
+ /* If the endpoint node exists, consider it enabled */
+ remote = of_graph_get_remote_port(ep);
+- if (remote)
++ if (remote) {
++ of_node_put(remote);
++ of_node_put(ep);
+ return true;
++ }
+ }
+
+ return false;
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index 259f3e6bec90a..bb7e109534de1 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
+ priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
+
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+- writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
+- VIU_OSD_BLEND_REORDER(1, 0) |
+- VIU_OSD_BLEND_REORDER(2, 0) |
+- VIU_OSD_BLEND_REORDER(3, 0) |
+- VIU_OSD_BLEND_DIN_EN(1) |
+- VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
+- VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
+- VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
+- VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
+- VIU_OSD_BLEND_HOLD_LINES(4),
+- priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
++ u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
++ (u32)VIU_OSD_BLEND_REORDER(1, 0) |
++ (u32)VIU_OSD_BLEND_REORDER(2, 0) |
++ (u32)VIU_OSD_BLEND_REORDER(3, 0) |
++ (u32)VIU_OSD_BLEND_DIN_EN(1) |
++ (u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
++ (u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
++ (u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
++ (u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
++ (u32)VIU_OSD_BLEND_HOLD_LINES(4);
++ writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
+
+ writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
+ priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));
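
The meson change above matters because register-field macros of this kind expand to plain int expressions: once a field is shifted into bit 31 the combined value becomes negative and can sign-extend or trigger overflow/truncation warnings when the terms are ORed together, while casting every term to u32 keeps the arithmetic unsigned. A small sketch of the failure mode; FIELD and the constants are illustrative, not the meson macros, and the "bad" output assumes the usual two's-complement behavior:

#include <stdint.h>
#include <stdio.h>

#define FIELD(val, shift)	((val) << (shift))	/* type is plain int */

int main(void)
{
	/* Shifting into bit 31 of an int yields a negative value, which
	 * sign-extends when widened (typical two's-complement result;
	 * formally this shift is not well-defined for signed int).
	 */
	long long bad  = FIELD(0x8, 28);
	/* Per-term u32 casts keep every operand unsigned and warning-free. */
	long long good = (uint32_t)FIELD(0x8, 28) | (uint32_t)FIELD(0x1, 0);

	printf("bad  = 0x%llx\n", (unsigned long long)bad);	/* 0xffffffff80000000 */
	printf("good = 0x%llx\n", (unsigned long long)good);	/* 0x80000001 */
	return 0;
}
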
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index 62efbd0f38466..b7246b146e51d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -2605,6 +2605,27 @@ nv172_chipset = {
+ .fifo = { 0x00000001, ga102_fifo_new },
+ };
+
++static const struct nvkm_device_chip
++nv173_chipset = {
++ .name = "GA103",
++ .bar = { 0x00000001, tu102_bar_new },
++ .bios = { 0x00000001, nvkm_bios_new },
++ .devinit = { 0x00000001, ga100_devinit_new },
++ .fb = { 0x00000001, ga102_fb_new },
++ .gpio = { 0x00000001, ga102_gpio_new },
++ .i2c = { 0x00000001, gm200_i2c_new },
++ .imem = { 0x00000001, nv50_instmem_new },
++ .mc = { 0x00000001, ga100_mc_new },
++ .mmu = { 0x00000001, tu102_mmu_new },
++ .pci = { 0x00000001, gp100_pci_new },
++ .privring = { 0x00000001, gm200_privring_new },
++ .timer = { 0x00000001, gk20a_timer_new },
++ .top = { 0x00000001, ga100_top_new },
++ .disp = { 0x00000001, ga102_disp_new },
++ .dma = { 0x00000001, gv100_dma_new },
++ .fifo = { 0x00000001, ga102_fifo_new },
++};
++
+ static const struct nvkm_device_chip
+ nv174_chipset = {
+ .name = "GA104",
+@@ -3092,6 +3113,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ case 0x167: device->chip = &nv167_chipset; break;
+ case 0x168: device->chip = &nv168_chipset; break;
+ case 0x172: device->chip = &nv172_chipset; break;
++ case 0x173: device->chip = &nv173_chipset; break;
+ case 0x174: device->chip = &nv174_chipset; break;
+ case 0x176: device->chip = &nv176_chipset; break;
+ case 0x177: device->chip = &nv177_chipset; break;
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index b4dfa166eccdf..34234a144e87d 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -531,7 +531,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ struct drm_display_mode *mode)
+ {
+ struct mipi_dsi_device *device = dsi->device;
+- unsigned int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
++ int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
+ u16 hbp = 0, hfp = 0, hsa = 0, hblk = 0, vblk = 0;
+ u32 basic_ctl = 0;
+ size_t bytes;
+@@ -555,7 +555,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * (4 bytes). Its minimal size is therefore 10 bytes
+ */
+ #define HSA_PACKET_OVERHEAD 10
+- hsa = max((unsigned int)HSA_PACKET_OVERHEAD,
++ hsa = max(HSA_PACKET_OVERHEAD,
+ (mode->hsync_end - mode->hsync_start) * Bpp - HSA_PACKET_OVERHEAD);
+
+ /*
+@@ -564,7 +564,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * therefore 6 bytes
+ */
+ #define HBP_PACKET_OVERHEAD 6
+- hbp = max((unsigned int)HBP_PACKET_OVERHEAD,
++ hbp = max(HBP_PACKET_OVERHEAD,
+ (mode->htotal - mode->hsync_end) * Bpp - HBP_PACKET_OVERHEAD);
+
+ /*
+@@ -574,7 +574,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * 16 bytes
+ */
+ #define HFP_PACKET_OVERHEAD 16
+- hfp = max((unsigned int)HFP_PACKET_OVERHEAD,
++ hfp = max(HFP_PACKET_OVERHEAD,
+ (mode->hsync_start - mode->hdisplay) * Bpp - HFP_PACKET_OVERHEAD);
+
+ /*
+@@ -583,7 +583,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * bytes). Its minimal size is therefore 10 bytes.
+ */
+ #define HBLK_PACKET_OVERHEAD 10
+- hblk = max((unsigned int)HBLK_PACKET_OVERHEAD,
++ hblk = max(HBLK_PACKET_OVERHEAD,
+ (mode->htotal - (mode->hsync_end - mode->hsync_start)) * Bpp -
+ HBLK_PACKET_OVERHEAD);
+
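
The type change above (Bpp from unsigned int to int) fixes a classic pitfall: with unsigned operands, payload - OVERHEAD wraps to a huge value whenever the payload is smaller than the overhead, so max() keeps the garbage instead of clamping to the floor; with signed arithmetic the intermediate goes negative and the clamp works. The kernel's max() also enforces matching operand types, which is why the old code needed the unsigned casts dropped here. A short illustration, with MAX and the values standing in for the kernel macro:

#include <stdio.h>

#define MAX(a, b)	((a) > (b) ? (a) : (b))
#define OVERHEAD	10

int main(void)
{
	unsigned int u_len = 4;		/* payload smaller than overhead */
	int          s_len = 4;

	/* unsigned: 4 - 10 wraps to 4294967290, MAX picks the garbage */
	printf("unsigned: %u\n", MAX((unsigned int)OVERHEAD, u_len - OVERHEAD));
	/* signed: 4 - 10 == -6, MAX correctly clamps to the overhead floor */
	printf("signed:   %d\n", MAX(OVERHEAD, s_len - OVERHEAD));
	return 0;
}
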
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 406e9c324e76a..5bf7124ece96d 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -918,7 +918,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
+ /*
+ * We might need to add a TTM.
+ */
+- if (bo->resource->mem_type == TTM_PL_SYSTEM) {
++ if (!bo->resource || bo->resource->mem_type == TTM_PL_SYSTEM) {
+ ret = ttm_tt_create(bo, true);
+ if (ret)
+ return ret;
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 6bb3890b0f2c9..2e72922e36f56 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -194,6 +194,7 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app);
+ #define MT_CLS_WIN_8_FORCE_MULTI_INPUT 0x0015
+ #define MT_CLS_WIN_8_DISABLE_WAKEUP 0x0016
+ #define MT_CLS_WIN_8_NO_STICKY_FINGERS 0x0017
++#define MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU 0x0018
+
+ /* vendor specific classes */
+ #define MT_CLS_3M 0x0101
+@@ -286,6 +287,15 @@ static const struct mt_class mt_classes[] = {
+ MT_QUIRK_WIN8_PTP_BUTTONS |
+ MT_QUIRK_FORCE_MULTI_INPUT,
+ .export_all_inputs = true },
++ { .name = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ .quirks = MT_QUIRK_IGNORE_DUPLICATES |
++ MT_QUIRK_HOVERING |
++ MT_QUIRK_CONTACT_CNT_ACCURATE |
++ MT_QUIRK_STICKY_FINGERS |
++ MT_QUIRK_WIN8_PTP_BUTTONS |
++ MT_QUIRK_FORCE_MULTI_INPUT |
++ MT_QUIRK_NOT_SEEN_MEANS_UP,
++ .export_all_inputs = true },
+ { .name = MT_CLS_WIN_8_DISABLE_WAKEUP,
+ .quirks = MT_QUIRK_ALWAYS_VALID |
+ MT_QUIRK_IGNORE_DUPLICATES |
+@@ -783,6 +793,7 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ case HID_DG_CONFIDENCE:
+ if ((cls->name == MT_CLS_WIN_8 ||
+ cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT ||
++ cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU ||
+ cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP) &&
+ (field->application == HID_DG_TOUCHPAD ||
+ field->application == HID_DG_TOUCHSCREEN))
+@@ -2035,7 +2046,7 @@ static const struct hid_device_id mt_devices[] = {
+ USB_DEVICE_ID_LENOVO_X1_TAB3) },
+
+ /* Lenovo X12 TAB Gen 1 */
+- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_LENOVO,
+ USB_DEVICE_ID_LENOVO_X12_TAB) },
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 33869c1d20c31..a7bfea31f7d8f 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -7,6 +7,7 @@
+ #define _CORESIGHT_CORESIGHT_ETM_H
+
+ #include <asm/local.h>
++#include <linux/const.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ #include "coresight-priv.h"
+@@ -515,7 +516,7 @@
+ ({ \
+ u64 __val; \
+ \
+- if (__builtin_constant_p((offset))) \
++ if (__is_constexpr((offset))) \
+ __val = read_etm4x_sysreg_const_offset((offset)); \
+ else \
+ __val = etm4x_sysreg_read((offset), true, (_64bit)); \
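
The switch from __builtin_constant_p() to __is_constexpr() above is about what kind of constant is detected: the builtin reports what the optimizer can prove, so its answer varies with optimization level and it can claim "constant" for values that are not integer constant expressions, which presumably broke the constant-offset path here; __is_constexpr() is a pure ICE test. The macro below is the actual definition from include/linux/const.h; the demo around it is illustrative (and, like kernel code, relies on GNU C's sizeof(void) == 1):

#include <stdio.h>

/* From include/linux/const.h: if x is an integer constant expression,
 * (long)(x) * 0l is a null pointer constant, the conditional has type
 * int *, and the sizes match; otherwise it has type void * and they don't.
 */
#define __is_constexpr(x) \
	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))

int main(void)
{
	int v = 42;

	printf("%d\n", __is_constexpr(42));	/* 1: a literal is an ICE */
	printf("%d\n", __is_constexpr(v));	/* 0: a variable is not,
						 * even if the optimizer
						 * could prove its value */
	return 0;
}
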
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 78fb1a4274a6c..e47fa34656717 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1572,9 +1572,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+ int irq, ret;
+
+- ret = pm_runtime_resume_and_get(&pdev->dev);
+- if (ret < 0)
+- return ret;
++ ret = pm_runtime_get_sync(&pdev->dev);
+
+ hrtimer_cancel(&i2c_imx->slave_timer);
+
+@@ -1585,17 +1583,21 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ if (i2c_imx->dma)
+ i2c_imx_dma_free(i2c_imx);
+
+- /* setup chip registers to defaults */
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++ if (ret == 0) {
++ /* setup chip registers to defaults */
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++ clk_disable(i2c_imx->clk);
++ }
+
+ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb);
+ irq = platform_get_irq(pdev, 0);
+ if (irq >= 0)
+ free_irq(irq, i2c_imx);
+- clk_disable_unprepare(i2c_imx->clk);
++
++ clk_unprepare(i2c_imx->clk);
+
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 3bec7c782824a..906df87a89f23 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -484,12 +484,12 @@ static void geni_i2c_gpi_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ {
+ if (tx_buf) {
+ dma_unmap_single(gi2c->se.dev->parent, tx_addr, msg->len, DMA_TO_DEVICE);
+- i2c_put_dma_safe_msg_buf(tx_buf, msg, false);
++ i2c_put_dma_safe_msg_buf(tx_buf, msg, !gi2c->err);
+ }
+
+ if (rx_buf) {
+ dma_unmap_single(gi2c->se.dev->parent, rx_addr, msg->len, DMA_FROM_DEVICE);
+- i2c_put_dma_safe_msg_buf(rx_buf, msg, false);
++ i2c_put_dma_safe_msg_buf(rx_buf, msg, !gi2c->err);
+ }
+ }
+
+@@ -553,6 +553,7 @@ static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ desc->callback_param = gi2c;
+
+ dmaengine_submit(desc);
++ *buf = dma_buf;
+ *dma_addr_p = addr;
+
+ return 0;
+diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
+index fce80a4a5147c..04c04e6d24c35 100644
+--- a/drivers/infiniband/core/umem_dmabuf.c
++++ b/drivers/infiniband/core/umem_dmabuf.c
+@@ -18,6 +18,7 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
+ struct scatterlist *sg;
+ unsigned long start, end, cur = 0;
+ unsigned int nmap = 0;
++ long ret;
+ int i;
+
+ dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+@@ -67,9 +68,14 @@ wait_fence:
+ * may be not up-to-date. Wait for the exporter to finish
+ * the migration.
+ */
+- return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
++ ret = dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
+ DMA_RESV_USAGE_KERNEL,
+ false, MAX_SCHEDULE_TIMEOUT);
++ if (ret < 0)
++ return ret;
++ if (ret == 0)
++ return -ETIMEDOUT;
++ return 0;
+ }
+ EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
+
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index c16017f6e8db2..14392c942f492 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -2468,31 +2468,24 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
+ opt2 |= CCTRL_ECN_V(1);
+ }
+
+- skb_get(skb);
+- rpl = cplhdr(skb);
+ if (!is_t4(adapter_type)) {
+- BUILD_BUG_ON(sizeof(*rpl5) != roundup(sizeof(*rpl5), 16));
+- skb_trim(skb, sizeof(*rpl5));
+- rpl5 = (void *)rpl;
+- INIT_TP_WR(rpl5, ep->hwtid);
+- } else {
+- skb_trim(skb, sizeof(*rpl));
+- INIT_TP_WR(rpl, ep->hwtid);
+- }
+- OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL,
+- ep->hwtid));
+-
+- if (CHELSIO_CHIP_VERSION(adapter_type) > CHELSIO_T4) {
+ u32 isn = (prandom_u32() & ~7UL) - 1;
++
++ skb = get_skb(skb, roundup(sizeof(*rpl5), 16), GFP_KERNEL);
++ rpl5 = __skb_put_zero(skb, roundup(sizeof(*rpl5), 16));
++ rpl = (void *)rpl5;
++ INIT_TP_WR_CPL(rpl5, CPL_PASS_ACCEPT_RPL, ep->hwtid);
+ opt2 |= T5_OPT_2_VALID_F;
+ opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
+ opt2 |= T5_ISS_F;
+- rpl5 = (void *)rpl;
+- memset_after(rpl5, 0, iss);
+ if (peer2peer)
+ isn += 4;
+ rpl5->iss = cpu_to_be32(isn);
+ pr_debug("iss %u\n", be32_to_cpu(rpl5->iss));
++ } else {
++ skb = get_skb(skb, sizeof(*rpl), GFP_KERNEL);
++ rpl = __skb_put_zero(skb, sizeof(*rpl));
++ INIT_TP_WR_CPL(rpl, CPL_PASS_ACCEPT_RPL, ep->hwtid);
+ }
+
+ rpl->opt0 = cpu_to_be64(opt0);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b68fddeac0f12..63c89a72cc352 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2738,26 +2738,24 @@ static int set_has_smi_cap(struct mlx5_ib_dev *dev)
+ int err;
+ int port;
+
+- for (port = 1; port <= ARRAY_SIZE(dev->port_caps); port++) {
+- dev->port_caps[port - 1].has_smi = false;
+- if (MLX5_CAP_GEN(dev->mdev, port_type) ==
+- MLX5_CAP_PORT_TYPE_IB) {
+- if (MLX5_CAP_GEN(dev->mdev, ib_virt)) {
+- err = mlx5_query_hca_vport_context(dev->mdev, 0,
+- port, 0,
+- &vport_ctx);
+- if (err) {
+- mlx5_ib_err(dev, "query_hca_vport_context for port=%d failed %d\n",
+- port, err);
+- return err;
+- }
+- dev->port_caps[port - 1].has_smi =
+- vport_ctx.has_smi;
+- } else {
+- dev->port_caps[port - 1].has_smi = true;
+- }
++ if (MLX5_CAP_GEN(dev->mdev, port_type) != MLX5_CAP_PORT_TYPE_IB)
++ return 0;
++
++ for (port = 1; port <= dev->num_ports; port++) {
++ if (!MLX5_CAP_GEN(dev->mdev, ib_virt)) {
++ dev->port_caps[port - 1].has_smi = true;
++ continue;
+ }
++ err = mlx5_query_hca_vport_context(dev->mdev, 0, port, 0,
++ &vport_ctx);
++ if (err) {
++ mlx5_ib_err(dev, "query_hca_vport_context for port=%d failed %d\n",
++ port, err);
++ return err;
++ }
++ dev->port_caps[port - 1].has_smi = vport_ctx.has_smi;
+ }
++
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 37484a559d209..d86253c6d6b56 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -79,7 +79,6 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
+ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+ int rxe_invalidate_mr(struct rxe_qp *qp, u32 key);
+ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
+-int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
+ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
+ void rxe_mr_cleanup(struct rxe_pool_elem *elem);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 3add521290064..c28b18d59a064 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -24,7 +24,7 @@ u8 rxe_get_next_key(u32 last_key)
+
+ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+ {
+- struct rxe_map_set *set = mr->cur_map_set;
++
+
+ switch (mr->type) {
+ case IB_MR_TYPE_DMA:
+@@ -32,8 +32,8 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+
+ case IB_MR_TYPE_USER:
+ case IB_MR_TYPE_MEM_REG:
+- if (iova < set->iova || length > set->length ||
+- iova > set->iova + set->length - length)
++ if (iova < mr->iova || length > mr->length ||
++ iova > mr->iova + mr->length - length)
+ return -EFAULT;
+ return 0;
+
+@@ -65,89 +65,41 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
+ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+ }
+
+-static void rxe_mr_free_map_set(int num_map, struct rxe_map_set *set)
+-{
+- int i;
+-
+- for (i = 0; i < num_map; i++)
+- kfree(set->map[i]);
+-
+- kfree(set->map);
+- kfree(set);
+-}
+-
+-static int rxe_mr_alloc_map_set(int num_map, struct rxe_map_set **setp)
++static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+ {
+ int i;
+- struct rxe_map_set *set;
++ int num_map;
++ struct rxe_map **map = mr->map;
+
+- set = kmalloc(sizeof(*set), GFP_KERNEL);
+- if (!set)
+- goto err_out;
++ num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+
+- set->map = kmalloc_array(num_map, sizeof(struct rxe_map *), GFP_KERNEL);
+- if (!set->map)
+- goto err_free_set;
++ mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
++ if (!mr->map)
++ goto err1;
+
+ for (i = 0; i < num_map; i++) {
+- set->map[i] = kmalloc(sizeof(struct rxe_map), GFP_KERNEL);
+- if (!set->map[i])
+- goto err_free_map;
++ mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
++ if (!mr->map[i])
++ goto err2;
+ }
+
+- *setp = set;
+-
+- return 0;
+-
+-err_free_map:
+- for (i--; i >= 0; i--)
+- kfree(set->map[i]);
+-
+- kfree(set->map);
+-err_free_set:
+- kfree(set);
+-err_out:
+- return -ENOMEM;
+-}
+-
+-/**
+- * rxe_mr_alloc() - Allocate memory map array(s) for MR
+- * @mr: Memory region
+- * @num_buf: Number of buffer descriptors to support
+- * @both: If non zero allocate both mr->map and mr->next_map
+- * else just allocate mr->map. Used for fast MRs
+- *
+- * Return: 0 on success else an error
+- */
+-static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf, int both)
+-{
+- int ret;
+- int num_map;
+-
+ BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
+- num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+
+ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+ mr->map_mask = RXE_BUF_PER_MAP - 1;
++
+ mr->num_buf = num_buf;
+- mr->max_buf = num_map * RXE_BUF_PER_MAP;
+ mr->num_map = num_map;
+-
+- ret = rxe_mr_alloc_map_set(num_map, &mr->cur_map_set);
+- if (ret)
+- return -ENOMEM;
+-
+- if (both) {
+- ret = rxe_mr_alloc_map_set(num_map, &mr->next_map_set);
+- if (ret)
+- goto err_free;
+- }
++ mr->max_buf = num_map * RXE_BUF_PER_MAP;
+
+ return 0;
+
+-err_free:
+- rxe_mr_free_map_set(mr->num_map, mr->cur_map_set);
+- mr->cur_map_set = NULL;
++err2:
++ for (i--; i >= 0; i--)
++ kfree(mr->map[i]);
++
++ kfree(mr->map);
++err1:
+ return -ENOMEM;
+ }
+
+@@ -164,7 +116,6 @@ void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr)
+ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ int access, struct rxe_mr *mr)
+ {
+- struct rxe_map_set *set;
+ struct rxe_map **map;
+ struct rxe_phys_buf *buf = NULL;
+ struct ib_umem *umem;
+@@ -172,6 +123,7 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ int num_buf;
+ void *vaddr;
+ int err;
++ int i;
+
+ umem = ib_umem_get(pd->ibpd.device, start, length, access);
+ if (IS_ERR(umem)) {
+@@ -185,20 +137,18 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+
+ rxe_mr_init(access, mr);
+
+- err = rxe_mr_alloc(mr, num_buf, 0);
++ err = rxe_mr_alloc(mr, num_buf);
+ if (err) {
+ pr_warn("%s: Unable to allocate memory for map\n",
+ __func__);
+ goto err_release_umem;
+ }
+
+- set = mr->cur_map_set;
+- set->page_shift = PAGE_SHIFT;
+- set->page_mask = PAGE_SIZE - 1;
+-
+- num_buf = 0;
+- map = set->map;
++ mr->page_shift = PAGE_SHIFT;
++ mr->page_mask = PAGE_SIZE - 1;
+
++ num_buf = 0;
++ map = mr->map;
+ if (length > 0) {
+ buf = map[0]->buf;
+
+@@ -214,29 +164,33 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ pr_warn("%s: Unable to get virtual address\n",
+ __func__);
+ err = -ENOMEM;
+- goto err_release_umem;
++ goto err_cleanup_map;
+ }
+
+ buf->addr = (uintptr_t)vaddr;
+ buf->size = PAGE_SIZE;
+ num_buf++;
+ buf++;
++
+ }
+ }
+
+ mr->ibmr.pd = &pd->ibpd;
+ mr->umem = umem;
+ mr->access = access;
++ mr->length = length;
++ mr->iova = iova;
++ mr->va = start;
++ mr->offset = ib_umem_offset(umem);
+ mr->state = RXE_MR_STATE_VALID;
+ mr->type = IB_MR_TYPE_USER;
+
+- set->length = length;
+- set->iova = iova;
+- set->va = start;
+- set->offset = ib_umem_offset(umem);
+-
+ return 0;
+
++err_cleanup_map:
++ for (i = 0; i < mr->num_map; i++)
++ kfree(mr->map[i]);
++ kfree(mr->map);
+ err_release_umem:
+ ib_umem_release(umem);
+ err_out:
+@@ -250,7 +204,7 @@ int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr)
+ /* always allow remote access for FMRs */
+ rxe_mr_init(IB_ACCESS_REMOTE, mr);
+
+- err = rxe_mr_alloc(mr, max_pages, 1);
++ err = rxe_mr_alloc(mr, max_pages);
+ if (err)
+ goto err1;
+
+@@ -268,24 +222,21 @@ err1:
+ static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
+ size_t *offset_out)
+ {
+- struct rxe_map_set *set = mr->cur_map_set;
+- size_t offset = iova - set->iova + set->offset;
++ size_t offset = iova - mr->iova + mr->offset;
+ int map_index;
+ int buf_index;
+ u64 length;
+- struct rxe_map *map;
+
+- if (likely(set->page_shift)) {
+- *offset_out = offset & set->page_mask;
+- offset >>= set->page_shift;
++ if (likely(mr->page_shift)) {
++ *offset_out = offset & mr->page_mask;
++ offset >>= mr->page_shift;
+ *n_out = offset & mr->map_mask;
+ *m_out = offset >> mr->map_shift;
+ } else {
+ map_index = 0;
+ buf_index = 0;
+
+- map = set->map[map_index];
+- length = map->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+
+ while (offset >= length) {
+ offset -= length;
+@@ -295,8 +246,7 @@ static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
+ map_index++;
+ buf_index = 0;
+ }
+- map = set->map[map_index];
+- length = map->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+ }
+
+ *m_out = map_index;
+@@ -317,7 +267,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+ goto out;
+ }
+
+- if (!mr->cur_map_set) {
++ if (!mr->map) {
+ addr = (void *)(uintptr_t)iova;
+ goto out;
+ }
+@@ -330,13 +280,13 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+
+ lookup_iova(mr, iova, &m, &n, &offset);
+
+- if (offset + length > mr->cur_map_set->map[m]->buf[n].size) {
++ if (offset + length > mr->map[m]->buf[n].size) {
+ pr_warn("crosses page boundary\n");
+ addr = NULL;
+ goto out;
+ }
+
+- addr = (void *)(uintptr_t)mr->cur_map_set->map[m]->buf[n].addr + offset;
++ addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
+
+ out:
+ return addr;
+@@ -372,7 +322,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+ return 0;
+ }
+
+- WARN_ON_ONCE(!mr->cur_map_set);
++ WARN_ON_ONCE(!mr->map);
+
+ err = mr_check_range(mr, iova, length);
+ if (err) {
+@@ -382,7 +332,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+
+ lookup_iova(mr, iova, &m, &i, &offset);
+
+- map = mr->cur_map_set->map + m;
++ map = mr->map + m;
+ buf = map[0]->buf + i;
+
+ while (length > 0) {
+@@ -628,9 +578,8 @@ err:
+ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ {
+ struct rxe_mr *mr = to_rmr(wqe->wr.wr.reg.mr);
+- u32 key = wqe->wr.wr.reg.key & 0xff;
++ u32 key = wqe->wr.wr.reg.key;
+ u32 access = wqe->wr.wr.reg.access;
+- struct rxe_map_set *set;
+
+ /* user can only register MR in free state */
+ if (unlikely(mr->state != RXE_MR_STATE_FREE)) {
+@@ -646,36 +595,19 @@ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ return -EINVAL;
+ }
+
++ /* user is only allowed to change key portion of l/rkey */
++ if (unlikely((mr->lkey & ~0xff) != (key & ~0xff))) {
++ pr_warn("%s: key = 0x%x has wrong index mr->lkey = 0x%x\n",
++ __func__, key, mr->lkey);
++ return -EINVAL;
++ }
++
+ mr->access = access;
+- mr->lkey = (mr->lkey & ~0xff) | key;
+- mr->rkey = (access & IB_ACCESS_REMOTE) ? mr->lkey : 0;
++ mr->lkey = key;
++ mr->rkey = (access & IB_ACCESS_REMOTE) ? key : 0;
++ mr->iova = wqe->wr.wr.reg.mr->iova;
+ mr->state = RXE_MR_STATE_VALID;
+
+- set = mr->cur_map_set;
+- mr->cur_map_set = mr->next_map_set;
+- mr->cur_map_set->iova = wqe->wr.wr.reg.mr->iova;
+- mr->next_map_set = set;
+-
+- return 0;
+-}
+-
+-int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr)
+-{
+- struct rxe_mr *mr = to_rmr(ibmr);
+- struct rxe_map_set *set = mr->next_map_set;
+- struct rxe_map *map;
+- struct rxe_phys_buf *buf;
+-
+- if (unlikely(set->nbuf == mr->num_buf))
+- return -ENOMEM;
+-
+- map = set->map[set->nbuf / RXE_BUF_PER_MAP];
+- buf = &map->buf[set->nbuf % RXE_BUF_PER_MAP];
+-
+- buf->addr = addr;
+- buf->size = ibmr->page_size;
+- set->nbuf++;
+-
+ return 0;
+ }
+
+@@ -695,14 +627,15 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
+ {
+ struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
++ int i;
+
+ rxe_put(mr_pd(mr));
+-
+ ib_umem_release(mr->umem);
+
+- if (mr->cur_map_set)
+- rxe_mr_free_map_set(mr->num_map, mr->cur_map_set);
++ if (mr->map) {
++ for (i = 0; i < mr->num_map; i++)
++ kfree(mr->map[i]);
+
+- if (mr->next_map_set)
+- rxe_mr_free_map_set(mr->num_map, mr->next_map_set);
++ kfree(mr->map);
++ }
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
+index 824739008d5b6..6c24bc4318e82 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mw.c
++++ b/drivers/infiniband/sw/rxe/rxe_mw.c
+@@ -112,15 +112,15 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+
+ /* C10-75 */
+ if (mw->access & IB_ZERO_BASED) {
+- if (unlikely(wqe->wr.wr.mw.length > mr->cur_map_set->length)) {
++ if (unlikely(wqe->wr.wr.mw.length > mr->length)) {
+ pr_err_once(
+ "attempt to bind a ZB MW outside of the MR\n");
+ return -EINVAL;
+ }
+ } else {
+- if (unlikely((wqe->wr.wr.mw.addr < mr->cur_map_set->iova) ||
++ if (unlikely((wqe->wr.wr.mw.addr < mr->iova) ||
+ ((wqe->wr.wr.mw.addr + wqe->wr.wr.mw.length) >
+- (mr->cur_map_set->iova + mr->cur_map_set->length)))) {
++ (mr->iova + mr->length)))) {
+ pr_err_once(
+ "attempt to bind a VA MW outside of the MR\n");
+ return -EINVAL;
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index 568a7cbd13d4c..86c7a8bf3cbbd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -105,6 +105,12 @@ enum rxe_device_param {
+ RXE_INFLIGHT_SKBS_PER_QP_HIGH = 64,
+ RXE_INFLIGHT_SKBS_PER_QP_LOW = 16,
+
++ /* Max number of iterations of each tasklet
++ * before yielding the cpu to let other
++ * work make progress
++ */
++ RXE_MAX_ITERATIONS = 1024,
++
+ /* Delay before calling arbiter timer */
+ RXE_NSEC_ARB_TIMER_DELAY = 200,
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
+index 0c4db5bb17d75..2248cf33d7766 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.c
++++ b/drivers/infiniband/sw/rxe/rxe_task.c
+@@ -8,7 +8,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/hardirq.h>
+
+-#include "rxe_task.h"
++#include "rxe.h"
+
+ int __rxe_do_task(struct rxe_task *task)
+
+@@ -33,6 +33,7 @@ void rxe_do_task(struct tasklet_struct *t)
+ int cont;
+ int ret;
+ struct rxe_task *task = from_tasklet(task, t, tasklet);
++ unsigned int iterations = RXE_MAX_ITERATIONS;
+
+ spin_lock_bh(&task->state_lock);
+ switch (task->state) {
+@@ -61,13 +62,20 @@ void rxe_do_task(struct tasklet_struct *t)
+ spin_lock_bh(&task->state_lock);
+ switch (task->state) {
+ case TASK_STATE_BUSY:
+- if (ret)
++ if (ret) {
+ task->state = TASK_STATE_START;
+- else
++ } else if (iterations--) {
+ cont = 1;
++ } else {
++ /* reschedule the tasklet and exit
++ * the loop to give up the cpu
++ */
++ tasklet_schedule(&task->tasklet);
++ task->state = TASK_STATE_START;
++ }
+ break;
+
+- /* soneone tried to run the task since the last time we called
++ /* someone tried to run the task since the last time we called
+ * func, so we will call one more time regardless of the
+ * return value
+ */
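
The RXE_MAX_ITERATIONS change above bounds how much work a single tasklet invocation may do before rescheduling itself and yielding the CPU. A standalone sketch of that budget-then-reschedule shape; do_one(), resched(), and the counts are illustrative, not the rxe code:

#include <stdio.h>

#define MAX_ITERATIONS	1024

static int pending = 3000;	/* illustrative work queue depth */

static int do_one(void)   { return --pending > 0 ? 0 : 1; }	/* 1 = drained */
static void resched(void) { printf("rescheduled with %d left\n", pending); }

/* One tasklet invocation: run at most MAX_ITERATIONS units, then yield. */
static void run_once(void)
{
	unsigned int budget = MAX_ITERATIONS;

	while (pending > 0) {
		if (do_one())
			break;		/* queue drained       */
		if (!budget--) {
			resched();	/* let other work run  */
			return;
		}
	}
	printf("done, pending=%d\n", pending);
}

int main(void)
{
	run_once();	/* rescheduled with 1975 left */
	run_once();	/* rescheduled with 950 left  */
	run_once();	/* done, pending=0            */
	return 0;
}
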
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 9d995854a1749..d2b4e68402d45 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -967,26 +967,41 @@ err1:
+ return ERR_PTR(err);
+ }
+
+-/* build next_map_set from scatterlist
+- * The IB_WR_REG_MR WR will swap map_sets
+- */
++static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
++{
++ struct rxe_mr *mr = to_rmr(ibmr);
++ struct rxe_map *map;
++ struct rxe_phys_buf *buf;
++
++ if (unlikely(mr->nbuf == mr->num_buf))
++ return -ENOMEM;
++
++ map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
++ buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
++
++ buf->addr = addr;
++ buf->size = ibmr->page_size;
++ mr->nbuf++;
++
++ return 0;
++}
++
+ static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset)
+ {
+ struct rxe_mr *mr = to_rmr(ibmr);
+- struct rxe_map_set *set = mr->next_map_set;
+ int n;
+
+- set->nbuf = 0;
++ mr->nbuf = 0;
+
+- n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_mr_set_page);
++ n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+
+- set->va = ibmr->iova;
+- set->iova = ibmr->iova;
+- set->length = ibmr->length;
+- set->page_shift = ilog2(ibmr->page_size);
+- set->page_mask = ibmr->page_size - 1;
+- set->offset = set->iova & set->page_mask;
++ mr->va = ibmr->iova;
++ mr->iova = ibmr->iova;
++ mr->length = ibmr->length;
++ mr->page_shift = ilog2(ibmr->page_size);
++ mr->page_mask = ibmr->page_size - 1;
++ mr->offset = mr->iova & mr->page_mask;
+
+ return n;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 9bdf333465114..3d524238e5c4e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -289,17 +289,6 @@ struct rxe_map {
+ struct rxe_phys_buf buf[RXE_BUF_PER_MAP];
+ };
+
+-struct rxe_map_set {
+- struct rxe_map **map;
+- u64 va;
+- u64 iova;
+- size_t length;
+- u32 offset;
+- u32 nbuf;
+- int page_shift;
+- int page_mask;
+-};
+-
+ static inline int rkey_is_mw(u32 rkey)
+ {
+ u32 index = rkey >> 8;
+@@ -317,20 +306,26 @@ struct rxe_mr {
+ u32 rkey;
+ enum rxe_mr_state state;
+ enum ib_mr_type type;
++ u64 va;
++ u64 iova;
++ size_t length;
++ u32 offset;
+ int access;
+
++ int page_shift;
++ int page_mask;
+ int map_shift;
+ int map_mask;
+
+ u32 num_buf;
++ u32 nbuf;
+
+ u32 max_buf;
+ u32 num_map;
+
+ atomic_t num_mw;
+
+- struct rxe_map_set *cur_map_set;
+- struct rxe_map_set *next_map_set;
++ struct rxe_map **map;
+ };
+
+ enum rxe_mw_state {
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index bd5f3b5e17278..7b83f48f60c5e 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -537,6 +537,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ struct iscsi_hdr *hdr;
+ char *data;
+ int length;
++ bool full_feature_phase;
+
+ if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ iser_err_comp(wc, "login_rsp");
+@@ -550,6 +551,9 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ hdr = desc->rsp + sizeof(struct iser_ctrl);
+ data = desc->rsp + ISER_HEADERS_LEN;
+ length = wc->byte_len - ISER_HEADERS_LEN;
++ full_feature_phase = ((hdr->flags & ISCSI_FULL_FEATURE_PHASE) ==
++ ISCSI_FULL_FEATURE_PHASE) &&
++ (hdr->flags & ISCSI_FLAG_CMD_FINAL);
+
+ iser_dbg("op 0x%x itt 0x%x dlen %d\n", hdr->opcode,
+ hdr->itt, length);
+@@ -560,7 +564,8 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ desc->rsp_dma, ISER_RX_LOGIN_SIZE,
+ DMA_FROM_DEVICE);
+
+- if (iser_conn->iscsi_conn->session->discovery_sess)
++ if (!full_feature_phase ||
++ iser_conn->iscsi_conn->session->discovery_sess)
+ return;
+
+ /* Post the first RX buffer that is skipped in iser_post_rx_bufs() */
+diff --git a/drivers/input/keyboard/mt6779-keypad.c b/drivers/input/keyboard/mt6779-keypad.c
+index 2e7c9187c10f2..bd86cb95bde30 100644
+--- a/drivers/input/keyboard/mt6779-keypad.c
++++ b/drivers/input/keyboard/mt6779-keypad.c
+@@ -42,7 +42,7 @@ static irqreturn_t mt6779_keypad_irq_handler(int irq, void *dev_id)
+ const unsigned short *keycode = keypad->input_dev->keycode;
+ DECLARE_BITMAP(new_state, MTK_KPD_NUM_BITS);
+ DECLARE_BITMAP(change, MTK_KPD_NUM_BITS);
+- unsigned int bit_nr;
++ unsigned int bit_nr, key;
+ unsigned int row, col;
+ unsigned int scancode;
+ unsigned int row_shift = get_count_order(keypad->n_cols);
+@@ -61,8 +61,10 @@ static irqreturn_t mt6779_keypad_irq_handler(int irq, void *dev_id)
+ if (bit_nr % 32 >= 16)
+ continue;
+
+- row = bit_nr / 32;
+- col = bit_nr % 32;
++ key = bit_nr / 32 * 16 + bit_nr % 32;
++ row = key / 9;
++ col = key % 9;
++
+ scancode = MATRIX_SCAN_CODE(row, col, row_shift);
+ /* 1: not pressed, 0: pressed */
+ pressed = !test_bit(bit_nr, new_state);
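
The keypad fix reflects how the controller packs its status words: each 32-bit word carries 16 key bits in its low half (hence the existing bit_nr % 32 >= 16 skip), so the linear key number is bit_nr / 32 * 16 + bit_nr % 32, which is then mapped onto the hardware's fixed 9-column matrix. A worked sketch of that decoding, with illustrative input values:

#include <stdio.h>

/* Each 32-bit status word carries 16 key bits in its low half. */
static void decode(unsigned int bit_nr)
{
	unsigned int key, row, col;

	if (bit_nr % 32 >= 16)
		return;				/* unused high half      */

	key = bit_nr / 32 * 16 + bit_nr % 32;	/* linear key number     */
	row = key / 9;				/* fixed 9-column matrix */
	col = key % 9;
	printf("bit %2u -> key %2u = row %u, col %u\n", bit_nr, key, row, col);
}

int main(void)
{
	decode(0);	/* bit  0 -> key  0 = row 0, col 0 */
	decode(10);	/* bit 10 -> key 10 = row 1, col 1 */
	decode(20);	/* skipped: bit 20 is in the unused half */
	decode(34);	/* bit 34 -> key 18 = row 2, col 0 */
	return 0;
}
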
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 6b4138771a3f2..b2e8097a2e6d9 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -40,7 +40,6 @@
+ #define IQS7222_SLDR_SETUP_2_RES_MASK GENMASK(15, 8)
+ #define IQS7222_SLDR_SETUP_2_RES_SHIFT 8
+ #define IQS7222_SLDR_SETUP_2_TOP_SPEED_MASK GENMASK(7, 0)
+-#define IQS7222_SLDR_SETUP_3_CHAN_SEL_MASK GENMASK(9, 0)
+
+ #define IQS7222_GPIO_SETUP_0_GPIO_EN BIT(0)
+
+@@ -54,6 +53,9 @@
+ #define IQS7222_SYS_SETUP_ACK_RESET BIT(0)
+
+ #define IQS7222_EVENT_MASK_ATI BIT(12)
++#define IQS7222_EVENT_MASK_SLDR BIT(10)
++#define IQS7222_EVENT_MASK_TOUCH BIT(1)
++#define IQS7222_EVENT_MASK_PROX BIT(0)
+
+ #define IQS7222_COMMS_HOLD BIT(0)
+ #define IQS7222_COMMS_ERROR 0xEEEE
+@@ -92,11 +94,11 @@ enum iqs7222_reg_key_id {
+
+ enum iqs7222_reg_grp_id {
+ IQS7222_REG_GRP_STAT,
++ IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_CYCLE,
+ IQS7222_REG_GRP_GLBL,
+ IQS7222_REG_GRP_BTN,
+ IQS7222_REG_GRP_CHAN,
+- IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_SLDR,
+ IQS7222_REG_GRP_GPIO,
+ IQS7222_REG_GRP_SYS,
+@@ -135,12 +137,12 @@ struct iqs7222_event_desc {
+ static const struct iqs7222_event_desc iqs7222_kp_events[] = {
+ {
+ .name = "event-prox",
+- .enable = BIT(0),
++ .enable = IQS7222_EVENT_MASK_PROX,
+ .reg_key = IQS7222_REG_KEY_PROX,
+ },
+ {
+ .name = "event-touch",
+- .enable = BIT(1),
++ .enable = IQS7222_EVENT_MASK_TOUCH,
+ .reg_key = IQS7222_REG_KEY_TOUCH,
+ },
+ };
+@@ -555,13 +557,6 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
+ .reg_width = 4,
+ .label = "current reference trim",
+ },
+- {
+- .name = "azoteq,rf-filt-enable",
+- .reg_grp = IQS7222_REG_GRP_GLBL,
+- .reg_offset = 0,
+- .reg_shift = 15,
+- .reg_width = 1,
+- },
+ {
+ .name = "azoteq,max-counts",
+ .reg_grp = IQS7222_REG_GRP_GLBL,
+@@ -1272,9 +1267,22 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ struct i2c_client *client = iqs7222->client;
+ ktime_t ati_timeout;
+ u16 sys_status = 0;
+- u16 sys_setup = iqs7222->sys_setup[0] & ~IQS7222_SYS_SETUP_ACK_RESET;
++ u16 sys_setup;
+ int error, i;
+
++ /*
++ * The reserved fields of the system setup register may have changed
++ * as a result of other registers having been written. As such, read
++ * the register's latest value to avoid unexpected behavior when the
++ * register is written in the loop that follows.
++ */
++ error = iqs7222_read_word(iqs7222, IQS7222_SYS_SETUP, &sys_setup);
++ if (error)
++ return error;
++
++ sys_setup &= ~IQS7222_SYS_SETUP_INTF_MODE_MASK;
++ sys_setup &= ~IQS7222_SYS_SETUP_PWR_MODE_MASK;
++
+ for (i = 0; i < IQS7222_NUM_RETRIES; i++) {
+ /*
+ * Trigger ATI from streaming and normal-power modes so that
+@@ -1299,12 +1307,15 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ if (error)
+ return error;
+
+- if (sys_status & IQS7222_SYS_STATUS_ATI_ACTIVE)
+- continue;
++ if (sys_status & IQS7222_SYS_STATUS_RESET)
++ return 0;
+
+ if (sys_status & IQS7222_SYS_STATUS_ATI_ERROR)
+ break;
+
++ if (sys_status & IQS7222_SYS_STATUS_ATI_ACTIVE)
++ continue;
++
+ /*
+ * Use stream-in-touch mode if either slider reports
+ * absolute position.
+@@ -1321,7 +1332,7 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ dev_err(&client->dev,
+ "ATI attempt %d of %d failed with status 0x%02X, %s\n",
+ i + 1, IQS7222_NUM_RETRIES, (u8)sys_status,
+- i < IQS7222_NUM_RETRIES ? "retrying..." : "stopping");
++ i + 1 < IQS7222_NUM_RETRIES ? "retrying" : "stopping");
+ }
+
+ return -ETIMEDOUT;
+@@ -1333,6 +1344,34 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+ int comms_offset = dev_desc->comms_offset;
+ int error, i, j, k;
+
++ /*
++ * Acknowledge reset before writing any registers in case the device
++ * suffers a spurious reset during initialization. Because this step
++ * may change the reserved fields of the second filter beta register,
++ * its cache must be updated.
++ *
++ * Writing the second filter beta register, in turn, may clobber the
++ * system status register. As such, the filter beta register pair is
++ * written first to protect against this hazard.
++ */
++ if (dir == WRITE) {
++ u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1;
++ u16 filt_setup;
++
++ error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP,
++ iqs7222->sys_setup[0] |
++ IQS7222_SYS_SETUP_ACK_RESET);
++ if (error)
++ return error;
++
++ error = iqs7222_read_word(iqs7222, reg, &filt_setup);
++ if (error)
++ return error;
++
++ iqs7222->filt_setup[1] &= GENMASK(7, 0);
++ iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0));
++ }
++
+ /*
+ * Take advantage of the stop-bit disable function, if available, to
+ * save the trouble of having to reopen a communication window after
+@@ -1957,8 +1996,8 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+ int ext_chan = rounddown(num_chan, 10);
+ int count, error, reg_offset, i;
++ u16 *event_mask = &iqs7222->sys_setup[dev_desc->event_offset];
+ u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
+- u16 *sys_setup = iqs7222->sys_setup;
+ unsigned int chan_sel[4], val;
+
+ error = iqs7222_parse_props(iqs7222, &sldr_node, sldr_index,
+@@ -2003,7 +2042,7 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ reg_offset = dev_desc->sldr_res < U16_MAX ? 0 : 1;
+
+ sldr_setup[0] |= count;
+- sldr_setup[3 + reg_offset] &= ~IQS7222_SLDR_SETUP_3_CHAN_SEL_MASK;
++ sldr_setup[3 + reg_offset] &= ~GENMASK(ext_chan - 1, 0);
+
+ for (i = 0; i < ARRAY_SIZE(chan_sel); i++) {
+ sldr_setup[5 + reg_offset + i] = 0;
+@@ -2081,17 +2120,19 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ sldr_setup[0] |= dev_desc->wheel_enable;
+ }
+
++ /*
++ * The absence of a register offset makes it safe to assume the device
++ * supports gestures, each of which is first disabled until explicitly
++ * enabled.
++ */
++ if (!reg_offset)
++ for (i = 0; i < ARRAY_SIZE(iqs7222_sl_events); i++)
++ sldr_setup[9] &= ~iqs7222_sl_events[i].enable;
++
+ for (i = 0; i < ARRAY_SIZE(iqs7222_sl_events); i++) {
+ const char *event_name = iqs7222_sl_events[i].name;
+ struct fwnode_handle *event_node;
+
+- /*
+- * The absence of a register offset means the remaining fields
+- * in the group represent gesture settings.
+- */
+- if (iqs7222_sl_events[i].enable && !reg_offset)
+- sldr_setup[9] &= ~iqs7222_sl_events[i].enable;
+-
+ event_node = fwnode_get_named_child_node(sldr_node, event_name);
+ if (!event_node)
+ continue;
+@@ -2104,6 +2145,22 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ if (error)
+ return error;
+
++ /*
++ * The press/release event does not expose a direct GPIO link,
++ * but one can be emulated by tying each of the participating
++ * channels to the same GPIO.
++ */
++ error = iqs7222_gpio_select(iqs7222, event_node,
++ i ? iqs7222_sl_events[i].enable
++ : sldr_setup[3 + reg_offset],
++ i ? 1568 + sldr_index * 30
++ : sldr_setup[4 + reg_offset]);
++ if (error)
++ return error;
++
++ if (!reg_offset)
++ sldr_setup[9] |= iqs7222_sl_events[i].enable;
++
+ error = fwnode_property_read_u32(event_node, "linux,code",
+ &val);
+ if (error) {
+@@ -2115,26 +2172,20 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ iqs7222->sl_code[sldr_index][i] = val;
+ input_set_capability(iqs7222->keypad, EV_KEY, val);
+
+- /*
+- * The press/release event is determined based on whether the
+- * coordinate field reports 0xFFFF and has no explicit enable
+- * control.
+- */
+- if (!iqs7222_sl_events[i].enable || reg_offset)
+- continue;
+-
+- sldr_setup[9] |= iqs7222_sl_events[i].enable;
+-
+- error = iqs7222_gpio_select(iqs7222, event_node,
+- iqs7222_sl_events[i].enable,
+- 1568 + sldr_index * 30);
+- if (error)
+- return error;
+-
+ if (!dev_desc->event_offset)
+ continue;
+
+- sys_setup[dev_desc->event_offset] |= BIT(10 + sldr_index);
++ /*
++ * The press/release event is determined based on whether the
++ * coordinate field reports 0xFFFF and solely relies on touch
++ * or proximity interrupts to be unmasked.
++ */
++ if (i && !reg_offset)
++ *event_mask |= (IQS7222_EVENT_MASK_SLDR << sldr_index);
++ else if (sldr_setup[4 + reg_offset] == dev_desc->touch_link)
++ *event_mask |= IQS7222_EVENT_MASK_TOUCH;
++ else
++ *event_mask |= IQS7222_EVENT_MASK_PROX;
+ }
+
+ /*
+@@ -2227,11 +2278,6 @@ static int iqs7222_parse_all(struct iqs7222_private *iqs7222)
+ return error;
+ }
+
+- sys_setup[0] &= ~IQS7222_SYS_SETUP_INTF_MODE_MASK;
+- sys_setup[0] &= ~IQS7222_SYS_SETUP_PWR_MODE_MASK;
+-
+- sys_setup[0] |= IQS7222_SYS_SETUP_ACK_RESET;
+-
+ return iqs7222_parse_props(iqs7222, NULL, 0, IQS7222_REG_GRP_SYS,
+ IQS7222_REG_KEY_NONE);
+ }
+@@ -2299,29 +2345,37 @@ static int iqs7222_report(struct iqs7222_private *iqs7222)
+ input_report_abs(iqs7222->keypad, iqs7222->sl_axis[i],
+ sldr_pos);
+
+- for (j = 0; j < ARRAY_SIZE(iqs7222_sl_events); j++) {
+- u16 mask = iqs7222_sl_events[j].mask;
+- u16 val = iqs7222_sl_events[j].val;
++ input_report_key(iqs7222->keypad, iqs7222->sl_code[i][0],
++ sldr_pos < dev_desc->sldr_res);
+
+- if (!iqs7222_sl_events[j].enable) {
+- input_report_key(iqs7222->keypad,
+- iqs7222->sl_code[i][j],
+- sldr_pos < dev_desc->sldr_res);
+- continue;
+- }
++ /*
++ * A maximum resolution indicates the device does not support
++ * gestures, in which case the remaining fields are ignored.
++ */
++ if (dev_desc->sldr_res == U16_MAX)
++ continue;
+
+- /*
+- * The remaining offsets represent gesture state, and
+- * are discarded in the case of IQS7222C because only
+- * absolute position is reported.
+- */
+- if (num_stat < IQS7222_MAX_COLS_STAT)
+- continue;
++ if (!(le16_to_cpu(status[1]) & IQS7222_EVENT_MASK_SLDR << i))
++ continue;
++
++ /*
++ * Skip the press/release event, as it does not have separate
++ * status fields and is handled separately.
++ */
++ for (j = 1; j < ARRAY_SIZE(iqs7222_sl_events); j++) {
++ u16 mask = iqs7222_sl_events[j].mask;
++ u16 val = iqs7222_sl_events[j].val;
+
+ input_report_key(iqs7222->keypad,
+ iqs7222->sl_code[i][j],
+ (state & mask) == val);
+ }
++
++ input_sync(iqs7222->keypad);
++
++ for (j = 1; j < ARRAY_SIZE(iqs7222_sl_events); j++)
++ input_report_key(iqs7222->keypad,
++ iqs7222->sl_code[i][j], 0);
+ }
+
+ input_sync(iqs7222->keypad);
+diff --git a/drivers/input/touchscreen/exc3000.c b/drivers/input/touchscreen/exc3000.c
+index cbe0dd4129121..4b7eee01c6aad 100644
+--- a/drivers/input/touchscreen/exc3000.c
++++ b/drivers/input/touchscreen/exc3000.c
+@@ -220,6 +220,7 @@ static int exc3000_vendor_data_request(struct exc3000_data *data, u8 *request,
+ {
+ u8 buf[EXC3000_LEN_VENDOR_REQUEST] = { 0x67, 0x00, 0x42, 0x00, 0x03 };
+ int ret;
++ unsigned long time_left;
+
+ mutex_lock(&data->query_lock);
+
+@@ -233,9 +234,9 @@ static int exc3000_vendor_data_request(struct exc3000_data *data, u8 *request,
+ goto out_unlock;
+
+ if (response) {
+- ret = wait_for_completion_timeout(&data->wait_event,
+- timeout * HZ);
+- if (ret <= 0) {
++ time_left = wait_for_completion_timeout(&data->wait_event,
++ timeout * HZ);
++ if (time_left == 0) {
+ ret = -ETIMEDOUT;
+ goto out_unlock;
+ }
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index be066c1503d37..ba3115fd0f86a 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -182,14 +182,8 @@ static bool arm_v7s_is_mtk_enabled(struct io_pgtable_cfg *cfg)
+ (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_EXT);
+ }
+
+-static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
+- struct io_pgtable_cfg *cfg)
++static arm_v7s_iopte to_mtk_iopte(phys_addr_t paddr, arm_v7s_iopte pte)
+ {
+- arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
+-
+- if (!arm_v7s_is_mtk_enabled(cfg))
+- return pte;
+-
+ if (paddr & BIT_ULL(32))
+ pte |= ARM_V7S_ATTR_MTK_PA_BIT32;
+ if (paddr & BIT_ULL(33))
+@@ -199,6 +193,17 @@ static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
+ return pte;
+ }
+
++static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
++ struct io_pgtable_cfg *cfg)
++{
++ arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
++
++ if (arm_v7s_is_mtk_enabled(cfg))
++ return to_mtk_iopte(paddr, pte);
++
++ return pte;
++}
++
+ static phys_addr_t iopte_to_paddr(arm_v7s_iopte pte, int lvl,
+ struct io_pgtable_cfg *cfg)
+ {
+@@ -240,10 +245,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ dma_addr_t dma;
+ size_t size = ARM_V7S_TABLE_SIZE(lvl, cfg);
+ void *table = NULL;
++ gfp_t gfp_l1;
++
++ /*
++	 * ARM_MTK_TTBR_EXT extends the translation table base to support
++	 * larger memory addresses.
++ */
++ gfp_l1 = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ GFP_KERNEL : ARM_V7S_TABLE_GFP_DMA;
+
+ if (lvl == 1)
+- table = (void *)__get_free_pages(
+- __GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
++ table = (void *)__get_free_pages(gfp_l1 | __GFP_ZERO, get_order(size));
+ else if (lvl == 2)
+ table = kmem_cache_zalloc(data->l2_tables, gfp);
+
+@@ -251,7 +263,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ return NULL;
+
+ phys = virt_to_phys(table);
+- if (phys != (arm_v7s_iopte)phys) {
++ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ phys >= (1ULL << cfg->oas) : phys != (arm_v7s_iopte)phys) {
+ /* Doesn't fit in PTE */
+ dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ goto out_free;
+@@ -457,9 +470,14 @@ static arm_v7s_iopte arm_v7s_install_table(arm_v7s_iopte *table,
+ arm_v7s_iopte curr,
+ struct io_pgtable_cfg *cfg)
+ {
++ phys_addr_t phys = virt_to_phys(table);
+ arm_v7s_iopte old, new;
+
+- new = virt_to_phys(table) | ARM_V7S_PTE_TYPE_TABLE;
++ new = phys | ARM_V7S_PTE_TYPE_TABLE;
++
++ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT)
++ new = to_mtk_iopte(phys, new);
++
+ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ new |= ARM_V7S_ATTR_NS_TABLE;
+
+@@ -779,6 +797,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ void *cookie)
+ {
+ struct arm_v7s_io_pgtable *data;
++ slab_flags_t slab_flag;
++ phys_addr_t paddr;
+
+ if (cfg->ias > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS))
+ return NULL;
+@@ -788,7 +808,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+
+ if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
+ IO_PGTABLE_QUIRK_NO_PERMS |
+- IO_PGTABLE_QUIRK_ARM_MTK_EXT))
++ IO_PGTABLE_QUIRK_ARM_MTK_EXT |
++ IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT))
+ return NULL;
+
+ /* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */
+@@ -796,15 +817,27 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS))
+ return NULL;
+
++ if ((cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT) &&
++ !arm_v7s_is_mtk_enabled(cfg))
++ return NULL;
++
+ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return NULL;
+
+ spin_lock_init(&data->split_lock);
++
++ /*
++	 * ARM_MTK_TTBR_EXT extends the translation table base to support
++	 * larger memory addresses.
++ */
++ slab_flag = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ 0 : ARM_V7S_TABLE_SLAB_FLAGS;
++
+ data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+ ARM_V7S_TABLE_SIZE(2, cfg),
+ ARM_V7S_TABLE_SIZE(2, cfg),
+- ARM_V7S_TABLE_SLAB_FLAGS, NULL);
++ slab_flag, NULL);
+ if (!data->l2_tables)
+ goto out_free_data;
+
+@@ -850,12 +883,16 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ wmb();
+
+ /* TTBR */
+- cfg->arm_v7s_cfg.ttbr = virt_to_phys(data->pgd) | ARM_V7S_TTBR_S |
+- (cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
+- ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
+- ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
+- (ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
+- ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
++ paddr = virt_to_phys(data->pgd);
++ if (arm_v7s_is_mtk_enabled(cfg))
++ cfg->arm_v7s_cfg.ttbr = paddr | upper_32_bits(paddr);
++ else
++ cfg->arm_v7s_cfg.ttbr = paddr | ARM_V7S_TTBR_S |
++ (cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
++ ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
++ ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
++ (ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
++ ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
+ return &data->iop;
+
+ out_free_data:
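
The common thread in this file is gating allocation constraints on the new IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT quirk: with the extension the walker can reach tables above 32 bits, so the DMA-zone restriction is dropped; without it, level-1 tables must stay 32-bit addressable. A reduced sketch of that conditional, where GFP_DMA32 stands in for the driver's ARM_V7S_TABLE_GFP_DMA and the function name is hypothetical:

#include <linux/gfp.h>
#include <linux/mm.h>

static void *alloc_l1_table(bool ttbr_ext, size_t size)
{
	/* extended TTBR: any kernel memory; otherwise 32-bit zone only */
	gfp_t gfp = ttbr_ext ? GFP_KERNEL : (GFP_KERNEL | GFP_DMA32);

	return (void *)__get_free_pages(gfp | __GFP_ZERO, get_order(size));
}
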
+diff --git a/drivers/irqchip/irq-tegra.c b/drivers/irqchip/irq-tegra.c
+index e1f771c72fc4c..ad3e2c1b3c87b 100644
+--- a/drivers/irqchip/irq-tegra.c
++++ b/drivers/irqchip/irq-tegra.c
+@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
+ lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
+
+ /* Disable COP interrupts */
+- writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+
+ /* Disable CPU interrupts */
+- writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+
+ /* Enable the wakeup sources of ictlr */
+ writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
+@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
+
+ writel_relaxed(lic->cpu_iep[i],
+ ictlr + ICTLR_CPU_IEP_CLASS);
+- writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+ writel_relaxed(lic->cpu_ier[i],
+ ictlr + ICTLR_CPU_IER_SET);
+ writel_relaxed(lic->cop_iep[i],
+ ictlr + ICTLR_COP_IEP_CLASS);
+- writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+ writel_relaxed(lic->cop_ier[i],
+ ictlr + ICTLR_COP_IER_SET);
+ }
+@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
+ lic->base[i] = base;
+
+ /* Disable all interrupts */
+- writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
+ /* All interrupts target IRQ */
+ writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
+
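
The ~0ul to GENMASK(31, 0) conversions above are about intent rather than behavior: writel_relaxed() takes a u32, and on 64-bit kernels ~0ul is 0xffffffffffffffff, which is silently truncated and trips static checkers. GENMASK(31, 0) expands to exactly 0xffffffff. Illustrative use against a hypothetical mapped register:

#include <linux/bits.h>
#include <linux/io.h>

static void mask_all_32_irqs(void __iomem *ier_clr)
{
	writel_relaxed(GENMASK(31, 0), ier_clr);	/* clear all 32 enables */
}
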
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 660c52d48256d..522b3d6b8c46b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9466,6 +9466,7 @@ void md_reap_sync_thread(struct mddev *mddev)
+ wake_up(&resync_wait);
+ /* flag recovery needed just to double check */
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++ sysfs_notify_dirent_safe(mddev->sysfs_completed);
+ sysfs_notify_dirent_safe(mddev->sysfs_action);
+ md_new_event();
+ if (mddev->event_work.func)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index c8539d0e12dd7..1c1310d539f2e 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2881,10 +2881,10 @@ static void raid5_end_write_request(struct bio *bi)
+ if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))
+ clear_bit(R5_LOCKED, &sh->dev[i].flags);
+ set_bit(STRIPE_HANDLE, &sh->state);
+- raid5_release_stripe(sh);
+
+ if (sh->batch_head && sh != sh->batch_head)
+ raid5_release_stripe(sh->batch_head);
++ raid5_release_stripe(sh);
+ }
+
+ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
+@@ -5841,7 +5841,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ if ((bi->bi_opf & REQ_NOWAIT) &&
+ (conf->reshape_progress != MaxSector) &&
+ (mddev->reshape_backwards
+- ? (logical_sector > conf->reshape_progress && logical_sector <= conf->reshape_safe)
++ ? (logical_sector >= conf->reshape_progress && logical_sector < conf->reshape_safe)
+ : (logical_sector >= conf->reshape_safe && logical_sector < conf->reshape_progress))) {
+ bio_wouldblock_error(bi);
+ if (rw == WRITE)
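
The raid5 reorder above fixes a use-after-free: raid5_release_stripe(sh) can drop the last reference and free sh, after which reading sh->batch_head is invalid, so every field must be consumed before the final put. The same rule in a generic, hypothetical refcounted shape:

#include <linux/kref.h>

struct node {
	struct kref ref;
	struct node *head;
};

static void node_release(struct kref *ref)
{
	/* would kfree(container_of(ref, struct node, ref)) */
}

static void put_node_and_head(struct node *n)
{
	struct node *head = n->head;	/* read while 'n' is still held */

	if (head && head != n)
		kref_put(&head->ref, node_release);
	kref_put(&n->ref, node_release);	/* may free 'n': do it last */
}
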
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index cb48c5ff3dee2..c93d2906e4c7d 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -875,7 +875,7 @@ static int vcodec_domains_get(struct venus_core *core)
+ }
+
+ skip_pmdomains:
+- if (!core->has_opp_table)
++ if (!core->res->opp_pmdomain)
+ return 0;
+
+ /* Attach the power domain for setting performance state */
+@@ -1007,6 +1007,10 @@ static int core_get_v4(struct venus_core *core)
+ if (ret)
+ return ret;
+
++ ret = vcodec_domains_get(core);
++ if (ret)
++ return ret;
++
+ if (core->res->opp_pmdomain) {
+ ret = devm_pm_opp_of_add_table(dev);
+ if (!ret) {
+@@ -1017,10 +1021,6 @@ static int core_get_v4(struct venus_core *core)
+ }
+ }
+
+- ret = vcodec_domains_get(core);
+- if (ret)
+- return ret;
+-
+ return 0;
+ }
+
+diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
+index 5f0e2dcebb349..ac3795a7e1f66 100644
+--- a/drivers/misc/cxl/irq.c
++++ b/drivers/misc/cxl/irq.c
+@@ -350,6 +350,7 @@ int afu_allocate_irqs(struct cxl_context *ctx, u32 count)
+
+ out:
+ cxl_ops->release_irq_ranges(&ctx->irqs, ctx->afu->adapter);
++ bitmap_free(ctx->irq_bitmap);
+ afu_irq_name_free(ctx);
+ return -ENOMEM;
+ }
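
The one-liner above completes the error-path unwind: the interrupt bitmap allocated earlier in afu_allocate_irqs() leaked when a later step failed. The general shape, with hypothetical helpers, is to release in reverse order of acquisition:

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct ctx {
	unsigned long *irq_bitmap;
};

static int later_step(struct ctx *c)
{
	return -ENOMEM;		/* stand-in for a step that can fail */
}

static int allocate_irqs(struct ctx *c, unsigned int count)
{
	int ret;

	c->irq_bitmap = bitmap_zalloc(count, GFP_KERNEL);
	if (!c->irq_bitmap)
		return -ENOMEM;

	ret = later_step(c);
	if (ret)
		goto err_free_bitmap;	/* undo everything done so far */

	return 0;

err_free_bitmap:
	bitmap_free(c->irq_bitmap);
	return ret;
}
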
+diff --git a/drivers/misc/habanalabs/common/sysfs.c b/drivers/misc/habanalabs/common/sysfs.c
+index 9ebeb18ab85e9..da81810688951 100644
+--- a/drivers/misc/habanalabs/common/sysfs.c
++++ b/drivers/misc/habanalabs/common/sysfs.c
+@@ -73,6 +73,7 @@ static DEVICE_ATTR_RO(clk_cur_freq_mhz);
+ static struct attribute *hl_dev_clk_attrs[] = {
+ &dev_attr_clk_max_freq_mhz.attr,
+ &dev_attr_clk_cur_freq_mhz.attr,
++ NULL,
+ };
+
+ static ssize_t vrm_ver_show(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -93,6 +94,7 @@ static DEVICE_ATTR_RO(vrm_ver);
+
+ static struct attribute *hl_dev_vrm_attrs[] = {
+ &dev_attr_vrm_ver.attr,
++ NULL,
+ };
+
+ static ssize_t uboot_ver_show(struct device *dev, struct device_attribute *attr,
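
Both hunks above add the same missing piece: sysfs consumes struct attribute arrays by walking until it finds a NULL entry, so an array without the sentinel is read past its end. Minimal sketch with a hypothetical read-only attribute:

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t version_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "1\n");
}
static DEVICE_ATTR_RO(version);

static struct attribute *example_attrs[] = {
	&dev_attr_version.attr,
	NULL,			/* sentinel: the core stops here */
};
ATTRIBUTE_GROUPS(example);	/* defines example_groups for registration */
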
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index fba3222410960..b33616aacb33b 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -3339,19 +3339,19 @@ static void gaudi_init_nic_qman(struct hl_device *hdev, u32 nic_offset,
+ u32 nic_qm_err_cfg, irq_handler_offset;
+ u32 q_off;
+
+- mtr_base_en_lo = lower_32_bits(CFG_BASE +
++ mtr_base_en_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+ mtr_base_en_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+- so_base_en_lo = lower_32_bits(CFG_BASE +
++ so_base_en_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0);
+ so_base_en_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0);
+- mtr_base_ws_lo = lower_32_bits(CFG_BASE +
++ mtr_base_ws_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+ mtr_base_ws_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+- so_base_ws_lo = lower_32_bits(CFG_BASE +
++ so_base_ws_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_SOB_OBJ_0);
+ so_base_ws_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_SOB_OBJ_0);
+@@ -5654,15 +5654,17 @@ static int gaudi_parse_cb_no_ext_queue(struct hl_device *hdev,
+ {
+ struct asic_fixed_properties *asic_prop = &hdev->asic_prop;
+ struct gaudi_device *gaudi = hdev->asic_specific;
+- u32 nic_mask_q_id = 1 << (HW_CAP_NIC_SHIFT +
+- ((parser->hw_queue_id - GAUDI_QUEUE_ID_NIC_0_0) >> 2));
++ u32 nic_queue_offset, nic_mask_q_id;
+
+ if ((parser->hw_queue_id >= GAUDI_QUEUE_ID_NIC_0_0) &&
+- (parser->hw_queue_id <= GAUDI_QUEUE_ID_NIC_9_3) &&
+- (!(gaudi->hw_cap_initialized & nic_mask_q_id))) {
+- dev_err(hdev->dev, "h/w queue %d is disabled\n",
+- parser->hw_queue_id);
+- return -EINVAL;
++ (parser->hw_queue_id <= GAUDI_QUEUE_ID_NIC_9_3)) {
++ nic_queue_offset = parser->hw_queue_id - GAUDI_QUEUE_ID_NIC_0_0;
++ nic_mask_q_id = 1 << (HW_CAP_NIC_SHIFT + (nic_queue_offset >> 2));
++
++ if (!(gaudi->hw_cap_initialized & nic_mask_q_id)) {
++ dev_err(hdev->dev, "h/w queue %d is disabled\n", parser->hw_queue_id);
++ return -EINVAL;
++ }
+ }
+
+ /* For internal queue jobs just check if CB address is valid */
+@@ -7717,10 +7719,10 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ struct gaudi_device *gaudi = hdev->asic_specific;
+ u64 data = le64_to_cpu(eq_entry->data[0]);
+ u32 ctl = le32_to_cpu(eq_entry->hdr.ctl);
+- u32 fw_fatal_err_flag = 0;
++ u32 fw_fatal_err_flag = 0, flags = 0;
+ u16 event_type = ((ctl & EQ_CTL_EVENT_TYPE_MASK)
+ >> EQ_CTL_EVENT_TYPE_SHIFT);
+- bool reset_required;
++ bool reset_required, reset_direct = false;
+ u8 cause;
+ int rc;
+
+@@ -7808,7 +7810,8 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ dev_err(hdev->dev, "reset required due to %s\n",
+ gaudi_irq_map_table[event_type].name);
+
+- hl_device_reset(hdev, 0);
++ reset_direct = true;
++ goto reset_device;
+ } else {
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -7830,7 +7833,8 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ dev_err(hdev->dev, "reset required due to %s\n",
+ gaudi_irq_map_table[event_type].name);
+
+- hl_device_reset(hdev, 0);
++ reset_direct = true;
++ goto reset_device;
+ } else {
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -7981,12 +7985,17 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ return;
+
+ reset_device:
+- if (hdev->asic_prop.fw_security_enabled)
+- hl_device_reset(hdev, HL_DRV_RESET_HARD
+- | HL_DRV_RESET_BYPASS_REQ_TO_FW
+- | fw_fatal_err_flag);
++ reset_required = true;
++
++ if (hdev->asic_prop.fw_security_enabled && !reset_direct)
++ flags = HL_DRV_RESET_HARD | HL_DRV_RESET_BYPASS_REQ_TO_FW | fw_fatal_err_flag;
+ else if (hdev->hard_reset_on_fw_events)
+- hl_device_reset(hdev, HL_DRV_RESET_HARD | HL_DRV_RESET_DELAY | fw_fatal_err_flag);
++ flags = HL_DRV_RESET_HARD | HL_DRV_RESET_DELAY | fw_fatal_err_flag;
++ else
++ reset_required = false;
++
++ if (reset_required)
++ hl_device_reset(hdev, flags);
+ else
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -9187,6 +9196,7 @@ static DEVICE_ATTR_RO(infineon_ver);
+
+ static struct attribute *gaudi_vrm_dev_attrs[] = {
+ &dev_attr_infineon_ver.attr,
++ NULL,
+ };
+
+ static void gaudi_add_device_attr(struct hl_device *hdev, struct attribute_group *dev_clk_attr_grp,
+diff --git a/drivers/misc/habanalabs/goya/goya_hwmgr.c b/drivers/misc/habanalabs/goya/goya_hwmgr.c
+index 6580fc6a486a8..b595721751c1b 100644
+--- a/drivers/misc/habanalabs/goya/goya_hwmgr.c
++++ b/drivers/misc/habanalabs/goya/goya_hwmgr.c
+@@ -359,6 +359,7 @@ static struct attribute *goya_clk_dev_attrs[] = {
+ &dev_attr_pm_mng_profile.attr,
+ &dev_attr_tpc_clk.attr,
+ &dev_attr_tpc_clk_curr.attr,
++ NULL,
+ };
+
+ static ssize_t infineon_ver_show(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -375,6 +376,7 @@ static DEVICE_ATTR_RO(infineon_ver);
+
+ static struct attribute *goya_vrm_dev_attrs[] = {
+ &dev_attr_infineon_ver.attr,
++ NULL,
+ };
+
+ void goya_add_device_attr(struct hl_device *hdev, struct attribute_group *dev_clk_attr_grp,
+diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
+index 281c54003edc4..b70a013139c74 100644
+--- a/drivers/misc/uacce/uacce.c
++++ b/drivers/misc/uacce/uacce.c
+@@ -9,43 +9,38 @@
+
+ static struct class *uacce_class;
+ static dev_t uacce_devt;
+-static DEFINE_MUTEX(uacce_mutex);
+ static DEFINE_XARRAY_ALLOC(uacce_xa);
+
+-static int uacce_start_queue(struct uacce_queue *q)
++/*
++ * If the parent driver or the device disappears, the queue state is invalid and
++ * ops are not usable anymore.
++ */
++static bool uacce_queue_is_valid(struct uacce_queue *q)
+ {
+- int ret = 0;
++ return q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED;
++}
+
+- mutex_lock(&uacce_mutex);
++static int uacce_start_queue(struct uacce_queue *q)
++{
++ int ret;
+
+- if (q->state != UACCE_Q_INIT) {
+- ret = -EINVAL;
+- goto out_with_lock;
+- }
++ if (q->state != UACCE_Q_INIT)
++ return -EINVAL;
+
+ if (q->uacce->ops->start_queue) {
+ ret = q->uacce->ops->start_queue(q);
+ if (ret < 0)
+- goto out_with_lock;
++ return ret;
+ }
+
+ q->state = UACCE_Q_STARTED;
+-
+-out_with_lock:
+- mutex_unlock(&uacce_mutex);
+-
+- return ret;
++ return 0;
+ }
+
+ static int uacce_put_queue(struct uacce_queue *q)
+ {
+ struct uacce_device *uacce = q->uacce;
+
+- mutex_lock(&uacce_mutex);
+-
+- if (q->state == UACCE_Q_ZOMBIE)
+- goto out;
+-
+ if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+ uacce->ops->stop_queue(q);
+
+@@ -54,8 +49,6 @@ static int uacce_put_queue(struct uacce_queue *q)
+ uacce->ops->put_queue(q);
+
+ q->state = UACCE_Q_ZOMBIE;
+-out:
+- mutex_unlock(&uacce_mutex);
+
+ return 0;
+ }
+@@ -65,20 +58,36 @@ static long uacce_fops_unl_ioctl(struct file *filep,
+ {
+ struct uacce_queue *q = filep->private_data;
+ struct uacce_device *uacce = q->uacce;
++ long ret = -ENXIO;
++
++ /*
++ * uacce->ops->ioctl() may take the mmap_lock when copying arg to/from
++ * user. Avoid a circular lock dependency with uacce_fops_mmap(), which
++ * gets called with mmap_lock held, by taking uacce->mutex instead of
++ * q->mutex. Doing this in uacce_fops_mmap() is not possible because
++ * uacce_fops_open() calls iommu_sva_bind_device(), which takes
++ * mmap_lock, while holding uacce->mutex.
++ */
++ mutex_lock(&uacce->mutex);
++ if (!uacce_queue_is_valid(q))
++ goto out_unlock;
+
+ switch (cmd) {
+ case UACCE_CMD_START_Q:
+- return uacce_start_queue(q);
+-
++ ret = uacce_start_queue(q);
++ break;
+ case UACCE_CMD_PUT_Q:
+- return uacce_put_queue(q);
+-
++ ret = uacce_put_queue(q);
++ break;
+ default:
+- if (!uacce->ops->ioctl)
+- return -EINVAL;
+-
+- return uacce->ops->ioctl(q, cmd, arg);
++ if (uacce->ops->ioctl)
++ ret = uacce->ops->ioctl(q, cmd, arg);
++ else
++ ret = -EINVAL;
+ }
++out_unlock:
++ mutex_unlock(&uacce->mutex);
++ return ret;
+ }
+
+ #ifdef CONFIG_COMPAT
+@@ -136,6 +145,13 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ if (!q)
+ return -ENOMEM;
+
++ mutex_lock(&uacce->mutex);
++
++ if (!uacce->parent) {
++ ret = -EINVAL;
++ goto out_with_mem;
++ }
++
+ ret = uacce_bind_queue(uacce, q);
+ if (ret)
+ goto out_with_mem;
+@@ -152,10 +168,9 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ filep->private_data = q;
+ uacce->inode = inode;
+ q->state = UACCE_Q_INIT;
+-
+- mutex_lock(&uacce->queues_lock);
++ mutex_init(&q->mutex);
+ list_add(&q->list, &uacce->queues);
+- mutex_unlock(&uacce->queues_lock);
++ mutex_unlock(&uacce->mutex);
+
+ return 0;
+
+@@ -163,18 +178,20 @@ out_with_bond:
+ uacce_unbind_queue(q);
+ out_with_mem:
+ kfree(q);
++ mutex_unlock(&uacce->mutex);
+ return ret;
+ }
+
+ static int uacce_fops_release(struct inode *inode, struct file *filep)
+ {
+ struct uacce_queue *q = filep->private_data;
++ struct uacce_device *uacce = q->uacce;
+
+- mutex_lock(&q->uacce->queues_lock);
+- list_del(&q->list);
+- mutex_unlock(&q->uacce->queues_lock);
++ mutex_lock(&uacce->mutex);
+ uacce_put_queue(q);
+ uacce_unbind_queue(q);
++ list_del(&q->list);
++ mutex_unlock(&uacce->mutex);
+ kfree(q);
+
+ return 0;
+@@ -217,10 +234,9 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ vma->vm_private_data = q;
+ qfr->type = type;
+
+- mutex_lock(&uacce_mutex);
+-
+- if (q->state != UACCE_Q_INIT && q->state != UACCE_Q_STARTED) {
+- ret = -EINVAL;
++ mutex_lock(&q->mutex);
++ if (!uacce_queue_is_valid(q)) {
++ ret = -ENXIO;
+ goto out_with_lock;
+ }
+
+@@ -248,12 +264,12 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ }
+
+ q->qfrs[type] = qfr;
+- mutex_unlock(&uacce_mutex);
++ mutex_unlock(&q->mutex);
+
+ return ret;
+
+ out_with_lock:
+- mutex_unlock(&uacce_mutex);
++ mutex_unlock(&q->mutex);
+ kfree(qfr);
+ return ret;
+ }
+@@ -262,12 +278,20 @@ static __poll_t uacce_fops_poll(struct file *file, poll_table *wait)
+ {
+ struct uacce_queue *q = file->private_data;
+ struct uacce_device *uacce = q->uacce;
++ __poll_t ret = 0;
++
++ mutex_lock(&q->mutex);
++ if (!uacce_queue_is_valid(q))
++ goto out_unlock;
+
+ poll_wait(file, &q->wait, wait);
++
+ if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q))
+- return EPOLLIN | EPOLLRDNORM;
++ ret = EPOLLIN | EPOLLRDNORM;
+
+- return 0;
++out_unlock:
++ mutex_unlock(&q->mutex);
++ return ret;
+ }
+
+ static const struct file_operations uacce_fops = {
+@@ -450,7 +474,7 @@ struct uacce_device *uacce_alloc(struct device *parent,
+ goto err_with_uacce;
+
+ INIT_LIST_HEAD(&uacce->queues);
+- mutex_init(&uacce->queues_lock);
++ mutex_init(&uacce->mutex);
+ device_initialize(&uacce->dev);
+ uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+ uacce->dev.class = uacce_class;
+@@ -507,13 +531,23 @@ void uacce_remove(struct uacce_device *uacce)
+ if (uacce->inode)
+ unmap_mapping_range(uacce->inode->i_mapping, 0, 0, 1);
+
++ /*
++ * uacce_fops_open() may be running concurrently, even after we remove
++ * the cdev. Holding uacce->mutex ensures that open() does not obtain a
++ * removed uacce device.
++ */
++ mutex_lock(&uacce->mutex);
+ /* ensure no open queue remains */
+- mutex_lock(&uacce->queues_lock);
+ list_for_each_entry_safe(q, next_q, &uacce->queues, list) {
++ /*
++ * Taking q->mutex ensures that fops do not use the defunct
++ * uacce->ops after the queue is disabled.
++ */
++ mutex_lock(&q->mutex);
+ uacce_put_queue(q);
++ mutex_unlock(&q->mutex);
+ uacce_unbind_queue(q);
+ }
+- mutex_unlock(&uacce->queues_lock);
+
+ /* disable sva now since no opened queues */
+ uacce_disable_sva(uacce);
+@@ -521,6 +555,13 @@ void uacce_remove(struct uacce_device *uacce)
+ if (uacce->cdev)
+ cdev_device_del(uacce->cdev, &uacce->dev);
+ xa_erase(&uacce_xa, uacce->dev_id);
++ /*
++ * uacce exists as long as there are open fds, but ops will be freed
++ * now. Ensure that bugs cause NULL deref rather than use-after-free.
++ */
++ uacce->ops = NULL;
++ uacce->parent = NULL;
++ mutex_unlock(&uacce->mutex);
+ put_device(&uacce->dev);
+ }
+ EXPORT_SYMBOL_GPL(uacce_remove);
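
The uacce rework above replaces one global mutex with a two-level scheme: a per-device uacce->mutex that orders open()/release() against uacce_remove() and guards the queue list, and a per-queue q->mutex for queue state, so that paths entered with mmap_lock held never need the device-wide lock. A reduced sketch of the hierarchy, with hypothetical types:

#include <linux/list.h>
#include <linux/mutex.h>

struct dev_ctx {
	struct mutex lock;		/* guards 'queues' and teardown */
	struct list_head queues;
};

struct queue {
	struct list_head node;
	struct mutex lock;		/* guards 'state' only */
	int state;
};

static void disable_all_queues(struct dev_ctx *d)
{
	struct queue *q;

	mutex_lock(&d->lock);
	list_for_each_entry(q, &d->queues, node) {
		/* consistent nesting: device lock, then queue lock */
		mutex_lock(&q->lock);
		q->state = 0;
		mutex_unlock(&q->lock);
	}
	mutex_unlock(&d->lock);
}
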
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 2f08d442e5577..fc462995cf94a 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1172,8 +1172,10 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ }
+
+ ret = device_reset_optional(&pdev->dev);
+- if (ret)
+- return dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++ if (ret) {
++ dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++ goto free_host;
++ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ host->regs = devm_ioremap_resource(&pdev->dev, res);
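
The meson fix above (like the pxamci one that follows) is about what happens after a probe step fails: 'return dev_err_probe(...)' skipped the mmc_free_host() owed to the earlier allocation. dev_err_probe() only logs (or quietly records -EPROBE_DEFER) and passes the error through, so it composes fine with a goto-based unwind. Sketch with an assumed reset helper:

#include <linux/device.h>

static int do_reset(struct device *dev)
{
	return 0;		/* stand-in for device_reset_optional() */
}

static int probe_step(struct device *dev)
{
	int ret = do_reset(dev);

	if (ret) {
		dev_err_probe(dev, ret, "device reset failed\n");
		return ret;	/* caller jumps to its free_host label */
	}
	return 0;
}
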
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 0db9490dc6595..e4003f6058eb5 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -648,7 +648,7 @@ static int pxamci_probe(struct platform_device *pdev)
+
+ ret = pxamci_of_init(pdev, mmc);
+ if (ret)
+- return ret;
++ goto out;
+
+ host = mmc_priv(mmc);
+ host->mmc = mmc;
+@@ -672,7 +672,7 @@ static int pxamci_probe(struct platform_device *pdev)
+
+ ret = pxamci_init_ocr(host);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ mmc->caps = 0;
+ host->cmdat = 0;
+diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
+index 1a1e3e020a8c2..c4abfee1ebae1 100644
+--- a/drivers/mmc/host/renesas_sdhi.h
++++ b/drivers/mmc/host/renesas_sdhi.h
+@@ -43,6 +43,7 @@ struct renesas_sdhi_quirks {
+ bool hs400_4taps;
+ bool fixed_addr_mode;
+ bool dma_one_rx_only;
++ bool manual_tap_correction;
+ u32 hs400_bad_taps;
+ const u8 (*hs400_calib_table)[SDHI_CALIB_TABLE_MAX];
+ };
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 0d258b6e1a436..6edbf5c161ab9 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -49,9 +49,6 @@
+ #define HOST_MODE_GEN3_32BIT (HOST_MODE_GEN3_WMODE | HOST_MODE_GEN3_BUSWIDTH)
+ #define HOST_MODE_GEN3_64BIT 0
+
+-#define CTL_SDIF_MODE 0xe6
+-#define SDIF_MODE_HS400 BIT(0)
+-
+ #define SDHI_VER_GEN2_SDR50 0x490c
+ #define SDHI_VER_RZ_A1 0x820b
+ /* very old datasheets said 0x490c for SDR104, too. They are wrong! */
+@@ -383,8 +380,7 @@ static void renesas_sdhi_hs400_complete(struct mmc_host *mmc)
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DT2FF,
+ priv->scc_tappos_hs400);
+
+- /* Gen3 can't do automatic tap correction with HS400, so disable it */
+- if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN3_SDMMC)
++ if (priv->quirks && priv->quirks->manual_tap_correction)
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL,
+ ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN &
+ sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
+@@ -562,23 +558,25 @@ static void renesas_sdhi_scc_reset(struct tmio_mmc_host *host, struct renesas_sd
+ }
+
+ /* only populated for TMIO_MMC_MIN_RCAR2 */
+-static void renesas_sdhi_reset(struct tmio_mmc_host *host)
++static void renesas_sdhi_reset(struct tmio_mmc_host *host, bool preserve)
+ {
+ struct renesas_sdhi *priv = host_to_priv(host);
+ int ret;
+ u16 val;
+
+- if (priv->rstc) {
+- reset_control_reset(priv->rstc);
+- /* Unknown why but without polling reset status, it will hang */
+- read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
+- false, priv->rstc);
+- /* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
+- sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
+- priv->needs_adjust_hs400 = false;
+- renesas_sdhi_set_clock(host, host->clk_cache);
+- } else if (priv->scc_ctl) {
+- renesas_sdhi_scc_reset(host, priv);
++ if (!preserve) {
++ if (priv->rstc) {
++ reset_control_reset(priv->rstc);
++ /* Unknown why but without polling reset status, it will hang */
++ read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
++ false, priv->rstc);
++ /* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
++ sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
++ priv->needs_adjust_hs400 = false;
++ renesas_sdhi_set_clock(host, host->clk_cache);
++ } else if (priv->scc_ctl) {
++ renesas_sdhi_scc_reset(host, priv);
++ }
+ }
+
+ if (sd_ctrl_read16(host, CTL_VERSION) >= SDHI_VER_GEN3_SD) {
+@@ -719,7 +717,7 @@ static bool renesas_sdhi_manual_correction(struct tmio_mmc_host *host, bool use_
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSREQ, 0);
+
+ /* Change TAP position according to correction status */
+- if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN3_SDMMC &&
++ if (priv->quirks && priv->quirks->manual_tap_correction &&
+ host->mmc->ios.timing == MMC_TIMING_MMC_HS400) {
+ u32 bad_taps = priv->quirks ? priv->quirks->hs400_bad_taps : 0;
+ /*
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 3084b15ae2cbb..52915404eb071 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -170,6 +170,7 @@ static const struct renesas_sdhi_quirks sdhi_quirks_4tap_nohs400_one_rx = {
+ static const struct renesas_sdhi_quirks sdhi_quirks_4tap = {
+ .hs400_4taps = true,
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_nohs400 = {
+@@ -182,25 +183,30 @@ static const struct renesas_sdhi_quirks sdhi_quirks_fixed_addr = {
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_bad_taps1357 = {
+ .hs400_bad_taps = BIT(1) | BIT(3) | BIT(5) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_bad_taps2367 = {
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a7796_es13 = {
+ .hs400_4taps = true,
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
+ .hs400_calib_table = r8a7796_es13_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a77965 = {
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
+ .hs400_calib_table = r8a77965_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a77990 = {
+ .hs400_calib_table = r8a77990_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ /*
+diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
+index b55a29c53d9c3..53a2ad9a24b87 100644
+--- a/drivers/mmc/host/tmio_mmc.c
++++ b/drivers/mmc/host/tmio_mmc.c
+@@ -75,7 +75,7 @@ static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
+ tmio_mmc_clk_start(host);
+ }
+
+-static void tmio_mmc_reset(struct tmio_mmc_host *host)
++static void tmio_mmc_reset(struct tmio_mmc_host *host, bool preserve)
+ {
+ sd_ctrl_write16(host, CTL_RESET_SDIO, 0x0000);
+ usleep_range(10000, 11000);
+diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
+index e754bb3f5c323..501613c744063 100644
+--- a/drivers/mmc/host/tmio_mmc.h
++++ b/drivers/mmc/host/tmio_mmc.h
+@@ -42,6 +42,7 @@
+ #define CTL_DMA_ENABLE 0xd8
+ #define CTL_RESET_SD 0xe0
+ #define CTL_VERSION 0xe2
++#define CTL_SDIF_MODE 0xe6 /* only known on R-Car 2+ */
+
+ /* Definitions for values the CTL_STOP_INTERNAL_ACTION register can take */
+ #define TMIO_STOP_STP BIT(0)
+@@ -98,6 +99,9 @@
+ /* Definitions for values the CTL_DMA_ENABLE register can take */
+ #define DMA_ENABLE_DMASDRW BIT(1)
+
++/* Definitions for values the CTL_SDIF_MODE register can take */
++#define SDIF_MODE_HS400 BIT(0) /* only known on R-Car 2+ */
++
+ /* Define some IRQ masks */
+ /* This is the mask used at reset by the chip */
+ #define TMIO_MASK_ALL 0x837f031d
+@@ -181,7 +185,7 @@ struct tmio_mmc_host {
+ int (*multi_io_quirk)(struct mmc_card *card,
+ unsigned int direction, int blk_size);
+ int (*write16_hook)(struct tmio_mmc_host *host, int addr);
+- void (*reset)(struct tmio_mmc_host *host);
++ void (*reset)(struct tmio_mmc_host *host, bool preserve);
+ bool (*check_retune)(struct tmio_mmc_host *host, struct mmc_request *mrq);
+ void (*fixup_request)(struct tmio_mmc_host *host, struct mmc_request *mrq);
+ unsigned int (*get_timeout_cycles)(struct tmio_mmc_host *host);
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index a5850d83908be..437048bb80273 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -179,8 +179,17 @@ static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
+ sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
+ }
+
+-static void tmio_mmc_reset(struct tmio_mmc_host *host)
++static void tmio_mmc_reset(struct tmio_mmc_host *host, bool preserve)
+ {
++ u16 card_opt, clk_ctrl, sdif_mode;
++
++ if (preserve) {
++ card_opt = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT);
++ clk_ctrl = sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL);
++ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++ sdif_mode = sd_ctrl_read16(host, CTL_SDIF_MODE);
++ }
++
+ /* FIXME - should we set stop clock reg here */
+ sd_ctrl_write16(host, CTL_RESET_SD, 0x0000);
+ usleep_range(10000, 11000);
+@@ -190,7 +199,7 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host)
+ tmio_mmc_abort_dma(host);
+
+ if (host->reset)
+- host->reset(host);
++ host->reset(host, preserve);
+
+ sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
+ host->sdcard_irq_mask = host->sdcard_irq_mask_all;
+@@ -206,6 +215,13 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host)
+ sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0001);
+ }
+
++ if (preserve) {
++ sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, card_opt);
++ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk_ctrl);
++ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++ sd_ctrl_write16(host, CTL_SDIF_MODE, sdif_mode);
++ }
++
+ if (host->mmc->card)
+ mmc_retune_needed(host->mmc);
+ }
+@@ -248,7 +264,7 @@ static void tmio_mmc_reset_work(struct work_struct *work)
+
+ spin_unlock_irqrestore(&host->lock, flags);
+
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, true);
+
+ /* Ready for new calls */
+ host->mrq = NULL;
+@@ -961,7 +977,7 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ tmio_mmc_power_off(host);
+ /* For R-Car Gen2+, we need to reset SDHI specific SCC */
+ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, false);
+
+ host->set_clock(host, 0);
+ break;
+@@ -1189,7 +1205,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
+ _host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+
+ _host->set_clock(_host, 0);
+- tmio_mmc_reset(_host);
++ tmio_mmc_reset(_host, false);
+
+ spin_lock_init(&_host->lock);
+ mutex_init(&_host->ios_lock);
+@@ -1285,7 +1301,7 @@ int tmio_mmc_host_runtime_resume(struct device *dev)
+ struct tmio_mmc_host *host = dev_get_drvdata(dev);
+
+ tmio_mmc_clk_enable(host);
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, false);
+
+ if (host->clk_cache)
+ host->set_clock(host, host->clk_cache);
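
The new 'preserve' argument threaded through the reset paths above lets a mid-operation recovery reset keep the negotiated bus state (card option, clock control and, on R-Car, the SDIF mode) while cold resets still start from scratch. The save/reset/restore shape, reduced to hypothetical register offsets:

#include <linux/io.h>
#include <linux/types.h>

static void host_reset(void __iomem *base, bool preserve)
{
	u16 clk = 0, mode = 0;

	if (preserve) {			/* latch what must survive */
		clk  = readw(base + 0xd4);
		mode = readw(base + 0xe6);
	}

	writew(0, base + 0xe0);		/* assert reset */
	writew(1, base + 0xe0);		/* release reset */

	if (preserve) {			/* write the latched state back */
		writew(clk,  base + 0xd4);
		writew(mode, base + 0xe6);
	}
}
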
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 666a4505a55a9..fa46a54d22ed4 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -1069,9 +1069,6 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+
+ mcp251x_read_2regs(spi, CANINTF, &intf, &eflag);
+
+- /* mask out flags we don't care about */
+- intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
+-
+ /* receive buffer 0 */
+ if (intf & CANINTF_RX0IF) {
+ mcp251x_hw_rx(spi, 0);
+@@ -1081,6 +1078,18 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ if (mcp251x_is_2510(spi))
+ mcp251x_write_bits(spi, CANINTF,
+ CANINTF_RX0IF, 0x00);
++
++ /* check if buffer 1 is already known to be full, no need to re-read */
++ if (!(intf & CANINTF_RX1IF)) {
++ u8 intf1, eflag1;
++
++ /* intf needs to be read again to avoid a race condition */
++ mcp251x_read_2regs(spi, CANINTF, &intf1, &eflag1);
++
++ /* combine flags from both operations for error handling */
++ intf |= intf1;
++ eflag |= eflag1;
++ }
+ }
+
+ /* receive buffer 1 */
+@@ -1091,6 +1100,9 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ clear_intf |= CANINTF_RX1IF;
+ }
+
++ /* mask out flags we don't care about */
++ intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
++
+ /* any error or tx interrupt we need to clear? */
+ if (intf & (CANINTF_ERR | CANINTF_TX))
+ clear_intf |= intf & (CANINTF_ERR | CANINTF_TX);
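
The mcp251x change above closes a snapshot race: the interrupt flags were read once, so a frame landing in RX buffer 1 while RX buffer 0 was being drained could be missed until the next interrupt. Re-reading and OR-ing the flags before acting on them covers that window. The pattern, with hypothetical chip accessors:

#include <linux/types.h>

#define RX0	0x01
#define RX1	0x02

struct chip;

static u8 read_int_flags(struct chip *c)
{
	return 0;		/* stand-in for a register read */
}

static void drain_rx(struct chip *c, int buf)
{
	/* stand-in for the buffer handler */
}

static void handle_rx_irq(struct chip *c)
{
	u8 flags = read_int_flags(c);

	if (flags & RX0) {
		drain_rx(c, 0);
		/* RX1 may have filled meanwhile: refresh the snapshot */
		if (!(flags & RX1))
			flags |= read_int_flags(c);
	}

	if (flags & RX1)
		drain_rx(c, 1);
}
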
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index bbec3311d8934..e09b6732117cf 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -194,7 +194,7 @@ struct __packed ems_cpc_msg {
+ __le32 ts_sec; /* timestamp in seconds */
+ __le32 ts_nsec; /* timestamp in nano seconds */
+
+- union {
++ union __packed {
+ u8 generic[64];
+ struct cpc_can_msg can_msg;
+ struct cpc_can_params can_params;
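
The single-token ems_usb fix above matters because the outer struct is already __packed: that lets the union start at an unaligned offset, and unless the union itself is also marked __packed the compiler may still assume natural alignment when accessing its wider members (clang warns about exactly this). A hypothetical reduction of the layout:

#include <linux/types.h>

struct __packed msg {
	u8 type;		/* pushes everything after to odd offsets */
	union __packed {
		u8 generic[4];
		__le32 word;	/* access must be unaligned-safe */
	} payload;
};
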
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index ab40b700cf1af..ebad795e4e95f 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -658,6 +658,9 @@ static int ksz9477_port_fdb_dump(struct dsa_switch *ds, int port,
+ goto exit;
+ }
+
++ if (!(ksz_data & ALU_VALID))
++ continue;
++
+ /* read ALU table */
+ ksz9477_read_table(dev, alu_table);
+
+diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c
+index a4c6eb9a52d0d..83dca9179aa07 100644
+--- a/drivers/net/dsa/mv88e6060.c
++++ b/drivers/net/dsa/mv88e6060.c
+@@ -118,6 +118,9 @@ static int mv88e6060_setup_port(struct mv88e6060_priv *priv, int p)
+ int addr = REG_PORT(p);
+ int ret;
+
++ if (dsa_is_unused_port(priv->ds, p))
++ return 0;
++
+ /* Do not force flow control, disable Ingress and Egress
+ * Header tagging, disable VLAN tunneling, and set the port
+ * state to Forwarding. Additionally, if this is the CPU
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 859196898a7d0..aadb0bd7c24f1 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -610,6 +610,9 @@ static int felix_change_tag_protocol(struct dsa_switch *ds,
+
+ old_proto_ops = felix->tag_proto_ops;
+
++ if (proto_ops == old_proto_ops)
++ return 0;
++
+ err = proto_ops->setup(ds);
+ if (err)
+ goto setup_failed;
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index d0920f5a8f04f..6439b56f381f9 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -280,19 +280,23 @@ static const u32 vsc9959_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
+- REG(SYS_COUNT_RX_LONGS, 0x000044),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
++ REG(SYS_COUNT_RX_PAUSE, 0x000040),
++ REG(SYS_COUNT_RX_CONTROL, 0x000044),
++ REG(SYS_COUNT_RX_LONGS, 0x000048),
+ REG(SYS_COUNT_TX_OCTETS, 0x000200),
+ REG(SYS_COUNT_TX_COLLISION, 0x000210),
+ REG(SYS_COUNT_TX_DROPS, 0x000214),
+ REG(SYS_COUNT_TX_64, 0x00021c),
+ REG(SYS_COUNT_TX_65_127, 0x000220),
+- REG(SYS_COUNT_TX_128_511, 0x000224),
+- REG(SYS_COUNT_TX_512_1023, 0x000228),
+- REG(SYS_COUNT_TX_1024_1526, 0x00022c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000230),
++ REG(SYS_COUNT_TX_128_255, 0x000224),
++ REG(SYS_COUNT_TX_256_511, 0x000228),
++ REG(SYS_COUNT_TX_512_1023, 0x00022c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000230),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000234),
+ REG(SYS_COUNT_TX_AGING, 0x000278),
+ REG(SYS_RESET_CFG, 0x000e00),
+ REG(SYS_SR_ETYPE_CFG, 0x000e04),
+@@ -546,100 +550,379 @@ static const struct reg_field vsc9959_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 7, 4),
+ };
+
+-static const struct ocelot_stat_layout vsc9959_stats_layout[] = {
+- { .offset = 0x00, .name = "rx_octets", },
+- { .offset = 0x01, .name = "rx_unicast", },
+- { .offset = 0x02, .name = "rx_multicast", },
+- { .offset = 0x03, .name = "rx_broadcast", },
+- { .offset = 0x04, .name = "rx_shorts", },
+- { .offset = 0x05, .name = "rx_fragments", },
+- { .offset = 0x06, .name = "rx_jabbers", },
+- { .offset = 0x07, .name = "rx_crc_align_errs", },
+- { .offset = 0x08, .name = "rx_sym_errs", },
+- { .offset = 0x09, .name = "rx_frames_below_65_octets", },
+- { .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
+- { .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
+- { .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
+- { .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
+- { .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
+- { .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
+- { .offset = 0x10, .name = "rx_pause", },
+- { .offset = 0x11, .name = "rx_control", },
+- { .offset = 0x12, .name = "rx_longs", },
+- { .offset = 0x13, .name = "rx_classified_drops", },
+- { .offset = 0x14, .name = "rx_red_prio_0", },
+- { .offset = 0x15, .name = "rx_red_prio_1", },
+- { .offset = 0x16, .name = "rx_red_prio_2", },
+- { .offset = 0x17, .name = "rx_red_prio_3", },
+- { .offset = 0x18, .name = "rx_red_prio_4", },
+- { .offset = 0x19, .name = "rx_red_prio_5", },
+- { .offset = 0x1A, .name = "rx_red_prio_6", },
+- { .offset = 0x1B, .name = "rx_red_prio_7", },
+- { .offset = 0x1C, .name = "rx_yellow_prio_0", },
+- { .offset = 0x1D, .name = "rx_yellow_prio_1", },
+- { .offset = 0x1E, .name = "rx_yellow_prio_2", },
+- { .offset = 0x1F, .name = "rx_yellow_prio_3", },
+- { .offset = 0x20, .name = "rx_yellow_prio_4", },
+- { .offset = 0x21, .name = "rx_yellow_prio_5", },
+- { .offset = 0x22, .name = "rx_yellow_prio_6", },
+- { .offset = 0x23, .name = "rx_yellow_prio_7", },
+- { .offset = 0x24, .name = "rx_green_prio_0", },
+- { .offset = 0x25, .name = "rx_green_prio_1", },
+- { .offset = 0x26, .name = "rx_green_prio_2", },
+- { .offset = 0x27, .name = "rx_green_prio_3", },
+- { .offset = 0x28, .name = "rx_green_prio_4", },
+- { .offset = 0x29, .name = "rx_green_prio_5", },
+- { .offset = 0x2A, .name = "rx_green_prio_6", },
+- { .offset = 0x2B, .name = "rx_green_prio_7", },
+- { .offset = 0x80, .name = "tx_octets", },
+- { .offset = 0x81, .name = "tx_unicast", },
+- { .offset = 0x82, .name = "tx_multicast", },
+- { .offset = 0x83, .name = "tx_broadcast", },
+- { .offset = 0x84, .name = "tx_collision", },
+- { .offset = 0x85, .name = "tx_drops", },
+- { .offset = 0x86, .name = "tx_pause", },
+- { .offset = 0x87, .name = "tx_frames_below_65_octets", },
+- { .offset = 0x88, .name = "tx_frames_65_to_127_octets", },
+- { .offset = 0x89, .name = "tx_frames_128_255_octets", },
+- { .offset = 0x8B, .name = "tx_frames_256_511_octets", },
+- { .offset = 0x8C, .name = "tx_frames_1024_1526_octets", },
+- { .offset = 0x8D, .name = "tx_frames_over_1526_octets", },
+- { .offset = 0x8E, .name = "tx_yellow_prio_0", },
+- { .offset = 0x8F, .name = "tx_yellow_prio_1", },
+- { .offset = 0x90, .name = "tx_yellow_prio_2", },
+- { .offset = 0x91, .name = "tx_yellow_prio_3", },
+- { .offset = 0x92, .name = "tx_yellow_prio_4", },
+- { .offset = 0x93, .name = "tx_yellow_prio_5", },
+- { .offset = 0x94, .name = "tx_yellow_prio_6", },
+- { .offset = 0x95, .name = "tx_yellow_prio_7", },
+- { .offset = 0x96, .name = "tx_green_prio_0", },
+- { .offset = 0x97, .name = "tx_green_prio_1", },
+- { .offset = 0x98, .name = "tx_green_prio_2", },
+- { .offset = 0x99, .name = "tx_green_prio_3", },
+- { .offset = 0x9A, .name = "tx_green_prio_4", },
+- { .offset = 0x9B, .name = "tx_green_prio_5", },
+- { .offset = 0x9C, .name = "tx_green_prio_6", },
+- { .offset = 0x9D, .name = "tx_green_prio_7", },
+- { .offset = 0x9E, .name = "tx_aged", },
+- { .offset = 0x100, .name = "drop_local", },
+- { .offset = 0x101, .name = "drop_tail", },
+- { .offset = 0x102, .name = "drop_yellow_prio_0", },
+- { .offset = 0x103, .name = "drop_yellow_prio_1", },
+- { .offset = 0x104, .name = "drop_yellow_prio_2", },
+- { .offset = 0x105, .name = "drop_yellow_prio_3", },
+- { .offset = 0x106, .name = "drop_yellow_prio_4", },
+- { .offset = 0x107, .name = "drop_yellow_prio_5", },
+- { .offset = 0x108, .name = "drop_yellow_prio_6", },
+- { .offset = 0x109, .name = "drop_yellow_prio_7", },
+- { .offset = 0x10A, .name = "drop_green_prio_0", },
+- { .offset = 0x10B, .name = "drop_green_prio_1", },
+- { .offset = 0x10C, .name = "drop_green_prio_2", },
+- { .offset = 0x10D, .name = "drop_green_prio_3", },
+- { .offset = 0x10E, .name = "drop_green_prio_4", },
+- { .offset = 0x10F, .name = "drop_green_prio_5", },
+- { .offset = 0x110, .name = "drop_green_prio_6", },
+- { .offset = 0x111, .name = "drop_green_prio_7", },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout vsc9959_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x91,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x92,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x93,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x94,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x95,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x96,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x97,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x98,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x99,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x9A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x9B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x9C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x9D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x9E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x100,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x101,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x102,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x103,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x104,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x105,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x106,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x107,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x108,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x109,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x10A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x10B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x10C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x10D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x10E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x10F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x110,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x111,
++ },
+ };
+
+ static const struct vcap_field vsc9959_vcap_es0_keys[] = {
+@@ -2172,7 +2455,7 @@ static void vsc9959_psfp_sgi_table_del(struct ocelot *ocelot,
+ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
+ struct felix_stream_filter_counters *counters)
+ {
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+
+ ocelot_rmw(ocelot, SYS_STAT_CFG_STAT_VIEW(index),
+ SYS_STAT_CFG_STAT_VIEW_M,
+@@ -2189,7 +2472,7 @@ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
+ SYS_STAT_CFG_STAT_CLEAR_SHOT(0x10),
+ SYS_STAT_CFG);
+
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+ }
+
+ static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
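
The lock swap above tracks a type change in struct ocelot: stats_lock is now a spinlock rather than a mutex, presumably so the counters can also be snapshotted from contexts that must not sleep; every lock/unlock pair converts 1:1. Minimal shape with a hypothetical counter:

#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(stats_lock);
static u64 stats_counter;

static u64 stats_read(void)
{
	u64 val;

	spin_lock(&stats_lock);		/* short, non-sleeping section */
	val = stats_counter;
	spin_unlock(&stats_lock);

	return val;
}
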
+diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
+index ea06492113568..fe5d4642d0bcb 100644
+--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
++++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
+@@ -277,19 +277,21 @@ static const u32 vsc9953_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
+ REG(SYS_COUNT_RX_LONGS, 0x000048),
+ REG(SYS_COUNT_TX_OCTETS, 0x000100),
+ REG(SYS_COUNT_TX_COLLISION, 0x000110),
+ REG(SYS_COUNT_TX_DROPS, 0x000114),
+ REG(SYS_COUNT_TX_64, 0x00011c),
+ REG(SYS_COUNT_TX_65_127, 0x000120),
+- REG(SYS_COUNT_TX_128_511, 0x000124),
+- REG(SYS_COUNT_TX_512_1023, 0x000128),
+- REG(SYS_COUNT_TX_1024_1526, 0x00012c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000130),
++ REG(SYS_COUNT_TX_128_255, 0x000124),
++ REG(SYS_COUNT_TX_256_511, 0x000128),
++ REG(SYS_COUNT_TX_512_1023, 0x00012c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000130),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000134),
+ REG(SYS_COUNT_TX_AGING, 0x000178),
+ REG(SYS_RESET_CFG, 0x000318),
+ REG_RESERVED(SYS_SR_ETYPE_CFG),
+@@ -543,101 +545,379 @@ static const struct reg_field vsc9953_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 11, 4),
+ };
+
+-static const struct ocelot_stat_layout vsc9953_stats_layout[] = {
+- { .offset = 0x00, .name = "rx_octets", },
+- { .offset = 0x01, .name = "rx_unicast", },
+- { .offset = 0x02, .name = "rx_multicast", },
+- { .offset = 0x03, .name = "rx_broadcast", },
+- { .offset = 0x04, .name = "rx_shorts", },
+- { .offset = 0x05, .name = "rx_fragments", },
+- { .offset = 0x06, .name = "rx_jabbers", },
+- { .offset = 0x07, .name = "rx_crc_align_errs", },
+- { .offset = 0x08, .name = "rx_sym_errs", },
+- { .offset = 0x09, .name = "rx_frames_below_65_octets", },
+- { .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
+- { .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
+- { .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
+- { .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
+- { .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
+- { .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
+- { .offset = 0x10, .name = "rx_pause", },
+- { .offset = 0x11, .name = "rx_control", },
+- { .offset = 0x12, .name = "rx_longs", },
+- { .offset = 0x13, .name = "rx_classified_drops", },
+- { .offset = 0x14, .name = "rx_red_prio_0", },
+- { .offset = 0x15, .name = "rx_red_prio_1", },
+- { .offset = 0x16, .name = "rx_red_prio_2", },
+- { .offset = 0x17, .name = "rx_red_prio_3", },
+- { .offset = 0x18, .name = "rx_red_prio_4", },
+- { .offset = 0x19, .name = "rx_red_prio_5", },
+- { .offset = 0x1A, .name = "rx_red_prio_6", },
+- { .offset = 0x1B, .name = "rx_red_prio_7", },
+- { .offset = 0x1C, .name = "rx_yellow_prio_0", },
+- { .offset = 0x1D, .name = "rx_yellow_prio_1", },
+- { .offset = 0x1E, .name = "rx_yellow_prio_2", },
+- { .offset = 0x1F, .name = "rx_yellow_prio_3", },
+- { .offset = 0x20, .name = "rx_yellow_prio_4", },
+- { .offset = 0x21, .name = "rx_yellow_prio_5", },
+- { .offset = 0x22, .name = "rx_yellow_prio_6", },
+- { .offset = 0x23, .name = "rx_yellow_prio_7", },
+- { .offset = 0x24, .name = "rx_green_prio_0", },
+- { .offset = 0x25, .name = "rx_green_prio_1", },
+- { .offset = 0x26, .name = "rx_green_prio_2", },
+- { .offset = 0x27, .name = "rx_green_prio_3", },
+- { .offset = 0x28, .name = "rx_green_prio_4", },
+- { .offset = 0x29, .name = "rx_green_prio_5", },
+- { .offset = 0x2A, .name = "rx_green_prio_6", },
+- { .offset = 0x2B, .name = "rx_green_prio_7", },
+- { .offset = 0x40, .name = "tx_octets", },
+- { .offset = 0x41, .name = "tx_unicast", },
+- { .offset = 0x42, .name = "tx_multicast", },
+- { .offset = 0x43, .name = "tx_broadcast", },
+- { .offset = 0x44, .name = "tx_collision", },
+- { .offset = 0x45, .name = "tx_drops", },
+- { .offset = 0x46, .name = "tx_pause", },
+- { .offset = 0x47, .name = "tx_frames_below_65_octets", },
+- { .offset = 0x48, .name = "tx_frames_65_to_127_octets", },
+- { .offset = 0x49, .name = "tx_frames_128_255_octets", },
+- { .offset = 0x4A, .name = "tx_frames_256_511_octets", },
+- { .offset = 0x4B, .name = "tx_frames_512_1023_octets", },
+- { .offset = 0x4C, .name = "tx_frames_1024_1526_octets", },
+- { .offset = 0x4D, .name = "tx_frames_over_1526_octets", },
+- { .offset = 0x4E, .name = "tx_yellow_prio_0", },
+- { .offset = 0x4F, .name = "tx_yellow_prio_1", },
+- { .offset = 0x50, .name = "tx_yellow_prio_2", },
+- { .offset = 0x51, .name = "tx_yellow_prio_3", },
+- { .offset = 0x52, .name = "tx_yellow_prio_4", },
+- { .offset = 0x53, .name = "tx_yellow_prio_5", },
+- { .offset = 0x54, .name = "tx_yellow_prio_6", },
+- { .offset = 0x55, .name = "tx_yellow_prio_7", },
+- { .offset = 0x56, .name = "tx_green_prio_0", },
+- { .offset = 0x57, .name = "tx_green_prio_1", },
+- { .offset = 0x58, .name = "tx_green_prio_2", },
+- { .offset = 0x59, .name = "tx_green_prio_3", },
+- { .offset = 0x5A, .name = "tx_green_prio_4", },
+- { .offset = 0x5B, .name = "tx_green_prio_5", },
+- { .offset = 0x5C, .name = "tx_green_prio_6", },
+- { .offset = 0x5D, .name = "tx_green_prio_7", },
+- { .offset = 0x5E, .name = "tx_aged", },
+- { .offset = 0x80, .name = "drop_local", },
+- { .offset = 0x81, .name = "drop_tail", },
+- { .offset = 0x82, .name = "drop_yellow_prio_0", },
+- { .offset = 0x83, .name = "drop_yellow_prio_1", },
+- { .offset = 0x84, .name = "drop_yellow_prio_2", },
+- { .offset = 0x85, .name = "drop_yellow_prio_3", },
+- { .offset = 0x86, .name = "drop_yellow_prio_4", },
+- { .offset = 0x87, .name = "drop_yellow_prio_5", },
+- { .offset = 0x88, .name = "drop_yellow_prio_6", },
+- { .offset = 0x89, .name = "drop_yellow_prio_7", },
+- { .offset = 0x8A, .name = "drop_green_prio_0", },
+- { .offset = 0x8B, .name = "drop_green_prio_1", },
+- { .offset = 0x8C, .name = "drop_green_prio_2", },
+- { .offset = 0x8D, .name = "drop_green_prio_3", },
+- { .offset = 0x8E, .name = "drop_green_prio_4", },
+- { .offset = 0x8F, .name = "drop_green_prio_5", },
+- { .offset = 0x90, .name = "drop_green_prio_6", },
+- { .offset = 0x91, .name = "drop_green_prio_7", },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout vsc9953_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x40,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x41,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x42,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x43,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x44,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x45,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x46,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x47,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x48,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x49,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x4A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x4B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x4C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x4D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x4E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x4F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x50,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x51,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x52,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x53,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x54,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x55,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x56,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x57,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x58,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x59,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x5A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x5B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x5C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x5D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x5E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x91,
++ },
+ };
+
+ static const struct vcap_field vsc9953_vcap_es0_keys[] = {
+diff --git a/drivers/net/dsa/sja1105/sja1105_devlink.c b/drivers/net/dsa/sja1105/sja1105_devlink.c
+index 0569ff066634d..10c6fea1227fa 100644
+--- a/drivers/net/dsa/sja1105/sja1105_devlink.c
++++ b/drivers/net/dsa/sja1105/sja1105_devlink.c
+@@ -93,7 +93,7 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
+
+ region = dsa_devlink_region_create(ds, ops, 1, size);
+ if (IS_ERR(region)) {
+- while (i-- >= 0)
++ while (--i >= 0)
+ dsa_devlink_region_destroy(priv->regions[i]);
+ return PTR_ERR(region);
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index e11cc29d3264c..06508eebb5853 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -265,12 +265,10 @@ static void aq_nic_service_timer_cb(struct timer_list *t)
+ static void aq_nic_polling_timer_cb(struct timer_list *t)
+ {
+ struct aq_nic_s *self = from_timer(self, t, polling_timer);
+- struct aq_vec_s *aq_vec = NULL;
+ unsigned int i = 0U;
+
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+- aq_vec_isr(i, (void *)aq_vec);
++ for (i = 0U; self->aq_vecs > i; ++i)
++ aq_vec_isr(i, (void *)self->aq_vec[i]);
+
+ mod_timer(&self->polling_timer, jiffies +
+ AQ_CFG_POLLING_TIMER_INTERVAL);
+@@ -1014,7 +1012,6 @@ int aq_nic_get_regs_count(struct aq_nic_s *self)
+
+ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ {
+- struct aq_vec_s *aq_vec = NULL;
+ struct aq_stats_s *stats;
+ unsigned int count = 0U;
+ unsigned int i = 0U;
+@@ -1064,11 +1061,11 @@ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ data += i;
+
+ for (tc = 0U; tc < self->aq_nic_cfg.tcs; tc++) {
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- aq_vec && self->aq_vecs > i;
+- ++i, aq_vec = self->aq_vec[i]) {
++ for (i = 0U; self->aq_vecs > i; ++i) {
++ if (!self->aq_vec[i])
++ break;
+ data += count;
+- count = aq_vec_get_sw_stats(aq_vec, tc, data);
++ count = aq_vec_get_sw_stats(self->aq_vec[i], tc, data);
+ }
+ }
+
+@@ -1382,7 +1379,6 @@ int aq_nic_set_loopback(struct aq_nic_s *self)
+
+ int aq_nic_stop(struct aq_nic_s *self)
+ {
+- struct aq_vec_s *aq_vec = NULL;
+ unsigned int i = 0U;
+
+ netif_tx_disable(self->ndev);
+@@ -1400,9 +1396,8 @@ int aq_nic_stop(struct aq_nic_s *self)
+
+ aq_ptp_irq_free(self);
+
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+- aq_vec_stop(aq_vec);
++ for (i = 0U; self->aq_vecs > i; ++i)
++ aq_vec_stop(self->aq_vec[i]);
+
+ aq_ptp_ring_stop(self);
+
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index 2dfc1e32bbb31..93580484a3f4e 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -189,8 +189,8 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
+ }
+
+ slot->skb = skb;
+- ring->end += nr_frags + 1;
+ netdev_sent_queue(net_dev, skb->len);
++ ring->end += nr_frags + 1;
+
+ wmb();
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index c888ddee1fc41..7ded559842e83 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -393,6 +393,9 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ if (priv->internal_phy && !GENET_IS_V5(priv))
+ dev->phydev->irq = PHY_MAC_INTERRUPT;
+
++ /* Indicate that the MAC is responsible for PHY PM */
++ dev->phydev->mac_managed_pm = true;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+index 26433a62d7f0d..fed5f93bf620a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+@@ -497,7 +497,7 @@ struct cpl_t5_pass_accept_rpl {
+ __be32 opt2;
+ __be64 opt0;
+ __be32 iss;
+- __be32 rsvd[3];
++ __be32 rsvd;
+ };
+
+ struct cpl_act_open_req {
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index cb069a0af7b92..251ea16ed0fae 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -340,14 +340,14 @@ static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
+ return 0;
+ }
+
+-static void tsnep_tx_unmap(struct tsnep_tx *tx, int count)
++static void tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
+ {
+ struct device *dmadev = tx->adapter->dmadev;
+ struct tsnep_tx_entry *entry;
+ int i;
+
+ for (i = 0; i < count; i++) {
+- entry = &tx->entry[(tx->read + i) % TSNEP_RING_SIZE];
++ entry = &tx->entry[(index + i) % TSNEP_RING_SIZE];
+
+ if (entry->len) {
+ if (i == 0)
+@@ -395,7 +395,7 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
+
+ retval = tsnep_tx_map(skb, tx, count);
+ if (retval != 0) {
+- tsnep_tx_unmap(tx, count);
++ tsnep_tx_unmap(tx, tx->write, count);
+ dev_kfree_skb_any(entry->skb);
+ entry->skb = NULL;
+
+@@ -464,7 +464,7 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
+ if (skb_shinfo(entry->skb)->nr_frags > 0)
+ count += skb_shinfo(entry->skb)->nr_frags;
+
+- tsnep_tx_unmap(tx, count);
++ tsnep_tx_unmap(tx, tx->read, count);
+
+ if ((skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
+ (__le32_to_cpu(entry->desc_wb->properties) &
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index cd9ec80522e75..75d51572693d6 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1660,8 +1660,8 @@ static int dpaa2_eth_add_bufs(struct dpaa2_eth_priv *priv,
+ buf_array[i] = addr;
+
+ /* tracing point */
+- trace_dpaa2_eth_buf_seed(priv->net_dev,
+- page, DPAA2_ETH_RX_BUF_RAW_SIZE,
++ trace_dpaa2_eth_buf_seed(priv->net_dev, page_address(page),
++ DPAA2_ETH_RX_BUF_RAW_SIZE,
+ addr, priv->rx_buf_size,
+ bpid);
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 7d49c28215f31..3dc3c0b626c21 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -135,11 +135,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
+ * NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
+ * to current timer would be next second.
+ */
+- tempval = readl(fep->hwp + FEC_ATIME_CTRL);
+- tempval |= FEC_T_CTRL_CAPTURE;
+- writel(tempval, fep->hwp + FEC_ATIME_CTRL);
+-
+- tempval = readl(fep->hwp + FEC_ATIME);
++ tempval = fep->cc.read(&fep->cc);
+ /* Convert the ptp local counter to 1588 timestamp */
+ ns = timecounter_cyc2time(&fep->tc, tempval);
+ ts = ns_to_timespec64(ns);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 685556e968f20..71a8e1698ed48 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -384,7 +384,9 @@ static void i40e_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ set_bit(__I40E_GLOBAL_RESET_REQUESTED, pf->state);
+ break;
+ default:
+- netdev_err(netdev, "tx_timeout recovery unsuccessful\n");
++ netdev_err(netdev, "tx_timeout recovery unsuccessful, device is in non-recoverable state.\n");
++ set_bit(__I40E_DOWN_REQUESTED, pf->state);
++ set_bit(__I40E_VSI_DOWN_REQUESTED, vsi->state);
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 7bc1174edf6b9..af69ccc6e8d2f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3204,11 +3204,13 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
+
+ protocol = vlan_get_protocol(skb);
+
+- if (eth_p_mpls(protocol))
++ if (eth_p_mpls(protocol)) {
+ ip.hdr = skb_inner_network_header(skb);
+- else
++ l4.hdr = skb_checksum_start(skb);
++ } else {
+ ip.hdr = skb_network_header(skb);
+- l4.hdr = skb_checksum_start(skb);
++ l4.hdr = skb_transport_header(skb);
++ }
+
+ /* set the tx_flags to indicate the IP protocol type. this is
+ * required so that checksum header computation below is accurate.
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_adminq.c b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+index cd4e6a22d0f9f..9ffbd24d83cb6 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_adminq.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+@@ -324,6 +324,7 @@ static enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
+ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ {
+ enum iavf_status ret_code = 0;
++ int i;
+
+ if (hw->aq.asq.count > 0) {
+ /* queue already initialized */
+@@ -354,12 +355,17 @@ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ /* initialize base registers */
+ ret_code = iavf_config_asq_regs(hw);
+ if (ret_code)
+- goto init_adminq_free_rings;
++ goto init_free_asq_bufs;
+
+ /* success! */
+ hw->aq.asq.count = hw->aq.num_asq_entries;
+ goto init_adminq_exit;
+
++init_free_asq_bufs:
++ for (i = 0; i < hw->aq.num_asq_entries; i++)
++ iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
++ iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
++
+ init_adminq_free_rings:
+ iavf_free_adminq_asq(hw);
+
+@@ -383,6 +389,7 @@ init_adminq_exit:
+ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ {
+ enum iavf_status ret_code = 0;
++ int i;
+
+ if (hw->aq.arq.count > 0) {
+ /* queue already initialized */
+@@ -413,12 +420,16 @@ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ /* initialize base registers */
+ ret_code = iavf_config_arq_regs(hw);
+ if (ret_code)
+- goto init_adminq_free_rings;
++ goto init_free_arq_bufs;
+
+ /* success! */
+ hw->aq.arq.count = hw->aq.num_arq_entries;
+ goto init_adminq_exit;
+
++init_free_arq_bufs:
++ for (i = 0; i < hw->aq.num_arq_entries; i++)
++ iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
++ iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+ init_adminq_free_rings:
+ iavf_free_adminq_arq(hw);
+
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 3dbfaead2ac74..6d159334da9ec 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2281,7 +2281,7 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter)
+ err = iavf_get_vf_config(adapter);
+ if (err == -EALREADY) {
+ err = iavf_send_vf_config_msg(adapter);
+- goto err_alloc;
++ goto err;
+ } else if (err == -EINVAL) {
+ /* We only get -EINVAL if the device is in a very bad
+ * state or if we've been disabled for previous bad
+@@ -2998,12 +2998,15 @@ continue_reset:
+
+ return;
+ reset_err:
++ if (running) {
++ set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
++ iavf_free_traffic_irqs(adapter);
++ }
++ iavf_disable_vf(adapter);
++
+ mutex_unlock(&adapter->client_lock);
+ mutex_unlock(&adapter->crit_lock);
+- if (running)
+- iavf_change_state(adapter, __IAVF_RUNNING);
+ dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+- iavf_close(netdev);
+ }
+
+ /**
+@@ -3986,8 +3989,17 @@ static int iavf_open(struct net_device *netdev)
+ return -EIO;
+ }
+
+- while (!mutex_trylock(&adapter->crit_lock))
++ while (!mutex_trylock(&adapter->crit_lock)) {
++ /* If we are in the __IAVF_INIT_CONFIG_ADAPTER state, the crit_lock
++ * is already taken and iavf_open is called from an upper
++ * device's notifier reacting to the NETDEV_REGISTER event.
++ * We have to bail out here to avoid a deadlock.
++ */
++ if (adapter->state == __IAVF_INIT_CONFIG_ADAPTER)
++ return -EBUSY;
++
+ usleep_range(500, 1000);
++ }
+
+ if (adapter->state != __IAVF_DOWN) {
+ err = -EBUSY;
+diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c
+index 85a94483c2edc..40e678cfb5078 100644
+--- a/drivers/net/ethernet/intel/ice/ice_fltr.c
++++ b/drivers/net/ethernet/intel/ice/ice_fltr.c
+@@ -62,7 +62,7 @@ ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
+ int result;
+
+ result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, false);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error setting promisc mode on VSI %i (rc=%d)\n",
+ vsi->vsi_num, result);
+@@ -86,7 +86,7 @@ ice_fltr_clear_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
+ int result;
+
+ result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, true);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error clearing promisc mode on VSI %i (rc=%d)\n",
+ vsi->vsi_num, result);
+@@ -109,7 +109,7 @@ ice_fltr_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ int result;
+
+ result = ice_clear_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error clearing promisc mode on VSI %i for VID %u (rc=%d)\n",
+ ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
+@@ -132,7 +132,7 @@ ice_fltr_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ int result;
+
+ result = ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error setting promisc mode on VSI %i for VID %u (rc=%d)\n",
+ ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index f7f9c973ec54d..d6aafa272fb0b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3178,7 +3178,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+
+ pf = vsi->back;
+ vtype = vsi->type;
+- if (WARN_ON(vtype == ICE_VSI_VF) && !vsi->vf)
++ if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf))
+ return -EINVAL;
+
+ ice_vsi_init_vlan_ops(vsi);
+@@ -4078,7 +4078,11 @@ int ice_vsi_del_vlan_zero(struct ice_vsi *vsi)
+ if (err && err != -EEXIST)
+ return err;
+
+- return 0;
++ /* when deleting the last VLAN filter, make sure to disable the VLAN
++ * promisc mode so the filter isn't left by accident
++ */
++ return ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
++ ICE_MCAST_VLAN_PROMISC_BITS, 0);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index bc68dc5c6927d..bfd97a9a8f2e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -267,8 +267,10 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m)
+ status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+ promisc_m, 0);
+ }
++ if (status && status != -EEXIST)
++ return status;
+
+- return status;
++ return 0;
+ }
+
+ /**
+@@ -3572,6 +3574,14 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
+ while (test_and_set_bit(ICE_CFG_BUSY, vsi->state))
+ usleep_range(1000, 2000);
+
++ ret = ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
++ ICE_MCAST_VLAN_PROMISC_BITS, vid);
++ if (ret) {
++ netdev_err(netdev, "Error clearing multicast promiscuous mode on VSI %i\n",
++ vsi->vsi_num);
++ vsi->current_netdev_flags |= IFF_ALLMULTI;
++ }
++
+ vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+
+ /* Make sure VLAN delete is successful before updating VLAN
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 9b2872e891518..b78a79c058bfc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -4414,6 +4414,13 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ goto free_fltr_list;
+
+ list_for_each_entry(list_itr, &vsi_list_head, list_entry) {
++ /* Avoid enabling or disabling VLAN zero twice when in double
++ * VLAN mode
++ */
++ if (ice_is_dvm_ena(hw) &&
++ list_itr->fltr_info.l_data.vlan.tpid == 0)
++ continue;
++
+ vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+ if (rm_vlan_promisc)
+ status = ice_clear_vsi_promisc(hw, vsi_handle,
+@@ -4421,7 +4428,7 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ else
+ status = ice_set_vsi_promisc(hw, vsi_handle,
+ promisc_mask, vlan_id);
+- if (status)
++ if (status && status != -EEXIST)
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 7adf9ddf129eb..7775aaa8cc439 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -505,8 +505,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+
+ if (ice_is_vf_disabled(vf)) {
+ vsi = ice_get_vf_vsi(vf);
+- if (WARN_ON(!vsi))
++ if (!vsi) {
++ dev_dbg(dev, "VF is already removed\n");
+ return -EINVAL;
++ }
+ ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
+ ice_vsi_stop_all_rx_rings(vsi);
+ dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
+@@ -705,13 +707,16 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable)
+ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi)
+ {
+ struct ice_vsi_vlan_ops *vlan_ops;
+- int err;
++ int err = 0;
+
+ vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+
+- err = vlan_ops->ena_tx_filtering(vsi);
+- if (err)
+- return err;
++ /* Allow a VF with only VLAN 0 to send all tagged traffic */
++ if (vsi->type != ICE_VSI_VF || ice_vsi_has_non_zero_vlans(vsi)) {
++ err = vlan_ops->ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+
+ return ice_cfg_mac_antispoof(vsi, true);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 24188ec594d5a..a241c0bdc1507 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -2264,6 +2264,15 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
+
+ /* Enable VLAN filtering on first non-zero VLAN */
+ if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) {
++ if (vf->spoofchk) {
++ status = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
++ if (status) {
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ dev_err(dev, "Enable VLAN anti-spoofing on VLAN ID: %d failed error-%d\n",
++ vid, status);
++ goto error_param;
++ }
++ }
+ if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n",
+@@ -2309,8 +2318,10 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
+ }
+
+ /* Disable VLAN filtering when only VLAN 0 is left */
+- if (!ice_vsi_has_non_zero_vlans(vsi))
++ if (!ice_vsi_has_non_zero_vlans(vsi)) {
++ vsi->inner_vlan_ops.dis_tx_filtering(vsi);
+ vsi->inner_vlan_ops.dis_rx_filtering(vsi);
++ }
+
+ if (vlan_promisc)
+ ice_vf_dis_vlan_promisc(vsi, &vlan);
+@@ -2814,6 +2825,13 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+
+ if (vlan_promisc)
+ ice_vf_dis_vlan_promisc(vsi, &vlan);
++
++ /* Disable VLAN filtering when only VLAN 0 is left */
++ if (!ice_vsi_has_non_zero_vlans(vsi) && ice_is_dvm_ena(&vsi->back->hw)) {
++ err = vsi->outer_vlan_ops.dis_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+
+ vc_vlan = &vlan_fltr->inner;
+@@ -2829,8 +2847,17 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ /* no support for VLAN promiscuous on inner VLAN unless
+ * we are in Single VLAN Mode (SVM)
+ */
+- if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc)
+- ice_vf_dis_vlan_promisc(vsi, &vlan);
++ if (!ice_is_dvm_ena(&vsi->back->hw)) {
++ if (vlan_promisc)
++ ice_vf_dis_vlan_promisc(vsi, &vlan);
++
++ /* Disable VLAN filtering when only VLAN 0 is left */
++ if (!ice_vsi_has_non_zero_vlans(vsi)) {
++ err = vsi->inner_vlan_ops.dis_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
++ }
+ }
+ }
+
+@@ -2907,6 +2934,13 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ if (err)
+ return err;
+ }
++
++ /* Enable VLAN filtering on first non-zero VLAN */
++ if (vf->spoofchk && vlan.vid && ice_is_dvm_ena(&vsi->back->hw)) {
++ err = vsi->outer_vlan_ops.ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+
+ vc_vlan = &vlan_fltr->inner;
+@@ -2922,10 +2956,19 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ /* no support for VLAN promiscuous on inner VLAN unless
+ * we are in Single VLAN Mode (SVM)
+ */
+- if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) {
+- err = ice_vf_ena_vlan_promisc(vsi, &vlan);
+- if (err)
+- return err;
++ if (!ice_is_dvm_ena(&vsi->back->hw)) {
++ if (vlan_promisc) {
++ err = ice_vf_ena_vlan_promisc(vsi, &vlan);
++ if (err)
++ return err;
++ }
++
++ /* Enable VLAN filtering on first non-zero VLAN */
++ if (vf->spoofchk && vlan.vid) {
++ err = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+ }
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
+index 2d3daf022651c..015b781441149 100644
+--- a/drivers/net/ethernet/intel/igb/igb.h
++++ b/drivers/net/ethernet/intel/igb/igb.h
+@@ -664,6 +664,8 @@ struct igb_adapter {
+ struct igb_mac_addr *mac_table;
+ struct vf_mac_filter vf_macs;
+ struct vf_mac_filter *vf_mac_list;
++ /* lock for VF resources */
++ spinlock_t vfs_lock;
+ };
+
+ /* flags controlling PTP/1588 function */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index c5f04c40284bf..281a3b21d4257 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3637,6 +3637,7 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct igb_adapter *adapter = netdev_priv(netdev);
+ struct e1000_hw *hw = &adapter->hw;
++ unsigned long flags;
+
+ /* reclaim resources allocated to VFs */
+ if (adapter->vf_data) {
+@@ -3649,12 +3650,13 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ pci_disable_sriov(pdev);
+ msleep(500);
+ }
+-
++ spin_lock_irqsave(&adapter->vfs_lock, flags);
+ kfree(adapter->vf_mac_list);
+ adapter->vf_mac_list = NULL;
+ kfree(adapter->vf_data);
+ adapter->vf_data = NULL;
+ adapter->vfs_allocated_count = 0;
++ spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ wr32(E1000_IOVCTL, E1000_IOVCTL_REUSE_VFQ);
+ wrfl();
+ msleep(100);
+@@ -3814,7 +3816,9 @@ static void igb_remove(struct pci_dev *pdev)
+ igb_release_hw_control(adapter);
+
+ #ifdef CONFIG_PCI_IOV
++ rtnl_lock();
+ igb_disable_sriov(pdev);
++ rtnl_unlock();
+ #endif
+
+ unregister_netdev(netdev);
+@@ -3974,6 +3978,9 @@ static int igb_sw_init(struct igb_adapter *adapter)
+
+ spin_lock_init(&adapter->nfc_lock);
+ spin_lock_init(&adapter->stats64_lock);
++
++ /* init spinlock to protect VF resources from concurrent access */
++ spin_lock_init(&adapter->vfs_lock);
+ #ifdef CONFIG_PCI_IOV
+ switch (hw->mac.type) {
+ case e1000_82576:
+@@ -7924,8 +7931,10 @@ unlock:
+ static void igb_msg_task(struct igb_adapter *adapter)
+ {
+ struct e1000_hw *hw = &adapter->hw;
++ unsigned long flags;
+ u32 vf;
+
++ spin_lock_irqsave(&adapter->vfs_lock, flags);
+ for (vf = 0; vf < adapter->vfs_allocated_count; vf++) {
+ /* process any reset requests */
+ if (!igb_check_for_rst(hw, vf))
+@@ -7939,6 +7948,7 @@ static void igb_msg_task(struct igb_adapter *adapter)
+ if (!igb_check_for_ack(hw, vf))
+ igb_rcv_ack_from_vf(adapter, vf);
+ }
++ spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 54e1b27a7dfec..1484d332e5949 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2564,6 +2564,12 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
+ rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
+ rvu_reset_lmt_map_tbl(rvu, pcifunc);
+ rvu_detach_rsrcs(rvu, NULL, pcifunc);
++ /* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
++ * entries, check and free the MCAM entries explicitly to avoid a leak.
++ * Since the LF is detached, use -1 as the LF number.
++ */
++ rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
++
+ mutex_unlock(&rvu->flr_lock);
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 3a31fb8cc1554..13f8dfaa2ecb1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -1096,6 +1096,9 @@ static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
+
+ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+ {
++ if (nixlf < 0)
++ return;
++
+ npc_enadis_default_entries(rvu, pcifunc, nixlf, false);
+
+ /* Delete multicast and promisc MCAM entries */
+@@ -1107,6 +1110,9 @@ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+
+ void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+ {
++ if (nixlf < 0)
++ return;
++
+ /* Enables only broadcast match entry. Promisc/Allmulti are enabled
+ * in set_rx_mode mbox handler.
+ */
+@@ -1650,7 +1656,7 @@ static void npc_load_kpu_profile(struct rvu *rvu)
+ * Firmware database method.
+ * Default KPU profile.
+ */
+- if (!request_firmware(&fw, kpu_profile, rvu->dev)) {
++ if (!request_firmware_direct(&fw, kpu_profile, rvu->dev)) {
+ dev_info(rvu->dev, "Loading KPU profile from firmware: %s\n",
+ kpu_profile);
+ rvu->kpu_fwdata = kzalloc(fw->size, GFP_KERNEL);
+@@ -1915,6 +1921,7 @@ static void rvu_npc_hw_init(struct rvu *rvu, int blkaddr)
+
+ static void rvu_npc_setup_interfaces(struct rvu *rvu, int blkaddr)
+ {
++ struct npc_mcam_kex *mkex = rvu->kpu.mkex;
+ struct npc_mcam *mcam = &rvu->hw->mcam;
+ struct rvu_hwinfo *hw = rvu->hw;
+ u64 nibble_ena, rx_kex, tx_kex;
+@@ -1927,15 +1934,15 @@ static void rvu_npc_setup_interfaces(struct rvu *rvu, int blkaddr)
+ mcam->counters.max--;
+ mcam->rx_miss_act_cntr = mcam->counters.max;
+
+- rx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_RX];
+- tx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_TX];
++ rx_kex = mkex->keyx_cfg[NIX_INTF_RX];
++ tx_kex = mkex->keyx_cfg[NIX_INTF_TX];
+ nibble_ena = FIELD_GET(NPC_PARSE_NIBBLE, rx_kex);
+
+ nibble_ena = rvu_npc_get_tx_nibble_cfg(rvu, nibble_ena);
+ if (nibble_ena) {
+ tx_kex &= ~NPC_PARSE_NIBBLE;
+ tx_kex |= FIELD_PREP(NPC_PARSE_NIBBLE, nibble_ena);
+- npc_mkex_default.keyx_cfg[NIX_INTF_TX] = tx_kex;
++ mkex->keyx_cfg[NIX_INTF_TX] = tx_kex;
+ }
+
+ /* Configure RX interfaces */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+index 19c53e591d0da..64654bd118845 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+@@ -445,7 +445,8 @@ do { \
+ NPC_SCAN_HDR(NPC_VLAN_TAG1, NPC_LID_LB, NPC_LT_LB_CTAG, 2, 2);
+ NPC_SCAN_HDR(NPC_VLAN_TAG2, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 2, 2);
+ NPC_SCAN_HDR(NPC_DMAC, NPC_LID_LA, la_ltype, la_start, 6);
+- NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start, 6);
++ /* SMAC follows the DMAC (which is 6 bytes) */
++ NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6);
+ /* PF_FUNC is 2 bytes at 0th byte of NPC_LT_LA_IH_NIX_ETHER */
+ NPC_SCAN_HDR(NPC_PF_FUNC, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 0, 2);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index fb8db5888d2f7..d686c7b6252f4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -632,6 +632,12 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ req->num_regs++;
+ req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq);
+ req->regval[1] = dwrr_val;
++ if (lvl == hw->txschq_link_cfg_lvl) {
++ req->num_regs++;
++ req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
++ /* Enable this queue and backpressure */
++ req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
++ }
+ } else if (lvl == NIX_TXSCH_LVL_TL2) {
+ parent = hw->txschq_list[NIX_TXSCH_LVL_TL1][0];
+ req->reg[0] = NIX_AF_TL2X_PARENT(schq);
+@@ -641,11 +647,12 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq);
+ req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | dwrr_val;
+
+- req->num_regs++;
+- req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
+- /* Enable this queue and backpressure */
+- req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
+-
++ if (lvl == hw->txschq_link_cfg_lvl) {
++ req->num_regs++;
++ req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
++ /* Enable this queue and backpressure */
++ req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
++ }
+ } else if (lvl == NIX_TXSCH_LVL_TL1) {
+ /* Default config for TL1.
+ * For VF this is always ignored.
+@@ -1591,6 +1598,8 @@ void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf,
+ for (schq = 0; schq < rsp->schq[lvl]; schq++)
+ pf->hw.txschq_list[lvl][schq] =
+ rsp->schq_list[lvl][schq];
++
++ pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
+ }
+ EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index ce2766317c0b8..f9c0d2f08e872 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -195,6 +195,7 @@ struct otx2_hw {
+ u16 sqb_size;
+
+ /* NIX */
++ u8 txschq_link_cfg_lvl;
+ u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ u16 matchall_ipolicer;
+ u32 dwrr_mtu;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index d87bbb0be7c86..e6f64d890fb34 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -506,7 +506,7 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ int err;
+
+ attr.ttl = tun_key->ttl;
+- attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
++ attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
+ attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
+ attr.fl.fl6.saddr = tun_key->u.ipv6.src;
+
+@@ -620,7 +620,7 @@ int mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv,
+
+ attr.ttl = tun_key->ttl;
+
+- attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
++ attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
+ attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
+ attr.fl.fl6.saddr = tun_key->u.ipv6.src;
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index cafd206e8d7e9..49a9dca93529a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -1822,9 +1822,9 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u16 local_port)
+
+ cancel_delayed_work_sync(&mlxsw_sp_port->periodic_hw_stats.update_dw);
+ cancel_delayed_work_sync(&mlxsw_sp_port->ptp.shaper_dw);
+- mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
+ mlxsw_core_port_clear(mlxsw_sp->core, local_port, mlxsw_sp);
+ unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
++ mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
+ mlxsw_sp_port_vlan_classification_set(mlxsw_sp_port, true, true);
+ mlxsw_sp->ports[local_port] = NULL;
+ mlxsw_sp_port_vlan_flush(mlxsw_sp_port, true);
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index a3214a762e4b3..f11f1cb92025f 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -77,7 +77,7 @@ static void moxart_mac_free_memory(struct net_device *ndev)
+ int i;
+
+ for (i = 0; i < RX_DESC_NUM; i++)
+- dma_unmap_single(&ndev->dev, priv->rx_mapping[i],
++ dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+
+ if (priv->tx_desc_base)
+@@ -147,11 +147,11 @@ static void moxart_mac_setup_desc_ring(struct net_device *ndev)
+ desc + RX_REG_OFFSET_DESC1);
+
+ priv->rx_buf[i] = priv->rx_buf_base + priv->rx_buf_size * i;
+- priv->rx_mapping[i] = dma_map_single(&ndev->dev,
++ priv->rx_mapping[i] = dma_map_single(&priv->pdev->dev,
+ priv->rx_buf[i],
+ priv->rx_buf_size,
+ DMA_FROM_DEVICE);
+- if (dma_mapping_error(&ndev->dev, priv->rx_mapping[i]))
++ if (dma_mapping_error(&priv->pdev->dev, priv->rx_mapping[i]))
+ netdev_err(ndev, "DMA mapping error\n");
+
+ moxart_desc_write(priv->rx_mapping[i],
+@@ -240,7 +240,7 @@ static int moxart_rx_poll(struct napi_struct *napi, int budget)
+ if (len > RX_BUF_SIZE)
+ len = RX_BUF_SIZE;
+
+- dma_sync_single_for_cpu(&ndev->dev,
++ dma_sync_single_for_cpu(&priv->pdev->dev,
+ priv->rx_mapping[rx_head],
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+ skb = netdev_alloc_skb_ip_align(ndev, len);
+@@ -294,7 +294,7 @@ static void moxart_tx_finished(struct net_device *ndev)
+ unsigned int tx_tail = priv->tx_tail;
+
+ while (tx_tail != tx_head) {
+- dma_unmap_single(&ndev->dev, priv->tx_mapping[tx_tail],
++ dma_unmap_single(&priv->pdev->dev, priv->tx_mapping[tx_tail],
+ priv->tx_len[tx_tail], DMA_TO_DEVICE);
+
+ ndev->stats.tx_packets++;
+@@ -358,9 +358,9 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+
+ len = skb->len > TX_BUF_SIZE ? TX_BUF_SIZE : skb->len;
+
+- priv->tx_mapping[tx_head] = dma_map_single(&ndev->dev, skb->data,
++ priv->tx_mapping[tx_head] = dma_map_single(&priv->pdev->dev, skb->data,
+ len, DMA_TO_DEVICE);
+- if (dma_mapping_error(&ndev->dev, priv->tx_mapping[tx_head])) {
++ if (dma_mapping_error(&priv->pdev->dev, priv->tx_mapping[tx_head])) {
+ netdev_err(ndev, "DMA mapping error\n");
+ goto out_unlock;
+ }
+@@ -379,7 +379,7 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+ len = ETH_ZLEN;
+ }
+
+- dma_sync_single_for_device(&ndev->dev, priv->tx_mapping[tx_head],
++ dma_sync_single_for_device(&priv->pdev->dev, priv->tx_mapping[tx_head],
+ priv->tx_buf_size, DMA_TO_DEVICE);
+
+ txdes1 = TX_DESC1_LTS | TX_DESC1_FTS | (len & TX_DESC1_BUF_SIZE_MASK);
+@@ -493,7 +493,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ priv->tx_buf_size = TX_BUF_SIZE;
+ priv->rx_buf_size = RX_BUF_SIZE;
+
+- priv->tx_desc_base = dma_alloc_coherent(&pdev->dev, TX_REG_DESC_SIZE *
++ priv->tx_desc_base = dma_alloc_coherent(p_dev, TX_REG_DESC_SIZE *
+ TX_DESC_NUM, &priv->tx_base,
+ GFP_DMA | GFP_KERNEL);
+ if (!priv->tx_desc_base) {
+@@ -501,7 +501,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ goto init_fail;
+ }
+
+- priv->rx_desc_base = dma_alloc_coherent(&pdev->dev, RX_REG_DESC_SIZE *
++ priv->rx_desc_base = dma_alloc_coherent(p_dev, RX_REG_DESC_SIZE *
+ RX_DESC_NUM, &priv->rx_base,
+ GFP_DMA | GFP_KERNEL);
+ if (!priv->rx_desc_base) {
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index d4649e4ee0e7f..68991b021c560 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1860,16 +1860,20 @@ void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
+ if (sset != ETH_SS_STATS)
+ return;
+
+- for (i = 0; i < ocelot->num_stats; i++)
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
+ memcpy(data + i * ETH_GSTRING_LEN, ocelot->stats_layout[i].name,
+ ETH_GSTRING_LEN);
++ }
+ }
+ EXPORT_SYMBOL(ocelot_get_strings);
+
+ /* Caller must hold &ocelot->stats_lock */
+ static int ocelot_port_update_stats(struct ocelot *ocelot, int port)
+ {
+- unsigned int idx = port * ocelot->num_stats;
++ unsigned int idx = port * OCELOT_NUM_STATS;
+ struct ocelot_stats_region *region;
+ int err, j;
+
+@@ -1906,13 +1910,13 @@ static void ocelot_check_stats_work(struct work_struct *work)
+ stats_work);
+ int i, err;
+
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+ for (i = 0; i < ocelot->num_phys_ports; i++) {
+ err = ocelot_port_update_stats(ocelot, i);
+ if (err)
+ break;
+ }
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+
+ if (err)
+ dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
+@@ -1925,16 +1929,22 @@ void ocelot_get_ethtool_stats(struct ocelot *ocelot, int port, u64 *data)
+ {
+ int i, err;
+
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+
+ /* check and update now */
+ err = ocelot_port_update_stats(ocelot, port);
+
+- /* Copy all counters */
+- for (i = 0; i < ocelot->num_stats; i++)
+- *data++ = ocelot->stats[port * ocelot->num_stats + i];
++ /* Copy all supported counters */
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ int index = port * OCELOT_NUM_STATS + i;
++
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
++ *data++ = ocelot->stats[index];
++ }
+
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+
+ if (err)
+ dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
+@@ -1943,10 +1953,16 @@ EXPORT_SYMBOL(ocelot_get_ethtool_stats);
+
+ int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset)
+ {
++ int i, num_stats = 0;
++
+ if (sset != ETH_SS_STATS)
+ return -EOPNOTSUPP;
+
+- return ocelot->num_stats;
++ for (i = 0; i < OCELOT_NUM_STATS; i++)
++ if (ocelot->stats_layout[i].name[0] != '\0')
++ num_stats++;
++
++ return num_stats;
+ }
+ EXPORT_SYMBOL(ocelot_get_sset_count);
+
+@@ -1958,7 +1974,10 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
+
+ INIT_LIST_HEAD(&ocelot->stats_regions);
+
+- for (i = 0; i < ocelot->num_stats; i++) {
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
+ if (region && ocelot->stats_layout[i].offset == last + 1) {
+ region->count++;
+ } else {
+@@ -3340,7 +3359,6 @@ static void ocelot_detect_features(struct ocelot *ocelot)
+
+ int ocelot_init(struct ocelot *ocelot)
+ {
+- const struct ocelot_stat_layout *stat;
+ char queue_name[32];
+ int i, ret;
+ u32 port;
+@@ -3353,17 +3371,13 @@ int ocelot_init(struct ocelot *ocelot)
+ }
+ }
+
+- ocelot->num_stats = 0;
+- for_each_stat(ocelot, stat)
+- ocelot->num_stats++;
+-
+ ocelot->stats = devm_kcalloc(ocelot->dev,
+- ocelot->num_phys_ports * ocelot->num_stats,
++ ocelot->num_phys_ports * OCELOT_NUM_STATS,
+ sizeof(u64), GFP_KERNEL);
+ if (!ocelot->stats)
+ return -ENOMEM;
+
+- mutex_init(&ocelot->stats_lock);
++ spin_lock_init(&ocelot->stats_lock);
+ mutex_init(&ocelot->ptp_lock);
+ mutex_init(&ocelot->mact_lock);
+ mutex_init(&ocelot->fwd_domain_lock);
+@@ -3511,7 +3525,6 @@ void ocelot_deinit(struct ocelot *ocelot)
+ cancel_delayed_work(&ocelot->stats_work);
+ destroy_workqueue(ocelot->stats_queue);
+ destroy_workqueue(ocelot->owq);
+- mutex_destroy(&ocelot->stats_lock);
+ }
+ EXPORT_SYMBOL(ocelot_deinit);
+
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index 5e6136e80282b..330d30841cdc4 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -725,37 +725,42 @@ static void ocelot_get_stats64(struct net_device *dev,
+ struct ocelot_port_private *priv = netdev_priv(dev);
+ struct ocelot *ocelot = priv->port.ocelot;
+ int port = priv->port.index;
++ u64 *s;
+
+- /* Configure the port to read the stats from */
+- ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port),
+- SYS_STAT_CFG);
++ spin_lock(&ocelot->stats_lock);
++
++ s = &ocelot->stats[port * OCELOT_NUM_STATS];
+
+ /* Get Rx stats */
+- stats->rx_bytes = ocelot_read(ocelot, SYS_COUNT_RX_OCTETS);
+- stats->rx_packets = ocelot_read(ocelot, SYS_COUNT_RX_SHORTS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_FRAGMENTS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_JABBERS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_LONGS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_64) +
+- ocelot_read(ocelot, SYS_COUNT_RX_65_127) +
+- ocelot_read(ocelot, SYS_COUNT_RX_128_255) +
+- ocelot_read(ocelot, SYS_COUNT_RX_256_1023) +
+- ocelot_read(ocelot, SYS_COUNT_RX_1024_1526) +
+- ocelot_read(ocelot, SYS_COUNT_RX_1527_MAX);
+- stats->multicast = ocelot_read(ocelot, SYS_COUNT_RX_MULTICAST);
++ stats->rx_bytes = s[OCELOT_STAT_RX_OCTETS];
++ stats->rx_packets = s[OCELOT_STAT_RX_SHORTS] +
++ s[OCELOT_STAT_RX_FRAGMENTS] +
++ s[OCELOT_STAT_RX_JABBERS] +
++ s[OCELOT_STAT_RX_LONGS] +
++ s[OCELOT_STAT_RX_64] +
++ s[OCELOT_STAT_RX_65_127] +
++ s[OCELOT_STAT_RX_128_255] +
++ s[OCELOT_STAT_RX_256_511] +
++ s[OCELOT_STAT_RX_512_1023] +
++ s[OCELOT_STAT_RX_1024_1526] +
++ s[OCELOT_STAT_RX_1527_MAX];
++ stats->multicast = s[OCELOT_STAT_RX_MULTICAST];
+ stats->rx_dropped = dev->stats.rx_dropped;
+
+ /* Get Tx stats */
+- stats->tx_bytes = ocelot_read(ocelot, SYS_COUNT_TX_OCTETS);
+- stats->tx_packets = ocelot_read(ocelot, SYS_COUNT_TX_64) +
+- ocelot_read(ocelot, SYS_COUNT_TX_65_127) +
+- ocelot_read(ocelot, SYS_COUNT_TX_128_511) +
+- ocelot_read(ocelot, SYS_COUNT_TX_512_1023) +
+- ocelot_read(ocelot, SYS_COUNT_TX_1024_1526) +
+- ocelot_read(ocelot, SYS_COUNT_TX_1527_MAX);
+- stats->tx_dropped = ocelot_read(ocelot, SYS_COUNT_TX_DROPS) +
+- ocelot_read(ocelot, SYS_COUNT_TX_AGING);
+- stats->collisions = ocelot_read(ocelot, SYS_COUNT_TX_COLLISION);
++ stats->tx_bytes = s[OCELOT_STAT_TX_OCTETS];
++ stats->tx_packets = s[OCELOT_STAT_TX_64] +
++ s[OCELOT_STAT_TX_65_127] +
++ s[OCELOT_STAT_TX_128_255] +
++ s[OCELOT_STAT_TX_256_511] +
++ s[OCELOT_STAT_TX_512_1023] +
++ s[OCELOT_STAT_TX_1024_1526] +
++ s[OCELOT_STAT_TX_1527_MAX];
++ stats->tx_dropped = s[OCELOT_STAT_TX_DROPS] +
++ s[OCELOT_STAT_TX_AGED];
++ stats->collisions = s[OCELOT_STAT_TX_COLLISION];
++
++ spin_unlock(&ocelot->stats_lock);
+ }
+
+ static int ocelot_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+index 961f803aca192..9ff9105600438 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -96,101 +96,379 @@ static const struct reg_field ocelot_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4),
+ };
+
+-static const struct ocelot_stat_layout ocelot_stats_layout[] = {
+- { .name = "rx_octets", .offset = 0x00, },
+- { .name = "rx_unicast", .offset = 0x01, },
+- { .name = "rx_multicast", .offset = 0x02, },
+- { .name = "rx_broadcast", .offset = 0x03, },
+- { .name = "rx_shorts", .offset = 0x04, },
+- { .name = "rx_fragments", .offset = 0x05, },
+- { .name = "rx_jabbers", .offset = 0x06, },
+- { .name = "rx_crc_align_errs", .offset = 0x07, },
+- { .name = "rx_sym_errs", .offset = 0x08, },
+- { .name = "rx_frames_below_65_octets", .offset = 0x09, },
+- { .name = "rx_frames_65_to_127_octets", .offset = 0x0A, },
+- { .name = "rx_frames_128_to_255_octets", .offset = 0x0B, },
+- { .name = "rx_frames_256_to_511_octets", .offset = 0x0C, },
+- { .name = "rx_frames_512_to_1023_octets", .offset = 0x0D, },
+- { .name = "rx_frames_1024_to_1526_octets", .offset = 0x0E, },
+- { .name = "rx_frames_over_1526_octets", .offset = 0x0F, },
+- { .name = "rx_pause", .offset = 0x10, },
+- { .name = "rx_control", .offset = 0x11, },
+- { .name = "rx_longs", .offset = 0x12, },
+- { .name = "rx_classified_drops", .offset = 0x13, },
+- { .name = "rx_red_prio_0", .offset = 0x14, },
+- { .name = "rx_red_prio_1", .offset = 0x15, },
+- { .name = "rx_red_prio_2", .offset = 0x16, },
+- { .name = "rx_red_prio_3", .offset = 0x17, },
+- { .name = "rx_red_prio_4", .offset = 0x18, },
+- { .name = "rx_red_prio_5", .offset = 0x19, },
+- { .name = "rx_red_prio_6", .offset = 0x1A, },
+- { .name = "rx_red_prio_7", .offset = 0x1B, },
+- { .name = "rx_yellow_prio_0", .offset = 0x1C, },
+- { .name = "rx_yellow_prio_1", .offset = 0x1D, },
+- { .name = "rx_yellow_prio_2", .offset = 0x1E, },
+- { .name = "rx_yellow_prio_3", .offset = 0x1F, },
+- { .name = "rx_yellow_prio_4", .offset = 0x20, },
+- { .name = "rx_yellow_prio_5", .offset = 0x21, },
+- { .name = "rx_yellow_prio_6", .offset = 0x22, },
+- { .name = "rx_yellow_prio_7", .offset = 0x23, },
+- { .name = "rx_green_prio_0", .offset = 0x24, },
+- { .name = "rx_green_prio_1", .offset = 0x25, },
+- { .name = "rx_green_prio_2", .offset = 0x26, },
+- { .name = "rx_green_prio_3", .offset = 0x27, },
+- { .name = "rx_green_prio_4", .offset = 0x28, },
+- { .name = "rx_green_prio_5", .offset = 0x29, },
+- { .name = "rx_green_prio_6", .offset = 0x2A, },
+- { .name = "rx_green_prio_7", .offset = 0x2B, },
+- { .name = "tx_octets", .offset = 0x40, },
+- { .name = "tx_unicast", .offset = 0x41, },
+- { .name = "tx_multicast", .offset = 0x42, },
+- { .name = "tx_broadcast", .offset = 0x43, },
+- { .name = "tx_collision", .offset = 0x44, },
+- { .name = "tx_drops", .offset = 0x45, },
+- { .name = "tx_pause", .offset = 0x46, },
+- { .name = "tx_frames_below_65_octets", .offset = 0x47, },
+- { .name = "tx_frames_65_to_127_octets", .offset = 0x48, },
+- { .name = "tx_frames_128_255_octets", .offset = 0x49, },
+- { .name = "tx_frames_256_511_octets", .offset = 0x4A, },
+- { .name = "tx_frames_512_1023_octets", .offset = 0x4B, },
+- { .name = "tx_frames_1024_1526_octets", .offset = 0x4C, },
+- { .name = "tx_frames_over_1526_octets", .offset = 0x4D, },
+- { .name = "tx_yellow_prio_0", .offset = 0x4E, },
+- { .name = "tx_yellow_prio_1", .offset = 0x4F, },
+- { .name = "tx_yellow_prio_2", .offset = 0x50, },
+- { .name = "tx_yellow_prio_3", .offset = 0x51, },
+- { .name = "tx_yellow_prio_4", .offset = 0x52, },
+- { .name = "tx_yellow_prio_5", .offset = 0x53, },
+- { .name = "tx_yellow_prio_6", .offset = 0x54, },
+- { .name = "tx_yellow_prio_7", .offset = 0x55, },
+- { .name = "tx_green_prio_0", .offset = 0x56, },
+- { .name = "tx_green_prio_1", .offset = 0x57, },
+- { .name = "tx_green_prio_2", .offset = 0x58, },
+- { .name = "tx_green_prio_3", .offset = 0x59, },
+- { .name = "tx_green_prio_4", .offset = 0x5A, },
+- { .name = "tx_green_prio_5", .offset = 0x5B, },
+- { .name = "tx_green_prio_6", .offset = 0x5C, },
+- { .name = "tx_green_prio_7", .offset = 0x5D, },
+- { .name = "tx_aged", .offset = 0x5E, },
+- { .name = "drop_local", .offset = 0x80, },
+- { .name = "drop_tail", .offset = 0x81, },
+- { .name = "drop_yellow_prio_0", .offset = 0x82, },
+- { .name = "drop_yellow_prio_1", .offset = 0x83, },
+- { .name = "drop_yellow_prio_2", .offset = 0x84, },
+- { .name = "drop_yellow_prio_3", .offset = 0x85, },
+- { .name = "drop_yellow_prio_4", .offset = 0x86, },
+- { .name = "drop_yellow_prio_5", .offset = 0x87, },
+- { .name = "drop_yellow_prio_6", .offset = 0x88, },
+- { .name = "drop_yellow_prio_7", .offset = 0x89, },
+- { .name = "drop_green_prio_0", .offset = 0x8A, },
+- { .name = "drop_green_prio_1", .offset = 0x8B, },
+- { .name = "drop_green_prio_2", .offset = 0x8C, },
+- { .name = "drop_green_prio_3", .offset = 0x8D, },
+- { .name = "drop_green_prio_4", .offset = 0x8E, },
+- { .name = "drop_green_prio_5", .offset = 0x8F, },
+- { .name = "drop_green_prio_6", .offset = 0x90, },
+- { .name = "drop_green_prio_7", .offset = 0x91, },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout ocelot_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x40,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x41,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x42,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x43,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x44,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x45,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x46,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x47,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x48,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x49,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x4A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x4B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x4C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x4D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x4E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x4F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x50,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x51,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x52,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x53,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x54,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x55,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x56,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x57,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x58,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x59,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x5A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x5B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x5C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x5D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x5E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x91,
++ },
+ };
+
+ static void ocelot_pll5_init(struct ocelot *ocelot)
+diff --git a/drivers/net/ethernet/mscc/vsc7514_regs.c b/drivers/net/ethernet/mscc/vsc7514_regs.c
+index c2af4eb8ca5d3..8ff935f7f150c 100644
+--- a/drivers/net/ethernet/mscc/vsc7514_regs.c
++++ b/drivers/net/ethernet/mscc/vsc7514_regs.c
+@@ -180,13 +180,14 @@ const u32 vsc7514_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
+- REG(SYS_COUNT_RX_PAUSE, 0x00003c),
+- REG(SYS_COUNT_RX_CONTROL, 0x000040),
+- REG(SYS_COUNT_RX_LONGS, 0x000044),
+- REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x000048),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
++ REG(SYS_COUNT_RX_PAUSE, 0x000040),
++ REG(SYS_COUNT_RX_CONTROL, 0x000044),
++ REG(SYS_COUNT_RX_LONGS, 0x000048),
++ REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x00004c),
+ REG(SYS_COUNT_TX_OCTETS, 0x000100),
+ REG(SYS_COUNT_TX_UNICAST, 0x000104),
+ REG(SYS_COUNT_TX_MULTICAST, 0x000108),
+@@ -196,11 +197,12 @@ const u32 vsc7514_sys_regmap[] = {
+ REG(SYS_COUNT_TX_PAUSE, 0x000118),
+ REG(SYS_COUNT_TX_64, 0x00011c),
+ REG(SYS_COUNT_TX_65_127, 0x000120),
+- REG(SYS_COUNT_TX_128_511, 0x000124),
+- REG(SYS_COUNT_TX_512_1023, 0x000128),
+- REG(SYS_COUNT_TX_1024_1526, 0x00012c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000130),
+- REG(SYS_COUNT_TX_AGING, 0x000170),
++ REG(SYS_COUNT_TX_128_255, 0x000124),
++ REG(SYS_COUNT_TX_256_511, 0x000128),
++ REG(SYS_COUNT_TX_512_1023, 0x00012c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000130),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000134),
++ REG(SYS_COUNT_TX_AGING, 0x000178),
+ REG(SYS_RESET_CFG, 0x000508),
+ REG(SYS_CMID, 0x00050c),
+ REG(SYS_VLAN_ETYPE_CFG, 0x000510),
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index df0afd271a21e..e6ee45afd80c7 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -1230,6 +1230,8 @@ nfp_port_get_module_info(struct net_device *netdev,
+ u8 data;
+
+ port = nfp_port_from_netdev(netdev);
++ /* update port state to get latest interface */
++ set_bit(NFP_PORT_CHANGED, &port->flags);
+ eth_port = nfp_port_get_eth_port(port);
+ if (!eth_port)
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 3fe720c5dc9fc..aec0d973ced0e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -1104,6 +1104,7 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
+
+ stmmac_dvr_remove(&pdev->dev);
+
++ clk_disable_unprepare(priv->plat->stmmac_clk);
+ clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+
+ pcim_iounmap_regions(pdev, BIT(0));
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 018d365f9debf..7962c37b3f14b 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -797,7 +797,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ struct geneve_sock *gs4,
+ struct flowi4 *fl4,
+ const struct ip_tunnel_info *info,
+- __be16 dport, __be16 sport)
++ __be16 dport, __be16 sport,
++ __u8 *full_tos)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -823,6 +824,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ use_cache = false;
+ }
+ fl4->flowi4_tos = RT_TOS(tos);
++ if (full_tos)
++ *full_tos = tos;
+
+ dst_cache = (struct dst_cache *)&info->dst_cache;
+ if (use_cache) {
+@@ -876,8 +879,7 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ use_cache = false;
+ }
+
+- fl6->flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+- info->key.label);
++ fl6->flowlabel = ip6_make_flowinfo(prio, info->key.label);
+ dst_cache = (struct dst_cache *)&info->dst_cache;
+ if (use_cache) {
+ dst = dst_cache_get_ip6(dst_cache, &fl6->saddr);
+@@ -911,6 +913,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ const struct ip_tunnel_key *key = &info->key;
+ struct rtable *rt;
+ struct flowi4 fl4;
++ __u8 full_tos;
+ __u8 tos, ttl;
+ __be16 df = 0;
+ __be16 sport;
+@@ -921,7 +924,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+- geneve->cfg.info.key.tp_dst, sport);
++ geneve->cfg.info.key.tp_dst, sport, &full_tos);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+@@ -965,7 +968,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+
+ df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
+ } else {
+- tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb);
++ tos = ip_tunnel_ecn_encap(full_tos, ip_hdr(skb), skb);
+ if (geneve->cfg.ttl_inherit)
+ ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb);
+ else
+@@ -1149,7 +1152,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ 1, USHRT_MAX, true);
+
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+- geneve->cfg.info.key.tp_dst, sport);
++ geneve->cfg.info.key.tp_dst, sport, NULL);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 29b1df03f3e8b..a87a4b3ffce4e 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -190,44 +190,42 @@ EXPORT_SYMBOL_GPL(genphy_c45_pma_setup_forced);
+ */
+ static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev)
+ {
++ u16 adv_l_mask, adv_l = 0;
++ u16 adv_m_mask, adv_m = 0;
+ int changed = 0;
+- u16 adv_l = 0;
+- u16 adv_m = 0;
+ int ret;
+
++ adv_l_mask = MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP |
++ MDIO_AN_T1_ADV_L_PAUSE_ASYM;
++ adv_m_mask = MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L;
++
+ switch (phydev->master_slave_set) {
+ case MASTER_SLAVE_CFG_MASTER_FORCE:
++ adv_m |= MDIO_AN_T1_ADV_M_MST;
++ fallthrough;
+ case MASTER_SLAVE_CFG_SLAVE_FORCE:
+ adv_l |= MDIO_AN_T1_ADV_L_FORCE_MS;
+ break;
+ case MASTER_SLAVE_CFG_MASTER_PREFERRED:
++ adv_m |= MDIO_AN_T1_ADV_M_MST;
++ fallthrough;
+ case MASTER_SLAVE_CFG_SLAVE_PREFERRED:
+ break;
+ case MASTER_SLAVE_CFG_UNKNOWN:
+ case MASTER_SLAVE_CFG_UNSUPPORTED:
+- return 0;
++ /* if master/slave role is not specified, do not overwrite it */
++ adv_l_mask &= ~MDIO_AN_T1_ADV_L_FORCE_MS;
++ adv_m_mask &= ~MDIO_AN_T1_ADV_M_MST;
++ break;
+ default:
+ phydev_warn(phydev, "Unsupported Master/Slave mode\n");
+ return -EOPNOTSUPP;
+ }
+
+- switch (phydev->master_slave_set) {
+- case MASTER_SLAVE_CFG_MASTER_FORCE:
+- case MASTER_SLAVE_CFG_MASTER_PREFERRED:
+- adv_m |= MDIO_AN_T1_ADV_M_MST;
+- break;
+- case MASTER_SLAVE_CFG_SLAVE_FORCE:
+- case MASTER_SLAVE_CFG_SLAVE_PREFERRED:
+- break;
+- default:
+- break;
+- }
+-
+ adv_l |= linkmode_adv_to_mii_t1_adv_l_t(phydev->advertising);
+
+ ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_L,
+- (MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP
+- | MDIO_AN_T1_ADV_L_PAUSE_ASYM), adv_l);
++ adv_l_mask, adv_l);
+ if (ret < 0)
+ return ret;
+ if (ret > 0)
+@@ -236,7 +234,7 @@ static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev)
+ adv_m |= linkmode_adv_to_mii_t1_adv_m_t(phydev->advertising);
+
+ ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_M,
+- MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L, adv_m);
++ adv_m_mask, adv_m);
+ if (ret < 0)
+ return ret;
+ if (ret > 0)
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46acddd865a78..608de5a94165f 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -316,6 +316,12 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+
+ phydev->suspended_by_mdio_bus = 0;
+
++ /* If we managed to get here with the PHY state machine in a state other
++ * than PHY_HALTED this is an indication that something went wrong and
++ * we should most likely be using MAC managed PM and we are not.
++ */
++ WARN_ON(phydev->state != PHY_HALTED && !phydev->mac_managed_pm);
++
+ ret = phy_init_hw(phydev);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
+index dafd3e9ebbf87..c8791e9b451d2 100644
+--- a/drivers/net/plip/plip.c
++++ b/drivers/net/plip/plip.c
+@@ -1111,7 +1111,7 @@ plip_open(struct net_device *dev)
+ /* Any address will do - we take the first. We already
+ have the first two bytes filled with 0xfc, from
+ plip_init_dev(). */
+- const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list);
++ const struct in_ifaddr *ifa = rtnl_dereference(in_dev->ifa_list);
+ if (ifa != NULL) {
+ dev_addr_mod(dev, 2, &ifa->ifa_local, 4);
+ }
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index c3d42062559dd..9e75ed3f08ce5 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -716,10 +716,20 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ skb_reset_mac_header(skb);
+ skb->protocol = eth_hdr(skb)->h_proto;
+
++ rcu_read_lock();
++ tap = rcu_dereference(q->tap);
++ if (!tap) {
++ kfree_skb(skb);
++ rcu_read_unlock();
++ return total_len;
++ }
++ skb->dev = tap->dev;
++
+ if (vnet_hdr_len) {
+ err = virtio_net_hdr_to_skb(skb, &vnet_hdr,
+ tap_is_little_endian(q));
+ if (err) {
++ rcu_read_unlock();
+ drop_reason = SKB_DROP_REASON_DEV_HDR;
+ goto err_kfree;
+ }
+@@ -732,8 +742,6 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
+ skb_set_network_header(skb, depth);
+
+- rcu_read_lock();
+- tap = rcu_dereference(q->tap);
+ /* copy skb_ubuf_info for callback when skb has no error */
+ if (zerocopy) {
+ skb_zcopy_init(skb, msg_control);
+@@ -742,14 +750,8 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ uarg->callback(NULL, uarg, false);
+ }
+
+- if (tap) {
+- skb->dev = tap->dev;
+- dev_queue_xmit(skb);
+- } else {
+- kfree_skb(skb);
+- }
++ dev_queue_xmit(skb);
+ rcu_read_unlock();
+-
+ return total_len;
+
+ err_kfree:
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ec8e1b3108c3a..d4e0a775b1ba7 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1057,8 +1057,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ case XDP_TX:
+ stats->xdp_tx++;
+ xdpf = xdp_convert_buff_to_frame(&xdp);
+- if (unlikely(!xdpf))
++ if (unlikely(!xdpf)) {
++ if (unlikely(xdp_page != page))
++ put_page(xdp_page);
+ goto err_xdp;
++ }
+ err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
+ if (unlikely(!err)) {
+ xdp_return_frame_rx_napi(xdpf);
+@@ -1196,7 +1199,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
+ if (!hdr_hash || !skb)
+ return;
+
+- switch ((int)hdr_hash->hash_report) {
++ switch (__le16_to_cpu(hdr_hash->hash_report)) {
+ case VIRTIO_NET_HASH_REPORT_TCPv4:
+ case VIRTIO_NET_HASH_REPORT_UDPv4:
+ case VIRTIO_NET_HASH_REPORT_TCPv6:
+@@ -1214,7 +1217,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
+ default:
+ rss_hash_type = PKT_HASH_TYPE_NONE;
+ }
+- skb_set_hash(skb, (unsigned int)hdr_hash->hash_value, rss_hash_type);
++ skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
+ }
+
+ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 6991bf7c1cf03..52f58f2fc4627 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -2321,7 +2321,7 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ fl6.flowi6_oif = oif;
+ fl6.daddr = *daddr;
+ fl6.saddr = *saddr;
+- fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
++ fl6.flowlabel = ip6_make_flowinfo(tos, label);
+ fl6.flowi6_mark = skb->mark;
+ fl6.flowi6_proto = IPPROTO_UDP;
+ fl6.fl6_dport = dport;
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index b7bf3f863d79b..5ee0afa621a95 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -367,14 +367,16 @@ static ssize_t tool_fn_write(struct tool_ctx *tc,
+ u64 bits;
+ int n;
+
++ if (*offp)
++ return 0;
++
+ buf = kmalloc(size + 1, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+- ret = simple_write_to_buffer(buf, size, offp, ubuf, size);
+- if (ret < 0) {
++ if (copy_from_user(buf, ubuf, size)) {
+ kfree(buf);
+- return ret;
++ return -EFAULT;
+ }
+
+ buf[size] = 0;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 3c778bb0c2944..4aff83b1b0c05 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3880,6 +3880,7 @@ static int fc_parse_cgrpid(const char *buf, u64 *id)
+ static ssize_t fc_appid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+ {
++ size_t orig_count = count;
+ u64 cgrp_id;
+ int appid_len = 0;
+ int cgrpid_len = 0;
+@@ -3904,7 +3905,7 @@ static ssize_t fc_appid_store(struct device *dev,
+ ret = blkcg_set_fc_appid(app_id, cgrp_id, sizeof(app_id));
+ if (ret < 0)
+ return ret;
+- return count;
++ return orig_count;
+ }
+ static DEVICE_ATTR(appid_store, 0200, NULL, fc_appid_store);
+ #endif /* CONFIG_BLK_CGROUP_FC_APPID */
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 0a9542599ad1c..dc3b4dc8fe08b 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1839,7 +1839,8 @@ static int __init nvmet_tcp_init(void)
+ {
+ int ret;
+
+- nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq", WQ_HIGHPRI, 0);
++ nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
++ WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ if (!nvmet_tcp_wq)
+ return -ENOMEM;
+
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 4044ddcb02c60..84a8d402009cb 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -903,12 +903,6 @@ static int of_overlay_apply(struct overlay_changeset *ovcs)
+ {
+ int ret = 0, ret_revert, ret_tmp;
+
+- if (devicetree_corrupt()) {
+- pr_err("devicetree state suspect, refuse to apply overlay\n");
+- ret = -EBUSY;
+- goto out;
+- }
+-
+ ret = of_resolve_phandles(ovcs->overlay_root);
+ if (ret)
+ goto out;
+@@ -983,6 +977,11 @@ int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
+
+ *ret_ovcs_id = 0;
+
++ if (devicetree_corrupt()) {
++ pr_err("devicetree state suspect, refuse to apply overlay\n");
++ return -EBUSY;
++ }
++
+ if (overlay_fdt_size < sizeof(struct fdt_header) ||
+ fdt_check_header(overlay_fdt)) {
+ pr_err("Invalid overlay_fdt header\n");
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index ffec82c8a523f..62db476a86514 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -8,6 +8,7 @@
+ * Author: Hezi Shahmoon <hezi.shahmoon@marvell.com>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+@@ -857,14 +858,11 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+
+
+ switch (reg) {
+- case PCI_EXP_SLTCTL:
+- *value = PCI_EXP_SLTSTA_PDS << 16;
+- return PCI_BRIDGE_EMUL_HANDLED;
+-
+ /*
+- * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
+- * to be handled here, because their values are stored in emulated
+- * config space buffer, and we read them from there when needed.
++ * PCI_EXP_SLTCAP, PCI_EXP_SLTCTL, PCI_EXP_RTCTL and PCI_EXP_RTSTA are
++ * also supported, but do not need to be handled here, because their
++ * values are stored in emulated config space buffer, and we read them
++ * from there when needed.
+ */
+
+ case PCI_EXP_LNKCAP: {
+@@ -977,8 +975,25 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ /* Support interrupt A for MSI feature */
+ bridge->conf.intpin = PCI_INTERRUPT_INTA;
+
+- /* Aardvark HW provides PCIe Capability structure in version 2 */
+- bridge->pcie_conf.cap = cpu_to_le16(2);
++ /*
++ * Aardvark HW provides PCIe Capability structure in version 2 and
++ * indicate slot support, which is emulated.
++ */
++ bridge->pcie_conf.cap = cpu_to_le16(2 | PCI_EXP_FLAGS_SLOT);
++
++ /*
++ * Set Presence Detect State bit permanently since there is no support
++ * for unplugging the card nor detecting whether it is plugged. (If a
++ * platform exists in the future that supports it, via a GPIO for
++ * example, it should be implemented via this bit.)
++ *
++ * Set physical slot number to 1 since there is only one port and zero
++ * value is reserved for ports within the same silicon as Root Port
++ * which is not our case.
++ */
++ bridge->pcie_conf.slotcap = cpu_to_le32(FIELD_PREP(PCI_EXP_SLTCAP_PSN,
++ 1));
++ bridge->pcie_conf.slotsta = cpu_to_le16(PCI_EXP_SLTSTA_PDS);
+
+ /* Indicates supports for Completion Retry Status */
+ bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 41aeaa2351322..2e68f50bc7ae4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4924,6 +4924,9 @@ static const struct pci_dev_acs_enabled {
+ { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
+ /* Broadcom multi-function device */
+ { PCI_VENDOR_ID_BROADCOM, 0x16D7, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ /* Amazon Annapurna Labs */
+ { PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+diff --git a/drivers/phy/samsung/phy-exynos-pcie.c b/drivers/phy/samsung/phy-exynos-pcie.c
+index 578cfe07d07ab..53c9230c29078 100644
+--- a/drivers/phy/samsung/phy-exynos-pcie.c
++++ b/drivers/phy/samsung/phy-exynos-pcie.c
+@@ -51,6 +51,13 @@ static int exynos5433_pcie_phy_init(struct phy *phy)
+ {
+ struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+
++ regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
++ BIT(0), 1);
++ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
++ PCIE_APP_REQ_EXIT_L1_MODE, 0);
++ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
++ PCIE_REFCLK_GATING_EN, 0);
++
+ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_COMMON_RESET,
+ PCIE_PHY_RESET, 1);
+ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_MAC_RESET,
+@@ -109,20 +116,7 @@ static int exynos5433_pcie_phy_init(struct phy *phy)
+ return 0;
+ }
+
+-static int exynos5433_pcie_phy_power_on(struct phy *phy)
+-{
+- struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+-
+- regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
+- BIT(0), 1);
+- regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
+- PCIE_APP_REQ_EXIT_L1_MODE, 0);
+- regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
+- PCIE_REFCLK_GATING_EN, 0);
+- return 0;
+-}
+-
+-static int exynos5433_pcie_phy_power_off(struct phy *phy)
++static int exynos5433_pcie_phy_exit(struct phy *phy)
+ {
+ struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+
+@@ -135,8 +129,7 @@ static int exynos5433_pcie_phy_power_off(struct phy *phy)
+
+ static const struct phy_ops exynos5433_phy_ops = {
+ .init = exynos5433_pcie_phy_init,
+- .power_on = exynos5433_pcie_phy_power_on,
+- .power_off = exynos5433_pcie_phy_power_off,
++ .exit = exynos5433_pcie_phy_exit,
+ .owner = THIS_MODULE,
+ };
+
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index ffc045f7bf00b..fd093e36c3a88 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1641,16 +1641,14 @@ EXPORT_SYMBOL_GPL(intel_pinctrl_probe_by_uid);
+
+ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_device *pdev)
+ {
++ const struct intel_pinctrl_soc_data * const *table;
+ const struct intel_pinctrl_soc_data *data = NULL;
+- const struct intel_pinctrl_soc_data **table;
+- struct acpi_device *adev;
+- unsigned int i;
+
+- adev = ACPI_COMPANION(&pdev->dev);
+- if (adev) {
+- const void *match = device_get_match_data(&pdev->dev);
++ table = device_get_match_data(&pdev->dev);
++ if (table) {
++ struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
++ unsigned int i;
+
+- table = (const struct intel_pinctrl_soc_data **)match;
+ for (i = 0; table[i]; i++) {
+ if (!strcmp(adev->pnp.unique_id, table[i]->uid)) {
+ data = table[i];
+@@ -1664,7 +1662,7 @@ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_
+ if (!id)
+ return ERR_PTR(-ENODEV);
+
+- table = (const struct intel_pinctrl_soc_data **)id->driver_data;
++ table = (const struct intel_pinctrl_soc_data * const *)id->driver_data;
+ data = table[pdev->id];
+ }
+
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 640e50d94f27e..f5014d09d81a2 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1421,8 +1421,10 @@ static int nmk_pinctrl_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+
+ has_config = nmk_pinctrl_dt_get_config(np, &configs);
+ np_config = of_parse_phandle(np, "ste,config", 0);
+- if (np_config)
++ if (np_config) {
+ has_config |= nmk_pinctrl_dt_get_config(np_config, &configs);
++ of_node_put(np_config);
++ }
+ if (has_config) {
+ const char *gpio_name;
+ const char *pin;
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 0645c2c24f508..06923e8859b78 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -917,6 +917,7 @@ static int amd_gpio_suspend(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++ unsigned long flags;
+ int i;
+
+ for (i = 0; i < desc->npins; i++) {
+@@ -925,7 +926,9 @@ static int amd_gpio_suspend(struct device *dev)
+ if (!amd_gpio_should_save(gpio_dev, pin))
+ continue;
+
+- gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin*4);
++ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++ gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin * 4) & ~PIN_IRQ_PENDING;
++ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+
+ return 0;
+@@ -935,6 +938,7 @@ static int amd_gpio_resume(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++ unsigned long flags;
+ int i;
+
+ for (i = 0; i < desc->npins; i++) {
+@@ -943,7 +947,10 @@ static int amd_gpio_resume(struct device *dev)
+ if (!amd_gpio_should_save(gpio_dev, pin))
+ continue;
+
+- writel(gpio_dev->saved_regs[i], gpio_dev->base + pin*4);
++ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++ gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
++ writel(gpio_dev->saved_regs[i], gpio_dev->base + pin * 4);
++ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+
+ return 0;
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8916.c b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+index 396db12ae9048..bf68913ba8212 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8916.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+@@ -844,8 +844,8 @@ static const struct msm_pingroup msm8916_groups[] = {
+ PINGROUP(28, pwr_modem_enabled_a, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ PINGROUP(29, cci_i2c, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ PINGROUP(30, cci_i2c, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+- PINGROUP(31, cci_timer0, NA, NA, NA, NA, NA, NA, NA, NA),
+- PINGROUP(32, cci_timer1, NA, NA, NA, NA, NA, NA, NA, NA),
++ PINGROUP(31, cci_timer0, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
++ PINGROUP(32, cci_timer1, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
+ PINGROUP(33, cci_async, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ PINGROUP(34, pwr_nav_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ PINGROUP(35, pwr_crypto_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm8250.c b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+index af144e724bd9c..3bd7f9fedcc34 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm8250.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+@@ -1316,7 +1316,7 @@ static const struct msm_pingroup sm8250_groups[] = {
+ static const struct msm_gpio_wakeirq_map sm8250_pdc_map[] = {
+ { 0, 79 }, { 1, 84 }, { 2, 80 }, { 3, 82 }, { 4, 107 }, { 7, 43 },
+ { 11, 42 }, { 14, 44 }, { 15, 52 }, { 19, 67 }, { 23, 68 }, { 24, 105 },
+- { 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 37 },
++ { 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 73 },
+ { 40, 108 }, { 43, 71 }, { 45, 72 }, { 47, 83 }, { 51, 74 }, { 55, 77 },
+ { 59, 78 }, { 63, 75 }, { 64, 81 }, { 65, 87 }, { 66, 88 }, { 67, 89 },
+ { 68, 54 }, { 70, 85 }, { 77, 46 }, { 80, 90 }, { 81, 91 }, { 83, 97 },
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index a48cac55152ce..c3cdf52b72945 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -517,6 +517,8 @@ static int rzg2l_pinctrl_pinconf_get(struct pinctrl_dev *pctldev,
+ if (!(cfg & PIN_CFG_IEN))
+ return -EINVAL;
+ arg = rzg2l_read_pin_config(pctrl, IEN(port_offset), bit, IEN_MASK);
++ if (!arg)
++ return -EINVAL;
+ break;
+
+ case PIN_CONFIG_POWER_SOURCE: {
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+index c7d90c44e87aa..7b4b9f3d45558 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+@@ -107,6 +107,7 @@ static const struct sunxi_pinctrl_desc sun50i_h6_r_pinctrl_data = {
+ .npins = ARRAY_SIZE(sun50i_h6_r_pins),
+ .pin_base = PL_BASE,
+ .irq_banks = 2,
++ .io_bias_cfg_variant = BIAS_VOLTAGE_PIO_POW_MODE_SEL,
+ };
+
+ static int sun50i_h6_r_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index dd928402af997..09639e1d67098 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -624,7 +624,7 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ unsigned pin,
+ struct regulator *supply)
+ {
+- unsigned short bank = pin / PINS_PER_BANK;
++ unsigned short bank;
+ unsigned long flags;
+ u32 val, reg;
+ int uV;
+@@ -640,6 +640,9 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ if (uV == 0)
+ return 0;
+
++ pin -= pctl->desc->pin_base;
++ bank = pin / PINS_PER_BANK;
++
+ switch (pctl->desc->io_bias_cfg_variant) {
+ case BIAS_VOLTAGE_GRP_CONFIG:
+ /*
+@@ -657,8 +660,6 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ else
+ val = 0xD; /* 3.3V */
+
+- pin -= pctl->desc->pin_base;
+-
+ reg = readl(pctl->membase + sunxi_grp_config_reg(pin));
+ reg &= ~IO_BIAS_MASK;
+ writel(reg | val, pctl->membase + sunxi_grp_config_reg(pin));
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index ff767dccdf0f6..40dc048d18ad3 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -509,13 +509,13 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev)
+ ret = cros_ec_get_host_command_version_mask(ec_dev,
+ EC_CMD_GET_NEXT_EVENT,
+ &ver_mask);
+- if (ret < 0 || ver_mask == 0)
++ if (ret < 0 || ver_mask == 0) {
+ ec_dev->mkbp_event_supported = 0;
+- else
++ } else {
+ ec_dev->mkbp_event_supported = fls(ver_mask);
+
+- dev_dbg(ec_dev->dev, "MKBP support version %u\n",
+- ec_dev->mkbp_event_supported - 1);
++ dev_dbg(ec_dev->dev, "MKBP support version %u\n", ec_dev->mkbp_event_supported - 1);
++ }
+
+ /* Probe if host sleep v1 is supported for S0ix failure detection. */
+ ret = cros_ec_get_host_command_version_mask(ec_dev,
+diff --git a/drivers/rtc/rtc-spear.c b/drivers/rtc/rtc-spear.c
+index d4777b01ab220..736fe535cd457 100644
+--- a/drivers/rtc/rtc-spear.c
++++ b/drivers/rtc/rtc-spear.c
+@@ -388,7 +388,7 @@ static int spear_rtc_probe(struct platform_device *pdev)
+
+ config->rtc->ops = &spear_rtc_ops;
+ config->rtc->range_min = RTC_TIMESTAMP_BEGIN_0000;
+- config->rtc->range_min = RTC_TIMESTAMP_END_9999;
++ config->rtc->range_max = RTC_TIMESTAMP_END_9999;
+
+ status = devm_rtc_register_device(config->rtc);
+ if (status)
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 0a9045b49c508..052a9114b2a6c 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -2068,6 +2068,9 @@ static inline void ap_scan_adapter(int ap)
+ */
+ static bool ap_get_configuration(void)
+ {
++ if (!ap_qci_info) /* QCI not supported */
++ return false;
++
+ memcpy(ap_qci_info_old, ap_qci_info, sizeof(*ap_qci_info));
+ ap_fetch_qci_info(ap_qci_info);
+
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 0c40af157df23..0f17933954fb2 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -148,12 +148,16 @@ struct ap_driver {
+ /*
+ * Called at the start of the ap bus scan function when
+ * the crypto config information (qci) has changed.
++ * This callback is not invoked if there is no AP
++ * QCI support available.
+ */
+ void (*on_config_changed)(struct ap_config_info *new_config_info,
+ struct ap_config_info *old_config_info);
+ /*
+ * Called at the end of the ap bus scan function when
+ * the crypto config information (qci) has changed.
++ * This callback is not invoked if there is no AP
++ * QCI support available.
+ */
+ void (*on_scan_complete)(struct ap_config_info *new_config_info,
+ struct ap_config_info *old_config_info);
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 7b24c932e8126..25deacc92b020 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2607,8 +2607,8 @@ lpfc_debugfs_multixripools_write(struct file *file, const char __user *buf,
+ struct lpfc_sli4_hdw_queue *qp;
+ struct lpfc_multixri_pool *multixri_pool;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2688,8 +2688,8 @@ lpfc_debugfs_nvmestat_write(struct file *file, const char __user *buf,
+ if (!phba->targetport)
+ return -ENXIO;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2826,8 +2826,8 @@ lpfc_debugfs_ioktime_write(struct file *file, const char __user *buf,
+ char mybuf[64];
+ char *pbuf;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2954,8 +2954,8 @@ lpfc_debugfs_nvmeio_trc_write(struct file *file, const char __user *buf,
+ char mybuf[64];
+ char *pbuf;
+
+- if (nbytes > 63)
+- nbytes = 63;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -3060,8 +3060,8 @@ lpfc_debugfs_hdwqstat_write(struct file *file, const char __user *buf,
+ char *pbuf;
+ int i;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 80ac3a051c192..e2127e85ff325 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2003,10 +2003,12 @@ initpath:
+
+ sync_buf->cmd_flag |= LPFC_IO_CMF;
+ ret_val = lpfc_sli4_issue_wqe(phba, &phba->sli4_hba.hdwq[0], sync_buf);
+- if (ret_val)
++ if (ret_val) {
+ lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
+ "6214 Cannot issue CMF_SYNC_WQE: x%x\n",
+ ret_val);
++ __lpfc_sli_release_iocbq(phba, sync_buf);
++ }
+ out_unlock:
+ spin_unlock_irqrestore(&phba->hbalock, iflags);
+ return ret_val;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2a38cd2d24eff..02899e8849dd6 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2143,8 +2143,6 @@ static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
+ return 0;
+
+ iscsi_remove_conn(iscsi_dev_to_conn(dev));
+- iscsi_put_conn(iscsi_dev_to_conn(dev));
+-
+ return 0;
+ }
+
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index 0bc7daa7afc83..e4cb52e1fe261 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -156,6 +156,7 @@ struct meson_spicc_device {
+ void __iomem *base;
+ struct clk *core;
+ struct clk *pclk;
++ struct clk_divider pow2_div;
+ struct clk *clk;
+ struct spi_message *message;
+ struct spi_transfer *xfer;
+@@ -168,6 +169,8 @@ struct meson_spicc_device {
+ unsigned long xfer_remain;
+ };
+
++#define pow2_clk_to_spicc(_div) container_of(_div, struct meson_spicc_device, pow2_div)
++
+ static void meson_spicc_oen_enable(struct meson_spicc_device *spicc)
+ {
+ u32 conf;
+@@ -421,7 +424,7 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ {
+ struct meson_spicc_device *spicc = spi_master_get_devdata(master);
+ struct spi_device *spi = message->spi;
+- u32 conf = 0;
++ u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+
+ /* Store current message */
+ spicc->message = message;
+@@ -458,8 +461,6 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ /* Select CS */
+ conf |= FIELD_PREP(SPICC_CS_MASK, spi->chip_select);
+
+- /* Default Clock rate core/4 */
+-
+ /* Default 8bit word */
+ conf |= FIELD_PREP(SPICC_BITLENGTH_MASK, 8 - 1);
+
+@@ -476,12 +477,16 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ static int meson_spicc_unprepare_transfer(struct spi_master *master)
+ {
+ struct meson_spicc_device *spicc = spi_master_get_devdata(master);
++ u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+
+ /* Disable all IRQs */
+ writel(0, spicc->base + SPICC_INTREG);
+
+ device_reset_optional(&spicc->pdev->dev);
+
++ /* Set default configuration, keeping datarate field */
++ writel_relaxed(conf, spicc->base + SPICC_CONREG);
++
+ return 0;
+ }
+
+@@ -518,14 +523,60 @@ static void meson_spicc_cleanup(struct spi_device *spi)
+ * Clk path for G12A series:
+ * pclk -> pow2 fixed div -> pow2 div -> mux -> out
+ * pclk -> enh fixed div -> enh div -> mux -> out
++ *
++ * The pow2 divider is tied to the controller HW state, and the
++ * divider is only valid when the controller is initialized.
++ *
++ * A set of clock ops is added to make sure we don't read/set this
++ * clock rate while the controller is in an unknown state.
+ */
+
+-static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
++static unsigned long meson_spicc_pow2_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return 0;
++
++ return clk_divider_ops.recalc_rate(hw, parent_rate);
++}
++
++static int meson_spicc_pow2_determine_rate(struct clk_hw *hw,
++ struct clk_rate_request *req)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return -EINVAL;
++
++ return clk_divider_ops.determine_rate(hw, req);
++}
++
++static int meson_spicc_pow2_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return -EINVAL;
++
++ return clk_divider_ops.set_rate(hw, rate, parent_rate);
++}
++
++const struct clk_ops meson_spicc_pow2_clk_ops = {
++ .recalc_rate = meson_spicc_pow2_recalc_rate,
++ .determine_rate = meson_spicc_pow2_determine_rate,
++ .set_rate = meson_spicc_pow2_set_rate,
++};
++
++static int meson_spicc_pow2_clk_init(struct meson_spicc_device *spicc)
+ {
+ struct device *dev = &spicc->pdev->dev;
+- struct clk_fixed_factor *pow2_fixed_div, *enh_fixed_div;
+- struct clk_divider *pow2_div, *enh_div;
+- struct clk_mux *mux;
++ struct clk_fixed_factor *pow2_fixed_div;
+ struct clk_init_data init;
+ struct clk *clk;
+ struct clk_parent_data parent_data[2];
+@@ -560,31 +611,45 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ if (WARN_ON(IS_ERR(clk)))
+ return PTR_ERR(clk);
+
+- pow2_div = devm_kzalloc(dev, sizeof(*pow2_div), GFP_KERNEL);
+- if (!pow2_div)
+- return -ENOMEM;
+-
+ snprintf(name, sizeof(name), "%s#pow2_div", dev_name(dev));
+ init.name = name;
+- init.ops = &clk_divider_ops;
+- init.flags = CLK_SET_RATE_PARENT;
++ init.ops = &meson_spicc_pow2_clk_ops;
++ /*
++ * Set NOCACHE here to make sure we read the actual HW value
++ * since we reset the HW after each transfer.
++ */
++ init.flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE;
+ parent_data[0].hw = &pow2_fixed_div->hw;
+ init.num_parents = 1;
+
+- pow2_div->shift = 16,
+- pow2_div->width = 3,
+- pow2_div->flags = CLK_DIVIDER_POWER_OF_TWO,
+- pow2_div->reg = spicc->base + SPICC_CONREG;
+- pow2_div->hw.init = &init;
++ spicc->pow2_div.shift = 16,
++ spicc->pow2_div.width = 3,
++ spicc->pow2_div.flags = CLK_DIVIDER_POWER_OF_TWO,
++ spicc->pow2_div.reg = spicc->base + SPICC_CONREG;
++ spicc->pow2_div.hw.init = &init;
+
+- clk = devm_clk_register(dev, &pow2_div->hw);
+- if (WARN_ON(IS_ERR(clk)))
+- return PTR_ERR(clk);
++ spicc->clk = devm_clk_register(dev, &spicc->pow2_div.hw);
++ if (WARN_ON(IS_ERR(spicc->clk)))
++ return PTR_ERR(spicc->clk);
+
+- if (!spicc->data->has_enhance_clk_div) {
+- spicc->clk = clk;
+- return 0;
+- }
++ return 0;
++}
++
++static int meson_spicc_enh_clk_init(struct meson_spicc_device *spicc)
++{
++ struct device *dev = &spicc->pdev->dev;
++ struct clk_fixed_factor *enh_fixed_div;
++ struct clk_divider *enh_div;
++ struct clk_mux *mux;
++ struct clk_init_data init;
++ struct clk *clk;
++ struct clk_parent_data parent_data[2];
++ char name[64];
++
++ memset(&init, 0, sizeof(init));
++ memset(&parent_data, 0, sizeof(parent_data));
++
++ init.parent_data = parent_data;
+
+ /* algorithm for enh div: rate = freq / 2 / (N + 1) */
+
+@@ -637,7 +702,7 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ snprintf(name, sizeof(name), "%s#sel", dev_name(dev));
+ init.name = name;
+ init.ops = &clk_mux_ops;
+- parent_data[0].hw = &pow2_div->hw;
++ parent_data[0].hw = &spicc->pow2_div.hw;
+ parent_data[1].hw = &enh_div->hw;
+ init.num_parents = 2;
+ init.flags = CLK_SET_RATE_PARENT;
+@@ -754,12 +819,20 @@ static int meson_spicc_probe(struct platform_device *pdev)
+
+ meson_spicc_oen_enable(spicc);
+
+- ret = meson_spicc_clk_init(spicc);
++ ret = meson_spicc_pow2_clk_init(spicc);
+ if (ret) {
+- dev_err(&pdev->dev, "clock registration failed\n");
++ dev_err(&pdev->dev, "pow2 clock registration failed\n");
+ goto out_clk;
+ }
+
++ if (spicc->data->has_enhance_clk_div) {
++ ret = meson_spicc_enh_clk_init(spicc);
++ if (ret) {
++ dev_err(&pdev->dev, "clock registration failed\n");
++ goto out_clk;
++ }
++ }
++
+ ret = devm_spi_register_master(&pdev->dev, master);
+ if (ret) {
+ dev_err(&pdev->dev, "spi master registration failed\n");
+diff --git a/drivers/staging/r8188eu/core/rtw_cmd.c b/drivers/staging/r8188eu/core/rtw_cmd.c
+index 06523d91939a6..5b6a891b5d67e 100644
+--- a/drivers/staging/r8188eu/core/rtw_cmd.c
++++ b/drivers/staging/r8188eu/core/rtw_cmd.c
+@@ -898,8 +898,12 @@ static void traffic_status_watchdog(struct adapter *padapter)
+ static void rtl8188e_sreset_xmit_status_check(struct adapter *padapter)
+ {
+ u32 txdma_status;
++ int res;
++
++ res = rtw_read32(padapter, REG_TXDMA_STATUS, &txdma_status);
++ if (res)
++ return;
+
+- txdma_status = rtw_read32(padapter, REG_TXDMA_STATUS);
+ if (txdma_status != 0x00)
+ rtw_write32(padapter, REG_TXDMA_STATUS, txdma_status);
+ /* total xmit irp = 4 */
+@@ -1177,7 +1181,14 @@ exit:
+
+ static bool rtw_is_hi_queue_empty(struct adapter *adapter)
+ {
+- return (rtw_read32(adapter, REG_HGQ_INFORMATION) & 0x0000ff00) == 0;
++ int res;
++ u32 reg;
++
++ res = rtw_read32(adapter, REG_HGQ_INFORMATION, &reg);
++ if (res)
++ return false;
++
++ return (reg & 0x0000ff00) == 0;
+ }
+
+ static void rtw_chk_hi_queue_hdl(struct adapter *padapter)
+diff --git a/drivers/staging/r8188eu/core/rtw_efuse.c b/drivers/staging/r8188eu/core/rtw_efuse.c
+index 0e0e606388802..8005ed8d3a203 100644
+--- a/drivers/staging/r8188eu/core/rtw_efuse.c
++++ b/drivers/staging/r8188eu/core/rtw_efuse.c
+@@ -28,22 +28,35 @@ ReadEFuseByte(
+ u32 value32;
+ u8 readbyte;
+ u16 retry;
++ int res;
+
+ /* Write Address */
+ rtw_write8(Adapter, EFUSE_CTRL + 1, (_offset & 0xff));
+- readbyte = rtw_read8(Adapter, EFUSE_CTRL + 2);
++ res = rtw_read8(Adapter, EFUSE_CTRL + 2, &readbyte);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, EFUSE_CTRL + 2, ((_offset >> 8) & 0x03) | (readbyte & 0xfc));
+
+ /* Write bit 32 0 */
+- readbyte = rtw_read8(Adapter, EFUSE_CTRL + 3);
++ res = rtw_read8(Adapter, EFUSE_CTRL + 3, &readbyte);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, EFUSE_CTRL + 3, (readbyte & 0x7f));
+
+ /* Check bit 32 read-ready */
+- retry = 0;
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
+- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
+- retry++;
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ return;
++
++ for (retry = 0; retry < 10000; retry++) {
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ continue;
++
++ if (((value32 >> 24) & 0xff) & 0x80)
++ break;
+ }
+
+ /* 20100205 Joseph: Add delay suggested by SD1 Victor. */
+@@ -51,9 +64,13 @@ ReadEFuseByte(
+ /* Designer says that there shall be some delay after ready bit is set, or the */
+ /* result will always stay on last data we read. */
+ udelay(50);
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ return;
+
+ *pbuf = (u8)(value32 & 0xff);
++
++ /* FIXME: return an error to caller */
+ }
+
+ /*-----------------------------------------------------------------------------
+diff --git a/drivers/staging/r8188eu/core/rtw_fw.c b/drivers/staging/r8188eu/core/rtw_fw.c
+index 0451e51776448..04f25e0b3bca5 100644
+--- a/drivers/staging/r8188eu/core/rtw_fw.c
++++ b/drivers/staging/r8188eu/core/rtw_fw.c
+@@ -44,18 +44,28 @@ static_assert(sizeof(struct rt_firmware_hdr) == 32);
+ static void fw_download_enable(struct adapter *padapter, bool enable)
+ {
+ u8 tmp;
++ int res;
+
+ if (enable) {
+ /* MCU firmware download enable. */
+- tmp = rtw_read8(padapter, REG_MCUFWDL);
++ res = rtw_read8(padapter, REG_MCUFWDL, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL, tmp | 0x01);
+
+ /* 8051 reset */
+- tmp = rtw_read8(padapter, REG_MCUFWDL + 2);
++ res = rtw_read8(padapter, REG_MCUFWDL + 2, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL + 2, tmp & 0xf7);
+ } else {
+ /* MCU firmware download disable. */
+- tmp = rtw_read8(padapter, REG_MCUFWDL);
++ res = rtw_read8(padapter, REG_MCUFWDL, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL, tmp & 0xfe);
+
+ /* Reserved for fw extension. */
+@@ -125,8 +135,13 @@ static int page_write(struct adapter *padapter, u32 page, u8 *buffer, u32 size)
+ {
+ u8 value8;
+ u8 u8Page = (u8)(page & 0x07);
++ int res;
++
++ res = rtw_read8(padapter, REG_MCUFWDL + 2, &value8);
++ if (res)
++ return _FAIL;
+
+- value8 = (rtw_read8(padapter, REG_MCUFWDL + 2) & 0xF8) | u8Page;
++ value8 = (value8 & 0xF8) | u8Page;
+ rtw_write8(padapter, REG_MCUFWDL + 2, value8);
+
+ return block_write(padapter, buffer, size);
+@@ -165,8 +180,12 @@ exit:
+ void rtw_reset_8051(struct adapter *padapter)
+ {
+ u8 val8;
++ int res;
++
++ res = rtw_read8(padapter, REG_SYS_FUNC_EN + 1, &val8);
++ if (res)
++ return;
+
+- val8 = rtw_read8(padapter, REG_SYS_FUNC_EN + 1);
+ rtw_write8(padapter, REG_SYS_FUNC_EN + 1, val8 & (~BIT(2)));
+ rtw_write8(padapter, REG_SYS_FUNC_EN + 1, val8 | (BIT(2)));
+ }
+@@ -175,10 +194,14 @@ static int fw_free_to_go(struct adapter *padapter)
+ {
+ u32 counter = 0;
+ u32 value32;
++ int res;
+
+ /* polling CheckSum report */
+ do {
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (res)
++ continue;
++
+ if (value32 & FWDL_CHKSUM_RPT)
+ break;
+ } while (counter++ < POLLING_READY_TIMEOUT_COUNT);
+@@ -186,7 +209,10 @@ static int fw_free_to_go(struct adapter *padapter)
+ if (counter >= POLLING_READY_TIMEOUT_COUNT)
+ return _FAIL;
+
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (res)
++ return _FAIL;
++
+ value32 |= MCUFWDL_RDY;
+ value32 &= ~WINTINI_RDY;
+ rtw_write32(padapter, REG_MCUFWDL, value32);
+@@ -196,9 +222,10 @@ static int fw_free_to_go(struct adapter *padapter)
+ /* polling for FW ready */
+ counter = 0;
+ do {
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
+- if (value32 & WINTINI_RDY)
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (!res && value32 & WINTINI_RDY)
+ return _SUCCESS;
++
+ udelay(5);
+ } while (counter++ < POLLING_READY_TIMEOUT_COUNT);
+
+@@ -239,7 +266,7 @@ exit:
+ int rtl8188e_firmware_download(struct adapter *padapter)
+ {
+ int ret = _SUCCESS;
+- u8 write_fw_retry = 0;
++ u8 reg;
+ unsigned long fwdl_timeout;
+ struct dvobj_priv *dvobj = adapter_to_dvobj(padapter);
+ struct device *device = dvobj_to_dev(dvobj);
+@@ -269,23 +296,34 @@ int rtl8188e_firmware_download(struct adapter *padapter)
+
+ /* Suggested by Filen. If 8051 is running in RAM code, driver should inform Fw to reset by itself, */
+ /* or it will cause download Fw fail. 2010.02.01. by tynli. */
+- if (rtw_read8(padapter, REG_MCUFWDL) & RAM_DL_SEL) { /* 8051 RAM code */
++ ret = rtw_read8(padapter, REG_MCUFWDL, &reg);
++ if (ret) {
++ ret = _FAIL;
++ goto exit;
++ }
++
++ if (reg & RAM_DL_SEL) { /* 8051 RAM code */
+ rtw_write8(padapter, REG_MCUFWDL, 0x00);
+ rtw_reset_8051(padapter);
+ }
+
+ fw_download_enable(padapter, true);
+ fwdl_timeout = jiffies + msecs_to_jiffies(500);
+- while (1) {
++ do {
+ /* reset the FWDL chksum */
+- rtw_write8(padapter, REG_MCUFWDL, rtw_read8(padapter, REG_MCUFWDL) | FWDL_CHKSUM_RPT);
++ ret = rtw_read8(padapter, REG_MCUFWDL, &reg);
++ if (ret) {
++ ret = _FAIL;
++ continue;
++ }
+
+- ret = write_fw(padapter, fw_data, fw_size);
++ rtw_write8(padapter, REG_MCUFWDL, reg | FWDL_CHKSUM_RPT);
+
+- if (ret == _SUCCESS ||
+- (time_after(jiffies, fwdl_timeout) && write_fw_retry++ >= 3))
++ ret = write_fw(padapter, fw_data, fw_size);
++ if (ret == _SUCCESS)
+ break;
+- }
++ } while (!time_after(jiffies, fwdl_timeout));
++
+ fw_download_enable(padapter, false);
+ if (ret != _SUCCESS)
+ goto exit;
+diff --git a/drivers/staging/r8188eu/core/rtw_led.c b/drivers/staging/r8188eu/core/rtw_led.c
+index 2f3000428af76..25989acf52599 100644
+--- a/drivers/staging/r8188eu/core/rtw_led.c
++++ b/drivers/staging/r8188eu/core/rtw_led.c
+@@ -35,11 +35,15 @@ static void ResetLedStatus(struct LED_871x *pLed)
+ static void SwLedOn(struct adapter *padapter, struct LED_871x *pLed)
+ {
+ u8 LedCfg;
++ int res;
+
+ if (padapter->bSurpriseRemoved || padapter->bDriverStopped)
+ return;
+
+- LedCfg = rtw_read8(padapter, REG_LEDCFG2);
++ res = rtw_read8(padapter, REG_LEDCFG2, &LedCfg);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_LEDCFG2, (LedCfg & 0xf0) | BIT(5) | BIT(6)); /* SW control led0 on. */
+ pLed->bLedOn = true;
+ }
+@@ -47,15 +51,21 @@ static void SwLedOn(struct adapter *padapter, struct LED_871x *pLed)
+ static void SwLedOff(struct adapter *padapter, struct LED_871x *pLed)
+ {
+ u8 LedCfg;
++ int res;
+
+ if (padapter->bSurpriseRemoved || padapter->bDriverStopped)
+ goto exit;
+
+- LedCfg = rtw_read8(padapter, REG_LEDCFG2);/* 0x4E */
++ res = rtw_read8(padapter, REG_LEDCFG2, &LedCfg);/* 0x4E */
++ if (res)
++ goto exit;
+
+ LedCfg &= 0x90; /* Set to software control. */
+ rtw_write8(padapter, REG_LEDCFG2, (LedCfg | BIT(3)));
+- LedCfg = rtw_read8(padapter, REG_MAC_PINMUX_CFG);
++ res = rtw_read8(padapter, REG_MAC_PINMUX_CFG, &LedCfg);
++ if (res)
++ goto exit;
++
+ LedCfg &= 0xFE;
+ rtw_write8(padapter, REG_MAC_PINMUX_CFG, LedCfg);
+ exit:
+diff --git a/drivers/staging/r8188eu/core/rtw_mlme_ext.c b/drivers/staging/r8188eu/core/rtw_mlme_ext.c
+index faf23fc950c53..88a4953d31d81 100644
+--- a/drivers/staging/r8188eu/core/rtw_mlme_ext.c
++++ b/drivers/staging/r8188eu/core/rtw_mlme_ext.c
+@@ -5667,14 +5667,28 @@ unsigned int send_beacon(struct adapter *padapter)
+
+ bool get_beacon_valid_bit(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, REG_TDECTRL + 2, &reg);
++ if (res)
++ return false;
++
+ /* BIT(16) of REG_TDECTRL = BIT(0) of REG_TDECTRL+2 */
+- return BIT(0) & rtw_read8(adapter, REG_TDECTRL + 2);
++ return BIT(0) & reg;
+ }
+
+ void clear_beacon_valid_bit(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, REG_TDECTRL + 2, &reg);
++ if (res)
++ return;
++
+ /* BIT(16) of REG_TDECTRL = BIT(0) of REG_TDECTRL+2, write 1 to clear, Clear by sw */
+- rtw_write8(adapter, REG_TDECTRL + 2, rtw_read8(adapter, REG_TDECTRL + 2) | BIT(0));
++ rtw_write8(adapter, REG_TDECTRL + 2, reg | BIT(0));
+ }
+
+ /****************************************************************************
+@@ -6002,7 +6016,9 @@ static void rtw_set_bssid(struct adapter *adapter, u8 *bssid)
+ static void mlme_join(struct adapter *adapter, int type)
+ {
+ struct mlme_priv *mlmepriv = &adapter->mlmepriv;
+- u8 retry_limit = 0x30;
++ u8 retry_limit = 0x30, reg;
++ u32 reg32;
++ int res;
+
+ switch (type) {
+ case 0:
+@@ -6010,8 +6026,12 @@ static void mlme_join(struct adapter *adapter, int type)
+ /* enable to rx data frame, accept all data frame */
+ rtw_write16(adapter, REG_RXFLTMAP2, 0xFFFF);
+
++ res = rtw_read32(adapter, REG_RCR, &reg32);
++ if (res)
++ return;
++
+ rtw_write32(adapter, REG_RCR,
+- rtw_read32(adapter, REG_RCR) | RCR_CBSSID_DATA | RCR_CBSSID_BCN);
++ reg32 | RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+
+ if (check_fwstate(mlmepriv, WIFI_STATION_STATE)) {
+ retry_limit = 48;
+@@ -6027,7 +6047,11 @@ static void mlme_join(struct adapter *adapter, int type)
+ case 2:
+ /* sta add event call back */
+ /* enable update TSF */
+- rtw_write8(adapter, REG_BCN_CTRL, rtw_read8(adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+
+ if (check_fwstate(mlmepriv, WIFI_ADHOC_STATE | WIFI_ADHOC_MASTER_STATE))
+ retry_limit = 0x7;
+@@ -6748,6 +6772,9 @@ void mlmeext_sta_add_event_callback(struct adapter *padapter, struct sta_info *p
+
+ static void mlme_disconnect(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
+ /* Set RCR to not to receive data frame when NO LINK state */
+ /* reject all data frames */
+ rtw_write16(adapter, REG_RXFLTMAP2, 0x00);
+@@ -6756,7 +6783,12 @@ static void mlme_disconnect(struct adapter *adapter)
+ rtw_write8(adapter, REG_DUAL_TSF_RST, (BIT(0) | BIT(1)));
+
+ /* disable update TSF */
+- rtw_write8(adapter, REG_BCN_CTRL, rtw_read8(adapter, REG_BCN_CTRL) | BIT(4));
++
++ res = rtw_read8(adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapter, REG_BCN_CTRL, reg | BIT(4));
+ }
+
+ void mlmeext_sta_del_event_callback(struct adapter *padapter)
+@@ -6810,14 +6842,20 @@ static u8 chk_ap_is_alive(struct sta_info *psta)
+ return ret;
+ }
+
+-static void rtl8188e_sreset_linked_status_check(struct adapter *padapter)
++static int rtl8188e_sreset_linked_status_check(struct adapter *padapter)
+ {
+- u32 rx_dma_status = rtw_read32(padapter, REG_RXDMA_STATUS);
++ u32 rx_dma_status;
++ int res;
++ u8 reg;
++
++ res = rtw_read32(padapter, REG_RXDMA_STATUS, &rx_dma_status);
++ if (res)
++ return res;
+
+ if (rx_dma_status != 0x00)
+ rtw_write32(padapter, REG_RXDMA_STATUS, rx_dma_status);
+
+- rtw_read8(padapter, REG_FMETHR);
++ return rtw_read8(padapter, REG_FMETHR, &reg);
+ }
+
+ void linked_status_chk(struct adapter *padapter)
+@@ -7219,6 +7257,7 @@ u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf)
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
+ struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network);
+ u8 val8;
++ int res;
+
+ if (is_client_associated_to_ap(padapter))
+ issue_deauth_ex(padapter, pnetwork->MacAddress, WLAN_REASON_DEAUTH_LEAVING, param->deauth_timeout_ms / 100, 100);
+@@ -7231,7 +7270,10 @@ u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf)
+
+ if (((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE)) {
+ /* Stop BCN */
+- val8 = rtw_read8(padapter, REG_BCN_CTRL);
++ res = rtw_read8(padapter, REG_BCN_CTRL, &val8);
++ if (res)
++ return H2C_DROPPED;
++
+ rtw_write8(padapter, REG_BCN_CTRL, val8 & (~(EN_BCN_FUNCTION | EN_TXBCN_RPT)));
+ }
+
+diff --git a/drivers/staging/r8188eu/core/rtw_pwrctrl.c b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
+index 7b816b824947d..45e85b593665f 100644
+--- a/drivers/staging/r8188eu/core/rtw_pwrctrl.c
++++ b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
+@@ -229,6 +229,9 @@ void rtw_set_ps_mode(struct adapter *padapter, u8 ps_mode, u8 smart_ps, u8 bcn_a
+
+ static bool lps_rf_on(struct adapter *adapter)
+ {
++ int res;
++ u32 reg;
++
+ /* When we halt NIC, we should check if FW LPS is leave. */
+ if (adapter->pwrctrlpriv.rf_pwrstate == rf_off) {
+ /* If it is in HW/SW Radio OFF or IPS state, we do not check Fw LPS Leave, */
+@@ -236,7 +239,11 @@ static bool lps_rf_on(struct adapter *adapter)
+ return true;
+ }
+
+- if (rtw_read32(adapter, REG_RCR) & 0x00070000)
++ res = rtw_read32(adapter, REG_RCR, &reg);
++ if (res)
++ return false;
++
++ if (reg & 0x00070000)
+ return false;
+
+ return true;
+diff --git a/drivers/staging/r8188eu/core/rtw_wlan_util.c b/drivers/staging/r8188eu/core/rtw_wlan_util.c
+index 392a65783f324..9bd059b86d0c4 100644
+--- a/drivers/staging/r8188eu/core/rtw_wlan_util.c
++++ b/drivers/staging/r8188eu/core/rtw_wlan_util.c
+@@ -279,8 +279,13 @@ void Restore_DM_Func_Flag(struct adapter *padapter)
+ void Set_MSR(struct adapter *padapter, u8 type)
+ {
+ u8 val8;
++ int res;
+
+- val8 = rtw_read8(padapter, MSR) & 0x0c;
++ res = rtw_read8(padapter, MSR, &val8);
++ if (res)
++ return;
++
++ val8 &= 0x0c;
+ val8 |= type;
+ rtw_write8(padapter, MSR, val8);
+ }
+@@ -505,7 +510,11 @@ int WMM_param_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE)
+
+ static void set_acm_ctrl(struct adapter *adapter, u8 acm_mask)
+ {
+- u8 acmctrl = rtw_read8(adapter, REG_ACMHWCTRL);
++ u8 acmctrl;
++ int res = rtw_read8(adapter, REG_ACMHWCTRL, &acmctrl);
++
++ if (res)
++ return;
+
+ if (acm_mask > 1)
+ acmctrl = acmctrl | 0x1;
+@@ -765,6 +774,7 @@ void HT_info_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE)
+ static void set_min_ampdu_spacing(struct adapter *adapter, u8 spacing)
+ {
+ u8 sec_spacing;
++ int res;
+
+ if (spacing <= 7) {
+ switch (adapter->securitypriv.dot11PrivacyAlgrthm) {
+@@ -786,8 +796,12 @@ static void set_min_ampdu_spacing(struct adapter *adapter, u8 spacing)
+ if (spacing < sec_spacing)
+ spacing = sec_spacing;
+
++ res = rtw_read8(adapter, REG_AMPDU_MIN_SPACE, &sec_spacing);
++ if (res)
++ return;
++
+ rtw_write8(adapter, REG_AMPDU_MIN_SPACE,
+- (rtw_read8(adapter, REG_AMPDU_MIN_SPACE) & 0xf8) | spacing);
++ (sec_spacing & 0xf8) | spacing);
+ }
+ }
+
+diff --git a/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c b/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
+index 57e8f55738467..3cefdf90d6e00 100644
+--- a/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
++++ b/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
+@@ -279,6 +279,7 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf
+ { /* Wilson 2011/10/26 */
+ u32 MaskFromReg;
+ s8 i;
++ int res;
+
+ switch (pRaInfo->RateID) {
+ case RATR_INX_WIRELESS_NGB:
+@@ -303,19 +304,31 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & 0x0000000d;
+ break;
+ case 12:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR0);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR0, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 13:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR1);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR1, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 14:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR2);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR2, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 15:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR3);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR3, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ default:
+diff --git a/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c b/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
+index b944c8071a3b9..525deab10820b 100644
+--- a/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
++++ b/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
+@@ -463,6 +463,7 @@ void _PHY_SaveADDARegisters(struct adapter *adapt, u32 *ADDAReg, u32 *ADDABackup
+ }
+ }
+
++/* FIXME: return an error to caller */
+ static void _PHY_SaveMACRegisters(
+ struct adapter *adapt,
+ u32 *MACReg,
+@@ -470,11 +471,20 @@ static void _PHY_SaveMACRegisters(
+ )
+ {
+ u32 i;
++ int res;
+
+- for (i = 0; i < (IQK_MAC_REG_NUM - 1); i++)
+- MACBackup[i] = rtw_read8(adapt, MACReg[i]);
++ for (i = 0; i < (IQK_MAC_REG_NUM - 1); i++) {
++ u8 reg;
++
++ res = rtw_read8(adapt, MACReg[i], &reg);
++ if (res)
++ return;
+
+- MACBackup[i] = rtw_read32(adapt, MACReg[i]);
++ MACBackup[i] = reg;
++ }
++
++ res = rtw_read32(adapt, MACReg[i], MACBackup + i);
++ (void)res;
+ }
+
+ static void reload_adda_reg(struct adapter *adapt, u32 *ADDAReg, u32 *ADDABackup, u32 RegiesterNum)
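
The FIXME at the top of this hunk notes that _PHY_SaveMACRegisters still swallows read errors: it returns early on a failed byte read and voids the result of the final dword read. A hedged sketch of the int-returning variant the FIXME asks for, assembled only from calls this series already provides (the signature change is an assumption, not part of this commit):

/* Assumed shape of the follow-up fix; not part of this patch. */
static int _PHY_SaveMACRegisters(struct adapter *adapt, u32 *MACReg,
                                 u32 *MACBackup)
{
        u32 i;
        int res;

        for (i = 0; i < (IQK_MAC_REG_NUM - 1); i++) {
                u8 reg;

                res = rtw_read8(adapt, MACReg[i], &reg);
                if (res)
                        return res;     /* let the caller decide */

                MACBackup[i] = reg;
        }

        /* the last entry is a 32-bit register */
        return rtw_read32(adapt, MACReg[i], MACBackup + i);
}
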
+@@ -739,9 +749,12 @@ static void phy_LCCalibrate_8188E(struct adapter *adapt)
+ {
+ u8 tmpreg;
+ u32 RF_Amode = 0, LC_Cal;
++ int res;
+
+ /* Check continuous TX and Packet TX */
+- tmpreg = rtw_read8(adapt, 0xd03);
++ res = rtw_read8(adapt, 0xd03, &tmpreg);
++ if (res)
++ return;
+
+ if ((tmpreg & 0x70) != 0) /* Deal with contisuous TX case */
+ rtw_write8(adapt, 0xd03, tmpreg & 0x8F); /* disable all continuous TX */
+diff --git a/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c b/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
+index 150ea380c39e9..4a4563b900b39 100644
+--- a/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
++++ b/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
+@@ -12,6 +12,7 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ u32 offset = 0;
+ u32 poll_count = 0; /* polling autoload done. */
+ u32 max_poll_count = 5000;
++ int res;
+
+ do {
+ pwrcfgcmd = pwrseqcmd[aryidx];
+@@ -21,7 +22,9 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ offset = GET_PWR_CFG_OFFSET(pwrcfgcmd);
+
+ /* Read the value from system register */
+- value = rtw_read8(padapter, offset);
++ res = rtw_read8(padapter, offset, &value);
++ if (res)
++ return false;
+
+ value &= ~(GET_PWR_CFG_MASK(pwrcfgcmd));
+ value |= (GET_PWR_CFG_VALUE(pwrcfgcmd) & GET_PWR_CFG_MASK(pwrcfgcmd));
+@@ -33,7 +36,9 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ poll_bit = false;
+ offset = GET_PWR_CFG_OFFSET(pwrcfgcmd);
+ do {
+- value = rtw_read8(padapter, offset);
++ res = rtw_read8(padapter, offset, &value);
++ if (res)
++ return false;
+
+ value &= GET_PWR_CFG_MASK(pwrcfgcmd);
+ if (value == (GET_PWR_CFG_VALUE(pwrcfgcmd) & GET_PWR_CFG_MASK(pwrcfgcmd)))
+diff --git a/drivers/staging/r8188eu/hal/hal_com.c b/drivers/staging/r8188eu/hal/hal_com.c
+index 910cc07f656ca..e9a32dd84a8ef 100644
+--- a/drivers/staging/r8188eu/hal/hal_com.c
++++ b/drivers/staging/r8188eu/hal/hal_com.c
+@@ -303,7 +303,9 @@ s32 c2h_evt_read(struct adapter *adapter, u8 *buf)
+ if (!buf)
+ goto exit;
+
+- trigger = rtw_read8(adapter, REG_C2HEVT_CLEAR);
++ ret = rtw_read8(adapter, REG_C2HEVT_CLEAR, &trigger);
++ if (ret)
++ return _FAIL;
+
+ if (trigger == C2H_EVT_HOST_CLOSE)
+ goto exit; /* Not ready */
+@@ -314,13 +316,26 @@ s32 c2h_evt_read(struct adapter *adapter, u8 *buf)
+
+ memset(c2h_evt, 0, 16);
+
+- *buf = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL);
+- *(buf + 1) = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL + 1);
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL, buf);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
+
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL + 1, buf + 1);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
+ /* Read the content */
+- for (i = 0; i < c2h_evt->plen; i++)
+- c2h_evt->payload[i] = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL +
+- sizeof(*c2h_evt) + i);
++ for (i = 0; i < c2h_evt->plen; i++) {
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL +
++ sizeof(*c2h_evt) + i, c2h_evt->payload + i);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
++ }
+
+ ret = _SUCCESS;
+
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_cmd.c b/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
+index 475650dc73011..b01ee1695fee2 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
+@@ -18,13 +18,18 @@
+
+ static u8 _is_fw_read_cmd_down(struct adapter *adapt, u8 msgbox_num)
+ {
+- u8 read_down = false;
++ u8 read_down = false, reg;
+ int retry_cnts = 100;
++ int res;
+
+ u8 valid;
+
+ do {
+- valid = rtw_read8(adapt, REG_HMETFR) & BIT(msgbox_num);
++ res = rtw_read8(adapt, REG_HMETFR, &reg);
++ if (res)
++ continue;
++
++ valid = reg & BIT(msgbox_num);
+ if (0 == valid)
+ read_down = true;
+ } while ((!read_down) && (retry_cnts--));
+@@ -533,6 +538,8 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ bool bcn_valid = false;
+ u8 DLBcnCount = 0;
+ u32 poll = 0;
++ u8 reg;
++ int res;
+
+ if (mstatus == 1) {
+ /* We should set AID, correct TSF, HW seq enable before set JoinBssReport to Fw in 88/92C. */
+@@ -547,8 +554,17 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ /* Disable Hw protection for a time which revserd for Hw sending beacon. */
+ /* Fix download reserved page packet fail that access collision with the protection time. */
+ /* 2010.05.11. Added by tynli. */
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) & (~BIT(3)));
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg & (~BIT(3)));
++
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg | BIT(4));
+
+ if (haldata->RegFwHwTxQCtrl & BIT(6))
+ bSendBeacon = true;
+@@ -581,8 +597,17 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ /* */
+
+ /* Enable Bcn */
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) | BIT(3));
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg | BIT(3));
++
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg & (~BIT(4)));
+
+ /* To make sure that if there exists an adapter which would like to send beacon. */
+ /* If exists, the origianl value of 0x422[6] will be 1, we should check this to */
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_dm.c b/drivers/staging/r8188eu/hal/rtl8188e_dm.c
+index 6d28e3dc0d261..0399872c45460 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_dm.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_dm.c
+@@ -12,8 +12,12 @@
+ static void dm_InitGPIOSetting(struct adapter *Adapter)
+ {
+ u8 tmp1byte;
++ int res;
++
++ res = rtw_read8(Adapter, REG_GPIO_MUXCFG, &tmp1byte);
++ if (res)
++ return;
+
+- tmp1byte = rtw_read8(Adapter, REG_GPIO_MUXCFG);
+ tmp1byte &= (GPIOSEL_GPIO | ~GPIOSEL_ENBT);
+
+ rtw_write8(Adapter, REG_GPIO_MUXCFG, tmp1byte);
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+index e17375a74f179..5549e7be334ab 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+@@ -13,10 +13,14 @@
+ static void iol_mode_enable(struct adapter *padapter, u8 enable)
+ {
+ u8 reg_0xf0 = 0;
++ int res;
+
+ if (enable) {
+ /* Enable initial offload */
+- reg_0xf0 = rtw_read8(padapter, REG_SYS_CFG);
++ res = rtw_read8(padapter, REG_SYS_CFG, &reg_0xf0);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_SYS_CFG, reg_0xf0 | SW_OFFLOAD_EN);
+
+ if (!padapter->bFWReady)
+@@ -24,7 +28,10 @@ static void iol_mode_enable(struct adapter *padapter, u8 enable)
+
+ } else {
+ /* disable initial offload */
+- reg_0xf0 = rtw_read8(padapter, REG_SYS_CFG);
++ res = rtw_read8(padapter, REG_SYS_CFG, &reg_0xf0);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_SYS_CFG, reg_0xf0 & ~SW_OFFLOAD_EN);
+ }
+ }
+@@ -34,17 +41,31 @@ static s32 iol_execute(struct adapter *padapter, u8 control)
+ s32 status = _FAIL;
+ u8 reg_0x88 = 0;
+ unsigned long timeout;
++ int res;
+
+ control = control & 0x0f;
+- reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0);
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ return _FAIL;
++
+ rtw_write8(padapter, REG_HMEBOX_E0, reg_0x88 | control);
+
+ timeout = jiffies + msecs_to_jiffies(1000);
+- while ((reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0)) & control &&
+- time_before(jiffies, timeout))
+- ;
+
+- reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0);
++ do {
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ continue;
++
++ if (!(reg_0x88 & control))
++ break;
++
++ } while (time_before(jiffies, timeout));
++
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ return _FAIL;
++
+ status = (reg_0x88 & control) ? _FAIL : _SUCCESS;
+ if (reg_0x88 & control << 4)
+ status = _FAIL;
+@@ -179,7 +200,8 @@ exit:
+ kfree(eFuseWord);
+ }
+
+-static void efuse_read_phymap_from_txpktbuf(
++/* FIXME: add error handling in callers */
++static int efuse_read_phymap_from_txpktbuf(
+ struct adapter *adapter,
+ int bcnhead, /* beacon head, where FW store len(2-byte) and efuse physical map. */
+ u8 *content, /* buffer to store efuse physical map */
+@@ -190,13 +212,19 @@ static void efuse_read_phymap_from_txpktbuf(
+ u16 dbg_addr = 0;
+ __le32 lo32 = 0, hi32 = 0;
+ u16 len = 0, count = 0;
+- int i = 0;
++ int i = 0, res;
+ u16 limit = *size;
+-
++ u8 reg;
+ u8 *pos = content;
++ u32 reg32;
+
+- if (bcnhead < 0) /* if not valid */
+- bcnhead = rtw_read8(adapter, REG_TDECTRL + 1);
++ if (bcnhead < 0) { /* if not valid */
++ res = rtw_read8(adapter, REG_TDECTRL + 1, &reg);
++ if (res)
++ return res;
++
++ bcnhead = reg;
++ }
+
+ rtw_write8(adapter, REG_PKT_BUFF_ACCESS_CTRL, TXPKT_BUF_SELECT);
+
+@@ -207,19 +235,40 @@ static void efuse_read_phymap_from_txpktbuf(
+
+ rtw_write8(adapter, REG_TXPKTBUF_DBG, 0);
+ timeout = jiffies + msecs_to_jiffies(1000);
+- while (!rtw_read8(adapter, REG_TXPKTBUF_DBG) && time_before(jiffies, timeout))
++ do {
++ res = rtw_read8(adapter, REG_TXPKTBUF_DBG, &reg);
++ if (res)
++ continue;
++
++ if (reg)
++ break;
++
+ rtw_usleep_os(100);
++ } while (time_before(jiffies, timeout));
+
+ /* data from EEPROM needs to be in LE */
+- lo32 = cpu_to_le32(rtw_read32(adapter, REG_PKTBUF_DBG_DATA_L));
+- hi32 = cpu_to_le32(rtw_read32(adapter, REG_PKTBUF_DBG_DATA_H));
++ res = rtw_read32(adapter, REG_PKTBUF_DBG_DATA_L, &reg32);
++ if (res)
++ return res;
++
++ lo32 = cpu_to_le32(reg32);
++
++ res = rtw_read32(adapter, REG_PKTBUF_DBG_DATA_H, &reg32);
++ if (res)
++ return res;
++
++ hi32 = cpu_to_le32(reg32);
+
+ if (i == 0) {
++ u16 reg;
++
+ /* Although lenc is only used in a debug statement,
+ * do not remove it as the rtw_read16() call consumes
+ * 2 bytes from the EEPROM source.
+ */
+- rtw_read16(adapter, REG_PKTBUF_DBG_DATA_L);
++ res = rtw_read16(adapter, REG_PKTBUF_DBG_DATA_L, &reg);
++ if (res)
++ return res;
+
+ len = le32_to_cpu(lo32) & 0x0000ffff;
+
+@@ -246,6 +295,8 @@ static void efuse_read_phymap_from_txpktbuf(
+ }
+ rtw_write8(adapter, REG_PKT_BUFF_ACCESS_CTRL, DISABLE_TRXPKT_BUF_ACCESS);
+ *size = count;
++
++ return 0;
+ }
+
+ static s32 iol_read_efuse(struct adapter *padapter, u8 txpktbuf_bndy, u16 offset, u16 size_byte, u8 *logical_map)
+@@ -321,25 +372,35 @@ exit:
+ void rtl8188e_EfusePowerSwitch(struct adapter *pAdapter, u8 PwrState)
+ {
+ u16 tmpV16;
++ int res;
+
+ if (PwrState) {
+ rtw_write8(pAdapter, REG_EFUSE_ACCESS, EFUSE_ACCESS_ON);
+
+ /* 1.2V Power: From VDDON with Power Cut(0x0000h[15]), defualt valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_ISO_CTRL);
++ res = rtw_read16(pAdapter, REG_SYS_ISO_CTRL, &tmpV16);
++ if (res)
++ return;
++
+ if (!(tmpV16 & PWC_EV12V)) {
+ tmpV16 |= PWC_EV12V;
+ rtw_write16(pAdapter, REG_SYS_ISO_CTRL, tmpV16);
+ }
+ /* Reset: 0x0000h[28], default valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_FUNC_EN);
++ res = rtw_read16(pAdapter, REG_SYS_FUNC_EN, &tmpV16);
++ if (res)
++ return;
++
+ if (!(tmpV16 & FEN_ELDR)) {
+ tmpV16 |= FEN_ELDR;
+ rtw_write16(pAdapter, REG_SYS_FUNC_EN, tmpV16);
+ }
+
+ /* Clock: Gated(0x0008h[5]) 8M(0x0008h[1]) clock from ANA, default valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_CLKR);
++ res = rtw_read16(pAdapter, REG_SYS_CLKR, &tmpV16);
++ if (res)
++ return;
++
+ if ((!(tmpV16 & LOADER_CLK_EN)) || (!(tmpV16 & ANA8M))) {
+ tmpV16 |= (LOADER_CLK_EN | ANA8M);
+ rtw_write16(pAdapter, REG_SYS_CLKR, tmpV16);
+@@ -497,8 +558,12 @@ void rtl8188e_read_chip_version(struct adapter *padapter)
+ u32 value32;
+ struct HAL_VERSION ChipVersion;
+ struct hal_data_8188e *pHalData = &padapter->haldata;
++ int res;
++
++ res = rtw_read32(padapter, REG_SYS_CFG, &value32);
++ if (res)
++ return;
+
+- value32 = rtw_read32(padapter, REG_SYS_CFG);
+ ChipVersion.ChipType = ((value32 & RTL_ID) ? TEST_CHIP : NORMAL_CHIP);
+
+ ChipVersion.VendorType = ((value32 & VENDOR_ID) ? CHIP_VENDOR_UMC : CHIP_VENDOR_TSMC);
+@@ -525,10 +590,17 @@ void rtl8188e_SetHalODMVar(struct adapter *Adapter, void *pValue1, bool bSet)
+
+ void hal_notch_filter_8188e(struct adapter *adapter, bool enable)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, rOFDM0_RxDSP + 1, &reg);
++ if (res)
++ return;
++
+ if (enable)
+- rtw_write8(adapter, rOFDM0_RxDSP + 1, rtw_read8(adapter, rOFDM0_RxDSP + 1) | BIT(1));
++ rtw_write8(adapter, rOFDM0_RxDSP + 1, reg | BIT(1));
+ else
+- rtw_write8(adapter, rOFDM0_RxDSP + 1, rtw_read8(adapter, rOFDM0_RxDSP + 1) & ~BIT(1));
++ rtw_write8(adapter, rOFDM0_RxDSP + 1, reg & ~BIT(1));
+ }
+
+ /* */
+@@ -538,26 +610,24 @@ void hal_notch_filter_8188e(struct adapter *adapter, bool enable)
+ /* */
+ static s32 _LLTWrite(struct adapter *padapter, u32 address, u32 data)
+ {
+- s32 status = _SUCCESS;
+- s32 count = 0;
++ s32 count;
+ u32 value = _LLT_INIT_ADDR(address) | _LLT_INIT_DATA(data) | _LLT_OP(_LLT_WRITE_ACCESS);
+ u16 LLTReg = REG_LLT_INIT;
++ int res;
+
+ rtw_write32(padapter, LLTReg, value);
+
+ /* polling */
+- do {
+- value = rtw_read32(padapter, LLTReg);
+- if (_LLT_NO_ACTIVE == _LLT_OP_VALUE(value))
+- break;
++ for (count = 0; count <= POLLING_LLT_THRESHOLD; count++) {
++ res = rtw_read32(padapter, LLTReg, &value);
++ if (res)
++ continue;
+
+- if (count > POLLING_LLT_THRESHOLD) {
+- status = _FAIL;
++ if (_LLT_NO_ACTIVE == _LLT_OP_VALUE(value))
+ break;
+- }
+- } while (count++);
++ }
+
+- return status;
++ return count > POLLING_LLT_THRESHOLD ? _FAIL : _SUCCESS;
+ }
+
+ s32 InitLLTTable(struct adapter *padapter, u8 txpktbuf_bndy)
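
A detail that is easy to miss in the _LLTWrite rewrite: the old loop could never poll more than once. With count starting at 0, the postfix test in do { ... } while (count++); evaluates the old value 0 as false and exits after the first pass, so the threshold check was dead code and status stayed _SUCCESS even when the hardware had not finished. A tiny userspace demonstration of that termination behavior (illustrative only, not driver code):

#include <stdio.h>

int main(void)
{
        int count = 0, passes = 0;

        do {
                passes++;
        } while (count++);      /* tests the old value 0: false, exit */

        printf("passes = %d\n", passes);        /* prints 1 */
        return 0;
}

The new for loop polls up to POLLING_LLT_THRESHOLD + 1 times, treats a failed USB read as one wasted poll, and reports _FAIL only when the threshold is actually exhausted.
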
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c b/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
+index 4864dafd887b9..dea6d915a1f40 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
+@@ -56,8 +56,12 @@ rtl8188e_PHY_QueryBBReg(
+ )
+ {
+ u32 ReturnValue = 0, OriginalValue, BitShift;
++ int res;
++
++ res = rtw_read32(Adapter, RegAddr, &OriginalValue);
++ if (res)
++ return 0;
+
+- OriginalValue = rtw_read32(Adapter, RegAddr);
+ BitShift = phy_CalculateBitShift(BitMask);
+ ReturnValue = (OriginalValue & BitMask) >> BitShift;
+ return ReturnValue;
+@@ -84,9 +88,13 @@ rtl8188e_PHY_QueryBBReg(
+ void rtl8188e_PHY_SetBBReg(struct adapter *Adapter, u32 RegAddr, u32 BitMask, u32 Data)
+ {
+ u32 OriginalValue, BitShift;
++ int res;
+
+ if (BitMask != bMaskDWord) { /* if not "double word" write */
+- OriginalValue = rtw_read32(Adapter, RegAddr);
++ res = rtw_read32(Adapter, RegAddr, &OriginalValue);
++ if (res)
++ return;
++
+ BitShift = phy_CalculateBitShift(BitMask);
+ Data = ((OriginalValue & (~BitMask)) | (Data << BitShift));
+ }
+@@ -484,13 +492,17 @@ PHY_BBConfig8188E(
+ {
+ int rtStatus = _SUCCESS;
+ struct hal_data_8188e *pHalData = &Adapter->haldata;
+- u32 RegVal;
++ u16 RegVal;
+ u8 CrystalCap;
++ int res;
+
+ phy_InitBBRFRegisterDefinition(Adapter);
+
+ /* Enable BB and RF */
+- RegVal = rtw_read16(Adapter, REG_SYS_FUNC_EN);
++ res = rtw_read16(Adapter, REG_SYS_FUNC_EN, &RegVal);
++ if (res)
++ return _FAIL;
++
+ rtw_write16(Adapter, REG_SYS_FUNC_EN, (u16)(RegVal | BIT(13) | BIT(0) | BIT(1)));
+
+ /* 20090923 Joseph: Advised by Steven and Jenyu. Power sequence before init RF. */
+@@ -594,6 +606,7 @@ _PHY_SetBWMode92C(
+ struct hal_data_8188e *pHalData = &Adapter->haldata;
+ u8 regBwOpMode;
+ u8 regRRSR_RSC;
++ int res;
+
+ if (Adapter->bDriverStopped)
+ return;
+@@ -602,8 +615,13 @@ _PHY_SetBWMode92C(
+ /* 3<1>Set MAC register */
+ /* 3 */
+
+- regBwOpMode = rtw_read8(Adapter, REG_BWOPMODE);
+- regRRSR_RSC = rtw_read8(Adapter, REG_RRSR + 2);
++ res = rtw_read8(Adapter, REG_BWOPMODE, &regBwOpMode);
++ if (res)
++ return;
++
++ res = rtw_read8(Adapter, REG_RRSR + 2, &regRRSR_RSC);
++ if (res)
++ return;
+
+ switch (pHalData->CurrentChannelBW) {
+ case HT_CHANNEL_WIDTH_20:
+diff --git a/drivers/staging/r8188eu/hal/usb_halinit.c b/drivers/staging/r8188eu/hal/usb_halinit.c
+index a217272a07f85..0afde5038b3f7 100644
+--- a/drivers/staging/r8188eu/hal/usb_halinit.c
++++ b/drivers/staging/r8188eu/hal/usb_halinit.c
+@@ -52,6 +52,8 @@ void rtl8188eu_interface_configure(struct adapter *adapt)
+ u32 rtl8188eu_InitPowerOn(struct adapter *adapt)
+ {
+ u16 value16;
++ int res;
++
+ /* HW Power on sequence */
+ struct hal_data_8188e *haldata = &adapt->haldata;
+ if (haldata->bMacPwrCtrlOn)
+@@ -65,7 +67,10 @@ u32 rtl8188eu_InitPowerOn(struct adapter *adapt)
+ rtw_write16(adapt, REG_CR, 0x00); /* suggseted by zhouzhou, by page, 20111230 */
+
+ /* Enable MAC DMA/WMAC/SCHEDULE/SEC block */
+- value16 = rtw_read16(adapt, REG_CR);
++ res = rtw_read16(adapt, REG_CR, &value16);
++ if (res)
++ return _FAIL;
++
+ value16 |= (HCI_TXDMA_EN | HCI_RXDMA_EN | TXDMA_EN | RXDMA_EN
+ | PROTOCOL_EN | SCHEDULE_EN | ENSEC | CALTMR_EN);
+ /* for SDIO - Set CR bit10 to enable 32k calibration. Suggested by SD1 Gimmy. Added by tynli. 2011.08.31. */
+@@ -81,6 +86,7 @@ static void _InitInterrupt(struct adapter *Adapter)
+ {
+ u32 imr, imr_ex;
+ u8 usb_opt;
++ int res;
+
+ /* HISR write one to clear */
+ rtw_write32(Adapter, REG_HISR_88E, 0xFFFFFFFF);
+@@ -94,7 +100,9 @@ static void _InitInterrupt(struct adapter *Adapter)
+ /* REG_USB_SPECIAL_OPTION - BIT(4) */
+ /* 0; Use interrupt endpoint to upload interrupt pkt */
+ /* 1; Use bulk endpoint to upload interrupt pkt, */
+- usb_opt = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION);
++ res = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION, &usb_opt);
++ if (res)
++ return;
+
+ if (adapter_to_dvobj(Adapter)->pusbdev->speed == USB_SPEED_HIGH)
+ usb_opt = usb_opt | (INT_BULK_SEL);
+@@ -163,7 +171,14 @@ static void _InitNormalChipRegPriority(struct adapter *Adapter, u16 beQ,
+ u16 bkQ, u16 viQ, u16 voQ, u16 mgtQ,
+ u16 hiQ)
+ {
+- u16 value16 = (rtw_read16(Adapter, REG_TRXDMA_CTRL) & 0x7);
++ u16 value16;
++ int res;
++
++ res = rtw_read16(Adapter, REG_TRXDMA_CTRL, &value16);
++ if (res)
++ return;
++
++ value16 &= 0x7;
+
+ value16 |= _TXDMA_BEQ_MAP(beQ) | _TXDMA_BKQ_MAP(bkQ) |
+ _TXDMA_VIQ_MAP(viQ) | _TXDMA_VOQ_MAP(voQ) |
+@@ -282,8 +297,12 @@ static void _InitQueuePriority(struct adapter *Adapter)
+ static void _InitNetworkType(struct adapter *Adapter)
+ {
+ u32 value32;
++ int res;
++
++ res = rtw_read32(Adapter, REG_CR, &value32);
++ if (res)
++ return;
+
+- value32 = rtw_read32(Adapter, REG_CR);
+ /* TODO: use the other function to set network type */
+ value32 = (value32 & ~MASK_NETTYPE) | _NETTYPE(NT_LINK_AP);
+
+@@ -323,9 +342,13 @@ static void _InitAdaptiveCtrl(struct adapter *Adapter)
+ {
+ u16 value16;
+ u32 value32;
++ int res;
+
+ /* Response Rate Set */
+- value32 = rtw_read32(Adapter, REG_RRSR);
++ res = rtw_read32(Adapter, REG_RRSR, &value32);
++ if (res)
++ return;
++
+ value32 &= ~RATE_BITMAP_ALL;
+ value32 |= RATE_RRSR_CCK_ONLY_1M;
+ rtw_write32(Adapter, REG_RRSR, value32);
+@@ -363,8 +386,12 @@ static void _InitEDCA(struct adapter *Adapter)
+ static void _InitRetryFunction(struct adapter *Adapter)
+ {
+ u8 value8;
++ int res;
++
++ res = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL, &value8);
++ if (res)
++ return;
+
+- value8 = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL);
+ value8 |= EN_AMPDU_RTY_NEW;
+ rtw_write8(Adapter, REG_FWHW_TXQ_CTRL, value8);
+
+@@ -390,11 +417,15 @@ static void _InitRetryFunction(struct adapter *Adapter)
+ static void usb_AggSettingTxUpdate(struct adapter *Adapter)
+ {
+ u32 value32;
++ int res;
+
+ if (Adapter->registrypriv.wifi_spec)
+ return;
+
+- value32 = rtw_read32(Adapter, REG_TDECTRL);
++ res = rtw_read32(Adapter, REG_TDECTRL, &value32);
++ if (res)
++ return;
++
+ value32 = value32 & ~(BLK_DESC_NUM_MASK << BLK_DESC_NUM_SHIFT);
+ value32 |= ((USB_TXAGG_DESC_NUM & BLK_DESC_NUM_MASK) << BLK_DESC_NUM_SHIFT);
+
+@@ -423,9 +454,15 @@ usb_AggSettingRxUpdate(
+ {
+ u8 valueDMA;
+ u8 valueUSB;
++ int res;
++
++ res = rtw_read8(Adapter, REG_TRXDMA_CTRL, &valueDMA);
++ if (res)
++ return;
+
+- valueDMA = rtw_read8(Adapter, REG_TRXDMA_CTRL);
+- valueUSB = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION);
++ res = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION, &valueUSB);
++ if (res)
++ return;
+
+ valueDMA |= RXDMA_AGG_EN;
+ valueUSB &= ~USB_AGG_EN;
+@@ -446,9 +483,11 @@ static void InitUsbAggregationSetting(struct adapter *Adapter)
+ usb_AggSettingRxUpdate(Adapter);
+ }
+
+-static void _InitBeaconParameters(struct adapter *Adapter)
++/* FIXME: add error handling in callers */
++static int _InitBeaconParameters(struct adapter *Adapter)
+ {
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
+
+ rtw_write16(Adapter, REG_BCN_CTRL, 0x1010);
+
+@@ -461,9 +500,19 @@ static void _InitBeaconParameters(struct adapter *Adapter)
+ /* beacause test chip does not contension before sending beacon. by tynli. 2009.11.03 */
+ rtw_write16(Adapter, REG_BCNTCFG, 0x660F);
+
+- haldata->RegFwHwTxQCtrl = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL + 2);
+- haldata->RegReg542 = rtw_read8(Adapter, REG_TBTT_PROHIBIT + 2);
+- haldata->RegCR_1 = rtw_read8(Adapter, REG_CR + 1);
++ res = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL + 2, &haldata->RegFwHwTxQCtrl);
++ if (res)
++ return res;
++
++ res = rtw_read8(Adapter, REG_TBTT_PROHIBIT + 2, &haldata->RegReg542);
++ if (res)
++ return res;
++
++ res = rtw_read8(Adapter, REG_CR + 1, &haldata->RegCR_1);
++ if (res)
++ return res;
++
++ return 0;
+ }
+
+ static void _BeaconFunctionEnable(struct adapter *Adapter,
+@@ -484,11 +533,17 @@ static void _BBTurnOnBlock(struct adapter *Adapter)
+ static void _InitAntenna_Selection(struct adapter *Adapter)
+ {
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
++ u32 reg;
+
+ if (haldata->AntDivCfg == 0)
+ return;
+
+- rtw_write32(Adapter, REG_LEDCFG0, rtw_read32(Adapter, REG_LEDCFG0) | BIT(23));
++ res = rtw_read32(Adapter, REG_LEDCFG0, &reg);
++ if (res)
++ return;
++
++ rtw_write32(Adapter, REG_LEDCFG0, reg | BIT(23));
+ rtl8188e_PHY_SetBBReg(Adapter, rFPGA0_XAB_RFParameter, BIT(13), 0x01);
+
+ if (rtl8188e_PHY_QueryBBReg(Adapter, rFPGA0_XA_RFInterfaceOE, 0x300) == Antenna_A)
+@@ -514,9 +569,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ u16 value16;
+ u8 txpktbuf_bndy;
+ u32 status = _SUCCESS;
++ int res;
+ struct hal_data_8188e *haldata = &Adapter->haldata;
+ struct pwrctrl_priv *pwrctrlpriv = &Adapter->pwrctrlpriv;
+ struct registry_priv *pregistrypriv = &Adapter->registrypriv;
++ u32 reg;
+
+ if (Adapter->pwrctrlpriv.bkeepfwalive) {
+ if (haldata->odmpriv.RFCalibrateInfo.bIQKInitialized) {
+@@ -614,13 +671,19 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ /* Hw bug which Hw initials RxFF boundary size to a value which is larger than the real Rx buffer size in 88E. */
+ /* */
+ /* Enable MACTXEN/MACRXEN block */
+- value16 = rtw_read16(Adapter, REG_CR);
++ res = rtw_read16(Adapter, REG_CR, &value16);
++ if (res)
++ return _FAIL;
++
+ value16 |= (MACTXEN | MACRXEN);
+ rtw_write8(Adapter, REG_CR, value16);
+
+ /* Enable TX Report */
+ /* Enable Tx Report Timer */
+- value8 = rtw_read8(Adapter, REG_TX_RPT_CTRL);
++ res = rtw_read8(Adapter, REG_TX_RPT_CTRL, &value8);
++ if (res)
++ return _FAIL;
++
+ rtw_write8(Adapter, REG_TX_RPT_CTRL, (value8 | BIT(1) | BIT(0)));
+ /* Set MAX RPT MACID */
+ rtw_write8(Adapter, REG_TX_RPT_CTRL + 1, 2);/* FOR sta mode ,0: bc/mc ,1:AP */
+@@ -684,7 +747,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ rtw_write16(Adapter, REG_TX_RPT_TIME, 0x3DF0);
+
+ /* enable tx DMA to drop the redundate data of packet */
+- rtw_write16(Adapter, REG_TXDMA_OFFSET_CHK, (rtw_read16(Adapter, REG_TXDMA_OFFSET_CHK) | DROP_DATA_EN));
++ res = rtw_read16(Adapter, REG_TXDMA_OFFSET_CHK, &value16);
++ if (res)
++ return _FAIL;
++
++ rtw_write16(Adapter, REG_TXDMA_OFFSET_CHK, (value16 | DROP_DATA_EN));
+
+ /* 2010/08/26 MH Merge from 8192CE. */
+ if (pwrctrlpriv->rf_pwrstate == rf_on) {
+@@ -704,7 +771,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ rtw_write8(Adapter, REG_USB_HRPWM, 0);
+
+ /* ack for xmit mgmt frames. */
+- rtw_write32(Adapter, REG_FWHW_TXQ_CTRL, rtw_read32(Adapter, REG_FWHW_TXQ_CTRL) | BIT(12));
++ res = rtw_read32(Adapter, REG_FWHW_TXQ_CTRL, &reg);
++ if (res)
++ return _FAIL;
++
++ rtw_write32(Adapter, REG_FWHW_TXQ_CTRL, reg | BIT(12));
+
+ exit:
+ return status;
+@@ -714,9 +785,13 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+ {
+ u8 val8;
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
+
+ /* Stop Tx Report Timer. 0x4EC[Bit1]=b'0 */
+- val8 = rtw_read8(Adapter, REG_TX_RPT_CTRL);
++ res = rtw_read8(Adapter, REG_TX_RPT_CTRL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_TX_RPT_CTRL, val8 & (~BIT(1)));
+
+ /* stop rx */
+@@ -727,10 +802,16 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+
+ /* 2. 0x1F[7:0] = 0 turn off RF */
+
+- val8 = rtw_read8(Adapter, REG_MCUFWDL);
++ res = rtw_read8(Adapter, REG_MCUFWDL, &val8);
++ if (res)
++ return;
++
+ if ((val8 & RAM_DL_SEL) && Adapter->bFWReady) { /* 8051 RAM code */
+ /* Reset MCU 0x2[10]=0. */
+- val8 = rtw_read8(Adapter, REG_SYS_FUNC_EN + 1);
++ res = rtw_read8(Adapter, REG_SYS_FUNC_EN + 1, &val8);
++ if (res)
++ return;
++
+ val8 &= ~BIT(2); /* 0x2[10], FEN_CPUEN */
+ rtw_write8(Adapter, REG_SYS_FUNC_EN + 1, val8);
+ }
+@@ -740,26 +821,45 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+
+ /* YJ,add,111212 */
+ /* Disable 32k */
+- val8 = rtw_read8(Adapter, REG_32K_CTRL);
++ res = rtw_read8(Adapter, REG_32K_CTRL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_32K_CTRL, val8 & (~BIT(0)));
+
+ /* Card disable power action flow */
+ HalPwrSeqCmdParsing(Adapter, Rtl8188E_NIC_DISABLE_FLOW);
+
+ /* Reset MCU IO Wrapper */
+- val8 = rtw_read8(Adapter, REG_RSV_CTRL + 1);
++ res = rtw_read8(Adapter, REG_RSV_CTRL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_RSV_CTRL + 1, (val8 & (~BIT(3))));
+- val8 = rtw_read8(Adapter, REG_RSV_CTRL + 1);
++
++ res = rtw_read8(Adapter, REG_RSV_CTRL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_RSV_CTRL + 1, val8 | BIT(3));
+
+ /* YJ,test add, 111207. For Power Consumption. */
+- val8 = rtw_read8(Adapter, GPIO_IN);
++ res = rtw_read8(Adapter, GPIO_IN, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, GPIO_OUT, val8);
+ rtw_write8(Adapter, GPIO_IO_SEL, 0xFF);/* Reg0x46 */
+
+- val8 = rtw_read8(Adapter, REG_GPIO_IO_SEL);
++ res = rtw_read8(Adapter, REG_GPIO_IO_SEL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_GPIO_IO_SEL, (val8 << 4));
+- val8 = rtw_read8(Adapter, REG_GPIO_IO_SEL + 1);
++ res = rtw_read8(Adapter, REG_GPIO_IO_SEL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_GPIO_IO_SEL + 1, val8 | 0x0F);/* Reg0x43 */
+ rtw_write32(Adapter, REG_BB_PAD_CTRL, 0x00080808);/* set LNA ,TRSW,EX_PA Pin to output mode */
+ haldata->bMacPwrCtrlOn = false;
+@@ -830,9 +930,13 @@ void ReadAdapterInfo8188EU(struct adapter *Adapter)
+ struct eeprom_priv *eeprom = &Adapter->eeprompriv;
+ struct led_priv *ledpriv = &Adapter->ledpriv;
+ u8 eeValue;
++ int res;
+
+ /* check system boot selection */
+- eeValue = rtw_read8(Adapter, REG_9346CR);
++ res = rtw_read8(Adapter, REG_9346CR, &eeValue);
++ if (res)
++ return;
++
+ eeprom->EepromOrEfuse = (eeValue & BOOT_FROM_EEPROM);
+ eeprom->bautoload_fail_flag = !(eeValue & EEPROM_EN);
+
+@@ -887,12 +991,21 @@ static void hw_var_set_opmode(struct adapter *Adapter, u8 *val)
+ {
+ u8 val8;
+ u8 mode = *((u8 *)val);
++ int res;
+
+ /* disable Port0 TSF update */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, val8 | BIT(4));
+
+ /* set net_type */
+- val8 = rtw_read8(Adapter, MSR) & 0x0c;
++ res = rtw_read8(Adapter, MSR, &val8);
++ if (res)
++ return;
++
++ val8 &= 0x0c;
+ val8 |= mode;
+ rtw_write8(Adapter, MSR, val8);
+
+@@ -927,14 +1040,22 @@ static void hw_var_set_opmode(struct adapter *Adapter, u8 *val)
+ rtw_write8(Adapter, REG_DUAL_TSF_RST, BIT(0));
+
+ /* BIT(3) - If set 0, hw will clr bcnq when tx becon ok/fail or port 0 */
+- rtw_write8(Adapter, REG_MBID_NUM, rtw_read8(Adapter, REG_MBID_NUM) | BIT(3) | BIT(4));
++ res = rtw_read8(Adapter, REG_MBID_NUM, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_MBID_NUM, val8 | BIT(3) | BIT(4));
+
+ /* enable BCN0 Function for if1 */
+ /* don't enable update TSF0 for if1 (due to TSF update when beacon/probe rsp are received) */
+ rtw_write8(Adapter, REG_BCN_CTRL, (DIS_TSF_UDT0_NORMAL_CHIP | EN_BCN_FUNCTION | BIT(1)));
+
+ /* dis BCN1 ATIM WND if if2 is station */
+- rtw_write8(Adapter, REG_BCN_CTRL_1, rtw_read8(Adapter, REG_BCN_CTRL_1) | BIT(0));
++ res = rtw_read8(Adapter, REG_BCN_CTRL_1, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL_1, val8 | BIT(0));
+ }
+ }
+
+@@ -943,6 +1064,8 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ struct hal_data_8188e *haldata = &Adapter->haldata;
+ struct dm_priv *pdmpriv = &haldata->dmpriv;
+ struct odm_dm_struct *podmpriv = &haldata->odmpriv;
++ u8 reg;
++ int res;
+
+ switch (variable) {
+ case HW_VAR_SET_OPMODE:
+@@ -970,7 +1093,11 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ /* Set RRSR rate table. */
+ rtw_write8(Adapter, REG_RRSR, BrateCfg & 0xff);
+ rtw_write8(Adapter, REG_RRSR + 1, (BrateCfg >> 8) & 0xff);
+- rtw_write8(Adapter, REG_RRSR + 2, rtw_read8(Adapter, REG_RRSR + 2) & 0xf0);
++ res = rtw_read8(Adapter, REG_RRSR + 2, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_RRSR + 2, reg & 0xf0);
+
+ /* Set RTS initial rate */
+ while (BrateCfg > 0x1) {
+@@ -994,13 +1121,21 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ StopTxBeacon(Adapter);
+
+ /* disable related TSF function */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(3)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(3)));
+
+ rtw_write32(Adapter, REG_TSFTR, tsf);
+ rtw_write32(Adapter, REG_TSFTR + 4, tsf >> 32);
+
+ /* enable related TSF function */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(3));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg | BIT(3));
+
+ if (((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE))
+ ResumeTxBeacon(Adapter);
+@@ -1009,17 +1144,27 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ case HW_VAR_MLME_SITESURVEY:
+ if (*((u8 *)val)) { /* under sitesurvey */
+ /* config RCR to receive different BSSID & not to receive data frame */
+- u32 v = rtw_read32(Adapter, REG_RCR);
++ u32 v;
++
++ res = rtw_read32(Adapter, REG_RCR, &v);
++ if (res)
++ return;
++
+ v &= ~(RCR_CBSSID_BCN);
+ rtw_write32(Adapter, REG_RCR, v);
+ /* reject all data frame */
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0x00);
+
+ /* disable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg | BIT(4));
+ } else { /* sitesurvey done */
+ struct mlme_ext_priv *pmlmeext = &Adapter->mlmeextpriv;
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
++ u32 reg32;
+
+ if ((is_client_associated_to_ap(Adapter)) ||
+ ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE)) {
+@@ -1027,13 +1172,26 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0xFFFF);
+
+ /* enable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+ } else if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) {
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0xFFFF);
+ /* enable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+ }
+- rtw_write32(Adapter, REG_RCR, rtw_read32(Adapter, REG_RCR) | RCR_CBSSID_BCN);
++
++ res = rtw_read32(Adapter, REG_RCR, &reg32);
++ if (res)
++ return;
++
++ rtw_write32(Adapter, REG_RCR, reg32 | RCR_CBSSID_BCN);
+ }
+ break;
+ case HW_VAR_SLOT_TIME:
+@@ -1190,6 +1348,8 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+ struct mlme_ext_priv *pmlmeext = &adapt->mlmeextpriv;
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
+ u32 bcn_ctrl_reg = REG_BCN_CTRL;
++ int res;
++ u8 reg;
+ /* reset TSF, enable update TSF, correcting TSF On Beacon */
+
+ /* BCN interval */
+@@ -1200,7 +1360,10 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+
+ rtw_write8(adapt, REG_SLOT, 0x09);
+
+- value32 = rtw_read32(adapt, REG_TCR);
++ res = rtw_read32(adapt, REG_TCR, &value32);
++ if (res)
++ return;
++
+ value32 &= ~TSFRST;
+ rtw_write32(adapt, REG_TCR, value32);
+
+@@ -1215,7 +1378,11 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+
+ ResumeTxBeacon(adapt);
+
+- rtw_write8(adapt, bcn_ctrl_reg, rtw_read8(adapt, bcn_ctrl_reg) | BIT(1));
++ res = rtw_read8(adapt, bcn_ctrl_reg, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, bcn_ctrl_reg, reg | BIT(1));
+ }
+
+ void rtl8188eu_init_default_value(struct adapter *adapt)
+diff --git a/drivers/staging/r8188eu/hal/usb_ops_linux.c b/drivers/staging/r8188eu/hal/usb_ops_linux.c
+index d5e674542a785..c1a4d023f6279 100644
+--- a/drivers/staging/r8188eu/hal/usb_ops_linux.c
++++ b/drivers/staging/r8188eu/hal/usb_ops_linux.c
+@@ -94,40 +94,47 @@ static int usb_write(struct intf_hdl *intf, u16 value, void *data, u8 size)
+ return status;
+ }
+
+-u8 rtw_read8(struct adapter *adapter, u32 addr)
++int __must_check rtw_read8(struct adapter *adapter, u32 addr, u8 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- u8 data;
+
+- usb_read(intf, value, &data, 1);
+-
+- return data;
++ return usb_read(intf, value, data, 1);
+ }
+
+-u16 rtw_read16(struct adapter *adapter, u32 addr)
++int __must_check rtw_read16(struct adapter *adapter, u32 addr, u16 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- __le16 data;
++ __le16 le_data;
++ int res;
++
++ res = usb_read(intf, value, &le_data, 2);
++ if (res)
++ return res;
+
+- usb_read(intf, value, &data, 2);
++ *data = le16_to_cpu(le_data);
+
+- return le16_to_cpu(data);
++ return 0;
+ }
+
+-u32 rtw_read32(struct adapter *adapter, u32 addr)
++int __must_check rtw_read32(struct adapter *adapter, u32 addr, u32 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- __le32 data;
++ __le32 le_data;
++ int res;
++
++ res = usb_read(intf, value, &le_data, 4);
++ if (res)
++ return res;
+
+- usb_read(intf, value, &data, 4);
++ *data = le32_to_cpu(le_data);
+
+- return le32_to_cpu(data);
++ return 0;
+ }
+
+ int rtw_write8(struct adapter *adapter, u32 addr, u8 val)
+diff --git a/drivers/staging/r8188eu/include/rtw_io.h b/drivers/staging/r8188eu/include/rtw_io.h
+index 6910e2b430e24..1c6097367a67c 100644
+--- a/drivers/staging/r8188eu/include/rtw_io.h
++++ b/drivers/staging/r8188eu/include/rtw_io.h
+@@ -220,9 +220,9 @@ void unregister_intf_hdl(struct intf_hdl *pintfhdl);
+ void _rtw_attrib_read(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+ void _rtw_attrib_write(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+
+-u8 rtw_read8(struct adapter *adapter, u32 addr);
+-u16 rtw_read16(struct adapter *adapter, u32 addr);
+-u32 rtw_read32(struct adapter *adapter, u32 addr);
++int __must_check rtw_read8(struct adapter *adapter, u32 addr, u8 *data);
++int __must_check rtw_read16(struct adapter *adapter, u32 addr, u16 *data);
++int __must_check rtw_read32(struct adapter *adapter, u32 addr, u32 *data);
+ void _rtw_read_mem(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+ u32 rtw_read_port(struct adapter *adapter, u8 *pmem);
+ void rtw_read_port_cancel(struct adapter *adapter);
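
These three prototypes are the heart of the series: the old rtw_read{8,16,32} returned register data directly and had no way to report a failed USB control transfer, so callers consumed garbage on error. The new signatures return a status and deliver the value through an out-parameter, and __must_check turns an ignored status into a compiler warning. A hedged sketch of the read-modify-write shape most converted call sites follow (rtw_rmw8() is a name invented here for illustration; the patch deliberately open-codes this at each site):

/* Illustrative wrapper only; the driver open-codes this pattern. */
static int rtw_rmw8(struct adapter *adapter, u32 addr, u8 clear, u8 set)
{
        u8 val;
        int res;

        res = rtw_read8(adapter, addr, &val);   /* may fail on USB error */
        if (res)
                return res;     /* propagate instead of writing garbage */

        return rtw_write8(adapter, addr, (val & ~clear) | set);
}
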
+diff --git a/drivers/staging/r8188eu/os_dep/ioctl_linux.c b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+index 8dd280e2739a2..f486870965ac7 100644
+--- a/drivers/staging/r8188eu/os_dep/ioctl_linux.c
++++ b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+@@ -3126,18 +3126,29 @@ exit:
+ static void mac_reg_dump(struct adapter *padapter)
+ {
+ int i, j = 1;
++ u32 reg;
++ int res;
++
+ pr_info("\n ======= MAC REG =======\n");
+ for (i = 0x0; i < 0x300; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++
++ res = rtw_read32(padapter, i, &reg);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+ for (i = 0x400; i < 0x800; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++
++ res = rtw_read32(padapter, i, &reg);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+@@ -3145,13 +3156,18 @@ static void mac_reg_dump(struct adapter *padapter)
+
+ static void bb_reg_dump(struct adapter *padapter)
+ {
+- int i, j = 1;
++ int i, j = 1, res;
++ u32 reg;
++
+ pr_info("\n ======= BB REG =======\n");
+ for (i = 0x800; i < 0x1000; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++ res = rtw_read32(padapter, i, &reg);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+@@ -3178,6 +3194,7 @@ static void rtw_set_dynamic_functions(struct adapter *adapter, u8 dm_func)
+ {
+ struct hal_data_8188e *haldata = &adapter->haldata;
+ struct odm_dm_struct *odmpriv = &haldata->odmpriv;
++ int res;
+
+ switch (dm_func) {
+ case 0:
+@@ -3193,7 +3210,9 @@ static void rtw_set_dynamic_functions(struct adapter *adapter, u8 dm_func)
+ if (!(odmpriv->SupportAbility & DYNAMIC_BB_DIG)) {
+ struct rtw_dig *digtable = &odmpriv->DM_DigTable;
+
+- digtable->CurIGValue = rtw_read8(adapter, 0xc50);
++ res = rtw_read8(adapter, 0xc50, &digtable->CurIGValue);
++ (void)res;
++ /* FIXME: return an error to caller */
+ }
+ odmpriv->SupportAbility = DYNAMIC_ALL_FUNC_ENABLE;
+ break;
+@@ -3329,8 +3348,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ u16 reg = arg;
+ u16 start_value = 0;
+ u32 write_num = extra_arg;
+- int i;
++ int i, res;
+ struct xmit_frame *xmit_frame;
++ u8 val8;
+
+ xmit_frame = rtw_IOL_accquire_xmit_frame(padapter);
+ if (!xmit_frame) {
+@@ -3343,7 +3363,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read8(padapter, reg);
++ /* FIXME: is this read necessary? */
++ res = rtw_read8(padapter, reg, &val8);
++ (void)res;
+ }
+ break;
+
+@@ -3352,8 +3374,8 @@ static int rtw_dbg_port(struct net_device *dev,
+ u16 reg = arg;
+ u16 start_value = 200;
+ u32 write_num = extra_arg;
+-
+- int i;
++ u16 val16;
++ int i, res;
+ struct xmit_frame *xmit_frame;
+
+ xmit_frame = rtw_IOL_accquire_xmit_frame(padapter);
+@@ -3367,7 +3389,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read16(padapter, reg);
++ /* FIXME: is this read necessary? */
++ res = rtw_read16(padapter, reg, &val16);
++ (void)res;
+ }
+ break;
+ case 0x08: /* continuous write dword test */
+@@ -3390,7 +3414,8 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read32(padapter, reg);
++ /* FIXME: is this read necessary? */
++ ret = rtw_read32(padapter, reg, &write_num);
+ }
+ break;
+ }
+diff --git a/drivers/staging/r8188eu/os_dep/os_intfs.c b/drivers/staging/r8188eu/os_dep/os_intfs.c
+index 891c85b088ca1..cac9553666e6d 100644
+--- a/drivers/staging/r8188eu/os_dep/os_intfs.c
++++ b/drivers/staging/r8188eu/os_dep/os_intfs.c
+@@ -740,19 +740,32 @@ static void rtw_fifo_cleanup(struct adapter *adapter)
+ {
+ struct pwrctrl_priv *pwrpriv = &adapter->pwrctrlpriv;
+ u8 trycnt = 100;
++ int res;
++ u32 reg;
+
+ /* pause tx */
+ rtw_write8(adapter, REG_TXPAUSE, 0xff);
+
+ /* keep sn */
+- adapter->xmitpriv.nqos_ssn = rtw_read16(adapter, REG_NQOS_SEQ);
++ /* FIXME: return an error to caller */
++ res = rtw_read16(adapter, REG_NQOS_SEQ, &adapter->xmitpriv.nqos_ssn);
++ if (res)
++ return;
+
+ if (!pwrpriv->bkeepfwalive) {
+ /* RX DMA stop */
++ res = rtw_read32(adapter, REG_RXPKT_NUM, &reg);
++ if (res)
++ return;
++
+ rtw_write32(adapter, REG_RXPKT_NUM,
+- (rtw_read32(adapter, REG_RXPKT_NUM) | RW_RELEASE_EN));
++ (reg | RW_RELEASE_EN));
+ do {
+- if (!(rtw_read32(adapter, REG_RXPKT_NUM) & RXDMA_IDLE))
++ res = rtw_read32(adapter, REG_RXPKT_NUM, &reg);
++ if (res)
++ continue;
++
++ if (!(reg & RXDMA_IDLE))
+ break;
+ } while (trycnt--);
+
+diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
+index e4a07a26f6939..93ba1d00335bf 100644
+--- a/drivers/thunderbolt/tmu.c
++++ b/drivers/thunderbolt/tmu.c
+@@ -359,13 +359,14 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
+ * In case of uni-directional time sync, TMU handshake is
+ * initiated by upstream router. In case of bi-directional
+ * time sync, TMU handshake is initiated by downstream router.
+- * Therefore, we change the rate to off in the respective
+- * router.
++ * We change downstream router's rate to off for both uni/bidir
++ * cases although it is needed only for the bi-directional mode.
++ * We avoid changing upstream router's mode since it might
++ * have another downstream router plugged, that is set to
++ * uni-directional mode and we don't want to change it's TMU
++ * mode.
+ */
+- if (unidirectional)
+- tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
+- else
+- tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
++ tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
+
+ tb_port_tmu_time_sync_disable(up);
+ ret = tb_port_tmu_time_sync_disable(down);
+diff --git a/drivers/tty/serial/ucc_uart.c b/drivers/tty/serial/ucc_uart.c
+index 6000853973c10..3cc9ef08455c2 100644
+--- a/drivers/tty/serial/ucc_uart.c
++++ b/drivers/tty/serial/ucc_uart.c
+@@ -1137,6 +1137,8 @@ static unsigned int soc_info(unsigned int *rev_h, unsigned int *rev_l)
+ /* No compatible property, so try the name. */
+ soc_string = np->name;
+
++ of_node_put(np);
++
+ /* Extract the SOC number from the "PowerPC," string */
+ if ((sscanf(soc_string, "PowerPC,%u", &soc) != 1) || !soc)
+ return 0;
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 8d91be0fd1a4e..a51ca56a0ebe7 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2227,6 +2227,8 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ int err;
+
+ hba->capabilities = ufshcd_readl(hba, REG_CONTROLLER_CAPABILITIES);
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS)
++ hba->capabilities &= ~MASK_64_ADDRESSING_SUPPORT;
+
+ /* nutrs and nutmrs are 0 based values */
+ hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS) + 1;
+@@ -4290,8 +4292,13 @@ static int ufshcd_get_max_pwr_mode(struct ufs_hba *hba)
+ if (hba->max_pwr_info.is_valid)
+ return 0;
+
+- pwr_info->pwr_tx = FAST_MODE;
+- pwr_info->pwr_rx = FAST_MODE;
++ if (hba->quirks & UFSHCD_QUIRK_HIBERN_FASTAUTO) {
++ pwr_info->pwr_tx = FASTAUTO_MODE;
++ pwr_info->pwr_rx = FASTAUTO_MODE;
++ } else {
++ pwr_info->pwr_tx = FAST_MODE;
++ pwr_info->pwr_rx = FAST_MODE;
++ }
+ pwr_info->hs_rate = PA_HS_MODE_B;
+
+ /* Get the connected lane count */
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index a81d8cbd542f3..25995667c8323 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -910,9 +910,13 @@ static int exynos_ufs_phy_init(struct exynos_ufs *ufs)
+ if (ret) {
+ dev_err(hba->dev, "%s: phy init failed, ret = %d\n",
+ __func__, ret);
+- goto out_exit_phy;
++ return ret;
+ }
+
++ ret = phy_power_on(generic_phy);
++ if (ret)
++ goto out_exit_phy;
++
+ return 0;
+
+ out_exit_phy:
+@@ -1174,10 +1178,6 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ goto out;
+ }
+
+- ret = phy_power_on(ufs->phy);
+- if (ret)
+- goto phy_off;
+-
+ exynos_ufs_priv_init(hba, ufs);
+
+ if (ufs->drv_data->drv_init) {
+@@ -1195,8 +1195,6 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ exynos_ufs_config_smu(ufs);
+ return 0;
+
+-phy_off:
+- phy_power_off(ufs->phy);
+ out:
+ hba->priv = NULL;
+ return ret;
+@@ -1514,9 +1512,14 @@ static int exynos_ufs_probe(struct platform_device *pdev)
+ static int exynos_ufs_remove(struct platform_device *pdev)
+ {
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
++ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+ pm_runtime_get_sync(&(pdev)->dev);
+ ufshcd_remove(hba);
++
++ phy_power_off(ufs->phy);
++ phy_exit(ufs->phy);
++
+ return 0;
+ }
+
+diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
+index beabc3ccd30b3..4582d69309d92 100644
+--- a/drivers/ufs/host/ufs-mediatek.c
++++ b/drivers/ufs/host/ufs-mediatek.c
+@@ -1026,7 +1026,6 @@ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op,
+ * ufshcd_suspend() re-enabling regulators while vreg is still
+ * in low-power mode.
+ */
+- ufs_mtk_vreg_set_lpm(hba, true);
+ err = ufs_mtk_mphy_power_on(hba, false);
+ if (err)
+ goto fail;
+@@ -1050,12 +1049,13 @@ static int ufs_mtk_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ {
+ int err;
+
++ if (hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ ufs_mtk_vreg_set_lpm(hba, false);
++
+ err = ufs_mtk_mphy_power_on(hba, true);
+ if (err)
+ goto fail;
+
+- ufs_mtk_vreg_set_lpm(hba, false);
+-
+ if (ufshcd_is_link_hibern8(hba)) {
+ err = ufs_mtk_link_set_hpm(hba);
+ if (err)
+@@ -1220,9 +1220,59 @@ static int ufs_mtk_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++#ifdef CONFIG_PM_SLEEP
++int ufs_mtk_system_suspend(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++ int ret;
++
++ ret = ufshcd_system_suspend(dev);
++ if (ret)
++ return ret;
++
++ ufs_mtk_vreg_set_lpm(hba, true);
++
++ return 0;
++}
++
++int ufs_mtk_system_resume(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++
++ ufs_mtk_vreg_set_lpm(hba, false);
++
++ return ufshcd_system_resume(dev);
++}
++#endif
++
++int ufs_mtk_runtime_suspend(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++ int ret = 0;
++
++ ret = ufshcd_runtime_suspend(dev);
++ if (ret)
++ return ret;
++
++ ufs_mtk_vreg_set_lpm(hba, true);
++
++ return 0;
++}
++
++int ufs_mtk_runtime_resume(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++
++ ufs_mtk_vreg_set_lpm(hba, false);
++
++ return ufshcd_runtime_resume(dev);
++}
++
+ static const struct dev_pm_ops ufs_mtk_pm_ops = {
+- SET_SYSTEM_SLEEP_PM_OPS(ufshcd_system_suspend, ufshcd_system_resume)
+- SET_RUNTIME_PM_OPS(ufshcd_runtime_suspend, ufshcd_runtime_resume, NULL)
++ SET_SYSTEM_SLEEP_PM_OPS(ufs_mtk_system_suspend,
++ ufs_mtk_system_resume)
++ SET_RUNTIME_PM_OPS(ufs_mtk_runtime_suspend,
++ ufs_mtk_runtime_resume, NULL)
+ .prepare = ufshcd_suspend_prepare,
+ .complete = ufshcd_resume_complete,
+ };
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 87cfa91a758df..d21b69997e750 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -625,9 +625,9 @@ static void cdns3_wa2_remove_old_request(struct cdns3_endpoint *priv_ep)
+ trace_cdns3_wa2(priv_ep, "removes eldest request");
+
+ kfree(priv_req->request.buf);
++ list_del_init(&priv_req->list);
+ cdns3_gadget_ep_free_request(&priv_ep->endpoint,
+ &priv_req->request);
+- list_del_init(&priv_req->list);
+ --priv_ep->wa2_counter;
+
+ if (!chain)
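
A note on the cdns3 hunk above: it is the standard unlink-before-free ordering. The request must come off the list while its memory is still valid; running list_del_init() after the free, as the old code did, writes prev/next pointers through freed memory. A minimal sketch of the same discipline in plain C (toy list, not the kernel list API):

#include <stdlib.h>

struct node {
	struct node *prev, *next;
	void *buf;
};

static void remove_node(struct node *n)
{
	/* 1. Unlink while n is still a valid allocation. */
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;	/* list_del_init() semantics */

	/* 2. Only now release the memory. */
	free(n->buf);
	free(n);
}

int main(void)
{
	struct node *n = calloc(1, sizeof(*n));

	n->prev = n->next = n;	/* single-node circular list */
	n->buf = malloc(16);
	remove_node(n);
	return 0;
}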
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index fe2a58c758610..8b15742d9e8aa 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -3594,7 +3594,8 @@ void dwc2_hsotg_core_disconnect(struct dwc2_hsotg *hsotg)
+ void dwc2_hsotg_core_connect(struct dwc2_hsotg *hsotg)
+ {
+ /* remove the soft-disconnect and let's go */
+- dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
++ if (!hsotg->role_sw || (dwc2_readl(hsotg, GOTGCTL) & GOTGCTL_BSESVLD))
++ dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
+ }
+
+ /**
+diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
+index 951934aa44541..ec500ee499eed 100644
+--- a/drivers/usb/gadget/function/uvc_queue.c
++++ b/drivers/usb/gadget/function/uvc_queue.c
+@@ -44,7 +44,8 @@ static int uvc_queue_setup(struct vb2_queue *vq,
+ {
+ struct uvc_video_queue *queue = vb2_get_drv_priv(vq);
+ struct uvc_video *video = container_of(queue, struct uvc_video, queue);
+- struct usb_composite_dev *cdev = video->uvc->func.config->cdev;
++ unsigned int req_size;
++ unsigned int nreq;
+
+ if (*nbuffers > UVC_MAX_VIDEO_BUFFERS)
+ *nbuffers = UVC_MAX_VIDEO_BUFFERS;
+@@ -53,10 +54,16 @@ static int uvc_queue_setup(struct vb2_queue *vq,
+
+ sizes[0] = video->imagesize;
+
+- if (cdev->gadget->speed < USB_SPEED_SUPER)
+- video->uvc_num_requests = 4;
+- else
+- video->uvc_num_requests = 64;
++ req_size = video->ep->maxpacket
++ * max_t(unsigned int, video->ep->maxburst, 1)
++ * (video->ep->mult);
++
++	/* Use half of the frame size per request on average, so
++	 * smaller frame sizes map to fewer requests.
++	 */
++ nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size);
++ nreq = clamp(nreq, 4U, 64U);
++ video->uvc_num_requests = nreq;
+
+ return 0;
+ }
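
The uvc_queue_setup change above replaces the fixed 4/64 request counts with a value derived from the endpoint's per-request capacity. A compilable sketch of the arithmetic, assuming an isochronous endpoint with mult >= 1 (names are illustrative):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int uvc_nreq(unsigned int imagesize, unsigned int maxpacket,
			     unsigned int maxburst, unsigned int mult)
{
	/* Bytes one USB request can carry on this endpoint. */
	unsigned int req_size = maxpacket *
				(maxburst > 1 ? maxburst : 1) * mult;
	/* Spread half the frame across requests, clamped to [4, 64]. */
	unsigned int nreq = DIV_ROUND_UP(DIV_ROUND_UP(imagesize, 2), req_size);

	if (nreq < 4)
		nreq = 4;
	else if (nreq > 64)
		nreq = 64;
	return nreq;
}

int main(void)
{
	/* 640x480 YUYV frame over a 1024-byte high-speed endpoint. */
	printf("%u requests\n", uvc_nreq(640 * 480 * 2, 1024, 1, 1));
	return 0;
}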
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index ce421d9cc241b..c00ce0e91f5d5 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -261,7 +261,7 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ break;
+
+ default:
+- uvcg_info(&video->uvc->func,
++ uvcg_warn(&video->uvc->func,
+ "VS request completed with status %d.\n",
+ req->status);
+ uvcg_queue_cancel(queue, 0);
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 79990597c39f1..01c3ead7d1b42 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -362,6 +362,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
+ spin_unlock_irq (&epdata->dev->lock);
+
+ DBG (epdata->dev, "endpoint gone\n");
++ wait_for_completion(&done);
+ epdata->status = -ENODEV;
+ }
+ }
+diff --git a/drivers/usb/host/ohci-ppc-of.c b/drivers/usb/host/ohci-ppc-of.c
+index 1960b8dfdba51..591f675cc9306 100644
+--- a/drivers/usb/host/ohci-ppc-of.c
++++ b/drivers/usb/host/ohci-ppc-of.c
+@@ -166,6 +166,7 @@ static int ohci_hcd_ppc_of_probe(struct platform_device *op)
+ release_mem_region(res.start, 0x4);
+ } else
+ pr_debug("%s: cannot get ehci offset from fdt\n", __FILE__);
++ of_node_put(np);
+ }
+
+ irq_dispose_mapping(irq);
+diff --git a/drivers/usb/renesas_usbhs/rza.c b/drivers/usb/renesas_usbhs/rza.c
+index 24de64edb674b..2d77edefb4b30 100644
+--- a/drivers/usb/renesas_usbhs/rza.c
++++ b/drivers/usb/renesas_usbhs/rza.c
+@@ -23,6 +23,10 @@ static int usbhs_rza1_hardware_init(struct platform_device *pdev)
+ extal_clk = of_find_node_by_name(NULL, "extal");
+ of_property_read_u32(usb_x1_clk, "clock-frequency", &freq_usb);
+ of_property_read_u32(extal_clk, "clock-frequency", &freq_extal);
++
++ of_node_put(usb_x1_clk);
++ of_node_put(extal_clk);
++
+ if (freq_usb == 0) {
+ if (freq_extal == 12000000) {
+ /* Select 12MHz XTAL */
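
Both the ohci-ppc-of and rza hunks above plug the same leak: of_find_node_by_name() hands back its result with an elevated refcount, so every successful lookup must be balanced by of_node_put() once the properties have been read, even on early-exit paths. The get/put discipline with a toy refcount (illustrative names, not the kernel API):

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcount;
	unsigned int freq;
};

static struct obj *obj_find(struct obj *o)	/* stands in for of_find_node_by_name() */
{
	if (o)
		o->refcount++;
	return o;
}

static void obj_put(struct obj *o)		/* stands in for of_node_put() */
{
	if (o && --o->refcount == 0)
		free(o);
}

static unsigned int read_freq(struct obj *clk_table)
{
	struct obj *clk = obj_find(clk_table);
	unsigned int freq = clk ? clk->freq : 0;

	obj_put(clk);	/* balance the lookup before returning */
	return freq;
}

int main(void)
{
	struct obj *clk = calloc(1, sizeof(*clk));

	clk->refcount = 1;
	clk->freq = 12000000;
	printf("%u Hz\n", read_freq(clk));
	obj_put(clk);	/* drop the creator's reference */
	return 0;
}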
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 0f28658996472..3e81532c01cb8 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -33,7 +33,7 @@ MODULE_PARM_DESC(batch_mapping, "Batched mapping 1 -Enable; 0 - Disable");
+ static int max_iotlb_entries = 2048;
+ module_param(max_iotlb_entries, int, 0444);
+ MODULE_PARM_DESC(max_iotlb_entries,
+- "Maximum number of iotlb entries. 0 means unlimited. (default: 2048)");
++ "Maximum number of iotlb entries for each address space. 0 means unlimited. (default: 2048)");
+
+ #define VDPASIM_QUEUE_ALIGN PAGE_SIZE
+ #define VDPASIM_QUEUE_MAX 256
+@@ -291,7 +291,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
+ goto err_iommu;
+
+ for (i = 0; i < vdpasim->dev_attr.nas; i++)
+- vhost_iotlb_init(&vdpasim->iommu[i], 0, 0);
++ vhost_iotlb_init(&vdpasim->iommu[i], max_iotlb_entries, 0);
+
+ vdpasim->buffer = kvmalloc(dev_attr->buffer_size, GFP_KERNEL);
+ if (!vdpasim->buffer)
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+index 42d401d439117..03a28def8eeeb 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+@@ -34,7 +34,11 @@
+ #define VDPASIM_BLK_CAPACITY 0x40000
+ #define VDPASIM_BLK_SIZE_MAX 0x1000
+ #define VDPASIM_BLK_SEG_MAX 32
++
++/* 1 virtqueue, 1 address space, 1 virtqueue group */
+ #define VDPASIM_BLK_VQ_NUM 1
++#define VDPASIM_BLK_AS_NUM 1
++#define VDPASIM_BLK_GROUP_NUM 1
+
+ static char vdpasim_blk_id[VIRTIO_BLK_ID_BYTES] = "vdpa_blk_sim";
+
+@@ -260,6 +264,8 @@ static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+ dev_attr.id = VIRTIO_ID_BLOCK;
+ dev_attr.supported_features = VDPASIM_BLK_FEATURES;
+ dev_attr.nvqs = VDPASIM_BLK_VQ_NUM;
++ dev_attr.ngroups = VDPASIM_BLK_GROUP_NUM;
++ dev_attr.nas = VDPASIM_BLK_AS_NUM;
+ dev_attr.config_size = sizeof(struct virtio_blk_config);
+ dev_attr.get_config = vdpasim_blk_get_config;
+ dev_attr.work_fn = vdpasim_blk_work;
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 18fc0916587ec..277cd1152dd80 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -1814,6 +1814,7 @@ struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
+ buf = krealloc(caps->buf, caps->size + size, GFP_KERNEL);
+ if (!buf) {
+ kfree(caps->buf);
++ caps->buf = NULL;
+ caps->size = 0;
+ return ERR_PTR(-ENOMEM);
+ }
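
The vfio fix above hinges on krealloc() semantics: on failure the original buffer is left intact, and this error path frees it, so the stale caps->buf pointer must also be cleared or a later kfree(caps->buf) in the caller's cleanup would be a double free. The same rule demonstrated with plain realloc():

#include <stdlib.h>
#include <string.h>

struct caps {
	void *buf;
	size_t size;
};

/* Grow caps->buf by extra bytes. On failure, free the old buffer and
 * reset both fields so the caller's cleanup cannot free it again. */
static int caps_grow(struct caps *caps, size_t extra)
{
	void *buf = realloc(caps->buf, caps->size + extra);

	if (!buf) {
		free(caps->buf);
		caps->buf = NULL;	/* the line the fix above adds */
		caps->size = 0;
		return -1;
	}
	caps->buf = buf;
	caps->size += extra;
	return 0;
}

int main(void)
{
	struct caps caps = { 0 };

	if (caps_grow(&caps, 64) == 0)
		memset(caps.buf, 0, caps.size);
	free(caps.buf);		/* safe: NULL on the failure path */
	return 0;
}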
+diff --git a/drivers/video/fbdev/i740fb.c b/drivers/video/fbdev/i740fb.c
+index 09dd85553d4f3..7f09a0daaaa24 100644
+--- a/drivers/video/fbdev/i740fb.c
++++ b/drivers/video/fbdev/i740fb.c
+@@ -400,7 +400,7 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ u32 xres, right, hslen, left, xtotal;
+ u32 yres, lower, vslen, upper, ytotal;
+ u32 vxres, xoffset, vyres, yoffset;
+- u32 bpp, base, dacspeed24, mem;
++ u32 bpp, base, dacspeed24, mem, freq;
+ u8 r7;
+ int i;
+
+@@ -643,7 +643,12 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ par->atc[VGA_ATC_OVERSCAN] = 0;
+
+ /* Calculate VCLK that most closely matches the requested dot clock */
+- i740_calc_vclk((((u32)1e9) / var->pixclock) * (u32)(1e3), par);
++ freq = (((u32)1e9) / var->pixclock) * (u32)(1e3);
++ if (freq < I740_RFREQ_FIX) {
++ fb_dbg(info, "invalid pixclock\n");
++ freq = I740_RFREQ_FIX;
++ }
++ i740_calc_vclk(freq, par);
+
+ /* Since we program the clocks ourselves, always use VCLK2. */
+ par->misc |= 0x0C;
+diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
+index 73eb34849eaba..4ccfd30c2a304 100644
+--- a/drivers/virt/vboxguest/vboxguest_linux.c
++++ b/drivers/virt/vboxguest/vboxguest_linux.c
+@@ -356,8 +356,8 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ goto err_vbg_core_exit;
+ }
+
+- ret = devm_request_irq(dev, pci->irq, vbg_core_isr, IRQF_SHARED,
+- DEVICE_NAME, gdev);
++ ret = request_irq(pci->irq, vbg_core_isr, IRQF_SHARED, DEVICE_NAME,
++ gdev);
+ if (ret) {
+ vbg_err("vboxguest: Error requesting irq: %d\n", ret);
+ goto err_vbg_core_exit;
+@@ -367,7 +367,7 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ if (ret) {
+ vbg_err("vboxguest: Error misc_register %s failed: %d\n",
+ DEVICE_NAME, ret);
+- goto err_vbg_core_exit;
++ goto err_free_irq;
+ }
+
+ ret = misc_register(&gdev->misc_device_user);
+@@ -403,6 +403,8 @@ err_unregister_misc_device_user:
+ misc_deregister(&gdev->misc_device_user);
+ err_unregister_misc_device:
+ misc_deregister(&gdev->misc_device);
++err_free_irq:
++ free_irq(pci->irq, gdev);
+ err_vbg_core_exit:
+ vbg_core_exit(gdev);
+ err_disable_pcidev:
+@@ -419,6 +421,7 @@ static void vbg_pci_remove(struct pci_dev *pci)
+ vbg_gdev = NULL;
+ mutex_unlock(&vbg_gdev_mutex);
+
++ free_irq(pci->irq, gdev);
+ device_remove_file(gdev->dev, &dev_attr_host_features);
+ device_remove_file(gdev->dev, &dev_attr_host_version);
+ misc_deregister(&gdev->misc_device_user);
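
The vboxguest switch from devm_request_irq() to request_irq() is about teardown order: devm-managed resources are only released after the remove callback returns, so the ISR could still fire after vbg_core_exit() had torn down the state it touches. Manual management puts the IRQ back into the normal goto unwind ladder; a sketch of that shape (hypothetical helper names):

#include <stdio.h>

static int setup_core(void)	{ return 0; }
static void teardown_core(void)	{ }
static int grab_irq(void)	{ return 0; }
static void release_irq(void)	{ }
static int register_misc(void)	{ return -1; }	/* force the error path */

static int probe(void)
{
	int ret = setup_core();

	if (ret)
		return ret;

	ret = grab_irq();
	if (ret)
		goto err_core;

	ret = register_misc();
	if (ret)
		goto err_irq;	/* free the IRQ before the core goes away */

	return 0;

err_irq:
	release_irq();
err_core:
	teardown_core();
	return ret;
}

int main(void)
{
	printf("probe: %d\n", probe());	/* probe: -1, fully unwound */
	return 0;
}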
+diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
+index 56c77f63cd224..dd9e6f68de245 100644
+--- a/drivers/virtio/Kconfig
++++ b/drivers/virtio/Kconfig
+@@ -35,11 +35,12 @@ if VIRTIO_MENU
+
+ config VIRTIO_HARDEN_NOTIFICATION
+ bool "Harden virtio notification"
++ depends on BROKEN
+ help
+ Enable this to harden the device notifications and suppress
+ those that happen at a time where notifications are illegal.
+
+- Experimental: Note that several drivers still have bugs that
++ Experimental: Note that several drivers still have issues that
+ may cause crashes or hangs when correct handling of
+ notifications is enforced; depending on the subset of
+ drivers and devices you use, this may or may not work.
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 597af455a522b..0792fda49a15f 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -128,7 +128,7 @@ static ssize_t xenbus_file_read(struct file *filp,
+ {
+ struct xenbus_file_priv *u = filp->private_data;
+ struct read_buffer *rb;
+- unsigned i;
++ ssize_t i;
+ int ret;
+
+ mutex_lock(&u->reply_mutex);
+@@ -148,7 +148,7 @@ again:
+ rb = list_entry(u->read_buffers.next, struct read_buffer, list);
+ i = 0;
+ while (i < len) {
+- unsigned sz = min((unsigned)len - i, rb->len - rb->cons);
++ size_t sz = min_t(size_t, len - i, rb->len - rb->cons);
+
+ ret = copy_to_user(ubuf + i, &rb->msg[rb->cons], sz);
+
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 5627b43d4cc24..deaed255f301e 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1640,9 +1640,11 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ div64_u64(zone_unusable * 100, bg->length));
+ trace_btrfs_reclaim_block_group(bg);
+ ret = btrfs_relocate_chunk(fs_info, bg->start);
+- if (ret)
++ if (ret) {
++ btrfs_dec_block_group_ro(bg);
+ btrfs_err(fs_info, "error relocating chunk %llu",
+ bg->start);
++ }
+
+ next:
+ btrfs_put_block_group(bg);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index a6dc827e75af0..33411baf5c7a3 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3573,7 +3573,12 @@ int prepare_to_relocate(struct reloc_control *rc)
+ */
+ return PTR_ERR(trans);
+ }
+- return btrfs_commit_transaction(trans);
++
++ ret = btrfs_commit_transaction(trans);
++ if (ret)
++ unset_reloc_control(rc);
++
++ return ret;
+ }
+
+ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 3c962bfd204f6..42f02cffe06b0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1146,7 +1146,9 @@ again:
+ extref = btrfs_lookup_inode_extref(NULL, root, path, name, namelen,
+ inode_objectid, parent_objectid, 0,
+ 0);
+- if (!IS_ERR_OR_NULL(extref)) {
++ if (IS_ERR(extref)) {
++ return PTR_ERR(extref);
++ } else if (extref) {
+ u32 item_size;
+ u32 cur_offset = 0;
+ unsigned long base;
+@@ -1457,7 +1459,7 @@ static int add_link(struct btrfs_trans_handle *trans,
+ * on the inode will not free it. We will fixup the link count later.
+ */
+ if (other_inode->i_nlink == 0)
+- inc_nlink(other_inode);
++ set_nlink(other_inode, 1);
+ add_link:
+ ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode),
+ name, namelen, 0, ref_index);
+@@ -1600,7 +1602,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ * free it. We will fixup the link count later.
+ */
+ if (!ret && inode->i_nlink == 0)
+- inc_nlink(inode);
++ set_nlink(inode, 1);
+ }
+ if (ret < 0)
+ goto out;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ac8fd5e7f5405..2b1f22322e8fa 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3578,24 +3578,23 @@ static void handle_cap_grant(struct inode *inode,
+ fill_inline = true;
+ }
+
+- if (ci->i_auth_cap == cap &&
+- le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
+- if (newcaps & ~extra_info->issued)
+- wake = true;
++ if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
++ if (ci->i_auth_cap == cap) {
++ if (newcaps & ~extra_info->issued)
++ wake = true;
++
++ if (ci->i_requested_max_size > max_size ||
++ !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
++ /* re-request max_size if necessary */
++ ci->i_requested_max_size = 0;
++ wake = true;
++ }
+
+- if (ci->i_requested_max_size > max_size ||
+- !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
+- /* re-request max_size if necessary */
+- ci->i_requested_max_size = 0;
+- wake = true;
++ ceph_kick_flushing_inode_caps(session, ci);
+ }
+-
+- ceph_kick_flushing_inode_caps(session, ci);
+- spin_unlock(&ci->i_ceph_lock);
+ up_read(&session->s_mdsc->snap_rwsem);
+- } else {
+- spin_unlock(&ci->i_ceph_lock);
+ }
++ spin_unlock(&ci->i_ceph_lock);
+
+ if (fill_inline)
+ ceph_fill_inline_data(inode, NULL, extra_info->inline_data,
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 33f517d549ce5..0aded10375fdd 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1220,14 +1220,17 @@ static int encode_supported_features(void **p, void *end)
+ if (count > 0) {
+ size_t i;
+ size_t size = FEATURE_BYTES(count);
++ unsigned long bit;
+
+ if (WARN_ON_ONCE(*p + 4 + size > end))
+ return -ERANGE;
+
+ ceph_encode_32(p, size);
+ memset(*p, 0, size);
+- for (i = 0; i < count; i++)
+- ((unsigned char*)(*p))[i / 8] |= BIT(feature_bits[i] % 8);
++ for (i = 0; i < count; i++) {
++ bit = feature_bits[i];
++ ((unsigned char *)(*p))[bit / 8] |= BIT(bit % 8);
++ }
+ *p += size;
+ } else {
+ if (WARN_ON_ONCE(*p + 4 > end))
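
The encode_supported_features fix above corrects the byte index: feature bit N lives in byte N / 8, while the old code indexed by the loop counter i / 8, landing in the wrong byte whenever a feature value and its array position diverge. The corrected set-bit idiom, self-contained:

#include <stdio.h>
#include <string.h>

static void set_feature_bits(unsigned char *map, const unsigned long *bits,
			     size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		unsigned long bit = bits[i];

		map[bit / 8] |= 1u << (bit % 8);	/* index by bit, not i */
	}
}

int main(void)
{
	const unsigned long features[] = { 0, 1, 9, 12 };
	unsigned char map[2];

	memset(map, 0, sizeof(map));
	set_feature_bits(map, features, 4);
	printf("%02x %02x\n", map[0], map[1]);	/* prints: 03 12 */
	return 0;
}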
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 1140aecd82ce4..2a49e331987be 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -33,10 +33,6 @@ enum ceph_feature_type {
+ CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_METRIC_COLLECT,
+ };
+
+-/*
+- * This will always have the highest feature bit value
+- * as the last element of the array.
+- */
+ #define CEPHFS_FEATURES_CLIENT_SUPPORTED { \
+ 0, 1, 2, 3, 4, 5, 6, 7, \
+ CEPHFS_FEATURE_MIMIC, \
+@@ -45,8 +41,6 @@ enum ceph_feature_type {
+ CEPHFS_FEATURE_MULTI_RECONNECT, \
+ CEPHFS_FEATURE_DELEG_INO, \
+ CEPHFS_FEATURE_METRIC_COLLECT, \
+- \
+- CEPHFS_FEATURE_MAX, \
+ }
+ #define CEPHFS_FEATURES_CLIENT_REQUIRED {}
+
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 0e84e6fcf8ab4..197f3c09d3f3e 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -742,6 +742,8 @@ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ list_for_each_entry(cfile, &cifs_inode->openFileList, flist) {
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+@@ -773,6 +775,8 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ cfile = list_entry(tmp, struct cifsFileInfo, tlist);
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+@@ -808,6 +812,8 @@ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ if (strstr(full_path, path)) {
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 8802995b2d3d6..aa4c1d403708f 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1145,9 +1145,7 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ size_t name_len, value_len, user_name_len;
+
+ while (src_size > 0) {
+- name = &src->ea_data[0];
+ name_len = (size_t)src->ea_name_length;
+- value = &src->ea_data[src->ea_name_length + 1];
+ value_len = (size_t)le16_to_cpu(src->ea_value_length);
+
+ if (name_len == 0)
+@@ -1159,6 +1157,9 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ goto out;
+ }
+
++ name = &src->ea_data[0];
++ value = &src->ea_data[src->ea_name_length + 1];
++
+ if (ea_name) {
+ if (ea_name_len == name_len &&
+ memcmp(ea_name, name, name_len) == 0) {
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9e06334771a39..38e7dc2531b17 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5928,6 +5928,15 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+
+ sbi = EXT4_SB(sb);
+
++ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++ !ext4_inode_block_valid(inode, block, count)) {
++ ext4_error(sb, "Freeing blocks in system zone - "
++ "Block = %llu, count = %lu", block, count);
++ /* err = 0. ext4_std_error should be a no op */
++ goto error_return;
++ }
++ flags |= EXT4_FREE_BLOCKS_VALIDATED;
++
+ do_more:
+ overflow = 0;
+ ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
+@@ -5944,6 +5953,8 @@ do_more:
+ overflow = EXT4_C2B(sbi, bit) + count -
+ EXT4_BLOCKS_PER_GROUP(sb);
+ count -= overflow;
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+ count_clusters = EXT4_NUM_B2C(sbi, count);
+ bitmap_bh = ext4_read_block_bitmap(sb, block_group);
+@@ -5958,7 +5969,8 @@ do_more:
+ goto error_return;
+ }
+
+- if (!ext4_inode_block_valid(inode, block, count)) {
++ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++ !ext4_inode_block_valid(inode, block, count)) {
+ ext4_error(sb, "Freeing blocks in system zone - "
+ "Block = %llu, count = %lu", block, count);
+ /* err = 0. ext4_std_error should be a no op */
+@@ -6081,6 +6093,8 @@ do_more:
+ block += count;
+ count = overflow;
+ put_bh(bitmap_bh);
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ goto do_more;
+ }
+ error_return:
+@@ -6127,6 +6141,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ "block = %llu, count = %lu", block, count);
+ return;
+ }
++ flags |= EXT4_FREE_BLOCKS_VALIDATED;
+
+ ext4_debug("freeing block %llu\n", block);
+ trace_ext4_free_blocks(inode, block, count, flags);
+@@ -6158,6 +6173,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ block -= overflow;
+ count += overflow;
+ }
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+ overflow = EXT4_LBLK_COFF(sbi, count);
+ if (overflow) {
+@@ -6168,6 +6185,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ return;
+ } else
+ count += sbi->s_cluster_ratio - overflow;
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+
+ if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
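
The recurring "flags &= ~EXT4_FREE_BLOCKS_VALIDATED" lines above implement a cache-and-invalidate flag: the expensive system-zone check runs once, its result is remembered in the flag, and the flag is dropped at every point where block or count is adjusted so the new range gets re-checked before use. A minimal sketch of the pattern (toy validity check, not the ext4 API):

#include <stdbool.h>
#include <stdio.h>

#define FLAG_VALIDATED	0x1u

static bool range_valid(unsigned long long block, unsigned long count)
{
	return block >= 1024 && count > 0;	/* toy "system zone" check */
}

static int free_range(unsigned long long block, unsigned long count,
		      unsigned int flags)
{
	if (!(flags & FLAG_VALIDATED) && !range_valid(block, count))
		return -1;
	flags |= FLAG_VALIDATED;

	if (block % 8) {		/* align down, as the overflow paths do */
		count += block % 8;
		block -= block % 8;
		flags &= ~FLAG_VALIDATED;	/* range changed: revalidate */
	}

	if (!(flags & FLAG_VALIDATED) && !range_valid(block, count))
		return -1;

	printf("freeing %llu..%llu\n", block, block + count - 1);
	return 0;
}

int main(void)
{
	return free_range(2051, 4, 0);
}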
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 4af441494e09b..3a31b662f6619 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3090,11 +3090,8 @@ bool ext4_empty_dir(struct inode *inode)
+ de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ (offset & (sb->s_blocksize - 1)));
+ if (ext4_check_dir_entry(inode, NULL, de, bh,
+- bh->b_data, bh->b_size, offset)) {
+- offset = (offset | (sb->s_blocksize - 1)) + 1;
+- continue;
+- }
+- if (le32_to_cpu(de->inode)) {
++ bh->b_data, bh->b_size, offset) ||
++ le32_to_cpu(de->inode)) {
+ brelse(bh);
+ return false;
+ }
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e5c2713aa11ad..cb5a64293881e 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1989,6 +1989,16 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ }
+ brelse(bh);
+
++ /*
++ * For bigalloc, trim the requested size to the nearest cluster
++ * boundary to avoid creating an unusable filesystem. We do this
++ * silently, instead of returning an error, to avoid breaking
++ * callers that blindly resize the filesystem to the full size of
++ * the underlying block device.
++ */
++ if (ext4_has_feature_bigalloc(sb))
++ n_blocks_count &= ~((1 << EXT4_CLUSTER_BITS(sb)) - 1);
++
+ retry:
+ o_blocks_count = ext4_blocks_count(es);
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 5c950298837f1..7006fa7dd5cb8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -757,6 +757,7 @@ enum {
+ FI_ENABLE_COMPRESS, /* enable compression in "user" compression mode */
+ FI_COMPRESS_RELEASED, /* compressed blocks were released */
+ FI_ALIGNED_WRITE, /* enable aligned write */
++ FI_COW_FILE, /* indicate COW file */
+ FI_MAX, /* max flag, never be used */
+ };
+
+@@ -3208,6 +3209,11 @@ static inline bool f2fs_is_atomic_file(struct inode *inode)
+ return is_inode_flag_set(inode, FI_ATOMIC_FILE);
+ }
+
++static inline bool f2fs_is_cow_file(struct inode *inode)
++{
++ return is_inode_flag_set(inode, FI_COW_FILE);
++}
++
+ static inline bool f2fs_is_first_block_written(struct inode *inode)
+ {
+ return is_inode_flag_set(inode, FI_FIRST_BLOCK_WRITTEN);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index fc0f30738b21c..ecd833ba35fcb 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2061,7 +2061,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+
+ set_inode_flag(inode, FI_ATOMIC_FILE);
+- set_inode_flag(fi->cow_inode, FI_ATOMIC_FILE);
++ set_inode_flag(fi->cow_inode, FI_COW_FILE);
+ clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
+ f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+
+@@ -2108,6 +2108,31 @@ unlock_out:
+ return ret;
+ }
+
++static int f2fs_ioc_abort_atomic_write(struct file *filp)
++{
++ struct inode *inode = file_inode(filp);
++ struct user_namespace *mnt_userns = file_mnt_user_ns(filp);
++ int ret;
++
++ if (!inode_owner_or_capable(mnt_userns, inode))
++ return -EACCES;
++
++ ret = mnt_want_write_file(filp);
++ if (ret)
++ return ret;
++
++ inode_lock(inode);
++
++ if (f2fs_is_atomic_file(inode))
++ f2fs_abort_atomic_write(inode, true);
++
++ inode_unlock(inode);
++
++ mnt_drop_write_file(filp);
++ f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
++ return ret;
++}
++
+ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ {
+ struct inode *inode = file_inode(filp);
+@@ -4063,9 +4088,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ return f2fs_ioc_start_atomic_write(filp);
+ case F2FS_IOC_COMMIT_ATOMIC_WRITE:
+ return f2fs_ioc_commit_atomic_write(filp);
++ case F2FS_IOC_ABORT_ATOMIC_WRITE:
++ return f2fs_ioc_abort_atomic_write(filp);
+ case F2FS_IOC_START_VOLATILE_WRITE:
+ case F2FS_IOC_RELEASE_VOLATILE_WRITE:
+- case F2FS_IOC_ABORT_VOLATILE_WRITE:
+ return -EOPNOTSUPP;
+ case F2FS_IOC_SHUTDOWN:
+ return f2fs_ioc_shutdown(filp, arg);
+@@ -4734,7 +4760,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case F2FS_IOC_COMMIT_ATOMIC_WRITE:
+ case F2FS_IOC_START_VOLATILE_WRITE:
+ case F2FS_IOC_RELEASE_VOLATILE_WRITE:
+- case F2FS_IOC_ABORT_VOLATILE_WRITE:
++ case F2FS_IOC_ABORT_ATOMIC_WRITE:
+ case F2FS_IOC_SHUTDOWN:
+ case FITRIM:
+ case FS_IOC_SET_ENCRYPTION_POLICY:
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index cf6f7fc83c082..02e92a72511b2 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1292,7 +1292,11 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ dec_valid_node_count(sbi, dn->inode, !ofs);
+ goto fail;
+ }
+- f2fs_bug_on(sbi, new_ni.blk_addr != NULL_ADDR);
++ if (unlikely(new_ni.blk_addr != NULL_ADDR)) {
++ err = -EFSCORRUPTED;
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ goto fail;
++ }
+ #endif
+ new_ni.nid = dn->nid;
+ new_ni.ino = dn->inode->i_ino;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 874c1b9c41a2a..52df19a0638b1 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -193,7 +193,7 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ if (f2fs_is_atomic_file(inode)) {
+ if (clean)
+ truncate_inode_pages_final(inode->i_mapping);
+- clear_inode_flag(fi->cow_inode, FI_ATOMIC_FILE);
++ clear_inode_flag(fi->cow_inode, FI_COW_FILE);
+ iput(fi->cow_inode);
+ fi->cow_inode = NULL;
+ clear_inode_flag(inode, FI_ATOMIC_FILE);
+@@ -3166,7 +3166,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
+ return CURSEG_COLD_DATA;
+ if (file_is_hot(inode) ||
+ is_inode_flag_set(inode, FI_HOT_DATA) ||
+- f2fs_is_atomic_file(inode))
++ f2fs_is_cow_file(inode))
+ return CURSEG_HOT_DATA;
+ return f2fs_rw_hint_to_seg_type(inode->i_write_hint);
+ } else {
+@@ -4362,6 +4362,12 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ return err;
+ seg_info_from_raw_sit(se, &sit);
+
++ if (se->type >= NR_PERSISTENT_LOG) {
++ f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++ se->type, start);
++ return -EFSCORRUPTED;
++ }
++
+ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+@@ -4410,6 +4416,13 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ break;
+ seg_info_from_raw_sit(se, &sit);
+
++ if (se->type >= NR_PERSISTENT_LOG) {
++ f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++ se->type, start);
++ err = -EFSCORRUPTED;
++ break;
++ }
++
+ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 74920826d8f67..26a6d395737a6 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -739,6 +739,9 @@ again_locked:
+ fallthrough;
+
+ case FSCACHE_COOKIE_STATE_FAILED:
++ if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
++ fscache_end_cookie_access(cookie, fscache_access_invalidate_cookie_end);
++
+ if (atomic_read(&cookie->n_accesses) != 0)
+ break;
+ if (test_bit(FSCACHE_COOKIE_DO_RELINQUISH, &cookie->flags)) {
+@@ -1063,8 +1066,8 @@ void __fscache_invalidate(struct fscache_cookie *cookie,
+ return;
+
+ case FSCACHE_COOKIE_STATE_LOOKING_UP:
+- __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie);
+- set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags);
++ if (!test_and_set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
++ __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie);
+ fallthrough;
+ case FSCACHE_COOKIE_STATE_CREATING:
+ spin_unlock(&cookie->lock);
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index f331866dd4182..ec6afd3c4bca6 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -561,22 +561,20 @@ nfs_idmap_prepare_pipe_upcall(struct idmap *idmap,
+ return true;
+ }
+
+-static void
+-nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret)
++static void nfs_idmap_complete_pipe_upcall(struct idmap_legacy_upcalldata *data,
++ int ret)
+ {
+- struct key *authkey = idmap->idmap_upcall_data->authkey;
+-
+- kfree(idmap->idmap_upcall_data);
+- idmap->idmap_upcall_data = NULL;
+- complete_request_key(authkey, ret);
+- key_put(authkey);
++ complete_request_key(data->authkey, ret);
++ key_put(data->authkey);
++ kfree(data);
+ }
+
+-static void
+-nfs_idmap_abort_pipe_upcall(struct idmap *idmap, int ret)
++static void nfs_idmap_abort_pipe_upcall(struct idmap *idmap,
++ struct idmap_legacy_upcalldata *data,
++ int ret)
+ {
+- if (idmap->idmap_upcall_data != NULL)
+- nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++ if (cmpxchg(&idmap->idmap_upcall_data, data, NULL) == data)
++ nfs_idmap_complete_pipe_upcall(data, ret);
+ }
+
+ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+@@ -613,7 +611,7 @@ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+
+ ret = rpc_queue_upcall(idmap->idmap_pipe, msg);
+ if (ret < 0)
+- nfs_idmap_abort_pipe_upcall(idmap, ret);
++ nfs_idmap_abort_pipe_upcall(idmap, data, ret);
+
+ return ret;
+ out2:
+@@ -669,6 +667,7 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ struct request_key_auth *rka;
+ struct rpc_inode *rpci = RPC_I(file_inode(filp));
+ struct idmap *idmap = (struct idmap *)rpci->private;
++ struct idmap_legacy_upcalldata *data;
+ struct key *authkey;
+ struct idmap_msg im;
+ size_t namelen_in;
+@@ -678,10 +677,11 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ * will have been woken up and someone else may now have used
+ * idmap_key_cons - so after this point we may no longer touch it.
+ */
+- if (idmap->idmap_upcall_data == NULL)
++ data = xchg(&idmap->idmap_upcall_data, NULL);
++ if (data == NULL)
+ goto out_noupcall;
+
+- authkey = idmap->idmap_upcall_data->authkey;
++ authkey = data->authkey;
+ rka = get_request_key_auth(authkey);
+
+ if (mlen != sizeof(im)) {
+@@ -703,18 +703,17 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ if (namelen_in == 0 || namelen_in == IDMAP_NAMESZ) {
+ ret = -EINVAL;
+ goto out;
+-}
++ }
+
+- ret = nfs_idmap_read_and_verify_message(&im,
+- &idmap->idmap_upcall_data->idmap_msg,
+- rka->target_key, authkey);
++ ret = nfs_idmap_read_and_verify_message(&im, &data->idmap_msg,
++ rka->target_key, authkey);
+ if (ret >= 0) {
+ key_set_timeout(rka->target_key, nfs_idmap_cache_timeout);
+ ret = mlen;
+ }
+
+ out:
+- nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++ nfs_idmap_complete_pipe_upcall(data, ret);
+ out_noupcall:
+ return ret;
+ }
+@@ -728,7 +727,7 @@ idmap_pipe_destroy_msg(struct rpc_pipe_msg *msg)
+ struct idmap *idmap = data->idmap;
+
+ if (msg->errno)
+- nfs_idmap_abort_pipe_upcall(idmap, msg->errno);
++ nfs_idmap_abort_pipe_upcall(idmap, data, msg->errno);
+ }
+
+ static void
+@@ -736,8 +735,11 @@ idmap_release_pipe(struct inode *inode)
+ {
+ struct rpc_inode *rpci = RPC_I(inode);
+ struct idmap *idmap = (struct idmap *)rpci->private;
++ struct idmap_legacy_upcalldata *data;
+
+- nfs_idmap_abort_pipe_upcall(idmap, -EPIPE);
++ data = xchg(&idmap->idmap_upcall_data, NULL);
++ if (data)
++ nfs_idmap_complete_pipe_upcall(data, -EPIPE);
+ }
+
+ int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_t namelen, kuid_t *uid)
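
The idmap rework above replaces "test the shared pointer, then use it" with an atomic ownership transfer: xchg() (or cmpxchg() against the expected value) detaches idmap_upcall_data exactly once, so a racing downcall, abort and pipe release can no longer complete the same request twice. The claim pattern, modeled with C11 atomics in place of the kernel's xchg():

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct upcall { int key; };

static _Atomic(struct upcall *) pending;

/* Exactly one caller wins the pointer; everyone else sees NULL. */
static struct upcall *claim(void)
{
	return atomic_exchange(&pending, NULL);
}

static void complete(struct upcall *u, int status)
{
	printf("completing key %d with %d\n", u->key, status);
	free(u);
}

int main(void)
{
	struct upcall *u = malloc(sizeof(*u));

	u->key = 42;
	atomic_store(&pending, u);

	struct upcall *a = claim();	/* e.g. the downcall path */
	struct upcall *b = claim();	/* e.g. a racing release  */

	if (a)
		complete(a, 0);
	if (b)
		complete(b, -32);	/* never runs: b is NULL */
	return 0;
}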
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index bb0e84a46d61a..77e5a99846d62 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -784,10 +784,9 @@ static void nfs4_slot_sequence_record_sent(struct nfs4_slot *slot,
+ if ((s32)(seqnr - slot->seq_nr_highest_sent) > 0)
+ slot->seq_nr_highest_sent = seqnr;
+ }
+-static void nfs4_slot_sequence_acked(struct nfs4_slot *slot,
+- u32 seqnr)
++static void nfs4_slot_sequence_acked(struct nfs4_slot *slot, u32 seqnr)
+ {
+- slot->seq_nr_highest_sent = seqnr;
++ nfs4_slot_sequence_record_sent(slot, seqnr);
+ slot->seq_nr_last_acked = seqnr;
+ }
+
+@@ -854,7 +853,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ __func__,
+ slot->slot_nr,
+ slot->seq_nr);
+- nfs4_slot_sequence_acked(slot, slot->seq_nr);
+ goto out_retry;
+ case -NFS4ERR_RETRY_UNCACHED_REP:
+ case -NFS4ERR_SEQ_FALSE_RETRY:
+@@ -3098,12 +3096,13 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ }
+
+ out:
+- if (opendata->lgp) {
+- nfs4_lgopen_release(opendata->lgp);
+- opendata->lgp = NULL;
+- }
+- if (!opendata->cancelled)
++ if (!opendata->cancelled) {
++ if (opendata->lgp) {
++ nfs4_lgopen_release(opendata->lgp);
++ opendata->lgp = NULL;
++ }
+ nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++ }
+ return ret;
+ }
+
+@@ -9477,6 +9476,9 @@ static int nfs41_reclaim_complete_handle_errors(struct rpc_task *task, struct nf
+ rpc_delay(task, NFS4_POLL_RETRY_MAX);
+ fallthrough;
+ case -NFS4ERR_RETRY_UNCACHED_REP:
++ case -EACCES:
++ dprintk("%s: failed to reclaim complete error %d for server %s, retrying\n",
++ __func__, task->tk_status, clp->cl_hostname);
+ return -EAGAIN;
+ case -NFS4ERR_BADSESSION:
+ case -NFS4ERR_DEADSESSION:
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 49b7df6167785..614513460b8e0 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -5057,7 +5057,7 @@ undo_action_next:
+ goto add_allocated_vcns;
+
+ vcn = le64_to_cpu(lrh->target_vcn);
+- vcn &= ~(log->clst_per_page - 1);
++ vcn &= ~(u64)(log->clst_per_page - 1);
+
+ add_allocated_vcns:
+ for (i = 0, vcn = le64_to_cpu(lrh->target_vcn),
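
The one-cast ntfs3 change above fixes an integer-width bug: clst_per_page is 32-bit, so ~(clst_per_page - 1) is a 32-bit mask whose upper 32 bits become zero when widened for the AND, silently wiping the top half of a 64-bit vcn. Casting before the complement keeps the high bits set. A two-line demonstration:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t vcn = 0x100000007ULL;	/* cluster number above 2^32 */
	uint32_t clst_per_page = 8;

	uint64_t bad  = vcn & ~(clst_per_page - 1);		/* 32-bit mask */
	uint64_t good = vcn & ~(uint64_t)(clst_per_page - 1);	/* 64-bit mask */

	printf("bad:  0x%" PRIx64 "\n", bad);	/* 0x0         */
	printf("good: 0x%" PRIx64 "\n", good);	/* 0x100000000 */
	return 0;
}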
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 3de5700a9b833..891125ca68488 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -831,10 +831,15 @@ int ntfs_update_mftmirr(struct ntfs_sb_info *sbi, int wait)
+ {
+ int err;
+ struct super_block *sb = sbi->sb;
+- u32 blocksize = sb->s_blocksize;
++ u32 blocksize;
+ sector_t block1, block2;
+ u32 bytes;
+
++ if (!sb)
++ return -EINVAL;
++
++ blocksize = sb->s_blocksize;
++
+ if (!(sbi->flags & NTFS_FLAGS_MFTMIRR))
+ return 0;
+
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 6f81e3a49abfb..76ebea253fa25 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -1994,7 +1994,7 @@ static int indx_free_children(struct ntfs_index *indx, struct ntfs_inode *ni,
+ const struct NTFS_DE *e, bool trim)
+ {
+ int err;
+- struct indx_node *n;
++ struct indx_node *n = NULL;
+ struct INDEX_HDR *hdr;
+ CLST vbn = de_get_vbn(e);
+ size_t i;
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index be4ebdd8048b0..803ff4c63c318 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -430,6 +430,7 @@ end_enum:
+ } else if (fname && fname->home.low == cpu_to_le32(MFT_REC_EXTEND) &&
+ fname->home.seq == cpu_to_le16(MFT_REC_EXTEND)) {
+ /* Records in $Extend are not a files or general directories. */
++ inode->i_op = &ntfs_file_inode_operations;
+ } else {
+ err = -EINVAL;
+ goto out;
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 0c6de62877377..b41d7c824a50b 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -30,6 +30,7 @@
+ #include <linux/fs_context.h>
+ #include <linux/fs_parser.h>
+ #include <linux/log2.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/nls.h>
+ #include <linux/seq_file.h>
+@@ -390,7 +391,7 @@ static int ntfs_fs_reconfigure(struct fs_context *fc)
+ return -EINVAL;
+ }
+
+- memcpy(sbi->options, new_opts, sizeof(*new_opts));
++ swap(sbi->options, fc->fs_private);
+
+ return 0;
+ }
+@@ -900,6 +901,8 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ ref.high = 0;
+
+ sbi->sb = sb;
++ sbi->options = fc->fs_private;
++ fc->fs_private = NULL;
+ sb->s_flags |= SB_NODIRATIME;
+ sb->s_magic = 0x7366746e; // "ntfs"
+ sb->s_op = &ntfs_sops;
+@@ -1262,8 +1265,6 @@ load_root:
+ goto put_inode_out;
+ }
+
+- fc->fs_private = NULL;
+-
+ return 0;
+
+ put_inode_out:
+@@ -1416,7 +1417,6 @@ static int ntfs_init_fs_context(struct fs_context *fc)
+ mutex_init(&sbi->compress.mtx_lzx);
+ #endif
+
+- sbi->options = opts;
+ fc->s_fs_info = sbi;
+ ok:
+ fc->fs_private = opts;
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index 5e0e0280e70de..1b8c89dbf6684 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -547,28 +547,23 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ {
+ const char *name;
+ size_t size, name_len;
+- void *value = NULL;
+- int err = 0;
++ void *value;
++ int err;
+ int flags;
++ umode_t mode;
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
++ mode = inode->i_mode;
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ /* Do not change i_mode if we are in init_acl */
+ if (acl && !init_acl) {
+- umode_t mode;
+-
+ err = posix_acl_update_mode(mnt_userns, inode, &mode,
+ &acl);
+ if (err)
+- goto out;
+-
+- if (inode->i_mode != mode) {
+- inode->i_mode = mode;
+- mark_inode_dirty(inode);
+- }
++ return err;
+ }
+ name = XATTR_NAME_POSIX_ACL_ACCESS;
+ name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
+@@ -604,8 +599,13 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+ if (err == -ENODATA && !size)
+ err = 0; /* Removing non existed xattr. */
+- if (!err)
++ if (!err) {
+ set_cached_acl(inode, type, acl);
++ if (inode->i_mode != mode) {
++ inode->i_mode = mode;
++ mark_inode_dirty(inode);
++ }
++ }
+
+ out:
+ kfree(value);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 1ce5c96983937..4c20961302094 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1418,11 +1418,12 @@ static int ovl_make_workdir(struct super_block *sb, struct ovl_fs *ofs,
+ */
+ err = ovl_setxattr(ofs, ofs->workdir, OVL_XATTR_OPAQUE, "0", 1);
+ if (err) {
++ pr_warn("failed to set xattr on upper\n");
+ ofs->noxattr = true;
+ if (ofs->config.index || ofs->config.metacopy) {
+ ofs->config.index = false;
+ ofs->config.metacopy = false;
+- pr_warn("upper fs does not support xattr, falling back to index=off,metacopy=off.\n");
++ pr_warn("...falling back to index=off,metacopy=off.\n");
+ }
+ /*
+ * xattr support is required for persistent st_ino.
+@@ -1430,8 +1431,10 @@ static int ovl_make_workdir(struct super_block *sb, struct ovl_fs *ofs,
+ */
+ if (ofs->config.xino == OVL_XINO_AUTO) {
+ ofs->config.xino = OVL_XINO_OFF;
+- pr_warn("upper fs does not support xattr, falling back to xino=off.\n");
++ pr_warn("...falling back to xino=off.\n");
+ }
++ if (err == -EPERM && !ofs->config.userxattr)
++ pr_info("try mounting with 'userxattr' option\n");
+ err = 0;
+ } else {
+ ovl_removexattr(ofs, ofs->workdir, OVL_XATTR_OPAQUE);
+diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
+index 3096f086b5a32..71ab4ba9c25d1 100644
+--- a/include/asm-generic/bitops/atomic.h
++++ b/include/asm-generic/bitops/atomic.h
+@@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
+ unsigned long mask = BIT_MASK(nr);
+
+ p += BIT_WORD(nr);
+- if (READ_ONCE(*p) & mask)
+- return 1;
+-
+ old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
+ return !!(old & mask);
+ }
+@@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
+ unsigned long mask = BIT_MASK(nr);
+
+ p += BIT_WORD(nr);
+- if (!(READ_ONCE(*p) & mask))
+- return 0;
+-
+ old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+ return !!(old & mask);
+ }
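
Removing the READ_ONCE() fast paths above makes arch_test_and_set_bit() and arch_test_and_clear_bit() unconditional atomic read-modify-write operations, so they are fully ordered even when they do not change the bit; callers that use the return value as a gate (a bit-based trylock, say) no longer lose the barrier on the failing path. The fixed shape, modeled with C11 sequentially consistent atomics:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong word;
static int shared_data;

/* Always an ordered RMW, like the fixed arch_test_and_set_bit():
 * the barrier is paid even when the bit was already set. */
static bool test_and_set_bit(unsigned int nr)
{
	unsigned long mask = 1UL << nr;

	return atomic_fetch_or(&word, mask) & mask;
}

static bool test_and_clear_bit(unsigned int nr)
{
	unsigned long mask = 1UL << nr;

	return atomic_fetch_and(&word, ~mask) & mask;
}

int main(void)
{
	if (!test_and_set_bit(0)) {	/* trylock succeeded */
		shared_data = 1;
		test_and_clear_bit(0);	/* unlock */
	}
	printf("%d\n", shared_data);
	return 0;
}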
+diff --git a/include/linux/bpfptr.h b/include/linux/bpfptr.h
+index 46e1757d06a35..79b2f78eec1a0 100644
+--- a/include/linux/bpfptr.h
++++ b/include/linux/bpfptr.h
+@@ -49,7 +49,9 @@ static inline void bpfptr_add(bpfptr_t *bpfptr, size_t val)
+ static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src,
+ size_t offset, size_t size)
+ {
+- return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size);
++ if (!bpfptr_is_kernel(src))
++ return copy_from_user(dst, src.user + offset, size);
++ return copy_from_kernel_nofault(dst, src.kernel + offset, size);
+ }
+
+ static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size)
+@@ -78,7 +80,9 @@ static inline void *kvmemdup_bpfptr(bpfptr_t src, size_t len)
+
+ static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count)
+ {
+- return strncpy_from_sockptr(dst, (sockptr_t) src, count);
++ if (bpfptr_is_kernel(src))
++ return strncpy_from_kernel_nofault(dst, src.kernel, count);
++ return strncpy_from_user(dst, src.user, count);
+ }
+
+ #endif /* _LINUX_BPFPTR_H */
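
bpfptr_t is a tagged pointer that may carry either a user or a kernel address; the fix above stops laundering it through the sockptr helpers and dispatches on the tag directly, with the kernel side going through the explicitly non-faulting accessors. The tagged-pointer shape, sketched in portable C (illustrative types, not the kernel's):

#include <stdio.h>
#include <string.h>

typedef struct {
	union {
		void *kernel;
		char *user;	/* would be __user in the kernel */
	};
	int is_kernel;
} tagptr_t;

static int copy_from_tagptr(void *dst, tagptr_t src, size_t size)
{
	if (!src.is_kernel) {
		/* the kernel would use copy_from_user() here */
		memcpy(dst, src.user, size);
		return 0;
	}
	/* and copy_from_kernel_nofault() here */
	memcpy(dst, src.kernel, size);
	return 0;
}

int main(void)
{
	char msg[8] = "hello";
	char out[8];
	tagptr_t p = { { msg }, 1 };

	copy_from_tagptr(out, p, sizeof(out));
	printf("%s\n", out);
	return 0;
}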
+diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
+index 86af6f0a00a2a..ca98aeadcc804 100644
+--- a/include/linux/io-pgtable.h
++++ b/include/linux/io-pgtable.h
+@@ -74,17 +74,22 @@ struct io_pgtable_cfg {
+ * to support up to 35 bits PA where the bit32, bit33 and bit34 are
+ * encoded in the bit9, bit4 and bit5 of the PTE respectively.
+ *
++ * IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT: (ARM v7s format) MediaTek IOMMUs
++ * extend the translation table base support up to 35 bits PA, the
++ * encoding format is same with IO_PGTABLE_QUIRK_ARM_MTK_EXT.
++ *
+ * IO_PGTABLE_QUIRK_ARM_TTBR1: (ARM LPAE format) Configure the table
+ * for use in the upper half of a split address space.
+ *
+ * IO_PGTABLE_QUIRK_ARM_OUTER_WBWA: Override the outer-cacheability
+ * attributes set in the TCR for a non-coherent page-table walker.
+ */
+- #define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
+- #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
+- #define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
+- #define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
+- #define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
++ #define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
++ #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
++ #define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
++ #define IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT BIT(4)
++ #define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
++ #define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
+ unsigned long quirks;
+ unsigned long pgsize_bitmap;
+ unsigned int ias;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index 750c7f395ca90..f700ff2df074e 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -122,6 +122,8 @@ int watchdog_nmi_probe(void);
+ int watchdog_nmi_enable(unsigned int cpu);
+ void watchdog_nmi_disable(unsigned int cpu);
+
++void lockup_detector_reconfigure(void);
++
+ /**
+ * touch_nmi_watchdog - restart NMI watchdog timeout.
+ *
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index 5860f32e39580..986c8a17ca5e7 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -419,8 +419,8 @@ static inline int xdr_stream_encode_item_absent(struct xdr_stream *xdr)
+ */
+ static inline __be32 *xdr_encode_bool(__be32 *p, u32 n)
+ {
+- *p = n ? xdr_one : xdr_zero;
+- return p++;
++ *p++ = n ? xdr_one : xdr_zero;
++ return p;
+ }
+
+ /**
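
The xdr_encode_bool fix above is a pure post-increment bug: "return p++" yields the old value of p, so every caller was handed a cursor still pointing at the word just written instead of past it, corrupting the stream on the next encode. A short demonstration:

#include <stdio.h>
#include <stdint.h>

static uint32_t *encode_bad(uint32_t *p, uint32_t n)
{
	*p = n ? 1 : 0;
	return p++;		/* post-increment: returns the OLD p */
}

static uint32_t *encode_good(uint32_t *p, uint32_t n)
{
	*p++ = n ? 1 : 0;	/* write, then advance */
	return p;
}

int main(void)
{
	uint32_t buf[2] = { 0, 0 };

	printf("bad advances by %td words\n",  encode_bad(buf, 1) - buf);  /* 0 */
	printf("good advances by %td words\n", encode_good(buf, 1) - buf); /* 1 */
	return 0;
}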
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 522bbf9379571..99f54d4f4bca2 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -144,7 +144,8 @@ struct rpc_xprt_ops {
+ unsigned short (*get_srcport)(struct rpc_xprt *xprt);
+ int (*buf_alloc)(struct rpc_task *task);
+ void (*buf_free)(struct rpc_task *task);
+- int (*prepare_request)(struct rpc_rqst *req);
++ int (*prepare_request)(struct rpc_rqst *req,
++ struct xdr_buf *buf);
+ int (*send_request)(struct rpc_rqst *req);
+ void (*wait_for_reply_request)(struct rpc_task *task);
+ void (*timer)(struct rpc_xprt *xprt, struct rpc_task *task);
+diff --git a/include/linux/uacce.h b/include/linux/uacce.h
+index 48e319f402751..9ce88c28b0a87 100644
+--- a/include/linux/uacce.h
++++ b/include/linux/uacce.h
+@@ -70,6 +70,7 @@ enum uacce_q_state {
+ * @wait: wait queue head
+ * @list: index into uacce queues list
+ * @qfrs: pointer of qfr regions
++ * @mutex: protects queue state
+ * @state: queue state machine
+ * @pasid: pasid associated to the mm
+ * @handle: iommu_sva handle returned by iommu_sva_bind_device()
+@@ -80,6 +81,7 @@ struct uacce_queue {
+ wait_queue_head_t wait;
+ struct list_head list;
+ struct uacce_qfile_region *qfrs[UACCE_MAX_REGION];
++ struct mutex mutex;
+ enum uacce_q_state state;
+ u32 pasid;
+ struct iommu_sva *handle;
+@@ -97,9 +99,9 @@ struct uacce_queue {
+ * @dev_id: id of the uacce device
+ * @cdev: cdev of the uacce
+ * @dev: dev of the uacce
++ * @mutex: protects uacce operation
+ * @priv: private pointer of the uacce
+ * @queues: list of queues
+- * @queues_lock: lock for queues list
+ * @inode: core vfs
+ */
+ struct uacce_device {
+@@ -113,9 +115,9 @@ struct uacce_device {
+ u32 dev_id;
+ struct cdev *cdev;
+ struct device dev;
++ struct mutex mutex;
+ void *priv;
+ struct list_head queues;
+- struct mutex queues_lock;
+ struct inode *inode;
+ };
+
+diff --git a/include/linux/usb/typec_mux.h b/include/linux/usb/typec_mux.h
+index ee57781dcf288..9292f0e078464 100644
+--- a/include/linux/usb/typec_mux.h
++++ b/include/linux/usb/typec_mux.h
+@@ -58,17 +58,13 @@ struct typec_mux_desc {
+ void *drvdata;
+ };
+
++#if IS_ENABLED(CONFIG_TYPEC)
++
+ struct typec_mux *fwnode_typec_mux_get(struct fwnode_handle *fwnode,
+ const struct typec_altmode_desc *desc);
+ void typec_mux_put(struct typec_mux *mux);
+ int typec_mux_set(struct typec_mux *mux, struct typec_mux_state *state);
+
+-static inline struct typec_mux *
+-typec_mux_get(struct device *dev, const struct typec_altmode_desc *desc)
+-{
+- return fwnode_typec_mux_get(dev_fwnode(dev), desc);
+-}
+-
+ struct typec_mux_dev *
+ typec_mux_register(struct device *parent, const struct typec_mux_desc *desc);
+ void typec_mux_unregister(struct typec_mux_dev *mux);
+@@ -76,4 +72,40 @@ void typec_mux_unregister(struct typec_mux_dev *mux);
+ void typec_mux_set_drvdata(struct typec_mux_dev *mux, void *data);
+ void *typec_mux_get_drvdata(struct typec_mux_dev *mux);
+
++#else
++
++static inline struct typec_mux *fwnode_typec_mux_get(struct fwnode_handle *fwnode,
++ const struct typec_altmode_desc *desc)
++{
++ return NULL;
++}
++
++static inline void typec_mux_put(struct typec_mux *mux) {}
++
++static inline int typec_mux_set(struct typec_mux *mux, struct typec_mux_state *state)
++{
++ return 0;
++}
++
++static inline struct typec_mux_dev *
++typec_mux_register(struct device *parent, const struct typec_mux_desc *desc)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++static inline void typec_mux_unregister(struct typec_mux_dev *mux) {}
++
++static inline void typec_mux_set_drvdata(struct typec_mux_dev *mux, void *data) {}
++static inline void *typec_mux_get_drvdata(struct typec_mux_dev *mux)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++
++#endif /* CONFIG_TYPEC */
++
++static inline struct typec_mux *
++typec_mux_get(struct device *dev, const struct typec_altmode_desc *desc)
++{
++ return fwnode_typec_mux_get(dev_fwnode(dev), desc);
++}
++
+ #endif /* __USB_TYPEC_MUX */
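
The typec_mux reshuffle above is the standard conditional-stub header layout: real prototypes under IS_ENABLED(CONFIG_TYPEC), static inline no-ops otherwise, and any wrapper that is pure sugar over both variants (typec_mux_get()) kept outside the #if so it is defined exactly once. The same layout in miniature (hypothetical names):

#include <stdio.h>

#define CONFIG_FEATURE 0	/* flip to 1 to use the real backend */

#if CONFIG_FEATURE
int feature_set(int value);	/* provided elsewhere when enabled */
#else
static inline int feature_set(int value)
{
	(void)value;
	return 0;		/* harmless no-op when compiled out */
}
#endif

/* Wrapper shared by both variants, defined exactly once. */
static inline int feature_enable(void)
{
	return feature_set(1);
}

int main(void)
{
	printf("%d\n", feature_enable());
	return 0;
}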
+diff --git a/include/net/mptcp.h b/include/net/mptcp.h
+index 4d761ad530c94..e2ff509da0198 100644
+--- a/include/net/mptcp.h
++++ b/include/net/mptcp.h
+@@ -290,4 +290,8 @@ struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk);
+ static inline struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk) { return NULL; }
+ #endif
+
++#if !IS_ENABLED(CONFIG_MPTCP)
++struct mptcp_sock { };
++#endif
++
+ #endif /* __NET_MPTCP_H */
+diff --git a/include/net/netns/conntrack.h b/include/net/netns/conntrack.h
+index 0677cd3de0344..c396a3862e808 100644
+--- a/include/net/netns/conntrack.h
++++ b/include/net/netns/conntrack.h
+@@ -95,7 +95,7 @@ struct nf_ip_net {
+
+ struct netns_ct {
+ #ifdef CONFIG_NF_CONNTRACK_EVENTS
+- bool ctnetlink_has_listener;
++ u8 ctnetlink_has_listener;
+ bool ecache_dwork_pending;
+ #endif
+ u8 sysctl_log_invalid; /* Log invalid packets */
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index ac151ecc7f19f..2428bc64cb1d6 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -105,11 +105,6 @@
+ #define REG_RESERVED_ADDR 0xffffffff
+ #define REG_RESERVED(reg) REG(reg, REG_RESERVED_ADDR)
+
+-#define for_each_stat(ocelot, stat) \
+- for ((stat) = (ocelot)->stats_layout; \
+- ((stat)->name[0] != '\0'); \
+- (stat)++)
+-
+ enum ocelot_target {
+ ANA = 1,
+ QS,
+@@ -335,7 +330,8 @@ enum ocelot_reg {
+ SYS_COUNT_RX_64,
+ SYS_COUNT_RX_65_127,
+ SYS_COUNT_RX_128_255,
+- SYS_COUNT_RX_256_1023,
++ SYS_COUNT_RX_256_511,
++ SYS_COUNT_RX_512_1023,
+ SYS_COUNT_RX_1024_1526,
+ SYS_COUNT_RX_1527_MAX,
+ SYS_COUNT_RX_PAUSE,
+@@ -351,7 +347,8 @@ enum ocelot_reg {
+ SYS_COUNT_TX_PAUSE,
+ SYS_COUNT_TX_64,
+ SYS_COUNT_TX_65_127,
+- SYS_COUNT_TX_128_511,
++ SYS_COUNT_TX_128_255,
++ SYS_COUNT_TX_256_511,
+ SYS_COUNT_TX_512_1023,
+ SYS_COUNT_TX_1024_1526,
+ SYS_COUNT_TX_1527_MAX,
+@@ -538,13 +535,108 @@ enum ocelot_ptp_pins {
+ TOD_ACC_PIN
+ };
+
++enum ocelot_stat {
++ OCELOT_STAT_RX_OCTETS,
++ OCELOT_STAT_RX_UNICAST,
++ OCELOT_STAT_RX_MULTICAST,
++ OCELOT_STAT_RX_BROADCAST,
++ OCELOT_STAT_RX_SHORTS,
++ OCELOT_STAT_RX_FRAGMENTS,
++ OCELOT_STAT_RX_JABBERS,
++ OCELOT_STAT_RX_CRC_ALIGN_ERRS,
++ OCELOT_STAT_RX_SYM_ERRS,
++ OCELOT_STAT_RX_64,
++ OCELOT_STAT_RX_65_127,
++ OCELOT_STAT_RX_128_255,
++ OCELOT_STAT_RX_256_511,
++ OCELOT_STAT_RX_512_1023,
++ OCELOT_STAT_RX_1024_1526,
++ OCELOT_STAT_RX_1527_MAX,
++ OCELOT_STAT_RX_PAUSE,
++ OCELOT_STAT_RX_CONTROL,
++ OCELOT_STAT_RX_LONGS,
++ OCELOT_STAT_RX_CLASSIFIED_DROPS,
++ OCELOT_STAT_RX_RED_PRIO_0,
++ OCELOT_STAT_RX_RED_PRIO_1,
++ OCELOT_STAT_RX_RED_PRIO_2,
++ OCELOT_STAT_RX_RED_PRIO_3,
++ OCELOT_STAT_RX_RED_PRIO_4,
++ OCELOT_STAT_RX_RED_PRIO_5,
++ OCELOT_STAT_RX_RED_PRIO_6,
++ OCELOT_STAT_RX_RED_PRIO_7,
++ OCELOT_STAT_RX_YELLOW_PRIO_0,
++ OCELOT_STAT_RX_YELLOW_PRIO_1,
++ OCELOT_STAT_RX_YELLOW_PRIO_2,
++ OCELOT_STAT_RX_YELLOW_PRIO_3,
++ OCELOT_STAT_RX_YELLOW_PRIO_4,
++ OCELOT_STAT_RX_YELLOW_PRIO_5,
++ OCELOT_STAT_RX_YELLOW_PRIO_6,
++ OCELOT_STAT_RX_YELLOW_PRIO_7,
++ OCELOT_STAT_RX_GREEN_PRIO_0,
++ OCELOT_STAT_RX_GREEN_PRIO_1,
++ OCELOT_STAT_RX_GREEN_PRIO_2,
++ OCELOT_STAT_RX_GREEN_PRIO_3,
++ OCELOT_STAT_RX_GREEN_PRIO_4,
++ OCELOT_STAT_RX_GREEN_PRIO_5,
++ OCELOT_STAT_RX_GREEN_PRIO_6,
++ OCELOT_STAT_RX_GREEN_PRIO_7,
++ OCELOT_STAT_TX_OCTETS,
++ OCELOT_STAT_TX_UNICAST,
++ OCELOT_STAT_TX_MULTICAST,
++ OCELOT_STAT_TX_BROADCAST,
++ OCELOT_STAT_TX_COLLISION,
++ OCELOT_STAT_TX_DROPS,
++ OCELOT_STAT_TX_PAUSE,
++ OCELOT_STAT_TX_64,
++ OCELOT_STAT_TX_65_127,
++ OCELOT_STAT_TX_128_255,
++ OCELOT_STAT_TX_256_511,
++ OCELOT_STAT_TX_512_1023,
++ OCELOT_STAT_TX_1024_1526,
++ OCELOT_STAT_TX_1527_MAX,
++ OCELOT_STAT_TX_YELLOW_PRIO_0,
++ OCELOT_STAT_TX_YELLOW_PRIO_1,
++ OCELOT_STAT_TX_YELLOW_PRIO_2,
++ OCELOT_STAT_TX_YELLOW_PRIO_3,
++ OCELOT_STAT_TX_YELLOW_PRIO_4,
++ OCELOT_STAT_TX_YELLOW_PRIO_5,
++ OCELOT_STAT_TX_YELLOW_PRIO_6,
++ OCELOT_STAT_TX_YELLOW_PRIO_7,
++ OCELOT_STAT_TX_GREEN_PRIO_0,
++ OCELOT_STAT_TX_GREEN_PRIO_1,
++ OCELOT_STAT_TX_GREEN_PRIO_2,
++ OCELOT_STAT_TX_GREEN_PRIO_3,
++ OCELOT_STAT_TX_GREEN_PRIO_4,
++ OCELOT_STAT_TX_GREEN_PRIO_5,
++ OCELOT_STAT_TX_GREEN_PRIO_6,
++ OCELOT_STAT_TX_GREEN_PRIO_7,
++ OCELOT_STAT_TX_AGED,
++ OCELOT_STAT_DROP_LOCAL,
++ OCELOT_STAT_DROP_TAIL,
++ OCELOT_STAT_DROP_YELLOW_PRIO_0,
++ OCELOT_STAT_DROP_YELLOW_PRIO_1,
++ OCELOT_STAT_DROP_YELLOW_PRIO_2,
++ OCELOT_STAT_DROP_YELLOW_PRIO_3,
++ OCELOT_STAT_DROP_YELLOW_PRIO_4,
++ OCELOT_STAT_DROP_YELLOW_PRIO_5,
++ OCELOT_STAT_DROP_YELLOW_PRIO_6,
++ OCELOT_STAT_DROP_YELLOW_PRIO_7,
++ OCELOT_STAT_DROP_GREEN_PRIO_0,
++ OCELOT_STAT_DROP_GREEN_PRIO_1,
++ OCELOT_STAT_DROP_GREEN_PRIO_2,
++ OCELOT_STAT_DROP_GREEN_PRIO_3,
++ OCELOT_STAT_DROP_GREEN_PRIO_4,
++ OCELOT_STAT_DROP_GREEN_PRIO_5,
++ OCELOT_STAT_DROP_GREEN_PRIO_6,
++ OCELOT_STAT_DROP_GREEN_PRIO_7,
++ OCELOT_NUM_STATS,
++};
++
+ struct ocelot_stat_layout {
+ u32 offset;
+ char name[ETH_GSTRING_LEN];
+ };
+
+-#define OCELOT_STAT_END { .name = "" }
+-
+ struct ocelot_stats_region {
+ struct list_head node;
+ u32 offset;
+@@ -707,7 +799,6 @@ struct ocelot {
+ const u32 *const *map;
+ const struct ocelot_stat_layout *stats_layout;
+ struct list_head stats_regions;
+- unsigned int num_stats;
+
+ u32 pool_size[OCELOT_SB_NUM][OCELOT_SB_POOL_NUM];
+ int packet_buffer_size;
+@@ -750,7 +841,7 @@ struct ocelot {
+ struct ocelot_psfp_list psfp;
+
+ /* Workqueue to check statistics for overflow with its lock */
+- struct mutex stats_lock;
++ spinlock_t stats_lock;
+ u64 *stats;
+ struct delayed_work stats_work;
+ struct workqueue_struct *stats_queue;
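
The ocelot counters above move from a ""-terminated layout walked with for_each_stat() to an enum-indexed array with an OCELOT_NUM_STATS terminator, which lets the array length live in the type system instead of a sentinel entry. The idiom in miniature:

#include <stdio.h>

enum stat {
	STAT_RX_OCTETS,
	STAT_RX_UNICAST,
	STAT_TX_OCTETS,
	NUM_STATS,		/* the count comes for free */
};

static const char *stat_names[NUM_STATS] = {
	[STAT_RX_OCTETS] = "rx_octets",
	[STAT_RX_UNICAST] = "rx_unicast",
	[STAT_TX_OCTETS] = "tx_octets",
};

int main(void)
{
	unsigned long long counters[NUM_STATS] = { 0 };

	counters[STAT_RX_OCTETS] += 64;
	for (int i = 0; i < NUM_STATS; i++)
		printf("%-12s %llu\n", stat_names[i], counters[i]);
	return 0;
}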
+diff --git a/include/sound/control.h b/include/sound/control.h
+index 985c51a8fb748..a1fc7e0a47d95 100644
+--- a/include/sound/control.h
++++ b/include/sound/control.h
+@@ -109,7 +109,7 @@ struct snd_ctl_file {
+ int preferred_subdevice[SND_CTL_SUBDEV_ITEMS];
+ wait_queue_head_t change_sleep;
+ spinlock_t read_lock;
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ int subscribed; /* read interface is activated */
+ struct list_head events; /* waiting events for read */
+ };
+diff --git a/include/sound/core.h b/include/sound/core.h
+index 6d4cc49584c63..39cee40ac22e0 100644
+--- a/include/sound/core.h
++++ b/include/sound/core.h
+@@ -501,4 +501,12 @@ snd_pci_quirk_lookup_id(u16 vendor, u16 device,
+ }
+ #endif
+
++/* async signal helpers */
++struct snd_fasync;
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++ struct snd_fasync **fasyncp);
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll);
++void snd_fasync_free(struct snd_fasync *fasync);
++
+ #endif /* __SOUND_CORE_H */
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 6b99310b5b889..6987110843f03 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -399,7 +399,7 @@ struct snd_pcm_runtime {
+ snd_pcm_uframes_t twake; /* do transfer (!poll) wakeup if non-zero */
+ wait_queue_head_t sleep; /* poll sleep */
+ wait_queue_head_t tsleep; /* transfer sleep */
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ bool stop_operating; /* sync_stop will be called */
+ struct mutex buffer_mutex; /* protect for buffer changes */
+ atomic_t buffer_accessing; /* >0: in r/w operation, <0: blocked */
+diff --git a/include/uapi/linux/atm_zatm.h b/include/uapi/linux/atm_zatm.h
+new file mode 100644
+index 0000000000000..5135027b93c1c
+--- /dev/null
++++ b/include/uapi/linux/atm_zatm.h
+@@ -0,0 +1,47 @@
++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++/* atm_zatm.h - Driver-specific declarations of the ZATM driver (for use by
++ driver-specific utilities) */
++
++/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
++
++
++#ifndef LINUX_ATM_ZATM_H
++#define LINUX_ATM_ZATM_H
++
++/*
++ * Note: non-kernel programs including this file must also include
++ * sys/types.h for struct timeval
++ */
++
++#include <linux/atmapi.h>
++#include <linux/atmioc.h>
++
++#define ZATM_GETPOOL _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
++ /* get pool statistics */
++#define ZATM_GETPOOLZ _IOW('a',ATMIOC_SARPRV+2,struct atmif_sioc)
++ /* get statistics and zero */
++#define ZATM_SETPOOL _IOW('a',ATMIOC_SARPRV+3,struct atmif_sioc)
++ /* set pool parameters */
++
++struct zatm_pool_info {
++ int ref_count; /* free buffer pool usage counters */
++ int low_water,high_water; /* refill parameters */
++ int rqa_count,rqu_count; /* queue condition counters */
++ int offset,next_off; /* alignment optimizations: offset */
++ int next_cnt,next_thres; /* repetition counter and threshold */
++};
++
++struct zatm_pool_req {
++ int pool_num; /* pool number */
++ struct zatm_pool_info info; /* actual information */
++};
++
++#define ZATM_OAM_POOL 0 /* free buffer pool for OAM cells */
++#define ZATM_AAL0_POOL 1 /* free buffer pool for AAL0 cells */
++#define ZATM_AAL5_POOL_BASE 2 /* first AAL5 free buffer pool */
++#define ZATM_LAST_POOL ZATM_AAL5_POOL_BASE+10 /* max. 64 kB */
++
++#define ZATM_TIMER_HISTORY_SIZE 16 /* number of timer adjustments to
++ record; must be 2^n */
++
++#endif
+diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
+index 352a822d43709..3121d127d5aae 100644
+--- a/include/uapi/linux/f2fs.h
++++ b/include/uapi/linux/f2fs.h
+@@ -13,7 +13,7 @@
+ #define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
+ #define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
+ #define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
+-#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
++#define F2FS_IOC_ABORT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
+ #define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)
+ #define F2FS_IOC_WRITE_CHECKPOINT _IO(F2FS_IOCTL_MAGIC, 7)
+ #define F2FS_IOC_DEFRAGMENT _IOWR(F2FS_IOCTL_MAGIC, 8, \
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index a92271421718e..991aea081ec70 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -577,6 +577,18 @@ enum ufshcd_quirks {
+ * support physical host configuration.
+ */
+ UFSHCD_QUIRK_SKIP_PH_CONFIGURATION = 1 << 16,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * 64-bit addressing supported capability but it doesn't work.
++ */
++ UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS = 1 << 17,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * auto-hibernate capability but it's FASTAUTO only.
++ */
++ UFSHCD_QUIRK_HIBERN_FASTAUTO = 1 << 18,
+ };
+
+ enum ufshcd_caps {
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 1d05d63e6fa5a..a50cf0bb520ab 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -649,6 +649,11 @@ static int bpf_iter_init_array_map(void *priv_data,
+ seq_info->percpu_value_buf = value_buf;
+ }
+
++ /* bpf_iter_attach_map() acquires a map uref, and the uref may be
++ * released before or in the middle of iterating map elements, so
++ * acquire an extra map uref for iterator.
++ */
++ bpf_map_inc_with_uref(map);
+ seq_info->map = map;
+ return 0;
+ }
+@@ -657,6 +662,7 @@ static void bpf_iter_fini_array_map(void *priv_data)
+ {
+ struct bpf_iter_seq_array_map_info *seq_info = priv_data;
+
++ bpf_map_put_with_uref(seq_info->map);
+ kfree(seq_info->percpu_value_buf);
+ }
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 17fb69c0e0dcd..4dd5e0005afa1 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -311,12 +311,8 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
+ struct htab_elem *l;
+
+ if (node) {
+- u32 key_size = htab->map.key_size;
+-
+ l = container_of(node, struct htab_elem, lru_node);
+- memcpy(l->key, key, key_size);
+- check_and_init_map_value(&htab->map,
+- l->key + round_up(key_size, 8));
++ memcpy(l->key, key, htab->map.key_size);
+ return l;
+ }
+
+@@ -2064,6 +2060,7 @@ static int bpf_iter_init_hash_map(void *priv_data,
+ seq_info->percpu_value_buf = value_buf;
+ }
+
++ bpf_map_inc_with_uref(map);
+ seq_info->map = map;
+ seq_info->htab = container_of(map, struct bpf_htab, map);
+ return 0;
+@@ -2073,6 +2070,7 @@ static void bpf_iter_fini_hash_map(void *priv_data)
+ {
+ struct bpf_iter_seq_hash_map_info *seq_info = priv_data;
+
++ bpf_map_put_with_uref(seq_info->map);
+ kfree(seq_info->percpu_value_buf);
+ }
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 2b69306d3c6e6..82e83cfb4114a 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5035,9 +5035,6 @@ static bool syscall_prog_is_valid_access(int off, int size,
+
+ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ {
+- struct bpf_prog * __maybe_unused prog;
+- struct bpf_tramp_run_ctx __maybe_unused run_ctx;
+-
+ switch (cmd) {
+ case BPF_MAP_CREATE:
+ case BPF_MAP_UPDATE_ELEM:
+@@ -5047,6 +5044,18 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ case BPF_LINK_CREATE:
+ case BPF_RAW_TRACEPOINT_OPEN:
+ break;
++ default:
++ return -EINVAL;
++ }
++ return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
++}
++
++int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
++{
++ struct bpf_prog * __maybe_unused prog;
++ struct bpf_tramp_run_ctx __maybe_unused run_ctx;
++
++ switch (cmd) {
+ #ifdef CONFIG_BPF_JIT /* __bpf_prog_enter_sleepable used by trampoline and JIT */
+ case BPF_PROG_TEST_RUN:
+ if (attr->test.data_in || attr->test.data_out ||
+@@ -5077,11 +5086,10 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ return 0;
+ #endif
+ default:
+- return -EINVAL;
++ return ____bpf_sys_bpf(cmd, attr, size);
+ }
+- return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
+ }
+-EXPORT_SYMBOL(bpf_sys_bpf);
++EXPORT_SYMBOL(kern_sys_bpf);
+
+ static const struct bpf_func_proto bpf_sys_bpf_proto = {
+ .func = bpf_sys_bpf,
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index 7d4478525c669..4c57fc89fa17b 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -226,6 +226,7 @@ static int trace_eprobe_tp_arg_update(struct trace_eprobe *ep, int i)
+ struct probe_arg *parg = &ep->tp.args[i];
+ struct ftrace_event_field *field;
+ struct list_head *head;
++ int ret = -ENOENT;
+
+ head = trace_get_fields(ep->event);
+ list_for_each_entry(field, head, link) {
+@@ -235,9 +236,20 @@ static int trace_eprobe_tp_arg_update(struct trace_eprobe *ep, int i)
+ return 0;
+ }
+ }
++
++ /*
++ * Argument not found on event. But allow for comm and COMM
++ * to be used to get the current->comm.
++ */
++ if (strcmp(parg->code->data, "COMM") == 0 ||
++ strcmp(parg->code->data, "comm") == 0) {
++ parg->code->op = FETCH_OP_COMM;
++ ret = 0;
++ }
++
+ kfree(parg->code->data);
+ parg->code->data = NULL;
+- return -ENOENT;
++ return ret;
+ }
+
+ static int eprobe_event_define_fields(struct trace_event_call *event_call)
+@@ -310,6 +322,27 @@ static unsigned long get_event_field(struct fetch_insn *code, void *rec)
+
+ addr = rec + field->offset;
+
++ if (is_string_field(field)) {
++ switch (field->filter_type) {
++ case FILTER_DYN_STRING:
++ val = (unsigned long)(rec + (*(unsigned int *)addr & 0xffff));
++ break;
++ case FILTER_RDYN_STRING:
++ val = (unsigned long)(addr + (*(unsigned int *)addr & 0xffff));
++ break;
++ case FILTER_STATIC_STRING:
++ val = (unsigned long)addr;
++ break;
++ case FILTER_PTR_STRING:
++ val = (unsigned long)(*(char *)addr);
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return 0;
++ }
++ return val;
++ }
++
+ switch (field->size) {
+ case 1:
+ if (field->is_signed)
+@@ -341,16 +374,38 @@ static unsigned long get_event_field(struct fetch_insn *code, void *rec)
+
+ static int get_eprobe_size(struct trace_probe *tp, void *rec)
+ {
++ struct fetch_insn *code;
+ struct probe_arg *arg;
+ int i, len, ret = 0;
+
+ for (i = 0; i < tp->nr_args; i++) {
+ arg = tp->args + i;
+- if (unlikely(arg->dynamic)) {
++ if (arg->dynamic) {
+ unsigned long val;
+
+- val = get_event_field(arg->code, rec);
+- len = process_fetch_insn_bottom(arg->code + 1, val, NULL, NULL);
++ code = arg->code;
++ retry:
++ switch (code->op) {
++ case FETCH_OP_TP_ARG:
++ val = get_event_field(code, rec);
++ break;
++ case FETCH_OP_IMM:
++ val = code->immediate;
++ break;
++ case FETCH_OP_COMM:
++ val = (unsigned long)current->comm;
++ break;
++ case FETCH_OP_DATA:
++ val = (unsigned long)code->data;
++ break;
++ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
++ code++;
++ goto retry;
++ default:
++ continue;
++ }
++ code++;
++ len = process_fetch_insn_bottom(code, val, NULL, NULL);
+ if (len > 0)
+ ret += len;
+ }
+@@ -368,8 +423,28 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
+ {
+ unsigned long val;
+
+- val = get_event_field(code, rec);
+- return process_fetch_insn_bottom(code + 1, val, dest, base);
++ retry:
++ switch (code->op) {
++ case FETCH_OP_TP_ARG:
++ val = get_event_field(code, rec);
++ break;
++ case FETCH_OP_IMM:
++ val = code->immediate;
++ break;
++ case FETCH_OP_COMM:
++ val = (unsigned long)current->comm;
++ break;
++ case FETCH_OP_DATA:
++ val = (unsigned long)code->data;
++ break;
++ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
++ code++;
++ goto retry;
++ default:
++ return -EILSEQ;
++ }
++ code++;
++ return process_fetch_insn_bottom(code, val, dest, base);
+ }
+ NOKPROBE_SYMBOL(process_fetch_insn)
+
+@@ -841,6 +916,10 @@ static int trace_eprobe_tp_update_arg(struct trace_eprobe *ep, const char *argv[
+ if (ep->tp.args[i].code->op == FETCH_OP_TP_ARG)
+ ret = trace_eprobe_tp_arg_update(ep, i);
+
++ /* Handle symbols "@" */
++ if (!ret)
++ ret = traceprobe_update_arg(&ep->tp.args[i]);
++
+ return ret;
+ }
+
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index a114549720d63..61e3a2620fa3c 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -157,7 +157,7 @@ static void perf_trace_event_unreg(struct perf_event *p_event)
+ int i;
+
+ if (--tp_event->perf_refcount > 0)
+- goto out;
++ return;
+
+ tp_event->class->reg(tp_event, TRACE_REG_PERF_UNREGISTER, NULL);
+
+@@ -176,8 +176,6 @@ static void perf_trace_event_unreg(struct perf_event *p_event)
+ perf_trace_buf[i] = NULL;
+ }
+ }
+-out:
+- trace_event_put_ref(tp_event);
+ }
+
+ static int perf_trace_event_open(struct perf_event *p_event)
+@@ -241,6 +239,7 @@ void perf_trace_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+ }
+
+@@ -292,6 +291,7 @@ void perf_kprobe_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+
+ destroy_local_trace_kprobe(p_event->tp_event);
+@@ -347,6 +347,7 @@ void perf_uprobe_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+ destroy_local_trace_uprobe(p_event->tp_event);
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 181f08186d32c..0356cae0cf74e 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -176,6 +176,7 @@ static int trace_define_generic_fields(void)
+
+ __generic_field(int, CPU, FILTER_CPU);
+ __generic_field(int, cpu, FILTER_CPU);
++ __generic_field(int, common_cpu, FILTER_CPU);
+ __generic_field(char *, COMM, FILTER_COMM);
+ __generic_field(char *, comm, FILTER_COMM);
+
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 80863c6508e5e..d7626c936b986 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -279,7 +279,14 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ int ret = 0;
+ int len;
+
+- if (strcmp(arg, "retval") == 0) {
++ if (flags & TPARG_FL_TPOINT) {
++ if (code->data)
++ return -EFAULT;
++ code->data = kstrdup(arg, GFP_KERNEL);
++ if (!code->data)
++ return -ENOMEM;
++ code->op = FETCH_OP_TP_ARG;
++ } else if (strcmp(arg, "retval") == 0) {
+ if (flags & TPARG_FL_RETURN) {
+ code->op = FETCH_OP_RETVAL;
+ } else {
+@@ -303,7 +310,7 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ }
+ } else
+ goto inval_var;
+- } else if (strcmp(arg, "comm") == 0) {
++ } else if (strcmp(arg, "comm") == 0 || strcmp(arg, "COMM") == 0) {
+ code->op = FETCH_OP_COMM;
+ #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+ } else if (((flags & TPARG_FL_MASK) ==
+@@ -319,13 +326,6 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ code->op = FETCH_OP_ARG;
+ code->param = (unsigned int)param - 1;
+ #endif
+- } else if (flags & TPARG_FL_TPOINT) {
+- if (code->data)
+- return -EFAULT;
+- code->data = kstrdup(arg, GFP_KERNEL);
+- if (!code->data)
+- return -ENOMEM;
+- code->op = FETCH_OP_TP_ARG;
+ } else
+ goto inval_var;
+
+@@ -380,6 +380,11 @@ parse_probe_arg(char *arg, const struct fetch_type *type,
+ break;
+
+ case '%': /* named register */
++ if (flags & TPARG_FL_TPOINT) {
++ /* eprobes do not handle registers */
++ trace_probe_log_err(offs, BAD_VAR);
++ break;
++ }
+ ret = regs_query_register_offset(arg + 1);
+ if (ret >= 0) {
+ code->op = FETCH_OP_REG;
+@@ -613,9 +618,11 @@ static int traceprobe_parse_probe_arg_body(const char *argv, ssize_t *size,
+
+ /*
+ * Since $comm and immediate string can not be dereferenced,
+- * we can find those by strcmp.
++ * we can find those by strcmp. But ignore for eprobes.
+ */
+- if (strcmp(arg, "$comm") == 0 || strncmp(arg, "\\\"", 2) == 0) {
++ if (!(flags & TPARG_FL_TPOINT) &&
++ (strcmp(arg, "$comm") == 0 || strcmp(arg, "$COMM") == 0 ||
++ strncmp(arg, "\\\"", 2) == 0)) {
+ /* The type of $comm must be "string", and not an array. */
+ if (parg->count || (t && strcmp(t, "string")))
+ goto out;
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index ecb0e8346e653..8e61f21e7e33e 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -537,7 +537,7 @@ int lockup_detector_offline_cpu(unsigned int cpu)
+ return 0;
+ }
+
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ cpus_read_lock();
+ watchdog_nmi_stop();
+@@ -557,6 +557,13 @@ static void lockup_detector_reconfigure(void)
+ __lockup_detector_cleanup();
+ }
+
++void lockup_detector_reconfigure(void)
++{
++ mutex_lock(&watchdog_mutex);
++ __lockup_detector_reconfigure();
++ mutex_unlock(&watchdog_mutex);
++}
++
+ /*
+ * Create the watchdog infrastructure and configure the detector(s).
+ */
+@@ -573,13 +580,13 @@ static __init void lockup_detector_setup(void)
+ return;
+
+ mutex_lock(&watchdog_mutex);
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ softlockup_initialized = true;
+ mutex_unlock(&watchdog_mutex);
+ }
+
+ #else /* CONFIG_SOFTLOCKUP_DETECTOR */
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ cpus_read_lock();
+ watchdog_nmi_stop();
+@@ -587,9 +594,13 @@ static void lockup_detector_reconfigure(void)
+ watchdog_nmi_start();
+ cpus_read_unlock();
+ }
++void lockup_detector_reconfigure(void)
++{
++ __lockup_detector_reconfigure();
++}
+ static inline void lockup_detector_setup(void)
+ {
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+
+@@ -629,7 +640,7 @@ static void proc_watchdog_update(void)
+ {
+ /* Remove impossible cpus to keep sysctl output clean. */
+ cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask);
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ }
+
+ /*
+diff --git a/lib/list_debug.c b/lib/list_debug.c
+index 9daa3fb9d1cd6..d98d43f80958b 100644
+--- a/lib/list_debug.c
++++ b/lib/list_debug.c
+@@ -20,7 +20,11 @@
+ bool __list_add_valid(struct list_head *new, struct list_head *prev,
+ struct list_head *next)
+ {
+- if (CHECK_DATA_CORRUPTION(next->prev != prev,
++ if (CHECK_DATA_CORRUPTION(prev == NULL,
++ "list_add corruption. prev is NULL.\n") ||
++ CHECK_DATA_CORRUPTION(next == NULL,
++ "list_add corruption. next is NULL.\n") ||
++ CHECK_DATA_CORRUPTION(next->prev != prev,
+ "list_add corruption. next->prev should be prev (%px), but was %px. (next=%px).\n",
+ prev, next->prev, next) ||
+ CHECK_DATA_CORRUPTION(prev->next != next,
+@@ -42,7 +46,11 @@ bool __list_del_entry_valid(struct list_head *entry)
+ prev = entry->prev;
+ next = entry->next;
+
+- if (CHECK_DATA_CORRUPTION(next == LIST_POISON1,
++ if (CHECK_DATA_CORRUPTION(next == NULL,
++ "list_del corruption, %px->next is NULL\n", entry) ||
++ CHECK_DATA_CORRUPTION(prev == NULL,
++ "list_del corruption, %px->prev is NULL\n", entry) ||
++ CHECK_DATA_CORRUPTION(next == LIST_POISON1,
+ "list_del corruption, %px->next is LIST_POISON1 (%px)\n",
+ entry, LIST_POISON1) ||
+ CHECK_DATA_CORRUPTION(prev == LIST_POISON2,
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index f5ecfdcf57b22..b670ba03a675c 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -178,7 +178,10 @@ activate_next:
+ if (!first)
+ return;
+
+- if (WARN_ON_ONCE(j1939_session_activate(first))) {
++ if (j1939_session_activate(first)) {
++ netdev_warn_once(first->priv->ndev,
++ "%s: 0x%p: Identical session is already activated.\n",
++ __func__, first);
+ first->err = -EBUSY;
+ goto activate_next;
+ } else {
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 307ee1174a6e2..d7d86c944d76d 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -260,6 +260,8 @@ static void __j1939_session_drop(struct j1939_session *session)
+
+ static void j1939_session_destroy(struct j1939_session *session)
+ {
++ struct sk_buff *skb;
++
+ if (session->transmission) {
+ if (session->err)
+ j1939_sk_errqueue(session, J1939_ERRQUEUE_TX_ABORT);
+@@ -274,7 +276,11 @@ static void j1939_session_destroy(struct j1939_session *session)
+ WARN_ON_ONCE(!list_empty(&session->sk_session_queue_entry));
+ WARN_ON_ONCE(!list_empty(&session->active_session_list_entry));
+
+- skb_queue_purge(&session->skb_queue);
++ while ((skb = skb_dequeue(&session->skb_queue)) != NULL) {
++ /* drop ref taken in j1939_session_skb_queue() */
++ skb_unref(skb);
++ kfree_skb(skb);
++ }
+ __j1939_session_drop(session);
+ j1939_priv_put(session->priv);
+ kfree(session);
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index a25ec93729b97..1b7f385643b4c 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -875,10 +875,18 @@ static int bpf_iter_init_sk_storage_map(void *priv_data,
+ {
+ struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ seq_info->map = aux->map;
+ return 0;
+ }
+
++static void bpf_iter_fini_sk_storage_map(void *priv_data)
++{
++ struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
++
++ bpf_map_put_with_uref(seq_info->map);
++}
++
+ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ union bpf_iter_link_info *linfo,
+ struct bpf_iter_aux_info *aux)
+@@ -896,7 +904,7 @@ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
+ goto put_map;
+
+- if (prog->aux->max_rdonly_access > map->value_size) {
++ if (prog->aux->max_rdwr_access > map->value_size) {
+ err = -EACCES;
+ goto put_map;
+ }
+@@ -924,7 +932,7 @@ static const struct seq_operations bpf_sk_storage_map_seq_ops = {
+ static const struct bpf_iter_seq_info iter_seq_info = {
+ .seq_ops = &bpf_sk_storage_map_seq_ops,
+ .init_seq_private = bpf_iter_init_sk_storage_map,
+- .fini_seq_private = NULL,
++ .fini_seq_private = bpf_iter_fini_sk_storage_map,
+ .seq_priv_size = sizeof(struct bpf_iter_seq_sk_storage_map_info),
+ };
+
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 5cc88490f18fd..5e36723cbc21f 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -4943,7 +4943,7 @@ static int devlink_param_get(struct devlink *devlink,
+ const struct devlink_param *param,
+ struct devlink_param_gset_ctx *ctx)
+ {
+- if (!param->get)
++ if (!param->get || devlink->reload_failed)
+ return -EOPNOTSUPP;
+ return param->get(devlink, param->id, ctx);
+ }
+@@ -4952,7 +4952,7 @@ static int devlink_param_set(struct devlink *devlink,
+ const struct devlink_param *param,
+ struct devlink_param_gset_ctx *ctx)
+ {
+- if (!param->set)
++ if (!param->set || devlink->reload_failed)
+ return -EOPNOTSUPP;
+ return param->set(devlink, param->id, ctx);
+ }
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index a10335b4ba2d0..c8d137ef5980e 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -345,7 +345,7 @@ static void gnet_stats_add_queue_cpu(struct gnet_stats_queue *qstats,
+ for_each_possible_cpu(i) {
+ const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
+
+- qstats->qlen += qcpu->backlog;
++ qstats->qlen += qcpu->qlen;
+ qstats->backlog += qcpu->backlog;
+ qstats->drops += qcpu->drops;
+ qstats->requeues += qcpu->requeues;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index ac45328607f77..4b5b15c684ed6 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -6070,6 +6070,7 @@ static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (kind == RTNL_KIND_DEL && (nlh->nlmsg_flags & NLM_F_BULK) &&
+ !(flags & RTNL_FLAG_BULK_DEL_SUPPORTED)) {
+ NL_SET_ERR_MSG(extack, "Bulk delete is not supported");
++ module_put(owner);
+ goto err_unlock;
+ }
+
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 81d4b4756a02d..b8ba578c5a504 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -783,13 +783,22 @@ static int sock_map_init_seq_private(void *priv_data,
+ {
+ struct sock_map_seq_info *info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ info->map = aux->map;
+ return 0;
+ }
+
++static void sock_map_fini_seq_private(void *priv_data)
++{
++ struct sock_map_seq_info *info = priv_data;
++
++ bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
+ .seq_ops = &sock_map_seq_ops,
+ .init_seq_private = sock_map_init_seq_private,
++ .fini_seq_private = sock_map_fini_seq_private,
+ .seq_priv_size = sizeof(struct sock_map_seq_info),
+ };
+
+@@ -1369,18 +1378,27 @@ static const struct seq_operations sock_hash_seq_ops = {
+ };
+
+ static int sock_hash_init_seq_private(void *priv_data,
+- struct bpf_iter_aux_info *aux)
++ struct bpf_iter_aux_info *aux)
+ {
+ struct sock_hash_seq_info *info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ info->map = aux->map;
+ info->htab = container_of(aux->map, struct bpf_shtab, map);
+ return 0;
+ }
+
++static void sock_hash_fini_seq_private(void *priv_data)
++{
++ struct sock_hash_seq_info *info = priv_data;
++
++ bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
+ .seq_ops = &sock_hash_seq_ops,
+ .init_seq_private = sock_hash_init_seq_private,
++ .fini_seq_private = sock_hash_fini_seq_private,
+ .seq_priv_size = sizeof(struct sock_hash_seq_info),
+ };
+
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 2dd76eb1621c7..a8895ee3cd600 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -145,11 +145,14 @@ int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age)
+ static void dsa_port_set_state_now(struct dsa_port *dp, u8 state,
+ bool do_fast_age)
+ {
++ struct dsa_switch *ds = dp->ds;
+ int err;
+
+ err = dsa_port_set_state(dp, state, do_fast_age);
+- if (err)
+- pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
++ if (err && err != -EOPNOTSUPP) {
++ dev_err(ds->dev, "port %d failed to set STP state %u: %pe\n",
++ dp->index, state, ERR_PTR(err));
++ }
+ }
+
+ int dsa_port_set_mst_state(struct dsa_port *dp,
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 77e3f5970ce48..ec62f472aa1cb 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1311,8 +1311,7 @@ struct dst_entry *ip6_dst_lookup_tunnel(struct sk_buff *skb,
+ fl6.daddr = info->key.u.ipv6.dst;
+ fl6.saddr = info->key.u.ipv6.src;
+ prio = info->key.tos;
+- fl6.flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+- info->key.label);
++ fl6.flowlabel = ip6_make_flowinfo(prio, info->key.label);
+
+ dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6,
+ NULL);
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index b0dfe97ea4eed..7ad4542ecc605 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1358,6 +1358,9 @@ static void ndisc_router_discovery(struct sk_buff *skb)
+ if (!rt && lifetime) {
+ ND_PRINTK(3, info, "RA: adding default router\n");
+
++ if (neigh)
++ neigh_release(neigh);
++
+ rt = rt6_add_dflt_router(net, &ipv6_hdr(skb)->saddr,
+ skb->dev, pref, defrtr_usr_metric);
+ if (!rt) {
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8ffb8aabd3244..3d90fa9653ef3 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1276,6 +1276,9 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ info->limit > dfrag->data_len))
+ return 0;
+
++ if (unlikely(!__tcp_can_send(ssk)))
++ return -EAGAIN;
++
+ /* compute send limit */
+ info->mss_now = tcp_send_mss(ssk, &info->size_goal, info->flags);
+ copy = info->size_goal;
+@@ -1449,7 +1452,8 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
+ if (__mptcp_check_fallback(msk)) {
+ if (!msk->first)
+ return NULL;
+- return sk_stream_memory_free(msk->first) ? msk->first : NULL;
++ return __tcp_can_send(msk->first) &&
++ sk_stream_memory_free(msk->first) ? msk->first : NULL;
+ }
+
+ /* re-use last subflow, if the burst allow that */
+@@ -1600,6 +1604,8 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
+
+ ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ if (ret <= 0) {
++ if (ret == -EAGAIN)
++ continue;
+ mptcp_push_release(ssk, &info);
+ goto out;
+ }
+@@ -2805,30 +2811,16 @@ static void __mptcp_wr_shutdown(struct sock *sk)
+
+ static void __mptcp_destroy_sock(struct sock *sk)
+ {
+- struct mptcp_subflow_context *subflow, *tmp;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+- LIST_HEAD(conn_list);
+
+ pr_debug("msk=%p", msk);
+
+ might_sleep();
+
+- /* join list will be eventually flushed (with rst) at sock lock release time*/
+- list_splice_init(&msk->conn_list, &conn_list);
+-
+ mptcp_stop_timer(sk);
+ sk_stop_timer(sk, &sk->sk_timer);
+ msk->pm.status = 0;
+
+- /* clears msk->subflow, allowing the following loop to close
+- * even the initial subflow
+- */
+- mptcp_dispose_initial_subflow(msk);
+- list_for_each_entry_safe(subflow, tmp, &conn_list, node) {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+- __mptcp_close_ssk(sk, ssk, subflow, 0);
+- }
+-
+ sk->sk_prot->destroy(sk);
+
+ WARN_ON_ONCE(msk->rmem_fwd_alloc);
+@@ -2920,24 +2912,20 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
+
+ static int mptcp_disconnect(struct sock *sk, int flags)
+ {
+- struct mptcp_subflow_context *subflow, *tmp;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+ inet_sk_state_store(sk, TCP_CLOSE);
+
+- list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+-
+- __mptcp_close_ssk(sk, ssk, subflow, MPTCP_CF_FASTCLOSE);
+- }
+-
+ mptcp_stop_timer(sk);
+ sk_stop_timer(sk, &sk->sk_timer);
+
+ if (mptcp_sk(sk)->token)
+ mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL);
+
+- mptcp_destroy_common(msk);
++ /* msk->subflow is still intact, the following will not free the first
++ * subflow
++ */
++ mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
+ msk->last_snd = NULL;
+ WRITE_ONCE(msk->flags, 0);
+ msk->cb_flags = 0;
+@@ -3087,12 +3075,17 @@ out:
+ return newsk;
+ }
+
+-void mptcp_destroy_common(struct mptcp_sock *msk)
++void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
+ {
++ struct mptcp_subflow_context *subflow, *tmp;
+ struct sock *sk = (struct sock *)msk;
+
+ __mptcp_clear_xmit(sk);
+
++ /* join list will be eventually flushed (with rst) at sock lock release time */
++ list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node)
++ __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
++
+ /* move to sk_receive_queue, sk_stream_kill_queues will purge it */
+ mptcp_data_lock(sk);
+ skb_queue_splice_tail_init(&msk->receive_queue, &sk->sk_receive_queue);
+@@ -3114,7 +3107,11 @@ static void mptcp_destroy(struct sock *sk)
+ {
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+- mptcp_destroy_common(msk);
++ /* clears msk->subflow, allowing the following to close
++ * even the initial subflow
++ */
++ mptcp_dispose_initial_subflow(msk);
++ mptcp_destroy_common(msk, 0);
+ sk_sockets_allocated_dec(sk);
+ }
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 480c5320b86e5..092154d5bc752 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -625,16 +625,19 @@ void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
+ struct sockaddr_storage *addr,
+ unsigned short family);
+
+-static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
++static inline bool __tcp_can_send(const struct sock *ssk)
+ {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++ /* only send if our side has not closed yet */
++ return ((1 << inet_sk_state_load(ssk)) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
++}
+
++static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
++{
+ /* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
+ if (subflow->request_join && !subflow->fully_established)
+ return false;
+
+- /* only send if our side has not closed yet */
+- return ((1 << ssk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
++ return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
+ }
+
+ void mptcp_subflow_set_active(struct mptcp_subflow_context *subflow);
+@@ -718,7 +721,7 @@ static inline void mptcp_write_space(struct sock *sk)
+ }
+ }
+
+-void mptcp_destroy_common(struct mptcp_sock *msk);
++void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
+
+ #define MPTCP_TOKEN_MAX_RETRIES 4
+
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index af28f3b603899..ac41b55b0a81a 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -621,7 +621,8 @@ static void mptcp_sock_destruct(struct sock *sk)
+ sock_orphan(sk);
+ }
+
+- mptcp_destroy_common(mptcp_sk(sk));
++ /* We don't need to clear msk->subflow, as it's still NULL at this point */
++ mptcp_destroy_common(mptcp_sk(sk), 0);
+ inet_sock_destruct(sk);
+ }
+
+diff --git a/net/netfilter/nf_conntrack_ftp.c b/net/netfilter/nf_conntrack_ftp.c
+index a414274338cff..0d9332e9cf71a 100644
+--- a/net/netfilter/nf_conntrack_ftp.c
++++ b/net/netfilter/nf_conntrack_ftp.c
+@@ -34,11 +34,6 @@ MODULE_DESCRIPTION("ftp connection tracking helper");
+ MODULE_ALIAS("ip_conntrack_ftp");
+ MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
+
+-/* This is slow, but it's simple. --RR */
+-static char *ftp_buffer;
+-
+-static DEFINE_SPINLOCK(nf_ftp_lock);
+-
+ #define MAX_PORTS 8
+ static u_int16_t ports[MAX_PORTS];
+ static unsigned int ports_c;
+@@ -398,6 +393,9 @@ static int help(struct sk_buff *skb,
+ return NF_ACCEPT;
+ }
+
++ if (unlikely(skb_linearize(skb)))
++ return NF_DROP;
++
+ th = skb_header_pointer(skb, protoff, sizeof(_tcph), &_tcph);
+ if (th == NULL)
+ return NF_ACCEPT;
+@@ -411,12 +409,8 @@ static int help(struct sk_buff *skb,
+ }
+ datalen = skb->len - dataoff;
+
+- spin_lock_bh(&nf_ftp_lock);
+- fb_ptr = skb_header_pointer(skb, dataoff, datalen, ftp_buffer);
+- if (!fb_ptr) {
+- spin_unlock_bh(&nf_ftp_lock);
+- return NF_ACCEPT;
+- }
++ spin_lock_bh(&ct->lock);
++ fb_ptr = skb->data + dataoff;
+
+ ends_in_nl = (fb_ptr[datalen - 1] == '\n');
+ seq = ntohl(th->seq) + datalen;
+@@ -544,7 +538,7 @@ out_update_nl:
+ if (ends_in_nl)
+ update_nl_seq(ct, seq, ct_ftp_info, dir, skb);
+ out:
+- spin_unlock_bh(&nf_ftp_lock);
++ spin_unlock_bh(&ct->lock);
+ return ret;
+ }
+
+@@ -571,7 +565,6 @@ static const struct nf_conntrack_expect_policy ftp_exp_policy = {
+ static void __exit nf_conntrack_ftp_fini(void)
+ {
+ nf_conntrack_helpers_unregister(ftp, ports_c * 2);
+- kfree(ftp_buffer);
+ }
+
+ static int __init nf_conntrack_ftp_init(void)
+@@ -580,10 +573,6 @@ static int __init nf_conntrack_ftp_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_ftp_master));
+
+- ftp_buffer = kmalloc(65536, GFP_KERNEL);
+- if (!ftp_buffer)
+- return -ENOMEM;
+-
+ if (ports_c == 0)
+ ports[ports_c++] = FTP_PORT;
+
+@@ -603,7 +592,6 @@ static int __init nf_conntrack_ftp_init(void)
+ ret = nf_conntrack_helpers_register(ftp, ports_c * 2);
+ if (ret < 0) {
+ pr_err("failed to register helpers\n");
+- kfree(ftp_buffer);
+ return ret;
+ }
+
+diff --git a/net/netfilter/nf_conntrack_h323_main.c b/net/netfilter/nf_conntrack_h323_main.c
+index 2eb31ffb3d141..04479d0aab8dd 100644
+--- a/net/netfilter/nf_conntrack_h323_main.c
++++ b/net/netfilter/nf_conntrack_h323_main.c
+@@ -34,6 +34,8 @@
+ #include <net/netfilter/nf_conntrack_zones.h>
+ #include <linux/netfilter/nf_conntrack_h323.h>
+
++#define H323_MAX_SIZE 65535
++
+ /* Parameters */
+ static unsigned int default_rrq_ttl __read_mostly = 300;
+ module_param(default_rrq_ttl, uint, 0600);
+@@ -142,6 +144,9 @@ static int get_tpkt_data(struct sk_buff *skb, unsigned int protoff,
+ if (tcpdatalen <= 0) /* No TCP data */
+ goto clear_out;
+
++ if (tcpdatalen > H323_MAX_SIZE)
++ tcpdatalen = H323_MAX_SIZE;
++
+ if (*data == NULL) { /* first TPKT */
+ /* Get first TPKT pointer */
+ tpkt = skb_header_pointer(skb, tcpdataoff, tcpdatalen,
+@@ -1220,6 +1225,9 @@ static unsigned char *get_udp_data(struct sk_buff *skb, unsigned int protoff,
+ if (dataoff >= skb->len)
+ return NULL;
+ *datalen = skb->len - dataoff;
++ if (*datalen > H323_MAX_SIZE)
++ *datalen = H323_MAX_SIZE;
++
+ return skb_header_pointer(skb, dataoff, *datalen, h323_buffer);
+ }
+
+@@ -1821,7 +1829,7 @@ static int __init nf_conntrack_h323_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_h323_master));
+
+- h323_buffer = kmalloc(65536, GFP_KERNEL);
++ h323_buffer = kmalloc(H323_MAX_SIZE + 1, GFP_KERNEL);
+ if (!h323_buffer)
+ return -ENOMEM;
+ ret = h323_helper_init();
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index 08ee4e760a3d2..1796c456ac98b 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -39,6 +39,7 @@ unsigned int (*nf_nat_irc_hook)(struct sk_buff *skb,
+ EXPORT_SYMBOL_GPL(nf_nat_irc_hook);
+
+ #define HELPER_NAME "irc"
++#define MAX_SEARCH_SIZE 4095
+
+ MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
+ MODULE_DESCRIPTION("IRC (DCC) connection tracking helper");
+@@ -121,6 +122,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ int i, ret = NF_ACCEPT;
+ char *addr_beg_p, *addr_end_p;
+ typeof(nf_nat_irc_hook) nf_nat_irc;
++ unsigned int datalen;
+
+ /* If packet is coming from IRC server */
+ if (dir == IP_CT_DIR_REPLY)
+@@ -140,8 +142,12 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ if (dataoff >= skb->len)
+ return NF_ACCEPT;
+
++ datalen = skb->len - dataoff;
++ if (datalen > MAX_SEARCH_SIZE)
++ datalen = MAX_SEARCH_SIZE;
++
+ spin_lock_bh(&irc_buffer_lock);
+- ib_ptr = skb_header_pointer(skb, dataoff, skb->len - dataoff,
++ ib_ptr = skb_header_pointer(skb, dataoff, datalen,
+ irc_buffer);
+ if (!ib_ptr) {
+ spin_unlock_bh(&irc_buffer_lock);
+@@ -149,7 +155,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ }
+
+ data = ib_ptr;
+- data_limit = ib_ptr + skb->len - dataoff;
++ data_limit = ib_ptr + datalen;
+
+ /* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
+ * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
+@@ -251,7 +257,7 @@ static int __init nf_conntrack_irc_init(void)
+ irc_exp_policy.max_expected = max_dcc_channels;
+ irc_exp_policy.timeout = dcc_timeout;
+
+- irc_buffer = kmalloc(65536, GFP_KERNEL);
++ irc_buffer = kmalloc(MAX_SEARCH_SIZE + 1, GFP_KERNEL);
+ if (!irc_buffer)
+ return -ENOMEM;
+
+diff --git a/net/netfilter/nf_conntrack_sane.c b/net/netfilter/nf_conntrack_sane.c
+index fcb33b1d5456d..13dc421fc4f52 100644
+--- a/net/netfilter/nf_conntrack_sane.c
++++ b/net/netfilter/nf_conntrack_sane.c
+@@ -34,10 +34,6 @@ MODULE_AUTHOR("Michal Schmidt <mschmidt@redhat.com>");
+ MODULE_DESCRIPTION("SANE connection tracking helper");
+ MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
+
+-static char *sane_buffer;
+-
+-static DEFINE_SPINLOCK(nf_sane_lock);
+-
+ #define MAX_PORTS 8
+ static u_int16_t ports[MAX_PORTS];
+ static unsigned int ports_c;
+@@ -67,14 +63,16 @@ static int help(struct sk_buff *skb,
+ unsigned int dataoff, datalen;
+ const struct tcphdr *th;
+ struct tcphdr _tcph;
+- void *sb_ptr;
+ int ret = NF_ACCEPT;
+ int dir = CTINFO2DIR(ctinfo);
+ struct nf_ct_sane_master *ct_sane_info = nfct_help_data(ct);
+ struct nf_conntrack_expect *exp;
+ struct nf_conntrack_tuple *tuple;
+- struct sane_request *req;
+ struct sane_reply_net_start *reply;
++ union {
++ struct sane_request req;
++ struct sane_reply_net_start repl;
++ } buf;
+
+ /* Until there's been traffic both ways, don't look in packets. */
+ if (ctinfo != IP_CT_ESTABLISHED &&
+@@ -92,59 +90,62 @@ static int help(struct sk_buff *skb,
+ return NF_ACCEPT;
+
+ datalen = skb->len - dataoff;
+-
+- spin_lock_bh(&nf_sane_lock);
+- sb_ptr = skb_header_pointer(skb, dataoff, datalen, sane_buffer);
+- if (!sb_ptr) {
+- spin_unlock_bh(&nf_sane_lock);
+- return NF_ACCEPT;
+- }
+-
+ if (dir == IP_CT_DIR_ORIGINAL) {
++ const struct sane_request *req;
++
+ if (datalen != sizeof(struct sane_request))
+- goto out;
++ return NF_ACCEPT;
++
++ req = skb_header_pointer(skb, dataoff, datalen, &buf.req);
++ if (!req)
++ return NF_ACCEPT;
+
+- req = sb_ptr;
+ if (req->RPC_code != htonl(SANE_NET_START)) {
+ /* Not an interesting command */
+- ct_sane_info->state = SANE_STATE_NORMAL;
+- goto out;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
++ return NF_ACCEPT;
+ }
+
+ /* We're interested in the next reply */
+- ct_sane_info->state = SANE_STATE_START_REQUESTED;
+- goto out;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_START_REQUESTED);
++ return NF_ACCEPT;
+ }
+
++ /* IP_CT_DIR_REPLY */
++
+ /* Is it a reply to an uninteresting command? */
+- if (ct_sane_info->state != SANE_STATE_START_REQUESTED)
+- goto out;
++ if (READ_ONCE(ct_sane_info->state) != SANE_STATE_START_REQUESTED)
++ return NF_ACCEPT;
+
+ /* It's a reply to SANE_NET_START. */
+- ct_sane_info->state = SANE_STATE_NORMAL;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
+
+ if (datalen < sizeof(struct sane_reply_net_start)) {
+ pr_debug("NET_START reply too short\n");
+- goto out;
++ return NF_ACCEPT;
+ }
+
+- reply = sb_ptr;
++ datalen = sizeof(struct sane_reply_net_start);
++
++ reply = skb_header_pointer(skb, dataoff, datalen, &buf.repl);
++ if (!reply)
++ return NF_ACCEPT;
++
+ if (reply->status != htonl(SANE_STATUS_SUCCESS)) {
+ /* saned refused the command */
+ pr_debug("unsuccessful SANE_STATUS = %u\n",
+ ntohl(reply->status));
+- goto out;
++ return NF_ACCEPT;
+ }
+
+ /* Invalid saned reply? Ignore it. */
+ if (reply->zero != 0)
+- goto out;
++ return NF_ACCEPT;
+
+ exp = nf_ct_expect_alloc(ct);
+ if (exp == NULL) {
+ nf_ct_helper_log(skb, ct, "cannot alloc expectation");
+- ret = NF_DROP;
+- goto out;
++ return NF_DROP;
+ }
+
+ tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+@@ -162,9 +163,6 @@ static int help(struct sk_buff *skb,
+ }
+
+ nf_ct_expect_put(exp);
+-
+-out:
+- spin_unlock_bh(&nf_sane_lock);
+ return ret;
+ }
+
+@@ -178,7 +176,6 @@ static const struct nf_conntrack_expect_policy sane_exp_policy = {
+ static void __exit nf_conntrack_sane_fini(void)
+ {
+ nf_conntrack_helpers_unregister(sane, ports_c * 2);
+- kfree(sane_buffer);
+ }
+
+ static int __init nf_conntrack_sane_init(void)
+@@ -187,10 +184,6 @@ static int __init nf_conntrack_sane_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_sane_master));
+
+- sane_buffer = kmalloc(65536, GFP_KERNEL);
+- if (!sane_buffer)
+- return -ENOMEM;
+-
+ if (ports_c == 0)
+ ports[ports_c++] = SANE_PORT;
+
+@@ -210,7 +203,6 @@ static int __init nf_conntrack_sane_init(void)
+ ret = nf_conntrack_helpers_register(sane, ports_c * 2);
+ if (ret < 0) {
+ pr_err("failed to register helpers\n");
+- kfree(sane_buffer);
+ return ret;
+ }
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f4d2a5f277952..4bd6e9427c918 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -889,7 +889,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -1705,7 +1705,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3149,7 +3149,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3907,7 +3907,7 @@ cont:
+ list_for_each_entry(i, &ctx->table->sets, list) {
+ int tmp;
+
+- if (!nft_is_active_next(ctx->net, set))
++ if (!nft_is_active_next(ctx->net, i))
+ continue;
+ if (!sscanf(i->name, name, &tmp))
+ continue;
+@@ -4133,7 +4133,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (ctx->family != NFPROTO_UNSPEC &&
+@@ -4451,6 +4451,11 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ err = nf_tables_set_desc_parse(&desc, nla[NFTA_SET_DESC]);
+ if (err < 0)
+ return err;
++
++ if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
++ return -EINVAL;
++ } else if (flags & NFT_SET_CONCAT) {
++ return -EINVAL;
+ }
+
+ if (nla[NFTA_SET_EXPR] || nla[NFTA_SET_EXPRESSIONS])
+@@ -5061,6 +5066,8 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
++ cb->seq = READ_ONCE(nft_net->base_seq);
++
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+ dump_ctx->ctx.family != table->family)
+@@ -5196,6 +5203,9 @@ static int nft_setelem_parse_flags(const struct nft_set *set,
+ if (!(set->flags & NFT_SET_INTERVAL) &&
+ *flags & NFT_SET_ELEM_INTERVAL_END)
+ return -EINVAL;
++ if ((*flags & (NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL)) ==
++ (NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL))
++ return -EINVAL;
+
+ return 0;
+ }
+@@ -5564,7 +5574,7 @@ int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
+
+ err = nft_expr_clone(expr, set->exprs[i]);
+ if (err < 0) {
+- nft_expr_destroy(ctx, expr);
++ kfree(expr);
+ goto err_expr;
+ }
+ expr_array[i] = expr;
+@@ -5796,6 +5806,24 @@ static void nft_setelem_remove(const struct net *net,
+ set->ops->remove(net, set, elem);
+ }
+
++static bool nft_setelem_valid_key_end(const struct nft_set *set,
++ struct nlattr **nla, u32 flags)
++{
++ if ((set->flags & (NFT_SET_CONCAT | NFT_SET_INTERVAL)) ==
++ (NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
++ if (flags & NFT_SET_ELEM_INTERVAL_END)
++ return false;
++ if (!nla[NFTA_SET_ELEM_KEY_END] &&
++ !(flags & NFT_SET_ELEM_CATCHALL))
++ return false;
++ } else {
++ if (nla[NFTA_SET_ELEM_KEY_END])
++ return false;
++ }
++
++ return true;
++}
++
+ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ const struct nlattr *attr, u32 nlmsg_flags)
+ {
+@@ -5846,6 +5874,18 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ return -EINVAL;
+ }
+
++ if (set->flags & NFT_SET_OBJECT) {
++ if (!nla[NFTA_SET_ELEM_OBJREF] &&
++ !(flags & NFT_SET_ELEM_INTERVAL_END))
++ return -EINVAL;
++ } else {
++ if (nla[NFTA_SET_ELEM_OBJREF])
++ return -EINVAL;
++ }
++
++ if (!nft_setelem_valid_key_end(set, nla, flags))
++ return -EINVAL;
++
+ if ((flags & NFT_SET_ELEM_INTERVAL_END) &&
+ (nla[NFTA_SET_ELEM_DATA] ||
+ nla[NFTA_SET_ELEM_OBJREF] ||
+@@ -5853,6 +5893,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ nla[NFTA_SET_ELEM_EXPIRATION] ||
+ nla[NFTA_SET_ELEM_USERDATA] ||
+ nla[NFTA_SET_ELEM_EXPR] ||
++ nla[NFTA_SET_ELEM_KEY_END] ||
+ nla[NFTA_SET_ELEM_EXPRESSIONS]))
+ return -EINVAL;
+
+@@ -5983,10 +6024,6 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ }
+
+ if (nla[NFTA_SET_ELEM_OBJREF] != NULL) {
+- if (!(set->flags & NFT_SET_OBJECT)) {
+- err = -EINVAL;
+- goto err_parse_key_end;
+- }
+ obj = nft_obj_lookup(ctx->net, ctx->table,
+ nla[NFTA_SET_ELEM_OBJREF],
+ set->objtype, genmask);
+@@ -6273,6 +6310,9 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
+ if (!nla[NFTA_SET_ELEM_KEY] && !(flags & NFT_SET_ELEM_CATCHALL))
+ return -EINVAL;
+
++ if (!nft_setelem_valid_key_end(set, nla, flags))
++ return -EINVAL;
++
+ nft_set_ext_prepare(&tmpl);
+
+ if (flags != 0) {
+@@ -6887,7 +6927,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -7819,7 +7859,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -8752,6 +8792,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ struct nft_trans_elem *te;
+ struct nft_chain *chain;
+ struct nft_table *table;
++ unsigned int base_seq;
+ LIST_HEAD(adl);
+ int err;
+
+@@ -8801,9 +8842,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ * Bump generation counter, invalidate any dump in progress.
+ * Cannot fail after this point.
+ */
+- while (++nft_net->base_seq == 0)
++ base_seq = READ_ONCE(nft_net->base_seq);
++ while (++base_seq == 0)
+ ;
+
++ WRITE_ONCE(nft_net->base_seq, base_seq);
++
+ /* step 3. Start new generation, rules_gen_X now in use. */
+ net->nft.gencursor = nft_gencursor_next(net);
+
+@@ -9365,13 +9409,9 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
+ break;
+ }
+ }
+-
+- cond_resched();
+ }
+
+ list_for_each_entry(set, &ctx->table->sets, list) {
+- cond_resched();
+-
+ if (!nft_is_active_next(ctx->net, set))
+ continue;
+ if (!(set->flags & NFT_SET_MAP) ||
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 3ddce24ac76dd..cee3e4e905ec8 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -34,25 +34,23 @@ static noinline void __nft_trace_packet(struct nft_traceinfo *info,
+ nft_trace_notify(info);
+ }
+
+-static inline void nft_trace_packet(struct nft_traceinfo *info,
++static inline void nft_trace_packet(const struct nft_pktinfo *pkt,
++ struct nft_traceinfo *info,
+ const struct nft_chain *chain,
+ const struct nft_rule_dp *rule,
+ enum nft_trace_types type)
+ {
+ if (static_branch_unlikely(&nft_trace_enabled)) {
+- const struct nft_pktinfo *pkt = info->pkt;
+-
+ info->nf_trace = pkt->skb->nf_trace;
+ info->rule = rule;
+ __nft_trace_packet(info, chain, type);
+ }
+ }
+
+-static inline void nft_trace_copy_nftrace(struct nft_traceinfo *info)
++static inline void nft_trace_copy_nftrace(const struct nft_pktinfo *pkt,
++ struct nft_traceinfo *info)
+ {
+ if (static_branch_unlikely(&nft_trace_enabled)) {
+- const struct nft_pktinfo *pkt = info->pkt;
+-
+ if (info->trace)
+ info->nf_trace = pkt->skb->nf_trace;
+ }
+@@ -96,7 +94,6 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
+ const struct nft_chain *chain,
+ const struct nft_regs *regs)
+ {
+- const struct nft_pktinfo *pkt = info->pkt;
+ enum nft_trace_types type;
+
+ switch (regs->verdict.code) {
+@@ -110,7 +107,9 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
+ break;
+ default:
+ type = NFT_TRACETYPE_RULE;
+- info->nf_trace = pkt->skb->nf_trace;
++
++ if (info->trace)
++ info->nf_trace = info->pkt->skb->nf_trace;
+ break;
+ }
+
+@@ -271,10 +270,10 @@ next_rule:
+ switch (regs.verdict.code) {
+ case NFT_BREAK:
+ regs.verdict.code = NFT_CONTINUE;
+- nft_trace_copy_nftrace(&info);
++ nft_trace_copy_nftrace(pkt, &info);
+ continue;
+ case NFT_CONTINUE:
+- nft_trace_packet(&info, chain, rule,
++ nft_trace_packet(pkt, &info, chain, rule,
+ NFT_TRACETYPE_RULE);
+ continue;
+ }
+@@ -318,7 +317,7 @@ next_rule:
+ goto next_rule;
+ }
+
+- nft_trace_packet(&info, basechain, NULL, NFT_TRACETYPE_POLICY);
++ nft_trace_packet(pkt, &info, basechain, NULL, NFT_TRACETYPE_POLICY);
+
+ if (static_branch_unlikely(&nft_counters_enabled))
+ nft_update_chain_stats(basechain, pkt);
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 2f7c477fc9e70..1f38bf8fcfa87 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -44,6 +44,10 @@ MODULE_DESCRIPTION("Netfilter messages via netlink socket");
+
+ static unsigned int nfnetlink_pernet_id __read_mostly;
+
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
++static DEFINE_SPINLOCK(nfnl_grp_active_lock);
++#endif
++
+ struct nfnl_net {
+ struct sock *nfnl;
+ };
+@@ -654,6 +658,44 @@ static void nfnetlink_rcv(struct sk_buff *skb)
+ netlink_rcv_skb(skb, nfnetlink_rcv_msg);
+ }
+
++static void nfnetlink_bind_event(struct net *net, unsigned int group)
++{
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
++ int type, group_bit;
++ u8 v;
++
++ /* All NFNLGRP_CONNTRACK_* group bits fit into u8.
++ * The other groups are not relevant and can be ignored.
++ */
++ if (group >= 8)
++ return;
++
++ type = nfnl_group2type[group];
++
++ switch (type) {
++ case NFNL_SUBSYS_CTNETLINK:
++ break;
++ case NFNL_SUBSYS_CTNETLINK_EXP:
++ break;
++ default:
++ return;
++ }
++
++ group_bit = (1 << group);
++
++ spin_lock(&nfnl_grp_active_lock);
++ v = READ_ONCE(net->ct.ctnetlink_has_listener);
++ if ((v & group_bit) == 0) {
++ v |= group_bit;
++
++ /* read concurrently without nfnl_grp_active_lock held. */
++ WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
++ }
++
++ spin_unlock(&nfnl_grp_active_lock);
++#endif
++}
++
+ static int nfnetlink_bind(struct net *net, int group)
+ {
+ const struct nfnetlink_subsystem *ss;
+@@ -670,28 +712,45 @@ static int nfnetlink_bind(struct net *net, int group)
+ if (!ss)
+ request_module_nowait("nfnetlink-subsys-%d", type);
+
+-#ifdef CONFIG_NF_CONNTRACK_EVENTS
+- if (type == NFNL_SUBSYS_CTNETLINK) {
+- nfnl_lock(NFNL_SUBSYS_CTNETLINK);
+- WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
+- nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
+- }
+-#endif
++ nfnetlink_bind_event(net, group);
+ return 0;
+ }
+
+ static void nfnetlink_unbind(struct net *net, int group)
+ {
+ #ifdef CONFIG_NF_CONNTRACK_EVENTS
++ int type, group_bit;
++
+ if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
+ return;
+
+- if (nfnl_group2type[group] == NFNL_SUBSYS_CTNETLINK) {
+- nfnl_lock(NFNL_SUBSYS_CTNETLINK);
+- if (!nfnetlink_has_listeners(net, group))
+- WRITE_ONCE(net->ct.ctnetlink_has_listener, false);
+- nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
++ type = nfnl_group2type[group];
++
++ switch (type) {
++ case NFNL_SUBSYS_CTNETLINK:
++ break;
++ case NFNL_SUBSYS_CTNETLINK_EXP:
++ break;
++ default:
++ return;
++ }
++
++ /* ctnetlink_has_listener is u8 */
++ if (group >= 8)
++ return;
++
++ group_bit = (1 << group);
++
++ spin_lock(&nfnl_grp_active_lock);
++ if (!nfnetlink_has_listeners(net, group)) {
++ u8 v = READ_ONCE(net->ct.ctnetlink_has_listener);
++
++ v &= ~group_bit;
++
++ /* read concurrently without nfnl_grp_active_lock held. */
++ WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
+ }
++ spin_unlock(&nfnl_grp_active_lock);
+ #endif
+ }
+
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 1afca2a6c2ac1..57010927e20a8 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1174,13 +1174,17 @@ static int ctrl_dumppolicy_start(struct netlink_callback *cb)
+ op.policy,
+ op.maxattr);
+ if (err)
+- return err;
++ goto err_free_state;
+ }
+ }
+
+ if (!ctx->state)
+ return -ENODATA;
+ return 0;
++
++err_free_state:
++ netlink_policy_dump_free(ctx->state);
++ return err;
+ }
+
+ static void *ctrl_dumppolicy_prep(struct sk_buff *skb,
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index 8d7c900e27f4c..87e3de0fde896 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -144,7 +144,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+
+ err = add_policy(&state, policy, maxtype);
+ if (err)
+- return err;
++ goto err_try_undo;
+
+ for (policy_idx = 0;
+ policy_idx < state->n_alloc && state->policies[policy_idx].policy;
+@@ -164,7 +164,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+ policy[type].nested_policy,
+ policy[type].len);
+ if (err)
+- return err;
++ goto err_try_undo;
+ break;
+ default:
+ break;
+@@ -174,6 +174,16 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+
+ *pstate = state;
+ return 0;
++
++err_try_undo:
++ /* Try to preserve reasonable unwind semantics - if we're starting from
++ * scratch clean up fully, otherwise record what we got and caller will.
++ */
++ if (!*pstate)
++ netlink_policy_dump_free(state);
++ else
++ *pstate = state;
++ return err;
+ }
+
+ static bool
+diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
+index 18196e1c8c2fd..9ced13c0627a7 100644
+--- a/net/qrtr/mhi.c
++++ b/net/qrtr/mhi.c
+@@ -78,11 +78,6 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ struct qrtr_mhi_dev *qdev;
+ int rc;
+
+- /* start channels */
+- rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
+- if (rc)
+- return rc;
+-
+ qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL);
+ if (!qdev)
+ return -ENOMEM;
+@@ -96,6 +91,13 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ if (rc)
+ return rc;
+
++ /* start channels */
++ rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
++ if (rc) {
++ qrtr_endpoint_unregister(&qdev->ep);
++ return rc;
++ }
++
+ dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
+
+ return 0;
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index 6fdedd9dbbc28..cfbf0e129cba5 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -363,6 +363,7 @@ static int acquire_refill(struct rds_connection *conn)
+ static void release_refill(struct rds_connection *conn)
+ {
+ clear_bit(RDS_RECV_REFILL, &conn->c_flags);
++ smp_mb__after_atomic();
+
+ /* We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ * hot path and finding waiters is very rare. We don't want to walk
+diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
+index 682fcd24bf439..2324d1e58f21f 100644
+--- a/net/sunrpc/auth.c
++++ b/net/sunrpc/auth.c
+@@ -445,7 +445,7 @@ rpcauth_prune_expired(struct list_head *free, int nr_to_scan)
+ * Enforce a 60 second garbage collection moratorium
+ * Note that the cred_unused list must be time-ordered.
+ */
+- if (!time_in_range(cred->cr_expire, expired, jiffies))
++ if (time_in_range(cred->cr_expire, expired, jiffies))
+ continue;
+ if (!rpcauth_unhash_cred(cred))
+ continue;
+diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
+index 5a6b61dcdf2dc..ad8ef1fb08b4e 100644
+--- a/net/sunrpc/backchannel_rqst.c
++++ b/net/sunrpc/backchannel_rqst.c
+@@ -64,6 +64,17 @@ static void xprt_free_allocation(struct rpc_rqst *req)
+ kfree(req);
+ }
+
++static void xprt_bc_reinit_xdr_buf(struct xdr_buf *buf)
++{
++ buf->head[0].iov_len = PAGE_SIZE;
++ buf->tail[0].iov_len = 0;
++ buf->pages = NULL;
++ buf->page_len = 0;
++ buf->flags = 0;
++ buf->len = 0;
++ buf->buflen = PAGE_SIZE;
++}
++
+ static int xprt_alloc_xdr_buf(struct xdr_buf *buf, gfp_t gfp_flags)
+ {
+ struct page *page;
+@@ -292,6 +303,9 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
+ */
+ spin_lock_bh(&xprt->bc_pa_lock);
+ if (xprt_need_to_requeue(xprt)) {
++ xprt_bc_reinit_xdr_buf(&req->rq_snd_buf);
++ xprt_bc_reinit_xdr_buf(&req->rq_rcv_buf);
++ req->rq_rcv_buf.len = PAGE_SIZE;
+ list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
+ xprt->bc_alloc_count++;
+ atomic_inc(&xprt->bc_slot_count);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index b6781ada3aa8d..733f9f2260926 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1856,7 +1856,6 @@ rpc_xdr_encode(struct rpc_task *task)
+ req->rq_snd_buf.head[0].iov_len = 0;
+ xdr_init_encode(&xdr, &req->rq_snd_buf,
+ req->rq_snd_buf.head[0].iov_base, req);
+- xdr_free_bvec(&req->rq_snd_buf);
+ if (rpc_encode_header(task, &xdr))
+ return;
+
+diff --git a/net/sunrpc/sysfs.c b/net/sunrpc/sysfs.c
+index a3a2f8aeb80ea..d1a15c6d3fd9b 100644
+--- a/net/sunrpc/sysfs.c
++++ b/net/sunrpc/sysfs.c
+@@ -291,8 +291,10 @@ static ssize_t rpc_sysfs_xprt_state_change(struct kobject *kobj,
+ int offline = 0, online = 0, remove = 0;
+ struct rpc_xprt_switch *xps = rpc_sysfs_xprt_kobj_get_xprt_switch(kobj);
+
+- if (!xprt)
+- return 0;
++ if (!xprt || !xps) {
++ count = 0;
++ goto out_put;
++ }
+
+ if (!strncmp(buf, "offline", 7))
+ offline = 1;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 86d62cffba0dd..53b024cea3b3e 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -73,7 +73,7 @@ static void xprt_init(struct rpc_xprt *xprt, struct net *net);
+ static __be32 xprt_alloc_xid(struct rpc_xprt *xprt);
+ static void xprt_destroy(struct rpc_xprt *xprt);
+ static void xprt_request_init(struct rpc_task *task);
+-static int xprt_request_prepare(struct rpc_rqst *req);
++static int xprt_request_prepare(struct rpc_rqst *req, struct xdr_buf *buf);
+
+ static DEFINE_SPINLOCK(xprt_list_lock);
+ static LIST_HEAD(xprt_list);
+@@ -1149,7 +1149,7 @@ xprt_request_enqueue_receive(struct rpc_task *task)
+ if (!xprt_request_need_enqueue_receive(task, req))
+ return 0;
+
+- ret = xprt_request_prepare(task->tk_rqstp);
++ ret = xprt_request_prepare(task->tk_rqstp, &req->rq_rcv_buf);
+ if (ret)
+ return ret;
+ spin_lock(&xprt->queue_lock);
+@@ -1179,8 +1179,11 @@ xprt_request_dequeue_receive_locked(struct rpc_task *task)
+ {
+ struct rpc_rqst *req = task->tk_rqstp;
+
+- if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate))
++ if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate)) {
+ xprt_request_rb_remove(req->rq_xprt, req);
++ xdr_free_bvec(&req->rq_rcv_buf);
++ req->rq_private_buf.bvec = NULL;
++ }
+ }
+
+ /**
+@@ -1336,8 +1339,14 @@ xprt_request_enqueue_transmit(struct rpc_task *task)
+ {
+ struct rpc_rqst *pos, *req = task->tk_rqstp;
+ struct rpc_xprt *xprt = req->rq_xprt;
++ int ret;
+
+ if (xprt_request_need_enqueue_transmit(task, req)) {
++ ret = xprt_request_prepare(task->tk_rqstp, &req->rq_snd_buf);
++ if (ret) {
++ task->tk_status = ret;
++ return;
++ }
+ req->rq_bytes_sent = 0;
+ spin_lock(&xprt->queue_lock);
+ /*
+@@ -1397,6 +1406,7 @@ xprt_request_dequeue_transmit_locked(struct rpc_task *task)
+ } else
+ list_del(&req->rq_xmit2);
+ atomic_long_dec(&req->rq_xprt->xmit_queuelen);
++ xdr_free_bvec(&req->rq_snd_buf);
+ }
+
+ /**
+@@ -1433,8 +1443,6 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) ||
+ xprt_is_pinned_rqst(req)) {
+ spin_lock(&xprt->queue_lock);
+- xprt_request_dequeue_transmit_locked(task);
+- xprt_request_dequeue_receive_locked(task);
+ while (xprt_is_pinned_rqst(req)) {
+ set_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+ spin_unlock(&xprt->queue_lock);
+@@ -1442,6 +1450,8 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ spin_lock(&xprt->queue_lock);
+ clear_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+ }
++ xprt_request_dequeue_transmit_locked(task);
++ xprt_request_dequeue_receive_locked(task);
+ spin_unlock(&xprt->queue_lock);
+ }
+ }
+@@ -1449,18 +1459,19 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ /**
+ * xprt_request_prepare - prepare an encoded request for transport
+ * @req: pointer to rpc_rqst
++ * @buf: pointer to send/rcv xdr_buf
+ *
+ * Calls into the transport layer to do whatever is needed to prepare
+ * the request for transmission or receive.
+ * Returns error, or zero.
+ */
+ static int
+-xprt_request_prepare(struct rpc_rqst *req)
++xprt_request_prepare(struct rpc_rqst *req, struct xdr_buf *buf)
+ {
+ struct rpc_xprt *xprt = req->rq_xprt;
+
+ if (xprt->ops->prepare_request)
+- return xprt->ops->prepare_request(req);
++ return xprt->ops->prepare_request(req, buf);
+ return 0;
+ }
+
+@@ -1961,8 +1972,6 @@ void xprt_release(struct rpc_task *task)
+ spin_unlock(&xprt->transport_lock);
+ if (req->rq_buffer)
+ xprt->ops->buf_free(task);
+- xdr_free_bvec(&req->rq_rcv_buf);
+- xdr_free_bvec(&req->rq_snd_buf);
+ if (req->rq_cred != NULL)
+ put_rpccred(req->rq_cred);
+ if (req->rq_release_snd_buf)
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index fcdd0fca408e0..95a15b74667d6 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -822,17 +822,9 @@ static int xs_stream_nospace(struct rpc_rqst *req, bool vm_wait)
+ return ret;
+ }
+
+-static int
+-xs_stream_prepare_request(struct rpc_rqst *req)
++static int xs_stream_prepare_request(struct rpc_rqst *req, struct xdr_buf *buf)
+ {
+- gfp_t gfp = rpc_task_gfp_mask();
+- int ret;
+-
+- ret = xdr_alloc_bvec(&req->rq_snd_buf, gfp);
+- if (ret < 0)
+- return ret;
+- xdr_free_bvec(&req->rq_rcv_buf);
+- return xdr_alloc_bvec(&req->rq_rcv_buf, gfp);
++ return xdr_alloc_bvec(buf, rpc_task_gfp_mask());
+ }
+
+ /*
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index f04abf662ec6c..b4ee163154a68 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1286,6 +1286,7 @@ static void vsock_connect_timeout(struct work_struct *work)
+ if (sk->sk_state == TCP_SYN_SENT &&
+ (sk->sk_shutdown != SHUTDOWN_MASK)) {
+ sk->sk_state = TCP_CLOSE;
++ sk->sk_socket->state = SS_UNCONNECTED;
+ sk->sk_err = ETIMEDOUT;
+ sk_error_report(sk);
+ vsock_transport_cancel_pkt(vsk);
+@@ -1391,7 +1392,14 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ * timeout fires.
+ */
+ sock_hold(sk);
+- schedule_delayed_work(&vsk->connect_work, timeout);
++
++ /* If the timeout function is already scheduled,
++ * reschedule it, then ungrab the socket refcount to
++ * keep it balanced.
++ */
++ if (mod_delayed_work(system_wq, &vsk->connect_work,
++ timeout))
++ sock_put(sk);
+
+ /* Skip ahead to preserve error code set above. */
+ goto out_wait;
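The comment block above describes a standard delayed-work pattern: take a reference before arming the work, and when mod_delayed_work() returns true the work was already pending, so an earlier arming holds its own reference and the fresh one is dropped to stay balanced. A generic sketch of the same pattern, with hypothetical my_obj names:

#include <linux/workqueue.h>
#include <linux/kref.h>

struct my_obj {
	struct kref ref;
	struct delayed_work work;
};

static void my_obj_release(struct kref *ref);	/* hypothetical */

static void arm_timeout(struct my_obj *obj, unsigned long timeout)
{
	kref_get(&obj->ref);	/* pin the object for the pending work */

	/* mod_delayed_work() returns true if the work was already queued;
	 * that earlier arming took its own reference, so drop this one.
	 */
	if (mod_delayed_work(system_wq, &obj->work, timeout))
		kref_put(&obj->ref, my_obj_release);
}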
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index 692d64a70542a..e4deaf5fa571d 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -4,7 +4,7 @@ gcc-plugin-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY) += latent_entropy_plugin.so
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY) \
+ += -DLATENT_ENTROPY_PLUGIN
+ ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
+- DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable
++ DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable -ULATENT_ENTROPY_PLUGIN
+ endif
+ export DISABLE_LATENT_ENTROPY_PLUGIN
+
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index b2483149bbe55..7db8258434355 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -96,12 +96,8 @@ fi
+
+ # To set GCC_PLUGINS
+ if arg_contain -print-file-name=plugin "$@"; then
+- plugin_dir=$(mktemp -d)
+-
+- mkdir -p $plugin_dir/include
+- touch $plugin_dir/include/plugin-version.h
+-
+- echo $plugin_dir
++ # Use $0 to find the in-tree dummy directory
++ echo "$(dirname "$(readlink -f "$0")")/dummy-plugin-dir"
+ exit 0
+ fi
+
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 620dc8c4c8140..c664a0a1f7d66 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -2203,13 +2203,11 @@ static void add_exported_symbols(struct buffer *buf, struct module *mod)
+ /* record CRCs for exported symbols */
+ buf_printf(buf, "\n");
+ list_for_each_entry(sym, &mod->exported_symbols, list) {
+- if (!sym->crc_valid) {
++ if (!sym->crc_valid)
+ warn("EXPORT symbol \"%s\" [%s%s] version generation failed, symbol will not be versioned.\n"
+ "Is \"%s\" prototyped in <asm/asm-prototypes.h>?\n",
+ sym->name, mod->name, mod->is_vmlinux ? "" : ".ko",
+ sym->name);
+- continue;
+- }
+
+ buf_printf(buf, "SYMBOL_CRC(%s, 0x%08x, \"%s\");\n",
+ sym->name, sym->crc, sym->is_gpl_only ? "_gpl" : "");
+diff --git a/scripts/module.lds.S b/scripts/module.lds.S
+index 1d0e1e4dc3d2a..3a3aa2354ed86 100644
+--- a/scripts/module.lds.S
++++ b/scripts/module.lds.S
+@@ -27,6 +27,8 @@ SECTIONS {
+ .ctors 0 : ALIGN(8) { *(SORT(.ctors.*)) *(.ctors) }
+ .init_array 0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
+
++ .altinstructions 0 : ALIGN(8) { KEEP(*(.altinstructions)) }
++ __bug_table 0 : ALIGN(8) { KEEP(*(__bug_table)) }
+ __jump_table 0 : ALIGN(8) { KEEP(*(__jump_table)) }
+
+ __patchable_function_entries : { *(__patchable_function_entries) }
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 0797edb2fb3dc..d307fb1edd76e 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -401,7 +401,7 @@ static struct aa_loaddata *aa_simple_write_to_buffer(const char __user *userbuf,
+
+ data->size = copy_size;
+ if (copy_from_user(data->data, userbuf, copy_size)) {
+- kvfree(data);
++ aa_put_loaddata(data);
+ return ERR_PTR(-EFAULT);
+ }
+
+diff --git a/security/apparmor/audit.c b/security/apparmor/audit.c
+index f7e97c7e80f3d..704b0c895605a 100644
+--- a/security/apparmor/audit.c
++++ b/security/apparmor/audit.c
+@@ -137,7 +137,7 @@ int aa_audit(int type, struct aa_profile *profile, struct common_audit_data *sa,
+ }
+ if (AUDIT_MODE(profile) == AUDIT_QUIET ||
+ (type == AUDIT_APPARMOR_DENIED &&
+- AUDIT_MODE(profile) == AUDIT_QUIET))
++ AUDIT_MODE(profile) == AUDIT_QUIET_DENIED))
+ return aad(sa)->error;
+
+ if (KILL_MODE(profile) && type == AUDIT_APPARMOR_DENIED)
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index a29e69d2c3005..97721115340fc 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -466,7 +466,7 @@ restart:
+ * xattrs, or a longer match
+ */
+ candidate = profile;
+- candidate_len = profile->xmatch_len;
++ candidate_len = max(count, profile->xmatch_len);
+ candidate_xattrs = ret;
+ conflict = false;
+ }
+diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
+index e2e8df0c6f1c9..f42359f58eb58 100644
+--- a/security/apparmor/include/lib.h
++++ b/security/apparmor/include/lib.h
+@@ -22,6 +22,11 @@
+ */
+
+ #define DEBUG_ON (aa_g_debug)
++/*
++ * split individual debug cases out in preparation for finer grained
++ * debug controls in the future.
++ */
++#define AA_DEBUG_LABEL DEBUG_ON
+ #define dbg_printk(__fmt, __args...) pr_debug(__fmt, ##__args)
+ #define AA_DEBUG(fmt, args...) \
+ do { \
+diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
+index cb5ef21991b72..232d3d9566eb7 100644
+--- a/security/apparmor/include/policy.h
++++ b/security/apparmor/include/policy.h
+@@ -135,7 +135,7 @@ struct aa_profile {
+
+ const char *attach;
+ struct aa_dfa *xmatch;
+- int xmatch_len;
++ unsigned int xmatch_len;
+ enum audit_mode audit;
+ long mode;
+ u32 path_flags;
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index 0b0265da19267..3fca010a58296 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -1631,9 +1631,9 @@ int aa_label_snxprint(char *str, size_t size, struct aa_ns *ns,
+ AA_BUG(!str && size != 0);
+ AA_BUG(!label);
+
+- if (flags & FLAG_ABS_ROOT) {
++ if (AA_DEBUG_LABEL && (flags & FLAG_ABS_ROOT)) {
+ ns = root_ns;
+- len = snprintf(str, size, "=");
++ len = snprintf(str, size, "_");
+ update_for_len(total, len, size, str);
+ } else if (!ns) {
+ ns = labels_ns(label);
+@@ -1744,7 +1744,7 @@ void aa_label_xaudit(struct audit_buffer *ab, struct aa_ns *ns,
+ if (!use_label_hname(ns, label, flags) ||
+ display_mode(ns, label, flags)) {
+ len = aa_label_asxprint(&name, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1772,7 +1772,7 @@ void aa_label_seq_xprint(struct seq_file *f, struct aa_ns *ns,
+ int len;
+
+ len = aa_label_asxprint(&str, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1795,7 +1795,7 @@ void aa_label_xprintk(struct aa_ns *ns, struct aa_label *label, int flags,
+ int len;
+
+ len = aa_label_asxprint(&str, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1895,7 +1895,8 @@ struct aa_label *aa_label_strn_parse(struct aa_label *base, const char *str,
+ AA_BUG(!str);
+
+ str = skipn_spaces(str, n);
+- if (str == NULL || (*str == '=' && base != &root_ns->unconfined->label))
++ if (str == NULL || (AA_DEBUG_LABEL && *str == '_' &&
++ base != &root_ns->unconfined->label))
+ return ERR_PTR(-EINVAL);
+
+ len = label_count_strn_entries(str, end - str);
+diff --git a/security/apparmor/mount.c b/security/apparmor/mount.c
+index aa6fcfde30514..f7bb47daf2ad6 100644
+--- a/security/apparmor/mount.c
++++ b/security/apparmor/mount.c
+@@ -229,7 +229,8 @@ static const char * const mnt_info_table[] = {
+ "failed srcname match",
+ "failed type match",
+ "failed flags match",
+- "failed data match"
++ "failed data match",
++ "failed perms check"
+ };
+
+ /*
+@@ -284,8 +285,8 @@ static int do_match_mnt(struct aa_dfa *dfa, unsigned int start,
+ return 0;
+ }
+
+- /* failed at end of flags match */
+- return 4;
++ /* failed at perms check, don't confuse with flags match */
++ return 6;
+ }
+
+
+@@ -718,6 +719,7 @@ int aa_pivotroot(struct aa_label *label, const struct path *old_path,
+ aa_put_label(target);
+ goto out;
+ }
++ aa_put_label(target);
+ } else
+ /* already audited error */
+ error = PTR_ERR(target);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 0acca6f2a93fc..9f23cdde784fd 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -746,16 +746,18 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ profile->label.flags |= FLAG_HAT;
+ if (!unpack_u32(e, &tmp, NULL))
+ goto fail;
+- if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG))
++ if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG)) {
+ profile->mode = APPARMOR_COMPLAIN;
+- else if (tmp == PACKED_MODE_ENFORCE)
++ } else if (tmp == PACKED_MODE_ENFORCE) {
+ profile->mode = APPARMOR_ENFORCE;
+- else if (tmp == PACKED_MODE_KILL)
++ } else if (tmp == PACKED_MODE_KILL) {
+ profile->mode = APPARMOR_KILL;
+- else if (tmp == PACKED_MODE_UNCONFINED)
++ } else if (tmp == PACKED_MODE_UNCONFINED) {
+ profile->mode = APPARMOR_UNCONFINED;
+- else
++ profile->label.flags |= FLAG_UNCONFINED;
++ } else {
+ goto fail;
++ }
+ if (!unpack_u32(e, &tmp, NULL))
+ goto fail;
+ if (tmp)
+diff --git a/sound/core/control.c b/sound/core/control.c
+index a25c0d64d104f..f66fe4be30d35 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -127,6 +127,7 @@ static int snd_ctl_release(struct inode *inode, struct file *file)
+ if (control->vd[idx].owner == ctl)
+ control->vd[idx].owner = NULL;
+ up_write(&card->controls_rwsem);
++ snd_fasync_free(ctl->fasync);
+ snd_ctl_empty_read_queue(ctl);
+ put_pid(ctl->pid);
+ kfree(ctl);
+@@ -181,7 +182,7 @@ void snd_ctl_notify(struct snd_card *card, unsigned int mask,
+ _found:
+ wake_up(&ctl->change_sleep);
+ spin_unlock(&ctl->read_lock);
+- kill_fasync(&ctl->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(ctl->fasync, SIGIO, POLL_IN);
+ }
+ read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+ }
+@@ -2002,7 +2003,7 @@ static int snd_ctl_fasync(int fd, struct file * file, int on)
+ struct snd_ctl_file *ctl;
+
+ ctl = file->private_data;
+- return fasync_helper(fd, file, on, &ctl->fasync);
++ return snd_fasync_helper(fd, file, on, &ctl->fasync);
+ }
+
+ /* return the preferred subdevice number if already assigned;
+@@ -2170,7 +2171,7 @@ static int snd_ctl_dev_disconnect(struct snd_device *device)
+ read_lock_irqsave(&card->ctl_files_rwlock, flags);
+ list_for_each_entry(ctl, &card->ctl_files, list) {
+ wake_up(&ctl->change_sleep);
+- kill_fasync(&ctl->fasync, SIGIO, POLL_ERR);
++ snd_kill_fasync(ctl->fasync, SIGIO, POLL_ERR);
+ }
+ read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+
+diff --git a/sound/core/info.c b/sound/core/info.c
+index 782fba87cc043..e952441f71403 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -111,9 +111,9 @@ static loff_t snd_info_entry_llseek(struct file *file, loff_t offset, int orig)
+ entry = data->entry;
+ mutex_lock(&entry->access);
+ if (entry->c.ops->llseek) {
+- offset = entry->c.ops->llseek(entry,
+- data->file_private_data,
+- file, offset, orig);
++ ret = entry->c.ops->llseek(entry,
++ data->file_private_data,
++ file, offset, orig);
+ goto out;
+ }
+
+diff --git a/sound/core/misc.c b/sound/core/misc.c
+index 50e4aaa6270d1..d32a19976a2b9 100644
+--- a/sound/core/misc.c
++++ b/sound/core/misc.c
+@@ -10,6 +10,7 @@
+ #include <linux/time.h>
+ #include <linux/slab.h>
+ #include <linux/ioport.h>
++#include <linux/fs.h>
+ #include <sound/core.h>
+
+ #ifdef CONFIG_SND_DEBUG
+@@ -145,3 +146,96 @@ snd_pci_quirk_lookup(struct pci_dev *pci, const struct snd_pci_quirk *list)
+ }
+ EXPORT_SYMBOL(snd_pci_quirk_lookup);
+ #endif
++
++/*
++ * Deferred async signal helpers
++ *
++ * Below are a few helper functions to wrap the async signal handling
++ * in the deferred work. The main purpose is to avoid the messy deadlock
++ * around tasklist_lock and co at the kill_fasync() invocation.
++ * fasync_helper() and kill_fasync() are replaced with snd_fasync_helper()
++ * and snd_kill_fasync(), respectively. In addition, snd_fasync_free() has
++ * to be called at releasing the relevant file object.
++ */
++struct snd_fasync {
++ struct fasync_struct *fasync;
++ int signal;
++ int poll;
++ int on;
++ struct list_head list;
++};
++
++static DEFINE_SPINLOCK(snd_fasync_lock);
++static LIST_HEAD(snd_fasync_list);
++
++static void snd_fasync_work_fn(struct work_struct *work)
++{
++ struct snd_fasync *fasync;
++
++ spin_lock_irq(&snd_fasync_lock);
++ while (!list_empty(&snd_fasync_list)) {
++ fasync = list_first_entry(&snd_fasync_list, struct snd_fasync, list);
++ list_del_init(&fasync->list);
++ spin_unlock_irq(&snd_fasync_lock);
++ if (fasync->on)
++ kill_fasync(&fasync->fasync, fasync->signal, fasync->poll);
++ spin_lock_irq(&snd_fasync_lock);
++ }
++ spin_unlock_irq(&snd_fasync_lock);
++}
++
++static DECLARE_WORK(snd_fasync_work, snd_fasync_work_fn);
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++ struct snd_fasync **fasyncp)
++{
++ struct snd_fasync *fasync = NULL;
++
++ if (on) {
++ fasync = kzalloc(sizeof(*fasync), GFP_KERNEL);
++ if (!fasync)
++ return -ENOMEM;
++ INIT_LIST_HEAD(&fasync->list);
++ }
++
++ spin_lock_irq(&snd_fasync_lock);
++ if (*fasyncp) {
++ kfree(fasync);
++ fasync = *fasyncp;
++ } else {
++ if (!fasync) {
++ spin_unlock_irq(&snd_fasync_lock);
++ return 0;
++ }
++ *fasyncp = fasync;
++ }
++ fasync->on = on;
++ spin_unlock_irq(&snd_fasync_lock);
++ return fasync_helper(fd, file, on, &fasync->fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_helper);
++
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll)
++{
++ unsigned long flags;
++
++ if (!fasync || !fasync->on)
++ return;
++ spin_lock_irqsave(&snd_fasync_lock, flags);
++ fasync->signal = signal;
++ fasync->poll = poll;
++ list_move(&fasync->list, &snd_fasync_list);
++ schedule_work(&snd_fasync_work);
++ spin_unlock_irqrestore(&snd_fasync_lock, flags);
++}
++EXPORT_SYMBOL_GPL(snd_kill_fasync);
++
++void snd_fasync_free(struct snd_fasync *fasync)
++{
++ if (!fasync)
++ return;
++ fasync->on = 0;
++ flush_work(&snd_fasync_work);
++ kfree(fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_free);
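Taken together, the three helpers replace the stock fasync calls one-for-one. A minimal sketch of how a driver wires them up (the foo_* names are hypothetical, and this assumes the declarations land in <sound/core.h> alongside the other snd_* helpers; the real conversions follow in control.c, pcm_native.c and timer.c):

#include <linux/fs.h>
#include <linux/slab.h>
#include <sound/core.h>

struct foo_file {
	struct snd_fasync *fasync;	/* replaces struct fasync_struct * */
};

static int foo_fasync(int fd, struct file *file, int on)
{
	struct foo_file *ff = file->private_data;

	return snd_fasync_helper(fd, file, on, &ff->fasync);
}

static void foo_notify(struct foo_file *ff)
{
	/* safe from atomic context; the SIGIO delivery is deferred */
	snd_kill_fasync(ff->fasync, SIGIO, POLL_IN);
}

static int foo_release(struct inode *inode, struct file *file)
{
	struct foo_file *ff = file->private_data;

	snd_fasync_free(ff->fasync);	/* flushes pending work, then frees */
	kfree(ff);
	return 0;
}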
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 977d54320a5ca..c917ac84a7e58 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -1005,6 +1005,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ substream->runtime = NULL;
+ }
+ mutex_destroy(&runtime->buffer_mutex);
++ snd_fasync_free(runtime->fasync);
+ kfree(runtime);
+ put_pid(substream->pid);
+ substream->pid = NULL;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 1fc7c50ffa625..40751e5aff09f 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1822,7 +1822,7 @@ void snd_pcm_period_elapsed_under_stream_lock(struct snd_pcm_substream *substrea
+ snd_timer_interrupt(substream->timer, 1);
+ #endif
+ _end:
+- kill_fasync(&runtime->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(runtime->fasync, SIGIO, POLL_IN);
+ }
+ EXPORT_SYMBOL(snd_pcm_period_elapsed_under_stream_lock);
+
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 4adaee62ef333..16fcf57c6f030 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3945,7 +3945,7 @@ static int snd_pcm_fasync(int fd, struct file * file, int on)
+ runtime = substream->runtime;
+ if (runtime->status->state == SNDRV_PCM_STATE_DISCONNECTED)
+ return -EBADFD;
+- return fasync_helper(fd, file, on, &runtime->fasync);
++ return snd_fasync_helper(fd, file, on, &runtime->fasync);
+ }
+
+ /*
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index b3214baa89193..e08a37c23add8 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -83,7 +83,7 @@ struct snd_timer_user {
+ unsigned int filter;
+ struct timespec64 tstamp; /* trigger tstamp */
+ wait_queue_head_t qchange_sleep;
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ struct mutex ioctl_lock;
+ };
+
+@@ -1345,7 +1345,7 @@ static void snd_timer_user_interrupt(struct snd_timer_instance *timeri,
+ }
+ __wake:
+ spin_unlock(&tu->qlock);
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1383,7 +1383,7 @@ static void snd_timer_user_ccallback(struct snd_timer_instance *timeri,
+ spin_lock_irqsave(&tu->qlock, flags);
+ snd_timer_user_append_to_tqueue(tu, &r1);
+ spin_unlock_irqrestore(&tu->qlock, flags);
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1453,7 +1453,7 @@ static void snd_timer_user_tinterrupt(struct snd_timer_instance *timeri,
+ spin_unlock(&tu->qlock);
+ if (append == 0)
+ return;
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1521,6 +1521,7 @@ static int snd_timer_user_release(struct inode *inode, struct file *file)
+ snd_timer_instance_free(tu->timeri);
+ }
+ mutex_unlock(&tu->ioctl_lock);
++ snd_fasync_free(tu->fasync);
+ kfree(tu->queue);
+ kfree(tu->tqueue);
+ kfree(tu);
+@@ -2135,7 +2136,7 @@ static int snd_timer_user_fasync(int fd, struct file * file, int on)
+ struct snd_timer_user *tu;
+
+ tu = file->private_data;
+- return fasync_helper(fd, file, on, &tu->fasync);
++ return snd_fasync_helper(fd, file, on, &tu->fasync);
+ }
+
+ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 7579a6982f471..ccb195b7cb08a 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2935,8 +2935,7 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ if (!codec->card)
+ return 0;
+
+- if (!codec->bus->jackpoll_in_suspend)
+- cancel_delayed_work_sync(&codec->jackpoll_work);
++ cancel_delayed_work_sync(&codec->jackpoll_work);
+
+ state = hda_call_codec_suspend(codec);
+ if (codec->link_down_at_suspend ||
+@@ -2944,6 +2943,11 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ (state & AC_PWRST_CLK_STOP_OK)))
+ snd_hdac_codec_link_down(&codec->core);
+ snd_hda_codec_display_power(codec, false);
++
++ if (codec->bus->jackpoll_in_suspend &&
++ (dev->power.power_state.event != PM_EVENT_SUSPEND))
++ schedule_delayed_work(&codec->jackpoll_work,
++ codec->jackpoll_interval);
+ return 0;
+ }
+
+@@ -2967,6 +2971,9 @@ static int hda_codec_runtime_resume(struct device *dev)
+ #ifdef CONFIG_PM_SLEEP
+ static int hda_codec_pm_prepare(struct device *dev)
+ {
++ struct hda_codec *codec = dev_to_hda_codec(dev);
++
++ cancel_delayed_work_sync(&codec->jackpoll_work);
+ dev->power.power_state = PMSG_SUSPEND;
+ return pm_runtime_suspended(dev);
+ }
+@@ -2986,9 +2993,6 @@ static void hda_codec_pm_complete(struct device *dev)
+
+ static int hda_codec_pm_suspend(struct device *dev)
+ {
+- struct hda_codec *codec = dev_to_hda_codec(dev);
+-
+- cancel_delayed_work_sync(&codec->jackpoll_work);
+ dev->power.power_state = PMSG_SUSPEND;
+ return pm_runtime_force_suspend(dev);
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 619e6025ba97c..1ae9674fa8a3c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9234,6 +9234,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9352,6 +9354,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x7717, "Clevo NS70PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
+index d18b56e604330..1ea10dc70748a 100644
+--- a/sound/soc/codecs/lpass-va-macro.c
++++ b/sound/soc/codecs/lpass-va-macro.c
+@@ -199,6 +199,7 @@ struct va_macro {
+ struct clk *mclk;
+ struct clk *macro;
+ struct clk *dcodec;
++ struct clk *fsgen;
+ struct clk_hw hw;
+ struct lpass_macro *pds;
+
+@@ -467,9 +468,9 @@ static int va_macro_mclk_event(struct snd_soc_dapm_widget *w,
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+- return va_macro_mclk_enable(va, true);
++ return clk_prepare_enable(va->fsgen);
+ case SND_SOC_DAPM_POST_PMD:
+- return va_macro_mclk_enable(va, false);
++ clk_disable_unprepare(va->fsgen);
+ }
+
+ return 0;
+@@ -1473,6 +1474,12 @@ static int va_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_clkout;
+
++ va->fsgen = clk_hw_get_clk(&va->hw, "fsgen");
++ if (IS_ERR(va->fsgen)) {
++ ret = PTR_ERR(va->fsgen);
++ goto err_clkout;
++ }
++
+ ret = devm_snd_soc_register_component(dev, &va_macro_component_drv,
+ va_macro_dais,
+ ARRAY_SIZE(va_macro_dais));
+diff --git a/sound/soc/codecs/nau8821.c b/sound/soc/codecs/nau8821.c
+index ce4e7f46bb067..e078d2ffb3f67 100644
+--- a/sound/soc/codecs/nau8821.c
++++ b/sound/soc/codecs/nau8821.c
+@@ -1665,15 +1665,6 @@ static int nau8821_i2c_probe(struct i2c_client *i2c)
+ return ret;
+ }
+
+-static int nau8821_i2c_remove(struct i2c_client *i2c_client)
+-{
+- struct nau8821 *nau8821 = i2c_get_clientdata(i2c_client);
+-
+- devm_free_irq(nau8821->dev, nau8821->irq, nau8821);
+-
+- return 0;
+-}
+-
+ static const struct i2c_device_id nau8821_i2c_ids[] = {
+ { "nau8821", 0 },
+ { }
+@@ -1703,7 +1694,6 @@ static struct i2c_driver nau8821_driver = {
+ .acpi_match_table = ACPI_PTR(nau8821_acpi_match),
+ },
+ .probe_new = nau8821_i2c_probe,
+- .remove = nau8821_i2c_remove,
+ .id_table = nau8821_i2c_ids,
+ };
+ module_i2c_driver(nau8821_driver);
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index c1dbd978d5502..9ea2aca65e899 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -46,34 +46,22 @@ static void tas2770_reset(struct tas2770_priv *tas2770)
+ usleep_range(1000, 2000);
+ }
+
+-static int tas2770_set_bias_level(struct snd_soc_component *component,
+- enum snd_soc_bias_level level)
++static int tas2770_update_pwr_ctrl(struct tas2770_priv *tas2770)
+ {
+- struct tas2770_priv *tas2770 =
+- snd_soc_component_get_drvdata(component);
++ struct snd_soc_component *component = tas2770->component;
++ unsigned int val;
++ int ret;
+
+- switch (level) {
+- case SND_SOC_BIAS_ON:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
+- break;
+- case SND_SOC_BIAS_STANDBY:
+- case SND_SOC_BIAS_PREPARE:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
+- break;
+- case SND_SOC_BIAS_OFF:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_SHUTDOWN);
+- break;
++ if (tas2770->dac_powered)
++ val = tas2770->unmuted ?
++ TAS2770_PWR_CTRL_ACTIVE : TAS2770_PWR_CTRL_MUTE;
++ else
++ val = TAS2770_PWR_CTRL_SHUTDOWN;
+
+- default:
+- dev_err(tas2770->dev, "wrong power level setting %d\n", level);
+- return -EINVAL;
+- }
++ ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
++ TAS2770_PWR_CTRL_MASK, val);
++ if (ret < 0)
++ return ret;
+
+ return 0;
+ }
+@@ -114,9 +102,7 @@ static int tas2770_codec_resume(struct snd_soc_component *component)
+ gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
+ usleep_range(1000, 2000);
+ } else {
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ if (ret < 0)
+ return ret;
+ }
+@@ -152,24 +138,19 @@ static int tas2770_dac_event(struct snd_soc_dapm_widget *w,
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
++ tas2770->dac_powered = 1;
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_SHUTDOWN);
++ tas2770->dac_powered = 0;
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ break;
+ default:
+ dev_err(tas2770->dev, "Not supported event\n");
+ return -EINVAL;
+ }
+
+- if (ret < 0)
+- return ret;
+-
+- return 0;
++ return ret;
+ }
+
+ static const struct snd_kcontrol_new isense_switch =
+@@ -203,21 +184,11 @@ static const struct snd_soc_dapm_route tas2770_audio_map[] = {
+ static int tas2770_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+ struct snd_soc_component *component = dai->component;
+- int ret;
+-
+- if (mute)
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
+- else
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
+-
+- if (ret < 0)
+- return ret;
++ struct tas2770_priv *tas2770 =
++ snd_soc_component_get_drvdata(component);
+
+- return 0;
++ tas2770->unmuted = !mute;
++ return tas2770_update_pwr_ctrl(tas2770);
+ }
+
+ static int tas2770_set_bitwidth(struct tas2770_priv *tas2770, int bitwidth)
+@@ -337,7 +308,7 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ struct snd_soc_component *component = dai->component;
+ struct tas2770_priv *tas2770 =
+ snd_soc_component_get_drvdata(component);
+- u8 tdm_rx_start_slot = 0, asi_cfg_1 = 0;
++ u8 tdm_rx_start_slot = 0, invert_fpol = 0, fpol_preinv = 0, asi_cfg_1 = 0;
+ int ret;
+
+ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+@@ -349,9 +320,15 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ }
+
+ switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++ case SND_SOC_DAIFMT_NB_IF:
++ invert_fpol = 1;
++ fallthrough;
+ case SND_SOC_DAIFMT_NB_NF:
+ asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_RSING;
+ break;
++ case SND_SOC_DAIFMT_IB_IF:
++ invert_fpol = 1;
++ fallthrough;
+ case SND_SOC_DAIFMT_IB_NF:
+ asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_FALING;
+ break;
+@@ -369,15 +346,19 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ tdm_rx_start_slot = 1;
++ fpol_preinv = 0;
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ tdm_rx_start_slot = 0;
++ fpol_preinv = 1;
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ tdm_rx_start_slot = 1;
++ fpol_preinv = 1;
+ break;
+ case SND_SOC_DAIFMT_LEFT_J:
+ tdm_rx_start_slot = 0;
++ fpol_preinv = 1;
+ break;
+ default:
+ dev_err(tas2770->dev,
+@@ -391,6 +372,14 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ if (ret < 0)
+ return ret;
+
++ ret = snd_soc_component_update_bits(component, TAS2770_TDM_CFG_REG0,
++ TAS2770_TDM_CFG_REG0_FPOL_MASK,
++ (fpol_preinv ^ invert_fpol)
++ ? TAS2770_TDM_CFG_REG0_FPOL_RSING
++ : TAS2770_TDM_CFG_REG0_FPOL_FALING);
++ if (ret < 0)
++ return ret;
++
+ return 0;
+ }
+
+@@ -489,7 +478,7 @@ static struct snd_soc_dai_driver tas2770_dai_driver[] = {
+ .id = 0,
+ .playback = {
+ .stream_name = "ASI1 Playback",
+- .channels_min = 2,
++ .channels_min = 1,
+ .channels_max = 2,
+ .rates = TAS2770_RATES,
+ .formats = TAS2770_FORMATS,
+@@ -537,7 +526,6 @@ static const struct snd_soc_component_driver soc_component_driver_tas2770 = {
+ .probe = tas2770_codec_probe,
+ .suspend = tas2770_codec_suspend,
+ .resume = tas2770_codec_resume,
+- .set_bias_level = tas2770_set_bias_level,
+ .controls = tas2770_snd_controls,
+ .num_controls = ARRAY_SIZE(tas2770_snd_controls),
+ .dapm_widgets = tas2770_dapm_widgets,
+diff --git a/sound/soc/codecs/tas2770.h b/sound/soc/codecs/tas2770.h
+index d156666bcc552..f75f40781ab13 100644
+--- a/sound/soc/codecs/tas2770.h
++++ b/sound/soc/codecs/tas2770.h
+@@ -41,6 +41,9 @@
+ #define TAS2770_TDM_CFG_REG0_31_44_1_48KHZ 0x6
+ #define TAS2770_TDM_CFG_REG0_31_88_2_96KHZ 0x8
+ #define TAS2770_TDM_CFG_REG0_31_176_4_192KHZ 0xa
++#define TAS2770_TDM_CFG_REG0_FPOL_MASK BIT(0)
++#define TAS2770_TDM_CFG_REG0_FPOL_RSING 0
++#define TAS2770_TDM_CFG_REG0_FPOL_FALING 1
+ /* TDM Configuration Reg1 */
+ #define TAS2770_TDM_CFG_REG1 TAS2770_REG(0X0, 0x0B)
+ #define TAS2770_TDM_CFG_REG1_MASK GENMASK(5, 1)
+@@ -135,6 +138,8 @@ struct tas2770_priv {
+ struct device *dev;
+ int v_sense_slot;
+ int i_sense_slot;
++ bool dac_powered;
++ bool unmuted;
+ };
+
+ #endif /* __TAS2770__ */
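The frame-polarity handling added in tas2770_set_fmt() reduces to a two-input XOR: each DAI format carries a default frame-sync polarity (fpol_preinv), the *_IF format variants request an extra inversion (invert_fpol), and a double inversion cancels out. Expressed as a hypothetical helper using the register bits defined above:

static unsigned int tas2770_fpol_bits(bool fpol_preinv, bool invert_fpol)
{
	/* fpol_preinv ^ invert_fpol: 1 -> rising edge, 0 -> falling edge */
	return (fpol_preinv ^ invert_fpol)
		? TAS2770_TDM_CFG_REG0_FPOL_RSING
		: TAS2770_TDM_CFG_REG0_FPOL_FALING;
}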
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index 8f42fd7bc0539..9b082cc5ecc49 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -49,6 +49,8 @@ struct aic32x4_priv {
+ struct aic32x4_setup_data *setup;
+ struct device *dev;
+ enum aic32x4_type type;
++
++ unsigned int fmt;
+ };
+
+ static int aic32x4_reset_adc(struct snd_soc_dapm_widget *w,
+@@ -611,6 +613,7 @@ static int aic32x4_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ static int aic32x4_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ {
+ struct snd_soc_component *component = codec_dai->component;
++ struct aic32x4_priv *aic32x4 = snd_soc_component_get_drvdata(component);
+ u8 iface_reg_1 = 0;
+ u8 iface_reg_2 = 0;
+ u8 iface_reg_3 = 0;
+@@ -654,6 +657,8 @@ static int aic32x4_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ return -EINVAL;
+ }
+
++ aic32x4->fmt = fmt;
++
+ snd_soc_component_update_bits(component, AIC32X4_IFACE1,
+ AIC32X4_IFACE1_DATATYPE_MASK |
+ AIC32X4_IFACE1_MASTER_MASK, iface_reg_1);
+@@ -758,6 +763,10 @@ static int aic32x4_setup_clocks(struct snd_soc_component *component,
+ return -EINVAL;
+ }
+
++ /* PCM over I2S is always 2-channel */
++ if ((aic32x4->fmt & SND_SOC_DAIFMT_FORMAT_MASK) == SND_SOC_DAIFMT_I2S)
++ channels = 2;
++
+ madc = DIV_ROUND_UP((32 * adc_resource_class), aosr);
+ max_dosr = (AIC32X4_MAX_DOSR_FREQ / sample_rate / dosr_increment) *
+ dosr_increment;
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 3a0997c3af2b9..cf373969bb69d 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -445,6 +445,7 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ dma_set_mask(dev, DMA_BIT_MASK(32));
+ dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+ }
++ dma_set_max_seg_size(dev, UINT_MAX);
+
+ ret = avs_hdac_bus_init_streams(bus);
+ if (ret < 0) {
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 668f533578a69..8d36d35e6eaab 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -636,8 +636,8 @@ static ssize_t topology_name_read(struct file *file, char __user *user_buf, size
+ char buf[64];
+ size_t len;
+
+- len = snprintf(buf, sizeof(buf), "%s/%s\n", component->driver->topology_name_prefix,
+- mach->tplg_filename);
++ len = scnprintf(buf, sizeof(buf), "%s/%s\n", component->driver->topology_name_prefix,
++ mach->tplg_filename);
+
+ return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+ }
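The snprintf()-to-scnprintf() conversions here and later in sof/debug.c and sof/intel/hda.c all close the same latent overrun: snprintf() returns the length the string would have had, not what actually fit, so an accumulated len can walk past the buffer and make the next size argument underflow. A minimal sketch of the failure mode:

	char buf[8];
	size_t len = 0;

	len += snprintf(buf + len, sizeof(buf) - len, "0123456789");
	/* snprintf() returns 10 even though only 7 chars + NUL fit, so
	 * len now exceeds sizeof(buf) and sizeof(buf) - len wraps to a
	 * huge size_t on the next call, defeating the bounds check.
	 */

	len = 0;
	len += scnprintf(buf + len, sizeof(buf) - len, "0123456789");
	/* scnprintf() returns 7, the bytes actually stored (NUL
	 * excluded), so subsequent calls stay inside the buffer.
	 */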
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index 23d03e0f77599..d70d8255b8c76 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -57,28 +57,26 @@ static const struct acpi_gpio_params enable_gpio0 = { 0, 0, true };
+ static const struct acpi_gpio_params enable_gpio1 = { 1, 0, true };
+
+ static const struct acpi_gpio_mapping acpi_speakers_enable_gpio0[] = {
+- { "speakers-enable-gpios", &enable_gpio0, 1 },
++ { "speakers-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+ static const struct acpi_gpio_mapping acpi_speakers_enable_gpio1[] = {
+- { "speakers-enable-gpios", &enable_gpio1, 1 },
++ { "speakers-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ };
+
+ static const struct acpi_gpio_mapping acpi_enable_both_gpios[] = {
+- { "speakers-enable-gpios", &enable_gpio0, 1 },
+- { "headphone-enable-gpios", &enable_gpio1, 1 },
++ { "speakers-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
++ { "headphone-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+ static const struct acpi_gpio_mapping acpi_enable_both_gpios_rev_order[] = {
+- { "speakers-enable-gpios", &enable_gpio1, 1 },
+- { "headphone-enable-gpios", &enable_gpio0, 1 },
++ { "speakers-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
++ { "headphone-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+-static const struct acpi_gpio_mapping *gpio_mapping = acpi_speakers_enable_gpio0;
+-
+ static void log_quirks(struct device *dev)
+ {
+ dev_info(dev, "quirk mask %#lx\n", quirk);
+@@ -272,15 +270,6 @@ static int sof_es8336_quirk_cb(const struct dmi_system_id *id)
+ {
+ quirk = (unsigned long)id->driver_data;
+
+- if (quirk & SOF_ES8336_HEADPHONE_GPIO) {
+- if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK)
+- gpio_mapping = acpi_enable_both_gpios;
+- else
+- gpio_mapping = acpi_enable_both_gpios_rev_order;
+- } else if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK) {
+- gpio_mapping = acpi_speakers_enable_gpio1;
+- }
+-
+ return 1;
+ }
+
+@@ -529,6 +518,7 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ struct acpi_device *adev;
+ struct snd_soc_dai_link *dai_links;
+ struct device *codec_dev;
++ const struct acpi_gpio_mapping *gpio_mapping;
+ unsigned int cnt = 0;
+ int dmic_be_num = 0;
+ int hdmi_num = 3;
+@@ -635,6 +625,17 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ }
+
+ /* get speaker enable GPIO */
++ if (quirk & SOF_ES8336_HEADPHONE_GPIO) {
++ if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK)
++ gpio_mapping = acpi_enable_both_gpios;
++ else
++ gpio_mapping = acpi_enable_both_gpios_rev_order;
++ } else if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK) {
++ gpio_mapping = acpi_speakers_enable_gpio1;
++ } else {
++ gpio_mapping = acpi_speakers_enable_gpio0;
++ }
++
+ ret = devm_acpi_dev_add_driver_gpios(codec_dev, gpio_mapping);
+ if (ret)
+ dev_warn(codec_dev, "unable to add GPIO mapping table\n");
+diff --git a/sound/soc/intel/boards/sof_nau8825.c b/sound/soc/intel/boards/sof_nau8825.c
+index 97dcd204a2466..9b3a2ff4d9cdc 100644
+--- a/sound/soc/intel/boards/sof_nau8825.c
++++ b/sound/soc/intel/boards/sof_nau8825.c
+@@ -177,11 +177,6 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ struct sof_hdmi_pcm *pcm;
+ int err;
+
+- if (list_empty(&ctx->hdmi_pcm_list))
+- return -EINVAL;
+-
+- pcm = list_first_entry(&ctx->hdmi_pcm_list, struct sof_hdmi_pcm, head);
+-
+ if (sof_nau8825_quirk & SOF_MAX98373_SPEAKER_AMP_PRESENT) {
+ /* Disable Left and Right Spk pin after boot */
+ snd_soc_dapm_disable_pin(dapm, "Left Spk");
+@@ -191,6 +186,11 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ return err;
+ }
+
++ if (list_empty(&ctx->hdmi_pcm_list))
++ return -EINVAL;
++
++ pcm = list_first_entry(&ctx->hdmi_pcm_list, struct sof_hdmi_pcm, head);
++
+ return hda_dsp_hdmi_build_controls(card, pcm->codec_dai->component);
+ }
+
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index ee59ef36b85a6..e45210c0e25e6 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -153,6 +153,12 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ q6apm_unmap_memory_regions(prtd->graph, substream->stream);
+ }
+
++ if (prtd->state) {
++ /* clear the previous setup if any */
++ q6apm_graph_stop(prtd->graph);
++ q6apm_unmap_memory_regions(prtd->graph, substream->stream);
++ }
++
+ prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
+ prtd->pos = 0;
+ /* rate and channels are sent to audio driver */
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index 4b8a63e336c77..d7f4646ee029c 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -67,6 +67,8 @@ static void rsnd_ssiu_busif_err_irq_ctrl(struct rsnd_mod *mod, int enable)
+ shift = 1;
+ offset = 1;
+ break;
++ default:
++ return;
+ }
+
+ for (i = 0; i < 4; i++) {
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index a827cc3c158ae..0c1de56248427 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1318,6 +1318,9 @@ static struct snd_soc_pcm_runtime *dpcm_get_be(struct snd_soc_card *card,
+ if (!be->dai_link->no_pcm)
+ continue;
+
++ if (!snd_soc_dpcm_get_substream(be, stream))
++ continue;
++
+ for_each_rtd_dais(be, i, dai) {
+ w = snd_soc_dai_get_widget(dai, stream);
+
+diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c
+index cf1271eb29b23..15c906e9fe2e2 100644
+--- a/sound/soc/sof/debug.c
++++ b/sound/soc/sof/debug.c
+@@ -252,9 +252,9 @@ static int memory_info_update(struct snd_sof_dev *sdev, char *buf, size_t buff_s
+ }
+
+ for (i = 0, len = 0; i < reply->num_elems; i++) {
+- ret = snprintf(buf + len, buff_size - len, "zone %d.%d used %#8x free %#8x\n",
+- reply->elems[i].zone, reply->elems[i].id,
+- reply->elems[i].used, reply->elems[i].free);
++ ret = scnprintf(buf + len, buff_size - len, "zone %d.%d used %#8x free %#8x\n",
++ reply->elems[i].zone, reply->elems[i].id,
++ reply->elems[i].used, reply->elems[i].free);
+ if (ret < 0)
+ goto error;
+ len += ret;
+diff --git a/sound/soc/sof/intel/cnl.c b/sound/soc/sof/intel/cnl.c
+index cd6e5f8a5eb4d..6c98f65635fcc 100644
+--- a/sound/soc/sof/intel/cnl.c
++++ b/sound/soc/sof/intel/cnl.c
+@@ -60,17 +60,23 @@ irqreturn_t cnl_ipc4_irq_thread(int irq, void *context)
+
+ if (primary & SOF_IPC4_MSG_DIR_MASK) {
+ /* Reply received */
+- struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
+
+- data->primary = primary;
+- data->extension = extension;
++ data->primary = primary;
++ data->extension = extension;
+
+- spin_lock_irq(&sdev->ipc_lock);
++ spin_lock_irq(&sdev->ipc_lock);
+
+- snd_sof_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, data->primary);
++ snd_sof_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, data->primary);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev,
++ "IPC reply before FW_READY: %#x|%#x\n",
++ primary, extension);
++ }
+ } else {
+ /* Notification received */
+ notification_data.primary = primary;
+@@ -124,15 +130,20 @@ irqreturn_t cnl_ipc_irq_thread(int irq, void *context)
+ CNL_DSP_REG_HIPCCTL,
+ CNL_DSP_REG_HIPCCTL_DONE, 0);
+
+- spin_lock_irq(&sdev->ipc_lock);
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ spin_lock_irq(&sdev->ipc_lock);
+
+- /* handle immediate reply from DSP core */
+- hda_dsp_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, msg);
++ /* handle immediate reply from DSP core */
++ hda_dsp_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, msg);
+
+- cnl_ipc_dsp_done(sdev);
++ cnl_ipc_dsp_done(sdev);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_READY: %#x\n",
++ msg);
++ }
+
+ ipc_irq = true;
+ }
+diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c
+index f080112499552..65e688f749eaf 100644
+--- a/sound/soc/sof/intel/hda-ipc.c
++++ b/sound/soc/sof/intel/hda-ipc.c
+@@ -148,17 +148,23 @@ irqreturn_t hda_dsp_ipc4_irq_thread(int irq, void *context)
+
+ if (primary & SOF_IPC4_MSG_DIR_MASK) {
+ /* Reply received */
+- struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
+
+- data->primary = primary;
+- data->extension = extension;
++ data->primary = primary;
++ data->extension = extension;
+
+- spin_lock_irq(&sdev->ipc_lock);
++ spin_lock_irq(&sdev->ipc_lock);
+
+- snd_sof_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, data->primary);
++ snd_sof_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, data->primary);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev,
++ "IPC reply before FW_READY: %#x|%#x\n",
++ primary, extension);
++ }
+ } else {
+ /* Notification received */
+
+@@ -225,16 +231,21 @@ irqreturn_t hda_dsp_ipc_irq_thread(int irq, void *context)
+ * place, the message might not yet be marked as expecting a
+ * reply.
+ */
+- spin_lock_irq(&sdev->ipc_lock);
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ spin_lock_irq(&sdev->ipc_lock);
+
+- /* handle immediate reply from DSP core */
+- hda_dsp_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, msg);
++ /* handle immediate reply from DSP core */
++ hda_dsp_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, msg);
+
+- /* set the done bit */
+- hda_dsp_ipc_dsp_done(sdev);
++ /* set the done bit */
++ hda_dsp_ipc_dsp_done(sdev);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_READY: %#x\n",
++ msg);
++ }
+
+ ipc_irq = true;
+ }
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index bc07df1fc39f0..17f2f3a982c38 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -467,7 +467,7 @@ static void hda_dsp_dump_ext_rom_status(struct snd_sof_dev *sdev, const char *le
+ chip = get_chip_info(sdev->pdata);
+ for (i = 0; i < HDA_EXT_ROM_STATUS_SIZE; i++) {
+ value = snd_sof_dsp_read(sdev, HDA_DSP_BAR, chip->rom_status_reg + i * 0x4);
+- len += snprintf(msg + len, sizeof(msg) - len, " 0x%x", value);
++ len += scnprintf(msg + len, sizeof(msg) - len, " 0x%x", value);
+ }
+
+ dev_printk(level, sdev->dev, "extended rom status: %s", msg);
+@@ -1395,6 +1395,7 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+
+ if (mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER &&
+ mach->mach_params.i2s_link_mask) {
++ const struct sof_intel_dsp_desc *chip = get_chip_info(sdev->pdata);
+ int ssp_num;
+
+ if (hweight_long(mach->mach_params.i2s_link_mask) > 1 &&
+@@ -1404,6 +1405,12 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ /* fls returns 1-based results, SSPs indices are 0-based */
+ ssp_num = fls(mach->mach_params.i2s_link_mask) - 1;
+
++ if (ssp_num >= chip->ssp_count) {
++ dev_err(sdev->dev, "Invalid SSP %d, max on this platform is %d\n",
++ ssp_num, chip->ssp_count);
++ return NULL;
++ }
++
+ tplg_filename = devm_kasprintf(sdev->dev, GFP_KERNEL,
+ "%s%s%d",
+ sof_pdata->tplg_filename,
+diff --git a/sound/soc/sof/sof-client-probes.c b/sound/soc/sof/sof-client-probes.c
+index 34e6bd356e717..60e4250fac876 100644
+--- a/sound/soc/sof/sof-client-probes.c
++++ b/sound/soc/sof/sof-client-probes.c
+@@ -693,6 +693,10 @@ static int sof_probes_client_probe(struct auxiliary_device *auxdev,
+ if (!sof_probes_enabled)
+ return -ENXIO;
+
++ /* only ipc3 is supported */
++ if (sof_client_get_ipc_type(cdev) != SOF_IPC)
++ return -ENXIO;
++
+ if (!dev->platform_data) {
+ dev_err(dev, "missing platform data\n");
+ return -ENODEV;
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 0fff96a5d3ab4..d356743de2ff9 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -387,6 +387,14 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ DEVICE_NAME(0x05e1, 0x0408, "Syntek", "STK1160"),
+ DEVICE_NAME(0x05e1, 0x0480, "Hauppauge", "Woodbury"),
+
+ /* ASUS ROG Zenith II: this machine also has two devices, one for
++ * the front headphone and another for the rest
++ */
++ PROFILE_NAME(0x0b05, 0x1915, "ASUS", "Zenith II Front Headphone",
++ "Zenith-II-Front-Headphone"),
++ PROFILE_NAME(0x0b05, 0x1916, "ASUS", "Zenith II Main Audio",
++ "Zenith-II-Main-Audio"),
++
+ /* ASUS ROG Strix */
+ PROFILE_NAME(0x0b05, 0x1917,
+ "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"),
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 3c795675f048b..f4bd1e8ae4b6c 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -374,13 +374,28 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = {
+ { 0 }
+ };
+
+-/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
+- * response for Input Gain Pad (id=19, control=12) and the connector status
+- * for SPDIF terminal (id=18). Skip them.
+- */
+-static const struct usbmix_name_map asus_rog_map[] = {
+- { 18, NULL }, /* OT, connector control */
+- { 19, NULL, 12 }, /* FU, Input Gain Pad */
++/* ASUS ROG Zenith II with Realtek ALC1220-VB */
++static const struct usbmix_name_map asus_zenith_ii_map[] = {
++ { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++ { 16, "Speaker" }, /* OT */
++ { 22, "Speaker Playback" }, /* FU */
++ { 7, "Line" }, /* IT */
++ { 19, "Line Capture" }, /* FU */
++ { 8, "Mic" }, /* IT */
++ { 20, "Mic Capture" }, /* FU */
++ { 9, "Front Mic" }, /* IT */
++ { 21, "Front Mic Capture" }, /* FU */
++ { 17, "IEC958" }, /* OT */
++ { 23, "IEC958 Playback" }, /* FU */
++ {}
++};
++
++static const struct usbmix_connector_map asus_zenith_ii_connector_map[] = {
++ { 10, 16 }, /* (Back) Speaker */
++ { 11, 17 }, /* SPDIF */
++ { 13, 7 }, /* Line */
++ { 14, 8 }, /* Mic */
++ { 15, 9 }, /* Front Mic */
+ {}
+ };
+
+@@ -611,9 +626,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .map = gigabyte_b450_map,
+ .connector_map = gigabyte_b450_connector_map,
+ },
+- { /* ASUS ROG Zenith II */
++ { /* ASUS ROG Zenith II (main audio) */
+ .id = USB_ID(0x0b05, 0x1916),
+- .map = asus_rog_map,
++ .map = asus_zenith_ii_map,
++ .connector_map = asus_zenith_ii_connector_map,
+ },
+ { /* ASUS ROG Strix */
+ .id = USB_ID(0x0b05, 0x1917),
+diff --git a/tools/build/feature/test-libcrypto.c b/tools/build/feature/test-libcrypto.c
+index a98174e0569c8..bc34a5bbb5049 100644
+--- a/tools/build/feature/test-libcrypto.c
++++ b/tools/build/feature/test-libcrypto.c
+@@ -1,16 +1,23 @@
+ // SPDX-License-Identifier: GPL-2.0
++#include <openssl/evp.h>
+ #include <openssl/sha.h>
+ #include <openssl/md5.h>
+
+ int main(void)
+ {
+- MD5_CTX context;
++ EVP_MD_CTX *mdctx;
+ unsigned char md[MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH];
+ unsigned char dat[] = "12345";
++ unsigned int digest_len;
+
+- MD5_Init(&context);
+- MD5_Update(&context, &dat[0], sizeof(dat));
+- MD5_Final(&md[0], &context);
++ mdctx = EVP_MD_CTX_new();
++ if (!mdctx)
++ return 0;
++
++ EVP_DigestInit_ex(mdctx, EVP_md5(), NULL);
++ EVP_DigestUpdate(mdctx, &dat[0], sizeof(dat));
++ EVP_DigestFinal_ex(mdctx, &md[0], &digest_len);
++ EVP_MD_CTX_free(mdctx);
+
+ SHA1(&dat[0], sizeof(dat), &md[0]);
+
+diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
+index bd6f4505e7b1e..70adf7b119b99 100644
+--- a/tools/lib/bpf/skel_internal.h
++++ b/tools/lib/bpf/skel_internal.h
+@@ -66,13 +66,13 @@ struct bpf_load_and_run_opts {
+ const char *errstr;
+ };
+
+-long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
++long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
+
+ static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
+ unsigned int size)
+ {
+ #ifdef __KERNEL__
+- return bpf_sys_bpf(cmd, attr, size);
++ return kern_sys_bpf(cmd, attr, size);
+ #else
+ return syscall(__NR_bpf, cmd, attr, size);
+ #endif
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index b341f8a8c7c56..31c719f99f66e 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -4096,7 +4096,8 @@ static int validate_ibt(struct objtool_file *file)
+ * These sections can reference text addresses, but not with
+ * the intent to indirect branch to them.
+ */
+- if (!strncmp(sec->name, ".discard", 8) ||
++ if ((!strncmp(sec->name, ".discard", 8) &&
++ strcmp(sec->name, ".discard.ibt_endbr_noseal")) ||
+ !strncmp(sec->name, ".debug", 6) ||
+ !strcmp(sec->name, ".altinstructions") ||
+ !strcmp(sec->name, ".ibt_endbr_seal") ||
+diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
+index 0c0c2328bf4e6..6f53bee33f7cb 100644
+--- a/tools/perf/tests/switch-tracking.c
++++ b/tools/perf/tests/switch-tracking.c
+@@ -324,6 +324,7 @@ out_free_nodes:
+ static int test__switch_tracking(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
+ {
+ const char *sched_switch = "sched:sched_switch";
++ const char *cycles = "cycles:u";
+ struct switch_tracking switch_tracking = { .tids = NULL, };
+ struct record_opts opts = {
+ .mmap_pages = UINT_MAX,
+@@ -372,12 +373,19 @@ static int test__switch_tracking(struct test_suite *test __maybe_unused, int sub
+ cpu_clocks_evsel = evlist__last(evlist);
+
+ /* Second event */
+- if (perf_pmu__has_hybrid())
+- err = parse_events(evlist, "cpu_core/cycles/u", NULL);
+- else
+- err = parse_events(evlist, "cycles:u", NULL);
++ if (perf_pmu__has_hybrid()) {
++ cycles = "cpu_core/cycles/u";
++ err = parse_events(evlist, cycles, NULL);
++ if (err) {
++ cycles = "cpu_atom/cycles/u";
++ pr_debug("Trying %s\n", cycles);
++ err = parse_events(evlist, cycles, NULL);
++ }
++ } else {
++ err = parse_events(evlist, cycles, NULL);
++ }
+ if (err) {
+- pr_debug("Failed to parse event cycles:u\n");
++ pr_debug("Failed to parse event %s\n", cycles);
+ goto out_err;
+ }
+
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 7ed2357404316..700c95eafd62a 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2391,9 +2391,12 @@ void parse_events_error__exit(struct parse_events_error *err)
+ void parse_events_error__handle(struct parse_events_error *err, int idx,
+ char *str, char *help)
+ {
+- if (WARN(!str, "WARNING: failed to provide error string\n")) {
+- free(help);
+- return;
++ if (WARN(!str, "WARNING: failed to provide error string\n"))
++ goto out_free;
++ if (!err) {
++ /* Assume caller does not want message printed */
++ pr_debug("event syntax error: %s\n", str);
++ goto out_free;
+ }
+ switch (err->num_errors) {
+ case 0:
+@@ -2419,6 +2422,11 @@ void parse_events_error__handle(struct parse_events_error *err, int idx,
+ break;
+ }
+ err->num_errors++;
++ return;
++
++out_free:
++ free(str);
++ free(help);
+ }
+
+ #define MAX_WIDTH 1000
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 062b5cbe67aff..dee6c527021c2 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1775,8 +1775,10 @@ int parse_perf_probe_command(const char *cmd, struct perf_probe_event *pev)
+ if (!pev->event && pev->point.function && pev->point.line
+ && !pev->point.lazy_line && !pev->point.offset) {
+ if (asprintf(&pev->event, "%s_L%d", pev->point.function,
+- pev->point.line) < 0)
+- return -ENOMEM;
++ pev->point.line) < 0) {
++ ret = -ENOMEM;
++ goto out;
++ }
+ }
+
+ /* Copy arguments and ensure return probe has no C argument */
+diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
+index 431f2bddf6c83..1f4f72d887f91 100644
+--- a/tools/testing/cxl/test/cxl.c
++++ b/tools/testing/cxl/test/cxl.c
+@@ -466,7 +466,6 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm)
+ .end = -1,
+ };
+
+- cxld->flags = CXL_DECODER_F_ENABLE;
+ cxld->interleave_ways = min_not_zero(target_count, 1);
+ cxld->interleave_granularity = SZ_4K;
+ cxld->target_type = CXL_DECODER_EXPANDER;
+diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
+index f1f8c40948c5c..bce6a21df0d58 100644
+--- a/tools/testing/cxl/test/mock.c
++++ b/tools/testing/cxl/test/mock.c
+@@ -208,13 +208,15 @@ int __wrap_cxl_await_media_ready(struct cxl_dev_state *cxlds)
+ }
+ EXPORT_SYMBOL_NS_GPL(__wrap_cxl_await_media_ready, CXL);
+
+-bool __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
+- struct cxl_hdm *cxlhdm)
++int __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
++ struct cxl_hdm *cxlhdm)
+ {
+ int rc = 0, index;
+ struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+- if (!ops || !ops->is_mock_dev(cxlds->dev))
++ if (ops && ops->is_mock_dev(cxlds->dev))
++ rc = 0;
++ else
+ rc = cxl_hdm_decode_init(cxlds, cxlhdm);
+ put_cxl_mock_ops(index);
+
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+index fa928b431555c..7c02509c71d0a 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+@@ -21,7 +21,6 @@ check_error 'p:^/bar vfs_read' # NO_GROUP_NAME
+ check_error 'p:^12345678901234567890123456789012345678901234567890123456789012345/bar vfs_read' # GROUP_TOO_LONG
+
+ check_error 'p:^foo.1/bar vfs_read' # BAD_GROUP_NAME
+-check_error 'p:foo/^ vfs_read' # NO_EVENT_NAME
+ check_error 'p:foo/^12345678901234567890123456789012345678901234567890123456789012345 vfs_read' # EVENT_TOO_LONG
+ check_error 'p:foo/^bar.1 vfs_read' # BAD_EVENT_NAME
+
+diff --git a/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
+index a15d21dc035a6..56eb83d1a3bdd 100755
+--- a/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
+@@ -181,37 +181,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -226,13 +232,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:4::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:4::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
+index a73f52efcb6cf..0446db9c6f748 100755
+--- a/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
+@@ -276,37 +276,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -321,13 +327,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
+index 8fea2c2e0b25d..d40183b4eccc8 100755
+--- a/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
+@@ -278,37 +278,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -323,13 +329,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index e2ea6c126c99f..24d4e9cb617e4 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -553,6 +553,18 @@ static void set_nonblock(int fd, bool nonblock)
+ fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
+ }
+
++static void shut_wr(int fd)
++{
++ /* Close our write side, ev. give some time
++ * for address notification and/or checking
++ * the current status
++ */
++ if (cfg_wait)
++ usleep(cfg_wait);
++
++ shutdown(fd, SHUT_WR);
++}
++
+ static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after_out)
+ {
+ struct pollfd fds = {
+@@ -630,14 +642,7 @@ static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after
+ /* ... and peer also closed already */
+ break;
+
+- /* ... but we still receive.
+- * Close our write side, ev. give some time
+- * for address notification and/or checking
+- * the current status
+- */
+- if (cfg_wait)
+- usleep(cfg_wait);
+- shutdown(peerfd, SHUT_WR);
++ shut_wr(peerfd);
+ } else {
+ if (errno == EINTR)
+ continue;
+@@ -767,7 +772,7 @@ static int copyfd_io_mmap(int infd, int peerfd, int outfd,
+ if (err)
+ return err;
+
+- shutdown(peerfd, SHUT_WR);
++ shut_wr(peerfd);
+
+ err = do_recvfile(peerfd, outfd);
+ *in_closed_after_out = true;
+@@ -791,6 +796,9 @@ static int copyfd_io_sendfile(int infd, int peerfd, int outfd,
+ err = do_sendfile(infd, peerfd, size);
+ if (err)
+ return err;
++
++ shut_wr(peerfd);
++
+ err = do_recvfile(peerfd, outfd);
+ *in_closed_after_out = true;
+ }
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 1bea2d16d4c11..b8fe10d941ce3 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -108,9 +108,9 @@ install: doc_install
+ $(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
+ $(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ @test ! -f $(DESTDIR)$(BINDIR)/osnoise || rm $(DESTDIR)$(BINDIR)/osnoise
+- ln -s $(DESTDIR)$(BINDIR)/rtla $(DESTDIR)$(BINDIR)/osnoise
++ ln -s rtla $(DESTDIR)$(BINDIR)/osnoise
+ @test ! -f $(DESTDIR)$(BINDIR)/timerlat || rm $(DESTDIR)$(BINDIR)/timerlat
+- ln -s $(DESTDIR)$(BINDIR)/rtla $(DESTDIR)$(BINDIR)/timerlat
++ ln -s rtla $(DESTDIR)$(BINDIR)/timerlat
+
+ .PHONY: clean tarball
+ clean: doc_clean
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index 5b98f3ee58a58..0fffaeedee767 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -125,7 +125,7 @@ static void usage(void)
+ "-n|--numa Show NUMA information\n"
+ "-N|--lines=K Show the first K slabs\n"
+ "-o|--ops Show kmem_cache_ops\n"
+- "-P|--partial Sort by number of partial slabs\n"
++ "-P|--partial Sort by number of partial slabs\n"
+ "-r|--report Detailed report on single slabs\n"
+ "-s|--shrink Shrink slabs\n"
+ "-S|--Size Sort by size\n"
+@@ -1067,15 +1067,27 @@ static void sort_slabs(void)
+ for (s2 = s1 + 1; s2 < slabinfo + slabs; s2++) {
+ int result;
+
+- if (sort_size)
+- result = slab_size(s1) < slab_size(s2);
+- else if (sort_active)
+- result = slab_activity(s1) < slab_activity(s2);
+- else if (sort_loss)
+- result = slab_waste(s1) < slab_waste(s2);
+- else if (sort_partial)
+- result = s1->partial < s2->partial;
+- else
++ if (sort_size) {
++ if (slab_size(s1) == slab_size(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_size(s1) < slab_size(s2);
++ } else if (sort_active) {
++ if (slab_activity(s1) == slab_activity(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_activity(s1) < slab_activity(s2);
++ } else if (sort_loss) {
++ if (slab_waste(s1) == slab_waste(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_waste(s1) < slab_waste(s2);
++ } else if (sort_partial) {
++ if (s1->partial == s2->partial)
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = s1->partial < s2->partial;
++ } else
+ result = strcasecmp(s1->name, s2->name);
+
+ if (show_inverted)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 98246f3dea87c..c56861ed0e382 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1085,6 +1085,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ if (!kvm)
+ return ERR_PTR(-ENOMEM);
+
++ /* KVM is pinned via open("/dev/kvm"), the fd passed to this ioctl(). */
++ __module_get(kvm_chardev_ops.owner);
++
+ KVM_MMU_LOCK_INIT(kvm);
+ mmgrab(current->mm);
+ kvm->mm = current->mm;
+@@ -1170,16 +1173,6 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ preempt_notifier_inc();
+ kvm_init_pm_notifier(kvm);
+
+- /*
+- * When the fd passed to this ioctl() is opened it pins the module,
+- * but try_module_get() also prevents getting a reference if the module
+- * is in MODULE_STATE_GOING (e.g. if someone ran "rmmod --wait").
+- */
+- if (!try_module_get(kvm_chardev_ops.owner)) {
+- r = -ENODEV;
+- goto out_err;
+- }
+-
+ return kvm;
+
+ out_err:
+@@ -1201,6 +1194,7 @@ out_err_no_irq_srcu:
+ out_err_no_srcu:
+ kvm_arch_free_vm(kvm);
+ mmdrop(current->mm);
++ module_put(kvm_chardev_ops.owner);
+ return ERR_PTR(r);
+ }
+
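
The kvm_main.c hunk above replaces a fallible try_module_get(), taken late
in kvm_create_vm(), with an unconditional __module_get() done before any
failure path, for the reason the new comment states: the open /dev/kvm file
descriptor already pins the module for the duration of the ioctl. A minimal
userspace sketch of the two primitives, using simplified stand-in types
rather than the kernel's real definitions, shows why the unconditional grab
is only safe when a reference is already held:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's module refcounting, for
 * illustration only; the real implementation lives in kernel/module/. */
enum module_state { MODULE_STATE_LIVE, MODULE_STATE_GOING };

struct module {
	enum module_state state;
	int refcnt;
};

/* try_module_get() refuses a reference once the module is going away
 * (e.g. after "rmmod --wait"), so it can fail spuriously here. */
static bool try_module_get(struct module *m)
{
	if (m->state == MODULE_STATE_GOING)
		return false;
	m->refcnt++;
	return true;
}

/* __module_get() takes a reference unconditionally; it is only safe when
 * the caller already holds one, as the open /dev/kvm fd does here. */
static void __module_get(struct module *m)
{
	m->refcnt++;
}

static void module_put(struct module *m)
{
	m->refcnt--;
}

int main(void)
{
	struct module kvm = { MODULE_STATE_LIVE, 1 /* pinned by the open fd */ };

	__module_get(&kvm);             /* the new VM takes its own reference */
	printf("refcnt after create: %d\n", kvm.refcnt);
	module_put(&kvm);               /* dropped when the VM is destroyed */

	kvm.state = MODULE_STATE_GOING; /* e.g. "rmmod --wait" in progress */
	printf("try_module_get while going: %d\n", try_module_get(&kvm));
	return 0;
}

The module_put() added to the out_err path keeps the count balanced when VM
creation fails after the early grab.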
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-25 17:37 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-25 17:37 UTC
To: gentoo-commits
commit: e637ccb494e7d6fa6f2e96a4eb6fba8a0c82e650
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 25 17:36:30 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 25 17:37:33 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e637ccb4
Add CONFIG_LANDLOCK to KSPP and RANDSTRUCT fix
Bug: https://bugs.gentoo.org/865685
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 0a380985..9e0701dd 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig 2022-05-11 13:20:07.110347567 -0400
-+++ b/Kconfig 2022-05-11 13:21:12.127174393 -0400
+--- a/Kconfig 2022-08-25 10:11:47.220973785 -0400
++++ b/Kconfig 2022-08-25 10:11:56.997682513 -0400
@@ -30,3 +30,5 @@ source "lib/Kconfig"
source "lib/Kconfig.debug"
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2022-05-10 13:47:17.750578524 -0400
-+++ b/distro/Kconfig 2022-05-11 13:21:20.540529032 -0400
-@@ -0,0 +1,290 @@
+--- /dev/null 2022-08-25 07:13:06.694086407 -0400
++++ b/distro/Kconfig 2022-08-25 13:21:55.150660724 -0400
+@@ -0,0 +1,291 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -185,7 +185,7 @@
+config GENTOO_KERNEL_SELF_PROTECTION_COMMON
+ bool "Enable Kernel Self Protection Project Recommendations"
+
-+ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT
++ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT && SECURITY && !ARCH_EPHEMERAL_INODES && RANDSTRUCT_PERFORMANCE
+
+ select BUG
+ select STRICT_KERNEL_RWX
@@ -202,6 +202,7 @@
+ select HARDENED_USERCOPY if HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+ select KFENCE if HAVE_ARCH_KFENCE && (!SLAB || SLUB)
+ select RANDOMIZE_KSTACK_OFFSET_DEFAULT if HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET && (INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION>=140000)
++ select SECURITY_LANDLOCK
+ select SCHED_CORE if SCHED_SMT
+ select BUG_ON_DATA_CORRUPTION
+ select SCHED_STACK_END_CHECK
@@ -224,7 +225,7 @@
+ select GCC_PLUGIN_LATENT_ENTROPY
+ select GCC_PLUGIN_STRUCTLEAK
+ select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
-+ select GCC_PLUGIN_RANDSTRUCT
++ select GCC_PLUGIN_RANDSTRUCT
+ select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
+ select ZERO_CALL_USED_REGS if CC_HAS_ZERO_CALL_USED_REGS
+
@@ -239,12 +240,12 @@
+ depends on !X86_MSR && X86_64 && GENTOO_KERNEL_SELF_PROTECTION
+ default n
+
++ select GCC_PLUGIN_STACKLEAK
++ select LEGACY_VSYSCALL_NONE
++ select PAGE_TABLE_ISOLATION
+ select RANDOMIZE_BASE
+ select RANDOMIZE_MEMORY
+ select RELOCATABLE
-+ select LEGACY_VSYSCALL_NONE
-+ select PAGE_TABLE_ISOLATION
-+ select GCC_PLUGIN_STACKLEAK
+ select VMAP_STACK
+
+
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-29 10:45 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-29 10:45 UTC
To: gentoo-commits
commit: ac98914d7ed7e6160bfce0df920d93cd885ee657
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 29 10:45:00 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 29 10:45:00 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ac98914d
Linux 5.19.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
1004_linux-5.19.5.patch | 16 ++++++++++++++++
2 files changed, 20 insertions(+)
diff --git a/0000_README b/0000_README
index 920f6ada..54d13e58 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-5.19.4.patch
From: http://www.kernel.org
Desc: Linux 5.19.4
+Patch: 1004_linux-5.19.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.19.5.patch b/1004_linux-5.19.5.patch
new file mode 100644
index 00000000..5dfcf134
--- /dev/null
+++ b/1004_linux-5.19.5.patch
@@ -0,0 +1,16 @@
+diff --git a/Makefile b/Makefile
+index 65dc4f93ffdbf..1c4f1ecb93488 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/scripts/dummy-tools/dummy-plugin-dir/include/plugin-version.h b/scripts/dummy-tools/dummy-plugin-dir/include/plugin-version.h
+new file mode 100644
+index 0000000000000..e69de29bb2d1d
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-31 12:11 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-31 12:11 UTC
To: gentoo-commits
commit: 0a9d19d1cdac2b8749a0d11ee55554609df56cc8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 12:10:53 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 12:10:53 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0a9d19d1
x86/sev: Don't use cc_platform_has() for early SEV-SNP calls
Bug: https://bugs.gentoo.org/865831
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1800_x86-sev-cc-platform-SEV-SNP-fix.patch | 72 ++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)
diff --git a/0000_README b/0000_README
index 54d13e58..309b3933 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1700_sparc-address-warray-bound-warnings.patch
From: https://github.com/KSPP/linux/issues/109
Desc: Address -Warray-bounds warnings
+Patch: 1800_x86-sev-cc-platform-SEV-SNP-fix.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kernel/sev.c?id=cdaa0a407f1acd3a44861e3aea6e3c7349e668f1
+Desc: Don't use cc_platform_has() for early SEV-SNP calls
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1800_x86-sev-cc-platform-SEV-SNP-fix.patch b/1800_x86-sev-cc-platform-SEV-SNP-fix.patch
new file mode 100644
index 00000000..8d8ceca5
--- /dev/null
+++ b/1800_x86-sev-cc-platform-SEV-SNP-fix.patch
@@ -0,0 +1,72 @@
+From cdaa0a407f1acd3a44861e3aea6e3c7349e668f1 Mon Sep 17 00:00:00 2001
+From: Tom Lendacky <thomas.lendacky@amd.com>
+Date: Tue, 23 Aug 2022 16:55:51 -0500
+Subject: x86/sev: Don't use cc_platform_has() for early SEV-SNP calls
+
+When running identity-mapped and depending on the kernel configuration,
+it is possible that the compiler uses jump tables when generating code
+for cc_platform_has().
+
+This causes a boot failure because the jump table uses un-mapped kernel
+virtual addresses, not identity-mapped addresses. This has been seen
+with CONFIG_RETPOLINE=n.
+
+Similar to sme_encrypt_kernel(), use an open-coded direct check for the
+status of SNP rather than trying to eliminate the jump table. This
+preserves any code optimization in cc_platform_has() that can be useful
+post boot. It also limits the changes to SEV-specific files so that
+future compiler features won't necessarily require possible build changes
+just because they are not compatible with running identity-mapped.
+
+ [ bp: Massage commit message. ]
+
+Fixes: 5e5ccff60a29 ("x86/sev: Add helper for validating pages in early enc attribute changes")
+Reported-by: Sean Christopherson <seanjc@google.com>
+Suggested-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Cc: <stable@vger.kernel.org> # 5.19.x
+Link: https://lore.kernel.org/all/YqfabnTRxFSM+LoX@google.com/
+---
+ arch/x86/kernel/sev.c | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+(limited to 'arch/x86/kernel/sev.c')
+
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index 63dc626627a03..4f84c3f11af5b 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -701,7 +701,13 @@ e_term:
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+ unsigned int npages)
+ {
+- if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
++ /*
++ * This can be invoked in early boot while running identity mapped, so
++ * use an open coded check for SNP instead of using cc_platform_has().
++ * This eliminates worries about jump tables or checking boot_cpu_data
++ * in the cc_platform_has() function.
++ */
++ if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+ return;
+
+ /*
+@@ -717,7 +723,13 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+ unsigned int npages)
+ {
+- if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
++ /*
++ * This can be invoked in early boot while running identity mapped, so
++ * use an open coded check for SNP instead of using cc_platform_has().
++ * This eliminates worries about jump tables or checking boot_cpu_data
++ * in the cc_platform_has() function.
++ */
++ if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+ return;
+
+ /* Invalidate the memory pages before they are marked shared in the RMP table. */
+--
+cgit
+
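
The rationale above is that cc_platform_has(), depending on kernel
configuration (the failure was seen with CONFIG_RETPOLINE=n), may be
compiled into a jump table of absolute kernel virtual addresses, which are
not mapped while this early boot code still runs identity-mapped. A direct
bit test against sev_status compiles to a plain TEST/AND and has no such
dependency. A stand-alone sketch of the open-coded pattern follows, with
the bit position assumed for illustration rather than taken from the
kernel headers:

#include <stdbool.h>
#include <stdint.h>

/* Assumed flag layout for illustration; the kernel derives the real
 * value from its SEV status MSR definitions. */
#define MSR_AMD64_SEV_SNP_ENABLED (1ULL << 2)

static uint64_t sev_status;	/* filled from the SEV status MSR at boot */

/* Open-coded check: a single bitwise test on a global. Nothing here can
 * be lowered into a table of code or data pointers, so the function stays
 * valid when executed under an identity mapping. */
static bool snp_active_early(void)
{
	return sev_status & MSR_AMD64_SEV_SNP_ENABLED;
}

int main(void)
{
	sev_status = MSR_AMD64_SEV_SNP_ENABLED;
	return snp_active_early() ? 0 : 1;
}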
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-31 13:33 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-31 13:33 UTC
To: gentoo-commits
commit: cad5407b6e30efaf8a467c509b29ba5fb67f0cc8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 13:32:29 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 13:32:29 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cad5407b
Revert fix from upstream for DRM/i915 thanks to Luigi 'Comio' Mantellini
Bug: https://bugs.gentoo.org/866023
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2700_revert-drm-i915-dma-resv-obj-fix.patch | 107 ++++++++++++++++++++++++++++
2 files changed, 111 insertions(+)
diff --git a/0000_README b/0000_README
index 309b3933..4172ad7c 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2700_revert-drm-i915-dma-resv-obj-fix.patch
+From: https://bugs.gentoo.org/866023
+Desc: Revert fix from upstream for drm i915 thanks to Luigi 'Comio' Mantellini
+
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2700_revert-drm-i915-dma-resv-obj-fix.patch b/2700_revert-drm-i915-dma-resv-obj-fix.patch
new file mode 100644
index 00000000..a9fcaf4a
--- /dev/null
+++ b/2700_revert-drm-i915-dma-resv-obj-fix.patch
@@ -0,0 +1,107 @@
+From d481c481ca7813d688ffcb1c5418b48f83d945c1 Mon Sep 17 00:00:00 2001
+From: Luigi 'Comio' Mantellini <luigi.mantellini@gmail.com>
+Date: Sun, 28 Aug 2022 09:17:35 +0200
+Subject: [PATCH] Revert "drm/i915: Individualize fences before adding to
+ dma_resv obj"
+
+This reverts commit 842d9346b2fdda4d2fb8ccb5b87faef1ac01ab51.
+---
+ .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +-
+ drivers/gpu/drm/i915/i915_vma.c | 48 ++++++++-----------
+ 2 files changed, 21 insertions(+), 30 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 30fe847c6664..c326bd2b444f 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -999,8 +999,7 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
+ }
+ }
+
+- /* Reserve enough slots to accommodate composite fences */
+- err = dma_resv_reserve_fences(vma->obj->base.resv, eb->num_batches);
++ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
+ if (err)
+ return err;
+
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 16460b169ed2..e71826f0e4b1 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -23,7 +23,6 @@
+ */
+
+ #include <linux/sched/mm.h>
+-#include <linux/dma-fence-array.h>
+ #include <drm/drm_gem.h>
+
+ #include "display/intel_frontbuffer.h"
+@@ -1839,21 +1838,6 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
+ if (unlikely(err))
+ return err;
+
+- /*
+- * Reserve fences slot early to prevent an allocation after preparing
+- * the workload and associating fences with dma_resv.
+- */
+- if (fence && !(flags & __EXEC_OBJECT_NO_RESERVE)) {
+- struct dma_fence *curr;
+- int idx;
+-
+- dma_fence_array_for_each(curr, idx, fence)
+- ;
+- err = dma_resv_reserve_fences(vma->obj->base.resv, idx);
+- if (unlikely(err))
+- return err;
+- }
+-
+ if (flags & EXEC_OBJECT_WRITE) {
+ struct intel_frontbuffer *front;
+
+@@ -1863,23 +1847,31 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
+ i915_active_add_request(&front->write, rq);
+ intel_frontbuffer_put(front);
+ }
+- }
+
+- if (fence) {
+- struct dma_fence *curr;
+- enum dma_resv_usage usage;
+- int idx;
++ if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
++ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
++ if (unlikely(err))
++ return err;
++ }
+
+- obj->read_domains = 0;
+- if (flags & EXEC_OBJECT_WRITE) {
+- usage = DMA_RESV_USAGE_WRITE;
++ if (fence) {
++ dma_resv_add_fence(vma->obj->base.resv, fence,
++ DMA_RESV_USAGE_WRITE);
+ obj->write_domain = I915_GEM_DOMAIN_RENDER;
+- } else {
+- usage = DMA_RESV_USAGE_READ;
++ obj->read_domains = 0;
++ }
++ } else {
++ if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
++ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
++ if (unlikely(err))
++ return err;
+ }
+
+- dma_fence_array_for_each(curr, idx, fence)
+- dma_resv_add_fence(vma->obj->base.resv, curr, usage);
++ if (fence) {
++ dma_resv_add_fence(vma->obj->base.resv, fence,
++ DMA_RESV_USAGE_READ);
++ obj->write_domain = 0;
++ }
+ }
+
+ if (flags & EXEC_OBJECT_NEEDS_FENCE && vma->fence)
+--
+2.37.2
+
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-08-31 15:44 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-08-31 15:44 UTC
To: gentoo-commits
commit: cf0d78b576a8493230af436a1bcf0c6d71c3f370
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 15:44:02 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 15:44:02 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cf0d78b5
Linux patch 5.19.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-5.19.6.patch | 6485 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6489 insertions(+)
diff --git a/0000_README b/0000_README
index 4172ad7c..3deab328 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-5.19.5.patch
From: http://www.kernel.org
Desc: Linux 5.19.5
+Patch: 1005_linux-5.19.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-5.19.6.patch b/1005_linux-5.19.6.patch
new file mode 100644
index 00000000..c5d4f0ee
--- /dev/null
+++ b/1005_linux-5.19.6.patch
@@ -0,0 +1,6485 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index bcc974d276dc4..3cda940108f63 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -527,6 +527,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ /sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++ /sys/devices/system/cpu/vulnerabilities/retbleed
+ Date: January 2018
+ Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description: Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+index 9393c50b5afc9..c98fd11907cc8 100644
+--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -230,6 +230,20 @@ The possible values in this file are:
+ * - 'Mitigation: Clear CPU buffers'
+ - The processor is vulnerable and the CPU buffer clearing mitigation is
+ enabled.
++ * - 'Unknown: No mitigations'
++ - The processor vulnerability status is unknown because it is
++ out of Servicing period. Mitigation is not attempted.
++
++Definitions:
++------------
++
++Servicing period: The process of providing functional and security updates to
++Intel processors or platforms, utilizing the Intel Platform Update (IPU)
++process or other similar mechanisms.
++
++End of Servicing Updates (ESU): ESU is the date at which Intel will no
++longer provide Servicing, such as through IPU or other similar update
++processes. ESU dates will typically be aligned to end of quarter.
+
+ If the processor is vulnerable then the following information is appended to
+ the above information:
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index e4fe443bea77d..1b38d0f70677e 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5260,6 +5260,8 @@
+ rodata= [KNL]
+ on Mark read-only kernel memory as read-only (default).
+ off Leave read-only kernel memory writable for debugging.
++ full Mark read-only kernel memory and aliases as read-only
++ [arm64]
+
+ rockchip.usb_uart
+ Enable the uart passthrough on the designated usb port
+diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
+index fcd650bdbc7e2..01d9858197832 100644
+--- a/Documentation/admin-guide/sysctl/net.rst
++++ b/Documentation/admin-guide/sysctl/net.rst
+@@ -271,7 +271,7 @@ poll cycle or the number of packets processed reaches netdev_budget.
+ netdev_max_backlog
+ ------------------
+
+-Maximum number of packets, queued on the INPUT side, when the interface
++Maximum number of packets, queued on the INPUT side, when the interface
+ receives packets faster than kernel can process them.
+
+ netdev_rss_key
+diff --git a/Makefile b/Makefile
+index 1c4f1ecb93488..cb68101ea070a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
+index 9bb1873f52951..6f86b7ab6c28f 100644
+--- a/arch/arm64/include/asm/fpsimd.h
++++ b/arch/arm64/include/asm/fpsimd.h
+@@ -153,7 +153,7 @@ struct vl_info {
+
+ #ifdef CONFIG_ARM64_SVE
+
+-extern void sve_alloc(struct task_struct *task);
++extern void sve_alloc(struct task_struct *task, bool flush);
+ extern void fpsimd_release_task(struct task_struct *task);
+ extern void fpsimd_sync_to_sve(struct task_struct *task);
+ extern void fpsimd_force_sync_to_sve(struct task_struct *task);
+@@ -256,7 +256,7 @@ size_t sve_state_size(struct task_struct const *task);
+
+ #else /* ! CONFIG_ARM64_SVE */
+
+-static inline void sve_alloc(struct task_struct *task) { }
++static inline void sve_alloc(struct task_struct *task, bool flush) { }
+ static inline void fpsimd_release_task(struct task_struct *task) { }
+ static inline void sve_sync_to_fpsimd(struct task_struct *task) { }
+ static inline void sve_sync_from_fpsimd_zeropad(struct task_struct *task) { }
+diff --git a/arch/arm64/include/asm/setup.h b/arch/arm64/include/asm/setup.h
+index 6437df6617009..f4af547ef54ca 100644
+--- a/arch/arm64/include/asm/setup.h
++++ b/arch/arm64/include/asm/setup.h
+@@ -3,6 +3,8 @@
+ #ifndef __ARM64_ASM_SETUP_H
+ #define __ARM64_ASM_SETUP_H
+
++#include <linux/string.h>
++
+ #include <uapi/asm/setup.h>
+
+ void *get_early_fdt_ptr(void);
+@@ -14,4 +16,19 @@ void early_fdt_map(u64 dt_phys);
+ extern phys_addr_t __fdt_pointer __initdata;
+ extern u64 __cacheline_aligned boot_args[4];
+
++static inline bool arch_parse_debug_rodata(char *arg)
++{
++ extern bool rodata_enabled;
++ extern bool rodata_full;
++
++ if (arg && !strcmp(arg, "full")) {
++ rodata_enabled = true;
++ rodata_full = true;
++ return true;
++ }
++
++ return false;
++}
++#define arch_parse_debug_rodata arch_parse_debug_rodata
++
+ #endif
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 6b92989f4cc27..b374e258f705f 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -208,6 +208,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1286807
+ {
+ ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++ },
++ {
+ /* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
+ ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ },
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index aecf3071efddd..52f7ffdffbcb9 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -716,10 +716,12 @@ size_t sve_state_size(struct task_struct const *task)
+ * do_sve_acc() case, there is no ABI requirement to hide stale data
+ * written previously be task.
+ */
+-void sve_alloc(struct task_struct *task)
++void sve_alloc(struct task_struct *task, bool flush)
+ {
+ if (task->thread.sve_state) {
+- memset(task->thread.sve_state, 0, sve_state_size(task));
++ if (flush)
++ memset(task->thread.sve_state, 0,
++ sve_state_size(task));
+ return;
+ }
+
+@@ -1389,7 +1391,7 @@ void do_sve_acc(unsigned long esr, struct pt_regs *regs)
+ return;
+ }
+
+- sve_alloc(current);
++ sve_alloc(current, true);
+ if (!current->thread.sve_state) {
+ force_sig(SIGKILL);
+ return;
+@@ -1440,7 +1442,7 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ return;
+ }
+
+- sve_alloc(current);
++ sve_alloc(current, false);
+ sme_alloc(current);
+ if (!current->thread.sve_state || !current->thread.za_state) {
+ force_sig(SIGKILL);
+@@ -1461,17 +1463,6 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ fpsimd_bind_task_to_cpu();
+ }
+
+- /*
+- * If SVE was not already active initialise the SVE registers,
+- * any non-shared state between the streaming and regular SVE
+- * registers is architecturally guaranteed to be zeroed when
+- * we enter streaming mode. We do not need to initialize ZA
+- * since ZA must be disabled at this point and enabling ZA is
+- * architecturally defined to zero ZA.
+- */
+- if (system_supports_sve() && !test_thread_flag(TIF_SVE))
+- sve_init_regs();
+-
+ put_cpu_fpsimd_context();
+ }
+
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 21da83187a602..eb7c08dfb8348 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -882,7 +882,7 @@ static int sve_set_common(struct task_struct *target,
+ * state and ensure there's storage.
+ */
+ if (target->thread.svcr != old_svcr)
+- sve_alloc(target);
++ sve_alloc(target, true);
+ }
+
+ /* Registers: FPSIMD-only case */
+@@ -912,7 +912,7 @@ static int sve_set_common(struct task_struct *target,
+ goto out;
+ }
+
+- sve_alloc(target);
++ sve_alloc(target, true);
+ if (!target->thread.sve_state) {
+ ret = -ENOMEM;
+ clear_tsk_thread_flag(target, TIF_SVE);
+@@ -1082,7 +1082,7 @@ static int za_set(struct task_struct *target,
+
+ /* Ensure there is some SVE storage for streaming mode */
+ if (!target->thread.sve_state) {
+- sve_alloc(target);
++ sve_alloc(target, false);
+ if (!target->thread.sve_state) {
+ clear_thread_flag(TIF_SME);
+ ret = -ENOMEM;
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index b0980fbb6bc7f..8bb631bf9464c 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -307,7 +307,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
+ fpsimd_flush_task_state(current);
+ /* From now, fpsimd_thread_switch() won't touch thread.sve_state */
+
+- sve_alloc(current);
++ sve_alloc(current, true);
+ if (!current->thread.sve_state) {
+ clear_thread_flag(TIF_SVE);
+ return -ENOMEM;
+@@ -922,6 +922,16 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka,
+
+ /* Signal handlers are invoked with ZA and streaming mode disabled */
+ if (system_supports_sme()) {
++ /*
++ * If we were in streaming mode the saved register
++ * state was SVE but we will exit SM and use the
++ * FPSIMD register state - flush the saved FPSIMD
++ * register state in case it gets loaded.
++ */
++ if (current->thread.svcr & SVCR_SM_MASK)
++ memset(&current->thread.uw.fpsimd_state, 0,
++ sizeof(current->thread.uw.fpsimd_state));
++
+ current->thread.svcr &= ~(SVCR_ZA_MASK |
+ SVCR_SM_MASK);
+ sme_smstop();
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 626ec32873c6c..1de896e4d3347 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -625,24 +625,6 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
+ vm_area_add_early(vma);
+ }
+
+-static int __init parse_rodata(char *arg)
+-{
+- int ret = strtobool(arg, &rodata_enabled);
+- if (!ret) {
+- rodata_full = false;
+- return 0;
+- }
+-
+- /* permit 'full' in addition to boolean options */
+- if (strcmp(arg, "full"))
+- return -EINVAL;
+-
+- rodata_enabled = true;
+- rodata_full = true;
+- return 0;
+-}
+-early_param("rodata", parse_rodata);
+-
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ static int __init map_entry_trampoline(void)
+ {
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index fa400055b2d50..cd2b3fe156724 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -147,10 +147,10 @@ menu "Processor type and features"
+
+ choice
+ prompt "Processor type"
+- default PA7000
++ default PA7000 if "$(ARCH)" = "parisc"
+
+ config PA7000
+- bool "PA7000/PA7100"
++ bool "PA7000/PA7100" if "$(ARCH)" = "parisc"
+ help
+ This is the processor type of your CPU. This information is
+ used for optimizing purposes. In order to compile a kernel
+@@ -161,21 +161,21 @@ config PA7000
+ which is required on some machines.
+
+ config PA7100LC
+- bool "PA7100LC"
++ bool "PA7100LC" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-L processor, as used in the
+ 712, 715/64, 715/80, 715/100, 715/100XC, 725/100, 743, 748,
+ D200, D210, D300, D310 and E-class
+
+ config PA7200
+- bool "PA7200"
++ bool "PA7200" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-T' processor, as used in the
+ C100, C110, J100, J110, J210XC, D250, D260, D350, D360,
+ K100, K200, K210, K220, K400, K410 and K420
+
+ config PA7300LC
+- bool "PA7300LC"
++ bool "PA7300LC" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-L2 processor, as used in the
+ 744, A180, B132L, B160L, B180L, C132L, C160L, C180L,
+@@ -225,17 +225,8 @@ config MLONGCALLS
+ Enabling this option will probably slow down your kernel.
+
+ config 64BIT
+- bool "64-bit kernel"
++ def_bool "$(ARCH)" = "parisc64"
+ depends on PA8X00
+- help
+- Enable this if you want to support 64bit kernel on PA-RISC platform.
+-
+- At the moment, only people willing to use more than 2GB of RAM,
+- or having a 64bit-only capable PA-RISC machine should say Y here.
+-
+- Since there is no 64bit userland on PA-RISC, there is no point to
+- enable this option otherwise. The 64bit kernel is significantly bigger
+- and slower than the 32bit one.
+
+ choice
+ prompt "Kernel page size"
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index bac581b5ecfc5..e8a4d77cff53a 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -93,7 +93,7 @@
+ #define R1(i) (((i)>>21)&0x1f)
+ #define R2(i) (((i)>>16)&0x1f)
+ #define R3(i) ((i)&0x1f)
+-#define FR3(i) ((((i)<<1)&0x1f)|(((i)>>6)&1))
++#define FR3(i) ((((i)&0x1f)<<1)|(((i)>>6)&1))
+ #define IM(i,n) (((i)>>1&((1<<(n-1))-1))|((i)&1?((0-1L)<<(n-1)):0))
+ #define IM5_2(i) IM((i)>>16,5)
+ #define IM5_3(i) IM((i),5)
+diff --git a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
+index 044982a11df50..f3f87ed2007f3 100644
+--- a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
++++ b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
+@@ -84,12 +84,10 @@
+
+ phy1: ethernet-phy@9 {
+ reg = <9>;
+- ti,fifo-depth = <0x1>;
+ };
+
+ phy0: ethernet-phy@8 {
+ reg = <8>;
+- ti,fifo-depth = <0x1>;
+ };
+ };
+
+@@ -102,7 +100,6 @@
+ disable-wp;
+ cap-sd-highspeed;
+ cap-mmc-highspeed;
+- card-detect-delay = <200>;
+ mmc-ddr-1_8v;
+ mmc-hs200-1_8v;
+ sd-uhs-sdr12;
+diff --git a/arch/riscv/boot/dts/microchip/mpfs-polarberry.dts b/arch/riscv/boot/dts/microchip/mpfs-polarberry.dts
+index 82c93c8f5c17e..c87cc2d8fe29f 100644
+--- a/arch/riscv/boot/dts/microchip/mpfs-polarberry.dts
++++ b/arch/riscv/boot/dts/microchip/mpfs-polarberry.dts
+@@ -54,12 +54,10 @@
+
+ phy1: ethernet-phy@5 {
+ reg = <5>;
+- ti,fifo-depth = <0x01>;
+ };
+
+ phy0: ethernet-phy@4 {
+ reg = <4>;
+- ti,fifo-depth = <0x01>;
+ };
+ };
+
+@@ -72,7 +70,6 @@
+ disable-wp;
+ cap-sd-highspeed;
+ cap-mmc-highspeed;
+- card-detect-delay = <200>;
+ mmc-ddr-1_8v;
+ mmc-hs200-1_8v;
+ sd-uhs-sdr12;
+diff --git a/arch/riscv/boot/dts/microchip/mpfs.dtsi b/arch/riscv/boot/dts/microchip/mpfs.dtsi
+index 496d3b7642bd1..9f5bce1488d93 100644
+--- a/arch/riscv/boot/dts/microchip/mpfs.dtsi
++++ b/arch/riscv/boot/dts/microchip/mpfs.dtsi
+@@ -169,7 +169,7 @@
+ cache-size = <2097152>;
+ cache-unified;
+ interrupt-parent = <&plic>;
+- interrupts = <1>, <2>, <3>;
++ interrupts = <1>, <3>, <4>, <2>;
+ };
+
+ clint: clint@2000000 {
+@@ -446,9 +446,8 @@
+ ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
+ msi-parent = <&pcie>;
+ msi-controller;
+- microchip,axi-m-atr0 = <0x10 0x0>;
+ status = "disabled";
+- pcie_intc: legacy-interrupt-controller {
++ pcie_intc: interrupt-controller {
+ #address-cells = <0>;
+ #interrupt-cells = <1>;
+ interrupt-controller;
+diff --git a/arch/riscv/include/asm/signal.h b/arch/riscv/include/asm/signal.h
+new file mode 100644
+index 0000000000000..532c29ef03769
+--- /dev/null
++++ b/arch/riscv/include/asm/signal.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef __ASM_SIGNAL_H
++#define __ASM_SIGNAL_H
++
++#include <uapi/asm/signal.h>
++#include <uapi/asm/ptrace.h>
++
++asmlinkage __visible
++void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags);
++
++#endif
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 78933ac04995b..67322f878e0d7 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -42,6 +42,8 @@
+
+ #ifndef __ASSEMBLY__
+
++extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
++
+ #include <asm/processor.h>
+ #include <asm/csr.h>
+
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index 38b05ca6fe669..5a2de6b6f8822 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -15,6 +15,7 @@
+
+ #include <asm/ucontext.h>
+ #include <asm/vdso.h>
++#include <asm/signal.h>
+ #include <asm/signal32.h>
+ #include <asm/switch_to.h>
+ #include <asm/csr.h>
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 39d0f8bba4b40..635e6ec269380 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -20,9 +20,10 @@
+
+ #include <asm/asm-prototypes.h>
+ #include <asm/bug.h>
++#include <asm/csr.h>
+ #include <asm/processor.h>
+ #include <asm/ptrace.h>
+-#include <asm/csr.h>
++#include <asm/thread_info.h>
+
+ int show_unhandled_signals = 1;
+
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 89949b9f3cf88..d5119e039d855 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -91,6 +91,18 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+
+ memcpy(dst, src, arch_task_struct_size);
+ dst->thread.fpu.regs = dst->thread.fpu.fprs;
++
++ /*
++ * Don't transfer over the runtime instrumentation or the guarded
++ * storage control block pointers. These fields are cleared here instead
++ * of in copy_thread() to avoid premature freeing of associated memory
++ * on fork() failure. Wait to clear the RI flag because ->stack still
++ * refers to the source thread.
++ */
++ dst->thread.ri_cb = NULL;
++ dst->thread.gs_cb = NULL;
++ dst->thread.gs_bc_cb = NULL;
++
+ return 0;
+ }
+
+@@ -150,13 +162,11 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
+ frame->childregs.flags = 0;
+ if (new_stackp)
+ frame->childregs.gprs[15] = new_stackp;
+-
+- /* Don't copy runtime instrumentation info */
+- p->thread.ri_cb = NULL;
++ /*
++ * Clear the runtime instrumentation flag after the above childregs
++ * copy. The CB pointer was already cleared in arch_dup_task_struct().
++ */
+ frame->childregs.psw.mask &= ~PSW_MASK_RI;
+- /* Don't copy guarded storage control block */
+- p->thread.gs_cb = NULL;
+- p->thread.gs_bc_cb = NULL;
+
+ /* Set a new TLS ? */
+ if (clone_flags & CLONE_SETTLS) {
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index e173b6187ad56..516d232e66d9d 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -379,7 +379,9 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
+ flags = FAULT_FLAG_DEFAULT;
+ if (user_mode(regs))
+ flags |= FAULT_FLAG_USER;
+- if (access == VM_WRITE || is_write)
++ if (is_write)
++ access = VM_WRITE;
++ if (access == VM_WRITE)
+ flags |= FAULT_FLAG_WRITE;
+ mmap_read_lock(mm);
+
+diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
+index 4910bf230d7b4..62208ec04ca4b 100644
+--- a/arch/x86/boot/compressed/misc.h
++++ b/arch/x86/boot/compressed/misc.h
+@@ -132,7 +132,17 @@ void snp_set_page_private(unsigned long paddr);
+ void snp_set_page_shared(unsigned long paddr);
+ void sev_prep_identity_maps(unsigned long top_level_pgt);
+ #else
+-static inline void sev_enable(struct boot_params *bp) { }
++static inline void sev_enable(struct boot_params *bp)
++{
++ /*
++ * bp->cc_blob_address should only be set by boot/compressed kernel.
++ * Initialize it to 0 unconditionally (thus here in this stub too) to
++ * ensure that uninitialized values from buggy bootloaders aren't
++ * propagated.
++ */
++ if (bp)
++ bp->cc_blob_address = 0;
++}
+ static inline void sev_es_shutdown_ghcb(void) { }
+ static inline bool sev_es_check_ghcb_fault(unsigned long address)
+ {
+diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
+index 52f989f6acc28..c93930d5ccbd0 100644
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -276,6 +276,14 @@ void sev_enable(struct boot_params *bp)
+ struct msr m;
+ bool snp;
+
++ /*
++ * bp->cc_blob_address should only be set by boot/compressed kernel.
++ * Initialize it to 0 to ensure that uninitialized values from
++ * buggy bootloaders aren't propagated.
++ */
++ if (bp)
++ bp->cc_blob_address = 0;
++
+ /*
+ * Setup/preliminary detection of SNP. This will be sanity-checked
+ * against CPUID/MSR values later.
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 682338e7e2a38..4dd19819053a5 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -311,7 +311,7 @@ SYM_CODE_START(entry_INT80_compat)
+ * Interrupts are off on entry.
+ */
+ ASM_CLAC /* Do this early to minimize exposure */
+- SWAPGS
++ ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
+
+ /*
+ * User tracing code (ptrace or signal handlers) might assume that
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index ba60427caa6d3..9b48d957d2b3f 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -291,6 +291,7 @@ static u64 load_latency_data(struct perf_event *event, u64 status)
+ static u64 store_latency_data(struct perf_event *event, u64 status)
+ {
+ union intel_x86_pebs_dse dse;
++ union perf_mem_data_src src;
+ u64 val;
+
+ dse.val = status;
+@@ -304,7 +305,14 @@ static u64 store_latency_data(struct perf_event *event, u64 status)
+
+ val |= P(BLK, NA);
+
+- return val;
++ /*
++ * the pebs_data_source table is only for loads
++ * so override the mem_op to say STORE instead
++ */
++ src.val = val;
++ src.mem_op = P(OP,STORE);
++
++ return src.val;
+ }
+
+ struct pebs_record_core {
+@@ -822,7 +830,7 @@ struct event_constraint intel_glm_pebs_event_constraints[] = {
+
+ struct event_constraint intel_grt_pebs_event_constraints[] = {
+ /* Allow all events as PEBS with no flags */
+- INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xf),
++ INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0x3),
+ INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xf),
+ EVENT_CONSTRAINT_END
+ };
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 4f70fb6c2c1eb..47fca6a7a8bcd 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1097,6 +1097,14 @@ static int intel_pmu_setup_hw_lbr_filter(struct perf_event *event)
+
+ if (static_cpu_has(X86_FEATURE_ARCH_LBR)) {
+ reg->config = mask;
++
++ /*
++ * The Arch LBR HW can retrieve the common branch types
++ * from the LBR_INFO. It doesn't require the high overhead
++ * SW disassemble.
++ * Enable the branch type by default for the Arch LBR.
++ */
++ reg->reg |= X86_BR_TYPE_SAVE;
+ return 0;
+ }
+
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index ce440011cc4e4..1ef4f7861e2ec 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -841,6 +841,22 @@ int snb_pci2phy_map_init(int devid)
+ return 0;
+ }
+
++static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box, struct perf_event *event)
++{
++ struct hw_perf_event *hwc = &event->hw;
++
++ /*
++ * SNB IMC counters are 32-bit and are laid out back to back
++ * in MMIO space. Therefore we must use a 32-bit accessor function
++ * using readq() from uncore_mmio_read_counter() causes problems
++ * because it is reading 64-bit at a time. This is okay for the
++ * uncore_perf_event_update() function because it drops the upper
++ * 32-bits but not okay for plain uncore_read_counter() as invoked
++ * in uncore_pmu_event_start().
++ */
++ return (u64)readl(box->io_addr + hwc->event_base);
++}
++
+ static struct pmu snb_uncore_imc_pmu = {
+ .task_ctx_nr = perf_invalid_context,
+ .event_init = snb_uncore_imc_event_init,
+@@ -860,7 +876,7 @@ static struct intel_uncore_ops snb_uncore_imc_ops = {
+ .disable_event = snb_uncore_imc_disable_event,
+ .enable_event = snb_uncore_imc_enable_event,
+ .hw_config = snb_uncore_imc_hw_config,
+- .read_counter = uncore_mmio_read_counter,
++ .read_counter = snb_uncore_imc_read_counter,
+ };
+
+ static struct intel_uncore_type snb_uncore_imc = {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index ede8990f3e416..ccbb838f995c8 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -456,7 +456,8 @@
+ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
+-#define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */
+-#define X86_BUG_EIBRS_PBRSB X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
++#define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
++#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */
++#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index d3a3cc6772ee1..c31083d77e09c 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -35,33 +35,56 @@
+ #define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
+
+ /*
++ * Common helper for __FILL_RETURN_BUFFER and __FILL_ONE_RETURN.
++ */
++#define __FILL_RETURN_SLOT \
++ ANNOTATE_INTRA_FUNCTION_CALL; \
++ call 772f; \
++ int3; \
++772:
++
++/*
++ * Stuff the entire RSB.
++ *
+ * Google experimented with loop-unrolling and this turned out to be
+ * the optimal version - two calls, each with their own speculation
+ * trap should their return address end up getting used, in a loop.
+ */
+-#define __FILL_RETURN_BUFFER(reg, nr, sp) \
+- mov $(nr/2), reg; \
+-771: \
+- ANNOTATE_INTRA_FUNCTION_CALL; \
+- call 772f; \
+-773: /* speculation trap */ \
+- UNWIND_HINT_EMPTY; \
+- pause; \
+- lfence; \
+- jmp 773b; \
+-772: \
+- ANNOTATE_INTRA_FUNCTION_CALL; \
+- call 774f; \
+-775: /* speculation trap */ \
+- UNWIND_HINT_EMPTY; \
+- pause; \
+- lfence; \
+- jmp 775b; \
+-774: \
+- add $(BITS_PER_LONG/8) * 2, sp; \
+- dec reg; \
+- jnz 771b; \
+- /* barrier for jnz misprediction */ \
++#ifdef CONFIG_X86_64
++#define __FILL_RETURN_BUFFER(reg, nr) \
++ mov $(nr/2), reg; \
++771: \
++ __FILL_RETURN_SLOT \
++ __FILL_RETURN_SLOT \
++ add $(BITS_PER_LONG/8) * 2, %_ASM_SP; \
++ dec reg; \
++ jnz 771b; \
++ /* barrier for jnz misprediction */ \
++ lfence;
++#else
++/*
++ * i386 doesn't unconditionally have LFENCE, as such it can't
++ * do a loop.
++ */
++#define __FILL_RETURN_BUFFER(reg, nr) \
++ .rept nr; \
++ __FILL_RETURN_SLOT; \
++ .endr; \
++ add $(BITS_PER_LONG/8) * nr, %_ASM_SP;
++#endif
++
++/*
++ * Stuff a single RSB slot.
++ *
++ * To mitigate Post-Barrier RSB speculation, one CALL instruction must be
++ * forced to retire before letting a RET instruction execute.
++ *
++ * On PBRSB-vulnerable CPUs, it is not safe for a RET to be executed
++ * before this point.
++ */
++#define __FILL_ONE_RETURN \
++ __FILL_RETURN_SLOT \
++ add $(BITS_PER_LONG/8), %_ASM_SP; \
+ lfence;
+
+ #ifdef __ASSEMBLY__
+@@ -120,28 +143,15 @@
+ #endif
+ .endm
+
+-.macro ISSUE_UNBALANCED_RET_GUARD
+- ANNOTATE_INTRA_FUNCTION_CALL
+- call .Lunbalanced_ret_guard_\@
+- int3
+-.Lunbalanced_ret_guard_\@:
+- add $(BITS_PER_LONG/8), %_ASM_SP
+- lfence
+-.endm
+-
+ /*
+ * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+ * monstrosity above, manually.
+ */
+-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2
+-.ifb \ftr2
+- ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
+-.else
+- ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2
+-.endif
+- __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
+-.Lunbalanced_\@:
+- ISSUE_UNBALANCED_RET_GUARD
++.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2=ALT_NOT(X86_FEATURE_ALWAYS)
++ ALTERNATIVE_2 "jmp .Lskip_rsb_\@", \
++ __stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr, \
++ __stringify(__FILL_ONE_RETURN), \ftr2
++
+ .Lskip_rsb_\@:
+ .endm
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 510d85261132b..da7c361f47e0d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -433,7 +433,8 @@ static void __init mmio_select_mitigation(void)
+ u64 ia32_cap;
+
+ if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
+- cpu_mitigations_off()) {
++ boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN) ||
++ cpu_mitigations_off()) {
+ mmio_mitigation = MMIO_MITIGATION_OFF;
+ return;
+ }
+@@ -538,6 +539,8 @@ out:
+ pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+ if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+ pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
++ else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++ pr_info("MMIO Stale Data: Unknown: No mitigations\n");
+ }
+
+ static void __init md_clear_select_mitigation(void)
+@@ -2275,6 +2278,9 @@ static ssize_t tsx_async_abort_show_state(char *buf)
+
+ static ssize_t mmio_stale_data_show_state(char *buf)
+ {
++ if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++ return sysfs_emit(buf, "Unknown: No mitigations\n");
++
+ if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
+
+@@ -2421,6 +2427,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ return srbds_show_state(buf);
+
+ case X86_BUG_MMIO_STALE_DATA:
++ case X86_BUG_MMIO_UNKNOWN:
+ return mmio_stale_data_show_state(buf);
+
+ case X86_BUG_RETBLEED:
+@@ -2480,7 +2487,10 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *
+
+ ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+- return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
++ if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
++ return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_UNKNOWN);
++ else
++ return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
+ }
+
+ ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 64a73f415f036..3e508f2390983 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1135,7 +1135,8 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #define NO_SWAPGS BIT(6)
+ #define NO_ITLB_MULTIHIT BIT(7)
+ #define NO_SPECTRE_V2 BIT(8)
+-#define NO_EIBRS_PBRSB BIT(9)
++#define NO_MMIO BIT(9)
++#define NO_EIBRS_PBRSB BIT(10)
+
+ #define VULNWL(vendor, family, model, whitelist) \
+ X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, whitelist)
+@@ -1158,6 +1159,11 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ VULNWL(VORTEX, 6, X86_MODEL_ANY, NO_SPECULATION),
+
+ /* Intel Family 6 */
++ VULNWL_INTEL(TIGERLAKE, NO_MMIO),
++ VULNWL_INTEL(TIGERLAKE_L, NO_MMIO),
++ VULNWL_INTEL(ALDERLAKE, NO_MMIO),
++ VULNWL_INTEL(ALDERLAKE_L, NO_MMIO),
++
+ VULNWL_INTEL(ATOM_SALTWELL, NO_SPECULATION | NO_ITLB_MULTIHIT),
+ VULNWL_INTEL(ATOM_SALTWELL_TABLET, NO_SPECULATION | NO_ITLB_MULTIHIT),
+ VULNWL_INTEL(ATOM_SALTWELL_MID, NO_SPECULATION | NO_ITLB_MULTIHIT),
+@@ -1176,9 +1182,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ VULNWL_INTEL(ATOM_AIRMONT_NP, NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+- VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_INTEL(ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
++ VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_INTEL(ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+
+ /*
+ * Technically, swapgs isn't serializing on AMD (despite it previously
+@@ -1193,18 +1199,18 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ VULNWL_INTEL(ATOM_TREMONT_D, NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+
+ /* AMD Family 0xf - 0x12 */
+- VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+
+ /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
+- VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
++ VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+
+ /* Zhaoxin Family 7 */
+- VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS),
+- VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS),
++ VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
++ VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
+ {}
+ };
+
+@@ -1358,10 +1364,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ * Affected CPU list is generally enough to enumerate the vulnerability,
+ * but for virtualization case check for ARCH_CAP MSR bits also, VMM may
+ * not want the guest to enumerate the bug.
++ *
++ * Set X86_BUG_MMIO_UNKNOWN for CPUs that are neither in the blacklist,
++ * nor in the whitelist and also don't enumerate MSR ARCH_CAP MMIO bits.
+ */
+- if (cpu_matches(cpu_vuln_blacklist, MMIO) &&
+- !arch_cap_mmio_immune(ia32_cap))
+- setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
++ if (!arch_cap_mmio_immune(ia32_cap)) {
++ if (cpu_matches(cpu_vuln_blacklist, MMIO))
++ setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
++ else if (!cpu_matches(cpu_vuln_whitelist, NO_MMIO))
++ setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN);
++ }
+
+ if (!cpu_has(c, X86_FEATURE_BTC_NO)) {
+ if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index 63dc626627a03..4f84c3f11af5b 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -701,7 +701,13 @@ e_term:
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+ unsigned int npages)
+ {
+- if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
++ /*
++ * This can be invoked in early boot while running identity mapped, so
++ * use an open coded check for SNP instead of using cc_platform_has().
++ * This eliminates worries about jump tables or checking boot_cpu_data
++ * in the cc_platform_has() function.
++ */
++ if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+ return;
+
+ /*
+@@ -717,7 +723,13 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+ unsigned int npages)
+ {
+- if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
++ /*
++ * This can be invoked in early boot while running identity mapped, so
++ * use an open coded check for SNP instead of using cc_platform_has().
++ * This eliminates worries about jump tables or checking boot_cpu_data
++ * in the cc_platform_has() function.
++ */
++ if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+ return;
+
+ /* Invalidate the memory pages before they are marked shared in the RMP table. */
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 38185aedf7d16..0ea57da929407 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -93,22 +93,27 @@ static struct orc_entry *orc_find(unsigned long ip);
+ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+ {
+ struct ftrace_ops *ops;
+- unsigned long caller;
++ unsigned long tramp_addr, offset;
+
+ ops = ftrace_ops_trampoline(ip);
+ if (!ops)
+ return NULL;
+
++ /* Set tramp_addr to the start of the code copied by the trampoline */
+ if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
+- caller = (unsigned long)ftrace_regs_call;
++ tramp_addr = (unsigned long)ftrace_regs_caller;
+ else
+- caller = (unsigned long)ftrace_call;
++ tramp_addr = (unsigned long)ftrace_caller;
++
++	/* Now advance tramp_addr to the location within the trampoline that ip points at */
++ offset = ip - ops->trampoline;
++ tramp_addr += offset;
+
+ /* Prevent unlikely recursion */
+- if (ip == caller)
++ if (ip == tramp_addr)
+ return NULL;
+
+- return orc_find(caller);
++ return orc_find(tramp_addr);
+ }
+ #else
+ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index d5ef64ddd35e9..66a209f7eb86d 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -62,6 +62,7 @@
+
+ static bool __read_mostly pat_bp_initialized;
+ static bool __read_mostly pat_disabled = !IS_ENABLED(CONFIG_X86_PAT);
++static bool __initdata pat_force_disabled = !IS_ENABLED(CONFIG_X86_PAT);
+ static bool __read_mostly pat_bp_enabled;
+ static bool __read_mostly pat_cm_initialized;
+
+@@ -86,6 +87,7 @@ void pat_disable(const char *msg_reason)
+ static int __init nopat(char *str)
+ {
+ pat_disable("PAT support disabled via boot option.");
++ pat_force_disabled = true;
+ return 0;
+ }
+ early_param("nopat", nopat);
+@@ -272,7 +274,7 @@ static void pat_ap_init(u64 pat)
+ wrmsrl(MSR_IA32_CR_PAT, pat);
+ }
+
+-void init_cache_modes(void)
++void __init init_cache_modes(void)
+ {
+ u64 pat = 0;
+
+@@ -313,6 +315,12 @@ void init_cache_modes(void)
+ */
+ pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
+ PAT(4, WB) | PAT(5, WT) | PAT(6, UC_MINUS) | PAT(7, UC);
++ } else if (!pat_force_disabled && cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) {
++ /*
++ * Clearly PAT is enabled underneath. Allow pat_enabled() to
++ * reflect this.
++ */
++ pat_bp_enabled = true;
+ }
+
+ __init_cache_modes(pat);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 1eb13d57a946f..0a299941c622e 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1925,7 +1925,8 @@ out:
+ /* If we didn't flush the entire list, we could have told the driver
+ * there was more coming, but that turned out to be a lie.
+ */
+- if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued)
++ if ((!list_empty(list) || errors || needs_resource ||
++ ret == BLK_STS_DEV_RESOURCE) && q->mq_ops->commit_rqs && queued)
+ q->mq_ops->commit_rqs(hctx);
+ /*
+ * Any items that need requeuing? Stuff them into hctx->dispatch,
+@@ -2678,6 +2679,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ list_del_init(&rq->queuelist);
+ ret = blk_mq_request_issue_directly(rq, list_empty(list));
+ if (ret != BLK_STS_OK) {
++ errors++;
+ if (ret == BLK_STS_RESOURCE ||
+ ret == BLK_STS_DEV_RESOURCE) {
+ blk_mq_request_bypass_insert(rq, false,
+@@ -2685,7 +2687,6 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ break;
+ }
+ blk_mq_end_request(rq, ret);
+- errors++;
+ } else
+ queued++;
+ }
+diff --git a/drivers/acpi/processor_thermal.c b/drivers/acpi/processor_thermal.c
+index d8b2dfcd59b5f..83a430ad22556 100644
+--- a/drivers/acpi/processor_thermal.c
++++ b/drivers/acpi/processor_thermal.c
+@@ -151,7 +151,7 @@ void acpi_thermal_cpufreq_exit(struct cpufreq_policy *policy)
+ unsigned int cpu;
+
+ for_each_cpu(cpu, policy->related_cpus) {
+- struct acpi_processor *pr = per_cpu(processors, policy->cpu);
++ struct acpi_processor *pr = per_cpu(processors, cpu);
+
+ if (pr)
+ freq_qos_remove_request(&pr->thermal_req);
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index d044418294f94..7981a25983764 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -395,12 +395,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ size_t size, data_offsets_size;
+ int ret;
+
++ mmap_read_lock(alloc->vma_vm_mm);
+ if (!binder_alloc_get_vma(alloc)) {
++ mmap_read_unlock(alloc->vma_vm_mm);
+ binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+ "%d: binder_alloc_buf, no vma\n",
+ alloc->pid);
+ return ERR_PTR(-ESRCH);
+ }
++ mmap_read_unlock(alloc->vma_vm_mm);
+
+ data_offsets_size = ALIGN(data_size, sizeof(void *)) +
+ ALIGN(offsets_size, sizeof(void *));
+@@ -922,17 +925,25 @@ void binder_alloc_print_pages(struct seq_file *m,
+ * Make sure the binder_alloc is fully initialized, otherwise we might
+ * read inconsistent state.
+ */
+- if (binder_alloc_get_vma(alloc) != NULL) {
+- for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+- page = &alloc->pages[i];
+- if (!page->page_ptr)
+- free++;
+- else if (list_empty(&page->lru))
+- active++;
+- else
+- lru++;
+- }
++
++ mmap_read_lock(alloc->vma_vm_mm);
++ if (binder_alloc_get_vma(alloc) == NULL) {
++ mmap_read_unlock(alloc->vma_vm_mm);
++ goto uninitialized;
+ }
++
++ mmap_read_unlock(alloc->vma_vm_mm);
++ for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
++ page = &alloc->pages[i];
++ if (!page->page_ptr)
++ free++;
++ else if (list_empty(&page->lru))
++ active++;
++ else
++ lru++;
++ }
++
++uninitialized:
+ mutex_unlock(&alloc->mutex);
+ seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
+ seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 084f9b8a0ba3c..a59910ef948e9 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -979,6 +979,11 @@ loop_set_status_from_info(struct loop_device *lo,
+
+ lo->lo_offset = info->lo_offset;
+ lo->lo_sizelimit = info->lo_sizelimit;
++
++ /* loff_t vars have been assigned __u64 */
++ if (lo->lo_offset < 0 || lo->lo_sizelimit < 0)
++ return -EOVERFLOW;
++
+ memcpy(lo->lo_file_name, info->lo_file_name, LO_NAME_SIZE);
+ lo->lo_file_name[LO_NAME_SIZE-1] = 0;
+ lo->lo_flags = info->lo_flags;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index b8549c61ff2ce..b144be41290e3 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1144,14 +1144,15 @@ static ssize_t bd_stat_show(struct device *dev,
+ static ssize_t debug_stat_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+- int version = 2;
++ int version = 1;
+ struct zram *zram = dev_to_zram(dev);
+ ssize_t ret;
+
+ down_read(&zram->init_lock);
+ ret = scnprintf(buf, PAGE_SIZE,
+- "version: %d\n%8llu\n",
++ "version: %d\n%8llu %8llu\n",
+ version,
++ (u64)atomic64_read(&zram->stats.writestall),
+ (u64)atomic64_read(&zram->stats.miss_free));
+ up_read(&zram->init_lock);
+
+@@ -1367,6 +1368,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
+ }
+ kunmap_atomic(mem);
+
++compress_again:
+ zstrm = zcomp_stream_get(zram->comp);
+ src = kmap_atomic(page);
+ ret = zcomp_compress(zstrm, src, &comp_len);
+@@ -1375,20 +1377,39 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
+ if (unlikely(ret)) {
+ zcomp_stream_put(zram->comp);
+ pr_err("Compression failed! err=%d\n", ret);
++ zs_free(zram->mem_pool, handle);
+ return ret;
+ }
+
+ if (comp_len >= huge_class_size)
+ comp_len = PAGE_SIZE;
+-
+- handle = zs_malloc(zram->mem_pool, comp_len,
+- __GFP_KSWAPD_RECLAIM |
+- __GFP_NOWARN |
+- __GFP_HIGHMEM |
+- __GFP_MOVABLE);
+-
+- if (unlikely(!handle)) {
++ /*
++ * handle allocation has 2 paths:
++ * a) fast path is executed with preemption disabled (for
++ * per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
++ * since we can't sleep;
++ * b) slow path enables preemption and attempts to allocate
++ * the page with __GFP_DIRECT_RECLAIM bit set. we have to
++ * put per-cpu compression stream and, thus, to re-do
++ * the compression once handle is allocated.
++ *
++ * if we have a 'non-null' handle here then we are coming
++ * from the slow path and handle has already been allocated.
++ */
++ if (!handle)
++ handle = zs_malloc(zram->mem_pool, comp_len,
++ __GFP_KSWAPD_RECLAIM |
++ __GFP_NOWARN |
++ __GFP_HIGHMEM |
++ __GFP_MOVABLE);
++ if (!handle) {
+ zcomp_stream_put(zram->comp);
++ atomic64_inc(&zram->stats.writestall);
++ handle = zs_malloc(zram->mem_pool, comp_len,
++ GFP_NOIO | __GFP_HIGHMEM |
++ __GFP_MOVABLE);
++ if (handle)
++ goto compress_again;
+ return -ENOMEM;
+ }
+
+@@ -1946,6 +1967,7 @@ static int zram_add(void)
+ if (ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE)
+ blk_queue_max_write_zeroes_sectors(zram->disk->queue, UINT_MAX);
+
++ blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
+ ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
+ if (ret)
+ goto out_cleanup_disk;
+diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
+index 158c91e548501..80c3b43b4828f 100644
+--- a/drivers/block/zram/zram_drv.h
++++ b/drivers/block/zram/zram_drv.h
+@@ -81,6 +81,7 @@ struct zram_stats {
+ atomic64_t huge_pages_since; /* no. of huge pages since zram set up */
+ atomic64_t pages_stored; /* no. of pages currently stored */
+ atomic_long_t max_used_pages; /* no. of maximum pages stored */
++ atomic64_t writestall; /* no. of write slow paths */
+ atomic64_t miss_free; /* no. of missed free */
+ #ifdef CONFIG_ZRAM_WRITEBACK
+ atomic64_t bd_count; /* no. of pages in backing device */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index d9f57a20a8bc5..29ebc9e51305a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -377,12 +377,8 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf)
+ f2g = &gfx_v10_3_kfd2kgd;
+ break;
+ case IP_VERSION(10, 3, 6):
+- gfx_target_version = 100306;
+- if (!vf)
+- f2g = &gfx_v10_3_kfd2kgd;
+- break;
+ case IP_VERSION(10, 3, 7):
+- gfx_target_version = 100307;
++ gfx_target_version = 100306;
+ if (!vf)
+ f2g = &gfx_v10_3_kfd2kgd;
+ break;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 05076e530e7d4..e29175e4b44ce 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -820,6 +820,15 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict,
+ if (ret == 0) {
+ ret = nouveau_fence_new(chan, false, &fence);
+ if (ret == 0) {
++ /* TODO: figure out a better solution here
++ *
++ * wait on the fence here explicitly as going through
++ * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
++ *
++ * Without this the operation can timeout and we'll fallback to a
++ * software copy, which might take several minutes to finish.
++ */
++ nouveau_fence_wait(fence, false, false);
+ ret = ttm_bo_move_accel_cleanup(bo,
+ &fence->base,
+ evict, false,
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 522b3d6b8c46b..91e7e80fce489 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6244,11 +6244,11 @@ static void mddev_detach(struct mddev *mddev)
+ static void __md_stop(struct mddev *mddev)
+ {
+ struct md_personality *pers = mddev->pers;
++ md_bitmap_destroy(mddev);
+ mddev_detach(mddev);
+ /* Ensure ->event_work is done */
+ if (mddev->event_work.func)
+ flush_workqueue(md_misc_wq);
+- md_bitmap_destroy(mddev);
+ spin_lock(&mddev->lock);
+ mddev->pers = NULL;
+ spin_unlock(&mddev->lock);
+@@ -6266,6 +6266,7 @@ void md_stop(struct mddev *mddev)
+ /* stop the array and free an attached data structures.
+ * This is called from dm-raid
+ */
++ __md_stop_writes(mddev);
+ __md_stop(mddev);
+ bioset_exit(&mddev->bio_set);
+ bioset_exit(&mddev->sync_set);
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index d7fb33c078e81..1f0120cbe9e80 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -2007,30 +2007,24 @@ void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
+ */
+ void bond_3ad_initialize(struct bonding *bond, u16 tick_resolution)
+ {
+- /* check that the bond is not initialized yet */
+- if (!MAC_ADDRESS_EQUAL(&(BOND_AD_INFO(bond).system.sys_mac_addr),
+- bond->dev->dev_addr)) {
+-
+- BOND_AD_INFO(bond).aggregator_identifier = 0;
+-
+- BOND_AD_INFO(bond).system.sys_priority =
+- bond->params.ad_actor_sys_prio;
+- if (is_zero_ether_addr(bond->params.ad_actor_system))
+- BOND_AD_INFO(bond).system.sys_mac_addr =
+- *((struct mac_addr *)bond->dev->dev_addr);
+- else
+- BOND_AD_INFO(bond).system.sys_mac_addr =
+- *((struct mac_addr *)bond->params.ad_actor_system);
++ BOND_AD_INFO(bond).aggregator_identifier = 0;
++ BOND_AD_INFO(bond).system.sys_priority =
++ bond->params.ad_actor_sys_prio;
++ if (is_zero_ether_addr(bond->params.ad_actor_system))
++ BOND_AD_INFO(bond).system.sys_mac_addr =
++ *((struct mac_addr *)bond->dev->dev_addr);
++ else
++ BOND_AD_INFO(bond).system.sys_mac_addr =
++ *((struct mac_addr *)bond->params.ad_actor_system);
+
+- /* initialize how many times this module is called in one
+- * second (should be about every 100ms)
+- */
+- ad_ticks_per_sec = tick_resolution;
++ /* initialize how many times this module is called in one
++ * second (should be about every 100ms)
++ */
++ ad_ticks_per_sec = tick_resolution;
+
+- bond_3ad_initiate_agg_selection(bond,
+- AD_AGGREGATOR_SELECTION_TIMER *
+- ad_ticks_per_sec);
+- }
++ bond_3ad_initiate_agg_selection(bond,
++ AD_AGGREGATOR_SELECTION_TIMER *
++ ad_ticks_per_sec);
+ }
+
+ /**
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 12a599d5e61a4..c771797fd902f 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -898,17 +898,6 @@ static void ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
+ }
+ }
+
+-static enum dsa_tag_protocol ksz8_get_tag_protocol(struct dsa_switch *ds,
+- int port,
+- enum dsa_tag_protocol mp)
+-{
+- struct ksz_device *dev = ds->priv;
+-
+- /* ksz88x3 uses the same tag schema as KSZ9893 */
+- return ksz_is_ksz88x3(dev) ?
+- DSA_TAG_PROTO_KSZ9893 : DSA_TAG_PROTO_KSZ8795;
+-}
+-
+ static u32 ksz8_sw_get_phy_flags(struct dsa_switch *ds, int port)
+ {
+ /* Silicon Errata Sheet (DS80000830A):
+@@ -969,11 +958,9 @@ static void ksz8_flush_dyn_mac_table(struct ksz_device *dev, int port)
+ }
+ }
+
+-static int ksz8_port_vlan_filtering(struct dsa_switch *ds, int port, bool flag,
++static int ksz8_port_vlan_filtering(struct ksz_device *dev, int port, bool flag,
+ struct netlink_ext_ack *extack)
+ {
+- struct ksz_device *dev = ds->priv;
+-
+ if (ksz_is_ksz88x3(dev))
+ return -ENOTSUPP;
+
+@@ -998,12 +985,11 @@ static void ksz8_port_enable_pvid(struct ksz_device *dev, int port, bool state)
+ }
+ }
+
+-static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
++static int ksz8_port_vlan_add(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_vlan *vlan,
+ struct netlink_ext_ack *extack)
+ {
+ bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+- struct ksz_device *dev = ds->priv;
+ struct ksz_port *p = &dev->ports[port];
+ u16 data, new_pvid = 0;
+ u8 fid, member, valid;
+@@ -1071,10 +1057,9 @@ static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
++static int ksz8_port_vlan_del(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_vlan *vlan)
+ {
+- struct ksz_device *dev = ds->priv;
+ u16 data, pvid;
+ u8 fid, member, valid;
+
+@@ -1104,12 +1089,10 @@ static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static int ksz8_port_mirror_add(struct dsa_switch *ds, int port,
++static int ksz8_port_mirror_add(struct ksz_device *dev, int port,
+ struct dsa_mall_mirror_tc_entry *mirror,
+ bool ingress, struct netlink_ext_ack *extack)
+ {
+- struct ksz_device *dev = ds->priv;
+-
+ if (ingress) {
+ ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, true);
+ dev->mirror_rx |= BIT(port);
+@@ -1128,10 +1111,9 @@ static int ksz8_port_mirror_add(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static void ksz8_port_mirror_del(struct dsa_switch *ds, int port,
++static void ksz8_port_mirror_del(struct ksz_device *dev, int port,
+ struct dsa_mall_mirror_tc_entry *mirror)
+ {
+- struct ksz_device *dev = ds->priv;
+ u8 data;
+
+ if (mirror->ingress) {
+@@ -1272,7 +1254,7 @@ static void ksz8_config_cpu_port(struct dsa_switch *ds)
+ continue;
+ if (!ksz_is_ksz88x3(dev)) {
+ ksz_pread8(dev, i, regs[P_REMOTE_STATUS], &remote);
+- if (remote & PORT_FIBER_MODE)
++ if (remote & KSZ8_PORT_FIBER_MODE)
+ p->fiber = 1;
+ }
+ if (p->fiber)
+@@ -1371,13 +1353,9 @@ static int ksz8_setup(struct dsa_switch *ds)
+ return ksz8_handle_global_errata(ds);
+ }
+
+-static void ksz8_get_caps(struct dsa_switch *ds, int port,
++static void ksz8_get_caps(struct ksz_device *dev, int port,
+ struct phylink_config *config)
+ {
+- struct ksz_device *dev = ds->priv;
+-
+- ksz_phylink_get_caps(ds, port, config);
+-
+ config->mac_capabilities = MAC_10 | MAC_100;
+
+ /* Silicon Errata Sheet (DS80000830A):
+@@ -1394,12 +1372,12 @@ static void ksz8_get_caps(struct dsa_switch *ds, int port,
+ }
+
+ static const struct dsa_switch_ops ksz8_switch_ops = {
+- .get_tag_protocol = ksz8_get_tag_protocol,
++ .get_tag_protocol = ksz_get_tag_protocol,
+ .get_phy_flags = ksz8_sw_get_phy_flags,
+ .setup = ksz8_setup,
+ .phy_read = ksz_phy_read16,
+ .phy_write = ksz_phy_write16,
+- .phylink_get_caps = ksz8_get_caps,
++ .phylink_get_caps = ksz_phylink_get_caps,
+ .phylink_mac_link_down = ksz_mac_link_down,
+ .port_enable = ksz_enable_port,
+ .get_strings = ksz_get_strings,
+@@ -1409,14 +1387,14 @@ static const struct dsa_switch_ops ksz8_switch_ops = {
+ .port_bridge_leave = ksz_port_bridge_leave,
+ .port_stp_state_set = ksz8_port_stp_state_set,
+ .port_fast_age = ksz_port_fast_age,
+- .port_vlan_filtering = ksz8_port_vlan_filtering,
+- .port_vlan_add = ksz8_port_vlan_add,
+- .port_vlan_del = ksz8_port_vlan_del,
++ .port_vlan_filtering = ksz_port_vlan_filtering,
++ .port_vlan_add = ksz_port_vlan_add,
++ .port_vlan_del = ksz_port_vlan_del,
+ .port_fdb_dump = ksz_port_fdb_dump,
+ .port_mdb_add = ksz_port_mdb_add,
+ .port_mdb_del = ksz_port_mdb_del,
+- .port_mirror_add = ksz8_port_mirror_add,
+- .port_mirror_del = ksz8_port_mirror_del,
++ .port_mirror_add = ksz_port_mirror_add,
++ .port_mirror_del = ksz_port_mirror_del,
+ };
+
+ static u32 ksz8_get_port_addr(int port, int offset)
+@@ -1424,51 +1402,6 @@ static u32 ksz8_get_port_addr(int port, int offset)
+ return PORT_CTRL_ADDR(port, offset);
+ }
+
+-static int ksz8_switch_detect(struct ksz_device *dev)
+-{
+- u8 id1, id2;
+- u16 id16;
+- int ret;
+-
+- /* read chip id */
+- ret = ksz_read16(dev, REG_CHIP_ID0, &id16);
+- if (ret)
+- return ret;
+-
+- id1 = id16 >> 8;
+- id2 = id16 & SW_CHIP_ID_M;
+-
+- switch (id1) {
+- case KSZ87_FAMILY_ID:
+- if ((id2 != CHIP_ID_94 && id2 != CHIP_ID_95))
+- return -ENODEV;
+-
+- if (id2 == CHIP_ID_95) {
+- u8 val;
+-
+- id2 = 0x95;
+- ksz_read8(dev, REG_PORT_STATUS_0, &val);
+- if (val & PORT_FIBER_MODE)
+- id2 = 0x65;
+- } else if (id2 == CHIP_ID_94) {
+- id2 = 0x94;
+- }
+- break;
+- case KSZ88_FAMILY_ID:
+- if (id2 != CHIP_ID_63)
+- return -ENODEV;
+- break;
+- default:
+- dev_err(dev->dev, "invalid family id: %d\n", id1);
+- return -ENODEV;
+- }
+- id16 &= ~0xff;
+- id16 |= id2;
+- dev->chip_id = id16;
+-
+- return 0;
+-}
+-
+ static int ksz8_switch_init(struct ksz_device *dev)
+ {
+ struct ksz8 *ksz8 = dev->priv;
+@@ -1521,8 +1454,13 @@ static const struct ksz_dev_ops ksz8_dev_ops = {
+ .r_mib_pkt = ksz8_r_mib_pkt,
+ .freeze_mib = ksz8_freeze_mib,
+ .port_init_cnt = ksz8_port_init_cnt,
++ .vlan_filtering = ksz8_port_vlan_filtering,
++ .vlan_add = ksz8_port_vlan_add,
++ .vlan_del = ksz8_port_vlan_del,
++ .mirror_add = ksz8_port_mirror_add,
++ .mirror_del = ksz8_port_mirror_del,
++ .get_caps = ksz8_get_caps,
+ .shutdown = ksz8_reset_switch,
+- .detect = ksz8_switch_detect,
+ .init = ksz8_switch_init,
+ .exit = ksz8_switch_exit,
+ };
+diff --git a/drivers/net/dsa/microchip/ksz8795_reg.h b/drivers/net/dsa/microchip/ksz8795_reg.h
+index 4109433b6b6c2..b8f6ad7581bcd 100644
+--- a/drivers/net/dsa/microchip/ksz8795_reg.h
++++ b/drivers/net/dsa/microchip/ksz8795_reg.h
+@@ -14,23 +14,10 @@
+ #define KS_PRIO_M 0x3
+ #define KS_PRIO_S 2
+
+-#define REG_CHIP_ID0 0x00
+-
+-#define KSZ87_FAMILY_ID 0x87
+-#define KSZ88_FAMILY_ID 0x88
+-
+-#define REG_CHIP_ID1 0x01
+-
+-#define SW_CHIP_ID_M 0xF0
+-#define SW_CHIP_ID_S 4
+ #define SW_REVISION_M 0x0E
+ #define SW_REVISION_S 1
+ #define SW_START 0x01
+
+-#define CHIP_ID_94 0x60
+-#define CHIP_ID_95 0x90
+-#define CHIP_ID_63 0x30
+-
+ #define KSZ8863_REG_SW_RESET 0x43
+
+ #define KSZ8863_GLOBAL_SOFTWARE_RESET BIT(4)
+@@ -217,8 +204,6 @@
+ #define REG_PORT_4_STATUS_0 0x48
+
+ /* For KSZ8765. */
+-#define PORT_FIBER_MODE BIT(7)
+-
+ #define PORT_REMOTE_ASYM_PAUSE BIT(5)
+ #define PORT_REMOTE_SYM_PAUSE BIT(4)
+ #define PORT_REMOTE_100BTX_FD BIT(3)
+@@ -322,7 +307,6 @@
+
+ #define REG_PORT_CTRL_5 0x05
+
+-#define REG_PORT_STATUS_0 0x08
+ #define REG_PORT_STATUS_1 0x09
+ #define REG_PORT_LINK_MD_CTRL 0x0A
+ #define REG_PORT_LINK_MD_RESULT 0x0B
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index ebad795e4e95f..125124fdefbf4 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -276,18 +276,6 @@ static void ksz9477_port_init_cnt(struct ksz_device *dev, int port)
+ mutex_unlock(&mib->cnt_mutex);
+ }
+
+-static enum dsa_tag_protocol ksz9477_get_tag_protocol(struct dsa_switch *ds,
+- int port,
+- enum dsa_tag_protocol mp)
+-{
+- enum dsa_tag_protocol proto = DSA_TAG_PROTO_KSZ9477;
+- struct ksz_device *dev = ds->priv;
+-
+- if (dev->features & IS_9893)
+- proto = DSA_TAG_PROTO_KSZ9893;
+- return proto;
+-}
+-
+ static int ksz9477_phy_read16(struct dsa_switch *ds, int addr, int reg)
+ {
+ struct ksz_device *dev = ds->priv;
+@@ -389,12 +377,10 @@ static void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port)
+ }
+ }
+
+-static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
++static int ksz9477_port_vlan_filtering(struct ksz_device *dev, int port,
+ bool flag,
+ struct netlink_ext_ack *extack)
+ {
+- struct ksz_device *dev = ds->priv;
+-
+ if (flag) {
+ ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL,
+ PORT_VLAN_LOOKUP_VID_0, true);
+@@ -408,11 +394,10 @@ static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static int ksz9477_port_vlan_add(struct dsa_switch *ds, int port,
++static int ksz9477_port_vlan_add(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_vlan *vlan,
+ struct netlink_ext_ack *extack)
+ {
+- struct ksz_device *dev = ds->priv;
+ u32 vlan_table[3];
+ bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ int err;
+@@ -445,10 +430,9 @@ static int ksz9477_port_vlan_add(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static int ksz9477_port_vlan_del(struct dsa_switch *ds, int port,
++static int ksz9477_port_vlan_del(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_vlan *vlan)
+ {
+- struct ksz_device *dev = ds->priv;
+ bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ u32 vlan_table[3];
+ u16 pvid;
+@@ -835,11 +819,10 @@ exit:
+ return ret;
+ }
+
+-static int ksz9477_port_mirror_add(struct dsa_switch *ds, int port,
++static int ksz9477_port_mirror_add(struct ksz_device *dev, int port,
+ struct dsa_mall_mirror_tc_entry *mirror,
+ bool ingress, struct netlink_ext_ack *extack)
+ {
+- struct ksz_device *dev = ds->priv;
+ u8 data;
+ int p;
+
+@@ -875,10 +858,9 @@ static int ksz9477_port_mirror_add(struct dsa_switch *ds, int port,
+ return 0;
+ }
+
+-static void ksz9477_port_mirror_del(struct dsa_switch *ds, int port,
++static void ksz9477_port_mirror_del(struct ksz_device *dev, int port,
+ struct dsa_mall_mirror_tc_entry *mirror)
+ {
+- struct ksz_device *dev = ds->priv;
+ bool in_use = false;
+ u8 data;
+ int p;
+@@ -1100,11 +1082,9 @@ static void ksz9477_phy_errata_setup(struct ksz_device *dev, int port)
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x20, 0xeeee);
+ }
+
+-static void ksz9477_get_caps(struct dsa_switch *ds, int port,
++static void ksz9477_get_caps(struct ksz_device *dev, int port,
+ struct phylink_config *config)
+ {
+- ksz_phylink_get_caps(ds, port, config);
+-
+ config->mac_capabilities = MAC_10 | MAC_100 | MAC_1000FD |
+ MAC_ASYM_PAUSE | MAC_SYM_PAUSE;
+ }
+@@ -1329,12 +1309,12 @@ static int ksz9477_setup(struct dsa_switch *ds)
+ }
+
+ static const struct dsa_switch_ops ksz9477_switch_ops = {
+- .get_tag_protocol = ksz9477_get_tag_protocol,
++ .get_tag_protocol = ksz_get_tag_protocol,
+ .setup = ksz9477_setup,
+ .phy_read = ksz9477_phy_read16,
+ .phy_write = ksz9477_phy_write16,
+ .phylink_mac_link_down = ksz_mac_link_down,
+- .phylink_get_caps = ksz9477_get_caps,
++ .phylink_get_caps = ksz_phylink_get_caps,
+ .port_enable = ksz_enable_port,
+ .get_strings = ksz_get_strings,
+ .get_ethtool_stats = ksz_get_ethtool_stats,
+@@ -1343,16 +1323,16 @@ static const struct dsa_switch_ops ksz9477_switch_ops = {
+ .port_bridge_leave = ksz_port_bridge_leave,
+ .port_stp_state_set = ksz9477_port_stp_state_set,
+ .port_fast_age = ksz_port_fast_age,
+- .port_vlan_filtering = ksz9477_port_vlan_filtering,
+- .port_vlan_add = ksz9477_port_vlan_add,
+- .port_vlan_del = ksz9477_port_vlan_del,
++ .port_vlan_filtering = ksz_port_vlan_filtering,
++ .port_vlan_add = ksz_port_vlan_add,
++ .port_vlan_del = ksz_port_vlan_del,
+ .port_fdb_dump = ksz9477_port_fdb_dump,
+ .port_fdb_add = ksz9477_port_fdb_add,
+ .port_fdb_del = ksz9477_port_fdb_del,
+ .port_mdb_add = ksz9477_port_mdb_add,
+ .port_mdb_del = ksz9477_port_mdb_del,
+- .port_mirror_add = ksz9477_port_mirror_add,
+- .port_mirror_del = ksz9477_port_mirror_del,
++ .port_mirror_add = ksz_port_mirror_add,
++ .port_mirror_del = ksz_port_mirror_del,
+ .get_stats64 = ksz_get_stats64,
+ .port_change_mtu = ksz9477_change_mtu,
+ .port_max_mtu = ksz9477_max_mtu,
+@@ -1363,14 +1343,15 @@ static u32 ksz9477_get_port_addr(int port, int offset)
+ return PORT_CTRL_ADDR(port, offset);
+ }
+
+-static int ksz9477_switch_detect(struct ksz_device *dev)
++static int ksz9477_switch_init(struct ksz_device *dev)
+ {
+ u8 data8;
+- u8 id_hi;
+- u8 id_lo;
+- u32 id32;
+ int ret;
+
++ dev->ds->ops = &ksz9477_switch_ops;
++
++ dev->port_mask = (1 << dev->info->port_cnt) - 1;
++
+ /* turn off SPI DO Edge select */
+ ret = ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8);
+ if (ret)
+@@ -1381,10 +1362,6 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
+ if (ret)
+ return ret;
+
+- /* read chip id */
+- ret = ksz_read32(dev, REG_CHIP_ID0__1, &id32);
+- if (ret)
+- return ret;
+ ret = ksz_read8(dev, REG_GLOBAL_OPTIONS, &data8);
+ if (ret)
+ return ret;
+@@ -1395,12 +1372,7 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
+ /* Default capability is gigabit capable. */
+ dev->features = GBIT_SUPPORT;
+
+- dev_dbg(dev->dev, "Switch detect: ID=%08x%02x\n", id32, data8);
+- id_hi = (u8)(id32 >> 16);
+- id_lo = (u8)(id32 >> 8);
+- if ((id_lo & 0xf) == 3) {
+- /* Chip is from KSZ9893 design. */
+- dev_info(dev->dev, "Found KSZ9893\n");
++ if (dev->chip_id == KSZ9893_CHIP_ID) {
+ dev->features |= IS_9893;
+
+ /* Chip does not support gigabit. */
+@@ -1408,7 +1380,6 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
+ dev->features &= ~GBIT_SUPPORT;
+ dev->phy_port_cnt = 2;
+ } else {
+- dev_info(dev->dev, "Found KSZ9477 or compatible\n");
+ /* Chip uses new XMII register definitions. */
+ dev->features |= NEW_XMII;
+
+@@ -1416,21 +1387,6 @@ static int ksz9477_switch_detect(struct ksz_device *dev)
+ if (!(data8 & SW_GIGABIT_ABLE))
+ dev->features &= ~GBIT_SUPPORT;
+ }
+-
+- /* Change chip id to known ones so it can be matched against them. */
+- id32 = (id_hi << 16) | (id_lo << 8);
+-
+- dev->chip_id = id32;
+-
+- return 0;
+-}
+-
+-static int ksz9477_switch_init(struct ksz_device *dev)
+-{
+- dev->ds->ops = &ksz9477_switch_ops;
+-
+- dev->port_mask = (1 << dev->info->port_cnt) - 1;
+-
+ return 0;
+ }
+
+@@ -1449,8 +1405,13 @@ static const struct ksz_dev_ops ksz9477_dev_ops = {
+ .r_mib_stat64 = ksz_r_mib_stats64,
+ .freeze_mib = ksz9477_freeze_mib,
+ .port_init_cnt = ksz9477_port_init_cnt,
++ .vlan_filtering = ksz9477_port_vlan_filtering,
++ .vlan_add = ksz9477_port_vlan_add,
++ .vlan_del = ksz9477_port_vlan_del,
++ .mirror_add = ksz9477_port_mirror_add,
++ .mirror_del = ksz9477_port_mirror_del,
++ .get_caps = ksz9477_get_caps,
+ .shutdown = ksz9477_reset_switch,
+- .detect = ksz9477_switch_detect,
+ .init = ksz9477_switch_init,
+ .exit = ksz9477_switch_exit,
+ };
+diff --git a/drivers/net/dsa/microchip/ksz9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h
+index 7a2c8d4767aff..077e35ab11b54 100644
+--- a/drivers/net/dsa/microchip/ksz9477_reg.h
++++ b/drivers/net/dsa/microchip/ksz9477_reg.h
+@@ -25,7 +25,6 @@
+
+ #define REG_CHIP_ID2__1 0x0002
+
+-#define CHIP_ID_63 0x63
+ #define CHIP_ID_66 0x66
+ #define CHIP_ID_67 0x67
+ #define CHIP_ID_77 0x77
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 92a500e1ccd21..c9389880ad1fa 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -453,9 +453,18 @@ void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
+ if (dev->info->supports_rgmii[port])
+ phy_interface_set_rgmii(config->supported_interfaces);
+
+- if (dev->info->internal_phy[port])
++ if (dev->info->internal_phy[port]) {
+ __set_bit(PHY_INTERFACE_MODE_INTERNAL,
+ config->supported_interfaces);
++ /* Compatibility for phylib's default interface type when the
++ * phy-mode property is absent
++ */
++ __set_bit(PHY_INTERFACE_MODE_GMII,
++ config->supported_interfaces);
++ }
++
++ if (dev->dev_ops->get_caps)
++ dev->dev_ops->get_caps(dev, port, config);
+ }
+ EXPORT_SYMBOL_GPL(ksz_phylink_get_caps);
+
+@@ -930,6 +939,156 @@ void ksz_port_stp_state_set(struct dsa_switch *ds, int port,
+ }
+ EXPORT_SYMBOL_GPL(ksz_port_stp_state_set);
+
++enum dsa_tag_protocol ksz_get_tag_protocol(struct dsa_switch *ds,
++ int port, enum dsa_tag_protocol mp)
++{
++ struct ksz_device *dev = ds->priv;
++ enum dsa_tag_protocol proto = DSA_TAG_PROTO_NONE;
++
++ if (dev->chip_id == KSZ8795_CHIP_ID ||
++ dev->chip_id == KSZ8794_CHIP_ID ||
++ dev->chip_id == KSZ8765_CHIP_ID)
++ proto = DSA_TAG_PROTO_KSZ8795;
++
++ if (dev->chip_id == KSZ8830_CHIP_ID ||
++ dev->chip_id == KSZ9893_CHIP_ID)
++ proto = DSA_TAG_PROTO_KSZ9893;
++
++ if (dev->chip_id == KSZ9477_CHIP_ID ||
++ dev->chip_id == KSZ9897_CHIP_ID ||
++ dev->chip_id == KSZ9567_CHIP_ID)
++ proto = DSA_TAG_PROTO_KSZ9477;
++
++ return proto;
++}
++EXPORT_SYMBOL_GPL(ksz_get_tag_protocol);
++
++int ksz_port_vlan_filtering(struct dsa_switch *ds, int port,
++ bool flag, struct netlink_ext_ack *extack)
++{
++ struct ksz_device *dev = ds->priv;
++
++ if (!dev->dev_ops->vlan_filtering)
++ return -EOPNOTSUPP;
++
++ return dev->dev_ops->vlan_filtering(dev, port, flag, extack);
++}
++EXPORT_SYMBOL_GPL(ksz_port_vlan_filtering);
++
++int ksz_port_vlan_add(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan,
++ struct netlink_ext_ack *extack)
++{
++ struct ksz_device *dev = ds->priv;
++
++ if (!dev->dev_ops->vlan_add)
++ return -EOPNOTSUPP;
++
++ return dev->dev_ops->vlan_add(dev, port, vlan, extack);
++}
++EXPORT_SYMBOL_GPL(ksz_port_vlan_add);
++
++int ksz_port_vlan_del(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan)
++{
++ struct ksz_device *dev = ds->priv;
++
++ if (!dev->dev_ops->vlan_del)
++ return -EOPNOTSUPP;
++
++ return dev->dev_ops->vlan_del(dev, port, vlan);
++}
++EXPORT_SYMBOL_GPL(ksz_port_vlan_del);
++
++int ksz_port_mirror_add(struct dsa_switch *ds, int port,
++ struct dsa_mall_mirror_tc_entry *mirror,
++ bool ingress, struct netlink_ext_ack *extack)
++{
++ struct ksz_device *dev = ds->priv;
++
++ if (!dev->dev_ops->mirror_add)
++ return -EOPNOTSUPP;
++
++ return dev->dev_ops->mirror_add(dev, port, mirror, ingress, extack);
++}
++EXPORT_SYMBOL_GPL(ksz_port_mirror_add);
++
++void ksz_port_mirror_del(struct dsa_switch *ds, int port,
++ struct dsa_mall_mirror_tc_entry *mirror)
++{
++ struct ksz_device *dev = ds->priv;
++
++ if (dev->dev_ops->mirror_del)
++ dev->dev_ops->mirror_del(dev, port, mirror);
++}
++EXPORT_SYMBOL_GPL(ksz_port_mirror_del);
++
++static int ksz_switch_detect(struct ksz_device *dev)
++{
++ u8 id1, id2;
++ u16 id16;
++ u32 id32;
++ int ret;
++
++ /* read chip id */
++ ret = ksz_read16(dev, REG_CHIP_ID0, &id16);
++ if (ret)
++ return ret;
++
++ id1 = FIELD_GET(SW_FAMILY_ID_M, id16);
++ id2 = FIELD_GET(SW_CHIP_ID_M, id16);
++
++ switch (id1) {
++ case KSZ87_FAMILY_ID:
++ if (id2 == KSZ87_CHIP_ID_95) {
++ u8 val;
++
++ dev->chip_id = KSZ8795_CHIP_ID;
++
++ ksz_read8(dev, KSZ8_PORT_STATUS_0, &val);
++ if (val & KSZ8_PORT_FIBER_MODE)
++ dev->chip_id = KSZ8765_CHIP_ID;
++ } else if (id2 == KSZ87_CHIP_ID_94) {
++ dev->chip_id = KSZ8794_CHIP_ID;
++ } else {
++ return -ENODEV;
++ }
++ break;
++ case KSZ88_FAMILY_ID:
++ if (id2 == KSZ88_CHIP_ID_63)
++ dev->chip_id = KSZ8830_CHIP_ID;
++ else
++ return -ENODEV;
++ break;
++ default:
++ ret = ksz_read32(dev, REG_CHIP_ID0, &id32);
++ if (ret)
++ return ret;
++
++ dev->chip_rev = FIELD_GET(SW_REV_ID_M, id32);
++ id32 &= ~0xFF;
++
++ switch (id32) {
++ case KSZ9477_CHIP_ID:
++ case KSZ9897_CHIP_ID:
++ case KSZ9893_CHIP_ID:
++ case KSZ9567_CHIP_ID:
++ case LAN9370_CHIP_ID:
++ case LAN9371_CHIP_ID:
++ case LAN9372_CHIP_ID:
++ case LAN9373_CHIP_ID:
++ case LAN9374_CHIP_ID:
++ dev->chip_id = id32;
++ break;
++ default:
++ dev_err(dev->dev,
++				"unsupported switch detected %x\n", id32);
++ return -ENODEV;
++ }
++ }
++ return 0;
++}
++
+ struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
+ {
+ struct dsa_switch *ds;
+@@ -986,10 +1145,9 @@ int ksz_switch_register(struct ksz_device *dev,
+ mutex_init(&dev->alu_mutex);
+ mutex_init(&dev->vlan_mutex);
+
+- dev->dev_ops = ops;
+-
+- if (dev->dev_ops->detect(dev))
+- return -EINVAL;
++ ret = ksz_switch_detect(dev);
++ if (ret)
++ return ret;
+
+ info = ksz_lookup_info(dev->chip_id);
+ if (!info)
+@@ -998,10 +1156,15 @@ int ksz_switch_register(struct ksz_device *dev,
+ /* Update the compatible info with the probed one */
+ dev->info = info;
+
++ dev_info(dev->dev, "found switch: %s, rev %i\n",
++ dev->info->dev_name, dev->chip_rev);
++
+ ret = ksz_check_device_id(dev);
+ if (ret)
+ return ret;
+
++ dev->dev_ops = ops;
++
+ ret = dev->dev_ops->init(dev);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index 8500eaedad67a..10f9ef2dbf1ca 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -90,6 +90,7 @@ struct ksz_device {
+
+ /* chip specific data */
+ u32 chip_id;
++ u8 chip_rev;
+ int cpu_port; /* port connected to CPU */
+ int phy_port_cnt;
+ phy_interface_t compat_interface;
+@@ -179,10 +180,23 @@ struct ksz_dev_ops {
+ void (*r_mib_pkt)(struct ksz_device *dev, int port, u16 addr,
+ u64 *dropped, u64 *cnt);
+ void (*r_mib_stat64)(struct ksz_device *dev, int port);
++ int (*vlan_filtering)(struct ksz_device *dev, int port,
++ bool flag, struct netlink_ext_ack *extack);
++ int (*vlan_add)(struct ksz_device *dev, int port,
++ const struct switchdev_obj_port_vlan *vlan,
++ struct netlink_ext_ack *extack);
++ int (*vlan_del)(struct ksz_device *dev, int port,
++ const struct switchdev_obj_port_vlan *vlan);
++ int (*mirror_add)(struct ksz_device *dev, int port,
++ struct dsa_mall_mirror_tc_entry *mirror,
++ bool ingress, struct netlink_ext_ack *extack);
++ void (*mirror_del)(struct ksz_device *dev, int port,
++ struct dsa_mall_mirror_tc_entry *mirror);
++ void (*get_caps)(struct ksz_device *dev, int port,
++ struct phylink_config *config);
+ void (*freeze_mib)(struct ksz_device *dev, int port, bool freeze);
+ void (*port_init_cnt)(struct ksz_device *dev, int port);
+ int (*shutdown)(struct ksz_device *dev);
+- int (*detect)(struct ksz_device *dev);
+ int (*init)(struct ksz_device *dev);
+ void (*exit)(struct ksz_device *dev);
+ };
+@@ -231,6 +245,20 @@ int ksz_port_mdb_del(struct dsa_switch *ds, int port,
+ int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy);
+ void ksz_get_strings(struct dsa_switch *ds, int port,
+ u32 stringset, uint8_t *buf);
++enum dsa_tag_protocol ksz_get_tag_protocol(struct dsa_switch *ds,
++ int port, enum dsa_tag_protocol mp);
++int ksz_port_vlan_filtering(struct dsa_switch *ds, int port,
++ bool flag, struct netlink_ext_ack *extack);
++int ksz_port_vlan_add(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan,
++ struct netlink_ext_ack *extack);
++int ksz_port_vlan_del(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan);
++int ksz_port_mirror_add(struct dsa_switch *ds, int port,
++ struct dsa_mall_mirror_tc_entry *mirror,
++ bool ingress, struct netlink_ext_ack *extack);
++void ksz_port_mirror_del(struct dsa_switch *ds, int port,
++ struct dsa_mall_mirror_tc_entry *mirror);
+
+ /* Common register access functions */
+
+@@ -353,6 +381,23 @@ static inline void ksz_regmap_unlock(void *__mtx)
+ #define PORT_RX_ENABLE BIT(1)
+ #define PORT_LEARN_DISABLE BIT(0)
+
++/* Switch ID Defines */
++#define REG_CHIP_ID0 0x00
++
++#define SW_FAMILY_ID_M GENMASK(15, 8)
++#define KSZ87_FAMILY_ID 0x87
++#define KSZ88_FAMILY_ID 0x88
++
++#define KSZ8_PORT_STATUS_0 0x08
++#define KSZ8_PORT_FIBER_MODE BIT(7)
++
++#define SW_CHIP_ID_M GENMASK(7, 4)
++#define KSZ87_CHIP_ID_94 0x6
++#define KSZ87_CHIP_ID_95 0x9
++#define KSZ88_CHIP_ID_63 0x3
++
++#define SW_REV_ID_M GENMASK(7, 4)
++
+ /* Regmap tables generation */
+ #define KSZ_SPI_OP_RD 3
+ #define KSZ_SPI_OP_WR 2
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index cf9b00576ed36..964354536f9ce 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -11183,10 +11183,7 @@ static netdev_features_t bnxt_fix_features(struct net_device *dev,
+ if ((features & NETIF_F_NTUPLE) && !bnxt_rfs_capable(bp))
+ features &= ~NETIF_F_NTUPLE;
+
+- if (bp->flags & BNXT_FLAG_NO_AGG_RINGS)
+- features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
+-
+- if (!(bp->flags & BNXT_FLAG_TPA))
++ if ((bp->flags & BNXT_FLAG_NO_AGG_RINGS) || bp->xdp_prog)
+ features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
+
+ if (!(features & NETIF_F_GRO))
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 075c6206325ce..b1b17f9113006 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -2130,6 +2130,7 @@ struct bnxt {
+ #define BNXT_DUMP_CRASH 1
+
+ struct bpf_prog *xdp_prog;
++ u8 xdp_has_frags;
+
+ struct bnxt_ptp_cfg *ptp_cfg;
+ u8 ptp_all_rx_tstamp;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index 6b3d4f4c2a75f..d83be40785b89 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -1246,6 +1246,7 @@ int bnxt_dl_register(struct bnxt *bp)
+ if (rc)
+ goto err_dl_port_unreg;
+
++ devlink_set_features(dl, DEVLINK_F_RELOAD);
+ out:
+ devlink_register(dl);
+ return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index a1a2c7a64fd58..c9cf0569451a2 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -623,7 +623,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset)
+ hw_resc->max_stat_ctxs -= le16_to_cpu(req->min_stat_ctx) * n;
+ hw_resc->max_vnics -= le16_to_cpu(req->min_vnics) * n;
+ if (bp->flags & BNXT_FLAG_CHIP_P5)
+- hw_resc->max_irqs -= vf_msix * n;
++ hw_resc->max_nqs -= vf_msix;
+
+ rc = pf->active_vfs;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index f53387ed0167b..c3065ec0a4798 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -181,6 +181,7 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ struct xdp_buff *xdp)
+ {
+ struct bnxt_sw_rx_bd *rx_buf;
++ u32 buflen = PAGE_SIZE;
+ struct pci_dev *pdev;
+ dma_addr_t mapping;
+ u32 offset;
+@@ -192,7 +193,10 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ mapping = rx_buf->mapping - bp->rx_dma_offset;
+ dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
+
+- xdp_init_buff(xdp, BNXT_PAGE_MODE_BUF_SIZE + offset, &rxr->xdp_rxq);
++ if (bp->xdp_has_frags)
++ buflen = BNXT_PAGE_MODE_BUF_SIZE + offset;
++
++ xdp_init_buff(xdp, buflen, &rxr->xdp_rxq);
+ xdp_prepare_buff(xdp, *data_ptr - offset, offset, *len, false);
+ }
+
+@@ -397,8 +401,10 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
+ netdev_warn(dev, "ethtool rx/tx channels must be combined to support XDP.\n");
+ return -EOPNOTSUPP;
+ }
+- if (prog)
++ if (prog) {
+ tx_xdp = bp->rx_nr_rings;
++ bp->xdp_has_frags = prog->aux->xdp_has_frags;
++ }
+
+ tc = netdev_get_num_tc(dev);
+ if (!tc)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 19704f5c8291c..22a61802a4027 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -4395,7 +4395,7 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi,
+ (struct in6_addr *)&ipv6_full_mask))
+ new_mask |= I40E_L3_V6_DST_MASK;
+ else if (ipv6_addr_any((struct in6_addr *)
+- &usr_ip6_spec->ip6src))
++ &usr_ip6_spec->ip6dst))
+ new_mask &= ~I40E_L3_V6_DST_MASK;
+ else
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 60453b3b8d233..6911cbb7afa50 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -684,8 +684,8 @@ static inline void ice_set_ring_xdp(struct ice_tx_ring *ring)
+ * ice_xsk_pool - get XSK buffer pool bound to a ring
+ * @ring: Rx ring to use
+ *
+- * Returns a pointer to xdp_umem structure if there is a buffer pool present,
+- * NULL otherwise.
++ * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
++ * present, NULL otherwise.
+ */
+ static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
+ {
+@@ -699,23 +699,33 @@ static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
+ }
+
+ /**
+- * ice_tx_xsk_pool - get XSK buffer pool bound to a ring
+- * @ring: Tx ring to use
++ * ice_tx_xsk_pool - assign XSK buff pool to XDP ring
++ * @vsi: pointer to VSI
++ * @qid: index of a queue to look at XSK buff pool presence
+ *
+- * Returns a pointer to xdp_umem structure if there is a buffer pool present,
+- * NULL otherwise. Tx equivalent of ice_xsk_pool.
++ * Sets XSK buff pool pointer on XDP ring.
++ *
++ * XDP ring is picked from Rx ring, whereas Rx ring is picked based on provided
++ * queue id. Reason for doing so is that queue vectors might have assigned more
++ * than one XDP ring, e.g. when user reduced the queue count on netdev; Rx ring
++ * carries a pointer to one of these XDP rings for its own purposes, such as
++ * handling XDP_TX action, therefore we can piggyback here on the
++ * rx_ring->xdp_ring assignment that was done during XDP rings initialization.
+ */
+-static inline struct xsk_buff_pool *ice_tx_xsk_pool(struct ice_tx_ring *ring)
++static inline void ice_tx_xsk_pool(struct ice_vsi *vsi, u16 qid)
+ {
+- struct ice_vsi *vsi = ring->vsi;
+- u16 qid;
++ struct ice_tx_ring *ring;
+
+- qid = ring->q_index - vsi->alloc_txq;
++ ring = vsi->rx_rings[qid]->xdp_ring;
++ if (!ring)
++ return;
+
+- if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps))
+- return NULL;
++ if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps)) {
++ ring->xsk_pool = NULL;
++ return;
++ }
+
+- return xsk_get_pool_from_qid(vsi->netdev, qid);
++ ring->xsk_pool = xsk_get_pool_from_qid(vsi->netdev, qid);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index d6aafa272fb0b..6c4e1d45235ef 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1983,8 +1983,8 @@ int ice_vsi_cfg_xdp_txqs(struct ice_vsi *vsi)
+ if (ret)
+ return ret;
+
+- ice_for_each_xdp_txq(vsi, i)
+- vsi->xdp_rings[i]->xsk_pool = ice_tx_xsk_pool(vsi->xdp_rings[i]);
++ ice_for_each_rxq(vsi, i)
++ ice_tx_xsk_pool(vsi, i);
+
+ return ret;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index bfd97a9a8f2e0..3d45e075204e3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2581,7 +2581,6 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
+ if (ice_setup_tx_ring(xdp_ring))
+ goto free_xdp_rings;
+ ice_set_ring_xdp(xdp_ring);
+- xdp_ring->xsk_pool = ice_tx_xsk_pool(xdp_ring);
+ spin_lock_init(&xdp_ring->tx_lock);
+ for (j = 0; j < xdp_ring->count; j++) {
+ tx_desc = ICE_TX_DESC(xdp_ring, j);
+@@ -2589,13 +2588,6 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
+ }
+ }
+
+- ice_for_each_rxq(vsi, i) {
+- if (static_key_enabled(&ice_xdp_locking_key))
+- vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i % vsi->num_xdp_txq];
+- else
+- vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i];
+- }
+-
+ return 0;
+
+ free_xdp_rings:
+@@ -2685,6 +2677,23 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog)
+ xdp_rings_rem -= xdp_rings_per_v;
+ }
+
++ ice_for_each_rxq(vsi, i) {
++ if (static_key_enabled(&ice_xdp_locking_key)) {
++ vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i % vsi->num_xdp_txq];
++ } else {
++ struct ice_q_vector *q_vector = vsi->rx_rings[i]->q_vector;
++ struct ice_tx_ring *ring;
++
++ ice_for_each_tx_ring(ring, q_vector->tx) {
++ if (ice_ring_is_xdp(ring)) {
++ vsi->rx_rings[i]->xdp_ring = ring;
++ break;
++ }
++ }
++ }
++ ice_tx_xsk_pool(vsi, i);
++ }
++
+ /* omit the scheduler update if in reset path; XDP queues will be
+ * taken into account at the end of ice_vsi_rebuild, where
+ * ice_cfg_vsi_lan is being called
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 49ba8bfdbf047..e48e29258450f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -243,7 +243,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+ if (err)
+ goto free_buf;
+ ice_set_ring_xdp(xdp_ring);
+- xdp_ring->xsk_pool = ice_tx_xsk_pool(xdp_ring);
++ ice_tx_xsk_pool(vsi, q_idx);
+ }
+
+ err = ice_vsi_cfg_rxq(rx_ring);
+@@ -329,6 +329,12 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ bool if_running, pool_present = !!pool;
+ int ret = 0, pool_failure = 0;
+
++ if (qid >= vsi->num_rxq || qid >= vsi->num_txq) {
++		netdev_err(vsi->netdev, "Please use a queue id within the combined queues count\n");
++ pool_failure = -EINVAL;
++ goto failure;
++ }
++
+ if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
+ !is_power_of_2(vsi->tx_rings[qid]->count)) {
+ netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n");
+@@ -353,7 +359,7 @@ xsk_pool_if_up:
+ if (if_running) {
+ ret = ice_qp_ena(vsi, qid);
+ if (!ret && pool_present)
+- napi_schedule(&vsi->xdp_rings[qid]->q_vector->napi);
++ napi_schedule(&vsi->rx_rings[qid]->xdp_ring->q_vector->napi);
+ else if (ret)
+ netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
+ }
+@@ -944,13 +950,13 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
+ if (!ice_is_xdp_ena_vsi(vsi))
+ return -EINVAL;
+
+- if (queue_id >= vsi->num_txq)
++ if (queue_id >= vsi->num_txq || queue_id >= vsi->num_rxq)
+ return -EINVAL;
+
+- if (!vsi->xdp_rings[queue_id]->xsk_pool)
+- return -EINVAL;
++ ring = vsi->rx_rings[queue_id]->xdp_ring;
+
+- ring = vsi->xdp_rings[queue_id];
++ if (!ring->xsk_pool)
++ return -EINVAL;
+
+ /* The idea here is that if NAPI is running, mark a miss, so
+ * it will run again. If not, trigger an interrupt and
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+index 336426a67ac1b..38cda659f65f4 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+@@ -1208,7 +1208,6 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ struct cyclecounter cc;
+ unsigned long flags;
+ u32 incval = 0;
+- u32 tsauxc = 0;
+ u32 fuse0 = 0;
+
+ /* For some of the boards below this mask is technically incorrect.
+@@ -1243,18 +1242,6 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ case ixgbe_mac_x550em_a:
+ case ixgbe_mac_X550:
+ cc.read = ixgbe_ptp_read_X550;
+-
+- /* enable SYSTIME counter */
+- IXGBE_WRITE_REG(hw, IXGBE_SYSTIMR, 0);
+- IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
+- IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
+- tsauxc = IXGBE_READ_REG(hw, IXGBE_TSAUXC);
+- IXGBE_WRITE_REG(hw, IXGBE_TSAUXC,
+- tsauxc & ~IXGBE_TSAUXC_DISABLE_SYSTIME);
+- IXGBE_WRITE_REG(hw, IXGBE_TSIM, IXGBE_TSIM_TXTS);
+- IXGBE_WRITE_REG(hw, IXGBE_EIMS, IXGBE_EIMS_TIMESYNC);
+-
+- IXGBE_WRITE_FLUSH(hw);
+ break;
+ case ixgbe_mac_X540:
+ cc.read = ixgbe_ptp_read_82599;
+@@ -1286,6 +1273,50 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+ }
+
++/**
++ * ixgbe_ptp_init_systime - Initialize SYSTIME registers
++ * @adapter: the ixgbe private board structure
++ *
++ * Initialize and start the SYSTIME registers.
++ */
++static void ixgbe_ptp_init_systime(struct ixgbe_adapter *adapter)
++{
++ struct ixgbe_hw *hw = &adapter->hw;
++ u32 tsauxc;
++
++ switch (hw->mac.type) {
++ case ixgbe_mac_X550EM_x:
++ case ixgbe_mac_x550em_a:
++ case ixgbe_mac_X550:
++ tsauxc = IXGBE_READ_REG(hw, IXGBE_TSAUXC);
++
++ /* Reset SYSTIME registers to 0 */
++ IXGBE_WRITE_REG(hw, IXGBE_SYSTIMR, 0);
++ IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
++ IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
++
++ /* Reset interrupt settings */
++ IXGBE_WRITE_REG(hw, IXGBE_TSIM, IXGBE_TSIM_TXTS);
++ IXGBE_WRITE_REG(hw, IXGBE_EIMS, IXGBE_EIMS_TIMESYNC);
++
++ /* Activate the SYSTIME counter */
++ IXGBE_WRITE_REG(hw, IXGBE_TSAUXC,
++ tsauxc & ~IXGBE_TSAUXC_DISABLE_SYSTIME);
++ break;
++ case ixgbe_mac_X540:
++ case ixgbe_mac_82599EB:
++ /* Reset SYSTIME registers to 0 */
++ IXGBE_WRITE_REG(hw, IXGBE_SYSTIML, 0);
++ IXGBE_WRITE_REG(hw, IXGBE_SYSTIMH, 0);
++ break;
++ default:
++ /* Other devices aren't supported */
++ return;
++ };
++
++ IXGBE_WRITE_FLUSH(hw);
++}
++
+ /**
+ * ixgbe_ptp_reset
+ * @adapter: the ixgbe private board structure
+@@ -1312,6 +1343,8 @@ void ixgbe_ptp_reset(struct ixgbe_adapter *adapter)
+
+ ixgbe_ptp_start_cyclecounter(adapter);
+
++ ixgbe_ptp_init_systime(adapter);
++
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ timecounter_init(&adapter->hw_tc, &adapter->hw_cc,
+ ktime_to_ns(ktime_get_real()));
+diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
+index 5edb68a8aab1e..57f27cc7724e7 100644
+--- a/drivers/net/ethernet/lantiq_xrx200.c
++++ b/drivers/net/ethernet/lantiq_xrx200.c
+@@ -193,6 +193,7 @@ static int xrx200_alloc_buf(struct xrx200_chan *ch, void *(*alloc)(unsigned int
+
+ ch->rx_buff[ch->dma.desc] = alloc(priv->rx_skb_size);
+ if (!ch->rx_buff[ch->dma.desc]) {
++ ch->rx_buff[ch->dma.desc] = buf;
+ ret = -ENOMEM;
+ goto skip;
+ }
+@@ -239,6 +240,12 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
+ }
+
+ skb = build_skb(buf, priv->rx_skb_size);
++ if (!skb) {
++ skb_free_frag(buf);
++ net_dev->stats.rx_dropped++;
++ return -ENOMEM;
++ }
++
+ skb_reserve(skb, NET_SKB_PAD);
+ skb_put(skb, len);
+
+@@ -288,7 +295,7 @@ static int xrx200_poll_rx(struct napi_struct *napi, int budget)
+ if (ret == XRX200_DMA_PACKET_IN_PROGRESS)
+ continue;
+ if (ret != XRX200_DMA_PACKET_COMPLETE)
+- return ret;
++ break;
+ rx++;
+ } else {
+ break;
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 59c9a10f83ba5..dcf0aac0aa65d 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1444,8 +1444,8 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ int done = 0, bytes = 0;
+
+ while (done < budget) {
++ unsigned int pktlen, *rxdcsum;
+ struct net_device *netdev;
+- unsigned int pktlen;
+ dma_addr_t dma_addr;
+ u32 hash, reason;
+ int mac = 0;
+@@ -1512,23 +1512,31 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ pktlen = RX_DMA_GET_PLEN0(trxd.rxd2);
+ skb->dev = netdev;
+ skb_put(skb, pktlen);
+- if (trxd.rxd4 & eth->soc->txrx.rx_dma_l4_valid)
++
++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
++ hash = trxd.rxd5 & MTK_RXD5_FOE_ENTRY;
++ if (hash != MTK_RXD5_FOE_ENTRY)
++ skb_set_hash(skb, jhash_1word(hash, 0),
++ PKT_HASH_TYPE_L4);
++ rxdcsum = &trxd.rxd3;
++ } else {
++ hash = trxd.rxd4 & MTK_RXD4_FOE_ENTRY;
++ if (hash != MTK_RXD4_FOE_ENTRY)
++ skb_set_hash(skb, jhash_1word(hash, 0),
++ PKT_HASH_TYPE_L4);
++ rxdcsum = &trxd.rxd4;
++ }
++
++ if (*rxdcsum & eth->soc->txrx.rx_dma_l4_valid)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb_checksum_none_assert(skb);
+ skb->protocol = eth_type_trans(skb, netdev);
+ bytes += pktlen;
+
+- hash = trxd.rxd4 & MTK_RXD4_FOE_ENTRY;
+- if (hash != MTK_RXD4_FOE_ENTRY) {
+- hash = jhash_1word(hash, 0);
+- skb_set_hash(skb, hash, PKT_HASH_TYPE_L4);
+- }
+-
+ reason = FIELD_GET(MTK_RXD4_PPE_CPU_REASON, trxd.rxd4);
+ if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
+- mtk_ppe_check_skb(eth->ppe, skb,
+- trxd.rxd4 & MTK_RXD4_FOE_ENTRY);
++ mtk_ppe_check_skb(eth->ppe, skb, hash);
+
+ if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+@@ -3761,6 +3769,7 @@ static const struct mtk_soc_data mt7986_data = {
+ .txd_size = sizeof(struct mtk_tx_dma_v2),
+ .rxd_size = sizeof(struct mtk_rx_dma_v2),
+ .rx_irq_done_mask = MTK_RX_DONE_INT_V2,
++ .rx_dma_l4_valid = RX_DMA_L4_VALID_V2,
+ .dma_max_len = MTK_TX_DMA_BUF_LEN_V2,
+ .dma_len_offset = 8,
+ },
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 0a632896451a4..98d6a6d047e32 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -307,6 +307,11 @@
+ #define RX_DMA_L4_VALID_PDMA BIT(30) /* when PDMA is used */
+ #define RX_DMA_SPECIAL_TAG BIT(22)
+
++/* PDMA descriptor rxd5 */
++#define MTK_RXD5_FOE_ENTRY GENMASK(14, 0)
++#define MTK_RXD5_PPE_CPU_REASON GENMASK(22, 18)
++#define MTK_RXD5_SRC_PORT GENMASK(29, 26)
++
+ #define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0xf)
+ #define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0x7)
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 087952b84ccb0..9e6db779b6efa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3678,7 +3678,9 @@ static int set_feature_hw_tc(struct net_device *netdev, bool enable)
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+
+ #if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
+- if (!enable && mlx5e_tc_num_filters(priv, MLX5_TC_FLAG(NIC_OFFLOAD))) {
++ int tc_flag = mlx5e_is_uplink_rep(priv) ? MLX5_TC_FLAG(ESW_OFFLOAD) :
++ MLX5_TC_FLAG(NIC_OFFLOAD);
++ if (!enable && mlx5e_tc_num_filters(priv, tc_flag)) {
+ netdev_err(netdev,
+ "Active offloaded tc filters, can't turn hw_tc_offload off\n");
+ return -EINVAL;
+@@ -4733,14 +4735,6 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
+ /* RQ */
+ mlx5e_build_rq_params(mdev, params);
+
+- /* HW LRO */
+- if (MLX5_CAP_ETH(mdev, lro_cap) &&
+- params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
+- /* No XSK params: checking the availability of striding RQ in general. */
+- if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL))
+- params->packet_merge.type = slow_pci_heuristic(mdev) ?
+- MLX5E_PACKET_MERGE_NONE : MLX5E_PACKET_MERGE_LRO;
+- }
+ params->packet_merge.timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);
+
+ /* CQ moderation params */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index f797fd97d305b..7da3dc6261929 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -662,6 +662,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
+
+ params->mqprio.num_tc = 1;
+ params->tunneled_offload_en = false;
++ if (rep->vport != MLX5_VPORT_UPLINK)
++ params->vlan_strip_disable = true;
+
+ mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index eb79810199d3e..d04739cb793e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -427,7 +427,8 @@ esw_setup_vport_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *f
+ dest[dest_idx].vport.vhca_id =
+ MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id);
+ dest[dest_idx].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
+- if (mlx5_lag_mpesw_is_activated(esw->dev))
++ if (dest[dest_idx].vport.num == MLX5_VPORT_UPLINK &&
++ mlx5_lag_mpesw_is_activated(esw->dev))
+ dest[dest_idx].type = MLX5_FLOW_DESTINATION_TYPE_UPLINK;
+ }
+ if (esw_attr->dests[attr_idx].flags & MLX5_ESW_DEST_ENCAP) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index 5d41e19378e09..d98acd68af2ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -1067,30 +1067,32 @@ static void mlx5_ldev_add_netdev(struct mlx5_lag *ldev,
+ struct net_device *netdev)
+ {
+ unsigned int fn = mlx5_get_dev_index(dev);
++ unsigned long flags;
+
+ if (fn >= ldev->ports)
+ return;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev->pf[fn].netdev = netdev;
+ ldev->tracker.netdev_state[fn].link_up = 0;
+ ldev->tracker.netdev_state[fn].tx_enabled = 0;
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+ }
+
+ static void mlx5_ldev_remove_netdev(struct mlx5_lag *ldev,
+ struct net_device *netdev)
+ {
++ unsigned long flags;
+ int i;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ for (i = 0; i < ldev->ports; i++) {
+ if (ldev->pf[i].netdev == netdev) {
+ ldev->pf[i].netdev = NULL;
+ break;
+ }
+ }
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+ }
+
+ static void mlx5_ldev_add_mdev(struct mlx5_lag *ldev,
+@@ -1234,7 +1236,7 @@ void mlx5_lag_add_netdev(struct mlx5_core_dev *dev,
+ mlx5_ldev_add_netdev(ldev, dev, netdev);
+
+ for (i = 0; i < ldev->ports; i++)
+- if (!ldev->pf[i].dev)
++ if (!ldev->pf[i].netdev)
+ break;
+
+ if (i >= ldev->ports)
+@@ -1246,12 +1248,13 @@ void mlx5_lag_add_netdev(struct mlx5_core_dev *dev,
+ bool mlx5_lag_is_roce(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ bool res;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ res = ldev && __mlx5_lag_is_roce(ldev);
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return res;
+ }
+@@ -1260,12 +1263,13 @@ EXPORT_SYMBOL(mlx5_lag_is_roce);
+ bool mlx5_lag_is_active(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ bool res;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ res = ldev && __mlx5_lag_is_active(ldev);
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return res;
+ }
+@@ -1274,13 +1278,14 @@ EXPORT_SYMBOL(mlx5_lag_is_active);
+ bool mlx5_lag_is_master(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ bool res;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ res = ldev && __mlx5_lag_is_active(ldev) &&
+ dev == ldev->pf[MLX5_LAG_P1].dev;
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return res;
+ }
+@@ -1289,12 +1294,13 @@ EXPORT_SYMBOL(mlx5_lag_is_master);
+ bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ bool res;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ res = ldev && __mlx5_lag_is_sriov(ldev);
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return res;
+ }
+@@ -1303,13 +1309,14 @@ EXPORT_SYMBOL(mlx5_lag_is_sriov);
+ bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ bool res;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ res = ldev && __mlx5_lag_is_sriov(ldev) &&
+ test_bit(MLX5_LAG_MODE_FLAG_SHARED_FDB, &ldev->mode_flags);
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return res;
+ }
+@@ -1352,9 +1359,10 @@ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev)
+ {
+ struct net_device *ndev = NULL;
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ int i;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+
+ if (!(ldev && __mlx5_lag_is_roce(ldev)))
+@@ -1373,7 +1381,7 @@ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev)
+ dev_hold(ndev);
+
+ unlock:
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ return ndev;
+ }
+@@ -1383,10 +1391,11 @@ u8 mlx5_lag_get_slave_port(struct mlx5_core_dev *dev,
+ struct net_device *slave)
+ {
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ u8 port = 0;
+ int i;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ if (!(ldev && __mlx5_lag_is_roce(ldev)))
+ goto unlock;
+@@ -1401,7 +1410,7 @@ u8 mlx5_lag_get_slave_port(struct mlx5_core_dev *dev,
+ port = ldev->v2p_map[port * ldev->buckets];
+
+ unlock:
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+ return port;
+ }
+ EXPORT_SYMBOL(mlx5_lag_get_slave_port);
+@@ -1422,8 +1431,9 @@ struct mlx5_core_dev *mlx5_lag_get_peer_mdev(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_core_dev *peer_dev = NULL;
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ if (!ldev)
+ goto unlock;
+@@ -1433,7 +1443,7 @@ struct mlx5_core_dev *mlx5_lag_get_peer_mdev(struct mlx5_core_dev *dev)
+ ldev->pf[MLX5_LAG_P1].dev;
+
+ unlock:
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+ return peer_dev;
+ }
+ EXPORT_SYMBOL(mlx5_lag_get_peer_mdev);
+@@ -1446,6 +1456,7 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev,
+ int outlen = MLX5_ST_SZ_BYTES(query_cong_statistics_out);
+ struct mlx5_core_dev **mdev;
+ struct mlx5_lag *ldev;
++ unsigned long flags;
+ int num_ports;
+ int ret, i, j;
+ void *out;
+@@ -1462,7 +1473,7 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev,
+
+ memset(values, 0, sizeof(*values) * num_counters);
+
+- spin_lock(&lag_lock);
++ spin_lock_irqsave(&lag_lock, flags);
+ ldev = mlx5_lag_dev(dev);
+ if (ldev && __mlx5_lag_is_active(ldev)) {
+ num_ports = ldev->ports;
+@@ -1472,7 +1483,7 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev,
+ num_ports = 1;
+ mdev[MLX5_LAG_P1] = dev;
+ }
+- spin_unlock(&lag_lock);
++ spin_unlock_irqrestore(&lag_lock, flags);
+
+ for (i = 0; i < num_ports; ++i) {
+ u32 in[MLX5_ST_SZ_DW(query_cong_statistics_in)] = {};
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index ba2e5232b90be..616207c3b187a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1472,7 +1472,9 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
+ memcpy(&dev->profile, &profile[profile_idx], sizeof(dev->profile));
+ INIT_LIST_HEAD(&priv->ctx_list);
+ spin_lock_init(&priv->ctx_lock);
++ lockdep_register_key(&dev->lock_key);
+ mutex_init(&dev->intf_state_mutex);
++ lockdep_set_class(&dev->intf_state_mutex, &dev->lock_key);
+
+ mutex_init(&priv->bfregs.reg_head.lock);
+ mutex_init(&priv->bfregs.wc_head.lock);
+@@ -1527,6 +1529,7 @@ err_timeout_init:
+ mutex_destroy(&priv->bfregs.wc_head.lock);
+ mutex_destroy(&priv->bfregs.reg_head.lock);
+ mutex_destroy(&dev->intf_state_mutex);
++ lockdep_unregister_key(&dev->lock_key);
+ return err;
+ }
+
+@@ -1545,6 +1548,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev)
+ mutex_destroy(&priv->bfregs.wc_head.lock);
+ mutex_destroy(&priv->bfregs.reg_head.lock);
+ mutex_destroy(&dev->intf_state_mutex);
++ lockdep_unregister_key(&dev->lock_key);
+ }
+
+ static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index ec76a8b1acc1c..60596357bfc7a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -376,8 +376,8 @@ retry:
+ goto out_dropped;
+ }
+ }
++ err = mlx5_cmd_check(dev, err, in, out);
+ if (err) {
+- err = mlx5_cmd_check(dev, err, in, out);
+ mlx5_core_warn(dev, "func_id 0x%x, npages %d, err %d\n",
+ func_id, npages, err);
+ goto out_dropped;
+@@ -524,10 +524,13 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
+ dev->priv.reclaim_pages_discard += npages;
+ }
+ /* if triggered by FW event and failed by FW then ignore */
+- if (event && err == -EREMOTEIO)
++ if (event && err == -EREMOTEIO) {
+ err = 0;
++ goto out_free;
++ }
++
++ err = mlx5_cmd_check(dev, err, in, out);
+ if (err) {
+- err = mlx5_cmd_check(dev, err, in, out);
+ mlx5_core_err(dev, "failed reclaiming pages: err %d\n", err);
+ goto out_free;
+ }
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index f11f1cb92025f..3b6beb96ca856 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -74,11 +74,6 @@ static int moxart_set_mac_address(struct net_device *ndev, void *addr)
+ static void moxart_mac_free_memory(struct net_device *ndev)
+ {
+ struct moxart_mac_priv_t *priv = netdev_priv(ndev);
+- int i;
+-
+- for (i = 0; i < RX_DESC_NUM; i++)
+- dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
+- priv->rx_buf_size, DMA_FROM_DEVICE);
+
+ if (priv->tx_desc_base)
+ dma_free_coherent(&priv->pdev->dev,
+@@ -193,6 +188,7 @@ static int moxart_mac_open(struct net_device *ndev)
+ static int moxart_mac_stop(struct net_device *ndev)
+ {
+ struct moxart_mac_priv_t *priv = netdev_priv(ndev);
++ int i;
+
+ napi_disable(&priv->napi);
+
+@@ -204,6 +200,11 @@ static int moxart_mac_stop(struct net_device *ndev)
+ /* disable all functions */
+ writel(0, priv->base + REG_MAC_CTRL);
+
++ /* unmap areas mapped in moxart_mac_setup_desc_ring() */
++ for (i = 0; i < RX_DESC_NUM; i++)
++ dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
++ priv->rx_buf_size, DMA_FROM_DEVICE);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 1443f788ee37c..0be79c5167813 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1564,8 +1564,67 @@ static int ionic_set_features(struct net_device *netdev,
+ return err;
+ }
+
++static int ionic_set_attr_mac(struct ionic_lif *lif, u8 *mac)
++{
++ struct ionic_admin_ctx ctx = {
++ .work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
++ .cmd.lif_setattr = {
++ .opcode = IONIC_CMD_LIF_SETATTR,
++ .index = cpu_to_le16(lif->index),
++ .attr = IONIC_LIF_ATTR_MAC,
++ },
++ };
++
++ ether_addr_copy(ctx.cmd.lif_setattr.mac, mac);
++ return ionic_adminq_post_wait(lif, &ctx);
++}
++
++static int ionic_get_attr_mac(struct ionic_lif *lif, u8 *mac_addr)
++{
++ struct ionic_admin_ctx ctx = {
++ .work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
++ .cmd.lif_getattr = {
++ .opcode = IONIC_CMD_LIF_GETATTR,
++ .index = cpu_to_le16(lif->index),
++ .attr = IONIC_LIF_ATTR_MAC,
++ },
++ };
++ int err;
++
++ err = ionic_adminq_post_wait(lif, &ctx);
++ if (err)
++ return err;
++
++ ether_addr_copy(mac_addr, ctx.comp.lif_getattr.mac);
++ return 0;
++}
++
++static int ionic_program_mac(struct ionic_lif *lif, u8 *mac)
++{
++ u8 get_mac[ETH_ALEN];
++ int err;
++
++ err = ionic_set_attr_mac(lif, mac);
++ if (err)
++ return err;
++
++ err = ionic_get_attr_mac(lif, get_mac);
++ if (err)
++ return err;
++
++ /* To deal with older firmware that silently ignores the set attr mac:
++ * doesn't actually change the mac and doesn't return an error, so we
++ * do the get attr to verify whether or not the set actually happened
++ */
++ if (!ether_addr_equal(get_mac, mac))
++ return 1;
++
++ return 0;
++}
++
+ static int ionic_set_mac_address(struct net_device *netdev, void *sa)
+ {
++ struct ionic_lif *lif = netdev_priv(netdev);
+ struct sockaddr *addr = sa;
+ u8 *mac;
+ int err;
+@@ -1574,6 +1633,14 @@ static int ionic_set_mac_address(struct net_device *netdev, void *sa)
+ if (ether_addr_equal(netdev->dev_addr, mac))
+ return 0;
+
++ err = ionic_program_mac(lif, mac);
++ if (err < 0)
++ return err;
++
++ if (err > 0)
++ netdev_dbg(netdev, "%s: SET and GET ATTR Mac are not equal-due to old FW running\n",
++ __func__);
++
+ err = eth_prepare_mac_addr_change(netdev, addr);
+ if (err)
+ return err;
+@@ -2963,6 +3030,9 @@ static void ionic_lif_handle_fw_up(struct ionic_lif *lif)
+
+ mutex_lock(&lif->queue_lock);
+
++ if (test_and_clear_bit(IONIC_LIF_F_BROKEN, lif->state))
++ dev_info(ionic->dev, "FW Up: clearing broken state\n");
++
+ err = ionic_qcqs_alloc(lif);
+ if (err)
+ goto err_unlock;
+@@ -3169,6 +3239,7 @@ static int ionic_station_set(struct ionic_lif *lif)
+ .attr = IONIC_LIF_ATTR_MAC,
+ },
+ };
++ u8 mac_address[ETH_ALEN];
+ struct sockaddr addr;
+ int err;
+
+@@ -3177,8 +3248,23 @@ static int ionic_station_set(struct ionic_lif *lif)
+ return err;
+ netdev_dbg(lif->netdev, "found initial MAC addr %pM\n",
+ ctx.comp.lif_getattr.mac);
+- if (is_zero_ether_addr(ctx.comp.lif_getattr.mac))
+- return 0;
++ ether_addr_copy(mac_address, ctx.comp.lif_getattr.mac);
++
++ if (is_zero_ether_addr(mac_address)) {
++ eth_hw_addr_random(netdev);
++ netdev_dbg(netdev, "Random Mac generated: %pM\n", netdev->dev_addr);
++ ether_addr_copy(mac_address, netdev->dev_addr);
++
++ err = ionic_program_mac(lif, mac_address);
++ if (err < 0)
++ return err;
++
++ if (err > 0) {
++ netdev_dbg(netdev, "%s:SET/GET ATTR Mac are not same-due to old FW running\n",
++ __func__);
++ return 0;
++ }
++ }
+
+ if (!is_zero_ether_addr(netdev->dev_addr)) {
+ /* If the netdev mac is non-zero and doesn't match the default
+@@ -3186,12 +3272,11 @@ static int ionic_station_set(struct ionic_lif *lif)
+ * likely here again after a fw-upgrade reset. We need to be
+ * sure the netdev mac is in our filter list.
+ */
+- if (!ether_addr_equal(ctx.comp.lif_getattr.mac,
+- netdev->dev_addr))
++ if (!ether_addr_equal(mac_address, netdev->dev_addr))
+ ionic_lif_addr_add(lif, netdev->dev_addr);
+ } else {
+ /* Update the netdev mac with the device's mac */
+- memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len);
++ ether_addr_copy(addr.sa_data, mac_address);
+ addr.sa_family = AF_INET;
+ err = eth_prepare_mac_addr_change(netdev, &addr);
+ if (err) {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index 4029b4e021f86..56f93b0305519 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -474,8 +474,8 @@ try_again:
+ ionic_opcode_to_str(opcode), opcode,
+ ionic_error_to_str(err), err);
+
+- msleep(1000);
+ iowrite32(0, &idev->dev_cmd_regs->done);
++ msleep(1000);
+ iowrite32(1, &idev->dev_cmd_regs->doorbell);
+ goto try_again;
+ }
+@@ -488,6 +488,8 @@ try_again:
+ return ionic_error_to_errno(err);
+ }
+
++ ionic_dev_cmd_clean(ionic);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+index caa4bfc4c1d62..9b6138b117766 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+@@ -258,14 +258,18 @@ EXPORT_SYMBOL_GPL(stmmac_set_mac_addr);
+ /* Enable disable MAC RX/TX */
+ void stmmac_set_mac(void __iomem *ioaddr, bool enable)
+ {
+- u32 value = readl(ioaddr + MAC_CTRL_REG);
++ u32 old_val, value;
++
++ old_val = readl(ioaddr + MAC_CTRL_REG);
++ value = old_val;
+
+ if (enable)
+ value |= MAC_ENABLE_RX | MAC_ENABLE_TX;
+ else
+ value &= ~(MAC_ENABLE_TX | MAC_ENABLE_RX);
+
+- writel(value, ioaddr + MAC_CTRL_REG);
++ if (value != old_val)
++ writel(value, ioaddr + MAC_CTRL_REG);
+ }
+
+ void stmmac_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c5f33630e7718..78f11dabca056 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -983,10 +983,10 @@ static void stmmac_mac_link_up(struct phylink_config *config,
+ bool tx_pause, bool rx_pause)
+ {
+ struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
+- u32 ctrl;
++ u32 old_ctrl, ctrl;
+
+- ctrl = readl(priv->ioaddr + MAC_CTRL_REG);
+- ctrl &= ~priv->hw->link.speed_mask;
++ old_ctrl = readl(priv->ioaddr + MAC_CTRL_REG);
++ ctrl = old_ctrl & ~priv->hw->link.speed_mask;
+
+ if (interface == PHY_INTERFACE_MODE_USXGMII) {
+ switch (speed) {
+@@ -1061,7 +1061,8 @@ static void stmmac_mac_link_up(struct phylink_config *config,
+ if (tx_pause && rx_pause)
+ stmmac_mac_flow_ctrl(priv, duplex);
+
+- writel(ctrl, priv->ioaddr + MAC_CTRL_REG);
++ if (ctrl != old_ctrl)
++ writel(ctrl, priv->ioaddr + MAC_CTRL_REG);
+
+ stmmac_mac_set(priv, priv->ioaddr, true);
+ if (phy && priv->dma_cap.eee) {
+diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
+index 1e9eae208e44f..53a1dbeaffa6d 100644
+--- a/drivers/net/ipa/ipa_mem.c
++++ b/drivers/net/ipa/ipa_mem.c
+@@ -568,7 +568,7 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
+ }
+
+ /* Align the address down and the size up to a page boundary */
+- addr = qcom_smem_virt_to_phys(virt) & PAGE_MASK;
++ addr = qcom_smem_virt_to_phys(virt);
+ phys = addr & PAGE_MASK;
+ size = PAGE_ALIGN(size + addr - phys);
+ iova = phys; /* We just want a direct mapping */
+diff --git a/drivers/net/ipvlan/ipvtap.c b/drivers/net/ipvlan/ipvtap.c
+index ef02f2cf5ce13..cbabca167a078 100644
+--- a/drivers/net/ipvlan/ipvtap.c
++++ b/drivers/net/ipvlan/ipvtap.c
+@@ -194,7 +194,7 @@ static struct notifier_block ipvtap_notifier_block __read_mostly = {
+ .notifier_call = ipvtap_device_event,
+ };
+
+-static int ipvtap_init(void)
++static int __init ipvtap_init(void)
+ {
+ int err;
+
+@@ -228,7 +228,7 @@ out1:
+ }
+ module_init(ipvtap_init);
+
+-static void ipvtap_exit(void)
++static void __exit ipvtap_exit(void)
+ {
+ rtnl_link_unregister(&ipvtap_link_ops);
+ unregister_netdevice_notifier(&ipvtap_notifier_block);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index f354fad05714a..5b0b23e55fa76 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -449,11 +449,6 @@ static struct macsec_eth_header *macsec_ethhdr(struct sk_buff *skb)
+ return (struct macsec_eth_header *)skb_mac_header(skb);
+ }
+
+-static sci_t dev_to_sci(struct net_device *dev, __be16 port)
+-{
+- return make_sci(dev->dev_addr, port);
+-}
+-
+ static void __macsec_pn_wrapped(struct macsec_secy *secy,
+ struct macsec_tx_sa *tx_sa)
+ {
+@@ -3622,7 +3617,6 @@ static int macsec_set_mac_address(struct net_device *dev, void *p)
+
+ out:
+ eth_hw_addr_set(dev, addr->sa_data);
+- macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES);
+
+ /* If h/w offloading is available, propagate to the device */
+ if (macsec_is_offloaded(macsec)) {
+@@ -3960,6 +3954,11 @@ static bool sci_exists(struct net_device *dev, sci_t sci)
+ return false;
+ }
+
++static sci_t dev_to_sci(struct net_device *dev, __be16 port)
++{
++ return make_sci(dev->dev_addr, port);
++}
++
+ static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 608de5a94165f..f90a21781d8d6 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -316,11 +316,11 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+
+ phydev->suspended_by_mdio_bus = 0;
+
+- /* If we managed to get here with the PHY state machine in a state other
+- * than PHY_HALTED this is an indication that something went wrong and
+- * we should most likely be using MAC managed PM and we are not.
++ /* If we managed to get here with the PHY state machine in a state neither
++ * PHY_HALTED nor PHY_READY this is an indication that something went wrong
++ * and we should most likely be using MAC managed PM and we are not.
+ */
+- WARN_ON(phydev->state != PHY_HALTED && !phydev->mac_managed_pm);
++ WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY);
+
+ ret = phy_init_hw(phydev);
+ if (ret < 0)
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 0f6efaabaa32b..d142ac8fcf6e2 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5906,6 +5906,11 @@ static void r8153_enter_oob(struct r8152 *tp)
+ ocp_data &= ~NOW_IS_OOB;
+ ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
+
++ /* RX FIFO settings for OOB */
++ ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL0, RXFIFO_THR1_OOB);
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL1, RXFIFO_THR2_OOB);
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL2, RXFIFO_THR3_OOB);
++
+ rtl_disable(tp);
+ rtl_reset_bmu(tp);
+
+@@ -6431,21 +6436,8 @@ static void r8156_fc_parameter(struct r8152 *tp)
+ u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp);
+ u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp);
+
+- switch (tp->version) {
+- case RTL_VER_10:
+- case RTL_VER_11:
+- ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 8);
+- ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 8);
+- break;
+- case RTL_VER_12:
+- case RTL_VER_13:
+- case RTL_VER_15:
+- ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
+- ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
+- break;
+- default:
+- break;
+- }
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
+ }
+
+ static void rtl8156_change_mtu(struct r8152 *tp)
+@@ -6557,6 +6549,11 @@ static void rtl8156_down(struct r8152 *tp)
+ ocp_data &= ~NOW_IS_OOB;
+ ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
+
++ /* RX FIFO settings for OOB */
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RXFIFO_FULL, 64 / 16);
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, 1024 / 16);
++ ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, 4096 / 16);
++
+ rtl_disable(tp);
+ rtl_reset_bmu(tp);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index faa279bbbcb2c..7eb23805aa942 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1403,6 +1403,8 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ else
+ conn_type = CONNECTION_INFRA_AP;
+ basic_req.basic.conn_type = cpu_to_le32(conn_type);
++ /* Fully activate/deactivate BSS network in AP mode only */
++ basic_req.basic.active = enable;
+ break;
+ case NL80211_IFTYPE_STATION:
+ if (vif->p2p)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index e86fe9ee4623e..d3f310877248b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -653,15 +653,6 @@ static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
+ }
+ }
+
+- if (changed & BSS_CHANGED_BEACON_ENABLED && info->enable_beacon) {
+- struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+-
+- mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid,
+- true);
+- mt7921_mcu_sta_update(dev, NULL, vif, true,
+- MT76_STA_INFO_STATE_NONE);
+- }
+-
+ if (changed & (BSS_CHANGED_BEACON |
+ BSS_CHANGED_BEACON_ENABLED))
+ mt7921_mcu_uni_add_beacon_offload(dev, hw, vif,
+@@ -1500,6 +1491,42 @@ mt7921_channel_switch_beacon(struct ieee80211_hw *hw,
+ mt7921_mutex_release(dev);
+ }
+
++static int
++mt7921_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
++{
++ struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++ struct mt7921_phy *phy = mt7921_hw_phy(hw);
++ struct mt7921_dev *dev = mt7921_hw_dev(hw);
++ int err;
++
++ err = mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid,
++ true);
++ if (err)
++ return err;
++
++ err = mt7921_mcu_set_bss_pm(dev, vif, true);
++ if (err)
++ return err;
++
++ return mt7921_mcu_sta_update(dev, NULL, vif, true,
++ MT76_STA_INFO_STATE_NONE);
++}
++
++static void
++mt7921_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
++{
++ struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++ struct mt7921_phy *phy = mt7921_hw_phy(hw);
++ struct mt7921_dev *dev = mt7921_hw_dev(hw);
++ int err;
++
++ err = mt7921_mcu_set_bss_pm(dev, vif, false);
++ if (err)
++ return;
++
++ mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid, false);
++}
++
+ const struct ieee80211_ops mt7921_ops = {
+ .tx = mt7921_tx,
+ .start = mt7921_start,
+@@ -1510,6 +1537,8 @@ const struct ieee80211_ops mt7921_ops = {
+ .conf_tx = mt7921_conf_tx,
+ .configure_filter = mt7921_configure_filter,
+ .bss_info_changed = mt7921_bss_info_changed,
++ .start_ap = mt7921_start_ap,
++ .stop_ap = mt7921_stop_ap,
+ .sta_state = mt7921_sta_state,
+ .sta_pre_rcu_remove = mt76_sta_pre_rcu_remove,
+ .set_key = mt7921_set_key,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 613a94be8ea44..6d0aceb5226ab 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -1020,7 +1020,7 @@ mt7921_mcu_uni_bss_bcnft(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ &bcnft_req, sizeof(bcnft_req), true);
+ }
+
+-static int
++int
+ mt7921_mcu_set_bss_pm(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+ {
+@@ -1049,9 +1049,6 @@ mt7921_mcu_set_bss_pm(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ };
+ int err;
+
+- if (vif->type != NL80211_IFTYPE_STATION)
+- return 0;
+-
+ err = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_BSS_ABORT),
+ &req_hdr, sizeof(req_hdr), false);
+ if (err < 0 || !enable)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 66054123bcc47..cebc3cfa01b8a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -280,6 +280,8 @@ int mt7921_wpdma_reset(struct mt7921_dev *dev, bool force);
+ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev);
+ void mt7921_dma_cleanup(struct mt7921_dev *dev);
+ int mt7921_run_firmware(struct mt7921_dev *dev);
++int mt7921_mcu_set_bss_pm(struct mt7921_dev *dev, struct ieee80211_vif *vif,
++ bool enable);
+ int mt7921_mcu_sta_update(struct mt7921_dev *dev, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif, bool enable,
+ enum mt76_sta_info_state state);
+diff --git a/drivers/nfc/pn533/uart.c b/drivers/nfc/pn533/uart.c
+index 2caf997f9bc94..07596bf5f7d6d 100644
+--- a/drivers/nfc/pn533/uart.c
++++ b/drivers/nfc/pn533/uart.c
+@@ -310,6 +310,7 @@ static void pn532_uart_remove(struct serdev_device *serdev)
+ pn53x_unregister_nfc(pn532->priv);
+ serdev_device_close(serdev);
+ pn53x_common_clean(pn532->priv);
++ del_timer_sync(&pn532->cmd_timeout);
+ kfree_skb(pn532->recv_skb);
+ kfree(pn532);
+ }
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 6ffc9e4258a80..78edb1ea4748d 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1549,7 +1549,6 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ scsi_init_command(sdev, cmd);
+
+ cmd->eh_eflags = 0;
+- cmd->allowed = 0;
+ cmd->prot_type = 0;
+ cmd->prot_flags = 0;
+ cmd->submitter = 0;
+@@ -1600,6 +1599,8 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ return ret;
+ }
+
++ /* Usually overridden by the ULP */
++ cmd->allowed = 0;
+ memset(cmd->cmnd, 0, sizeof(cmd->cmnd));
+ return scsi_cmd_to_driver(cmd)->init_command(cmd);
+ }
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index fe000da113327..8ced292c4b962 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -2012,7 +2012,7 @@ static int storvsc_probe(struct hv_device *device,
+ */
+ host_dev->handle_error_wq =
+ alloc_ordered_workqueue("storvsc_error_wq_%d",
+- WQ_MEM_RECLAIM,
++ 0,
+ host->host_no);
+ if (!host_dev->handle_error_wq) {
+ ret = -ENOMEM;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index b89075f3b6ab7..2efaed36a3adc 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2402,15 +2402,21 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h, int charcount,
+ struct fb_info *info = fbcon_info_from_console(vc->vc_num);
+ struct fbcon_ops *ops = info->fbcon_par;
+ struct fbcon_display *p = &fb_display[vc->vc_num];
+- int resize;
++ int resize, ret, old_userfont, old_width, old_height, old_charcount;
+ char *old_data = NULL;
+
+ resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
+ if (p->userfont)
+ old_data = vc->vc_font.data;
+ vc->vc_font.data = (void *)(p->fontdata = data);
++ old_userfont = p->userfont;
+ if ((p->userfont = userfont))
+ REFCOUNT(data)++;
++
++ old_width = vc->vc_font.width;
++ old_height = vc->vc_font.height;
++ old_charcount = vc->vc_font.charcount;
++
+ vc->vc_font.width = w;
+ vc->vc_font.height = h;
+ vc->vc_font.charcount = charcount;
+@@ -2426,7 +2432,9 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h, int charcount,
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= w;
+ rows /= h;
+- vc_resize(vc, cols, rows);
++ ret = vc_resize(vc, cols, rows);
++ if (ret)
++ goto err_out;
+ } else if (con_is_visible(vc)
+ && vc->vc_mode == KD_TEXT) {
+ fbcon_clear_margins(vc, 0);
+@@ -2436,6 +2444,21 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h, int charcount,
+ if (old_data && (--REFCOUNT(old_data) == 0))
+ kfree(old_data - FONT_EXTRA_WORDS * sizeof(int));
+ return 0;
++
++err_out:
++ p->fontdata = old_data;
++ vc->vc_font.data = (void *)old_data;
++
++ if (userfont) {
++ p->userfont = old_userfont;
++ REFCOUNT(data)--;
++ }
++
++ vc->vc_font.width = old_width;
++ vc->vc_font.height = old_height;
++ vc->vc_font.charcount = old_charcount;
++
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 3369734108af2..e88e8f6f0a334 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -581,27 +581,30 @@ static int lock_pages(
+ struct privcmd_dm_op_buf kbufs[], unsigned int num,
+ struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
+ {
+- unsigned int i;
++ unsigned int i, off = 0;
+
+- for (i = 0; i < num; i++) {
++ for (i = 0; i < num; ) {
+ unsigned int requested;
+ int page_count;
+
+ requested = DIV_ROUND_UP(
+ offset_in_page(kbufs[i].uptr) + kbufs[i].size,
+- PAGE_SIZE);
++ PAGE_SIZE) - off;
+ if (requested > nr_pages)
+ return -ENOSPC;
+
+ page_count = pin_user_pages_fast(
+- (unsigned long) kbufs[i].uptr,
++ (unsigned long)kbufs[i].uptr + off * PAGE_SIZE,
+ requested, FOLL_WRITE, pages);
+- if (page_count < 0)
+- return page_count;
++ if (page_count <= 0)
++ return page_count ? : -EFAULT;
+
+ *pinned += page_count;
+ nr_pages -= page_count;
+ pages += page_count;
++
++ off = (requested == page_count) ? 0 : off + page_count;
++ i += !off;
+ }
+
+ return 0;
+@@ -677,10 +680,8 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
+ }
+
+ rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
+- if (rc < 0) {
+- nr_pages = pinned;
++ if (rc < 0)
+ goto out;
+- }
+
+ for (i = 0; i < kdata.num; i++) {
+ set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
+@@ -692,7 +693,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
+ xen_preemptible_hcall_end();
+
+ out:
+- unlock_pages(pages, nr_pages);
++ unlock_pages(pages, pinned);
+ kfree(xbufs);
+ kfree(pages);
+ kfree(kbufs);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index deaed255f301e..a8ecd83abb11e 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -440,39 +440,26 @@ void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
+ btrfs_put_caching_control(caching_ctl);
+ }
+
+-int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache)
++static int btrfs_caching_ctl_wait_done(struct btrfs_block_group *cache,
++ struct btrfs_caching_control *caching_ctl)
++{
++ wait_event(caching_ctl->wait, btrfs_block_group_done(cache));
++ return cache->cached == BTRFS_CACHE_ERROR ? -EIO : 0;
++}
++
++static int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache)
+ {
+ struct btrfs_caching_control *caching_ctl;
+- int ret = 0;
++ int ret;
+
+ caching_ctl = btrfs_get_caching_control(cache);
+ if (!caching_ctl)
+ return (cache->cached == BTRFS_CACHE_ERROR) ? -EIO : 0;
+-
+- wait_event(caching_ctl->wait, btrfs_block_group_done(cache));
+- if (cache->cached == BTRFS_CACHE_ERROR)
+- ret = -EIO;
++ ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
+ btrfs_put_caching_control(caching_ctl);
+ return ret;
+ }
+
+-static bool space_cache_v1_done(struct btrfs_block_group *cache)
+-{
+- bool ret;
+-
+- spin_lock(&cache->lock);
+- ret = cache->cached != BTRFS_CACHE_FAST;
+- spin_unlock(&cache->lock);
+-
+- return ret;
+-}
+-
+-void btrfs_wait_space_cache_v1_finished(struct btrfs_block_group *cache,
+- struct btrfs_caching_control *caching_ctl)
+-{
+- wait_event(caching_ctl->wait, space_cache_v1_done(cache));
+-}
+-
+ #ifdef CONFIG_BTRFS_DEBUG
+ static void fragment_free_space(struct btrfs_block_group *block_group)
+ {
+@@ -750,9 +737,8 @@ done:
+ btrfs_put_block_group(block_group);
+ }
+
+-int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only)
++int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
+ {
+- DEFINE_WAIT(wait);
+ struct btrfs_fs_info *fs_info = cache->fs_info;
+ struct btrfs_caching_control *caching_ctl = NULL;
+ int ret = 0;
+@@ -785,10 +771,7 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
+ }
+ WARN_ON(cache->caching_ctl);
+ cache->caching_ctl = caching_ctl;
+- if (btrfs_test_opt(fs_info, SPACE_CACHE))
+- cache->cached = BTRFS_CACHE_FAST;
+- else
+- cache->cached = BTRFS_CACHE_STARTED;
++ cache->cached = BTRFS_CACHE_STARTED;
+ cache->has_caching_ctl = 1;
+ spin_unlock(&cache->lock);
+
+@@ -801,8 +784,8 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
+
+ btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
+ out:
+- if (load_cache_only && caching_ctl)
+- btrfs_wait_space_cache_v1_finished(cache, caching_ctl);
++ if (wait && caching_ctl)
++ ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
+ if (caching_ctl)
+ btrfs_put_caching_control(caching_ctl);
+
+@@ -3313,7 +3296,7 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
+ * space back to the block group, otherwise we will leak space.
+ */
+ if (!alloc && !btrfs_block_group_done(cache))
+- btrfs_cache_block_group(cache, 1);
++ btrfs_cache_block_group(cache, true);
+
+ byte_in_group = bytenr - cache->start;
+ WARN_ON(byte_in_group > cache->length);
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index 35e0e860cc0bf..6b3cdc4cbc41e 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -263,9 +263,7 @@ void btrfs_dec_nocow_writers(struct btrfs_block_group *bg);
+ void btrfs_wait_nocow_writers(struct btrfs_block_group *bg);
+ void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
+ u64 num_bytes);
+-int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache);
+-int btrfs_cache_block_group(struct btrfs_block_group *cache,
+- int load_cache_only);
++int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait);
+ void btrfs_put_caching_control(struct btrfs_caching_control *ctl);
+ struct btrfs_caching_control *btrfs_get_caching_control(
+ struct btrfs_block_group *cache);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 3a51d0c13a957..7d3ca3ea0bcec 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -494,7 +494,6 @@ struct btrfs_free_cluster {
+ enum btrfs_caching_type {
+ BTRFS_CACHE_NO,
+ BTRFS_CACHE_STARTED,
+- BTRFS_CACHE_FAST,
+ BTRFS_CACHE_FINISHED,
+ BTRFS_CACHE_ERROR,
+ };
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index a7dd6ba25e990..c0f358b958abd 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -165,7 +165,7 @@ no_valid_dev_replace_entry_found:
+ */
+ if (btrfs_find_device(fs_info->fs_devices, &args)) {
+ btrfs_err(fs_info,
+- "replace devid present without an active replace item");
++"replace without active item, run 'device scan --forget' on the target device");
+ ret = -EUCLEAN;
+ } else {
+ dev_replace->srcdev = NULL;
+@@ -1128,8 +1128,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
+ up_write(&dev_replace->rwsem);
+
+ /* Scrub for replace must not be running in suspended state */
+- ret = btrfs_scrub_cancel(fs_info);
+- ASSERT(ret != -ENOTCONN);
++ btrfs_scrub_cancel(fs_info);
+
+ trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index f2c79838ebe52..ced3fc76063f1 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2567,17 +2567,10 @@ int btrfs_pin_extent_for_log_replay(struct btrfs_trans_handle *trans,
+ return -EINVAL;
+
+ /*
+- * pull in the free space cache (if any) so that our pin
+- * removes the free space from the cache. We have load_only set
+- * to one because the slow code to read in the free extents does check
+- * the pinned extents.
++ * Fully cache the free space first so that our pin removes the free space
++ * from the cache.
+ */
+- btrfs_cache_block_group(cache, 1);
+- /*
+- * Make sure we wait until the cache is completely built in case it is
+- * missing or is invalid and therefore needs to be rebuilt.
+- */
+- ret = btrfs_wait_block_group_cache_done(cache);
++ ret = btrfs_cache_block_group(cache, true);
+ if (ret)
+ goto out;
+
+@@ -2600,12 +2593,7 @@ static int __exclude_logged_extent(struct btrfs_fs_info *fs_info,
+ if (!block_group)
+ return -EINVAL;
+
+- btrfs_cache_block_group(block_group, 1);
+- /*
+- * Make sure we wait until the cache is completely built in case it is
+- * missing or is invalid and therefore needs to be rebuilt.
+- */
+- ret = btrfs_wait_block_group_cache_done(block_group);
++ ret = btrfs_cache_block_group(block_group, true);
+ if (ret)
+ goto out;
+
+@@ -4415,7 +4403,7 @@ have_block_group:
+ ffe_ctl->cached = btrfs_block_group_done(block_group);
+ if (unlikely(!ffe_ctl->cached)) {
+ ffe_ctl->have_caching_bg = true;
+- ret = btrfs_cache_block_group(block_group, 0);
++ ret = btrfs_cache_block_group(block_group, false);
+
+ /*
+ * If we get ENOMEM here or something else we want to
+@@ -6169,13 +6157,7 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+
+ if (end - start >= range->minlen) {
+ if (!btrfs_block_group_done(cache)) {
+- ret = btrfs_cache_block_group(cache, 0);
+- if (ret) {
+- bg_failed++;
+- bg_ret = ret;
+- continue;
+- }
+- ret = btrfs_wait_block_group_cache_done(cache);
++ ret = btrfs_cache_block_group(cache, true);
+ if (ret) {
+ bg_failed++;
+ bg_ret = ret;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 89c6d7ff19874..78df9b8557ddd 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2483,6 +2483,7 @@ static int fill_holes(struct btrfs_trans_handle *trans,
+ btrfs_set_file_extent_num_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_ram_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_offset(leaf, fi, 0);
++ btrfs_set_file_extent_generation(leaf, fi, trans->transid);
+ btrfs_mark_buffer_dirty(leaf);
+ goto out;
+ }
+@@ -2499,6 +2500,7 @@ static int fill_holes(struct btrfs_trans_handle *trans,
+ btrfs_set_file_extent_num_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_ram_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_offset(leaf, fi, 0);
++ btrfs_set_file_extent_generation(leaf, fi, trans->transid);
+ btrfs_mark_buffer_dirty(leaf);
+ goto out;
+ }
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index a64b26b169040..d647cb2938c01 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -349,9 +349,10 @@ int btrfs_del_root_ref(struct btrfs_trans_handle *trans, u64 root_id,
+ key.offset = ref_id;
+ again:
+ ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
+- if (ret < 0)
++ if (ret < 0) {
++ err = ret;
+ goto out;
+- if (ret == 0) {
++ } else if (ret == 0) {
+ leaf = path->nodes[0];
+ ref = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_root_ref);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 9cd9d06f54699..3460fd6743807 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2344,8 +2344,11 @@ int btrfs_get_dev_args_from_path(struct btrfs_fs_info *fs_info,
+
+ ret = btrfs_get_bdev_and_sb(path, FMODE_READ, fs_info->bdev_holder, 0,
+ &bdev, &disk_super);
+- if (ret)
++ if (ret) {
++ btrfs_put_dev_args_from_path(args);
+ return ret;
++ }
++
+ args->devid = btrfs_stack_device_id(&disk_super->dev_item);
+ memcpy(args->uuid, disk_super->dev_item.uuid, BTRFS_UUID_SIZE);
+ if (btrfs_fs_incompat(fs_info, METADATA_UUID))
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 7421abcf325a5..5bb8d8c863119 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -371,6 +371,9 @@ static int btrfs_xattr_handler_set(const struct xattr_handler *handler,
+ const char *name, const void *buffer,
+ size_t size, int flags)
+ {
++ if (btrfs_root_readonly(BTRFS_I(inode)->root))
++ return -EROFS;
++
+ name = xattr_full_name(handler, name);
+ return btrfs_setxattr_trans(inode, name, buffer, size, flags);
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index aa4c1d403708f..3898ec2632dc4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3671,7 +3671,7 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ loff_t offset, loff_t len)
+ {
+- struct inode *inode;
++ struct inode *inode = file_inode(file);
+ struct cifsFileInfo *cfile = file->private_data;
+ struct file_zero_data_information fsctl_buf;
+ long rc;
+@@ -3680,14 +3680,12 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+
+ xid = get_xid();
+
+- inode = d_inode(cfile->dentry);
+-
++ inode_lock(inode);
+ /* Need to make file sparse, if not already, before freeing range. */
+ /* Consider adding equivalent for compressed since it could also work */
+ if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse)) {
+ rc = -EOPNOTSUPP;
+- free_xid(xid);
+- return rc;
++ goto out;
+ }
+
+ filemap_invalidate_lock(inode->i_mapping);
+@@ -3707,8 +3705,10 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ true /* is_fctl */, (char *)&fsctl_buf,
+ sizeof(struct file_zero_data_information),
+ CIFSMaxBufSize, NULL, NULL);
+- free_xid(xid);
+ filemap_invalidate_unlock(inode->i_mapping);
++out:
++ inode_unlock(inode);
++ free_xid(xid);
+ return rc;
+ }
+
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index c705de32e2257..c7614ade875b5 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2571,19 +2571,15 @@ alloc_path_with_tree_prefix(__le16 **out_path, int *out_size, int *out_len,
+
+ path_len = UniStrnlen((wchar_t *)path, PATH_MAX);
+
+- /*
+- * make room for one path separator between the treename and
+- * path
+- */
+- *out_len = treename_len + 1 + path_len;
++ /* make room for one path separator only if @path isn't empty */
++ *out_len = treename_len + (path[0] ? 1 : 0) + path_len;
+
+ /*
+- * final path needs to be null-terminated UTF16 with a
+- * size aligned to 8
++ * final path needs to be 8-byte aligned as specified in
++ * MS-SMB2 2.2.13 SMB2 CREATE Request.
+ */
+-
+- *out_size = roundup((*out_len+1)*2, 8);
+- *out_path = kzalloc(*out_size, GFP_KERNEL);
++ *out_size = roundup(*out_len * sizeof(__le16), 8);
++ *out_path = kzalloc(*out_size + sizeof(__le16) /* null */, GFP_KERNEL);
+ if (!*out_path)
+ return -ENOMEM;
+
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 05221366a16dc..08a1993ab7fd3 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -134,10 +134,10 @@ static bool inode_io_list_move_locked(struct inode *inode,
+
+ static void wb_wakeup(struct bdi_writeback *wb)
+ {
+- spin_lock_bh(&wb->work_lock);
++ spin_lock_irq(&wb->work_lock);
+ if (test_bit(WB_registered, &wb->state))
+ mod_delayed_work(bdi_wq, &wb->dwork, 0);
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+ }
+
+ static void finish_writeback_work(struct bdi_writeback *wb,
+@@ -164,7 +164,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
+ if (work->done)
+ atomic_inc(&work->done->cnt);
+
+- spin_lock_bh(&wb->work_lock);
++ spin_lock_irq(&wb->work_lock);
+
+ if (test_bit(WB_registered, &wb->state)) {
+ list_add_tail(&work->list, &wb->work_list);
+@@ -172,7 +172,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
+ } else
+ finish_writeback_work(wb, work);
+
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+ }
+
+ /**
+@@ -2082,13 +2082,13 @@ static struct wb_writeback_work *get_next_work_item(struct bdi_writeback *wb)
+ {
+ struct wb_writeback_work *work = NULL;
+
+- spin_lock_bh(&wb->work_lock);
++ spin_lock_irq(&wb->work_lock);
+ if (!list_empty(&wb->work_list)) {
+ work = list_entry(wb->work_list.next,
+ struct wb_writeback_work, list);
+ list_del_init(&work->list);
+ }
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+ return work;
+ }
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index e6a7e769d25dd..a59f8d645654a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -4238,6 +4238,13 @@ static int build_mount_idmapped(const struct mount_attr *attr, size_t usize,
+ err = -EPERM;
+ goto out_fput;
+ }
++
++ /* We're not controlling the target namespace. */
++ if (!ns_capable(mnt_userns, CAP_SYS_ADMIN)) {
++ err = -EPERM;
++ goto out_fput;
++ }
++
+ kattr->mnt_userns = get_user_ns(mnt_userns);
+
+ out_fput:
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 2d72b1b7ed74c..9a0e4a89cdf14 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -221,8 +221,10 @@ nfs_file_fsync_commit(struct file *file, int datasync)
+ int
+ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+- struct nfs_open_context *ctx = nfs_file_open_context(file);
+ struct inode *inode = file_inode(file);
++ struct nfs_inode *nfsi = NFS_I(inode);
++ long save_nredirtied = atomic_long_read(&nfsi->redirtied_pages);
++ long nredirtied;
+ int ret;
+
+ trace_nfs_fsync_enter(inode);
+@@ -237,15 +239,10 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ ret = pnfs_sync_inode(inode, !!datasync);
+ if (ret != 0)
+ break;
+- if (!test_and_clear_bit(NFS_CONTEXT_RESEND_WRITES, &ctx->flags))
++ nredirtied = atomic_long_read(&nfsi->redirtied_pages);
++ if (nredirtied == save_nredirtied)
+ break;
+- /*
+- * If nfs_file_fsync_commit detected a server reboot, then
+- * resend all dirty pages that might have been covered by
+- * the NFS_CONTEXT_RESEND_WRITES flag
+- */
+- start = 0;
+- end = LLONG_MAX;
++ save_nredirtied = nredirtied;
+ }
+
+ trace_nfs_fsync_exit(inode, ret);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index b4e46b0ffa2dc..bea7c005119c3 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -426,6 +426,7 @@ nfs_ilookup(struct super_block *sb, struct nfs_fattr *fattr, struct nfs_fh *fh)
+ static void nfs_inode_init_regular(struct nfs_inode *nfsi)
+ {
+ atomic_long_set(&nfsi->nrequests, 0);
++ atomic_long_set(&nfsi->redirtied_pages, 0);
+ INIT_LIST_HEAD(&nfsi->commit_info.list);
+ atomic_long_set(&nfsi->commit_info.ncommit, 0);
+ atomic_set(&nfsi->commit_info.rpcs_out, 0);
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index e88f6b18445ec..9eb1812878795 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -340,6 +340,11 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ goto out;
+ }
+
++ if (!S_ISREG(fattr->mode)) {
++ res = ERR_PTR(-EBADF);
++ goto out;
++ }
++
+ res = ERR_PTR(-ENOMEM);
+ len = strlen(SSC_READ_NAME_BODY) + 16;
+ read_name = kzalloc(len, GFP_KERNEL);
+@@ -357,6 +362,7 @@ static struct file *__nfs42_ssc_open(struct vfsmount *ss_mnt,
+ r_ino->i_fop);
+ if (IS_ERR(filep)) {
+ res = ERR_CAST(filep);
++ iput(r_ino);
+ goto out_free_name;
+ }
+
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 1c706465d090b..5d7e1c2061842 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1419,10 +1419,12 @@ static void nfs_initiate_write(struct nfs_pgio_header *hdr,
+ */
+ static void nfs_redirty_request(struct nfs_page *req)
+ {
++ struct nfs_inode *nfsi = NFS_I(page_file_mapping(req->wb_page)->host);
++
+ /* Bump the transmission count */
+ req->wb_nio++;
+ nfs_mark_request_dirty(req);
+- set_bit(NFS_CONTEXT_RESEND_WRITES, &nfs_req_openctx(req)->flags);
++ atomic_long_inc(&nfsi->redirtied_pages);
+ nfs_end_page_writeback(req);
+ nfs_release_request(req);
+ }
+@@ -1892,7 +1894,7 @@ static void nfs_commit_release_pages(struct nfs_commit_data *data)
+ /* We have a mismatch. Write the page again */
+ dprintk_cont(" mismatch\n");
+ nfs_mark_request_dirty(req);
+- set_bit(NFS_CONTEXT_RESEND_WRITES, &nfs_req_openctx(req)->flags);
++ atomic_long_inc(&NFS_I(data->inode)->redirtied_pages);
+ next:
+ nfs_unlock_and_release_request(req);
+ /* Latency breaker */
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index 1b8c89dbf6684..3629049decac1 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -478,8 +478,7 @@ out:
+ }
+
+ #ifdef CONFIG_NTFS3_FS_POSIX_ACL
+-static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
+- struct inode *inode, int type,
++static struct posix_acl *ntfs_get_acl_ex(struct inode *inode, int type,
+ int locked)
+ {
+ struct ntfs_inode *ni = ntfs_i(inode);
+@@ -514,7 +513,7 @@ static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
+
+ /* Translate extended attribute to acl. */
+ if (err >= 0) {
+- acl = posix_acl_from_xattr(mnt_userns, buf, err);
++ acl = posix_acl_from_xattr(&init_user_ns, buf, err);
+ } else if (err == -ENODATA) {
+ acl = NULL;
+ } else {
+@@ -537,8 +536,7 @@ struct posix_acl *ntfs_get_acl(struct inode *inode, int type, bool rcu)
+ if (rcu)
+ return ERR_PTR(-ECHILD);
+
+- /* TODO: init_user_ns? */
+- return ntfs_get_acl_ex(&init_user_ns, inode, type, 0);
++ return ntfs_get_acl_ex(inode, type, 0);
+ }
+
+ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+@@ -590,7 +588,7 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ value = kmalloc(size, GFP_NOFS);
+ if (!value)
+ return -ENOMEM;
+- err = posix_acl_to_xattr(mnt_userns, acl, value, size);
++ err = posix_acl_to_xattr(&init_user_ns, acl, value, size);
+ if (err < 0)
+ goto out;
+ flags = 0;
+@@ -641,7 +639,7 @@ static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
+ if (!acl)
+ return -ENODATA;
+
+- err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
++ err = posix_acl_to_xattr(&init_user_ns, acl, buffer, size);
+ posix_acl_release(acl);
+
+ return err;
+@@ -665,12 +663,12 @@ static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
+ if (!value) {
+ acl = NULL;
+ } else {
+- acl = posix_acl_from_xattr(mnt_userns, value, size);
++ acl = posix_acl_from_xattr(&init_user_ns, value, size);
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+
+ if (acl) {
+- err = posix_acl_valid(mnt_userns, acl);
++ err = posix_acl_valid(&init_user_ns, acl);
+ if (err)
+ goto release_and_out;
+ }
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 801e60bab9555..c28bc983a7b1c 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3403,10 +3403,12 @@ void ocfs2_dlm_shutdown(struct ocfs2_super *osb,
+ ocfs2_lock_res_free(&osb->osb_nfs_sync_lockres);
+ ocfs2_lock_res_free(&osb->osb_orphan_scan.os_lockres);
+
+- ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
+- osb->cconn = NULL;
++ if (osb->cconn) {
++ ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
++ osb->cconn = NULL;
+
+- ocfs2_dlm_shutdown_debug(osb);
++ ocfs2_dlm_shutdown_debug(osb);
++ }
+ }
+
+ static int ocfs2_drop_lock(struct ocfs2_super *osb,
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 438be028935d2..bc18c27e96830 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -1914,8 +1914,7 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
+ !ocfs2_is_hard_readonly(osb))
+ hangup_needed = 1;
+
+- if (osb->cconn)
+- ocfs2_dlm_shutdown(osb, hangup_needed);
++ ocfs2_dlm_shutdown(osb, hangup_needed);
+
+ ocfs2_blockcheck_stats_debugfs_remove(&osb->osb_ecc_stats);
+ debugfs_remove_recursive(osb->osb_debug_root);
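
These two ocfs2 hunks move the osb->cconn NULL guard from the caller into ocfs2_dlm_shutdown() itself, so every call site becomes safe and the debugfs teardown stays paired with the disconnect. A tiny sketch of the pattern (illustrative C; the names are made up):

    #include <stdio.h>
    #include <stdlib.h>

    struct conn { int id; };

    static void dlm_shutdown(struct conn **connp)
    {
        if (!*connp)              /* the guard lives here now, not at each caller */
            return;
        printf("disconnecting %d\n", (*connp)->id);
        free(*connp);
        *connp = NULL;
    }

    int main(void)
    {
        struct conn *c = malloc(sizeof(*c));
        c->id = 1;
        dlm_shutdown(&c);         /* does the work */
        dlm_shutdown(&c);         /* now a safe no-op */
        return 0;
    }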
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 2d04e3470d4cd..313788bc0c307 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -525,10 +525,12 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ struct vm_area_struct *vma = walk->vma;
+ bool locked = !!(vma->vm_flags & VM_LOCKED);
+ struct page *page = NULL;
+- bool migration = false;
++ bool migration = false, young = false, dirty = false;
+
+ if (pte_present(*pte)) {
+ page = vm_normal_page(vma, addr, *pte);
++ young = pte_young(*pte);
++ dirty = pte_dirty(*pte);
+ } else if (is_swap_pte(*pte)) {
+ swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+@@ -558,8 +560,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ if (!page)
+ return;
+
+- smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
+- locked, migration);
++ smaps_account(mss, page, false, young, dirty, locked, migration);
+ }
+
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index de86f5b2859f9..ab0576d372d6e 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1601,6 +1601,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ wake_userfault(vma->vm_userfaultfd_ctx.ctx, &range);
+ }
+
++ /* Reset ptes for the whole vma range if wr-protected */
++ if (userfaultfd_wp(vma))
++ uffd_wp_range(mm, vma, start, vma_end - start, false);
++
+ new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
+ prev = vma_merge(mm, prev, start, vma_end, new_flags,
+ vma->anon_vma, vma->vm_file, vma->vm_pgoff,
+diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h
+index d0f7bdd2fdf23..db13bb620f527 100644
+--- a/include/asm-generic/sections.h
++++ b/include/asm-generic/sections.h
+@@ -97,7 +97,7 @@ static inline bool memory_contains(void *begin, void *end, void *virt,
+ /**
+ * memory_intersects - checks if the region occupied by an object intersects
+ * with another memory region
+- * @begin: virtual address of the beginning of the memory regien
++ * @begin: virtual address of the beginning of the memory region
+ * @end: virtual address of the end of the memory region
+ * @virt: virtual address of the memory object
+ * @size: size of the memory object
+@@ -110,7 +110,10 @@ static inline bool memory_intersects(void *begin, void *end, void *virt,
+ {
+ void *vend = virt + size;
+
+- return (virt >= begin && virt < end) || (vend >= begin && vend < end);
++ if (virt < end && vend > begin)
++ return true;
++
++ return false;
+ }
+
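
The memory_intersects() fix above matters when the object completely covers the region: the old test only asked whether either endpoint of the object fell inside [begin, end), so it returned false in exactly that case (and its "vend >= begin" also counted a merely touching object as an overlap). A quick standalone check of the old versus new predicate (plain C, assuming flat pointers):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    static bool old_intersects(char *b, char *e, char *v, size_t s)
    {
        char *ve = v + s;
        return (v >= b && v < e) || (ve >= b && ve < e);
    }

    static bool new_intersects(char *b, char *e, char *v, size_t s)
    {
        char *ve = v + s;
        return v < e && ve > b;
    }

    int main(void)
    {
        static char mem[100];
        /* object [10,90) covering region [40,60): the old test says false */
        assert(!old_intersects(mem + 40, mem + 60, mem + 10, 80));
        assert(new_intersects(mem + 40, mem + 60, mem + 10, 80));
        puts("covering object only caught by the fixed predicate");
        return 0;
    }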
+ /**
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 9ecead1042b9c..c322b98260f52 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -978,19 +978,30 @@ static inline void mod_memcg_page_state(struct page *page,
+
+ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
+ {
+- return READ_ONCE(memcg->vmstats.state[idx]);
++ long x = READ_ONCE(memcg->vmstats.state[idx]);
++#ifdef CONFIG_SMP
++ if (x < 0)
++ x = 0;
++#endif
++ return x;
+ }
+
+ static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
+ enum node_stat_item idx)
+ {
+ struct mem_cgroup_per_node *pn;
++ long x;
+
+ if (mem_cgroup_disabled())
+ return node_page_state(lruvec_pgdat(lruvec), idx);
+
+ pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+- return READ_ONCE(pn->lruvec_stats.state[idx]);
++ x = READ_ONCE(pn->lruvec_stats.state[idx]);
++#ifdef CONFIG_SMP
++ if (x < 0)
++ x = 0;
++#endif
++ return x;
+ }
+
+ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 5040cd774c5a3..b0b4ac92354a2 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -773,6 +773,7 @@ struct mlx5_core_dev {
+ enum mlx5_device_state state;
+ /* sync interface state */
+ struct mutex intf_state_mutex;
++ struct lock_class_key lock_key;
+ unsigned long intf_state;
+ struct mlx5_priv priv;
+ struct mlx5_profile profile;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 7898e29bcfb54..25b8860f47cc6 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2939,7 +2939,6 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
+ #define FOLL_MIGRATION 0x400 /* wait for page to replace migration entry */
+ #define FOLL_TRIED 0x800 /* a retry, previous pass started an IO */
+ #define FOLL_REMOTE 0x2000 /* we are working on non-current tsk/mm */
+-#define FOLL_COW 0x4000 /* internal GUP flag */
+ #define FOLL_ANON 0x8000 /* don't do file mappings */
+ #define FOLL_LONGTERM 0x10000 /* mapping lifetime is indefinite: see below */
+ #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 2563d30736e9a..db40bc62213bd 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -640,9 +640,23 @@ extern int sysctl_devconf_inherit_init_net;
+ */
+ static inline bool net_has_fallback_tunnels(const struct net *net)
+ {
+- return !IS_ENABLED(CONFIG_SYSCTL) ||
+- !sysctl_fb_tunnels_only_for_init_net ||
+- (net == &init_net && sysctl_fb_tunnels_only_for_init_net == 1);
++#if IS_ENABLED(CONFIG_SYSCTL)
++ int fb_tunnels_only_for_init_net = READ_ONCE(sysctl_fb_tunnels_only_for_init_net);
++
++ return !fb_tunnels_only_for_init_net ||
++ (net_eq(net, &init_net) && fb_tunnels_only_for_init_net == 1);
++#else
++ return true;
++#endif
++}
++
++static inline int net_inherit_devconf(void)
++{
++#if IS_ENABLED(CONFIG_SYSCTL)
++ return READ_ONCE(sysctl_devconf_inherit_init_net);
++#else
++ return 0;
++#endif
+ }
+
+ static inline int netdev_queue_numa_node_read(const struct netdev_queue *q)
+diff --git a/include/linux/netfilter_bridge/ebtables.h b/include/linux/netfilter_bridge/ebtables.h
+index a13296d6c7ceb..fd533552a062c 100644
+--- a/include/linux/netfilter_bridge/ebtables.h
++++ b/include/linux/netfilter_bridge/ebtables.h
+@@ -94,10 +94,6 @@ struct ebt_table {
+ struct ebt_replace_kernel *table;
+ unsigned int valid_hooks;
+ rwlock_t lock;
+- /* e.g. could be the table explicitly only allows certain
+- * matches, targets, ... 0 == let it in */
+- int (*check)(const struct ebt_table_info *info,
+- unsigned int valid_hooks);
+ /* the data used by the kernel */
+ struct ebt_table_info *private;
+ struct nf_hook_ops *ops;
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index a17c337dbdf1d..b1f83697699ee 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -182,6 +182,7 @@ struct nfs_inode {
+ /* Regular file */
+ struct {
+ atomic_long_t nrequests;
++ atomic_long_t redirtied_pages;
+ struct nfs_mds_commit_info commit_info;
+ struct mutex commit_mutex;
+ };
+diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
+index 732b522bacb7e..e1b8a915e9e9f 100644
+--- a/include/linux/userfaultfd_k.h
++++ b/include/linux/userfaultfd_k.h
+@@ -73,6 +73,8 @@ extern ssize_t mcopy_continue(struct mm_struct *dst_mm, unsigned long dst_start,
+ extern int mwriteprotect_range(struct mm_struct *dst_mm,
+ unsigned long start, unsigned long len,
+ bool enable_wp, atomic_t *mmap_changing);
++extern void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *vma,
++ unsigned long start, unsigned long len, bool enable_wp);
+
+ /* mm helpers */
+ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index c4898fcbf923b..f90f0021f5f2d 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -33,7 +33,7 @@ extern unsigned int sysctl_net_busy_poll __read_mostly;
+
+ static inline bool net_busy_loop_on(void)
+ {
+- return sysctl_net_busy_poll;
++ return READ_ONCE(sysctl_net_busy_poll);
+ }
+
+ static inline bool sk_can_busy_loop(const struct sock *sk)
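
This hunk, like many that follow, wraps a sysctl read in READ_ONCE(): the knob is written by the sysctl handler while fast paths read it locklessly, so the load has to be annotated to stop the compiler from tearing or refetching it. A userspace analogue using C11 relaxed atomics (assumption: a toy knob, not the real sysctl plumbing):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic unsigned int busy_poll_usecs;   /* the "sysctl" */

    static void *writer(void *arg)
    {
        (void)arg;
        for (unsigned int v = 0; v < 100000; v++)
            atomic_store_explicit(&busy_poll_usecs, v, memory_order_relaxed);
        return NULL;
    }

    static int busy_loop_on(void)
    {
        /* READ_ONCE() analogue: exactly one well-defined load per call */
        return atomic_load_explicit(&busy_poll_usecs, memory_order_relaxed) != 0;
    }

    int main(void)
    {
        pthread_t t;
        unsigned long hits = 0;

        pthread_create(&t, NULL, writer, NULL);
        for (int i = 0; i < 1000000; i++)
            hits += busy_loop_on();
        pthread_join(t, NULL);
        printf("busy-poll enabled on %lu of 1000000 samples\n", hits);
        return 0;
    }

Relaxed ordering is deliberate: like READ_ONCE/WRITE_ONCE, these annotations promise nothing about ordering against other memory, only that each access happens once and whole.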
+diff --git a/include/net/gro.h b/include/net/gro.h
+index 867656b0739c0..24003dea8fa4d 100644
+--- a/include/net/gro.h
++++ b/include/net/gro.h
+@@ -439,7 +439,7 @@ static inline void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb,
+ {
+ list_add_tail(&skb->list, &napi->rx_list);
+ napi->rx_count += segs;
+- if (napi->rx_count >= gro_normal_batch)
++ if (napi->rx_count >= READ_ONCE(gro_normal_batch))
+ gro_normal_list(napi);
+ }
+
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index 64daafd1fc41c..9c93e4981b680 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -270,6 +270,7 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
+
+ struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table,
+ struct flow_offload_tuple *tuple);
++void nf_flow_table_gc_run(struct nf_flowtable *flow_table);
+ void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable,
+ struct net_device *dev);
+ void nf_flow_table_cleanup(struct net_device *dev);
+@@ -306,6 +307,8 @@ void nf_flow_offload_stats(struct nf_flowtable *flowtable,
+ struct flow_offload *flow);
+
+ void nf_flow_table_offload_flush(struct nf_flowtable *flowtable);
++void nf_flow_table_offload_flush_cleanup(struct nf_flowtable *flowtable);
++
+ int nf_flow_table_offload_setup(struct nf_flowtable *flowtable,
+ struct net_device *dev,
+ enum flow_block_command cmd);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b8890ace0f879..0daad6e63ccb2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1635,6 +1635,7 @@ struct nftables_pernet {
+ struct list_head module_list;
+ struct list_head notify_list;
+ struct mutex commit_mutex;
++ u64 table_handle;
+ unsigned int base_seq;
+ u8 validate_state;
+ };
+diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h
+index f81aa95ffbc40..f525566a0864d 100644
+--- a/include/ufs/ufshci.h
++++ b/include/ufs/ufshci.h
+@@ -135,11 +135,7 @@ static inline u32 ufshci_version(u32 major, u32 minor)
+
+ #define UFSHCD_UIC_MASK (UIC_COMMAND_COMPL | UFSHCD_UIC_PWR_MASK)
+
+-#define UFSHCD_ERROR_MASK (UIC_ERROR |\
+- DEVICE_FATAL_ERROR |\
+- CONTROLLER_FATAL_ERROR |\
+- SYSTEM_BUS_FATAL_ERROR |\
+- CRYPTO_ENGINE_FATAL_ERROR)
++#define UFSHCD_ERROR_MASK (UIC_ERROR | INT_FATAL_ERRORS)
+
+ #define INT_FATAL_ERRORS (DEVICE_FATAL_ERROR |\
+ CONTROLLER_FATAL_ERROR |\
+diff --git a/init/main.c b/init/main.c
+index 91642a4e69be6..1fe7942f5d4a8 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1446,13 +1446,25 @@ static noinline void __init kernel_init_freeable(void);
+
+ #if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_STRICT_MODULE_RWX)
+ bool rodata_enabled __ro_after_init = true;
++
++#ifndef arch_parse_debug_rodata
++static inline bool arch_parse_debug_rodata(char *str) { return false; }
++#endif
++
+ static int __init set_debug_rodata(char *str)
+ {
+- if (strtobool(str, &rodata_enabled))
++ if (arch_parse_debug_rodata(str))
++ return 0;
++
++ if (str && !strcmp(str, "on"))
++ rodata_enabled = true;
++ else if (str && !strcmp(str, "off"))
++ rodata_enabled = false;
++ else
+ pr_warn("Invalid option string for rodata: '%s'\n", str);
+- return 1;
++ return 0;
+ }
+-__setup("rodata=", set_debug_rodata);
++early_param("rodata", set_debug_rodata);
+ #endif
+
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 6a67dbf5195f0..cd155b7e1346d 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -4331,7 +4331,12 @@ done:
+ copy_iov:
+ iov_iter_restore(&s->iter, &s->iter_state);
+ ret = io_setup_async_rw(req, iovec, s, false);
+- return ret ?: -EAGAIN;
++ if (!ret) {
++ if (kiocb->ki_flags & IOCB_WRITE)
++ kiocb_end_write(req);
++ return -EAGAIN;
++ }
++ return ret;
+ }
+ out_free:
+ /* it's reportedly faster than delegating the null check to kfree() */
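
The io_uring hunk above makes the -EAGAIN retry path end the "write in progress" state taken for the current attempt before asking for resubmission; otherwise the retry would begin a second write without the first one ever ending. A generic sketch of the balance invariant (illustrative C; start_write()/end_write() merely model the paired bookkeeping around a write, not the io_uring API):

    #include <stdio.h>

    static int write_depth;             /* models the held write state */

    static void start_write(void) { write_depth++; }
    static void end_write(void)   { write_depth--; }

    static int submit(int will_block)
    {
        start_write();
        if (will_block) {
            end_write();                /* undo this attempt before retrying */
            return -11;                 /* -EAGAIN */
        }
        end_write();
        return 0;
    }

    int main(void)
    {
        submit(1);                      /* retried path */
        submit(0);                      /* completes    */
        printf("write_depth at exit: %d (must be 0)\n", write_depth);
        return 0;
    }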
+diff --git a/kernel/audit_fsnotify.c b/kernel/audit_fsnotify.c
+index 6432a37ac1c94..c565fbf66ac87 100644
+--- a/kernel/audit_fsnotify.c
++++ b/kernel/audit_fsnotify.c
+@@ -102,6 +102,7 @@ struct audit_fsnotify_mark *audit_alloc_mark(struct audit_krule *krule, char *pa
+
+ ret = fsnotify_add_inode_mark(&audit_mark->mark, inode, 0);
+ if (ret < 0) {
++ audit_mark->path = NULL;
+ fsnotify_put_mark(&audit_mark->mark);
+ audit_mark = ERR_PTR(ret);
+ }
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 3a8c9d744800a..0c33e04c293ad 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1965,6 +1965,7 @@ void __audit_uring_exit(int success, long code)
+ goto out;
+ }
+
++ audit_return_fixup(ctx, success, code);
+ if (ctx->context == AUDIT_CTX_SYSCALL) {
+ /*
+ * NOTE: See the note in __audit_uring_entry() about the case
+@@ -2006,7 +2007,6 @@ void __audit_uring_exit(int success, long code)
+ audit_filter_inodes(current, ctx);
+ if (ctx->current_state != AUDIT_STATE_RECORD)
+ goto out;
+- audit_return_fixup(ctx, success, code);
+ audit_log_exit();
+
+ out:
+@@ -2090,13 +2090,13 @@ void __audit_syscall_exit(int success, long return_code)
+ if (!list_empty(&context->killed_trees))
+ audit_kill_trees(context);
+
++ audit_return_fixup(context, success, return_code);
+ /* run through both filters to ensure we set the filterkey properly */
+ audit_filter_syscall(current, context);
+ audit_filter_inodes(current, context);
+ if (context->current_state < AUDIT_STATE_RECORD)
+ goto out;
+
+- audit_return_fixup(context, success, return_code);
+ audit_log_exit();
+
+ out:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e91d2faef1605..0e45d405f151c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6999,8 +6999,7 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
+ struct bpf_reg_state *regs = cur_regs(env), *reg;
+ struct bpf_map *map = meta->map_ptr;
+- struct tnum range;
+- u64 val;
++ u64 val, max;
+ int err;
+
+ if (func_id != BPF_FUNC_tail_call)
+@@ -7010,10 +7009,11 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ return -EINVAL;
+ }
+
+- range = tnum_range(0, map->max_entries - 1);
+	reg = &regs[BPF_REG_3];
++ val = reg->var_off.value;
++ max = map->max_entries;
+
+- if (!register_is_const(reg) || !tnum_in(range, reg->var_off)) {
++ if (!(register_is_const(reg) && val < max)) {
+ bpf_map_key_store(aux, BPF_MAP_KEY_POISON);
+ return 0;
+ }
+@@ -7021,8 +7021,6 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ err = mark_chain_precision(env, BPF_REG_3);
+ if (err)
+ return err;
+-
+- val = reg->var_off.value;
+ if (bpf_map_key_unseen(aux))
+ bpf_map_key_store(aux, val);
+ else if (!bpf_map_key_poisoned(aux) &&
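
In the verifier hunk above, the tnum-based range test is replaced by a direct "known constant below max_entries" check on the tail-call key register; anything else poisons the key so the tail call takes the generic path. A simplified standalone model (assumption: register state reduced to a const flag plus a value):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct reg_state {
        bool is_const;     /* models register_is_const() */
        uint64_t value;    /* models reg->var_off.value  */
    };

    static bool key_is_usable(const struct reg_state *reg, uint64_t max_entries)
    {
        return reg->is_const && reg->value < max_entries;
    }

    int main(void)
    {
        struct reg_state known = { .is_const = true,  .value = 3 };
        struct reg_state oob   = { .is_const = true,  .value = 64 };
        struct reg_state vary  = { .is_const = false, .value = 0 };

        printf("known in-range: %d\n", key_is_usable(&known, 64)); /* 1 */
        printf("out of range:   %d\n", key_is_usable(&oob, 64));   /* 0 -> poison */
        printf("non-constant:   %d\n", key_is_usable(&vary, 64));  /* 0 -> poison */
        return 0;
    }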
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 13c8e91d78620..ce95aee05e8ae 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1811,6 +1811,7 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+
+ if (ss->css_rstat_flush) {
+ list_del_rcu(&css->rstat_css_node);
++ synchronize_rcu();
+ list_add_rcu(&css->rstat_css_node,
+ &dcgrp->rstat_css_list);
+ }
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 80697e5e03e49..08350e35aba24 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1707,11 +1707,12 @@ static struct kprobe *__disable_kprobe(struct kprobe *p)
+ /* Try to disarm and disable this/parent probe */
+ if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
+ /*
+- * If 'kprobes_all_disarmed' is set, 'orig_p'
+- * should have already been disarmed, so
+- * skip unneed disarming process.
++ * Don't be lazy here. Even if 'kprobes_all_disarmed'
++ * is false, 'orig_p' might not have been armed yet.
++ * Note arm_all_kprobes() __tries__ to arm all kprobes
++	 * on a best-effort basis.
+ */
+- if (!kprobes_all_disarmed) {
++ if (!kprobes_all_disarmed && !kprobe_disabled(orig_p)) {
+ ret = disarm_kprobe(orig_p, true);
+ if (ret) {
+ p->flags &= ~KPROBE_FLAG_DISABLED;
+diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
+index a492f159624fa..860b2dcf3ac46 100644
+--- a/kernel/sys_ni.c
++++ b/kernel/sys_ni.c
+@@ -277,6 +277,7 @@ COND_SYSCALL(landlock_restrict_self);
+
+ /* mm/fadvise.c */
+ COND_SYSCALL(fadvise64_64);
++COND_SYSCALL_COMPAT(fadvise64_64);
+
+ /* mm/, CONFIG_MMU only */
+ COND_SYSCALL(swapon);
+diff --git a/lib/ratelimit.c b/lib/ratelimit.c
+index e01a93f46f833..ce945c17980b9 100644
+--- a/lib/ratelimit.c
++++ b/lib/ratelimit.c
+@@ -26,10 +26,16 @@
+ */
+ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ {
++ /* Paired with WRITE_ONCE() in .proc_handler().
++	 * Changing two values separately could be inconsistent
++ * and some message could be lost. (See: net_ratelimit_state).
++ */
++ int interval = READ_ONCE(rs->interval);
++ int burst = READ_ONCE(rs->burst);
+ unsigned long flags;
+ int ret;
+
+- if (!rs->interval)
++ if (!interval)
+ return 1;
+
+ /*
+@@ -44,7 +50,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ if (!rs->begin)
+ rs->begin = jiffies;
+
+- if (time_is_before_jiffies(rs->begin + rs->interval)) {
++ if (time_is_before_jiffies(rs->begin + interval)) {
+ if (rs->missed) {
+ if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) {
+ printk_deferred(KERN_WARNING
+@@ -56,7 +62,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
+ rs->begin = jiffies;
+ rs->printed = 0;
+ }
+- if (rs->burst && rs->burst > rs->printed) {
++ if (burst && burst > rs->printed) {
+ rs->printed++;
+ ret = 1;
+ } else {
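
The ___ratelimit() change snapshots interval and burst once per call, so a concurrent sysctl write (paired with WRITE_ONCE() in the handler) can never be observed half-applied within a single decision. A compact userspace model of the resulting logic (illustrative C; timing via time(), a single window):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static _Atomic int rs_interval = 5;    /* seconds; written via "sysctl" */
    static _Atomic int rs_burst    = 10;

    static time_t begin;
    static int printed;

    static bool ratelimit_ok(void)
    {
        /* one consistent snapshot per decision (READ_ONCE analogue) */
        int interval = atomic_load(&rs_interval);
        int burst    = atomic_load(&rs_burst);
        time_t now   = time(NULL);

        if (!interval)
            return true;                   /* rate limiting disabled */
        if (!begin)
            begin = now;
        if (now >= begin + interval) {     /* window expired: reset */
            begin = now;
            printed = 0;
        }
        if (burst && burst > printed) {
            printed++;
            return true;
        }
        return false;
    }

    int main(void)
    {
        int allowed = 0;
        for (int i = 0; i < 100; i++)
            allowed += ratelimit_ok();
        printf("%d of 100 messages allowed in one window\n", allowed);
        return 0;
    }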
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 95550b8fa7fe2..de65cb1e5f761 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -260,10 +260,10 @@ void wb_wakeup_delayed(struct bdi_writeback *wb)
+ unsigned long timeout;
+
+ timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
+- spin_lock_bh(&wb->work_lock);
++ spin_lock_irq(&wb->work_lock);
+ if (test_bit(WB_registered, &wb->state))
+ queue_delayed_work(bdi_wq, &wb->dwork, timeout);
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+ }
+
+ static void wb_update_bandwidth_workfn(struct work_struct *work)
+@@ -334,12 +334,12 @@ static void cgwb_remove_from_bdi_list(struct bdi_writeback *wb);
+ static void wb_shutdown(struct bdi_writeback *wb)
+ {
+ /* Make sure nobody queues further work */
+- spin_lock_bh(&wb->work_lock);
++ spin_lock_irq(&wb->work_lock);
+ if (!test_and_clear_bit(WB_registered, &wb->state)) {
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+ return;
+ }
+- spin_unlock_bh(&wb->work_lock);
++ spin_unlock_irq(&wb->work_lock);
+
+ cgwb_remove_from_bdi_list(wb);
+ /*
+diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
+index f18a631e74797..b1efebfcf94bb 100644
+--- a/mm/bootmem_info.c
++++ b/mm/bootmem_info.c
+@@ -12,6 +12,7 @@
+ #include <linux/memblock.h>
+ #include <linux/bootmem_info.h>
+ #include <linux/memory_hotplug.h>
++#include <linux/kmemleak.h>
+
+ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+ {
+@@ -33,6 +34,7 @@ void put_page_bootmem(struct page *page)
+ ClearPagePrivate(page);
+ set_page_private(page, 0);
+ INIT_LIST_HEAD(&page->lru);
++ kmemleak_free_part(page_to_virt(page), PAGE_SIZE);
+ free_reserved_page(page);
+ }
+ }
+diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
+index a0dab8b5e45f2..53ba8b1e619ca 100644
+--- a/mm/damon/dbgfs.c
++++ b/mm/damon/dbgfs.c
+@@ -787,6 +787,9 @@ static int dbgfs_mk_context(char *name)
+ return -ENOENT;
+
+ new_dir = debugfs_create_dir(name, root);
++	/* The check below is required for a potentially duplicated name case */
++ if (IS_ERR(new_dir))
++ return PTR_ERR(new_dir);
+ dbgfs_dirs[dbgfs_nr_ctxs] = new_dir;
+
+ new_ctx = dbgfs_new_ctx();
+diff --git a/mm/gup.c b/mm/gup.c
+index fd3262ae92fc2..38effce68b48d 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -478,14 +478,43 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+ return -EEXIST;
+ }
+
+-/*
+- * FOLL_FORCE can write to even unwritable pte's, but only
+- * after we've gone through a COW cycle and they are dirty.
+- */
+-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
++/* FOLL_FORCE can write to even unwritable PTEs in COW mappings. */
++static inline bool can_follow_write_pte(pte_t pte, struct page *page,
++ struct vm_area_struct *vma,
++ unsigned int flags)
+ {
+- return pte_write(pte) ||
+- ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
++ /* If the pte is writable, we can write to the page. */
++ if (pte_write(pte))
++ return true;
++
++ /* Maybe FOLL_FORCE is set to override it? */
++ if (!(flags & FOLL_FORCE))
++ return false;
++
++ /* But FOLL_FORCE has no effect on shared mappings */
++ if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
++ return false;
++
++ /* ... or read-only private ones */
++ if (!(vma->vm_flags & VM_MAYWRITE))
++ return false;
++
++ /* ... or already writable ones that just need to take a write fault */
++ if (vma->vm_flags & VM_WRITE)
++ return false;
++
++ /*
++ * See can_change_pte_writable(): we broke COW and could map the page
++ * writable if we have an exclusive anonymous page ...
++ */
++ if (!page || !PageAnon(page) || !PageAnonExclusive(page))
++ return false;
++
++ /* ... and a write-fault isn't required for other reasons. */
++ if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) &&
++ !(vma->vm_flags & VM_SOFTDIRTY) && !pte_soft_dirty(pte))
++ return false;
++ return !userfaultfd_pte_wp(vma, pte);
+ }
+
+ static struct page *follow_page_pte(struct vm_area_struct *vma,
+@@ -528,12 +557,19 @@ retry:
+ }
+ if ((flags & FOLL_NUMA) && pte_protnone(pte))
+ goto no_page;
+- if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+- pte_unmap_unlock(ptep, ptl);
+- return NULL;
+- }
+
+ page = vm_normal_page(vma, address, pte);
++
++ /*
++ * We only care about anon pages in can_follow_write_pte() and don't
++ * have to worry about pte_devmap() because they are never anon.
++ */
++ if ((flags & FOLL_WRITE) &&
++ !can_follow_write_pte(pte, page, vma, flags)) {
++ page = NULL;
++ goto out;
++ }
++
+ if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
+ /*
+ * Only return device mapping pages in the FOLL_GET or FOLL_PIN
+@@ -967,17 +1003,6 @@ static int faultin_page(struct vm_area_struct *vma,
+ return -EBUSY;
+ }
+
+- /*
+- * The VM_FAULT_WRITE bit tells us that do_wp_page has broken COW when
+- * necessary, even if maybe_mkwrite decided not to set pte_write. We
+- * can thus safely do subsequent page lookups as if they were reads.
+- * But only do so when looping for pte_write is futile: in some cases
+- * userspace may also be wanting to write to the gotten user page,
+- * which a read fault here might prevent (a readonly page might get
+- * reCOWed by userspace write).
+- */
+- if ((ret & VM_FAULT_WRITE) && !(vma->vm_flags & VM_WRITE))
+- *flags |= FOLL_COW;
+ return 0;
+ }
+
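
The mm/gup.c rework above retires FOLL_COW entirely: instead of trusting a per-walk flag plus pte_dirty(), can_follow_write_pte() now derives the answer from the VMA and the page itself, permitting a FOLL_FORCE write through an unwritable PTE only for an exclusive anonymous page in a private mapping that could in principle be made writable. A simplified model of that decision cascade (illustrative C; the soft-dirty and uffd-wp tail checks are omitted):

    #include <stdbool.h>
    #include <stdio.h>

    struct vma_model {
        bool shared;        /* VM_MAYSHARE | VM_SHARED */
        bool may_write;     /* VM_MAYWRITE             */
        bool write;         /* VM_WRITE                */
    };

    struct page_model {
        bool present;
        bool anon_exclusive;   /* PageAnon() && PageAnonExclusive() */
    };

    static bool can_follow_write(bool pte_writable, bool foll_force,
                                 const struct page_model *page,
                                 const struct vma_model *vma)
    {
        if (pte_writable)
            return true;           /* writable PTE: always fine */
        if (!foll_force)
            return false;
        if (vma->shared)           /* FOLL_FORCE never overrides shared maps */
            return false;
        if (!vma->may_write)       /* nor truly read-only private ones */
            return false;
        if (vma->write)            /* writable VMA: take a real write fault */
            return false;
        return page->present && page->anon_exclusive;
    }

    int main(void)
    {
        struct vma_model priv_ro  = { .shared = false, .may_write = true, .write = false };
        struct page_model cow     = { .present = true, .anon_exclusive = true };
        struct page_model unshared = { .present = true, .anon_exclusive = false };

        printf("COW'd exclusive page: %d\n",
               can_follow_write(false, true, &cow, &priv_ro));       /* 1 */
        printf("non-exclusive page:   %d\n",
               can_follow_write(false, true, &unshared, &priv_ro));  /* 0 */
        return 0;
    }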
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 15965084816d3..0f465e70349cb 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -978,12 +978,6 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+
+ assert_spin_locked(pmd_lockptr(mm, pmd));
+
+- /*
+- * When we COW a devmap PMD entry, we split it into PTEs, so we should
+- * not be in this function with `flags & FOLL_COW` set.
+- */
+- WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set");
+-
+ /* FOLL_GET and FOLL_PIN are mutually exclusive. */
+ if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+ (FOLL_PIN | FOLL_GET)))
+@@ -1349,14 +1343,43 @@ fallback:
+ return VM_FAULT_FALLBACK;
+ }
+
+-/*
+- * FOLL_FORCE can write to even unwritable pmd's, but only
+- * after we've gone through a COW cycle and they are dirty.
+- */
+-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
++/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
++static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
++ struct vm_area_struct *vma,
++ unsigned int flags)
+ {
+- return pmd_write(pmd) ||
+- ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
++ /* If the pmd is writable, we can write to the page. */
++ if (pmd_write(pmd))
++ return true;
++
++ /* Maybe FOLL_FORCE is set to override it? */
++ if (!(flags & FOLL_FORCE))
++ return false;
++
++ /* But FOLL_FORCE has no effect on shared mappings */
++ if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
++ return false;
++
++ /* ... or read-only private ones */
++ if (!(vma->vm_flags & VM_MAYWRITE))
++ return false;
++
++ /* ... or already writable ones that just need to take a write fault */
++ if (vma->vm_flags & VM_WRITE)
++ return false;
++
++ /*
++ * See can_change_pte_writable(): we broke COW and could map the page
++ * writable if we have an exclusive anonymous page ...
++ */
++ if (!page || !PageAnon(page) || !PageAnonExclusive(page))
++ return false;
++
++ /* ... and a write-fault isn't required for other reasons. */
++ if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) &&
++ !(vma->vm_flags & VM_SOFTDIRTY) && !pmd_soft_dirty(pmd))
++ return false;
++ return !userfaultfd_huge_pmd_wp(vma, pmd);
+ }
+
+ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+@@ -1365,12 +1388,16 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+ unsigned int flags)
+ {
+ struct mm_struct *mm = vma->vm_mm;
+- struct page *page = NULL;
++ struct page *page;
+
+ assert_spin_locked(pmd_lockptr(mm, pmd));
+
+- if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+- goto out;
++ page = pmd_page(*pmd);
++ VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
++
++ if ((flags & FOLL_WRITE) &&
++ !can_follow_write_pmd(*pmd, page, vma, flags))
++ return NULL;
+
+ /* Avoid dumping huge zero page */
+ if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
+@@ -1378,10 +1405,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+
+ /* Full NUMA hinting faults to serialise migration in fault paths */
+ if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
+- goto out;
+-
+- page = pmd_page(*pmd);
+- VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
++ return NULL;
+
+ if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
+ return ERR_PTR(-EMLINK);
+@@ -1398,7 +1422,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+ page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+ VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+-out:
+ return page;
+ }
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 474bfbe9929e1..299dcfaa35b25 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5232,6 +5232,21 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
+ VM_BUG_ON(unshare && (flags & FOLL_WRITE));
+ VM_BUG_ON(!unshare && !(flags & FOLL_WRITE));
+
++ /*
++ * hugetlb does not support FOLL_FORCE-style write faults that keep the
++ * PTE mapped R/O such as maybe_mkwrite() would do.
++ */
++ if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE)))
++ return VM_FAULT_SIGSEGV;
++
++ /* Let's take out MAP_SHARED mappings first. */
++ if (vma->vm_flags & VM_MAYSHARE) {
++ if (unlikely(unshare))
++ return 0;
++ set_huge_ptep_writable(vma, haddr, ptep);
++ return 0;
++ }
++
+ pte = huge_ptep_get(ptep);
+ old_page = pte_page(pte);
+
+@@ -5766,12 +5781,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ * If we are going to COW/unshare the mapping later, we examine the
+ * pending reservations for this page now. This will ensure that any
+ * allocations necessary to record that reservation occur outside the
+- * spinlock. For private mappings, we also lookup the pagecache
+- * page now as it is used to determine if a reservation has been
+- * consumed.
++ * spinlock. Also lookup the pagecache page now as it is used to
++ * determine if a reservation has been consumed.
+ */
+ if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
+- !huge_pte_write(entry)) {
++ !(vma->vm_flags & VM_MAYSHARE) && !huge_pte_write(entry)) {
+ if (vma_needs_reservation(h, vma, haddr) < 0) {
+ ret = VM_FAULT_OOM;
+ goto out_mutex;
+@@ -5779,9 +5793,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ /* Just decrements count, does not deallocate */
+ vma_end_reservation(h, vma, haddr);
+
+- if (!(vma->vm_flags & VM_MAYSHARE))
+- pagecache_page = hugetlbfs_pagecache_page(h,
+- vma, haddr);
++ pagecache_page = hugetlbfs_pagecache_page(h, vma, haddr);
+ }
+
+ ptl = huge_pte_lock(h, mm, ptep);
+@@ -6014,7 +6026,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
+ if (!huge_pte_none_mostly(huge_ptep_get(dst_pte)))
+ goto out_release_unlock;
+
+- if (vm_shared) {
++ if (page_in_pagecache) {
+ page_dup_file_rmap(page, true);
+ } else {
+ ClearHPageRestoreReserve(page);
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 7c59ec73acc34..3b284b091bb7e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1693,8 +1693,12 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
+ pgprot_val(vm_pgprot_modify(vm_page_prot, vm_flags)))
+ return 0;
+
+- /* Do we need to track softdirty? */
+- if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) && !(vm_flags & VM_SOFTDIRTY))
++ /*
++ * Do we need to track softdirty? hugetlb does not support softdirty
++ * tracking yet.
++ */
++ if (IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) && !(vm_flags & VM_SOFTDIRTY) &&
++ !is_vm_hugetlb_page(vma))
+ return 1;
+
+ /* Specialty mapping? */
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index ba5592655ee3b..0d38d5b637621 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -158,10 +158,11 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
+ pages++;
+ } else if (is_swap_pte(oldpte)) {
+ swp_entry_t entry = pte_to_swp_entry(oldpte);
+- struct page *page = pfn_swap_entry_to_page(entry);
+ pte_t newpte;
+
+ if (is_writable_migration_entry(entry)) {
++ struct page *page = pfn_swap_entry_to_page(entry);
++
+ /*
+ * A protection check is difficult so
+ * just be safe and disable write
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 55c2776ae6999..3c34db15cf706 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -2867,6 +2867,7 @@ static void wb_inode_writeback_start(struct bdi_writeback *wb)
+
+ static void wb_inode_writeback_end(struct bdi_writeback *wb)
+ {
++ unsigned long flags;
+ atomic_dec(&wb->writeback_inodes);
+ /*
+ * Make sure estimate of writeback throughput gets updated after
+@@ -2875,7 +2876,10 @@ static void wb_inode_writeback_end(struct bdi_writeback *wb)
+ * that if multiple inodes end writeback at a similar time, they get
+ * batched into one bandwidth update.
+ */
+- queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
++ spin_lock_irqsave(&wb->work_lock, flags);
++ if (test_bit(WB_registered, &wb->state))
++ queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
++ spin_unlock_irqrestore(&wb->work_lock, flags);
+ }
+
+ bool __folio_end_writeback(struct folio *folio)
+diff --git a/mm/shmem.c b/mm/shmem.c
+index b7f2d4a568673..f152375e770bf 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1771,6 +1771,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+
+ if (shmem_should_replace_folio(folio, gfp)) {
+ error = shmem_replace_page(&page, gfp, info, index);
++ folio = page_folio(page);
+ if (error)
+ goto failed;
+ }
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 07d3befc80e41..7327b2573f7c2 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -703,14 +703,29 @@ ssize_t mcopy_continue(struct mm_struct *dst_mm, unsigned long start,
+ mmap_changing, 0);
+ }
+
++void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
++ unsigned long start, unsigned long len, bool enable_wp)
++{
++ struct mmu_gather tlb;
++ pgprot_t newprot;
++
++ if (enable_wp)
++ newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
++ else
++ newprot = vm_get_page_prot(dst_vma->vm_flags);
++
++ tlb_gather_mmu(&tlb, dst_mm);
++ change_protection(&tlb, dst_vma, start, start + len, newprot,
++ enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
++ tlb_finish_mmu(&tlb);
++}
++
+ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+ unsigned long len, bool enable_wp,
+ atomic_t *mmap_changing)
+ {
+ struct vm_area_struct *dst_vma;
+ unsigned long page_mask;
+- struct mmu_gather tlb;
+- pgprot_t newprot;
+ int err;
+
+ /*
+@@ -750,15 +765,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+ goto out_unlock;
+ }
+
+- if (enable_wp)
+- newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
+- else
+- newprot = vm_get_page_prot(dst_vma->vm_flags);
+-
+- tlb_gather_mmu(&tlb, dst_mm);
+- change_protection(&tlb, dst_vma, start, start + len, newprot,
+- enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+- tlb_finish_mmu(&tlb);
++ uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+
+ err = 0;
+ out_unlock:
+diff --git a/net/bridge/netfilter/ebtable_broute.c b/net/bridge/netfilter/ebtable_broute.c
+index 1a11064f99907..8f19253024b0a 100644
+--- a/net/bridge/netfilter/ebtable_broute.c
++++ b/net/bridge/netfilter/ebtable_broute.c
+@@ -36,18 +36,10 @@ static struct ebt_replace_kernel initial_table = {
+ .entries = (char *)&initial_chain,
+ };
+
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+- if (valid_hooks & ~(1 << NF_BR_BROUTING))
+- return -EINVAL;
+- return 0;
+-}
+-
+ static const struct ebt_table broute_table = {
+ .name = "broute",
+ .table = &initial_table,
+ .valid_hooks = 1 << NF_BR_BROUTING,
+- .check = check,
+ .me = THIS_MODULE,
+ };
+
+diff --git a/net/bridge/netfilter/ebtable_filter.c b/net/bridge/netfilter/ebtable_filter.c
+index cb949436bc0e3..278f324e67524 100644
+--- a/net/bridge/netfilter/ebtable_filter.c
++++ b/net/bridge/netfilter/ebtable_filter.c
+@@ -43,18 +43,10 @@ static struct ebt_replace_kernel initial_table = {
+ .entries = (char *)initial_chains,
+ };
+
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+- if (valid_hooks & ~FILTER_VALID_HOOKS)
+- return -EINVAL;
+- return 0;
+-}
+-
+ static const struct ebt_table frame_filter = {
+ .name = "filter",
+ .table = &initial_table,
+ .valid_hooks = FILTER_VALID_HOOKS,
+- .check = check,
+ .me = THIS_MODULE,
+ };
+
+diff --git a/net/bridge/netfilter/ebtable_nat.c b/net/bridge/netfilter/ebtable_nat.c
+index 5ee0531ae5061..9066f7f376d57 100644
+--- a/net/bridge/netfilter/ebtable_nat.c
++++ b/net/bridge/netfilter/ebtable_nat.c
+@@ -43,18 +43,10 @@ static struct ebt_replace_kernel initial_table = {
+ .entries = (char *)initial_chains,
+ };
+
+-static int check(const struct ebt_table_info *info, unsigned int valid_hooks)
+-{
+- if (valid_hooks & ~NAT_VALID_HOOKS)
+- return -EINVAL;
+- return 0;
+-}
+-
+ static const struct ebt_table frame_nat = {
+ .name = "nat",
+ .table = &initial_table,
+ .valid_hooks = NAT_VALID_HOOKS,
+- .check = check,
+ .me = THIS_MODULE,
+ };
+
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index f2dbefb61ce83..9a0ae59cdc500 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -1040,8 +1040,7 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ goto free_iterate;
+ }
+
+- /* the table doesn't like it */
+- if (t->check && (ret = t->check(newinfo, repl->valid_hooks)))
++ if (repl->valid_hooks != t->valid_hooks)
+ goto free_unlock;
+
+ if (repl->num_counters && repl->num_counters != t->private->nentries) {
+@@ -1231,11 +1230,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ if (ret != 0)
+ goto free_chainstack;
+
+- if (table->check && table->check(newinfo, table->valid_hooks)) {
+- ret = -EINVAL;
+- goto free_chainstack;
+- }
+-
+ table->private = newinfo;
+ rwlock_init(&table->lock);
+ mutex_lock(&ebt_mutex);
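
Across the four ebtables hunks above, the per-table ->check() callbacks, each of which only verified the requested hooks against the table's valid_hooks mask, are deleted, and do_replace_finish() compares the masks directly; note the core test is strict equality, slightly tighter than the old subset check. A minimal model of before and after (illustrative C; the hook mask value is made up):

    #include <stdio.h>

    #define NAT_VALID_HOOKS  ((1u << 0) | (1u << 3) | (1u << 5))  /* illustrative */

    static int old_check(unsigned int valid_hooks)
    {
        if (valid_hooks & ~NAT_VALID_HOOKS)    /* subset test via callback */
            return -1;
        return 0;
    }

    static int new_check(unsigned int repl_hooks, unsigned int table_hooks)
    {
        return repl_hooks != table_hooks ? -1 : 0;   /* strict match in core */
    }

    int main(void)
    {
        printf("old, stray hook:  %d\n", old_check(NAT_VALID_HOOKS | (1u << 1)));
        printf("new, exact match: %d\n", new_check(NAT_VALID_HOOKS, NAT_VALID_HOOKS));
        printf("new, subset only: %d\n", new_check(1u << 0, NAT_VALID_HOOKS));
        return 0;
    }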
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index 1b7f385643b4c..94374d529ea42 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -310,11 +310,12 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
+ static int bpf_sk_storage_charge(struct bpf_local_storage_map *smap,
+ void *owner, u32 size)
+ {
++ int optmem_max = READ_ONCE(sysctl_optmem_max);
+ struct sock *sk = (struct sock *)owner;
+
+ /* same check as in sock_kmalloc() */
+- if (size <= sysctl_optmem_max &&
+- atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
++ if (size <= optmem_max &&
++ atomic_read(&sk->sk_omem_alloc) + size < optmem_max) {
+ atomic_add(size, &sk->sk_omem_alloc);
+ return 0;
+ }
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 30a1603a7225c..a77a979a4bf75 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4623,7 +4623,7 @@ static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
+ struct softnet_data *sd;
+ unsigned int old_flow, new_flow;
+
+- if (qlen < (netdev_max_backlog >> 1))
++ if (qlen < (READ_ONCE(netdev_max_backlog) >> 1))
+ return false;
+
+ sd = this_cpu_ptr(&softnet_data);
+@@ -4671,7 +4671,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
+ if (!netif_running(skb->dev))
+ goto drop;
+ qlen = skb_queue_len(&sd->input_pkt_queue);
+- if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
++ if (qlen <= READ_ONCE(netdev_max_backlog) && !skb_flow_limit(skb, qlen)) {
+ if (qlen) {
+ enqueue:
+ __skb_queue_tail(&sd->input_pkt_queue, skb);
+@@ -4927,7 +4927,7 @@ static int netif_rx_internal(struct sk_buff *skb)
+ {
+ int ret;
+
+- net_timestamp_check(netdev_tstamp_prequeue, skb);
++ net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+
+ trace_netif_rx(skb);
+
+@@ -5280,7 +5280,7 @@ static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
+ int ret = NET_RX_DROP;
+ __be16 type;
+
+- net_timestamp_check(!netdev_tstamp_prequeue, skb);
++ net_timestamp_check(!READ_ONCE(netdev_tstamp_prequeue), skb);
+
+ trace_netif_receive_skb(skb);
+
+@@ -5663,7 +5663,7 @@ static int netif_receive_skb_internal(struct sk_buff *skb)
+ {
+ int ret;
+
+- net_timestamp_check(netdev_tstamp_prequeue, skb);
++ net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+
+ if (skb_defer_rx_timestamp(skb))
+ return NET_RX_SUCCESS;
+@@ -5693,7 +5693,7 @@ void netif_receive_skb_list_internal(struct list_head *head)
+
+ INIT_LIST_HEAD(&sublist);
+ list_for_each_entry_safe(skb, next, head, list) {
+- net_timestamp_check(netdev_tstamp_prequeue, skb);
++ net_timestamp_check(READ_ONCE(netdev_tstamp_prequeue), skb);
+ skb_list_del_init(skb);
+ if (!skb_defer_rx_timestamp(skb))
+ list_add_tail(&skb->list, &sublist);
+@@ -5917,7 +5917,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
+ net_rps_action_and_irq_enable(sd);
+ }
+
+- napi->weight = dev_rx_weight;
++ napi->weight = READ_ONCE(dev_rx_weight);
+ while (again) {
+ struct sk_buff *skb;
+
+@@ -6646,8 +6646,8 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
+ {
+ struct softnet_data *sd = this_cpu_ptr(&softnet_data);
+ unsigned long time_limit = jiffies +
+- usecs_to_jiffies(netdev_budget_usecs);
+- int budget = netdev_budget;
++ usecs_to_jiffies(READ_ONCE(netdev_budget_usecs));
++ int budget = READ_ONCE(netdev_budget);
+ LIST_HEAD(list);
+ LIST_HEAD(repoll);
+
+@@ -10265,7 +10265,7 @@ static struct net_device *netdev_wait_allrefs_any(struct list_head *list)
+ return dev;
+
+ if (time_after(jiffies, warning_time +
+- netdev_unregister_timeout_secs * HZ)) {
++ READ_ONCE(netdev_unregister_timeout_secs) * HZ)) {
+ list_for_each_entry(dev, list, todo_list) {
+ pr_emerg("unregister_netdevice: waiting for %s to become free. Usage count = %d\n",
+ dev->name, netdev_refcnt_read(dev));
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 74f05ed6aff29..063176428086b 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1214,10 +1214,11 @@ void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp)
+ static bool __sk_filter_charge(struct sock *sk, struct sk_filter *fp)
+ {
+ u32 filter_size = bpf_prog_size(fp->prog->len);
++ int optmem_max = READ_ONCE(sysctl_optmem_max);
+
+ /* same check as in sock_kmalloc() */
+- if (filter_size <= sysctl_optmem_max &&
+- atomic_read(&sk->sk_omem_alloc) + filter_size < sysctl_optmem_max) {
++ if (filter_size <= optmem_max &&
++ atomic_read(&sk->sk_omem_alloc) + filter_size < optmem_max) {
+ atomic_add(filter_size, &sk->sk_omem_alloc);
+ return true;
+ }
+@@ -1548,7 +1549,7 @@ int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk)
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+- if (bpf_prog_size(prog->len) > sysctl_optmem_max)
++ if (bpf_prog_size(prog->len) > READ_ONCE(sysctl_optmem_max))
+ err = -ENOMEM;
+ else
+ err = reuseport_attach_prog(sk, prog);
+@@ -1615,7 +1616,7 @@ int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk)
+ }
+ } else {
+ /* BPF_PROG_TYPE_SOCKET_FILTER */
+- if (bpf_prog_size(prog->len) > sysctl_optmem_max) {
++ if (bpf_prog_size(prog->len) > READ_ONCE(sysctl_optmem_max)) {
+ err = -ENOMEM;
+ goto err_prog_put;
+ }
+@@ -5036,14 +5037,14 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ /* Only some socketops are supported */
+ switch (optname) {
+ case SO_RCVBUF:
+- val = min_t(u32, val, sysctl_rmem_max);
++ val = min_t(u32, val, READ_ONCE(sysctl_rmem_max));
+ val = min_t(int, val, INT_MAX / 2);
+ sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ WRITE_ONCE(sk->sk_rcvbuf,
+ max_t(int, val * 2, SOCK_MIN_RCVBUF));
+ break;
+ case SO_SNDBUF:
+- val = min_t(u32, val, sysctl_wmem_max);
++ val = min_t(u32, val, READ_ONCE(sysctl_wmem_max));
+ val = min_t(int, val, INT_MAX / 2);
+ sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ WRITE_ONCE(sk->sk_sndbuf,
+diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
+index 541c7a72a28a4..21619c70a82b7 100644
+--- a/net/core/gro_cells.c
++++ b/net/core/gro_cells.c
+@@ -26,7 +26,7 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+
+ cell = this_cpu_ptr(gcells->cells);
+
+- if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
++ if (skb_queue_len(&cell->napi_skbs) > READ_ONCE(netdev_max_backlog)) {
+ drop:
+ dev_core_stats_rx_dropped_inc(dev);
+ kfree_skb(skb);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 5b3559cb1d827..bebf58464d667 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4772,7 +4772,7 @@ static bool skb_may_tx_timestamp(struct sock *sk, bool tsonly)
+ {
+ bool ret;
+
+- if (likely(sysctl_tstamp_allow_data || tsonly))
++ if (likely(READ_ONCE(sysctl_tstamp_allow_data) || tsonly))
+ return true;
+
+ read_lock_bh(&sk->sk_callback_lock);
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 2ff40dd0a7a65..16ab5ef749c60 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1100,7 +1100,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
+ * play 'guess the biggest size' games. RCVBUF/SNDBUF
+ * are treated in BSD as hints
+ */
+- val = min_t(u32, val, sysctl_wmem_max);
++ val = min_t(u32, val, READ_ONCE(sysctl_wmem_max));
+ set_sndbuf:
+ /* Ensure val * 2 fits into an int, to prevent max_t()
+ * from treating it as a negative value.
+@@ -1132,7 +1132,7 @@ set_sndbuf:
+ * play 'guess the biggest size' games. RCVBUF/SNDBUF
+ * are treated in BSD as hints
+ */
+- __sock_set_rcvbuf(sk, min_t(u32, val, sysctl_rmem_max));
++ __sock_set_rcvbuf(sk, min_t(u32, val, READ_ONCE(sysctl_rmem_max)));
+ break;
+
+ case SO_RCVBUFFORCE:
+@@ -2535,7 +2535,7 @@ struct sk_buff *sock_omalloc(struct sock *sk, unsigned long size,
+
+ /* small safe race: SKB_TRUESIZE may differ from final skb->truesize */
+ if (atomic_read(&sk->sk_omem_alloc) + SKB_TRUESIZE(size) >
+- sysctl_optmem_max)
++ READ_ONCE(sysctl_optmem_max))
+ return NULL;
+
+ skb = alloc_skb(size, priority);
+@@ -2553,8 +2553,10 @@ struct sk_buff *sock_omalloc(struct sock *sk, unsigned long size,
+ */
+ void *sock_kmalloc(struct sock *sk, int size, gfp_t priority)
+ {
+- if ((unsigned int)size <= sysctl_optmem_max &&
+- atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
++ int optmem_max = READ_ONCE(sysctl_optmem_max);
++
++ if ((unsigned int)size <= optmem_max &&
++ atomic_read(&sk->sk_omem_alloc) + size < optmem_max) {
+ void *mem;
+ /* First do the add, to avoid the race if kmalloc
+ * might sleep.
+@@ -3307,8 +3309,8 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ timer_setup(&sk->sk_timer, NULL, 0);
+
+ sk->sk_allocation = GFP_KERNEL;
+- sk->sk_rcvbuf = sysctl_rmem_default;
+- sk->sk_sndbuf = sysctl_wmem_default;
++ sk->sk_rcvbuf = READ_ONCE(sysctl_rmem_default);
++ sk->sk_sndbuf = READ_ONCE(sysctl_wmem_default);
+ sk->sk_state = TCP_CLOSE;
+ sk_set_socket(sk, sock);
+
+@@ -3363,7 +3365,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+ sk->sk_napi_id = 0;
+- sk->sk_ll_usec = sysctl_net_busy_read;
++ sk->sk_ll_usec = READ_ONCE(sysctl_net_busy_read);
+ #endif
+
+ sk->sk_max_pacing_rate = ~0UL;
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 71a13596ea2bf..725891527814c 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -234,14 +234,17 @@ static int set_default_qdisc(struct ctl_table *table, int write,
+ static int proc_do_dev_weight(struct ctl_table *table, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- int ret;
++ static DEFINE_MUTEX(dev_weight_mutex);
++ int ret, weight;
+
++ mutex_lock(&dev_weight_mutex);
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+- if (ret != 0)
+- return ret;
+-
+- dev_rx_weight = weight_p * dev_weight_rx_bias;
+- dev_tx_weight = weight_p * dev_weight_tx_bias;
++ if (!ret && write) {
++ weight = READ_ONCE(weight_p);
++ WRITE_ONCE(dev_rx_weight, weight * dev_weight_rx_bias);
++ WRITE_ONCE(dev_tx_weight, weight * dev_weight_tx_bias);
++ }
++ mutex_unlock(&dev_weight_mutex);
+
+ return ret;
+ }
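
proc_do_dev_weight() now serializes writers with its own static mutex and publishes the derived rx/tx weights with WRITE_ONCE(), pairing with the READ_ONCE() added on the reader side in process_backlog() above. A userspace model of the handler (illustrative C; atomics stand in for READ_ONCE/WRITE_ONCE):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static pthread_mutex_t dev_weight_mutex = PTHREAD_MUTEX_INITIALIZER;
    static _Atomic int weight_p = 64;
    static _Atomic int dev_rx_weight = 64, dev_tx_weight = 64;
    static const int rx_bias = 1, tx_bias = 1;

    static void set_dev_weight(int new_weight)      /* the "write handler" */
    {
        pthread_mutex_lock(&dev_weight_mutex);
        atomic_store(&weight_p, new_weight);
        int w = atomic_load(&weight_p);             /* READ_ONCE analogue  */
        atomic_store(&dev_rx_weight, w * rx_bias);  /* WRITE_ONCE analogue */
        atomic_store(&dev_tx_weight, w * tx_bias);
        pthread_mutex_unlock(&dev_weight_mutex);
    }

    int main(void)
    {
        set_dev_weight(128);
        printf("rx=%d tx=%d\n", atomic_load(&dev_rx_weight),
               atomic_load(&dev_tx_weight));
        return 0;
    }

The mutex keeps two concurrent writers from interleaving their derived-value updates, while the single-store publication keeps lockless readers from ever seeing a torn pair.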
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index b2366ad540e62..787a44e3222db 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -2682,23 +2682,27 @@ static __net_init int devinet_init_net(struct net *net)
+ #endif
+
+ if (!net_eq(net, &init_net)) {
+- if (IS_ENABLED(CONFIG_SYSCTL) &&
+- sysctl_devconf_inherit_init_net == 3) {
++ switch (net_inherit_devconf()) {
++ case 3:
+ /* copy from the current netns */
+ memcpy(all, current->nsproxy->net_ns->ipv4.devconf_all,
+ sizeof(ipv4_devconf));
+ memcpy(dflt,
+ current->nsproxy->net_ns->ipv4.devconf_dflt,
+ sizeof(ipv4_devconf_dflt));
+- } else if (!IS_ENABLED(CONFIG_SYSCTL) ||
+- sysctl_devconf_inherit_init_net != 2) {
+- /* inherit == 0 or 1: copy from init_net */
++ break;
++ case 0:
++ case 1:
++ /* copy from init_net */
+ memcpy(all, init_net.ipv4.devconf_all,
+ sizeof(ipv4_devconf));
+ memcpy(dflt, init_net.ipv4.devconf_dflt,
+ sizeof(ipv4_devconf_dflt));
++ break;
++ case 2:
++ /* use compiled values */
++ break;
+ }
+- /* else inherit == 2: use compiled values */
+ }
+
+ #ifdef CONFIG_SYSCTL
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 00b4bf26fd932..da8b3cc67234d 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1712,7 +1712,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+
+ sk->sk_protocol = ip_hdr(skb)->protocol;
+ sk->sk_bound_dev_if = arg->bound_dev_if;
+- sk->sk_sndbuf = sysctl_wmem_default;
++ sk->sk_sndbuf = READ_ONCE(sysctl_wmem_default);
+ ipc.sockc.mark = fl4.flowi4_mark;
+ err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base,
+ len, 0, &ipc, &rt, MSG_DONTWAIT);
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index a8a323ecbb54b..e49a61a053a68 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -772,7 +772,7 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen)
+
+ if (optlen < GROUP_FILTER_SIZE(0))
+ return -EINVAL;
+- if (optlen > sysctl_optmem_max)
++ if (optlen > READ_ONCE(sysctl_optmem_max))
+ return -ENOBUFS;
+
+ gsf = memdup_sockptr(optval, optlen);
+@@ -808,7 +808,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+
+ if (optlen < size0)
+ return -EINVAL;
+- if (optlen > sysctl_optmem_max - 4)
++ if (optlen > READ_ONCE(sysctl_optmem_max) - 4)
+ return -ENOBUFS;
+
+ p = kmalloc(optlen + 4, GFP_KERNEL);
+@@ -1233,7 +1233,7 @@ static int do_ip_setsockopt(struct sock *sk, int level, int optname,
+
+ if (optlen < IP_MSFILTER_SIZE(0))
+ goto e_inval;
+- if (optlen > sysctl_optmem_max) {
++ if (optlen > READ_ONCE(sysctl_optmem_max)) {
+ err = -ENOBUFS;
+ break;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 3ae2ea0488838..3d446773ff2a5 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1000,7 +1000,7 @@ new_segment:
+
+ i = skb_shinfo(skb)->nr_frags;
+ can_coalesce = skb_can_coalesce(skb, i, page, offset);
+- if (!can_coalesce && i >= sysctl_max_skb_frags) {
++ if (!can_coalesce && i >= READ_ONCE(sysctl_max_skb_frags)) {
+ tcp_mark_push(tp, skb);
+ goto new_segment;
+ }
+@@ -1348,7 +1348,7 @@ new_segment:
+
+ if (!skb_can_coalesce(skb, i, pfrag->page,
+ pfrag->offset)) {
+- if (i >= sysctl_max_skb_frags) {
++ if (i >= READ_ONCE(sysctl_max_skb_frags)) {
+ tcp_mark_push(tp, skb);
+ goto new_segment;
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index aed0c5f828bef..84314de754f87 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -239,7 +239,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space, __u32 mss,
+ if (wscale_ok) {
+ /* Set window scaling on max possible window */
+ space = max_t(u32, space, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
+- space = max_t(u32, space, sysctl_rmem_max);
++ space = max_t(u32, space, READ_ONCE(sysctl_rmem_max));
+ space = min_t(u32, space, *window_clamp);
+ *rcv_wscale = clamp_t(int, ilog2(space) - 15,
+ 0, TCP_MAX_WSCALE);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 49cc6587dd771..b738eb7e1cae8 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -7158,9 +7158,8 @@ static int __net_init addrconf_init_net(struct net *net)
+ if (!dflt)
+ goto err_alloc_dflt;
+
+- if (IS_ENABLED(CONFIG_SYSCTL) &&
+- !net_eq(net, &init_net)) {
+- switch (sysctl_devconf_inherit_init_net) {
++ if (!net_eq(net, &init_net)) {
++ switch (net_inherit_devconf()) {
+ case 1: /* copy from init_net */
+ memcpy(all, init_net.ipv6.devconf_all,
+ sizeof(ipv6_devconf));
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 222f6bf220ba0..e0dcc7a193df2 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -210,7 +210,7 @@ static int ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+
+ if (optlen < GROUP_FILTER_SIZE(0))
+ return -EINVAL;
+- if (optlen > sysctl_optmem_max)
++ if (optlen > READ_ONCE(sysctl_optmem_max))
+ return -ENOBUFS;
+
+ gsf = memdup_sockptr(optval, optlen);
+@@ -244,7 +244,7 @@ static int compat_ipv6_set_mcast_msfilter(struct sock *sk, sockptr_t optval,
+
+ if (optlen < size0)
+ return -EINVAL;
+- if (optlen > sysctl_optmem_max - 4)
++ if (optlen > READ_ONCE(sysctl_optmem_max) - 4)
+ return -ENOBUFS;
+
+ p = kmalloc(optlen + 4, GFP_KERNEL);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index fb16d7c4e1b8d..20e73643b9c89 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1697,9 +1697,12 @@ static int pfkey_register(struct sock *sk, struct sk_buff *skb, const struct sad
+ pfk->registered |= (1<<hdr->sadb_msg_satype);
+ }
+
++ mutex_lock(&pfkey_mutex);
+ xfrm_probe_algs();
+
+ supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);
++ mutex_unlock(&pfkey_mutex);
++
+ if (!supp_skb) {
+ if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
+ pfk->registered &= ~(1<<hdr->sadb_msg_satype);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 3d90fa9653ef3..513f571a082ba 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1299,7 +1299,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+
+ i = skb_shinfo(skb)->nr_frags;
+ can_coalesce = skb_can_coalesce(skb, i, dfrag->page, offset);
+- if (!can_coalesce && i >= sysctl_max_skb_frags) {
++ if (!can_coalesce && i >= READ_ONCE(sysctl_max_skb_frags)) {
+ tcp_mark_push(tcp_sk(ssk), skb);
+ goto alloc_skb;
+ }
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index 9d43277b8b4fe..a56fd0b5a430a 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1280,12 +1280,12 @@ static void set_sock_size(struct sock *sk, int mode, int val)
+ lock_sock(sk);
+ if (mode) {
+ val = clamp_t(int, val, (SOCK_MIN_SNDBUF + 1) / 2,
+- sysctl_wmem_max);
++ READ_ONCE(sysctl_wmem_max));
+ sk->sk_sndbuf = val * 2;
+ sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ } else {
+ val = clamp_t(int, val, (SOCK_MIN_RCVBUF + 1) / 2,
+- sysctl_rmem_max);
++ READ_ONCE(sysctl_rmem_max));
+ sk->sk_rcvbuf = val * 2;
+ sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ }
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index f2def06d10709..483b18d35cade 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -442,12 +442,17 @@ static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
+ }
+ }
+
++void nf_flow_table_gc_run(struct nf_flowtable *flow_table)
++{
++ nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
++}
++
+ static void nf_flow_offload_work_gc(struct work_struct *work)
+ {
+ struct nf_flowtable *flow_table;
+
+ flow_table = container_of(work, struct nf_flowtable, gc_work.work);
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
++ nf_flow_table_gc_run(flow_table);
+ queue_delayed_work(system_power_efficient_wq, &flow_table->gc_work, HZ);
+ }
+
+@@ -605,11 +610,11 @@ void nf_flow_table_free(struct nf_flowtable *flow_table)
+ mutex_unlock(&flowtable_lock);
+
+ cancel_delayed_work_sync(&flow_table->gc_work);
+- nf_flow_table_iterate(flow_table, nf_flow_table_do_cleanup, NULL);
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
+ nf_flow_table_offload_flush(flow_table);
+- if (nf_flowtable_hw_offload(flow_table))
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
++ /* ... no more pending work after this stage ... */
++ nf_flow_table_iterate(flow_table, nf_flow_table_do_cleanup, NULL);
++ nf_flow_table_gc_run(flow_table);
++ nf_flow_table_offload_flush_cleanup(flow_table);
+ rhashtable_destroy(&flow_table->rhashtable);
+ }
+ EXPORT_SYMBOL_GPL(nf_flow_table_free);
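+
The reordered nf_flow_table_free() above encodes a general teardown rule: stop anything that can requeue work, flush work already in flight, run one final cleanup pass over the now-quiescent structure, and only then destroy it. Roughly, with hypothetical names:

	static void my_table_free(struct my_table *t)
	{
		cancel_delayed_work_sync(&t->gc_work);	/* 1: no new GC runs */
		flush_workqueue(t->offload_wq);		/* 2: drain in-flight work */
		my_table_gc_run(t);			/* 3: final pass, table quiescent */
		rhashtable_destroy(&t->hash);		/* 4: free backing storage */
	}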
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 11b6e19420920..4d1169b634c5f 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -1063,6 +1063,14 @@ void nf_flow_offload_stats(struct nf_flowtable *flowtable,
+ flow_offload_queue_work(offload);
+ }
+
++void nf_flow_table_offload_flush_cleanup(struct nf_flowtable *flowtable)
++{
++ if (nf_flowtable_hw_offload(flowtable)) {
++ flush_workqueue(nf_flow_offload_del_wq);
++ nf_flow_table_gc_run(flowtable);
++ }
++}
++
+ void nf_flow_table_offload_flush(struct nf_flowtable *flowtable)
+ {
+ if (nf_flowtable_hw_offload(flowtable)) {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4bd6e9427c918..bc690238a3c56 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -32,7 +32,6 @@ static LIST_HEAD(nf_tables_objects);
+ static LIST_HEAD(nf_tables_flowtables);
+ static LIST_HEAD(nf_tables_destroy_list);
+ static DEFINE_SPINLOCK(nf_tables_destroy_list_lock);
+-static u64 table_handle;
+
+ enum {
+ NFT_VALIDATE_SKIP = 0,
+@@ -1235,7 +1234,7 @@ static int nf_tables_newtable(struct sk_buff *skb, const struct nfnl_info *info,
+ INIT_LIST_HEAD(&table->flowtables);
+ table->family = family;
+ table->flags = flags;
+- table->handle = ++table_handle;
++ table->handle = ++nft_net->table_handle;
+ if (table->flags & NFT_TABLE_F_OWNER)
+ table->nlpid = NETLINK_CB(skb).portid;
+
+@@ -2196,9 +2195,9 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ struct netlink_ext_ack *extack)
+ {
+ const struct nlattr * const *nla = ctx->nla;
++ struct nft_stats __percpu *stats = NULL;
+ struct nft_table *table = ctx->table;
+ struct nft_base_chain *basechain;
+- struct nft_stats __percpu *stats;
+ struct net *net = ctx->net;
+ char name[NFT_NAME_MAXLEN];
+ struct nft_rule_blob *blob;
+@@ -2236,7 +2235,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ return PTR_ERR(stats);
+ }
+ rcu_assign_pointer(basechain->stats, stats);
+- static_branch_inc(&nft_counters_enabled);
+ }
+
+ err = nft_basechain_init(basechain, family, &hook, flags);
+@@ -2319,6 +2317,9 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ goto err_unregister_hook;
+ }
+
++ if (stats)
++ static_branch_inc(&nft_counters_enabled);
++
+ table->use++;
+
+ return 0;
+@@ -2574,6 +2575,9 @@ static int nf_tables_newchain(struct sk_buff *skb, const struct nfnl_info *info,
+ nft_ctx_init(&ctx, net, skb, info->nlh, family, table, chain, nla);
+
+ if (chain != NULL) {
++ if (chain->flags & NFT_CHAIN_BINDING)
++ return -EINVAL;
++
+ if (info->nlh->nlmsg_flags & NLM_F_EXCL) {
+ NL_SET_BAD_ATTR(extack, attr);
+ return -EEXIST;
+@@ -9653,6 +9657,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ return PTR_ERR(chain);
+ if (nft_is_base_chain(chain))
+ return -EOPNOTSUPP;
++ if (nft_chain_is_bound(chain))
++ return -EINVAL;
+ if (desc->flags & NFT_DATA_DESC_SETELEM &&
+ chain->flags & NFT_CHAIN_BINDING)
+ return -EINVAL;
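+
Two themes recur in the nf_tables_api.c hunk: per-netns state (the table handle counter moves from a global into nft_net), and committing irreversible side effects only after the last failable step, which is why stats now starts as NULL and static_branch_inc() moves after hook registration succeeds. A sketch of the latter, names hypothetical:

	static int add_chain(struct chain *c)
	{
		struct stats *stats = NULL;	/* optional; may stay NULL */
		int err;

		if (want_stats(c)) {
			stats = alloc_stats();
			if (!stats)
				return -ENOMEM;
			c->stats = stats;
		}

		err = register_hook(c);		/* last step that can fail */
		if (err) {
			free_stats(stats);	/* free_stats(NULL) is a no-op */
			return err;
		}

		if (stats)
			static_branch_inc(&stats_key);	/* side effect last */
		return 0;
	}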
+diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
+index 5eed18f90b020..175d666c8d87e 100644
+--- a/net/netfilter/nft_osf.c
++++ b/net/netfilter/nft_osf.c
+@@ -115,9 +115,21 @@ static int nft_osf_validate(const struct nft_ctx *ctx,
+ const struct nft_expr *expr,
+ const struct nft_data **data)
+ {
+- return nft_chain_validate_hooks(ctx->chain, (1 << NF_INET_LOCAL_IN) |
+- (1 << NF_INET_PRE_ROUTING) |
+- (1 << NF_INET_FORWARD));
++ unsigned int hooks;
++
++ switch (ctx->family) {
++ case NFPROTO_IPV4:
++ case NFPROTO_IPV6:
++ case NFPROTO_INET:
++ hooks = (1 << NF_INET_LOCAL_IN) |
++ (1 << NF_INET_PRE_ROUTING) |
++ (1 << NF_INET_FORWARD);
++ break;
++ default:
++ return -EOPNOTSUPP;
++ }
++
++ return nft_chain_validate_hooks(ctx->chain, hooks);
+ }
+
+ static bool nft_osf_reduce(struct nft_regs_track *track,
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 2e7ac007cb30f..eb0e40c297121 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -740,17 +740,23 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+ struct nft_payload_set *priv = nft_expr_priv(expr);
++ u32 csum_offset, csum_type = NFT_PAYLOAD_CSUM_NONE;
++ int err;
+
+ priv->base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE]));
+ priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+ priv->len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
+
+ if (tb[NFTA_PAYLOAD_CSUM_TYPE])
+- priv->csum_type =
+- ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_TYPE]));
+- if (tb[NFTA_PAYLOAD_CSUM_OFFSET])
+- priv->csum_offset =
+- ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_OFFSET]));
++ csum_type = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_TYPE]));
++ if (tb[NFTA_PAYLOAD_CSUM_OFFSET]) {
++ err = nft_parse_u32_check(tb[NFTA_PAYLOAD_CSUM_OFFSET], U8_MAX,
++ &csum_offset);
++ if (err < 0)
++ return err;
++
++ priv->csum_offset = csum_offset;
++ }
+ if (tb[NFTA_PAYLOAD_CSUM_FLAGS]) {
+ u32 flags;
+
+@@ -761,7 +767,7 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ priv->csum_flags = flags;
+ }
+
+- switch (priv->csum_type) {
++ switch (csum_type) {
+ case NFT_PAYLOAD_CSUM_NONE:
+ case NFT_PAYLOAD_CSUM_INET:
+ break;
+@@ -775,6 +781,7 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
+ default:
+ return -EOPNOTSUPP;
+ }
++ priv->csum_type = csum_type;
+
+ return nft_parse_register_load(tb[NFTA_PAYLOAD_SREG], &priv->sreg,
+ priv->len);
+@@ -833,6 +840,7 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
+ {
+ enum nft_payload_bases base;
+ unsigned int offset, len;
++ int err;
+
+ if (tb[NFTA_PAYLOAD_BASE] == NULL ||
+ tb[NFTA_PAYLOAD_OFFSET] == NULL ||
+@@ -859,8 +867,13 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
+ if (tb[NFTA_PAYLOAD_DREG] == NULL)
+ return ERR_PTR(-EINVAL);
+
+- offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET]));
+- len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN]));
++ err = nft_parse_u32_check(tb[NFTA_PAYLOAD_OFFSET], U8_MAX, &offset);
++ if (err < 0)
++ return ERR_PTR(err);
++
++ err = nft_parse_u32_check(tb[NFTA_PAYLOAD_LEN], U8_MAX, &len);
++ if (err < 0)
++ return ERR_PTR(err);
+
+ if (len <= 4 && is_power_of_2(len) && IS_ALIGNED(offset, len) &&
+ base != NFT_PAYLOAD_LL_HEADER && base != NFT_PAYLOAD_INNER_HEADER)
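+
The nft_payload fixes replace direct ntohl() assignments into narrow u8 fields with nft_parse_u32_check(), which rejects out-of-range values instead of silently truncating them; csum_type is likewise validated in a local before being committed to the struct. The validate-before-truncate shape, sketched with a hypothetical attribute:

	static int my_parse_offset(const struct nlattr *attr, u8 *dest)
	{
		u32 val = ntohl(nla_get_be32(attr));	/* user-controlled */

		if (val > U8_MAX)	/* would not fit in *dest */
			return -ERANGE;

		*dest = val;		/* safe: value proven in range */
		return 0;
	}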
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index 801f013971dfa..a701ad64f10af 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -312,6 +312,13 @@ static int nft_tproxy_dump(struct sk_buff *skb,
+ return 0;
+ }
+
++static int nft_tproxy_validate(const struct nft_ctx *ctx,
++ const struct nft_expr *expr,
++ const struct nft_data **data)
++{
++ return nft_chain_validate_hooks(ctx->chain, 1 << NF_INET_PRE_ROUTING);
++}
++
+ static struct nft_expr_type nft_tproxy_type;
+ static const struct nft_expr_ops nft_tproxy_ops = {
+ .type = &nft_tproxy_type,
+@@ -321,6 +328,7 @@ static const struct nft_expr_ops nft_tproxy_ops = {
+ .destroy = nft_tproxy_destroy,
+ .dump = nft_tproxy_dump,
+ .reduce = NFT_REDUCE_READONLY,
++ .validate = nft_tproxy_validate,
+ };
+
+ static struct nft_expr_type nft_tproxy_type __read_mostly = {
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index d0f9b1d51b0e9..96b03e0bf74ff 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -161,6 +161,7 @@ static const struct nft_expr_ops nft_tunnel_get_ops = {
+
+ static struct nft_expr_type nft_tunnel_type __read_mostly = {
+ .name = "tunnel",
++ .family = NFPROTO_NETDEV,
+ .ops = &nft_tunnel_get_ops,
+ .policy = nft_tunnel_policy,
+ .maxattr = NFTA_TUNNEL_MAX,
+diff --git a/net/rose/rose_loopback.c b/net/rose/rose_loopback.c
+index 11c45c8c6c164..036d92c0ad794 100644
+--- a/net/rose/rose_loopback.c
++++ b/net/rose/rose_loopback.c
+@@ -96,7 +96,8 @@ static void rose_loopback_timer(struct timer_list *unused)
+ }
+
+ if (frametype == ROSE_CALL_REQUEST) {
+- if (!rose_loopback_neigh->dev) {
++ if (!rose_loopback_neigh->dev &&
++ !rose_loopback_neigh->loopback) {
+ kfree_skb(skb);
+ continue;
+ }
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 84d0a41096450..6401cdf7a6246 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -285,8 +285,10 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ _enter("%p,%lx", rx, p->user_call_ID);
+
+ limiter = rxrpc_get_call_slot(p, gfp);
+- if (!limiter)
++ if (!limiter) {
++ release_sock(&rx->sk);
+ return ERR_PTR(-ERESTARTSYS);
++ }
+
+ call = rxrpc_alloc_client_call(rx, srx, gfp, debug_id);
+ if (IS_ERR(call)) {
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 1d38e279e2efa..3c3a626459deb 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -51,10 +51,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
+ return sock_intr_errno(*timeo);
+
+ trace_rxrpc_transmit(call, rxrpc_transmit_wait);
+- mutex_unlock(&call->user_mutex);
+ *timeo = schedule_timeout(*timeo);
+- if (mutex_lock_interruptible(&call->user_mutex) < 0)
+- return sock_intr_errno(*timeo);
+ }
+ }
+
+@@ -290,37 +287,48 @@ out:
+ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ struct rxrpc_call *call,
+ struct msghdr *msg, size_t len,
+- rxrpc_notify_end_tx_t notify_end_tx)
++ rxrpc_notify_end_tx_t notify_end_tx,
++ bool *_dropped_lock)
+ {
+ struct rxrpc_skb_priv *sp;
+ struct sk_buff *skb;
+ struct sock *sk = &rx->sk;
++ enum rxrpc_call_state state;
+ long timeo;
+- bool more;
+- int ret, copied;
++ bool more = msg->msg_flags & MSG_MORE;
++ int ret, copied = 0;
+
+ timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
+
+ /* this should be in poll */
+ sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+
++reload:
++ ret = -EPIPE;
+ if (sk->sk_shutdown & SEND_SHUTDOWN)
+- return -EPIPE;
+-
+- more = msg->msg_flags & MSG_MORE;
+-
++ goto maybe_error;
++ state = READ_ONCE(call->state);
++ ret = -ESHUTDOWN;
++ if (state >= RXRPC_CALL_COMPLETE)
++ goto maybe_error;
++ ret = -EPROTO;
++ if (state != RXRPC_CALL_CLIENT_SEND_REQUEST &&
++ state != RXRPC_CALL_SERVER_ACK_REQUEST &&
++ state != RXRPC_CALL_SERVER_SEND_REPLY)
++ goto maybe_error;
++
++ ret = -EMSGSIZE;
+ if (call->tx_total_len != -1) {
+- if (len > call->tx_total_len)
+- return -EMSGSIZE;
+- if (!more && len != call->tx_total_len)
+- return -EMSGSIZE;
++ if (len - copied > call->tx_total_len)
++ goto maybe_error;
++ if (!more && len - copied != call->tx_total_len)
++ goto maybe_error;
+ }
+
+ skb = call->tx_pending;
+ call->tx_pending = NULL;
+ rxrpc_see_skb(skb, rxrpc_skb_seen);
+
+- copied = 0;
+ do {
+ /* Check to see if there's a ping ACK to reply to. */
+ if (call->ackr_reason == RXRPC_ACK_PING_RESPONSE)
+@@ -331,16 +339,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+
+ _debug("alloc");
+
+- if (!rxrpc_check_tx_space(call, NULL)) {
+- ret = -EAGAIN;
+- if (msg->msg_flags & MSG_DONTWAIT)
+- goto maybe_error;
+- ret = rxrpc_wait_for_tx_window(rx, call,
+- &timeo,
+- msg->msg_flags & MSG_WAITALL);
+- if (ret < 0)
+- goto maybe_error;
+- }
++ if (!rxrpc_check_tx_space(call, NULL))
++ goto wait_for_space;
+
+ /* Work out the maximum size of a packet. Assume that
+ * the security header is going to be in the padded
+@@ -468,6 +468,27 @@ maybe_error:
+ efault:
+ ret = -EFAULT;
+ goto out;
++
++wait_for_space:
++ ret = -EAGAIN;
++ if (msg->msg_flags & MSG_DONTWAIT)
++ goto maybe_error;
++ mutex_unlock(&call->user_mutex);
++ *_dropped_lock = true;
++ ret = rxrpc_wait_for_tx_window(rx, call, &timeo,
++ msg->msg_flags & MSG_WAITALL);
++ if (ret < 0)
++ goto maybe_error;
++ if (call->interruptibility == RXRPC_INTERRUPTIBLE) {
++ if (mutex_lock_interruptible(&call->user_mutex) < 0) {
++ ret = sock_intr_errno(timeo);
++ goto maybe_error;
++ }
++ } else {
++ mutex_lock(&call->user_mutex);
++ }
++ *_dropped_lock = false;
++ goto reload;
+ }
+
+ /*
+@@ -629,6 +650,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ enum rxrpc_call_state state;
+ struct rxrpc_call *call;
+ unsigned long now, j;
++ bool dropped_lock = false;
+ int ret;
+
+ struct rxrpc_send_params p = {
+@@ -737,21 +759,13 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ ret = rxrpc_send_abort_packet(call);
+ } else if (p.command != RXRPC_CMD_SEND_DATA) {
+ ret = -EINVAL;
+- } else if (rxrpc_is_client_call(call) &&
+- state != RXRPC_CALL_CLIENT_SEND_REQUEST) {
+- /* request phase complete for this client call */
+- ret = -EPROTO;
+- } else if (rxrpc_is_service_call(call) &&
+- state != RXRPC_CALL_SERVER_ACK_REQUEST &&
+- state != RXRPC_CALL_SERVER_SEND_REPLY) {
+- /* Reply phase not begun or not complete for service call. */
+- ret = -EPROTO;
+ } else {
+- ret = rxrpc_send_data(rx, call, msg, len, NULL);
++ ret = rxrpc_send_data(rx, call, msg, len, NULL, &dropped_lock);
+ }
+
+ out_put_unlock:
+- mutex_unlock(&call->user_mutex);
++ if (!dropped_lock)
++ mutex_unlock(&call->user_mutex);
+ error_put:
+ rxrpc_put_call(call, rxrpc_call_put);
+ _leave(" = %d", ret);
+@@ -779,6 +793,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ struct msghdr *msg, size_t len,
+ rxrpc_notify_end_tx_t notify_end_tx)
+ {
++ bool dropped_lock = false;
+ int ret;
+
+ _enter("{%d,%s},", call->debug_id, rxrpc_call_states[call->state]);
+@@ -796,7 +811,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ case RXRPC_CALL_SERVER_ACK_REQUEST:
+ case RXRPC_CALL_SERVER_SEND_REPLY:
+ ret = rxrpc_send_data(rxrpc_sk(sock->sk), call, msg, len,
+- notify_end_tx);
++ notify_end_tx, &dropped_lock);
+ break;
+ case RXRPC_CALL_COMPLETE:
+ read_lock_bh(&call->state_lock);
+@@ -810,7 +825,8 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
+ break;
+ }
+
+- mutex_unlock(&call->user_mutex);
++ if (!dropped_lock)
++ mutex_unlock(&call->user_mutex);
+ _leave(" = %d", ret);
+ return ret;
+ }
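+
The rxrpc refactor lets rxrpc_send_data() drop call->user_mutex while waiting for transmit window space, and reports that through *_dropped_lock so neither caller unlocks a mutex it no longer holds. The general "callee may drop the caller's lock" contract, sketched with hypothetical names:

	static int do_work(struct ctx *c, bool *dropped)
	{
		if (must_wait(c)) {
			mutex_unlock(&c->lock);
			*dropped = true;
			return wait_for_space(c);	/* returns unlocked */
		}
		return 0;				/* still holds c->lock */
	}

	static int caller(struct ctx *c)
	{
		bool dropped = false;
		int ret;

		mutex_lock(&c->lock);
		ret = do_work(c, &dropped);
		if (!dropped)
			mutex_unlock(&c->lock);
		return ret;
	}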
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index dba0b3e24af5e..a64c3c1541118 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -409,7 +409,7 @@ static inline bool qdisc_restart(struct Qdisc *q, int *packets)
+
+ void __qdisc_run(struct Qdisc *q)
+ {
+- int quota = dev_tx_weight;
++ int quota = READ_ONCE(dev_tx_weight);
+ int packets;
+
+ while (qdisc_restart(q, &packets)) {
+diff --git a/net/socket.c b/net/socket.c
+index 96300cdc06251..34102aa4ab0a6 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1801,7 +1801,7 @@ int __sys_listen(int fd, int backlog)
+
+ sock = sockfd_lookup_light(fd, &err, &fput_needed);
+ if (sock) {
+- somaxconn = sock_net(sock->sk)->core.sysctl_somaxconn;
++ somaxconn = READ_ONCE(sock_net(sock->sk)->core.sysctl_somaxconn);
+ if ((unsigned int)backlog > somaxconn)
+ backlog = somaxconn;
+
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 733f9f2260926..c1a01947530f0 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1888,7 +1888,7 @@ call_encode(struct rpc_task *task)
+ break;
+ case -EKEYEXPIRED:
+ if (!task->tk_cred_retry) {
+- rpc_exit(task, task->tk_status);
++ rpc_call_rpcerror(task, task->tk_status);
+ } else {
+ task->tk_action = call_refresh;
+ task->tk_cred_retry--;
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index 82d14eea1b5ad..974eb97b77d22 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -168,7 +168,7 @@ int espintcp_queue_out(struct sock *sk, struct sk_buff *skb)
+ {
+ struct espintcp_ctx *ctx = espintcp_getctx(sk);
+
+- if (skb_queue_len(&ctx->out_queue) >= netdev_max_backlog)
++ if (skb_queue_len(&ctx->out_queue) >= READ_ONCE(netdev_max_backlog))
+ return -ENOBUFS;
+
+ __skb_queue_tail(&ctx->out_queue, skb);
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 144238a50f3d4..b2f4ec9c537f0 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -669,7 +669,6 @@ resume:
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+- x->curlft.use_time = ktime_get_real_seconds();
+
+ spin_unlock(&x->lock);
+
+@@ -783,7 +782,7 @@ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
+
+ trans = this_cpu_ptr(&xfrm_trans_tasklet);
+
+- if (skb_queue_len(&trans->queue) >= netdev_max_backlog)
++ if (skb_queue_len(&trans->queue) >= READ_ONCE(netdev_max_backlog))
+ return -ENOBUFS;
+
+ BUILD_BUG_ON(sizeof(struct xfrm_trans_cb) > sizeof(skb->cb));
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 555ab35cd119a..9a5e79a38c679 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -534,7 +534,6 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+- x->curlft.use_time = ktime_get_real_seconds();
+
+ spin_unlock_bh(&x->lock);
+
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index f1a0bab920a55..cc6ab79609e29 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3162,7 +3162,7 @@ ok:
+ return dst;
+
+ nopol:
+- if (!(dst_orig->dev->flags & IFF_LOOPBACK) &&
++ if ((!dst_orig->dev || !(dst_orig->dev->flags & IFF_LOOPBACK)) &&
+ net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
+ err = -EPERM;
+ goto error;
+@@ -3599,6 +3599,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ if (pols[1]) {
+ if (IS_ERR(pols[1])) {
+ XFRM_INC_STATS(net, LINUX_MIB_XFRMINPOLERROR);
++ xfrm_pol_put(pols[0]);
+ return 0;
+ }
+ pols[1]->curlft.use_time = ktime_get_real_seconds();
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index ccfb172eb5b8d..11d89af9cb55a 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1592,6 +1592,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ x->replay = orig->replay;
+ x->preplay = orig->preplay;
+ x->mapping_maxage = orig->mapping_maxage;
++ x->lastused = orig->lastused;
+ x->new_mapping = 0;
+ x->new_mapping_sport = 0;
+
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 73e0762092feb..02a6000a82bbf 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -265,7 +265,7 @@ endif
+ # defined. get-executable-or-default fails with an error if the first argument is supplied but
+ # doesn't exist.
+ override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO))
+-override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO)))
++override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_CONFIG)))
+
+ grep-libs = $(filter -l%,$(1))
+ strip-libs = $(filter-out -l%,$(1))
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 86f838c5661ee..5f0333a8acd8a 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ evlist__for_each_entry(evsel_list, counter) {
++ counter->reset_group = false;
+ if (bpf_counter__load(counter, &target))
+ return -1;
+ if (!evsel__is_bpf(counter))
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-05 12:02 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-05 12:02 UTC (permalink / raw
To: gentoo-commits
commit: f61861b22544740f604006aa8cbfe1c734790cae
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 5 12:02:27 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 5 12:02:27 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f61861b2
Linux patch 5.19.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-5.19.7.patch | 3011 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3015 insertions(+)
diff --git a/0000_README b/0000_README
index 3deab328..e6423950 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-5.19.6.patch
From: http://www.kernel.org
Desc: Linux 5.19.6
+Patch: 1006_linux-5.19.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.19.7.patch b/1006_linux-5.19.7.patch
new file mode 100644
index 00000000..db3fad36
--- /dev/null
+++ b/1006_linux-5.19.7.patch
@@ -0,0 +1,3011 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 0b4235b1f8c46..33b04db8408f9 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -106,6 +106,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A510 | #2077057 | ARM64_ERRATUM_2077057 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A510 | #2441009 | ARM64_ERRATUM_2441009 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A710 | #2119858 | ARM64_ERRATUM_2119858 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A710 | #2054223 | ARM64_ERRATUM_2054223 |
+diff --git a/Documentation/sphinx/kerneldoc-preamble.sty b/Documentation/sphinx/kerneldoc-preamble.sty
+index 2a29cbe51396d..9707e033c8c45 100644
+--- a/Documentation/sphinx/kerneldoc-preamble.sty
++++ b/Documentation/sphinx/kerneldoc-preamble.sty
+@@ -70,8 +70,16 @@
+
+ % Translations have Asian (CJK) characters which are only displayed if
+ % xeCJK is used
++\usepackage{ifthen}
++\newboolean{enablecjk}
++\setboolean{enablecjk}{false}
+ \IfFontExistsTF{Noto Sans CJK SC}{
+- % Load xeCJK when CJK font is available
++ \IfFileExists{xeCJK.sty}{
++ \setboolean{enablecjk}{true}
++ }{}
++}{}
++\ifthenelse{\boolean{enablecjk}}{
++ % Load xeCJK when both the Noto Sans CJK font and xeCJK.sty are available.
+ \usepackage{xeCJK}
+ % Noto CJK fonts don't provide slant shape. [AutoFakeSlant] permits
+ % its emulation.
+@@ -196,7 +204,7 @@
+ % Inactivate CJK after tableofcontents
+ \apptocmd{\sphinxtableofcontents}{\kerneldocCJKoff}{}{}
+ \xeCJKsetup{CJKspace = true}% For inter-phrase space of Korean TOC
+-}{ % No CJK font found
++}{ % Don't enable CJK
+ % Custom macros to on/off CJK and switch CJK fonts (Dummy)
+ \newcommand{\kerneldocCJKon}{}
+ \newcommand{\kerneldocCJKoff}{}
+@@ -204,14 +212,16 @@
+ %% and ignore the argument (#1) in their definitions, whole contents of
+ %% CJK chapters can be ignored.
+ \newcommand{\kerneldocBeginSC}[1]{%
+- %% Put a note on missing CJK fonts in place of zh_CN translation.
+- \begin{sphinxadmonition}{note}{Note on missing fonts:}
++ %% Put a note on missing CJK fonts or the xecjk package in place of
++ %% zh_CN translation.
++ \begin{sphinxadmonition}{note}{Note on missing fonts and a package:}
+ Translations of Simplified Chinese (zh\_CN), Traditional Chinese
+ (zh\_TW), Korean (ko\_KR), and Japanese (ja\_JP) were skipped
+- due to the lack of suitable font families.
++ due to the lack of suitable font families and/or the texlive-xecjk
++ package.
+
+ If you want them, please install ``Noto Sans CJK'' font families
+- by following instructions from
++ along with the texlive-xecjk package by following instructions from
+ \sphinxcode{./scripts/sphinx-pre-install}.
+ Having optional ``Noto Serif CJK'' font families will improve
+ the looks of those translations.
+diff --git a/Documentation/tools/rtla/rtla-timerlat-hist.rst b/Documentation/tools/rtla/rtla-timerlat-hist.rst
+index e12eae1f33019..6bf7f0ca45564 100644
+--- a/Documentation/tools/rtla/rtla-timerlat-hist.rst
++++ b/Documentation/tools/rtla/rtla-timerlat-hist.rst
+@@ -33,7 +33,7 @@ EXAMPLE
+ =======
+ In the example below, **rtla timerlat hist** is set to run for *10* minutes,
+ in the cpus *0-4*, *skipping zero* only lines. Moreover, **rtla timerlat
+-hist** will change the priority of the *timelat* threads to run under
++hist** will change the priority of the *timerlat* threads to run under
+ *SCHED_DEADLINE* priority, with a *10us* runtime every *1ms* period. The
+ *1ms* period is also passed to the *timerlat* tracer::
+
+diff --git a/Makefile b/Makefile
+index cb68101ea070a..3d88923df694e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a5d1b561ed53f..001eaba5a6b4b 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -838,6 +838,23 @@ config ARM64_ERRATUM_2224489
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_2441009
++ bool "Cortex-A510: Completion of affected memory accesses might not be guaranteed by completion of a TLBI"
++ default y
++ select ARM64_WORKAROUND_REPEAT_TLBI
++ help
++ This option adds a workaround for ARM Cortex-A510 erratum #2441009.
++
++ Under very rare circumstances, affected Cortex-A510 CPUs
++ may not handle a race between a break-before-make sequence on one
++ CPU, and another CPU accessing the same page. This could allow a
++ store to a page that has been unmapped.
++
++ Work around this by adding the affected CPUs to the list that needs
++ TLB sequences to be done twice.
++
++ If unsure, say Y.
++
+ config ARM64_ERRATUM_2064142
+ bool "Cortex-A510: 2064142: workaround TRBE register writes while disabled"
+ depends on CORESIGHT_TRBE
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index 587543c6c51cb..97c42be71338a 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -45,7 +45,8 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+
+ int init_cache_level(unsigned int cpu)
+ {
+- unsigned int ctype, level, leaves, fw_level;
++ unsigned int ctype, level, leaves;
++ int fw_level;
+ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+
+ for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
+@@ -63,6 +64,9 @@ int init_cache_level(unsigned int cpu)
+ else
+ fw_level = acpi_find_last_cache_level(cpu);
+
++ if (fw_level < 0)
++ return fw_level;
++
+ if (level < fw_level) {
+ /*
+ * some external caches not specified in CLIDR_EL1
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index b374e258f705f..5f4117dae8888 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -213,6 +213,12 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ /* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
+ ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ },
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2441009
++ {
++ /* Cortex-A510 r0p0 -> r1p1. Fixed in r1p2 */
++ ERRATA_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1),
++ },
+ #endif
+ {},
+ };
+@@ -490,7 +496,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #endif
+ #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
+ {
+- .desc = "Qualcomm erratum 1009, or ARM erratum 1286807",
++ .desc = "Qualcomm erratum 1009, or ARM erratum 1286807, 2441009",
+ .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = cpucap_multi_entry_cap_matches,
+diff --git a/arch/s390/hypfs/hypfs_diag.c b/arch/s390/hypfs/hypfs_diag.c
+index f0bc4dc3e9bf0..6511d15ace45e 100644
+--- a/arch/s390/hypfs/hypfs_diag.c
++++ b/arch/s390/hypfs/hypfs_diag.c
+@@ -437,7 +437,7 @@ __init int hypfs_diag_init(void)
+ int rc;
+
+ if (diag204_probe()) {
+- pr_err("The hardware system does not support hypfs\n");
++ pr_info("The hardware system does not support hypfs\n");
+ return -ENODATA;
+ }
+
+diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
+index 5c97f48cea91d..ee919bfc81867 100644
+--- a/arch/s390/hypfs/inode.c
++++ b/arch/s390/hypfs/inode.c
+@@ -496,9 +496,9 @@ fail_hypfs_sprp_exit:
+ hypfs_vm_exit();
+ fail_hypfs_diag_exit:
+ hypfs_diag_exit();
++ pr_err("Initialization of hypfs failed with rc=%i\n", rc);
+ fail_dbfs_exit:
+ hypfs_dbfs_exit();
+- pr_err("Initialization of hypfs failed with rc=%i\n", rc);
+ return rc;
+ }
+ device_initcall(hypfs_init)
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 7981a25983764..5d437c0c842cb 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -315,12 +315,19 @@ static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
+ {
+ unsigned long vm_start = 0;
+
++ /*
++ * Allow clearing the vma with holding just the read lock to allow
++ * munmapping downgrade of the write lock before freeing and closing the
++ * file using binder_alloc_vma_close().
++ */
+ if (vma) {
+ vm_start = vma->vm_start;
+ alloc->vma_vm_mm = vma->vm_mm;
++ mmap_assert_write_locked(alloc->vma_vm_mm);
++ } else {
++ mmap_assert_locked(alloc->vma_vm_mm);
+ }
+
+- mmap_assert_write_locked(alloc->vma_vm_mm);
+ alloc->vma_addr = vm_start;
+ }
+
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 9631f2fd2faf7..38e8767ec3715 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -368,7 +368,23 @@ static struct miscdevice udmabuf_misc = {
+
+ static int __init udmabuf_dev_init(void)
+ {
+- return misc_register(&udmabuf_misc);
++ int ret;
++
++ ret = misc_register(&udmabuf_misc);
++ if (ret < 0) {
++ pr_err("Could not initialize udmabuf device\n");
++ return ret;
++ }
++
++ ret = dma_coerce_mask_and_coherent(udmabuf_misc.this_device,
++ DMA_BIT_MASK(64));
++ if (ret < 0) {
++ pr_err("Could not setup DMA mask for udmabuf device\n");
++ misc_deregister(&udmabuf_misc);
++ return ret;
++ }
++
++ return 0;
+ }
+
+ static void __exit udmabuf_dev_exit(void)
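+
The udmabuf init fix is the usual register-then-configure unwind: if a later init step fails, earlier steps are rolled back in reverse order before the error is returned. Schematically, names hypothetical:

	static int __init my_init(void)
	{
		int ret;

		ret = misc_register(&my_misc);
		if (ret < 0)
			return ret;

		ret = configure_dma(&my_misc);	/* second failable step */
		if (ret < 0) {
			misc_deregister(&my_misc);	/* unwind step 1 */
			return ret;
		}
		return 0;
	}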
+diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
+index 5654c5e9862b1..037db21de510c 100644
+--- a/drivers/firmware/tegra/bpmp.c
++++ b/drivers/firmware/tegra/bpmp.c
+@@ -201,7 +201,7 @@ static ssize_t __tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
+ int err;
+
+ if (data && size > 0)
+- memcpy(data, channel->ib->data, size);
++ memcpy_fromio(data, channel->ib->data, size);
+
+ err = tegra_bpmp_ack_response(channel);
+ if (err < 0)
+@@ -245,7 +245,7 @@ static ssize_t __tegra_bpmp_channel_write(struct tegra_bpmp_channel *channel,
+ channel->ob->flags = flags;
+
+ if (data && size > 0)
+- memcpy(channel->ob->data, data, size);
++ memcpy_toio(channel->ob->data, data, size);
+
+ return tegra_bpmp_post_request(channel);
+ }
+@@ -420,7 +420,7 @@ void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
+ channel->ob->code = code;
+
+ if (data && size > 0)
+- memcpy(channel->ob->data, data, size);
++ memcpy_toio(channel->ob->data, data, size);
+
+ err = tegra_bpmp_post_response(channel);
+ if (WARN_ON(err < 0))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 30ce6bb6fa77a..310754b1f6702 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -313,7 +313,7 @@ enum amdgpu_kiq_irq {
+ AMDGPU_CP_KIQ_IRQ_DRIVER0 = 0,
+ AMDGPU_CP_KIQ_IRQ_LAST
+ };
+-
++#define SRIOV_USEC_TIMEOUT 1200000 /* wait 12 * 100ms for SRIOV */
+ #define MAX_KIQ_REG_WAIT 5000 /* in usecs, 5ms */
+ #define MAX_KIQ_REG_BAILOUT_INTERVAL 5 /* in msecs, 5ms */
+ #define MAX_KIQ_REG_TRY 1000
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 9077dfccaf3cf..809408c8c79a1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -416,6 +416,7 @@ static int gmc_v10_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ uint32_t seq;
+ uint16_t queried_pasid;
+ bool ret;
++ u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT : adev->usec_timeout;
+ struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+ struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+
+@@ -434,7 +435,7 @@ static int gmc_v10_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+
+ amdgpu_ring_commit(ring);
+ spin_unlock(&adev->gfx.kiq.ring_lock);
+- r = amdgpu_fence_wait_polling(ring, seq, adev->usec_timeout);
++ r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
+ if (r < 1) {
+ dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
+ return -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 22761a3bb8181..566c1243c051b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -896,6 +896,7 @@ static int gmc_v9_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+ uint32_t seq;
+ uint16_t queried_pasid;
+ bool ret;
++ u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT : adev->usec_timeout;
+ struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+ struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+
+@@ -935,7 +936,7 @@ static int gmc_v9_0_flush_gpu_tlb_pasid(struct amdgpu_device *adev,
+
+ amdgpu_ring_commit(ring);
+ spin_unlock(&adev->gfx.kiq.ring_lock);
+- r = amdgpu_fence_wait_polling(ring, seq, adev->usec_timeout);
++ r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
+ if (r < 1) {
+ dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
+ up_read(&adev->reset_domain->sem);
+diff --git a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+index 92dc60a9d2094..085e613f3646d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+@@ -727,6 +727,7 @@ static const struct amd_ip_funcs ih_v6_0_ip_funcs = {
+ static const struct amdgpu_ih_funcs ih_v6_0_funcs = {
+ .get_wptr = ih_v6_0_get_wptr,
+ .decode_iv = amdgpu_ih_decode_iv_helper,
++ .decode_iv_ts = amdgpu_ih_decode_iv_ts_helper,
+ .set_rptr = ih_v6_0_set_rptr
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/navi10_ih.c b/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
+index 4b5396d3e60f6..eec13cb5bf758 100644
+--- a/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
+@@ -409,9 +409,11 @@ static u32 navi10_ih_get_wptr(struct amdgpu_device *adev,
+ u32 wptr, tmp;
+ struct amdgpu_ih_regs *ih_regs;
+
+- if (ih == &adev->irq.ih) {
++ if (ih == &adev->irq.ih || ih == &adev->irq.ih_soft) {
+ /* Only ring0 supports writeback. On other rings fall back
+ * to register-based code with overflow checking below.
++ * ih_soft ring doesn't have any backing hardware registers,
++ * update wptr and return.
+ */
+ wptr = le32_to_cpu(*ih->wptr_cpu);
+
+@@ -483,6 +485,9 @@ static void navi10_ih_set_rptr(struct amdgpu_device *adev,
+ {
+ struct amdgpu_ih_regs *ih_regs;
+
++ if (ih == &adev->irq.ih_soft)
++ return;
++
+ if (ih->use_doorbell) {
+ /* XXX check if swapping is necessary on BE */
+ *ih->rptr_cpu = ih->rptr;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index a2588200ea580..0b2ac418e4ac4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -101,6 +101,16 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ adev->psp.dtm_context.context.bin_desc.start_addr =
+ (uint8_t *)adev->psp.hdcp_context.context.bin_desc.start_addr +
+ le32_to_cpu(ta_hdr->dtm.offset_bytes);
++
++ if (adev->apu_flags & AMD_APU_IS_RENOIR) {
++ adev->psp.securedisplay_context.context.bin_desc.fw_version =
++ le32_to_cpu(ta_hdr->securedisplay.fw_version);
++ adev->psp.securedisplay_context.context.bin_desc.size_bytes =
++ le32_to_cpu(ta_hdr->securedisplay.size_bytes);
++ adev->psp.securedisplay_context.context.bin_desc.start_addr =
++ (uint8_t *)adev->psp.hdcp_context.context.bin_desc.start_addr +
++ le32_to_cpu(ta_hdr->securedisplay.offset_bytes);
++ }
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index 9e18a2b22607b..8d5c452a91007 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -530,8 +530,10 @@ static int soc21_common_early_init(void *handle)
+ case IP_VERSION(11, 0, 0):
+ adev->cg_flags = AMD_CG_SUPPORT_GFX_CGCG |
+ AMD_CG_SUPPORT_GFX_CGLS |
++#if 0
+ AMD_CG_SUPPORT_GFX_3D_CGCG |
+ AMD_CG_SUPPORT_GFX_3D_CGLS |
++#endif
+ AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_REPEATER_FGCG |
+ AMD_CG_SUPPORT_GFX_FGCG |
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+index cdd599a081258..03b7066471f9a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+@@ -334,9 +334,11 @@ static u32 vega10_ih_get_wptr(struct amdgpu_device *adev,
+ u32 wptr, tmp;
+ struct amdgpu_ih_regs *ih_regs;
+
+- if (ih == &adev->irq.ih) {
++ if (ih == &adev->irq.ih || ih == &adev->irq.ih_soft) {
+ /* Only ring0 supports writeback. On other rings fall back
+ * to register-based code with overflow checking below.
++ * ih_soft ring doesn't have any backing hardware registers,
++ * update wptr and return.
+ */
+ wptr = le32_to_cpu(*ih->wptr_cpu);
+
+@@ -409,6 +411,9 @@ static void vega10_ih_set_rptr(struct amdgpu_device *adev,
+ {
+ struct amdgpu_ih_regs *ih_regs;
+
++ if (ih == &adev->irq.ih_soft)
++ return;
++
+ if (ih->use_doorbell) {
+ /* XXX check if swapping is necessary on BE */
+ *ih->rptr_cpu = ih->rptr;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index 3b4eb8285943c..2022ffbb8dba5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -385,9 +385,11 @@ static u32 vega20_ih_get_wptr(struct amdgpu_device *adev,
+ u32 wptr, tmp;
+ struct amdgpu_ih_regs *ih_regs;
+
+- if (ih == &adev->irq.ih) {
++ if (ih == &adev->irq.ih || ih == &adev->irq.ih_soft) {
+ /* Only ring0 supports writeback. On other rings fall back
+ * to register-based code with overflow checking below.
++ * ih_soft ring doesn't have any backing hardware registers,
++ * update wptr and return.
+ */
+ wptr = le32_to_cpu(*ih->wptr_cpu);
+
+@@ -461,6 +463,9 @@ static void vega20_ih_set_rptr(struct amdgpu_device *adev,
+ {
+ struct amdgpu_ih_regs *ih_regs;
+
++ if (ih == &adev->irq.ih_soft)
++ return;
++
+ if (ih->use_doorbell) {
+ /* XXX check if swapping is necessary on BE */
+ *ih->rptr_cpu = ih->rptr;
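+
navi10, vega10 and vega20 all receive the same two-part fix: the software IH ring (ih_soft) takes the writeback path in get_wptr, and set_rptr bails out early because that ring has no backing hardware registers. The shape of the special case, roughly, with hypothetical types:

	static void ring_set_rptr(struct dev *d, struct ring *r)
	{
		if (r == &d->soft_ring)	/* software-only: no doorbell, no regs */
			return;

		writel(r->rptr, d->mmio + r->rptr_offset);
	}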
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 1c7016958d6d9..bfca17ca399c6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -814,7 +814,7 @@ static int kfd_ioctl_wait_events(struct file *filp, struct kfd_process *p,
+ err = kfd_wait_on_events(p, args->num_events,
+ (void __user *)args->events_ptr,
+ (args->wait_for_all != 0),
+- args->timeout, &args->wait_result);
++ &args->timeout, &args->wait_result);
+
+ return err;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index 4df9c36146ba9..cbc20d779e5aa 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -895,7 +895,8 @@ static long user_timeout_to_jiffies(uint32_t user_timeout_ms)
+ return msecs_to_jiffies(user_timeout_ms) + 1;
+ }
+
+-static void free_waiters(uint32_t num_events, struct kfd_event_waiter *waiters)
++static void free_waiters(uint32_t num_events, struct kfd_event_waiter *waiters,
++ bool undo_auto_reset)
+ {
+ uint32_t i;
+
+@@ -904,6 +905,9 @@ static void free_waiters(uint32_t num_events, struct kfd_event_waiter *waiters)
+ spin_lock(&waiters[i].event->lock);
+ remove_wait_queue(&waiters[i].event->wq,
+ &waiters[i].wait);
++ if (undo_auto_reset && waiters[i].activated &&
++ waiters[i].event && waiters[i].event->auto_reset)
++ set_event(waiters[i].event);
+ spin_unlock(&waiters[i].event->lock);
+ }
+
+@@ -912,7 +916,7 @@ static void free_waiters(uint32_t num_events, struct kfd_event_waiter *waiters)
+
+ int kfd_wait_on_events(struct kfd_process *p,
+ uint32_t num_events, void __user *data,
+- bool all, uint32_t user_timeout_ms,
++ bool all, uint32_t *user_timeout_ms,
+ uint32_t *wait_result)
+ {
+ struct kfd_event_data __user *events =
+@@ -921,7 +925,7 @@ int kfd_wait_on_events(struct kfd_process *p,
+ int ret = 0;
+
+ struct kfd_event_waiter *event_waiters = NULL;
+- long timeout = user_timeout_to_jiffies(user_timeout_ms);
++ long timeout = user_timeout_to_jiffies(*user_timeout_ms);
+
+ event_waiters = alloc_event_waiters(num_events);
+ if (!event_waiters) {
+@@ -971,15 +975,11 @@ int kfd_wait_on_events(struct kfd_process *p,
+ }
+
+ if (signal_pending(current)) {
+- /*
+- * This is wrong when a nonzero, non-infinite timeout
+- * is specified. We need to use
+- * ERESTARTSYS_RESTARTBLOCK, but struct restart_block
+- * contains a union with data for each user and it's
+- * in generic kernel code that I don't want to
+- * touch yet.
+- */
+ ret = -ERESTARTSYS;
++ if (*user_timeout_ms != KFD_EVENT_TIMEOUT_IMMEDIATE &&
++ *user_timeout_ms != KFD_EVENT_TIMEOUT_INFINITE)
++ *user_timeout_ms = jiffies_to_msecs(
++ max(0l, timeout-1));
+ break;
+ }
+
+@@ -1020,7 +1020,7 @@ int kfd_wait_on_events(struct kfd_process *p,
+ event_waiters, events);
+
+ out_unlock:
+- free_waiters(num_events, event_waiters);
++ free_waiters(num_events, event_waiters, ret == -ERESTARTSYS);
+ mutex_unlock(&p->event_mutex);
+ out:
+ if (ret)
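+
The kfd change makes the interrupted wait restartable without losing progress: the timeout is now passed by pointer so the remaining budget can be written back before returning -ERESTARTSYS, and auto-reset events that already fired are re-armed so the restarted call can observe them again. A compressed sketch of the writeback half, names hypothetical:

	static int my_wait(struct waiter *w, u32 *timeout_ms)
	{
		long t = msecs_to_jiffies(*timeout_ms);

		while (!done(w)) {
			if (signal_pending(current)) {
				/* hand the unused budget back for the restart */
				*timeout_ms = jiffies_to_msecs(max(0L, t - 1));
				return -ERESTARTSYS;
			}
			set_current_state(TASK_INTERRUPTIBLE);
			t = schedule_timeout(t);
		}
		__set_current_state(TASK_RUNNING);
		return 0;
	}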
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 2585d6e61d422..c6eec54b8102f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -1314,7 +1314,7 @@ void kfd_event_free_process(struct kfd_process *p);
+ int kfd_event_mmap(struct kfd_process *process, struct vm_area_struct *vma);
+ int kfd_wait_on_events(struct kfd_process *p,
+ uint32_t num_events, void __user *data,
+- bool all, uint32_t user_timeout_ms,
++ bool all, uint32_t *user_timeout_ms,
+ uint32_t *wait_result);
+ void kfd_signal_event_interrupt(u32 pasid, uint32_t partial_id,
+ uint32_t valid_id_bits);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index f144494011882..9dbd965d8afb3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1067,8 +1067,15 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ struct dc_stream_state *old_stream =
+ dc->current_state->res_ctx.pipe_ctx[i].stream;
+ bool should_disable = true;
+- bool pipe_split_change =
+- context->res_ctx.pipe_ctx[i].top_pipe != dc->current_state->res_ctx.pipe_ctx[i].top_pipe;
++ bool pipe_split_change = false;
++
++ if ((context->res_ctx.pipe_ctx[i].top_pipe) &&
++ (dc->current_state->res_ctx.pipe_ctx[i].top_pipe))
++ pipe_split_change = context->res_ctx.pipe_ctx[i].top_pipe->pipe_idx !=
++ dc->current_state->res_ctx.pipe_ctx[i].top_pipe->pipe_idx;
++ else
++ pipe_split_change = context->res_ctx.pipe_ctx[i].top_pipe !=
++ dc->current_state->res_ctx.pipe_ctx[i].top_pipe;
+
+ for (j = 0; j < context->stream_count; j++) {
+ if (old_stream == context->streams[j]) {
+@@ -3783,6 +3790,7 @@ void dc_enable_dmub_outbox(struct dc *dc)
+ struct dc_context *dc_ctx = dc->ctx;
+
+ dmub_enable_outbox_notification(dc_ctx->dmub_srv);
++ DC_LOG_DC("%s: dmub outbox notifications enabled\n", __func__);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h
+index a3c37ee3f849c..f96f53c1bc258 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
+@@ -337,6 +337,7 @@ enum dc_detect_reason {
+ DETECT_REASON_HPDRX,
+ DETECT_REASON_FALLBACK,
+ DETECT_REASON_RETRAIN,
++ DETECT_REASON_TDR,
+ };
+
+ bool dc_link_detect(struct dc_link *dc_link, enum dc_detect_reason reason);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 845aa8a1027d8..c4040adb88b03 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -545,9 +545,11 @@ static void dce112_get_pix_clk_dividers_helper (
+ switch (pix_clk_params->color_depth) {
+ case COLOR_DEPTH_101010:
+ actual_pixel_clock_100hz = (actual_pixel_clock_100hz * 5) >> 2;
++ actual_pixel_clock_100hz -= actual_pixel_clock_100hz % 10;
+ break;
+ case COLOR_DEPTH_121212:
+ actual_pixel_clock_100hz = (actual_pixel_clock_100hz * 6) >> 2;
++ actual_pixel_clock_100hz -= actual_pixel_clock_100hz % 10;
+ break;
+ case COLOR_DEPTH_161616:
+ actual_pixel_clock_100hz = actual_pixel_clock_100hz * 2;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index e3a62873c0e70..d9ab279915355 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -108,6 +108,7 @@ void dcn10_lock_all_pipes(struct dc *dc,
+ */
+ if (pipe_ctx->top_pipe ||
+ !pipe_ctx->stream ||
++ !pipe_ctx->plane_state ||
+ !tg->funcs->is_tg_enabled(tg))
+ continue;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+index 11019c2c62ccb..8192f1967e924 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+@@ -126,6 +126,12 @@ struct mpcc *mpc1_get_mpcc_for_dpp(struct mpc_tree *tree, int dpp_id)
+ while (tmp_mpcc != NULL) {
+ if (tmp_mpcc->dpp_id == dpp_id)
+ return tmp_mpcc;
++
++ /* avoid circular linked list */
++ ASSERT(tmp_mpcc != tmp_mpcc->mpcc_bot);
++ if (tmp_mpcc == tmp_mpcc->mpcc_bot)
++ break;
++
+ tmp_mpcc = tmp_mpcc->mpcc_bot;
+ }
+ return NULL;
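+
Both MPC hunks (dcn10 here and dcn20 below) add the same defensive check: assert and break out if a node's mpcc_bot pointer refers back to the node itself, which would otherwise make the traversal spin forever on a corrupted tree. In miniature:

	struct node { int id; struct node *next; };

	static struct node *find(struct node *head, int id)
	{
		struct node *n;

		for (n = head; n; n = n->next) {
			if (n->id == id)
				return n;
			if (n == n->next)	/* corrupted self-link: stop */
				break;
		}
		return NULL;
	}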
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+index b1671b00ce405..2349977b0abb2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+@@ -464,6 +464,11 @@ void optc1_enable_optc_clock(struct timing_generator *optc, bool enable)
+ OTG_CLOCK_ON, 1,
+ 1, 1000);
+ } else {
++
++ //last chance to clear underflow, otherwise, it will always there due to clock is off.
++ if (optc->funcs->is_optc_underflow_occurred(optc) == true)
++ optc->funcs->clear_optc_underflow(optc);
++
+ REG_UPDATE_2(OTG_CLOCK_CONTROL,
+ OTG_CLOCK_GATE_DIS, 0,
+ OTG_CLOCK_EN, 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+index 15734db0cdea4..f3c311d093197 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+@@ -531,6 +531,12 @@ static struct mpcc *mpc2_get_mpcc_for_dpp(struct mpc_tree *tree, int dpp_id)
+ while (tmp_mpcc != NULL) {
+ if (tmp_mpcc->dpp_id == 0xf || tmp_mpcc->dpp_id == dpp_id)
+ return tmp_mpcc;
++
++ /* avoid circular linked list */
++ ASSERT(tmp_mpcc != tmp_mpcc->mpcc_bot);
++ if (tmp_mpcc == tmp_mpcc->mpcc_bot)
++ break;
++
+ tmp_mpcc = tmp_mpcc->mpcc_bot;
+ }
+ return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
+index c5e200d09038f..5752271f22dfe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
+@@ -67,9 +67,15 @@ static uint32_t convert_and_clamp(
+ void dcn21_dchvm_init(struct hubbub *hubbub)
+ {
+ struct dcn20_hubbub *hubbub1 = TO_DCN20_HUBBUB(hubbub);
+- uint32_t riommu_active;
++ uint32_t riommu_active, prefetch_done;
+ int i;
+
++ REG_GET(DCHVM_RIOMMU_STAT0, HOSTVM_PREFETCH_DONE, &prefetch_done);
++
++ if (prefetch_done) {
++ hubbub->riommu_active = true;
++ return;
++ }
+ //Init DCHVM block
+ REG_UPDATE(DCHVM_CTRL0, HOSTVM_INIT_REQ, 1);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
+index 6a4dcafb9bba5..dc3e8df706b34 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.c
+@@ -86,7 +86,7 @@ bool hubp3_program_surface_flip_and_addr(
+ VMID, address->vmid);
+
+ if (address->type == PLN_ADDR_TYPE_GRPH_STEREO) {
+- REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0x1);
++ REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0);
+ REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_IN_STEREOSYNC, 0x1);
+
+ } else {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.h
+index 7c77c71591a08..82c3b3ac1f0d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.h
+@@ -162,7 +162,8 @@
+ SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_SDP_AUDIO_CONTROL0, AIP_ENABLE, mask_sh),\
+ SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_SDP_AUDIO_CONTROL0, ACM_ENABLE, mask_sh),\
+ SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_VID_CRC_CONTROL, CRC_ENABLE, mask_sh),\
+- SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_VID_CRC_CONTROL, CRC_CONT_MODE_ENABLE, mask_sh)
++ SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_VID_CRC_CONTROL, CRC_CONT_MODE_ENABLE, mask_sh),\
++ SE_SF(DP_SYM32_ENC0_DP_SYM32_ENC_HBLANK_CONTROL, HBLANK_MINIMUM_SYMBOL_WIDTH, mask_sh)
+
+
+ #define DCN3_1_HPO_DP_STREAM_ENC_REG_FIELD_LIST(type) \
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index 03fa63d56fa65..948151e735739 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -615,10 +615,6 @@ static void build_vrr_infopacket_data_v1(const struct mod_vrr_params *vrr,
+ * Note: We should never go above the field rate of the mode timing set.
+ */
+ infopacket->sb[8] = (unsigned char)((vrr->max_refresh_in_uhz + 500000) / 1000000);
+-
+- /* FreeSync HDR */
+- infopacket->sb[9] = 0;
+- infopacket->sb[10] = 0;
+ }
+
+ static void build_vrr_infopacket_data_v3(const struct mod_vrr_params *vrr,
+@@ -686,10 +682,6 @@ static void build_vrr_infopacket_data_v3(const struct mod_vrr_params *vrr,
+
+ /* PB16 : Reserved bits 7:1, FixedRate bit 0 */
+ infopacket->sb[16] = (vrr->state == VRR_STATE_ACTIVE_FIXED) ? 1 : 0;
+-
+- //FreeSync HDR
+- infopacket->sb[9] = 0;
+- infopacket->sb[10] = 0;
+ }
+
+ static void build_vrr_infopacket_fs2_data(enum color_transfer_func app_tf,
+@@ -774,8 +766,7 @@ static void build_vrr_infopacket_header_v2(enum signal_type signal,
+ /* HB2 = [Bits 7:5 = 0] [Bits 4:0 = Length = 0x09] */
+ infopacket->hb2 = 0x09;
+
+- *payload_size = 0x0A;
+-
++ *payload_size = 0x09;
+ } else if (dc_is_dp_signal(signal)) {
+
+ /* HEADER */
+@@ -824,9 +815,9 @@ static void build_vrr_infopacket_header_v3(enum signal_type signal,
+ infopacket->hb1 = version;
+
+ /* HB2 = [Bits 7:5 = 0] [Bits 4:0 = Length] */
+- *payload_size = 0x10;
+- infopacket->hb2 = *payload_size - 1; //-1 for checksum
++ infopacket->hb2 = 0x10;
+
++ *payload_size = 0x10;
+ } else if (dc_is_dp_signal(signal)) {
+
+ /* HEADER */
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 78f3d9e722bb7..32bb6b1d95261 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -4281,6 +4281,7 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
+ .dump_pptable = sienna_cichlid_dump_pptable,
+ .init_microcode = smu_v11_0_init_microcode,
+ .load_microcode = smu_v11_0_load_microcode,
++ .fini_microcode = smu_v11_0_fini_microcode,
+ .init_smc_tables = sienna_cichlid_init_smc_tables,
+ .fini_smc_tables = smu_v11_0_fini_smc_tables,
+ .init_power = smu_v11_0_init_power,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index 5aa08c031f721..1d8a9e5b3cc08 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -203,6 +203,9 @@ int smu_v13_0_init_pptable_microcode(struct smu_context *smu)
+ if (!adev->scpm_enabled)
+ return 0;
+
++ if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7))
++ return 0;
++
+ /* override pptable_id from driver parameter */
+ if (amdgpu_smu_pptable_id >= 0) {
+ pptable_id = amdgpu_smu_pptable_id;
+@@ -210,13 +213,6 @@ int smu_v13_0_init_pptable_microcode(struct smu_context *smu)
+ } else {
+ pptable_id = smu->smu_table.boot_values.pp_table_id;
+
+- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7) &&
+- pptable_id == 3667)
+- pptable_id = 36671;
+-
+- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7) &&
+- pptable_id == 3688)
+- pptable_id = 36881;
+ /*
+ * Temporary solution for SMU V13.0.0 with SCPM enabled:
+ * - use 36831 signed pptable when pp_table_id is 3683
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 7432b3e76d3d7..201546c369945 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -1583,7 +1583,9 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
+ .dump_pptable = smu_v13_0_0_dump_pptable,
+ .init_microcode = smu_v13_0_init_microcode,
+ .load_microcode = smu_v13_0_load_microcode,
++ .fini_microcode = smu_v13_0_fini_microcode,
+ .init_smc_tables = smu_v13_0_0_init_smc_tables,
++ .fini_smc_tables = smu_v13_0_fini_smc_tables,
+ .init_power = smu_v13_0_init_power,
+ .fini_power = smu_v13_0_fini_power,
+ .check_fw_status = smu_v13_0_check_fw_status,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+index 5a17b51aa0f9f..7df360c25d51e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+@@ -190,6 +190,9 @@ static int smu_v13_0_4_fini_smc_tables(struct smu_context *smu)
+ kfree(smu_table->watermarks_table);
+ smu_table->watermarks_table = NULL;
+
++ kfree(smu_table->gpu_metrics_table);
++ smu_table->gpu_metrics_table = NULL;
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 4e1861fb2c6a4..9cde13b07dd26 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -1539,7 +1539,9 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ .dump_pptable = smu_v13_0_7_dump_pptable,
+ .init_microcode = smu_v13_0_init_microcode,
+ .load_microcode = smu_v13_0_load_microcode,
++ .fini_microcode = smu_v13_0_fini_microcode,
+ .init_smc_tables = smu_v13_0_7_init_smc_tables,
++ .fini_smc_tables = smu_v13_0_fini_smc_tables,
+ .init_power = smu_v13_0_init_power,
+ .check_fw_status = smu_v13_0_7_check_fw_status,
+ .setup_pptable = smu_v13_0_7_setup_pptable,
+diff --git a/drivers/gpu/drm/vc4/Kconfig b/drivers/gpu/drm/vc4/Kconfig
+index 061be9a6619df..b0f3117102ca5 100644
+--- a/drivers/gpu/drm/vc4/Kconfig
++++ b/drivers/gpu/drm/vc4/Kconfig
+@@ -8,6 +8,7 @@ config DRM_VC4
+ depends on DRM
+ depends on SND && SND_SOC
+ depends on COMMON_CLK
++ depends on PM
+ select DRM_DISPLAY_HDMI_HELPER
+ select DRM_DISPLAY_HELPER
+ select DRM_KMS_HELPER
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 23ff6aa5e8f60..199bc398817fa 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -2875,7 +2875,7 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ return 0;
+ }
+
+-static int __maybe_unused vc4_hdmi_runtime_suspend(struct device *dev)
++static int vc4_hdmi_runtime_suspend(struct device *dev)
+ {
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+
+@@ -2992,17 +2992,15 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ vc4_hdmi->disable_4kp60 = true;
+ }
+
++ pm_runtime_enable(dev);
++
+ /*
+- * We need to have the device powered up at this point to call
+- * our reset hook and for the CEC init.
++ * We need to have the device powered up at this point to call
++ * our reset hook and for the CEC init.
+ */
+- ret = vc4_hdmi_runtime_resume(dev);
++ ret = pm_runtime_resume_and_get(dev);
+ if (ret)
+- goto err_put_ddc;
+-
+- pm_runtime_get_noresume(dev);
+- pm_runtime_set_active(dev);
+- pm_runtime_enable(dev);
++ goto err_disable_runtime_pm;
+
+ if ((of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi0") ||
+ of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi1")) &&
+@@ -3048,6 +3046,7 @@ err_destroy_conn:
+ err_destroy_encoder:
+ drm_encoder_cleanup(encoder);
+ pm_runtime_put_sync(dev);
++err_disable_runtime_pm:
+ pm_runtime_disable(dev);
+ err_put_ddc:
+ put_device(&vc4_hdmi->ddc->dev);
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 1441787a154a8..9b97dc0695e3a 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -285,11 +285,29 @@ static int amd_sfh_irq_init(struct amd_mp2_dev *privdata)
+ return 0;
+ }
+
++static const struct dmi_system_id dmi_nodevs[] = {
++ {
++ /*
++	 * Google Chromebooks use the Chrome OS Embedded Controller
++	 * Sensor Hub instead of Sensor Hub Fusion and leave MP2
++	 * uninitialized, which disables all functionality, including
++	 * the registers necessary for feature detection.
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Google"),
++ },
++ },
++ { }
++};
++
+ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct amd_mp2_dev *privdata;
+ int rc;
+
++ if (dmi_first_match(dmi_nodevs))
++ return -ENODEV;
++
+ privdata = devm_kzalloc(&pdev->dev, sizeof(*privdata), GFP_KERNEL);
+ if (!privdata)
+ return -ENOMEM;
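
The hunk above is the usual DMI-denylist pattern: probe refuses to bind outright on machines whose firmware leaves the device unusable. A minimal sketch of the same pattern follows, reusing the dmi_first_match()/DMI_MATCH() APIs shown in the patch; example_nodevs, example_probe and the vendor string are hypothetical placeholders, not part of the amd_sfh driver.

#include <linux/dmi.h>
#include <linux/pci.h>

/* Hypothetical denylist: refuse to bind on systems whose firmware
 * leaves the device uninitialized, as with MP2 on Chromebooks above. */
static const struct dmi_system_id example_nodevs[] = {
	{
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "ExampleVendor"),
		},
	},
	{ }	/* terminator */
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/* dmi_first_match() returns the first matching table entry,
	 * or NULL when the running system matches none of them. */
	if (dmi_first_match(example_nodevs))
		return -ENODEV;

	/* ...normal resource allocation and setup would follow... */
	return 0;
}
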
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index 08c9a9a60ae47..b59c3dafa6a48 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -1212,6 +1212,13 @@ static __u8 *asus_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ rdesc = new_rdesc;
+ }
+
++ if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD &&
++ *rsize == 331 && rdesc[190] == 0x85 && rdesc[191] == 0x5a &&
++ rdesc[204] == 0x95 && rdesc[205] == 0x05) {
++ hid_info(hdev, "Fixing up Asus N-KEY keyb report descriptor\n");
++ rdesc[205] = 0x01;
++ }
++
+ return rdesc;
+ }
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9c4e92a9c6460..bc550e884f37b 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -185,6 +185,8 @@
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021 0x029c
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021 0x029a
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021 0x029f
++#define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102
++#define USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY 0x8302
+
+ #define USB_VENDOR_ID_ASUS 0x0486
+ #define USB_DEVICE_ID_ASUS_T91MT 0x0185
+@@ -414,6 +416,7 @@
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
+ #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
+ #define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN 0x2A1C
++#define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN 0x279F
+
+ #define USB_VENDOR_ID_ELECOM 0x056e
+ #define USB_DEVICE_ID_ELECOM_BM084 0x0061
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 48c1c02c69f4e..859aeb07542e3 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -383,6 +383,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ HID_BATTERY_QUIRK_IGNORE },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN),
+ HID_BATTERY_QUIRK_IGNORE },
++ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN),
++ HID_BATTERY_QUIRK_IGNORE },
+ {}
+ };
+
+@@ -1532,7 +1534,10 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
+ * assume ours
+ */
+ if (!report->tool)
+- hid_report_set_tool(report, input, usage->code);
++ report->tool = usage->code;
++
++ /* drivers may have changed the value behind our back, resend it */
++ hid_report_set_tool(report, input, report->tool);
+ } else {
+ hid_report_release_tool(report, input, usage->code);
+ }
+diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
+index 4b1173957c17c..f33a03c96ba68 100644
+--- a/drivers/hid/hid-nintendo.c
++++ b/drivers/hid/hid-nintendo.c
+@@ -1222,6 +1222,7 @@ static void joycon_parse_report(struct joycon_ctlr *ctlr,
+
+ spin_lock_irqsave(&ctlr->lock, flags);
+ if (IS_ENABLED(CONFIG_NINTENDO_FF) && rep->vibrator_report &&
++ ctlr->ctlr_state != JOYCON_CTLR_STATE_REMOVED &&
+ (msecs - ctlr->rumble_msecs) >= JC_RUMBLE_PERIOD_MS &&
+ (ctlr->rumble_queue_head != ctlr->rumble_queue_tail ||
+ ctlr->rumble_zero_countdown > 0)) {
+@@ -1546,12 +1547,13 @@ static int joycon_set_rumble(struct joycon_ctlr *ctlr, u16 amp_r, u16 amp_l,
+ ctlr->rumble_queue_head = 0;
+ memcpy(ctlr->rumble_data[ctlr->rumble_queue_head], data,
+ JC_RUMBLE_DATA_SIZE);
+- spin_unlock_irqrestore(&ctlr->lock, flags);
+
+ /* don't wait for the periodic send (reduces latency) */
+- if (schedule_now)
++ if (schedule_now && ctlr->ctlr_state != JOYCON_CTLR_STATE_REMOVED)
+ queue_work(ctlr->rumble_queue, &ctlr->rumble_worker);
+
++ spin_unlock_irqrestore(&ctlr->lock, flags);
++
+ return 0;
+ }
+
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index dc67717d2dabc..70f602c64fd13 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -314,6 +314,8 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_APPLEIR)
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_IRCONTROL) },
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index a3b151b29bd71..fc616db4231bb 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -134,6 +134,11 @@ static int steam_recv_report(struct steam_device *steam,
+ int ret;
+
+ r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
++ if (!r) {
++ hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to read\n");
++ return -EINVAL;
++ }
++
+ if (hid_report_len(r) < 64)
+ return -EINVAL;
+
+@@ -165,6 +170,11 @@ static int steam_send_report(struct steam_device *steam,
+ int ret;
+
+ r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
++ if (!r) {
++	hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to send\n");
++ return -EINVAL;
++ }
++
+ if (hid_report_len(r) < 64)
+ return -EINVAL;
+
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index c3e6d69fdfbd9..cf1679b0d4fbb 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -67,12 +67,13 @@ static const struct tm_wheel_info tm_wheels_infos[] = {
+ {0x0200, 0x0005, "Thrustmaster T300RS (Missing Attachment)"},
+ {0x0206, 0x0005, "Thrustmaster T300RS"},
+ {0x0209, 0x0005, "Thrustmaster T300RS (Open Wheel Attachment)"},
++ {0x020a, 0x0005, "Thrustmaster T300RS (Sparco R383 Mod)"},
+ {0x0204, 0x0005, "Thrustmaster T300 Ferrari Alcantara Edition"},
+ {0x0002, 0x0002, "Thrustmaster T500RS"}
+ //{0x0407, 0x0001, "Thrustmaster TMX"}
+ };
+
+-static const uint8_t tm_wheels_infos_length = 4;
++static const uint8_t tm_wheels_infos_length = 7;
+
+ /*
+ * This struct contains (in little endian) the response data
+diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
+index 681614a8302a5..197b1e7bf029e 100644
+--- a/drivers/hid/hidraw.c
++++ b/drivers/hid/hidraw.c
+@@ -350,6 +350,8 @@ static int hidraw_release(struct inode * inode, struct file * file)
+ down_write(&minors_rwsem);
+
+ spin_lock_irqsave(&hidraw_table[minor]->list_lock, flags);
++ for (int i = list->tail; i < list->head; i++)
++ kfree(list->buffer[i].value);
+ list_del(&list->node);
+ spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
+ kfree(list);
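
The hidraw fix above frees the report values still queued between tail and head when the device node is released, plugging a memory leak. One caveat worth noting: the loop walks the indices linearly, while a ring buffer whose head has wrapped needs a modular walk. A plain-C sketch of the modular variant under hypothetical types (struct ring, RING_SIZE and ring_drain are not hidraw's actual names):

#include <stdlib.h>

#define RING_SIZE 64

struct ring_entry {
	void *value;
	size_t len;
};

struct ring {
	struct ring_entry buffer[RING_SIZE];
	unsigned int head;	/* next slot to write */
	unsigned int tail;	/* next slot to read */
};

/* Free every payload still queued between tail and head, stepping
 * modulo the ring size so no entry is missed when head has wrapped
 * past the end of the array. */
static void ring_drain(struct ring *r)
{
	unsigned int i;

	for (i = r->tail; i != r->head; i = (i + 1) % RING_SIZE) {
		free(r->buffer[i].value);
		r->buffer[i].value = NULL;
	}
	r->tail = r->head;
}
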
+diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+index e600dbf04dfc6..fc108f19a64c3 100644
+--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+@@ -32,6 +32,7 @@
+ #define ADL_P_DEVICE_ID 0x51FC
+ #define ADL_N_DEVICE_ID 0x54FC
+ #define RPL_S_DEVICE_ID 0x7A78
++#define MTL_P_DEVICE_ID 0x7E45
+
+ #define REVISION_ID_CHT_A0 0x6
+ #define REVISION_ID_CHT_Ax_SI 0x0
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index 2c67ec17bec6f..7120b30ac51d0 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -43,6 +43,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_P_DEVICE_ID)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_N_DEVICE_ID)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, RPL_S_DEVICE_ID)},
++ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MTL_P_DEVICE_ID)},
+ {0, }
+ };
+ MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index a9666373af6b9..92d6db1ad00f5 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2610,6 +2610,7 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ del_timer_sync(&hdw->encoder_run_timer);
+ del_timer_sync(&hdw->encoder_wait_timer);
+ flush_work(&hdw->workpoll);
++ v4l2_device_unregister(&hdw->v4l2_dev);
+ usb_free_urb(hdw->ctl_read_urb);
+ usb_free_urb(hdw->ctl_write_urb);
+ kfree(hdw->ctl_read_buffer);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 9da4489dc345a..378a26a1825c4 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2414,6 +2414,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ /* disable busy check */
+ sdr_clr_bits(host->base + MSDC_PATCH_BIT1, MSDC_PB1_BUSY_CHECK_SEL);
+
++ val = readl(host->base + MSDC_INT);
++ writel(val, host->base + MSDC_INT);
++
+ if (recovery) {
+ sdr_set_field(host->base + MSDC_DMA_CTRL,
+ MSDC_DMA_CTRL_STOP, 1);
+@@ -2871,11 +2874,14 @@ static int __maybe_unused msdc_suspend(struct device *dev)
+ {
+ struct mmc_host *mmc = dev_get_drvdata(dev);
+ int ret;
++ u32 val;
+
+ if (mmc->caps2 & MMC_CAP2_CQE) {
+ ret = cqhci_suspend(mmc);
+ if (ret)
+ return ret;
++ val = readl(((struct msdc_host *)mmc_priv(mmc))->base + MSDC_INT);
++ writel(val, ((struct msdc_host *)mmc_priv(mmc))->base + MSDC_INT);
+ }
+
+ return pm_runtime_force_suspend(dev);
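
Both mtk-sd hunks above clear MSDC_INT by writing back the value that was just read, the standard idiom for a write-one-to-clear (W1C) status register: writing a 1 acknowledges exactly the bits that were latched at read time, and bits that latch afterwards stay pending, so no event is lost. A minimal sketch of the idiom (w1c_clear_all is a hypothetical helper, not a kernel API):

#include <linux/io.h>

/* Acknowledge all currently pending bits in a W1C status register:
 * read the latched status and write the same value straight back. */
static inline void w1c_clear_all(void __iomem *status_reg)
{
	u32 val = readl(status_reg);

	writel(val, status_reg);
}
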
+diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
+index bac874ab0b33a..335c88fd849c4 100644
+--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
+@@ -15,6 +15,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
++#include <linux/reset.h>
+ #include <linux/sizes.h>
+
+ #include "sdhci-pltfm.h"
+@@ -55,14 +56,15 @@
+ #define DLL_LOCK_WO_TMOUT(x) \
+ ((((x) & DWCMSHC_EMMC_DLL_LOCKED) == DWCMSHC_EMMC_DLL_LOCKED) && \
+ (((x) & DWCMSHC_EMMC_DLL_TIMEOUT) == 0))
+-#define RK3568_MAX_CLKS 3
++#define RK35xx_MAX_CLKS 3
+
+ #define BOUNDARY_OK(addr, len) \
+ ((addr | (SZ_128M - 1)) == ((addr + len - 1) | (SZ_128M - 1)))
+
+-struct rk3568_priv {
++struct rk35xx_priv {
+ /* Rockchip specified optional clocks */
+- struct clk_bulk_data rockchip_clks[RK3568_MAX_CLKS];
++ struct clk_bulk_data rockchip_clks[RK35xx_MAX_CLKS];
++ struct reset_control *reset;
+ u8 txclk_tapnum;
+ };
+
+@@ -176,7 +178,7 @@ static void dwcmshc_rk3568_set_clock(struct sdhci_host *host, unsigned int clock
+ {
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+- struct rk3568_priv *priv = dwc_priv->priv;
++ struct rk35xx_priv *priv = dwc_priv->priv;
+ u8 txclk_tapnum = DLL_TXCLK_TAPNUM_DEFAULT;
+ u32 extra, reg;
+ int err;
+@@ -255,6 +257,21 @@ static void dwcmshc_rk3568_set_clock(struct sdhci_host *host, unsigned int clock
+ sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_STRBIN);
+ }
+
++static void rk35xx_sdhci_reset(struct sdhci_host *host, u8 mask)
++{
++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++ struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
++ struct rk35xx_priv *priv = dwc_priv->priv;
++
++ if (mask & SDHCI_RESET_ALL && priv->reset) {
++ reset_control_assert(priv->reset);
++ udelay(1);
++ reset_control_deassert(priv->reset);
++ }
++
++ sdhci_reset(host, mask);
++}
++
+ static const struct sdhci_ops sdhci_dwcmshc_ops = {
+ .set_clock = sdhci_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+@@ -264,12 +281,12 @@ static const struct sdhci_ops sdhci_dwcmshc_ops = {
+ .adma_write_desc = dwcmshc_adma_write_desc,
+ };
+
+-static const struct sdhci_ops sdhci_dwcmshc_rk3568_ops = {
++static const struct sdhci_ops sdhci_dwcmshc_rk35xx_ops = {
+ .set_clock = dwcmshc_rk3568_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+ .set_uhs_signaling = dwcmshc_set_uhs_signaling,
+ .get_max_clock = sdhci_pltfm_clk_get_max_clock,
+- .reset = sdhci_reset,
++ .reset = rk35xx_sdhci_reset,
+ .adma_write_desc = dwcmshc_adma_write_desc,
+ };
+
+@@ -279,30 +296,46 @@ static const struct sdhci_pltfm_data sdhci_dwcmshc_pdata = {
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ };
+
+-static const struct sdhci_pltfm_data sdhci_dwcmshc_rk3568_pdata = {
+- .ops = &sdhci_dwcmshc_rk3568_ops,
++#ifdef CONFIG_ACPI
++static const struct sdhci_pltfm_data sdhci_dwcmshc_bf3_pdata = {
++ .ops = &sdhci_dwcmshc_ops,
++ .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++ SDHCI_QUIRK2_ACMD23_BROKEN,
++};
++#endif
++
++static const struct sdhci_pltfm_data sdhci_dwcmshc_rk35xx_pdata = {
++ .ops = &sdhci_dwcmshc_rk35xx_ops,
+ .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
+ SDHCI_QUIRK_BROKEN_TIMEOUT_VAL,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+ SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN,
+ };
+
+-static int dwcmshc_rk3568_init(struct sdhci_host *host, struct dwcmshc_priv *dwc_priv)
++static int dwcmshc_rk35xx_init(struct sdhci_host *host, struct dwcmshc_priv *dwc_priv)
+ {
+ int err;
+- struct rk3568_priv *priv = dwc_priv->priv;
++ struct rk35xx_priv *priv = dwc_priv->priv;
++
++ priv->reset = devm_reset_control_array_get_optional_exclusive(mmc_dev(host->mmc));
++ if (IS_ERR(priv->reset)) {
++ err = PTR_ERR(priv->reset);
++ dev_err(mmc_dev(host->mmc), "failed to get reset control %d\n", err);
++ return err;
++ }
+
+ priv->rockchip_clks[0].id = "axi";
+ priv->rockchip_clks[1].id = "block";
+ priv->rockchip_clks[2].id = "timer";
+- err = devm_clk_bulk_get_optional(mmc_dev(host->mmc), RK3568_MAX_CLKS,
++ err = devm_clk_bulk_get_optional(mmc_dev(host->mmc), RK35xx_MAX_CLKS,
+ priv->rockchip_clks);
+ if (err) {
+ dev_err(mmc_dev(host->mmc), "failed to get clocks %d\n", err);
+ return err;
+ }
+
+- err = clk_bulk_prepare_enable(RK3568_MAX_CLKS, priv->rockchip_clks);
++ err = clk_bulk_prepare_enable(RK35xx_MAX_CLKS, priv->rockchip_clks);
+ if (err) {
+ dev_err(mmc_dev(host->mmc), "failed to enable clocks %d\n", err);
+ return err;
+@@ -324,7 +357,7 @@ static int dwcmshc_rk3568_init(struct sdhci_host *host, struct dwcmshc_priv *dwc
+ static const struct of_device_id sdhci_dwcmshc_dt_ids[] = {
+ {
+ .compatible = "rockchip,rk3568-dwcmshc",
+- .data = &sdhci_dwcmshc_rk3568_pdata,
++ .data = &sdhci_dwcmshc_rk35xx_pdata,
+ },
+ {
+ .compatible = "snps,dwcmshc-sdhci",
+@@ -336,7 +369,10 @@ MODULE_DEVICE_TABLE(of, sdhci_dwcmshc_dt_ids);
+
+ #ifdef CONFIG_ACPI
+ static const struct acpi_device_id sdhci_dwcmshc_acpi_ids[] = {
+- { .id = "MLNXBF30" },
++ {
++ .id = "MLNXBF30",
++ .driver_data = (kernel_ulong_t)&sdhci_dwcmshc_bf3_pdata,
++ },
+ {}
+ };
+ #endif
+@@ -347,12 +383,12 @@ static int dwcmshc_probe(struct platform_device *pdev)
+ struct sdhci_pltfm_host *pltfm_host;
+ struct sdhci_host *host;
+ struct dwcmshc_priv *priv;
+- struct rk3568_priv *rk_priv = NULL;
++ struct rk35xx_priv *rk_priv = NULL;
+ const struct sdhci_pltfm_data *pltfm_data;
+ int err;
+ u32 extra;
+
+- pltfm_data = of_device_get_match_data(&pdev->dev);
++ pltfm_data = device_get_match_data(&pdev->dev);
+ if (!pltfm_data) {
+ dev_err(&pdev->dev, "Error: No device match data found\n");
+ return -ENODEV;
+@@ -402,8 +438,8 @@ static int dwcmshc_probe(struct platform_device *pdev)
+ host->mmc_host_ops.request = dwcmshc_request;
+ host->mmc_host_ops.hs400_enhanced_strobe = dwcmshc_hs400_enhanced_strobe;
+
+- if (pltfm_data == &sdhci_dwcmshc_rk3568_pdata) {
+- rk_priv = devm_kzalloc(&pdev->dev, sizeof(struct rk3568_priv), GFP_KERNEL);
++ if (pltfm_data == &sdhci_dwcmshc_rk35xx_pdata) {
++ rk_priv = devm_kzalloc(&pdev->dev, sizeof(struct rk35xx_priv), GFP_KERNEL);
+ if (!rk_priv) {
+ err = -ENOMEM;
+ goto err_clk;
+@@ -411,7 +447,7 @@ static int dwcmshc_probe(struct platform_device *pdev)
+
+ priv->priv = rk_priv;
+
+- err = dwcmshc_rk3568_init(host, priv);
++ err = dwcmshc_rk35xx_init(host, priv);
+ if (err)
+ goto err_clk;
+ }
+@@ -428,7 +464,7 @@ err_clk:
+ clk_disable_unprepare(pltfm_host->clk);
+ clk_disable_unprepare(priv->bus_clk);
+ if (rk_priv)
+- clk_bulk_disable_unprepare(RK3568_MAX_CLKS,
++ clk_bulk_disable_unprepare(RK35xx_MAX_CLKS,
+ rk_priv->rockchip_clks);
+ free_pltfm:
+ sdhci_pltfm_free(pdev);
+@@ -440,14 +476,14 @@ static int dwcmshc_remove(struct platform_device *pdev)
+ struct sdhci_host *host = platform_get_drvdata(pdev);
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host);
+- struct rk3568_priv *rk_priv = priv->priv;
++ struct rk35xx_priv *rk_priv = priv->priv;
+
+ sdhci_remove_host(host, 0);
+
+ clk_disable_unprepare(pltfm_host->clk);
+ clk_disable_unprepare(priv->bus_clk);
+ if (rk_priv)
+- clk_bulk_disable_unprepare(RK3568_MAX_CLKS,
++ clk_bulk_disable_unprepare(RK35xx_MAX_CLKS,
+ rk_priv->rockchip_clks);
+ sdhci_pltfm_free(pdev);
+
+@@ -460,7 +496,7 @@ static int dwcmshc_suspend(struct device *dev)
+ struct sdhci_host *host = dev_get_drvdata(dev);
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host);
+- struct rk3568_priv *rk_priv = priv->priv;
++ struct rk35xx_priv *rk_priv = priv->priv;
+ int ret;
+
+ ret = sdhci_suspend_host(host);
+@@ -472,7 +508,7 @@ static int dwcmshc_suspend(struct device *dev)
+ clk_disable_unprepare(priv->bus_clk);
+
+ if (rk_priv)
+- clk_bulk_disable_unprepare(RK3568_MAX_CLKS,
++ clk_bulk_disable_unprepare(RK35xx_MAX_CLKS,
+ rk_priv->rockchip_clks);
+
+ return ret;
+@@ -483,7 +519,7 @@ static int dwcmshc_resume(struct device *dev)
+ struct sdhci_host *host = dev_get_drvdata(dev);
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host);
+- struct rk3568_priv *rk_priv = priv->priv;
++ struct rk35xx_priv *rk_priv = priv->priv;
+ int ret;
+
+ ret = clk_prepare_enable(pltfm_host->clk);
+@@ -497,7 +533,7 @@ static int dwcmshc_resume(struct device *dev)
+ }
+
+ if (rk_priv) {
+- ret = clk_bulk_prepare_enable(RK3568_MAX_CLKS,
++ ret = clk_bulk_prepare_enable(RK35xx_MAX_CLKS,
+ rk_priv->rockchip_clks);
+ if (ret)
+ return ret;
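
The new rk35xx_sdhci_reset() above pulses an optional external reset line before the normal register-level reset. devm_reset_control_array_get_optional_exclusive() returns NULL rather than an error when the devicetree specifies no resets, which is why a plain NULL check is enough before asserting. A sketch of the same pulse, with example_full_reset as a hypothetical name:

#include <linux/delay.h>
#include <linux/reset.h>

/* Pulse an optional reset line for a full controller reset; a NULL
 * reset (none specified in DT) is simply skipped. */
static void example_full_reset(struct reset_control *reset)
{
	if (reset) {
		reset_control_assert(reset);
		udelay(1);	/* brief assertion pulse, as in the patch */
		reset_control_deassert(reset);
	}
}
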
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index 1d6e3b641b2e6..d928b75f37803 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -710,7 +710,7 @@ static void lan966x_cleanup_ports(struct lan966x *lan966x)
+ disable_irq(lan966x->xtr_irq);
+ lan966x->xtr_irq = -ENXIO;
+
+- if (lan966x->ana_irq) {
++ if (lan966x->ana_irq > 0) {
+ disable_irq(lan966x->ana_irq);
+ lan966x->ana_irq = -ENXIO;
+ }
+@@ -718,10 +718,10 @@ static void lan966x_cleanup_ports(struct lan966x *lan966x)
+ if (lan966x->fdma)
+ devm_free_irq(lan966x->dev, lan966x->fdma_irq, lan966x);
+
+- if (lan966x->ptp_irq)
++ if (lan966x->ptp_irq > 0)
+ devm_free_irq(lan966x->dev, lan966x->ptp_irq, lan966x);
+
+- if (lan966x->ptp_ext_irq)
++ if (lan966x->ptp_ext_irq > 0)
+ devm_free_irq(lan966x->dev, lan966x->ptp_ext_irq, lan966x);
+ }
+
+@@ -1049,7 +1049,7 @@ static int lan966x_probe(struct platform_device *pdev)
+ }
+
+ lan966x->ana_irq = platform_get_irq_byname(pdev, "ana");
+- if (lan966x->ana_irq) {
++ if (lan966x->ana_irq > 0) {
+ err = devm_request_threaded_irq(&pdev->dev, lan966x->ana_irq, NULL,
+ lan966x_ana_irq_handler, IRQF_ONESHOT,
+ "ana irq", lan966x);
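
The lan966x change relies on platform_get_irq_byname() returning either a positive interrupt number or a negative errno; it never uses 0 to mean "absent", so a bare "if (irq)" mistakes error codes such as -ENXIO for valid interrupts. A sketch of the corrected check (example_get_optional_irq is a hypothetical helper):

#include <linux/platform_device.h>

/* Return a usable IRQ number, or 0 when the named interrupt is not
 * provided; never propagate a negative errno as if it were an IRQ. */
static int example_get_optional_irq(struct platform_device *pdev,
				    const char *name)
{
	int irq = platform_get_irq_byname(pdev, name);

	return irq > 0 ? irq : 0;
}
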
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 1ac7fec47d6fb..604feeb84ee40 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -222,8 +222,15 @@ static int get_port_device_capability(struct pci_dev *dev)
+
+ #ifdef CONFIG_PCIEAER
+ if (dev->aer_cap && pci_aer_available() &&
+- (pcie_ports_native || host->native_aer))
++ (pcie_ports_native || host->native_aer)) {
+ services |= PCIE_PORT_SERVICE_AER;
++
++ /*
++ * Disable AER on this port in case it's been enabled by the
++ * BIOS (the AER service driver will enable it when necessary).
++ */
++ pci_disable_pcie_error_reporting(dev);
++ }
+ #endif
+
+ /* Root Ports and Root Complex Event Collectors may generate PMEs */
+diff --git a/drivers/platform/x86/serial-multi-instantiate.c b/drivers/platform/x86/serial-multi-instantiate.c
+index 1e8063b7c169e..e98007197cf52 100644
+--- a/drivers/platform/x86/serial-multi-instantiate.c
++++ b/drivers/platform/x86/serial-multi-instantiate.c
+@@ -329,6 +329,7 @@ static const struct acpi_device_id smi_acpi_ids[] = {
+ { "CSC3551", (unsigned long)&cs35l41_hda },
+ /* Non-conforming _HID for Cirrus Logic already released */
+ { "CLSA0100", (unsigned long)&cs35l41_hda },
++ { "CLSA0101", (unsigned long)&cs35l41_hda },
+ { }
+ };
+ MODULE_DEVICE_TABLE(acpi, smi_acpi_ids);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 7886497253ccf..cafcf260394cd 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1728,13 +1728,14 @@ static int usb_udc_uevent(struct device *dev, struct kobj_uevent_env *env)
+ return ret;
+ }
+
+- if (udc->driver) {
++ mutex_lock(&udc_lock);
++ if (udc->driver)
+ ret = add_uevent_var(env, "USB_UDC_DRIVER=%s",
+ udc->driver->function);
+- if (ret) {
+- dev_err(dev, "failed to add uevent USB_UDC_DRIVER\n");
+- return ret;
+- }
++ mutex_unlock(&udc_lock);
++ if (ret) {
++ dev_err(dev, "failed to add uevent USB_UDC_DRIVER\n");
++ return ret;
+ }
+
+ return 0;
+diff --git a/drivers/video/fbdev/pm2fb.c b/drivers/video/fbdev/pm2fb.c
+index d3be2c64f1c08..8fd79deb1e2ae 100644
+--- a/drivers/video/fbdev/pm2fb.c
++++ b/drivers/video/fbdev/pm2fb.c
+@@ -617,6 +617,11 @@ static int pm2fb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ return -EINVAL;
+ }
+
++ if (!var->pixclock) {
++ DPRINTK("pixclock is zero\n");
++ return -EINVAL;
++ }
++
+ if (PICOS2KHZ(var->pixclock) > PM2_MAX_PIXCLOCK) {
+ DPRINTK("pixclock too high (%ldKHz)\n",
+ PICOS2KHZ(var->pixclock));
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 6e556031a8f3a..ebfa35fe1c38b 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2075,6 +2075,9 @@ cow_done:
+
+ if (!p->skip_locking) {
+ level = btrfs_header_level(b);
++
++ btrfs_maybe_reset_lockdep_class(root, b);
++
+ if (level <= write_lock_level) {
+ btrfs_tree_lock(b);
+ p->locks[level] = BTRFS_WRITE_LOCK;
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 7d3ca3ea0bcec..4d8acd7e63eb5 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1146,6 +1146,8 @@ enum {
+ BTRFS_ROOT_ORPHAN_CLEANUP,
+ /* This root has a drop operation that was started previously. */
+ BTRFS_ROOT_UNFINISHED_DROP,
++ /* This reloc root needs to have its buffers lockdep class reset. */
++ BTRFS_ROOT_RESET_LOCKDEP_CLASS,
+ };
+
+ static inline void btrfs_wake_unfinished_drop(struct btrfs_fs_info *fs_info)
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index bc30306615837..a2505cfc6bc10 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -121,88 +121,6 @@ struct async_submit_bio {
+ blk_status_t status;
+ };
+
+-/*
+- * Lockdep class keys for extent_buffer->lock's in this root. For a given
+- * eb, the lockdep key is determined by the btrfs_root it belongs to and
+- * the level the eb occupies in the tree.
+- *
+- * Different roots are used for different purposes and may nest inside each
+- * other and they require separate keysets. As lockdep keys should be
+- * static, assign keysets according to the purpose of the root as indicated
+- * by btrfs_root->root_key.objectid. This ensures that all special purpose
+- * roots have separate keysets.
+- *
+- * Lock-nesting across peer nodes is always done with the immediate parent
+- * node locked thus preventing deadlock. As lockdep doesn't know this, use
+- * subclass to avoid triggering lockdep warning in such cases.
+- *
+- * The key is set by the readpage_end_io_hook after the buffer has passed
+- * csum validation but before the pages are unlocked. It is also set by
+- * btrfs_init_new_buffer on freshly allocated blocks.
+- *
+- * We also add a check to make sure the highest level of the tree is the
+- * same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this code
+- * needs update as well.
+- */
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# if BTRFS_MAX_LEVEL != 8
+-# error
+-# endif
+-
+-#define DEFINE_LEVEL(stem, level) \
+- .names[level] = "btrfs-" stem "-0" #level,
+-
+-#define DEFINE_NAME(stem) \
+- DEFINE_LEVEL(stem, 0) \
+- DEFINE_LEVEL(stem, 1) \
+- DEFINE_LEVEL(stem, 2) \
+- DEFINE_LEVEL(stem, 3) \
+- DEFINE_LEVEL(stem, 4) \
+- DEFINE_LEVEL(stem, 5) \
+- DEFINE_LEVEL(stem, 6) \
+- DEFINE_LEVEL(stem, 7)
+-
+-static struct btrfs_lockdep_keyset {
+- u64 id; /* root objectid */
+- /* Longest entry: btrfs-free-space-00 */
+- char names[BTRFS_MAX_LEVEL][20];
+- struct lock_class_key keys[BTRFS_MAX_LEVEL];
+-} btrfs_lockdep_keysets[] = {
+- { .id = BTRFS_ROOT_TREE_OBJECTID, DEFINE_NAME("root") },
+- { .id = BTRFS_EXTENT_TREE_OBJECTID, DEFINE_NAME("extent") },
+- { .id = BTRFS_CHUNK_TREE_OBJECTID, DEFINE_NAME("chunk") },
+- { .id = BTRFS_DEV_TREE_OBJECTID, DEFINE_NAME("dev") },
+- { .id = BTRFS_CSUM_TREE_OBJECTID, DEFINE_NAME("csum") },
+- { .id = BTRFS_QUOTA_TREE_OBJECTID, DEFINE_NAME("quota") },
+- { .id = BTRFS_TREE_LOG_OBJECTID, DEFINE_NAME("log") },
+- { .id = BTRFS_TREE_RELOC_OBJECTID, DEFINE_NAME("treloc") },
+- { .id = BTRFS_DATA_RELOC_TREE_OBJECTID, DEFINE_NAME("dreloc") },
+- { .id = BTRFS_UUID_TREE_OBJECTID, DEFINE_NAME("uuid") },
+- { .id = BTRFS_FREE_SPACE_TREE_OBJECTID, DEFINE_NAME("free-space") },
+- { .id = 0, DEFINE_NAME("tree") },
+-};
+-
+-#undef DEFINE_LEVEL
+-#undef DEFINE_NAME
+-
+-void btrfs_set_buffer_lockdep_class(u64 objectid, struct extent_buffer *eb,
+- int level)
+-{
+- struct btrfs_lockdep_keyset *ks;
+-
+- BUG_ON(level >= ARRAY_SIZE(ks->keys));
+-
+- /* find the matching keyset, id 0 is the default entry */
+- for (ks = btrfs_lockdep_keysets; ks->id; ks++)
+- if (ks->id == objectid)
+- break;
+-
+- lockdep_set_class_and_name(&eb->lock,
+- &ks->keys[level], ks->names[level]);
+-}
+-
+-#endif
+-
+ /*
+ * Compute the csum of a btree block and store the result to provided buffer.
+ */
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 4ee8c42c9f783..b4962b7d7117d 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -148,14 +148,4 @@ int btrfs_init_root_free_objectid(struct btrfs_root *root);
+ int __init btrfs_end_io_wq_init(void);
+ void __cold btrfs_end_io_wq_exit(void);
+
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-void btrfs_set_buffer_lockdep_class(u64 objectid,
+- struct extent_buffer *eb, int level);
+-#else
+-static inline void btrfs_set_buffer_lockdep_class(u64 objectid,
+- struct extent_buffer *eb, int level)
+-{
+-}
+-#endif
+-
+ #endif
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index ced3fc76063f1..92f3f5ed8bf1e 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4871,6 +4871,7 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ {
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct extent_buffer *buf;
++ u64 lockdep_owner = owner;
+
+ buf = btrfs_find_create_tree_block(fs_info, bytenr, owner, level);
+ if (IS_ERR(buf))
+@@ -4889,12 +4890,27 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ return ERR_PTR(-EUCLEAN);
+ }
+
++ /*
++ * The reloc trees are just snapshots, so we need them to appear to be
++ * just like any other fs tree WRT lockdep.
++ *
++ * The exception however is in replace_path() in relocation, where we
++ * hold the lock on the original fs root and then search for the reloc
++ * root. At that point we need to make sure any reloc root buffers are
++ * set to the BTRFS_TREE_RELOC_OBJECTID lockdep class in order to make
++ * lockdep happy.
++ */
++ if (lockdep_owner == BTRFS_TREE_RELOC_OBJECTID &&
++ !test_bit(BTRFS_ROOT_RESET_LOCKDEP_CLASS, &root->state))
++ lockdep_owner = BTRFS_FS_TREE_OBJECTID;
++
+ /*
+ * This needs to stay, because we could allocate a freed block from an
+ * old tree into a new tree, so we need to make sure this new block is
+ * set to the appropriate level and owner.
+ */
+- btrfs_set_buffer_lockdep_class(owner, buf, level);
++ btrfs_set_buffer_lockdep_class(lockdep_owner, buf, level);
++
+ __btrfs_tree_lock(buf, nest);
+ btrfs_clean_tree_block(buf);
+ clear_bit(EXTENT_BUFFER_STALE, &buf->bflags);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index cda25018ebd74..5785ed241f6f8 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -6228,6 +6228,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ struct extent_buffer *exists = NULL;
+ struct page *p;
+ struct address_space *mapping = fs_info->btree_inode->i_mapping;
++ u64 lockdep_owner = owner_root;
+ int uptodate = 1;
+ int ret;
+
+@@ -6252,7 +6253,15 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ eb = __alloc_extent_buffer(fs_info, start, len);
+ if (!eb)
+ return ERR_PTR(-ENOMEM);
+- btrfs_set_buffer_lockdep_class(owner_root, eb, level);
++
++ /*
++ * The reloc trees are just snapshots, so we need them to appear to be
++ * just like any other fs tree WRT lockdep.
++ */
++ if (lockdep_owner == BTRFS_TREE_RELOC_OBJECTID)
++ lockdep_owner = BTRFS_FS_TREE_OBJECTID;
++
++ btrfs_set_buffer_lockdep_class(lockdep_owner, eb, level);
+
+ num_pages = num_extent_pages(eb);
+ for (i = 0; i < num_pages; i++, index++) {
+diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
+index 33461b4f9c8b5..9063072b399bd 100644
+--- a/fs/btrfs/locking.c
++++ b/fs/btrfs/locking.c
+@@ -13,6 +13,93 @@
+ #include "extent_io.h"
+ #include "locking.h"
+
++/*
++ * Lockdep class keys for extent_buffer->lock's in this root. For a given
++ * eb, the lockdep key is determined by the btrfs_root it belongs to and
++ * the level the eb occupies in the tree.
++ *
++ * Different roots are used for different purposes and may nest inside each
++ * other and they require separate keysets. As lockdep keys should be
++ * static, assign keysets according to the purpose of the root as indicated
++ * by btrfs_root->root_key.objectid. This ensures that all special purpose
++ * roots have separate keysets.
++ *
++ * Lock-nesting across peer nodes is always done with the immediate parent
++ * node locked thus preventing deadlock. As lockdep doesn't know this, use
++ * subclass to avoid triggering lockdep warning in such cases.
++ *
++ * The key is set by the readpage_end_io_hook after the buffer has passed
++ * csum validation but before the pages are unlocked. It is also set by
++ * btrfs_init_new_buffer on freshly allocated blocks.
++ *
++ * We also add a check to make sure the highest level of the tree is the
++ * same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this code
++ * needs update as well.
++ */
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++#if BTRFS_MAX_LEVEL != 8
++#error
++#endif
++
++#define DEFINE_LEVEL(stem, level) \
++ .names[level] = "btrfs-" stem "-0" #level,
++
++#define DEFINE_NAME(stem) \
++ DEFINE_LEVEL(stem, 0) \
++ DEFINE_LEVEL(stem, 1) \
++ DEFINE_LEVEL(stem, 2) \
++ DEFINE_LEVEL(stem, 3) \
++ DEFINE_LEVEL(stem, 4) \
++ DEFINE_LEVEL(stem, 5) \
++ DEFINE_LEVEL(stem, 6) \
++ DEFINE_LEVEL(stem, 7)
++
++static struct btrfs_lockdep_keyset {
++ u64 id; /* root objectid */
++ /* Longest entry: btrfs-free-space-00 */
++ char names[BTRFS_MAX_LEVEL][20];
++ struct lock_class_key keys[BTRFS_MAX_LEVEL];
++} btrfs_lockdep_keysets[] = {
++ { .id = BTRFS_ROOT_TREE_OBJECTID, DEFINE_NAME("root") },
++ { .id = BTRFS_EXTENT_TREE_OBJECTID, DEFINE_NAME("extent") },
++ { .id = BTRFS_CHUNK_TREE_OBJECTID, DEFINE_NAME("chunk") },
++ { .id = BTRFS_DEV_TREE_OBJECTID, DEFINE_NAME("dev") },
++ { .id = BTRFS_CSUM_TREE_OBJECTID, DEFINE_NAME("csum") },
++ { .id = BTRFS_QUOTA_TREE_OBJECTID, DEFINE_NAME("quota") },
++ { .id = BTRFS_TREE_LOG_OBJECTID, DEFINE_NAME("log") },
++ { .id = BTRFS_TREE_RELOC_OBJECTID, DEFINE_NAME("treloc") },
++ { .id = BTRFS_DATA_RELOC_TREE_OBJECTID, DEFINE_NAME("dreloc") },
++ { .id = BTRFS_UUID_TREE_OBJECTID, DEFINE_NAME("uuid") },
++ { .id = BTRFS_FREE_SPACE_TREE_OBJECTID, DEFINE_NAME("free-space") },
++ { .id = 0, DEFINE_NAME("tree") },
++};
++
++#undef DEFINE_LEVEL
++#undef DEFINE_NAME
++
++void btrfs_set_buffer_lockdep_class(u64 objectid, struct extent_buffer *eb, int level)
++{
++ struct btrfs_lockdep_keyset *ks;
++
++ BUG_ON(level >= ARRAY_SIZE(ks->keys));
++
++ /* Find the matching keyset, id 0 is the default entry */
++ for (ks = btrfs_lockdep_keysets; ks->id; ks++)
++ if (ks->id == objectid)
++ break;
++
++ lockdep_set_class_and_name(&eb->lock, &ks->keys[level], ks->names[level]);
++}
++
++void btrfs_maybe_reset_lockdep_class(struct btrfs_root *root, struct extent_buffer *eb)
++{
++ if (test_bit(BTRFS_ROOT_RESET_LOCKDEP_CLASS, &root->state))
++ btrfs_set_buffer_lockdep_class(root->root_key.objectid,
++ eb, btrfs_header_level(eb));
++}
++
++#endif
++
+ /*
+ * Extent buffer locking
+ * =====================
+@@ -164,6 +251,8 @@ struct extent_buffer *btrfs_lock_root_node(struct btrfs_root *root)
+
+ while (1) {
+ eb = btrfs_root_node(root);
++
++ btrfs_maybe_reset_lockdep_class(root, eb);
+ btrfs_tree_lock(eb);
+ if (eb == root->node)
+ break;
+@@ -185,6 +274,8 @@ struct extent_buffer *btrfs_read_lock_root_node(struct btrfs_root *root)
+
+ while (1) {
+ eb = btrfs_root_node(root);
++
++ btrfs_maybe_reset_lockdep_class(root, eb);
+ btrfs_tree_read_lock(eb);
+ if (eb == root->node)
+ break;
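
The block moved into locking.c above builds a table of lock_class_key objects so each root purpose and tree level gets its own lockdep class. The primitive underneath is small: lockdep_set_class_and_name() reassigns one lock instance to a given static key. A sketch under a debug-lockdep kernel config (example_key and example_init_lock are hypothetical):

#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Give this lock its own lockdep class so it stops sharing a class
 * with other locks initialised at the same source location, avoiding
 * false deadlock reports when the two are nested. */
static struct lock_class_key example_key;

static void example_init_lock(spinlock_t *lock)
{
	spin_lock_init(lock);
	lockdep_set_class_and_name(lock, &example_key, "example-lock");
}
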
+diff --git a/fs/btrfs/locking.h b/fs/btrfs/locking.h
+index bbc45534ae9a6..ab268be09bb54 100644
+--- a/fs/btrfs/locking.h
++++ b/fs/btrfs/locking.h
+@@ -131,4 +131,18 @@ void btrfs_drew_write_unlock(struct btrfs_drew_lock *lock);
+ void btrfs_drew_read_lock(struct btrfs_drew_lock *lock);
+ void btrfs_drew_read_unlock(struct btrfs_drew_lock *lock);
+
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++void btrfs_set_buffer_lockdep_class(u64 objectid, struct extent_buffer *eb, int level);
++void btrfs_maybe_reset_lockdep_class(struct btrfs_root *root, struct extent_buffer *eb);
++#else
++static inline void btrfs_set_buffer_lockdep_class(u64 objectid,
++ struct extent_buffer *eb, int level)
++{
++}
++static inline void btrfs_maybe_reset_lockdep_class(struct btrfs_root *root,
++ struct extent_buffer *eb)
++{
++}
++#endif
++
+ #endif
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 33411baf5c7a3..45c02aba2492b 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1326,7 +1326,9 @@ again:
+ btrfs_release_path(path);
+
+ path->lowest_level = level;
++ set_bit(BTRFS_ROOT_RESET_LOCKDEP_CLASS, &src->state);
+ ret = btrfs_search_slot(trans, src, &key, path, 0, 1);
++ clear_bit(BTRFS_ROOT_RESET_LOCKDEP_CLASS, &src->state);
+ path->lowest_level = 0;
+ if (ret) {
+ if (ret > 0)
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 9e0e0ae2288cd..43f905ab0a18d 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1233,7 +1233,8 @@ static void extent_err(const struct extent_buffer *eb, int slot,
+ }
+
+ static int check_extent_item(struct extent_buffer *leaf,
+- struct btrfs_key *key, int slot)
++ struct btrfs_key *key, int slot,
++ struct btrfs_key *prev_key)
+ {
+ struct btrfs_fs_info *fs_info = leaf->fs_info;
+ struct btrfs_extent_item *ei;
+@@ -1453,6 +1454,26 @@ static int check_extent_item(struct extent_buffer *leaf,
+ total_refs, inline_refs);
+ return -EUCLEAN;
+ }
++
++ if ((prev_key->type == BTRFS_EXTENT_ITEM_KEY) ||
++ (prev_key->type == BTRFS_METADATA_ITEM_KEY)) {
++ u64 prev_end = prev_key->objectid;
++
++ if (prev_key->type == BTRFS_METADATA_ITEM_KEY)
++ prev_end += fs_info->nodesize;
++ else
++ prev_end += prev_key->offset;
++
++ if (unlikely(prev_end > key->objectid)) {
++ extent_err(leaf, slot,
++ "previous extent [%llu %u %llu] overlaps current extent [%llu %u %llu]",
++ prev_key->objectid, prev_key->type,
++ prev_key->offset, key->objectid, key->type,
++ key->offset);
++ return -EUCLEAN;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -1621,7 +1642,7 @@ static int check_leaf_item(struct extent_buffer *leaf,
+ break;
+ case BTRFS_EXTENT_ITEM_KEY:
+ case BTRFS_METADATA_ITEM_KEY:
+- ret = check_extent_item(leaf, key, slot);
++ ret = check_extent_item(leaf, key, slot, prev_key);
+ break;
+ case BTRFS_TREE_BLOCK_REF_KEY:
+ case BTRFS_SHARED_DATA_REF_KEY:
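
The new tree-checker test gets away with the one-sided comparison prev_end > key->objectid because items in a btrfs leaf are already sorted by key, so only the previous item can overlap the current one from below. For unsorted ranges the general half-open interval test applies; a sketch for comparison (ranges_overlap is a hypothetical helper):

#include <stdbool.h>
#include <stdint.h>

/* Half-open ranges [a_start, a_end) and [b_start, b_end) overlap
 * exactly when each one starts before the other ends. */
static bool ranges_overlap(uint64_t a_start, uint64_t a_end,
			   uint64_t b_start, uint64_t b_end)
{
	return a_start < b_end && b_start < a_end;
}
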
+diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c
+index 0d28e723a28c7..940385c6a9135 100644
+--- a/fs/ksmbd/mgmt/tree_connect.c
++++ b/fs/ksmbd/mgmt/tree_connect.c
+@@ -18,7 +18,7 @@
+ struct ksmbd_tree_conn_status
+ ksmbd_tree_conn_connect(struct ksmbd_session *sess, char *share_name)
+ {
+- struct ksmbd_tree_conn_status status = {-EINVAL, NULL};
++ struct ksmbd_tree_conn_status status = {-ENOENT, NULL};
+ struct ksmbd_tree_connect_response *resp = NULL;
+ struct ksmbd_share_config *sc;
+ struct ksmbd_tree_connect *tree_conn = NULL;
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index a9c33d15ca1fb..35f5ea1c9dfcd 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -1930,8 +1930,9 @@ out_err1:
+ rsp->hdr.Status = STATUS_SUCCESS;
+ rc = 0;
+ break;
++ case -ENOENT:
+ case KSMBD_TREE_CONN_STATUS_NO_SHARE:
+- rsp->hdr.Status = STATUS_BAD_NETWORK_PATH;
++ rsp->hdr.Status = STATUS_BAD_NETWORK_NAME;
+ break;
+ case -ENOMEM:
+ case KSMBD_TREE_CONN_STATUS_NOMEM:
+@@ -2314,15 +2315,15 @@ static int smb2_remove_smb_xattrs(struct path *path)
+ name += strlen(name) + 1) {
+ ksmbd_debug(SMB, "%s, len %zd\n", name, strlen(name));
+
+- if (strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
+- strncmp(&name[XATTR_USER_PREFIX_LEN], DOS_ATTRIBUTE_PREFIX,
+- DOS_ATTRIBUTE_PREFIX_LEN) &&
+- strncmp(&name[XATTR_USER_PREFIX_LEN], STREAM_PREFIX, STREAM_PREFIX_LEN))
+- continue;
+-
+- err = ksmbd_vfs_remove_xattr(user_ns, path->dentry, name);
+- if (err)
+- ksmbd_debug(SMB, "remove xattr failed : %s\n", name);
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
++ !strncmp(&name[XATTR_USER_PREFIX_LEN], STREAM_PREFIX,
++ STREAM_PREFIX_LEN)) {
++ err = ksmbd_vfs_remove_xattr(user_ns, path->dentry,
++ name);
++ if (err)
++ ksmbd_debug(SMB, "remove xattr failed : %s\n",
++ name);
++ }
+ }
+ out:
+ kvfree(xattr_list);
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index 3629049decac1..e3d443ccb9be6 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -118,7 +118,7 @@ static int ntfs_read_ea(struct ntfs_inode *ni, struct EA_FULL **ea,
+
+ run_init(&run);
+
+- err = attr_load_runs(attr_ea, ni, &run, NULL);
++ err = attr_load_runs_range(ni, ATTR_EA, NULL, 0, &run, 0, size);
+ if (!err)
+ err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL);
+ run_close(&run);
+@@ -444,6 +444,11 @@ update_ea:
+ /* Delete xattr, ATTR_EA */
+ ni_remove_attr_le(ni, attr, mi, le);
+ } else if (attr->non_res) {
++ err = attr_load_runs_range(ni, ATTR_EA, NULL, 0, &ea_run, 0,
++ size);
++ if (err)
++ goto out;
++
+ err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0);
+ if (err)
+ goto out;
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index bf80adca980b9..b89b4b86951f8 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -41,12 +41,15 @@ struct anon_vma {
+ atomic_t refcount;
+
+ /*
+- * Count of child anon_vmas and VMAs which points to this anon_vma.
++	 * Count of child anon_vmas. Equal to the count of all anon_vmas that
++ * have ->parent pointing to this one, including itself.
+ *
+ * This counter is used for making decision about reusing anon_vma
+ * instead of forking new one. See comments in function anon_vma_clone.
+ */
+- unsigned degree;
++ unsigned long num_children;
++ /* Count of VMAs whose ->anon_vma pointer points to this object. */
++ unsigned long num_active_vmas;
+
+ struct anon_vma *parent; /* Parent of this anon_vma */
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index d3d10556f0fae..2f41364a6791e 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2624,6 +2624,14 @@ static inline void skb_set_tail_pointer(struct sk_buff *skb, const int offset)
+
+ #endif /* NET_SKBUFF_DATA_USES_OFFSET */
+
++static inline void skb_assert_len(struct sk_buff *skb)
++{
++#ifdef CONFIG_DEBUG_NET
++ if (WARN_ONCE(!skb->len, "%s\n", __func__))
++ DO_ONCE_LITE(skb_dump, KERN_ERR, skb, false);
++#endif /* CONFIG_DEBUG_NET */
++}
++
+ /*
+ * Add data to an sk_buff
+ */
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index c5a2d6f50f25b..c5bdc5975a5c6 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -277,7 +277,8 @@ static inline void sk_msg_sg_copy_clear(struct sk_msg *msg, u32 start)
+
+ static inline struct sk_psock *sk_psock(const struct sock *sk)
+ {
+- return rcu_dereference_sk_user_data(sk);
++ return __rcu_dereference_sk_user_data_with_flags(sk,
++ SK_USER_DATA_PSOCK);
+ }
+
+ static inline void sk_psock_set_state(struct sk_psock *psock,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 13944ceea7ed0..bf45b572f2897 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -545,14 +545,26 @@ enum sk_pacing {
+ SK_PACING_FQ = 2,
+ };
+
+-/* Pointer stored in sk_user_data might not be suitable for copying
+- * when cloning the socket. For instance, it can point to a reference
+- * counted object. sk_user_data bottom bit is set if pointer must not
+- * be copied.
++/* flag bits in sk_user_data
++ *
++ * - SK_USER_DATA_NOCOPY: Pointer stored in sk_user_data might
++ * not be suitable for copying when cloning the socket. For instance,
++ * it can point to a reference counted object. sk_user_data bottom
++ * bit is set if pointer must not be copied.
++ *
++ * - SK_USER_DATA_BPF: Mark whether sk_user_data field is
++ * managed/owned by a BPF reuseport array. This bit should be set
++ * when sk_user_data's sk is added to the bpf's reuseport_array.
++ *
++ * - SK_USER_DATA_PSOCK: Mark whether pointer stored in
++ * sk_user_data points to psock type. This bit should be set
++ * when sk_user_data is assigned to a psock object.
+ */
+ #define SK_USER_DATA_NOCOPY 1UL
+-#define SK_USER_DATA_BPF 2UL /* Managed by BPF */
+-#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF)
++#define SK_USER_DATA_BPF 2UL
++#define SK_USER_DATA_PSOCK 4UL
++#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF |\
++ SK_USER_DATA_PSOCK)
+
+ /**
+ * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied
+@@ -565,24 +577,40 @@ static inline bool sk_user_data_is_nocopy(const struct sock *sk)
+
+ #define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
+
++/**
++ * __rcu_dereference_sk_user_data_with_flags - return the pointer
++ * only if all bits in @flags are set in sk_user_data; otherwise
++ * return NULL
++ *
++ * @sk: socket
++ * @flags: flag bits
++ */
++static inline void *
++__rcu_dereference_sk_user_data_with_flags(const struct sock *sk,
++ uintptr_t flags)
++{
++ uintptr_t sk_user_data = (uintptr_t)rcu_dereference(__sk_user_data(sk));
++
++ WARN_ON_ONCE(flags & SK_USER_DATA_PTRMASK);
++
++ if ((sk_user_data & flags) == flags)
++ return (void *)(sk_user_data & SK_USER_DATA_PTRMASK);
++ return NULL;
++}
++
+ #define rcu_dereference_sk_user_data(sk) \
++ __rcu_dereference_sk_user_data_with_flags(sk, 0)
++#define __rcu_assign_sk_user_data_with_flags(sk, ptr, flags) \
+ ({ \
+- void *__tmp = rcu_dereference(__sk_user_data((sk))); \
+- (void *)((uintptr_t)__tmp & SK_USER_DATA_PTRMASK); \
+-})
+-#define rcu_assign_sk_user_data(sk, ptr) \
+-({ \
+- uintptr_t __tmp = (uintptr_t)(ptr); \
+- WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \
+- rcu_assign_pointer(__sk_user_data((sk)), __tmp); \
+-})
+-#define rcu_assign_sk_user_data_nocopy(sk, ptr) \
+-({ \
+- uintptr_t __tmp = (uintptr_t)(ptr); \
+- WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \
++ uintptr_t __tmp1 = (uintptr_t)(ptr), \
++ __tmp2 = (uintptr_t)(flags); \
++ WARN_ON_ONCE(__tmp1 & ~SK_USER_DATA_PTRMASK); \
++ WARN_ON_ONCE(__tmp2 & SK_USER_DATA_PTRMASK); \
+ rcu_assign_pointer(__sk_user_data((sk)), \
+- __tmp | SK_USER_DATA_NOCOPY); \
++ __tmp1 | __tmp2); \
+ })
++#define rcu_assign_sk_user_data(sk, ptr) \
++ __rcu_assign_sk_user_data_with_flags(sk, ptr, 0)
+
+ static inline
+ struct net *sock_net(const struct sock *sk)
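
The sk_user_data flag bits above are pointer tagging: since the pointee is word-aligned, the low bits of any valid pointer are zero and can carry metadata, provided every reader masks them off; that is exactly what SK_USER_DATA_PTRMASK and the new __rcu_dereference_sk_user_data_with_flags() do. A self-contained sketch of the trick in plain C (the TAG_* names are hypothetical, not the kernel macros):

#include <stdint.h>

#define TAG_NOCOPY 1UL
#define TAG_BPF    2UL
#define TAG_PSOCK  4UL
#define TAG_MASK   (TAG_NOCOPY | TAG_BPF | TAG_PSOCK)

/* Pack flag bits into the low, alignment-guaranteed bits of a
 * pointer; assumes the pointee is at least 8-byte aligned so the
 * low three bits of a genuine pointer are always zero. */
static void *tag_ptr(void *p, uintptr_t flags)
{
	return (void *)((uintptr_t)p | (flags & TAG_MASK));
}

/* Strip the tag on read, optionally reporting which bits were set. */
static void *untag_ptr(void *tagged, uintptr_t *flags)
{
	uintptr_t v = (uintptr_t)tagged;

	if (flags)
		*flags = v & TAG_MASK;
	return (void *)(v & ~TAG_MASK);
}
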
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 601ccf1b2f091..4baa99363b166 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -2937,6 +2937,16 @@ int ftrace_startup(struct ftrace_ops *ops, int command)
+
+ ftrace_startup_enable(command);
+
++ /*
++	 * If ftrace is in an undefined state, we just remove ops from the
++	 * list to prevent a NULL pointer dereference, instead of rolling it
++	 * back fully and freeing the trampoline, which could cause further damage.
++ */
++ if (unlikely(ftrace_disabled)) {
++ __unregister_ftrace_function(ops);
++ return -ENODEV;
++ }
++
+ ops->flags &= ~FTRACE_OPS_FL_ADDING;
+
+ return 0;
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index 2082af43d51fb..0717a0dcefed1 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -33,7 +33,6 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA
+
+ config CRYPTO_LIB_CHACHA_GENERIC
+ tristate
+- select XOR_BLOCKS
+ help
+ This symbol can be depended upon by arch implementations of the
+ ChaCha library interface that require the generic code as a
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 746c05acad270..f6dd281df410e 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -93,7 +93,8 @@ static inline struct anon_vma *anon_vma_alloc(void)
+ anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
+ if (anon_vma) {
+ atomic_set(&anon_vma->refcount, 1);
+- anon_vma->degree = 1; /* Reference for first vma */
++ anon_vma->num_children = 0;
++ anon_vma->num_active_vmas = 0;
+ anon_vma->parent = anon_vma;
+ /*
+ * Initialise the anon_vma root to point to itself. If called
+@@ -201,6 +202,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
+ anon_vma = anon_vma_alloc();
+ if (unlikely(!anon_vma))
+ goto out_enomem_free_avc;
++ anon_vma->num_children++; /* self-parent link for new root */
+ allocated = anon_vma;
+ }
+
+@@ -210,8 +212,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
+ if (likely(!vma->anon_vma)) {
+ vma->anon_vma = anon_vma;
+ anon_vma_chain_link(vma, avc, anon_vma);
+- /* vma reference or self-parent link for new root */
+- anon_vma->degree++;
++ anon_vma->num_active_vmas++;
+ allocated = NULL;
+ avc = NULL;
+ }
+@@ -296,19 +297,19 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+ anon_vma_chain_link(dst, avc, anon_vma);
+
+ /*
+- * Reuse existing anon_vma if its degree lower than two,
+- * that means it has no vma and only one anon_vma child.
++ * Reuse existing anon_vma if it has no vma and only one
++ * anon_vma child.
+ *
+- * Do not choose parent anon_vma, otherwise first child
+- * will always reuse it. Root anon_vma is never reused:
++ * Root anon_vma is never reused:
+ * it has self-parent reference and at least one child.
+ */
+ if (!dst->anon_vma && src->anon_vma &&
+- anon_vma != src->anon_vma && anon_vma->degree < 2)
++ anon_vma->num_children < 2 &&
++ anon_vma->num_active_vmas == 0)
+ dst->anon_vma = anon_vma;
+ }
+ if (dst->anon_vma)
+- dst->anon_vma->degree++;
++ dst->anon_vma->num_active_vmas++;
+ unlock_anon_vma_root(root);
+ return 0;
+
+@@ -358,6 +359,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
+ anon_vma = anon_vma_alloc();
+ if (!anon_vma)
+ goto out_error;
++ anon_vma->num_active_vmas++;
+ avc = anon_vma_chain_alloc(GFP_KERNEL);
+ if (!avc)
+ goto out_error_free_anon_vma;
+@@ -378,7 +380,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
+ vma->anon_vma = anon_vma;
+ anon_vma_lock_write(anon_vma);
+ anon_vma_chain_link(vma, avc, anon_vma);
+- anon_vma->parent->degree++;
++ anon_vma->parent->num_children++;
+ anon_vma_unlock_write(anon_vma);
+
+ return 0;
+@@ -410,7 +412,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ * to free them outside the lock.
+ */
+ if (RB_EMPTY_ROOT(&anon_vma->rb_root.rb_root)) {
+- anon_vma->parent->degree--;
++ anon_vma->parent->num_children--;
+ continue;
+ }
+
+@@ -418,7 +420,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ anon_vma_chain_free(avc);
+ }
+ if (vma->anon_vma) {
+- vma->anon_vma->degree--;
++ vma->anon_vma->num_active_vmas--;
+
+ /*
+ * vma would still be needed after unlink, and anon_vma will be prepared
+@@ -436,7 +438,8 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
+ list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
+ struct anon_vma *anon_vma = avc->anon_vma;
+
+- VM_WARN_ON(anon_vma->degree);
++ VM_WARN_ON(anon_vma->num_children);
++ VM_WARN_ON(anon_vma->num_active_vmas);
+ put_anon_vma(anon_vma);
+
+ list_del(&avc->same_vma);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index f18d0c72713f1..48fbd0ae882bf 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1991,11 +1991,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
+ src_match = !bacmp(&c->src, src);
+ dst_match = !bacmp(&c->dst, dst);
+ if (src_match && dst_match) {
+- c = l2cap_chan_hold_unless_zero(c);
+- if (c) {
+- read_unlock(&chan_list_lock);
+- return c;
+- }
++ if (!l2cap_chan_hold_unless_zero(c))
++ continue;
++
++ read_unlock(&chan_list_lock);
++ return c;
+ }
+
+ /* Closest match */
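
The l2cap change above is the get-unless-zero lookup idiom: when the refcount has already dropped to zero the object is mid-teardown, so the search must skip it and keep scanning instead of handing back a dying entry. A sketch of the same loop shape with kref (struct my_obj and find_obj are hypothetical):

#include <linux/kref.h>
#include <linux/list.h>

struct my_obj {
	struct list_head node;
	struct kref ref;
	int id;
};

/* Find a live object by id, taking a reference only if the object
 * is not already on its way to being freed. */
static struct my_obj *find_obj(struct list_head *head, int id)
{
	struct my_obj *obj;

	list_for_each_entry(obj, head, node) {
		if (obj->id != id)
			continue;
		if (!kref_get_unless_zero(&obj->ref))
			continue;	/* dying object, keep looking */
		return obj;
	}
	return NULL;
}
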
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 56f059b3c242d..42f8de4ebbd7e 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -955,6 +955,9 @@ static int convert___skb_to_skb(struct sk_buff *skb, struct __sk_buff *__skb)
+ {
+ struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;
+
++ if (!skb->len)
++ return -EINVAL;
++
+ if (!__skb)
+ return 0;
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index a77a979a4bf75..ecaeb3ef8e5c3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4168,6 +4168,7 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
+ bool again = false;
+
+ skb_reset_mac_header(skb);
++ skb_assert_len(skb);
+
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP))
+ __skb_tstamp_tx(skb, NULL, NULL, skb->sk, SCM_TSTAMP_SCHED);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 54625287ee5b0..fbaa557ed7ece 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -307,11 +307,26 @@ static int neigh_del_timer(struct neighbour *n)
+ return 0;
+ }
+
+-static void pneigh_queue_purge(struct sk_buff_head *list)
++static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net)
+ {
++ struct sk_buff_head tmp;
++ unsigned long flags;
+ struct sk_buff *skb;
+
+- while ((skb = skb_dequeue(list)) != NULL) {
++ skb_queue_head_init(&tmp);
++ spin_lock_irqsave(&list->lock, flags);
++ skb = skb_peek(list);
++ while (skb != NULL) {
++ struct sk_buff *skb_next = skb_peek_next(skb, list);
++ if (net == NULL || net_eq(dev_net(skb->dev), net)) {
++ __skb_unlink(skb, list);
++ __skb_queue_tail(&tmp, skb);
++ }
++ skb = skb_next;
++ }
++ spin_unlock_irqrestore(&list->lock, flags);
++
++ while ((skb = __skb_dequeue(&tmp))) {
+ dev_put(skb->dev);
+ kfree_skb(skb);
+ }
+@@ -385,9 +400,9 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
+ write_lock_bh(&tbl->lock);
+ neigh_flush_dev(tbl, dev, skip_perm);
+ pneigh_ifdown_and_unlock(tbl, dev);
+-
+- del_timer_sync(&tbl->proxy_timer);
+- pneigh_queue_purge(&tbl->proxy_queue);
++ pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev));
++ if (skb_queue_empty_lockless(&tbl->proxy_queue))
++ del_timer_sync(&tbl->proxy_timer);
+ return 0;
+ }
+
+@@ -1787,7 +1802,7 @@ int neigh_table_clear(int index, struct neigh_table *tbl)
+ cancel_delayed_work_sync(&tbl->managed_work);
+ cancel_delayed_work_sync(&tbl->gc_work);
+ del_timer_sync(&tbl->proxy_timer);
+- pneigh_queue_purge(&tbl->proxy_queue);
++ pneigh_queue_purge(&tbl->proxy_queue, NULL);
+ neigh_ifdown(tbl, NULL);
+ if (atomic_read(&tbl->entries))
+ pr_crit("neighbour leakage\n");
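
pneigh_queue_purge() above now unlinks the matching skbs onto a temporary queue while holding the list lock and frees them only after dropping it, keeping dev_put()/kfree_skb() out of the critical section and making the purge per-netns. A plain-C sketch of that detach-then-free shape (all names hypothetical):

#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
	void *payload;
};

struct locked_list {
	pthread_mutex_t lock;
	struct node *head;
};

/* Detach the whole chain under the lock, then free the nodes after
 * dropping it, so destructor work never runs inside the critical
 * section. */
static void drain_and_free(struct locked_list *l)
{
	struct node *n;

	pthread_mutex_lock(&l->lock);
	n = l->head;
	l->head = NULL;
	pthread_mutex_unlock(&l->lock);

	while (n) {
		struct node *next = n->next;

		free(n->payload);
		free(n);
		n = next;
	}
}
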
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index a8dbea559c7f6..84209e661171e 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -735,7 +735,9 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+ sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
+ refcount_set(&psock->refcnt, 1);
+
+- rcu_assign_sk_user_data_nocopy(sk, psock);
++ __rcu_assign_sk_user_data_with_flags(sk, psock,
++ SK_USER_DATA_NOCOPY |
++ SK_USER_DATA_PSOCK);
+ sock_hold(sk);
+
+ out:
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index ddc54b6d18ee4..8c0fea1bdc8d6 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -144,7 +144,6 @@ config NF_CONNTRACK_ZONES
+
+ config NF_CONNTRACK_PROCFS
+ bool "Supply CT list in procfs (OBSOLETE)"
+- default y
+ depends on PROC_FS
+ help
+ This option enables for the list of known conntrack entries
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index ca6e92a229239..492bd35cccc09 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3037,8 +3037,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ if (err)
+ goto out_free;
+
+- if (sock->type == SOCK_RAW &&
+- !dev_validate_header(dev, skb->data, len)) {
++ if ((sock->type == SOCK_RAW &&
++ !dev_validate_header(dev, skb->data, len)) || !skb->len) {
+ err = -EINVAL;
+ goto out_free;
+ }
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 4f4cc82159179..5b140301ca666 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -546,6 +546,10 @@ const struct snd_pci_quirk cs8409_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0BD6, "Dolphin", CS8409_DOLPHIN),
+ SND_PCI_QUIRK(0x1028, 0x0BD7, "Dolphin", CS8409_DOLPHIN),
+ SND_PCI_QUIRK(0x1028, 0x0BD8, "Dolphin", CS8409_DOLPHIN),
++ SND_PCI_QUIRK(0x1028, 0x0C43, "Dolphin", CS8409_DOLPHIN),
++ SND_PCI_QUIRK(0x1028, 0x0C50, "Dolphin", CS8409_DOLPHIN),
++ SND_PCI_QUIRK(0x1028, 0x0C51, "Dolphin", CS8409_DOLPHIN),
++ SND_PCI_QUIRK(0x1028, 0x0C52, "Dolphin", CS8409_DOLPHIN),
+ {} /* terminator */
+ };
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1ae9674fa8a3c..b44b882f8378c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9248,6 +9248,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -9268,6 +9269,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B),
+ SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 18b3da9211e32..5ada0d318d0ff 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -1986,7 +1986,7 @@ static int rt5640_set_bias_level(struct snd_soc_component *component,
+ snd_soc_component_write(component, RT5640_PWR_MIXER, 0x0000);
+ if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER)
+ snd_soc_component_write(component, RT5640_PWR_ANLG1,
+- 0x0018);
++ 0x2818);
+ else
+ snd_soc_component_write(component, RT5640_PWR_ANLG1,
+ 0x0000);
+@@ -2592,7 +2592,8 @@ static void rt5640_enable_hda_jack_detect(
+ snd_soc_component_update_bits(component, RT5640_DUMMY1, 0x400, 0x0);
+
+ snd_soc_component_update_bits(component, RT5640_PWR_ANLG1,
+- RT5640_PWR_VREF2, RT5640_PWR_VREF2);
++ RT5640_PWR_VREF2 | RT5640_PWR_MB | RT5640_PWR_BG,
++ RT5640_PWR_VREF2 | RT5640_PWR_MB | RT5640_PWR_BG);
+ usleep_range(10000, 15000);
+ snd_soc_component_update_bits(component, RT5640_PWR_ANLG1,
+ RT5640_PWR_FV2, RT5640_PWR_FV2);
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index e392de7a262ef..3d74acffec11f 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -1016,32 +1016,36 @@ static int rz_ssi_probe(struct platform_device *pdev)
+
+ ssi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ if (IS_ERR(ssi->rstc)) {
+- rz_ssi_release_dma_channels(ssi);
+- return PTR_ERR(ssi->rstc);
++ ret = PTR_ERR(ssi->rstc);
++ goto err_reset;
+ }
+
+ reset_control_deassert(ssi->rstc);
+ pm_runtime_enable(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
+ if (ret < 0) {
+- rz_ssi_release_dma_channels(ssi);
+- pm_runtime_disable(ssi->dev);
+- reset_control_assert(ssi->rstc);
+- return dev_err_probe(ssi->dev, ret, "pm_runtime_resume_and_get failed\n");
++ dev_err(&pdev->dev, "pm_runtime_resume_and_get failed\n");
++ goto err_pm;
+ }
+
+ ret = devm_snd_soc_register_component(&pdev->dev, &rz_ssi_soc_component,
+ rz_ssi_soc_dai,
+ ARRAY_SIZE(rz_ssi_soc_dai));
+ if (ret < 0) {
+- rz_ssi_release_dma_channels(ssi);
+-
+- pm_runtime_put(ssi->dev);
+- pm_runtime_disable(ssi->dev);
+- reset_control_assert(ssi->rstc);
+ dev_err(&pdev->dev, "failed to register snd component\n");
++ goto err_snd_soc;
+ }
+
++ return 0;
++
++err_snd_soc:
++ pm_runtime_put(ssi->dev);
++err_pm:
++ pm_runtime_disable(ssi->dev);
++ reset_control_assert(ssi->rstc);
++err_reset:
++ rz_ssi_release_dma_channels(ssi);
++
+ return ret;
+ }
+
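The rz-ssi.c hunk above replaces three duplicated teardown sequences with a single goto ladder that unwinds resources in reverse acquisition order, the usual kernel error-handling idiom. A small standalone sketch of the pattern, with hypothetical acquire/release pairs standing in for the driver's resources:

#include <stdio.h>

static int acquire_a(void) { return 0; }
static int acquire_b(void) { return 0; }
static int acquire_c(void) { return -1; }	/* pretend the last step fails */
static void release_a(void) { puts("release a"); }
static void release_b(void) { puts("release b"); }

static int probe(void)
{
	int ret;

	ret = acquire_a();
	if (ret)
		goto err_a;
	ret = acquire_b();
	if (ret)
		goto err_b;
	ret = acquire_c();
	if (ret)
		goto err_c;

	return 0;		/* success: keep everything */

err_c:				/* unwind in reverse acquisition order */
	release_b();
err_b:
	release_a();
err_a:
	return ret;
}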
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 168fd802d70bd..9bfead5efc4c1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1903,6 +1903,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x21b4, 0x0081, /* AudioQuest DragonFly */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x2522, 0x0007, /* LH Labs Geek Out HD Audio 1V5 */
++ QUIRK_FLAG_SET_IFACE_FIRST),
+ DEVICE_FLG(0x2708, 0x0002, /* Audient iD14 */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+diff --git a/tools/testing/selftests/netfilter/nft_flowtable.sh b/tools/testing/selftests/netfilter/nft_flowtable.sh
+index d4ffebb989f88..c336e6c148d1f 100755
+--- a/tools/testing/selftests/netfilter/nft_flowtable.sh
++++ b/tools/testing/selftests/netfilter/nft_flowtable.sh
+@@ -14,6 +14,11 @@
+ # nft_flowtable.sh -o8000 -l1500 -r2000
+ #
+
++sfx=$(mktemp -u "XXXXXXXX")
++ns1="ns1-$sfx"
++ns2="ns2-$sfx"
++nsr1="nsr1-$sfx"
++nsr2="nsr2-$sfx"
+
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+@@ -36,18 +41,17 @@ checktool (){
+ checktool "nft --version" "run test without nft tool"
+ checktool "ip -Version" "run test without ip tool"
+ checktool "which nc" "run test without nc (netcat)"
+-checktool "ip netns add nsr1" "create net namespace"
++checktool "ip netns add $nsr1" "create net namespace $nsr1"
+
+-ip netns add ns1
+-ip netns add ns2
+-
+-ip netns add nsr2
++ip netns add $ns1
++ip netns add $ns2
++ip netns add $nsr2
+
+ cleanup() {
+- for i in 1 2; do
+- ip netns del ns$i
+- ip netns del nsr$i
+- done
++ ip netns del $ns1
++ ip netns del $ns2
++ ip netns del $nsr1
++ ip netns del $nsr2
+
+ rm -f "$ns1in" "$ns1out"
+ rm -f "$ns2in" "$ns2out"
+@@ -59,22 +63,21 @@ trap cleanup EXIT
+
+ sysctl -q net.netfilter.nf_log_all_netns=1
+
+-ip link add veth0 netns nsr1 type veth peer name eth0 netns ns1
+-ip link add veth1 netns nsr1 type veth peer name veth0 netns nsr2
++ip link add veth0 netns $nsr1 type veth peer name eth0 netns $ns1
++ip link add veth1 netns $nsr1 type veth peer name veth0 netns $nsr2
+
+-ip link add veth1 netns nsr2 type veth peer name eth0 netns ns2
++ip link add veth1 netns $nsr2 type veth peer name eth0 netns $ns2
+
+ for dev in lo veth0 veth1; do
+- for i in 1 2; do
+- ip -net nsr$i link set $dev up
+- done
++ ip -net $nsr1 link set $dev up
++ ip -net $nsr2 link set $dev up
+ done
+
+-ip -net nsr1 addr add 10.0.1.1/24 dev veth0
+-ip -net nsr1 addr add dead:1::1/64 dev veth0
++ip -net $nsr1 addr add 10.0.1.1/24 dev veth0
++ip -net $nsr1 addr add dead:1::1/64 dev veth0
+
+-ip -net nsr2 addr add 10.0.2.1/24 dev veth1
+-ip -net nsr2 addr add dead:2::1/64 dev veth1
++ip -net $nsr2 addr add 10.0.2.1/24 dev veth1
++ip -net $nsr2 addr add dead:2::1/64 dev veth1
+
+ # set different MTUs so we need to push packets coming from ns1 (large MTU)
+ # to ns2 (smaller MTU) to stack either to perform fragmentation (ip_no_pmtu_disc=1),
+@@ -106,49 +109,56 @@ do
+ esac
+ done
+
+-if ! ip -net nsr1 link set veth0 mtu $omtu; then
++if ! ip -net $nsr1 link set veth0 mtu $omtu; then
+ exit 1
+ fi
+
+-ip -net ns1 link set eth0 mtu $omtu
++ip -net $ns1 link set eth0 mtu $omtu
+
+-if ! ip -net nsr2 link set veth1 mtu $rmtu; then
++if ! ip -net $nsr2 link set veth1 mtu $rmtu; then
+ exit 1
+ fi
+
+-ip -net ns2 link set eth0 mtu $rmtu
++ip -net $ns2 link set eth0 mtu $rmtu
+
+ # transfer-net between nsr1 and nsr2.
+ # these addresses are not used for connections.
+-ip -net nsr1 addr add 192.168.10.1/24 dev veth1
+-ip -net nsr1 addr add fee1:2::1/64 dev veth1
+-
+-ip -net nsr2 addr add 192.168.10.2/24 dev veth0
+-ip -net nsr2 addr add fee1:2::2/64 dev veth0
+-
+-for i in 1 2; do
+- ip netns exec nsr$i sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+- ip netns exec nsr$i sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
+-
+- ip -net ns$i link set lo up
+- ip -net ns$i link set eth0 up
+- ip -net ns$i addr add 10.0.$i.99/24 dev eth0
+- ip -net ns$i route add default via 10.0.$i.1
+- ip -net ns$i addr add dead:$i::99/64 dev eth0
+- ip -net ns$i route add default via dead:$i::1
+- if ! ip netns exec ns$i sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null; then
++ip -net $nsr1 addr add 192.168.10.1/24 dev veth1
++ip -net $nsr1 addr add fee1:2::1/64 dev veth1
++
++ip -net $nsr2 addr add 192.168.10.2/24 dev veth0
++ip -net $nsr2 addr add fee1:2::2/64 dev veth0
++
++for i in 0 1; do
++ ip netns exec $nsr1 sysctl net.ipv4.conf.veth$i.forwarding=1 > /dev/null
++ ip netns exec $nsr2 sysctl net.ipv4.conf.veth$i.forwarding=1 > /dev/null
++done
++
++for ns in $ns1 $ns2;do
++ ip -net $ns link set lo up
++ ip -net $ns link set eth0 up
++
++ if ! ip netns exec $ns sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null; then
+ echo "ERROR: Check Originator/Responder values (problem during address addition)"
+ exit 1
+ fi
+-
+ # don't set ip DF bit for first two tests
+- ip netns exec ns$i sysctl net.ipv4.ip_no_pmtu_disc=1 > /dev/null
++ ip netns exec $ns sysctl net.ipv4.ip_no_pmtu_disc=1 > /dev/null
+ done
+
+-ip -net nsr1 route add default via 192.168.10.2
+-ip -net nsr2 route add default via 192.168.10.1
++ip -net $ns1 addr add 10.0.1.99/24 dev eth0
++ip -net $ns2 addr add 10.0.2.99/24 dev eth0
++ip -net $ns1 route add default via 10.0.1.1
++ip -net $ns2 route add default via 10.0.2.1
++ip -net $ns1 addr add dead:1::99/64 dev eth0
++ip -net $ns2 addr add dead:2::99/64 dev eth0
++ip -net $ns1 route add default via dead:1::1
++ip -net $ns2 route add default via dead:2::1
++
++ip -net $nsr1 route add default via 192.168.10.2
++ip -net $nsr2 route add default via 192.168.10.1
+
+-ip netns exec nsr1 nft -f - <<EOF
++ip netns exec $nsr1 nft -f - <<EOF
+ table inet filter {
+ flowtable f1 {
+ hook ingress priority 0
+@@ -197,18 +207,18 @@ if [ $? -ne 0 ]; then
+ fi
+
+ # test basic connectivity
+-if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
+- echo "ERROR: ns1 cannot reach ns2" 1>&2
++if ! ip netns exec $ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
++ echo "ERROR: $ns1 cannot reach ns2" 1>&2
+ exit 1
+ fi
+
+-if ! ip netns exec ns2 ping -c 1 -q 10.0.1.99 > /dev/null; then
+- echo "ERROR: ns2 cannot reach ns1" 1>&2
++if ! ip netns exec $ns2 ping -c 1 -q 10.0.1.99 > /dev/null; then
++ echo "ERROR: $ns2 cannot reach $ns1" 1>&2
+ exit 1
+ fi
+
+ if [ $ret -eq 0 ];then
+- echo "PASS: netns routing/connectivity: ns1 can reach ns2"
++ echo "PASS: netns routing/connectivity: $ns1 can reach $ns2"
+ fi
+
+ ns1in=$(mktemp)
+@@ -312,24 +322,24 @@ make_file "$ns2in"
+
+ # First test:
+ # No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed.
+-if test_tcp_forwarding ns1 ns2; then
++if test_tcp_forwarding $ns1 $ns2; then
+ echo "PASS: flow offloaded for ns1/ns2"
+ else
+ echo "FAIL: flow offload for ns1/ns2:" 1>&2
+- ip netns exec nsr1 nft list ruleset
++ ip netns exec $nsr1 nft list ruleset
+ ret=1
+ fi
+
+ # delete default route, i.e. ns2 won't be able to reach ns1 and
+ # will depend on ns1 being masqueraded in nsr1.
+ # expect ns1 has nsr1 address.
+-ip -net ns2 route del default via 10.0.2.1
+-ip -net ns2 route del default via dead:2::1
+-ip -net ns2 route add 192.168.10.1 via 10.0.2.1
++ip -net $ns2 route del default via 10.0.2.1
++ip -net $ns2 route del default via dead:2::1
++ip -net $ns2 route add 192.168.10.1 via 10.0.2.1
+
+ # Second test:
+ # Same, but with NAT enabled.
+-ip netns exec nsr1 nft -f - <<EOF
++ip netns exec $nsr1 nft -f - <<EOF
+ table ip nat {
+ chain prerouting {
+ type nat hook prerouting priority 0; policy accept;
+@@ -343,47 +353,47 @@ table ip nat {
+ }
+ EOF
+
+-if test_tcp_forwarding_nat ns1 ns2; then
++if test_tcp_forwarding_nat $ns1 $ns2; then
+ echo "PASS: flow offloaded for ns1/ns2 with NAT"
+ else
+ echo "FAIL: flow offload for ns1/ns2 with NAT" 1>&2
+- ip netns exec nsr1 nft list ruleset
++ ip netns exec $nsr1 nft list ruleset
+ ret=1
+ fi
+
+ # Third test:
+ # Same as second test, but with PMTU discovery enabled.
+-handle=$(ip netns exec nsr1 nft -a list table inet filter | grep something-to-grep-for | cut -d \# -f 2)
++handle=$(ip netns exec $nsr1 nft -a list table inet filter | grep something-to-grep-for | cut -d \# -f 2)
+
+-if ! ip netns exec nsr1 nft delete rule inet filter forward $handle; then
++if ! ip netns exec $nsr1 nft delete rule inet filter forward $handle; then
+ echo "FAIL: Could not delete large-packet accept rule"
+ exit 1
+ fi
+
+-ip netns exec ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
+-ip netns exec ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
++ip netns exec $ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
++ip netns exec $ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
+
+-if test_tcp_forwarding_nat ns1 ns2; then
++if test_tcp_forwarding_nat $ns1 $ns2; then
+ echo "PASS: flow offloaded for ns1/ns2 with NAT and pmtu discovery"
+ else
+ echo "FAIL: flow offload for ns1/ns2 with NAT and pmtu discovery" 1>&2
+- ip netns exec nsr1 nft list ruleset
++ ip netns exec $nsr1 nft list ruleset
+ fi
+
+ # Another test:
+ # Add bridge interface br0 to Router1, with NAT enabled.
+-ip -net nsr1 link add name br0 type bridge
+-ip -net nsr1 addr flush dev veth0
+-ip -net nsr1 link set up dev veth0
+-ip -net nsr1 link set veth0 master br0
+-ip -net nsr1 addr add 10.0.1.1/24 dev br0
+-ip -net nsr1 addr add dead:1::1/64 dev br0
+-ip -net nsr1 link set up dev br0
++ip -net $nsr1 link add name br0 type bridge
++ip -net $nsr1 addr flush dev veth0
++ip -net $nsr1 link set up dev veth0
++ip -net $nsr1 link set veth0 master br0
++ip -net $nsr1 addr add 10.0.1.1/24 dev br0
++ip -net $nsr1 addr add dead:1::1/64 dev br0
++ip -net $nsr1 link set up dev br0
+
+-ip netns exec nsr1 sysctl net.ipv4.conf.br0.forwarding=1 > /dev/null
++ip netns exec $nsr1 sysctl net.ipv4.conf.br0.forwarding=1 > /dev/null
+
+ # br0 with NAT enabled.
+-ip netns exec nsr1 nft -f - <<EOF
++ip netns exec $nsr1 nft -f - <<EOF
+ flush table ip nat
+ table ip nat {
+ chain prerouting {
+@@ -398,59 +408,59 @@ table ip nat {
+ }
+ EOF
+
+-if test_tcp_forwarding_nat ns1 ns2; then
++if test_tcp_forwarding_nat $ns1 $ns2; then
+ echo "PASS: flow offloaded for ns1/ns2 with bridge NAT"
+ else
+ echo "FAIL: flow offload for ns1/ns2 with bridge NAT" 1>&2
+- ip netns exec nsr1 nft list ruleset
++ ip netns exec $nsr1 nft list ruleset
+ ret=1
+ fi
+
+ # Another test:
+ # Add bridge interface br0 to Router1, with NAT and VLAN.
+-ip -net nsr1 link set veth0 nomaster
+-ip -net nsr1 link set down dev veth0
+-ip -net nsr1 link add link veth0 name veth0.10 type vlan id 10
+-ip -net nsr1 link set up dev veth0
+-ip -net nsr1 link set up dev veth0.10
+-ip -net nsr1 link set veth0.10 master br0
+-
+-ip -net ns1 addr flush dev eth0
+-ip -net ns1 link add link eth0 name eth0.10 type vlan id 10
+-ip -net ns1 link set eth0 up
+-ip -net ns1 link set eth0.10 up
+-ip -net ns1 addr add 10.0.1.99/24 dev eth0.10
+-ip -net ns1 route add default via 10.0.1.1
+-ip -net ns1 addr add dead:1::99/64 dev eth0.10
+-
+-if test_tcp_forwarding_nat ns1 ns2; then
++ip -net $nsr1 link set veth0 nomaster
++ip -net $nsr1 link set down dev veth0
++ip -net $nsr1 link add link veth0 name veth0.10 type vlan id 10
++ip -net $nsr1 link set up dev veth0
++ip -net $nsr1 link set up dev veth0.10
++ip -net $nsr1 link set veth0.10 master br0
++
++ip -net $ns1 addr flush dev eth0
++ip -net $ns1 link add link eth0 name eth0.10 type vlan id 10
++ip -net $ns1 link set eth0 up
++ip -net $ns1 link set eth0.10 up
++ip -net $ns1 addr add 10.0.1.99/24 dev eth0.10
++ip -net $ns1 route add default via 10.0.1.1
++ip -net $ns1 addr add dead:1::99/64 dev eth0.10
++
++if test_tcp_forwarding_nat $ns1 $ns2; then
+ echo "PASS: flow offloaded for ns1/ns2 with bridge NAT and VLAN"
+ else
+ echo "FAIL: flow offload for ns1/ns2 with bridge NAT and VLAN" 1>&2
+- ip netns exec nsr1 nft list ruleset
++ ip netns exec $nsr1 nft list ruleset
+ ret=1
+ fi
+
+ # restore test topology (remove bridge and VLAN)
+-ip -net nsr1 link set veth0 nomaster
+-ip -net nsr1 link set veth0 down
+-ip -net nsr1 link set veth0.10 down
+-ip -net nsr1 link delete veth0.10 type vlan
+-ip -net nsr1 link delete br0 type bridge
+-ip -net ns1 addr flush dev eth0.10
+-ip -net ns1 link set eth0.10 down
+-ip -net ns1 link set eth0 down
+-ip -net ns1 link delete eth0.10 type vlan
++ip -net $nsr1 link set veth0 nomaster
++ip -net $nsr1 link set veth0 down
++ip -net $nsr1 link set veth0.10 down
++ip -net $nsr1 link delete veth0.10 type vlan
++ip -net $nsr1 link delete br0 type bridge
++ip -net $ns1 addr flush dev eth0.10
++ip -net $ns1 link set eth0.10 down
++ip -net $ns1 link set eth0 down
++ip -net $ns1 link delete eth0.10 type vlan
+
+ # restore address in ns1 and nsr1
+-ip -net ns1 link set eth0 up
+-ip -net ns1 addr add 10.0.1.99/24 dev eth0
+-ip -net ns1 route add default via 10.0.1.1
+-ip -net ns1 addr add dead:1::99/64 dev eth0
+-ip -net ns1 route add default via dead:1::1
+-ip -net nsr1 addr add 10.0.1.1/24 dev veth0
+-ip -net nsr1 addr add dead:1::1/64 dev veth0
+-ip -net nsr1 link set up dev veth0
++ip -net $ns1 link set eth0 up
++ip -net $ns1 addr add 10.0.1.99/24 dev eth0
++ip -net $ns1 route add default via 10.0.1.1
++ip -net $ns1 addr add dead:1::99/64 dev eth0
++ip -net $ns1 route add default via dead:1::1
++ip -net $nsr1 addr add 10.0.1.1/24 dev veth0
++ip -net $nsr1 addr add dead:1::1/64 dev veth0
++ip -net $nsr1 link set up dev veth0
+
+ KEY_SHA="0x"$(ps -xaf | sha1sum | cut -d " " -f 1)
+ KEY_AES="0x"$(ps -xaf | md5sum | cut -d " " -f 1)
+@@ -480,23 +490,23 @@ do_esp() {
+
+ }
+
+-do_esp nsr1 192.168.10.1 192.168.10.2 10.0.1.0/24 10.0.2.0/24 $SPI1 $SPI2
++do_esp $nsr1 192.168.10.1 192.168.10.2 10.0.1.0/24 10.0.2.0/24 $SPI1 $SPI2
+
+-do_esp nsr2 192.168.10.2 192.168.10.1 10.0.2.0/24 10.0.1.0/24 $SPI2 $SPI1
++do_esp $nsr2 192.168.10.2 192.168.10.1 10.0.2.0/24 10.0.1.0/24 $SPI2 $SPI1
+
+-ip netns exec nsr1 nft delete table ip nat
++ip netns exec $nsr1 nft delete table ip nat
+
+ # restore default routes
+-ip -net ns2 route del 192.168.10.1 via 10.0.2.1
+-ip -net ns2 route add default via 10.0.2.1
+-ip -net ns2 route add default via dead:2::1
++ip -net $ns2 route del 192.168.10.1 via 10.0.2.1
++ip -net $ns2 route add default via 10.0.2.1
++ip -net $ns2 route add default via dead:2::1
+
+-if test_tcp_forwarding ns1 ns2; then
++if test_tcp_forwarding $ns1 $ns2; then
+ echo "PASS: ipsec tunnel mode for ns1/ns2"
+ else
+ echo "FAIL: ipsec tunnel mode for ns1/ns2"
+- ip netns exec nsr1 nft list ruleset 1>&2
+- ip netns exec nsr1 cat /proc/net/xfrm_stat 1>&2
++ ip netns exec $nsr1 nft list ruleset 1>&2
++ ip netns exec $nsr1 cat /proc/net/xfrm_stat 1>&2
+ fi
+
+ exit $ret
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index f3ec628f5e519..4b48af8a83096 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -892,7 +892,7 @@ int timerlat_hist_main(int argc, char *argv[])
+ return_value = 0;
+
+ if (trace_is_off(&tool->trace, &record->trace)) {
+- printf("rtla timelat hit stop tracing\n");
++ printf("rtla timerlat hit stop tracing\n");
+ if (params->trace_output) {
+ printf(" Saving trace to %s\n", params->trace_output);
+ save_trace_to_file(record->trace.inst, params->trace_output);
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 35452a1d45e9f..3342719352222 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -687,7 +687,7 @@ int timerlat_top_main(int argc, char *argv[])
+ return_value = 0;
+
+ if (trace_is_off(&top->trace, &record->trace)) {
+- printf("rtla timelat hit stop tracing\n");
++ printf("rtla timerlat hit stop tracing\n");
+ if (params->trace_output) {
+ printf(" Saving trace to %s\n", params->trace_output);
+ save_trace_to_file(record->trace.inst, params->trace_output);
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-08 10:45 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-08 10:45 UTC (permalink / raw)
To: gentoo-commits
commit: 036eac410993ee6f8ed2440ba5bf687f2733eda5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 8 10:45:14 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 8 10:45:14 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=036eac41
Linux patch 5.19.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.19.8.patch | 6128 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6132 insertions(+)
diff --git a/0000_README b/0000_README
index e6423950..d9225608 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.19.7.patch
From: http://www.kernel.org
Desc: Linux 5.19.7
+Patch: 1007_linux-5.19.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.19.8.patch b/1007_linux-5.19.8.patch
new file mode 100644
index 00000000..7bcf0c97
--- /dev/null
+++ b/1007_linux-5.19.8.patch
@@ -0,0 +1,6128 @@
+diff --git a/Makefile b/Makefile
+index 3d88923df694e..e361c6230e9e5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index 889951291cc0f..a11a6e14ba89f 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -47,7 +47,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
+ u64 i;
+ phys_addr_t start, end;
+
+- nr_ranges = 1; /* for exclusion of crashkernel region */
++ nr_ranges = 2; /* for exclusion of crashkernel region */
+ for_each_mem_range(i, &start, &end)
+ nr_ranges++;
+
+diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
+index 8dddd34b8ecf1..fabc82f5e935d 100644
+--- a/arch/powerpc/include/asm/firmware.h
++++ b/arch/powerpc/include/asm/firmware.h
+@@ -82,6 +82,8 @@ enum {
+ FW_FEATURE_POWERNV_ALWAYS = 0,
+ FW_FEATURE_PS3_POSSIBLE = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
+ FW_FEATURE_PS3_ALWAYS = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
++ FW_FEATURE_NATIVE_POSSIBLE = 0,
++ FW_FEATURE_NATIVE_ALWAYS = 0,
+ FW_FEATURE_POSSIBLE =
+ #ifdef CONFIG_PPC_PSERIES
+ FW_FEATURE_PSERIES_POSSIBLE |
+@@ -91,6 +93,9 @@ enum {
+ #endif
+ #ifdef CONFIG_PPC_PS3
+ FW_FEATURE_PS3_POSSIBLE |
++#endif
++#ifdef CONFIG_PPC_HASH_MMU_NATIVE
++ FW_FEATURE_NATIVE_ALWAYS |
+ #endif
+ 0,
+ FW_FEATURE_ALWAYS =
+@@ -102,6 +107,9 @@ enum {
+ #endif
+ #ifdef CONFIG_PPC_PS3
+ FW_FEATURE_PS3_ALWAYS &
++#endif
++#ifdef CONFIG_PPC_HASH_MMU_NATIVE
++ FW_FEATURE_NATIVE_ALWAYS &
+ #endif
+ FW_FEATURE_POSSIBLE,
+
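For context on the firmware.h hunk: FW_FEATURE_POSSIBLE is built by OR-ing each configured platform's possible mask, while FW_FEATURE_ALWAYS is the AND of every configured platform's always mask, intersected with POSSIBLE. Adding zero-valued NATIVE masks means a kernel that also targets the bare-metal hash MMU contributes 0 to the AND, so no firmware feature is wrongly treated as always present. An illustrative sketch of the idiom with made-up platforms:

/* Two made-up platforms A and B; the real code uses PSERIES, POWERNV, PS3. */
#define FEAT_X 0x1
#define FEAT_Y 0x2

#define A_POSSIBLE (FEAT_X | FEAT_Y)
#define A_ALWAYS   (FEAT_X)
#define B_POSSIBLE (FEAT_X)
#define B_ALWAYS   (FEAT_X)
#define N_ALWAYS   0		/* a platform with no firmware at all */

enum {
	POSSIBLE = A_POSSIBLE | B_POSSIBLE | 0,
	/* Without N_ALWAYS this is FEAT_X; with it, correctly 0. */
	ALWAYS   = A_ALWAYS & B_ALWAYS & N_ALWAYS & POSSIBLE,
};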
+diff --git a/arch/powerpc/kernel/rtas_entry.S b/arch/powerpc/kernel/rtas_entry.S
+index 9a434d42e660a..6ce95ddadbcdb 100644
+--- a/arch/powerpc/kernel/rtas_entry.S
++++ b/arch/powerpc/kernel/rtas_entry.S
+@@ -109,8 +109,12 @@ __enter_rtas:
+ * its critical regions (as specified in PAPR+ section 7.2.1). MSR[S]
+ * is not impacted by RFI_TO_KERNEL (only urfid can unset it). So if
+ * MSR[S] is set, it will remain when entering RTAS.
++ * If we're in HV mode, RTAS must also run in HV mode, so extract MSR_HV
++ * from the saved MSR value and insert into the value RTAS will use.
+ */
++ extrdi r0, r6, 1, 63 - MSR_HV_LG
+ LOAD_REG_IMMEDIATE(r6, MSR_ME | MSR_RI)
++ insrdi r6, r0, 1, 63 - MSR_HV_LG
+
+ li r0,0
+ mtmsrd r0,1 /* disable RI before using SRR0/1 */
+diff --git a/arch/powerpc/kernel/systbl.S b/arch/powerpc/kernel/systbl.S
+index cb3358886203e..6c1db3b6de2dc 100644
+--- a/arch/powerpc/kernel/systbl.S
++++ b/arch/powerpc/kernel/systbl.S
+@@ -18,6 +18,7 @@
+ .p2align 3
+ #define __SYSCALL(nr, entry) .8byte entry
+ #else
++ .p2align 2
+ #define __SYSCALL(nr, entry) .long entry
+ #endif
+
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index 82cae08976bcd..92074a6c49d43 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -124,9 +124,6 @@ struct papr_scm_priv {
+
+ /* The bits which needs to be overridden */
+ u64 health_bitmap_inject_mask;
+-
+- /* array to have event_code and stat_id mappings */
+- u8 *nvdimm_events_map;
+ };
+
+ static int papr_scm_pmem_flush(struct nd_region *nd_region,
+@@ -350,6 +347,25 @@ static ssize_t drc_pmem_query_stats(struct papr_scm_priv *p,
+ #ifdef CONFIG_PERF_EVENTS
+ #define to_nvdimm_pmu(_pmu) container_of(_pmu, struct nvdimm_pmu, pmu)
+
++static const char * const nvdimm_events_map[] = {
++ [1] = "CtlResCt",
++ [2] = "CtlResTm",
++ [3] = "PonSecs ",
++ [4] = "MemLife ",
++ [5] = "CritRscU",
++ [6] = "HostLCnt",
++ [7] = "HostSCnt",
++ [8] = "HostSDur",
++ [9] = "HostLDur",
++ [10] = "MedRCnt ",
++ [11] = "MedWCnt ",
++ [12] = "MedRDur ",
++ [13] = "MedWDur ",
++ [14] = "CchRHCnt",
++ [15] = "CchWHCnt",
++ [16] = "FastWCnt",
++};
++
+ static int papr_scm_pmu_get_value(struct perf_event *event, struct device *dev, u64 *count)
+ {
+ struct papr_scm_perf_stat *stat;
+@@ -357,11 +373,15 @@ static int papr_scm_pmu_get_value(struct perf_event *event, struct device *dev,
+ struct papr_scm_priv *p = (struct papr_scm_priv *)dev->driver_data;
+ int rc, size;
+
++ /* Invalid eventcode */
++ if (event->attr.config == 0 || event->attr.config >= ARRAY_SIZE(nvdimm_events_map))
++ return -EINVAL;
++
+ /* Allocate request buffer enough to hold single performance stat */
+ size = sizeof(struct papr_scm_perf_stats) +
+ sizeof(struct papr_scm_perf_stat);
+
+- if (!p || !p->nvdimm_events_map)
++ if (!p)
+ return -EINVAL;
+
+ stats = kzalloc(size, GFP_KERNEL);
+@@ -370,7 +390,7 @@ static int papr_scm_pmu_get_value(struct perf_event *event, struct device *dev,
+
+ stat = &stats->scm_statistic[0];
+ memcpy(&stat->stat_id,
+- &p->nvdimm_events_map[event->attr.config * sizeof(stat->stat_id)],
++ nvdimm_events_map[event->attr.config],
+ sizeof(stat->stat_id));
+ stat->stat_val = 0;
+
+@@ -458,56 +478,6 @@ static void papr_scm_pmu_del(struct perf_event *event, int flags)
+ papr_scm_pmu_read(event);
+ }
+
+-static int papr_scm_pmu_check_events(struct papr_scm_priv *p, struct nvdimm_pmu *nd_pmu)
+-{
+- struct papr_scm_perf_stat *stat;
+- struct papr_scm_perf_stats *stats;
+- u32 available_events;
+- int index, rc = 0;
+-
+- if (!p->stat_buffer_len)
+- return -ENOENT;
+-
+- available_events = (p->stat_buffer_len - sizeof(struct papr_scm_perf_stats))
+- / sizeof(struct papr_scm_perf_stat);
+- if (available_events == 0)
+- return -EOPNOTSUPP;
+-
+- /* Allocate the buffer for phyp where stats are written */
+- stats = kzalloc(p->stat_buffer_len, GFP_KERNEL);
+- if (!stats) {
+- rc = -ENOMEM;
+- return rc;
+- }
+-
+- /* Called to get list of events supported */
+- rc = drc_pmem_query_stats(p, stats, 0);
+- if (rc)
+- goto out;
+-
+- /*
+- * Allocate memory and populate nvdimm_event_map.
+- * Allocate an extra element for NULL entry
+- */
+- p->nvdimm_events_map = kcalloc(available_events + 1,
+- sizeof(stat->stat_id),
+- GFP_KERNEL);
+- if (!p->nvdimm_events_map) {
+- rc = -ENOMEM;
+- goto out;
+- }
+-
+- /* Copy all stat_ids to event map */
+- for (index = 0, stat = stats->scm_statistic;
+- index < available_events; index++, ++stat) {
+- memcpy(&p->nvdimm_events_map[index * sizeof(stat->stat_id)],
+- &stat->stat_id, sizeof(stat->stat_id));
+- }
+-out:
+- kfree(stats);
+- return rc;
+-}
+-
+ static void papr_scm_pmu_register(struct papr_scm_priv *p)
+ {
+ struct nvdimm_pmu *nd_pmu;
+@@ -519,9 +489,10 @@ static void papr_scm_pmu_register(struct papr_scm_priv *p)
+ goto pmu_err_print;
+ }
+
+- rc = papr_scm_pmu_check_events(p, nd_pmu);
+- if (rc)
++ if (!p->stat_buffer_len) {
++ rc = -ENOENT;
+ goto pmu_check_events_err;
++ }
+
+ nd_pmu->pmu.task_ctx_nr = perf_invalid_context;
+ nd_pmu->pmu.name = nvdimm_name(p->nvdimm);
+@@ -539,7 +510,7 @@ static void papr_scm_pmu_register(struct papr_scm_priv *p)
+
+ rc = register_nvdimm_pmu(nd_pmu, p->pdev);
+ if (rc)
+- goto pmu_register_err;
++ goto pmu_check_events_err;
+
+ /*
+ * Set archdata.priv value to nvdimm_pmu structure, to handle the
+@@ -548,8 +519,6 @@ static void papr_scm_pmu_register(struct papr_scm_priv *p)
+ p->pdev->archdata.priv = nd_pmu;
+ return;
+
+-pmu_register_err:
+- kfree(p->nvdimm_events_map);
+ pmu_check_events_err:
+ kfree(nd_pmu);
+ pmu_err_print:
+@@ -1560,7 +1529,6 @@ static int papr_scm_remove(struct platform_device *pdev)
+ unregister_nvdimm_pmu(pdev->archdata.priv);
+
+ pdev->archdata.priv = NULL;
+- kfree(p->nvdimm_events_map);
+ kfree(p->bus_desc.provider_name);
+ kfree(p);
+
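The papr_scm.c rework drops the dynamically built per-device event map in favor of a static const string table indexed by event code, so lookup reduces to a bounds check plus an array index. A minimal sketch of that lookup (table contents abbreviated):

#include <stddef.h>

static const char * const event_names[] = {
	[1] = "CtlResCt",
	[2] = "CtlResTm",
	[3] = "PonSecs ",
	/* ... remaining stat ids elided ... */
};

/* Return the stat id for @config, or NULL for out-of-range codes. */
static const char *event_name(unsigned long config)
{
	if (config == 0 ||
	    config >= sizeof(event_names) / sizeof(event_names[0]))
		return NULL;
	return event_names[config];
}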
+diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
+index 83d6d4d2b1dff..26a446a34057b 100644
+--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
++++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
+@@ -33,4 +33,16 @@ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
+ u32 type, u64 flags);
+ const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid);
+
++#ifdef CONFIG_RISCV_SBI_V01
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01;
++#endif
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental;
++extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor;
++
+ #endif /* __RISCV_KVM_VCPU_SBI_H__ */
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index d45e7da3f0d32..f96991d230bfc 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -32,23 +32,13 @@ static int kvm_linux_err_map_sbi(int err)
+ };
+ }
+
+-#ifdef CONFIG_RISCV_SBI_V01
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01;
+-#else
++#ifndef CONFIG_RISCV_SBI_V01
+ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
+ .extid_start = -1UL,
+ .extid_end = -1UL,
+ .handler = NULL,
+ };
+ #endif
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental;
+-extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor;
+
+ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
+ &vcpu_sbi_ext_v01,
+diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
+index 5e49e4b4a4ccc..86c56616e5dea 100644
+--- a/arch/riscv/mm/pageattr.c
++++ b/arch/riscv/mm/pageattr.c
+@@ -118,10 +118,10 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
+ if (!numpages)
+ return 0;
+
+- mmap_read_lock(&init_mm);
++ mmap_write_lock(&init_mm);
+ ret = walk_page_range_novma(&init_mm, start, end, &pageattr_ops, NULL,
+ &masks);
+- mmap_read_unlock(&init_mm);
++ mmap_write_unlock(&init_mm);
+
+ flush_tlb_kernel_range(start, end);
+
+diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
+index f22beda9e6d5c..ccdbccfde148c 100644
+--- a/arch/s390/include/asm/hugetlb.h
++++ b/arch/s390/include/asm/hugetlb.h
+@@ -28,9 +28,11 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline int prepare_hugepage_range(struct file *file,
+ unsigned long addr, unsigned long len)
+ {
+- if (len & ~HPAGE_MASK)
++ struct hstate *h = hstate_file(file);
++
++ if (len & ~huge_page_mask(h))
+ return -EINVAL;
+- if (addr & ~HPAGE_MASK)
++ if (addr & ~huge_page_mask(h))
+ return -EINVAL;
+ return 0;
+ }
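The s390 hugetlb fix validates addr and len against the mapping's actual huge page size via huge_page_mask() instead of the fixed HPAGE_MASK. The underlying check is the standard power-of-two alignment test; a tiny standalone illustration:

#include <stdbool.h>

/* For a power-of-two @size, mask = ~(size - 1); @v is aligned iff no
 * low bit survives the complemented mask. */
static bool is_aligned(unsigned long v, unsigned long size)
{
	unsigned long mask = ~(size - 1);

	return (v & ~mask) == 0;
}
/* is_aligned(0x200000, 1UL << 21) -> true  (2 MiB page size)
 * is_aligned(0x200000, 1UL << 30) -> false (1 GiB page size) */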
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 2e526f11b91e2..5ea3830af0ccf 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -131,6 +131,7 @@ SECTIONS
+ /*
+ * Table with the patch locations to undo expolines
+ */
++ . = ALIGN(4);
+ .nospec_call_table : {
+ __nospec_call_start = . ;
+ *(.s390_indirect*)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 0aaea87a14597..b09a50e0af29d 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -835,8 +835,7 @@ static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
+ if (!(exec_controls_get(vmx) & CPU_BASED_USE_MSR_BITMAPS))
+ return true;
+
+- return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap,
+- MSR_IA32_SPEC_CTRL);
++ return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap, msr);
+ }
+
+ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index bc411d19dac08..55de0d1981e52 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1570,12 +1570,32 @@ static const u32 msr_based_features_all[] = {
+ static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
+ static unsigned int num_msr_based_features;
+
++/*
++ * Some IA32_ARCH_CAPABILITIES bits have dependencies on MSRs that KVM
++ * does not yet virtualize. These include:
++ * 10 - MISC_PACKAGE_CTRLS
++ * 11 - ENERGY_FILTERING_CTL
++ * 12 - DOITM
++ * 18 - FB_CLEAR_CTRL
++ * 21 - XAPIC_DISABLE_STATUS
++ * 23 - OVERCLOCKING_STATUS
++ */
++
++#define KVM_SUPPORTED_ARCH_CAP \
++ (ARCH_CAP_RDCL_NO | ARCH_CAP_IBRS_ALL | ARCH_CAP_RSBA | \
++ ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
++ ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
++ ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
++ ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO)
++
+ static u64 kvm_get_arch_capabilities(void)
+ {
+ u64 data = 0;
+
+- if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
++ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
+ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, data);
++ data &= KVM_SUPPORTED_ARCH_CAP;
++ }
+
+ /*
+ * If nx_huge_pages is enabled, KVM's shadow paging will ensure that
+@@ -1623,9 +1643,6 @@ static u64 kvm_get_arch_capabilities(void)
+ */
+ }
+
+- /* Guests don't need to know "Fill buffer clear control" exists */
+- data &= ~ARCH_CAP_FB_CLEAR_CTRL;
+-
+ return data;
+ }
+
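The x86.c hunk switches kvm_get_arch_capabilities() from clearing individual known-bad bits (such as the removed FB_CLEAR_CTRL special case) to masking the host MSR with an explicit allow-list, so bits KVM does not virtualize stay hidden by default, including ones future hardware may add. A sketch of deny-by-default MSR masking, with illustrative bit names rather than the real ARCH_CAP_* set:

#include <stdint.h>

#define CAP_RDCL_NO	(1ULL << 0)	/* illustrative bit names */
#define CAP_IBRS_ALL	(1ULL << 1)

/* Everything the hypervisor knows how to expose safely. */
#define SUPPORTED_CAPS	(CAP_RDCL_NO | CAP_IBRS_ALL)

/* Deny by default: bits absent from the allow-list are stripped, so a
 * new hardware capability never leaks to guests until it is reviewed. */
static uint64_t guest_arch_capabilities(uint64_t host_val)
{
	return host_val & SUPPORTED_CAPS;
}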
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 54ac94fed0151..8bac11d8e618a 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1385,6 +1385,18 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
+ }
+ ret = binder_inc_ref_olocked(ref, strong, target_list);
+ *rdata = ref->data;
++ if (ret && ref == new_ref) {
++ /*
++ * Cleanup the failed reference here as the target
++ * could now be dead and have already released its
++ * references by now. Calling on the new reference
++ * with strong=0 and a tmp_refs will not decrement
++ * the node. The new_ref gets kfree'd below.
++ */
++ binder_cleanup_ref_olocked(new_ref);
++ ref = NULL;
++ }
++
+ binder_proc_unlock(proc);
+ if (new_ref && ref != new_ref)
+ /*
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 5d437c0c842cb..53797453a6ee8 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -322,7 +322,6 @@ static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
+ */
+ if (vma) {
+ vm_start = vma->vm_start;
+- alloc->vma_vm_mm = vma->vm_mm;
+ mmap_assert_write_locked(alloc->vma_vm_mm);
+ } else {
+ mmap_assert_locked(alloc->vma_vm_mm);
+@@ -795,7 +794,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ binder_insert_free_buffer(alloc, buffer);
+ alloc->free_async_space = alloc->buffer_size / 2;
+ binder_alloc_set_vma(alloc, vma);
+- mmgrab(alloc->vma_vm_mm);
+
+ return 0;
+
+@@ -1091,6 +1089,8 @@ static struct shrinker binder_shrinker = {
+ void binder_alloc_init(struct binder_alloc *alloc)
+ {
+ alloc->pid = current->group_leader->pid;
++ alloc->vma_vm_mm = current->mm;
++ mmgrab(alloc->vma_vm_mm);
+ mutex_init(&alloc->mutex);
+ INIT_LIST_HEAD(&alloc->buffers);
+ }
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index b766968a873ce..2ccbde111c352 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -897,6 +897,11 @@ static int __device_attach_driver(struct device_driver *drv, void *_data)
+ dev_dbg(dev, "Device match requests probe deferral\n");
+ dev->can_match = true;
+ driver_deferred_probe_add(dev);
++ /*
++ * Device can't match with a driver right now, so don't attempt
++ * to match or bind with other drivers on the bus.
++ */
++ return ret;
+ } else if (ret < 0) {
+ dev_dbg(dev, "Bus failed to match device: %d\n", ret);
+ return ret;
+@@ -1136,6 +1141,11 @@ static int __driver_attach(struct device *dev, void *data)
+ dev_dbg(dev, "Device match requests probe deferral\n");
+ dev->can_match = true;
+ driver_deferred_probe_add(dev);
++ /*
++ * Driver could not match with device, but may match with
++ * another device on the bus.
++ */
++ return 0;
+ } else if (ret < 0) {
+ dev_dbg(dev, "Bus failed to match device: %d\n", ret);
+ return ret;
+diff --git a/drivers/base/firmware_loader/sysfs.c b/drivers/base/firmware_loader/sysfs.c
+index 5b0b85b70b6f2..28b9cbb8a6dd3 100644
+--- a/drivers/base/firmware_loader/sysfs.c
++++ b/drivers/base/firmware_loader/sysfs.c
+@@ -93,10 +93,9 @@ static void fw_dev_release(struct device *dev)
+ {
+ struct fw_sysfs *fw_sysfs = to_fw_sysfs(dev);
+
+- if (fw_sysfs->fw_upload_priv) {
+- free_fw_priv(fw_sysfs->fw_priv);
+- kfree(fw_sysfs->fw_upload_priv);
+- }
++ if (fw_sysfs->fw_upload_priv)
++ fw_upload_free(fw_sysfs);
++
+ kfree(fw_sysfs);
+ }
+
+diff --git a/drivers/base/firmware_loader/sysfs.h b/drivers/base/firmware_loader/sysfs.h
+index 5d8ff1675c794..df1d5add698f1 100644
+--- a/drivers/base/firmware_loader/sysfs.h
++++ b/drivers/base/firmware_loader/sysfs.h
+@@ -106,12 +106,17 @@ extern struct device_attribute dev_attr_cancel;
+ extern struct device_attribute dev_attr_remaining_size;
+
+ int fw_upload_start(struct fw_sysfs *fw_sysfs);
++void fw_upload_free(struct fw_sysfs *fw_sysfs);
+ umode_t fw_upload_is_visible(struct kobject *kobj, struct attribute *attr, int n);
+ #else
+ static inline int fw_upload_start(struct fw_sysfs *fw_sysfs)
+ {
+ return 0;
+ }
++
++static inline void fw_upload_free(struct fw_sysfs *fw_sysfs)
++{
++}
+ #endif
+
+ #endif /* __FIRMWARE_SYSFS_H */
+diff --git a/drivers/base/firmware_loader/sysfs_upload.c b/drivers/base/firmware_loader/sysfs_upload.c
+index 87044d52322aa..a0af8f5f13d88 100644
+--- a/drivers/base/firmware_loader/sysfs_upload.c
++++ b/drivers/base/firmware_loader/sysfs_upload.c
+@@ -264,6 +264,15 @@ int fw_upload_start(struct fw_sysfs *fw_sysfs)
+ return 0;
+ }
+
++void fw_upload_free(struct fw_sysfs *fw_sysfs)
++{
++ struct fw_upload_priv *fw_upload_priv = fw_sysfs->fw_upload_priv;
++
++ free_fw_priv(fw_sysfs->fw_priv);
++ kfree(fw_upload_priv->fw_upload);
++ kfree(fw_upload_priv);
++}
++
+ /**
+ * firmware_upload_register() - register for the firmware upload sysfs API
+ * @module: kernel module of this device
+@@ -377,6 +386,7 @@ void firmware_upload_unregister(struct fw_upload *fw_upload)
+ {
+ struct fw_sysfs *fw_sysfs = fw_upload->priv;
+ struct fw_upload_priv *fw_upload_priv = fw_sysfs->fw_upload_priv;
++ struct module *module = fw_upload_priv->module;
+
+ mutex_lock(&fw_upload_priv->lock);
+ if (fw_upload_priv->progress == FW_UPLOAD_PROG_IDLE) {
+@@ -392,6 +402,6 @@ void firmware_upload_unregister(struct fw_upload *fw_upload)
+
+ unregister:
+ device_unregister(&fw_sysfs->dev);
+- module_put(fw_upload_priv->module);
++ module_put(module);
+ }
+ EXPORT_SYMBOL_GPL(firmware_upload_unregister);
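In the sysfs_upload.c hunk, firmware_upload_unregister() copies fw_upload_priv->module into a local before device_unregister() runs, because the release path (fw_dev_release -> fw_upload_free) frees the priv structure; the later module_put() must not touch freed memory. A small sketch of the copy-out-before-free pattern, with hypothetical names:

#include <stdlib.h>

struct priv { void *module; };

static void do_unregister(struct priv *p)
{
	free(p);	/* the release callback may free @p here */
}

static void unregister_obj(struct priv *p)
{
	void *module = p->module;	/* copy out before @p can vanish */

	do_unregister(p);
	/* only the local copy is safe to use from here on */
	(void)module;			/* stands in for module_put(module) */
}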
+diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
+index bda5c815e4415..a28473470e662 100644
+--- a/drivers/block/xen-blkback/common.h
++++ b/drivers/block/xen-blkback/common.h
+@@ -226,6 +226,9 @@ struct xen_vbd {
+ sector_t size;
+ unsigned int flush_support:1;
+ unsigned int discard_secure:1;
++ /* Connect-time cached feature_persistent parameter value */
++ unsigned int feature_gnt_persistent_parm:1;
++ /* Persistent grants feature negotiation result */
+ unsigned int feature_gnt_persistent:1;
+ unsigned int overflow_max_grants:1;
+ };
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index ee7ad2fb432d1..c0227dfa46887 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -907,7 +907,7 @@ again:
+ xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support);
+
+ err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+- be->blkif->vbd.feature_gnt_persistent);
++ be->blkif->vbd.feature_gnt_persistent_parm);
+ if (err) {
+ xenbus_dev_fatal(dev, err, "writing %s/feature-persistent",
+ dev->nodename);
+@@ -1085,7 +1085,9 @@ static int connect_ring(struct backend_info *be)
+ return -ENOSYS;
+ }
+
+- blkif->vbd.feature_gnt_persistent = feature_persistent &&
++ blkif->vbd.feature_gnt_persistent_parm = feature_persistent;
++ blkif->vbd.feature_gnt_persistent =
++ blkif->vbd.feature_gnt_persistent_parm &&
+ xenbus_read_unsigned(dev->otherend, "feature-persistent", 0);
+
+ blkif->vbd.overflow_max_grants = 0;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 4e763701b3720..1f85750f981e5 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -213,6 +213,9 @@ struct blkfront_info
+ unsigned int feature_fua:1;
+ unsigned int feature_discard:1;
+ unsigned int feature_secdiscard:1;
++ /* Connect-time cached feature_persistent parameter */
++ unsigned int feature_persistent_parm:1;
++ /* Persistent grants feature negotiation result */
+ unsigned int feature_persistent:1;
+ unsigned int bounce:1;
+ unsigned int discard_granularity;
+@@ -1756,6 +1759,12 @@ abort_transaction:
+ return err;
+ }
+
++/* Enable the persistent grants feature. */
++static bool feature_persistent = true;
++module_param(feature_persistent, bool, 0644);
++MODULE_PARM_DESC(feature_persistent,
++ "Enables the persistent grants feature");
++
+ /* Common code used when first setting up, and when resuming. */
+ static int talk_to_blkback(struct xenbus_device *dev,
+ struct blkfront_info *info)
+@@ -1847,8 +1856,9 @@ again:
+ message = "writing protocol";
+ goto abort_transaction;
+ }
++ info->feature_persistent_parm = feature_persistent;
+ err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+- info->feature_persistent);
++ info->feature_persistent_parm);
+ if (err)
+ dev_warn(&dev->dev,
+ "writing persistent grants feature to xenbus");
+@@ -1916,12 +1926,6 @@ static int negotiate_mq(struct blkfront_info *info)
+ return 0;
+ }
+
+-/* Enable the persistent grants feature. */
+-static bool feature_persistent = true;
+-module_param(feature_persistent, bool, 0644);
+-MODULE_PARM_DESC(feature_persistent,
+- "Enables the persistent grants feature");
+-
+ /*
+ * Entry point to this code when a new device is created. Allocate the basic
+ * structures and the ring buffer for communication with the backend, and
+@@ -2281,7 +2285,7 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
+ blkfront_setup_discard(info);
+
+- if (feature_persistent)
++ if (info->feature_persistent_parm)
+ info->feature_persistent =
+ !!xenbus_read_unsigned(info->xbdev->otherend,
+ "feature-persistent", 0);
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index 73518009a0f20..4df921d1e21ca 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -203,7 +203,7 @@ static unsigned long raspberrypi_fw_get_rate(struct clk_hw *hw,
+ ret = raspberrypi_clock_property(rpi->firmware, data,
+ RPI_FIRMWARE_GET_CLOCK_RATE, &val);
+ if (ret)
+- return ret;
++ return 0;
+
+ return val;
+ }
+@@ -220,7 +220,7 @@ static int raspberrypi_fw_set_rate(struct clk_hw *hw, unsigned long rate,
+ ret = raspberrypi_clock_property(rpi->firmware, data,
+ RPI_FIRMWARE_SET_CLOCK_RATE, &_rate);
+ if (ret)
+- dev_err_ratelimited(rpi->dev, "Failed to change %s frequency: %d",
++ dev_err_ratelimited(rpi->dev, "Failed to change %s frequency: %d\n",
+ clk_hw_get_name(hw), ret);
+
+ return ret;
+@@ -288,7 +288,7 @@ static struct clk_hw *raspberrypi_clk_register(struct raspberrypi_clk *rpi,
+ RPI_FIRMWARE_GET_MIN_CLOCK_RATE,
+ &min_rate);
+ if (ret) {
+- dev_err(rpi->dev, "Failed to get clock %d min freq: %d",
++ dev_err(rpi->dev, "Failed to get clock %d min freq: %d\n",
+ id, ret);
+ return ERR_PTR(ret);
+ }
+@@ -344,8 +344,13 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi,
+ struct rpi_firmware_get_clocks_response *clks;
+ int ret;
+
++ /*
++ * The firmware doesn't guarantee that the last element of
++ * RPI_FIRMWARE_GET_CLOCKS is zeroed. So allocate an additional
++ * zero element as sentinel.
++ */
+ clks = devm_kcalloc(rpi->dev,
+- RPI_FIRMWARE_NUM_CLK_ID, sizeof(*clks),
++ RPI_FIRMWARE_NUM_CLK_ID + 1, sizeof(*clks),
+ GFP_KERNEL);
+ if (!clks)
+ return -ENOMEM;
+@@ -360,7 +365,7 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi,
+ struct raspberrypi_clk_variant *variant;
+
+ if (clks->id > RPI_FIRMWARE_NUM_CLK_ID) {
+- dev_err(rpi->dev, "Unknown clock id: %u", clks->id);
++ dev_err(rpi->dev, "Unknown clock id: %u\n", clks->id);
+ return -EINVAL;
+ }
+
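Since the firmware does not guarantee a zero-terminated clock list, clk-raspberrypi allocates one extra zeroed element as a sentinel so the id-based walk always terminates. The same pattern in a standalone sketch, with an illustrative descriptor type:

#include <stdlib.h>

struct clk_desc { unsigned int id; unsigned int rate; };

/* Allocate @n entries plus one zeroed sentinel, so an id == 0 terminator
 * exists even if the producer fills all @n slots. */
static struct clk_desc *alloc_clock_table(size_t n)
{
	return calloc(n + 1, sizeof(struct clk_desc));
}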
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index f00d4c1158d72..f246d66f8261f 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -840,10 +840,9 @@ static void clk_core_unprepare(struct clk_core *core)
+ if (core->ops->unprepare)
+ core->ops->unprepare(core->hw);
+
+- clk_pm_runtime_put(core);
+-
+ trace_clk_unprepare_complete(core);
+ clk_core_unprepare(core->parent);
++ clk_pm_runtime_put(core);
+ }
+
+ static void clk_core_unprepare_lock(struct clk_core *core)
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 3463579220b51..121d8610beb15 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -143,6 +143,7 @@ static struct device_node *ti_find_clock_provider(struct device_node *from,
+ continue;
+
+ if (!strncmp(n, tmp, strlen(tmp))) {
++ of_node_get(np);
+ found = true;
+ break;
+ }
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 205acb2c744de..e3885c90a3acb 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -295,7 +295,8 @@ void dma_resv_add_fence(struct dma_resv *obj, struct dma_fence *fence,
+ enum dma_resv_usage old_usage;
+
+ dma_resv_list_entry(fobj, i, obj, &old, &old_usage);
+- if ((old->context == fence->context && old_usage >= usage) ||
++ if ((old->context == fence->context && old_usage >= usage &&
++ dma_fence_is_later(fence, old)) ||
+ dma_fence_is_signaled(old)) {
+ dma_resv_list_set(fobj, i, fence, usage);
+ dma_fence_put(old);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index ecd7d169470b0..2925f4d8cef36 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1175,7 +1175,9 @@ static int pca953x_suspend(struct device *dev)
+ {
+ struct pca953x_chip *chip = dev_get_drvdata(dev);
+
++ mutex_lock(&chip->i2c_lock);
+ regcache_cache_only(chip->regmap, true);
++ mutex_unlock(&chip->i2c_lock);
+
+ if (atomic_read(&chip->wakeup_path))
+ device_set_wakeup_path(dev);
+@@ -1198,13 +1200,17 @@ static int pca953x_resume(struct device *dev)
+ }
+ }
+
++ mutex_lock(&chip->i2c_lock);
+ regcache_cache_only(chip->regmap, false);
+ regcache_mark_dirty(chip->regmap);
+ ret = pca953x_regcache_sync(dev);
+- if (ret)
++ if (ret) {
++ mutex_unlock(&chip->i2c_lock);
+ return ret;
++ }
+
+ ret = regcache_sync(chip->regmap);
++ mutex_unlock(&chip->i2c_lock);
+ if (ret) {
+ dev_err(dev, "Failed to restore register map: %d\n", ret);
+ return ret;
+diff --git a/drivers/gpio/gpio-realtek-otto.c b/drivers/gpio/gpio-realtek-otto.c
+index 63dcf42f7c206..d6418f89d3f63 100644
+--- a/drivers/gpio/gpio-realtek-otto.c
++++ b/drivers/gpio/gpio-realtek-otto.c
+@@ -46,10 +46,20 @@
+ * @lock: Lock for accessing the IRQ registers and values
+ * @intr_mask: Mask for interrupts lines
+ * @intr_type: Interrupt type selection
++ * @bank_read: Read a bank setting as a single 32-bit value
++ * @bank_write: Write a bank setting as a single 32-bit value
++ * @imr_line_pos: Bit shift of an IRQ line's IMR value.
++ *
++ * The DIR, DATA, and ISR registers consist of four 8-bit port values, packed
++ * into a single 32-bit register. Use @bank_read (@bank_write) to get (assign)
++ * a value from (to) these registers. The IMR register consists of four 16-bit
++ * port values, packed into two 32-bit registers. Use @imr_line_pos to get the
++ * bit shift of the 2-bit field for a line's IMR settings. Shifts larger than
++ * 32 overflow into the second register.
+ *
+ * Because the interrupt mask register (IMR) combines the function of IRQ type
+ * selection and masking, two extra values are stored. @intr_mask is used to
+- * mask/unmask the interrupts for a GPIO port, and @intr_type is used to store
++ * mask/unmask the interrupts for a GPIO line, and @intr_type is used to store
+ * the selected interrupt types. The logical AND of these values is written to
+ * IMR on changes.
+ */
+@@ -59,10 +69,11 @@ struct realtek_gpio_ctrl {
+ void __iomem *cpumask_base;
+ struct cpumask cpu_irq_maskable;
+ raw_spinlock_t lock;
+- u16 intr_mask[REALTEK_GPIO_PORTS_PER_BANK];
+- u16 intr_type[REALTEK_GPIO_PORTS_PER_BANK];
+- unsigned int (*port_offset_u8)(unsigned int port);
+- unsigned int (*port_offset_u16)(unsigned int port);
++ u8 intr_mask[REALTEK_GPIO_MAX];
++ u8 intr_type[REALTEK_GPIO_MAX];
++ u32 (*bank_read)(void __iomem *reg);
++ void (*bank_write)(void __iomem *reg, u32 value);
++ unsigned int (*line_imr_pos)(unsigned int line);
+ };
+
+ /* Expand with more flags as devices with other quirks are added */
+@@ -101,14 +112,22 @@ static struct realtek_gpio_ctrl *irq_data_to_ctrl(struct irq_data *data)
+ * port. The two interrupt mask registers store two bits per GPIO, so use u16
+ * values.
+ */
+-static unsigned int realtek_gpio_port_offset_u8(unsigned int port)
++static u32 realtek_gpio_bank_read_swapped(void __iomem *reg)
+ {
+- return port;
++ return ioread32be(reg);
+ }
+
+-static unsigned int realtek_gpio_port_offset_u16(unsigned int port)
++static void realtek_gpio_bank_write_swapped(void __iomem *reg, u32 value)
+ {
+- return 2 * port;
++ iowrite32be(value, reg);
++}
++
++static unsigned int realtek_gpio_line_imr_pos_swapped(unsigned int line)
++{
++ unsigned int port_pin = line % 8;
++ unsigned int port = line / 8;
++
++ return 2 * (8 * (port ^ 1) + port_pin);
+ }
+
+ /*
+@@ -119,66 +138,67 @@ static unsigned int realtek_gpio_port_offset_u16(unsigned int port)
+ * per GPIO, so use u16 values. The first register contains ports 1 and 0, the
+ * second ports 3 and 2.
+ */
+-static unsigned int realtek_gpio_port_offset_u8_rev(unsigned int port)
++static u32 realtek_gpio_bank_read(void __iomem *reg)
+ {
+- return 3 - port;
++ return ioread32(reg);
+ }
+
+-static unsigned int realtek_gpio_port_offset_u16_rev(unsigned int port)
++static void realtek_gpio_bank_write(void __iomem *reg, u32 value)
+ {
+- return 2 * (port ^ 1);
++ iowrite32(value, reg);
+ }
+
+-static void realtek_gpio_write_imr(struct realtek_gpio_ctrl *ctrl,
+- unsigned int port, u16 irq_type, u16 irq_mask)
++static unsigned int realtek_gpio_line_imr_pos(unsigned int line)
+ {
+- iowrite16(irq_type & irq_mask,
+- ctrl->base + REALTEK_GPIO_REG_IMR + ctrl->port_offset_u16(port));
++ return 2 * line;
+ }
+
+-static void realtek_gpio_clear_isr(struct realtek_gpio_ctrl *ctrl,
+- unsigned int port, u8 mask)
++static void realtek_gpio_clear_isr(struct realtek_gpio_ctrl *ctrl, u32 mask)
+ {
+- iowrite8(mask, ctrl->base + REALTEK_GPIO_REG_ISR + ctrl->port_offset_u8(port));
++ ctrl->bank_write(ctrl->base + REALTEK_GPIO_REG_ISR, mask);
+ }
+
+-static u8 realtek_gpio_read_isr(struct realtek_gpio_ctrl *ctrl, unsigned int port)
++static u32 realtek_gpio_read_isr(struct realtek_gpio_ctrl *ctrl)
+ {
+- return ioread8(ctrl->base + REALTEK_GPIO_REG_ISR + ctrl->port_offset_u8(port));
++ return ctrl->bank_read(ctrl->base + REALTEK_GPIO_REG_ISR);
+ }
+
+-/* Set the rising and falling edge mask bits for a GPIO port pin */
+-static u16 realtek_gpio_imr_bits(unsigned int pin, u16 value)
++/* Set the rising and falling edge mask bits for a GPIO pin */
++static void realtek_gpio_update_line_imr(struct realtek_gpio_ctrl *ctrl, unsigned int line)
+ {
+- return (value & REALTEK_GPIO_IMR_LINE_MASK) << 2 * pin;
++ void __iomem *reg = ctrl->base + REALTEK_GPIO_REG_IMR;
++ unsigned int line_shift = ctrl->line_imr_pos(line);
++ unsigned int shift = line_shift % 32;
++ u32 irq_type = ctrl->intr_type[line];
++ u32 irq_mask = ctrl->intr_mask[line];
++ u32 reg_val;
++
++ reg += 4 * (line_shift / 32);
++ reg_val = ioread32(reg);
++ reg_val &= ~(REALTEK_GPIO_IMR_LINE_MASK << shift);
++ reg_val |= (irq_type & irq_mask & REALTEK_GPIO_IMR_LINE_MASK) << shift;
++ iowrite32(reg_val, reg);
+ }
+
+ static void realtek_gpio_irq_ack(struct irq_data *data)
+ {
+ struct realtek_gpio_ctrl *ctrl = irq_data_to_ctrl(data);
+ irq_hw_number_t line = irqd_to_hwirq(data);
+- unsigned int port = line / 8;
+- unsigned int port_pin = line % 8;
+
+- realtek_gpio_clear_isr(ctrl, port, BIT(port_pin));
++ realtek_gpio_clear_isr(ctrl, BIT(line));
+ }
+
+ static void realtek_gpio_irq_unmask(struct irq_data *data)
+ {
+ struct realtek_gpio_ctrl *ctrl = irq_data_to_ctrl(data);
+ unsigned int line = irqd_to_hwirq(data);
+- unsigned int port = line / 8;
+- unsigned int port_pin = line % 8;
+ unsigned long flags;
+- u16 m;
+
+ gpiochip_enable_irq(&ctrl->gc, line);
+
+ raw_spin_lock_irqsave(&ctrl->lock, flags);
+- m = ctrl->intr_mask[port];
+- m |= realtek_gpio_imr_bits(port_pin, REALTEK_GPIO_IMR_LINE_MASK);
+- ctrl->intr_mask[port] = m;
+- realtek_gpio_write_imr(ctrl, port, ctrl->intr_type[port], m);
++ ctrl->intr_mask[line] = REALTEK_GPIO_IMR_LINE_MASK;
++ realtek_gpio_update_line_imr(ctrl, line);
+ raw_spin_unlock_irqrestore(&ctrl->lock, flags);
+ }
+
+@@ -186,16 +206,11 @@ static void realtek_gpio_irq_mask(struct irq_data *data)
+ {
+ struct realtek_gpio_ctrl *ctrl = irq_data_to_ctrl(data);
+ unsigned int line = irqd_to_hwirq(data);
+- unsigned int port = line / 8;
+- unsigned int port_pin = line % 8;
+ unsigned long flags;
+- u16 m;
+
+ raw_spin_lock_irqsave(&ctrl->lock, flags);
+- m = ctrl->intr_mask[port];
+- m &= ~realtek_gpio_imr_bits(port_pin, REALTEK_GPIO_IMR_LINE_MASK);
+- ctrl->intr_mask[port] = m;
+- realtek_gpio_write_imr(ctrl, port, ctrl->intr_type[port], m);
++ ctrl->intr_mask[line] = 0;
++ realtek_gpio_update_line_imr(ctrl, line);
+ raw_spin_unlock_irqrestore(&ctrl->lock, flags);
+
+ gpiochip_disable_irq(&ctrl->gc, line);
+@@ -205,10 +220,8 @@ static int realtek_gpio_irq_set_type(struct irq_data *data, unsigned int flow_ty
+ {
+ struct realtek_gpio_ctrl *ctrl = irq_data_to_ctrl(data);
+ unsigned int line = irqd_to_hwirq(data);
+- unsigned int port = line / 8;
+- unsigned int port_pin = line % 8;
+ unsigned long flags;
+- u16 type, t;
++ u8 type;
+
+ switch (flow_type & IRQ_TYPE_SENSE_MASK) {
+ case IRQ_TYPE_EDGE_FALLING:
+@@ -227,11 +240,8 @@ static int realtek_gpio_irq_set_type(struct irq_data *data, unsigned int flow_ty
+ irq_set_handler_locked(data, handle_edge_irq);
+
+ raw_spin_lock_irqsave(&ctrl->lock, flags);
+- t = ctrl->intr_type[port];
+- t &= ~realtek_gpio_imr_bits(port_pin, REALTEK_GPIO_IMR_LINE_MASK);
+- t |= realtek_gpio_imr_bits(port_pin, type);
+- ctrl->intr_type[port] = t;
+- realtek_gpio_write_imr(ctrl, port, t, ctrl->intr_mask[port]);
++ ctrl->intr_type[line] = type;
++ realtek_gpio_update_line_imr(ctrl, line);
+ raw_spin_unlock_irqrestore(&ctrl->lock, flags);
+
+ return 0;
+@@ -242,28 +252,21 @@ static void realtek_gpio_irq_handler(struct irq_desc *desc)
+ struct gpio_chip *gc = irq_desc_get_handler_data(desc);
+ struct realtek_gpio_ctrl *ctrl = gpiochip_get_data(gc);
+ struct irq_chip *irq_chip = irq_desc_get_chip(desc);
+- unsigned int lines_done;
+- unsigned int port_pin_count;
+ unsigned long status;
+ int offset;
+
+ chained_irq_enter(irq_chip, desc);
+
+- for (lines_done = 0; lines_done < gc->ngpio; lines_done += 8) {
+- status = realtek_gpio_read_isr(ctrl, lines_done / 8);
+- port_pin_count = min(gc->ngpio - lines_done, 8U);
+- for_each_set_bit(offset, &status, port_pin_count)
+- generic_handle_domain_irq(gc->irq.domain, offset + lines_done);
+- }
++ status = realtek_gpio_read_isr(ctrl);
++ for_each_set_bit(offset, &status, gc->ngpio)
++ generic_handle_domain_irq(gc->irq.domain, offset);
+
+ chained_irq_exit(irq_chip, desc);
+ }
+
+-static inline void __iomem *realtek_gpio_irq_cpu_mask(struct realtek_gpio_ctrl *ctrl,
+- unsigned int port, int cpu)
++static inline void __iomem *realtek_gpio_irq_cpu_mask(struct realtek_gpio_ctrl *ctrl, int cpu)
+ {
+- return ctrl->cpumask_base + ctrl->port_offset_u8(port) +
+- REALTEK_GPIO_PORTS_PER_BANK * cpu;
++ return ctrl->cpumask_base + REALTEK_GPIO_PORTS_PER_BANK * cpu;
+ }
+
+ static int realtek_gpio_irq_set_affinity(struct irq_data *data,
+@@ -271,12 +274,10 @@ static int realtek_gpio_irq_set_affinity(struct irq_data *data,
+ {
+ struct realtek_gpio_ctrl *ctrl = irq_data_to_ctrl(data);
+ unsigned int line = irqd_to_hwirq(data);
+- unsigned int port = line / 8;
+- unsigned int port_pin = line % 8;
+ void __iomem *irq_cpu_mask;
+ unsigned long flags;
+ int cpu;
+- u8 v;
++ u32 v;
+
+ if (!ctrl->cpumask_base)
+ return -ENXIO;
+@@ -284,15 +285,15 @@ static int realtek_gpio_irq_set_affinity(struct irq_data *data,
+ raw_spin_lock_irqsave(&ctrl->lock, flags);
+
+ for_each_cpu(cpu, &ctrl->cpu_irq_maskable) {
+- irq_cpu_mask = realtek_gpio_irq_cpu_mask(ctrl, port, cpu);
+- v = ioread8(irq_cpu_mask);
++ irq_cpu_mask = realtek_gpio_irq_cpu_mask(ctrl, cpu);
++ v = ctrl->bank_read(irq_cpu_mask);
+
+ if (cpumask_test_cpu(cpu, dest))
+- v |= BIT(port_pin);
++ v |= BIT(line);
+ else
+- v &= ~BIT(port_pin);
++ v &= ~BIT(line);
+
+- iowrite8(v, irq_cpu_mask);
++ ctrl->bank_write(irq_cpu_mask, v);
+ }
+
+ raw_spin_unlock_irqrestore(&ctrl->lock, flags);
+@@ -305,16 +306,17 @@ static int realtek_gpio_irq_set_affinity(struct irq_data *data,
+ static int realtek_gpio_irq_init(struct gpio_chip *gc)
+ {
+ struct realtek_gpio_ctrl *ctrl = gpiochip_get_data(gc);
+- unsigned int port;
++ u32 mask_all = GENMASK(gc->ngpio - 1, 0);
++ unsigned int line;
+ int cpu;
+
+- for (port = 0; (port * 8) < gc->ngpio; port++) {
+- realtek_gpio_write_imr(ctrl, port, 0, 0);
+- realtek_gpio_clear_isr(ctrl, port, GENMASK(7, 0));
++ for (line = 0; line < gc->ngpio; line++)
++ realtek_gpio_update_line_imr(ctrl, line);
+
+- for_each_cpu(cpu, &ctrl->cpu_irq_maskable)
+- iowrite8(GENMASK(7, 0), realtek_gpio_irq_cpu_mask(ctrl, port, cpu));
+- }
++ realtek_gpio_clear_isr(ctrl, mask_all);
++
++ for_each_cpu(cpu, &ctrl->cpu_irq_maskable)
++ ctrl->bank_write(realtek_gpio_irq_cpu_mask(ctrl, cpu), mask_all);
+
+ return 0;
+ }
+@@ -387,12 +389,14 @@ static int realtek_gpio_probe(struct platform_device *pdev)
+
+ if (dev_flags & GPIO_PORTS_REVERSED) {
+ bgpio_flags = 0;
+- ctrl->port_offset_u8 = realtek_gpio_port_offset_u8_rev;
+- ctrl->port_offset_u16 = realtek_gpio_port_offset_u16_rev;
++ ctrl->bank_read = realtek_gpio_bank_read;
++ ctrl->bank_write = realtek_gpio_bank_write;
++ ctrl->line_imr_pos = realtek_gpio_line_imr_pos;
+ } else {
+ bgpio_flags = BGPIOF_BIG_ENDIAN_BYTE_ORDER;
+- ctrl->port_offset_u8 = realtek_gpio_port_offset_u8;
+- ctrl->port_offset_u16 = realtek_gpio_port_offset_u16;
++ ctrl->bank_read = realtek_gpio_bank_read_swapped;
++ ctrl->bank_write = realtek_gpio_bank_write_swapped;
++ ctrl->line_imr_pos = realtek_gpio_line_imr_pos_swapped;
+ }
+
+ err = bgpio_init(&ctrl->gc, dev, 4,
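
The hunks above collapse the old port arithmetic (line / 8, line % 8) into flat
per-line state: intr_mask[] and intr_type[] are now indexed directly by GPIO
line, and mask, unmask and set_type all reduce to "update the cached value,
then call realtek_gpio_update_line_imr()". A minimal sketch of what such a
helper amounts to, assuming a 2-bit IMR field per line and reusing the
line_imr_pos, bank_read and bank_write hooks named in the diff (the register
offset below is illustrative, not taken from the driver):

	static void realtek_gpio_update_line_imr(struct realtek_gpio_ctrl *ctrl,
						 unsigned int line)
	{
		/* Bit position of this line's 2-bit trigger field */
		unsigned int pos = ctrl->line_imr_pos(line);
		void __iomem *reg = ctrl->base + REALTEK_GPIO_REG_IMR; /* assumed */
		u32 imr = ctrl->bank_read(reg);

		imr &= ~(REALTEK_GPIO_IMR_LINE_MASK << pos);
		/* A masked line contributes 0; an unmasked one its trigger type */
		imr |= (ctrl->intr_type[line] & ctrl->intr_mask[line]) << pos;
		ctrl->bank_write(reg, imr);
	}

Keeping mask and type per line means this helper is the only place that needs
to know the packed register layout, which is what makes the port/port_pin
bookkeeping removable everywhere else.
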
+diff --git a/drivers/gpu/drm/i915/display/intel_backlight.c b/drivers/gpu/drm/i915/display/intel_backlight.c
+index c8e1fc53a881f..3e200a2e4ba29 100644
+--- a/drivers/gpu/drm/i915/display/intel_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_backlight.c
+@@ -15,6 +15,7 @@
+ #include "intel_dsi_dcs_backlight.h"
+ #include "intel_panel.h"
+ #include "intel_pci_config.h"
++#include "intel_pps.h"
+
+ /**
+ * scale - scale values from one range to another
+@@ -970,26 +971,24 @@ int intel_backlight_device_register(struct intel_connector *connector)
+ if (!name)
+ return -ENOMEM;
+
+- bd = backlight_device_register(name, connector->base.kdev, connector,
+- &intel_backlight_device_ops, &props);
+-
+- /*
+- * Using the same name independent of the drm device or connector
+- * prevents registration of multiple backlight devices in the
+- * driver. However, we need to use the default name for backward
+- * compatibility. Use unique names for subsequent backlight devices as a
+- * fallback when the default name already exists.
+- */
+- if (IS_ERR(bd) && PTR_ERR(bd) == -EEXIST) {
++ bd = backlight_device_get_by_name(name);
++ if (bd) {
++ put_device(&bd->dev);
++ /*
++ * Using the same name independent of the drm device or connector
++ * prevents registration of multiple backlight devices in the
++ * driver. However, we need to use the default name for backward
++ * compatibility. Use unique names for subsequent backlight devices as a
++ * fallback when the default name already exists.
++ */
+ kfree(name);
+ name = kasprintf(GFP_KERNEL, "card%d-%s-backlight",
+ i915->drm.primary->index, connector->base.name);
+ if (!name)
+ return -ENOMEM;
+-
+- bd = backlight_device_register(name, connector->base.kdev, connector,
+- &intel_backlight_device_ops, &props);
+ }
++ bd = backlight_device_register(name, connector->base.kdev, connector,
++ &intel_backlight_device_ops, &props);
+
+ if (IS_ERR(bd)) {
+ drm_err(&i915->drm,
+@@ -1771,9 +1770,13 @@ void intel_backlight_init_funcs(struct intel_panel *panel)
+ panel->backlight.pwm_funcs = &i9xx_pwm_funcs;
+ }
+
+- if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP &&
+- intel_dp_aux_init_backlight_funcs(connector) == 0)
+- return;
++ if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP) {
++ if (intel_dp_aux_init_backlight_funcs(connector) == 0)
++ return;
++
++ if (!(dev_priv->quirks & QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK))
++ connector->panel.backlight.power = intel_pps_backlight_power;
++ }
+
+ /* We're using a standard PWM backlight interface */
+ panel->backlight.funcs = &pwm_bl_funcs;
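
The backlight registration rework above inverts the old flow: instead of
registering and special-casing an -EEXIST failure, the driver now asks up
front whether the default name is taken and only then builds the unique
card%d-%s-backlight fallback, so backlight_device_register() runs exactly
once. backlight_device_get_by_name() returns a referenced device, hence the
immediate put_device(). The pattern in isolation (generic names;
pick_unique_name() is a hypothetical stand-in for the kasprintf() above):

	struct backlight_device *bd;

	bd = backlight_device_get_by_name(name);	/* NULL if name is free */
	if (bd) {
		put_device(&bd->dev);			/* drop lookup reference */
		name = pick_unique_name();		/* hypothetical helper */
	}
	bd = backlight_device_register(name, kdev, data, ops, &props);
	if (IS_ERR(bd))
		return PTR_ERR(bd);
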
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index 37bd7b17f3d0b..f2fad199e2e0b 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -404,15 +404,17 @@ static int tgl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
+ int clpchgroup;
+ int j;
+
+- if (i < num_groups - 1)
+- bi_next = &dev_priv->max_bw[i + 1];
+-
+ clpchgroup = (sa->deburst * qi.deinterleave / num_channels) << i;
+
+- if (i < num_groups - 1 && clpchgroup < clperchgroup)
+- bi_next->num_planes = (ipqdepth - clpchgroup) / clpchgroup + 1;
+- else
+- bi_next->num_planes = 0;
++ if (i < num_groups - 1) {
++ bi_next = &dev_priv->max_bw[i + 1];
++
++ if (clpchgroup < clperchgroup)
++ bi_next->num_planes = (ipqdepth - clpchgroup) /
++ clpchgroup + 1;
++ else
++ bi_next->num_planes = 0;
++ }
+
+ bi->num_qgv_points = qi.num_points;
+ bi->num_psf_gv_points = qi.num_psf_points;
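
The tgl_get_bw_info() change fixes an ordering bug: bi_next was only assigned
while i < num_groups - 1, yet the old else branch could still execute
bi_next->num_planes = 0 on the final iteration, writing through whatever the
previous pass left in the pointer (or through an uninitialized pointer on a
single-group run). Folding every bi_next access under the bounds check is the
generic cure, roughly (fits() and planes_for() are hypothetical helpers):

	struct intel_bw_info *next;
	int i;

	for (i = 0; i < num_groups; i++) {
		if (i < num_groups - 1) {
			next = &info[i + 1];
			/* dereference 'next' only while it provably exists */
			next->num_planes = fits(i) ? planes_for(i) : 0;
		}
	}
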
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index ff67899522cf7..41aaa6c98114f 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5248,8 +5248,6 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
+
+ intel_panel_init(intel_connector);
+
+- if (!(dev_priv->quirks & QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK))
+- intel_connector->panel.backlight.power = intel_pps_backlight_power;
+ intel_backlight_setup(intel_connector, pipe);
+
+ intel_edp_add_properties(intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
+index c8488f5ebd044..e415cd7c0b84b 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.c
++++ b/drivers/gpu/drm/i915/display/intel_quirks.c
+@@ -191,6 +191,9 @@ static struct intel_quirk intel_quirks[] = {
+ /* ASRock ITX*/
+ { 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
+ { 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
++ /* ECS Liva Q2 */
++ { 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
++ { 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+ };
+
+ void intel_init_quirks(struct drm_i915_private *i915)
+diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
+index 2b10b96b17b5b..933648cc90ff9 100644
+--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
++++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
+@@ -638,9 +638,9 @@ static int emit_copy(struct i915_request *rq,
+ return 0;
+ }
+
+-static int scatter_list_length(struct scatterlist *sg)
++static u64 scatter_list_length(struct scatterlist *sg)
+ {
+- int len = 0;
++ u64 len = 0;
+
+ while (sg && sg_dma_len(sg)) {
+ len += sg_dma_len(sg);
+@@ -650,28 +650,26 @@ static int scatter_list_length(struct scatterlist *sg)
+ return len;
+ }
+
+-static void
++static int
+ calculate_chunk_sz(struct drm_i915_private *i915, bool src_is_lmem,
+- int *src_sz, u32 bytes_to_cpy, u32 ccs_bytes_to_cpy)
++ u64 bytes_to_cpy, u64 ccs_bytes_to_cpy)
+ {
+- if (ccs_bytes_to_cpy) {
+- if (!src_is_lmem)
+- /*
+- * When CHUNK_SZ is passed all the pages upto CHUNK_SZ
+- * will be taken for the blt. in Flat-ccs supported
+- * platform Smem obj will have more pages than required
+- * for main meory hence limit it to the required size
+- * for main memory
+- */
+- *src_sz = min_t(int, bytes_to_cpy, CHUNK_SZ);
+- } else { /* ccs handling is not required */
+- *src_sz = CHUNK_SZ;
+- }
++ if (ccs_bytes_to_cpy && !src_is_lmem)
++ /*
++ * When CHUNK_SZ is passed, all the pages up to CHUNK_SZ
++ * will be taken for the blt. On Flat-CCS supported
++ * platforms an Smem object will have more pages than
++ * required for main memory, hence limit it to the
++ * required size for main memory.
++ */
++ return min_t(u64, bytes_to_cpy, CHUNK_SZ);
++ else
++ return CHUNK_SZ;
+ }
+
+-static void get_ccs_sg_sgt(struct sgt_dma *it, u32 bytes_to_cpy)
++static void get_ccs_sg_sgt(struct sgt_dma *it, u64 bytes_to_cpy)
+ {
+- u32 len;
++ u64 len;
+
+ do {
+ GEM_BUG_ON(!it->sg || !sg_dma_len(it->sg));
+@@ -702,12 +700,12 @@ intel_context_migrate_copy(struct intel_context *ce,
+ {
+ struct sgt_dma it_src = sg_sgt(src), it_dst = sg_sgt(dst), it_ccs;
+ struct drm_i915_private *i915 = ce->engine->i915;
+- u32 ccs_bytes_to_cpy = 0, bytes_to_cpy;
++ u64 ccs_bytes_to_cpy = 0, bytes_to_cpy;
+ enum i915_cache_level ccs_cache_level;
+ u32 src_offset, dst_offset;
+ u8 src_access, dst_access;
+ struct i915_request *rq;
+- int src_sz, dst_sz;
++ u64 src_sz, dst_sz;
+ bool ccs_is_src, overwrite_ccs;
+ int err;
+
+@@ -790,8 +788,8 @@ intel_context_migrate_copy(struct intel_context *ce,
+ if (err)
+ goto out_rq;
+
+- calculate_chunk_sz(i915, src_is_lmem, &src_sz,
+- bytes_to_cpy, ccs_bytes_to_cpy);
++ src_sz = calculate_chunk_sz(i915, src_is_lmem,
++ bytes_to_cpy, ccs_bytes_to_cpy);
+
+ len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem,
+ src_offset, src_sz);
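
The widening to u64 here is not cosmetic: an sg list for a local-memory object
can describe more than 4 GiB, and summing sg_dma_len() into an int or u32
silently wraps. For example, a 5 GiB object is 5368709120 bytes, but
(u32)5368709120 == 1073741824, so bytes_to_cpy would start out 4 GiB short and
the copy loop would terminate early. Returning the chunk size from
calculate_chunk_sz() instead of writing through an int pointer also keeps the
clamp in 64-bit arithmetic, where

	min_t(u64, bytes_to_cpy, CHUNK_SZ)

compares the untruncated values.
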
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index 2d9f5f1c79d3a..26a051ef119df 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -4010,6 +4010,13 @@ static inline void guc_init_lrc_mapping(struct intel_guc *guc)
+ /* make sure all descriptors are clean... */
+ xa_destroy(&guc->context_lookup);
+
++ /*
++ * A reset might have occurred while we had a pending stalled request,
++ * so make sure we clean that up.
++ */
++ guc->stalled_request = NULL;
++ guc->submission_stall_reason = STALL_NONE;
++
+ /*
+ * Some contexts might have been pinned before we enabled GuC
+ * submission, so we need to add them to the GuC bookkeeping.
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index beea5895e4992..73e74a6a76037 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -905,7 +905,7 @@ static int update_fdi_rx_iir_status(struct intel_vgpu *vgpu,
+ else if (FDI_RX_IMR_TO_PIPE(offset) != INVALID_INDEX)
+ index = FDI_RX_IMR_TO_PIPE(offset);
+ else {
+- gvt_vgpu_err("Unsupport registers %x\n", offset);
++ gvt_vgpu_err("Unsupported registers %x\n", offset);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/gpu/drm/i915/intel_gvt_mmio_table.c b/drivers/gpu/drm/i915/intel_gvt_mmio_table.c
+index 72dac1718f3e7..6163aeaee9b98 100644
+--- a/drivers/gpu/drm/i915/intel_gvt_mmio_table.c
++++ b/drivers/gpu/drm/i915/intel_gvt_mmio_table.c
+@@ -1074,7 +1074,8 @@ static int iterate_skl_plus_mmio(struct intel_gvt_mmio_table_iter *iter)
+ MMIO_D(GEN8_HDC_CHICKEN1);
+ MMIO_D(GEN9_WM_CHICKEN3);
+
+- if (IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv))
++ if (IS_KABYLAKE(dev_priv) ||
++ IS_COFFEELAKE(dev_priv) || IS_COMETLAKE(dev_priv))
+ MMIO_D(GAMT_CHKN_BIT_REG);
+ if (!IS_BROXTON(dev_priv))
+ MMIO_D(GEN9_CTX_PREEMPT_REG);
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 5735915facc51..7d5803f2343a9 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -6560,7 +6560,10 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
+ enum plane_id plane_id;
+ u8 slices;
+
+- skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
++ memset(&crtc_state->wm.skl.optimal, 0,
++ sizeof(crtc_state->wm.skl.optimal));
++ if (crtc_state->hw.active)
++ skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
+ crtc_state->wm.skl.raw = crtc_state->wm.skl.optimal;
+
+ memset(&dbuf_state->ddb[pipe], 0, sizeof(dbuf_state->ddb[pipe]));
+@@ -6571,6 +6574,9 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
+ struct skl_ddb_entry *ddb_y =
+ &crtc_state->wm.skl.plane_ddb_y[plane_id];
+
++ if (!crtc_state->hw.active)
++ continue;
++
+ skl_ddb_get_hw_plane_state(dev_priv, crtc->pipe,
+ plane_id, ddb, ddb_y);
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 9b4df3084366b..d98c7f7da7c08 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1998,6 +1998,12 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
+
+ intf_cfg.stream_sel = 0; /* Don't care value for video mode */
+ intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);
++
++ if (phys_enc->hw_intf)
++ intf_cfg.intf = phys_enc->hw_intf->idx;
++ if (phys_enc->hw_wb)
++ intf_cfg.wb = phys_enc->hw_wb->idx;
++
+ if (phys_enc->hw_pp->merge_3d)
+ intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx;
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 703249384e7c7..45aa06a31a9fd 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1214,7 +1214,7 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ if (ret)
+ return ret;
+
+- dp_ctrl_train_pattern_set(ctrl, pattern | DP_RECOVERED_CLOCK_OUT_EN);
++ dp_ctrl_train_pattern_set(ctrl, pattern);
+
+ for (tries = 0; tries <= maximum_retries; tries++) {
+ drm_dp_link_train_channel_eq_delay(ctrl->aux, ctrl->panel->dpcd);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index 2c23324a2296b..72c018e26f47f 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -109,7 +109,7 @@ static const char * const dsi_8996_bus_clk_names[] = {
+ static const struct msm_dsi_config msm8996_dsi_cfg = {
+ .io_offset = DSI_6G_REG_SHIFT,
+ .reg_cfg = {
+- .num = 2,
++ .num = 3,
+ .regs = {
+ {"vdda", 18160, 1 }, /* 1.25 V */
+ {"vcca", 17000, 32 }, /* 0.925 V */
+@@ -148,7 +148,7 @@ static const char * const dsi_sdm660_bus_clk_names[] = {
+ static const struct msm_dsi_config sdm660_dsi_cfg = {
+ .io_offset = DSI_6G_REG_SHIFT,
+ .reg_cfg = {
+- .num = 2,
++ .num = 1,
+ .regs = {
+ {"vdda", 12560, 4 }, /* 1.2 V */
+ },
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index a39de3bdc7faf..56dfa2d24be1f 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -347,7 +347,7 @@ int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing,
+ } else {
+ timing->shared_timings.clk_pre =
+ linear_inter(tmax, tmin, pcnt2, 0, false);
+- timing->shared_timings.clk_pre_inc_by_2 = 0;
++ timing->shared_timings.clk_pre_inc_by_2 = 0;
+ }
+
+ timing->ta_go = 3;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 14ab9a627d8b0..7c0314d6566af 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -424,6 +424,8 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ }
+ }
+
++ drm_helper_move_panel_connectors_to_head(ddev);
++
+ ddev->mode_config.funcs = &mode_config_funcs;
+ ddev->mode_config.helper_private = &mode_config_helper_funcs;
+
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index ea94bc18e72eb..89cc93eb67557 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -213,6 +213,8 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+
+ if (IS_ERR(df->devfreq)) {
+ DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
++ dev_pm_qos_remove_request(&df->idle_freq);
++ dev_pm_qos_remove_request(&df->boost_freq);
+ df->devfreq = NULL;
+ return;
+ }
+diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c
+index befe989ca7b94..fbf3f5a4ecb67 100644
+--- a/drivers/hwmon/gpio-fan.c
++++ b/drivers/hwmon/gpio-fan.c
+@@ -391,6 +391,9 @@ static int gpio_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ if (!fan_data)
+ return -EINVAL;
+
++ if (state >= fan_data->num_speed)
++ return -EINVAL;
++
+ set_fan_speed(fan_data, state);
+ return 0;
+ }
+diff --git a/drivers/iio/adc/ad7292.c b/drivers/iio/adc/ad7292.c
+index 92c68d467c505..a2f9fda25ff34 100644
+--- a/drivers/iio/adc/ad7292.c
++++ b/drivers/iio/adc/ad7292.c
+@@ -287,10 +287,8 @@ static int ad7292_probe(struct spi_device *spi)
+
+ ret = devm_add_action_or_reset(&spi->dev,
+ ad7292_regulator_disable, st);
+- if (ret) {
+- regulator_disable(st->reg);
++ if (ret)
+ return ret;
+- }
+
+ ret = regulator_get_voltage(st->reg);
+ if (ret < 0)
+diff --git a/drivers/iio/adc/mcp3911.c b/drivers/iio/adc/mcp3911.c
+index 1cb4590fe4125..890af7dca62de 100644
+--- a/drivers/iio/adc/mcp3911.c
++++ b/drivers/iio/adc/mcp3911.c
+@@ -40,8 +40,8 @@
+ #define MCP3911_CHANNEL(x) (MCP3911_REG_CHANNEL0 + x * 3)
+ #define MCP3911_OFFCAL(x) (MCP3911_REG_OFFCAL_CH0 + x * 6)
+
+-/* Internal voltage reference in uV */
+-#define MCP3911_INT_VREF_UV 1200000
++/* Internal voltage reference in mV */
++#define MCP3911_INT_VREF_MV 1200
+
+ #define MCP3911_REG_READ(reg, id) ((((reg) << 1) | ((id) << 5) | (1 << 0)) & 0xff)
+ #define MCP3911_REG_WRITE(reg, id) ((((reg) << 1) | ((id) << 5) | (0 << 0)) & 0xff)
+@@ -113,6 +113,8 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+ if (ret)
+ goto out;
+
++ *val = sign_extend32(*val, 23);
++
+ ret = IIO_VAL_INT;
+ break;
+
+@@ -137,11 +139,18 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+
+ *val = ret / 1000;
+ } else {
+- *val = MCP3911_INT_VREF_UV;
++ *val = MCP3911_INT_VREF_MV;
+ }
+
+- *val2 = 24;
+- ret = IIO_VAL_FRACTIONAL_LOG2;
++ /*
++ * For 24bit Conversion
++ * Raw = ((Voltage)/(Vref) * 2^23 * Gain * 1.5
++ * Voltage = Raw * (Vref)/(2^23 * Gain * 1.5)
++ */
++
++ /* val2 = (2^23 * 1.5) */
++ *val2 = 12582912;
++ ret = IIO_VAL_FRACTIONAL;
+ break;
+ }
+
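
With IIO_VAL_FRACTIONAL the reported scale becomes val/val2 =
1200 / 12582912 mV per code, which is exactly Vref / (2^23 * 1.5) from the
comment above (8388608 * 1.5 = 12582912). Checking the formula
Voltage = Raw * Vref / (2^23 * Gain * 1.5) with gain 1: a raw reading of
4194304 (2^22) works out to 4194304 * 1200 / 12582912 = 400 mV, and a
full-scale 12582912 maps back to Vref = 1200 mV. The old
IIO_VAL_FRACTIONAL_LOG2 form divided the microvolt reference by 2^24 instead,
i.e. 1200000 / 16777216 vs. 1200 / 12582912: an overall factor-of-750 error
(x1000 from the uV/mV mix-up, x0.75 from using 2^24 in place of 2^23 * 1.5).
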
+@@ -208,7 +217,14 @@ static int mcp3911_config(struct mcp3911 *adc)
+ u32 configreg;
+ int ret;
+
+- device_property_read_u32(dev, "device-addr", &adc->dev_addr);
++ ret = device_property_read_u32(dev, "microchip,device-addr", &adc->dev_addr);
++
++ /*
++ * Fallback to "device-addr" due to historical mismatch between
++ * dt-bindings and implementation
++ */
++ if (ret)
++ device_property_read_u32(dev, "device-addr", &adc->dev_addr);
+ if (adc->dev_addr > 3) {
+ dev_err(&adc->spi->dev,
+ "invalid device address (%i). Must be in range 0-3.\n",
+diff --git a/drivers/iio/light/cm3605.c b/drivers/iio/light/cm3605.c
+index 50d34a98839c0..a68b95a79d482 100644
+--- a/drivers/iio/light/cm3605.c
++++ b/drivers/iio/light/cm3605.c
+@@ -226,8 +226,10 @@ static int cm3605_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return dev_err_probe(dev, irq, "failed to get irq\n");
++ if (irq < 0) {
++ ret = dev_err_probe(dev, irq, "failed to get irq\n");
++ goto out_disable_aset;
++ }
+
+ ret = devm_request_threaded_irq(dev, irq, cm3605_prox_irq,
+ NULL, 0, "cm3605", indio_dev);
+diff --git a/drivers/input/joystick/iforce/iforce-serio.c b/drivers/input/joystick/iforce/iforce-serio.c
+index f95a81b9fac72..2380546d79782 100644
+--- a/drivers/input/joystick/iforce/iforce-serio.c
++++ b/drivers/input/joystick/iforce/iforce-serio.c
+@@ -39,7 +39,7 @@ static void iforce_serio_xmit(struct iforce *iforce)
+
+ again:
+ if (iforce->xmit.head == iforce->xmit.tail) {
+- clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++ iforce_clear_xmit_and_wake(iforce);
+ spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ return;
+ }
+@@ -64,7 +64,7 @@ again:
+ if (test_and_clear_bit(IFORCE_XMIT_AGAIN, iforce->xmit_flags))
+ goto again;
+
+- clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++ iforce_clear_xmit_and_wake(iforce);
+
+ spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ }
+@@ -169,7 +169,7 @@ static irqreturn_t iforce_serio_irq(struct serio *serio,
+ iforce_serio->cmd_response_len = iforce_serio->len;
+
+ /* Signal that command is done */
+- wake_up(&iforce->wait);
++ wake_up_all(&iforce->wait);
+ } else if (likely(iforce->type)) {
+ iforce_process_packet(iforce, iforce_serio->id,
+ iforce_serio->data_in,
+diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
+index ea58805c480fa..cba92bd590a8d 100644
+--- a/drivers/input/joystick/iforce/iforce-usb.c
++++ b/drivers/input/joystick/iforce/iforce-usb.c
+@@ -30,7 +30,7 @@ static void __iforce_usb_xmit(struct iforce *iforce)
+ spin_lock_irqsave(&iforce->xmit_lock, flags);
+
+ if (iforce->xmit.head == iforce->xmit.tail) {
+- clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++ iforce_clear_xmit_and_wake(iforce);
+ spin_unlock_irqrestore(&iforce->xmit_lock, flags);
+ return;
+ }
+@@ -58,9 +58,9 @@ static void __iforce_usb_xmit(struct iforce *iforce)
+ XMIT_INC(iforce->xmit.tail, n);
+
+ if ( (n=usb_submit_urb(iforce_usb->out, GFP_ATOMIC)) ) {
+- clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
+ dev_warn(&iforce_usb->intf->dev,
+ "usb_submit_urb failed %d\n", n);
++ iforce_clear_xmit_and_wake(iforce);
+ }
+
+ /* The IFORCE_XMIT_RUNNING bit is not cleared here. That's intended.
+@@ -175,15 +175,15 @@ static void iforce_usb_out(struct urb *urb)
+ struct iforce *iforce = &iforce_usb->iforce;
+
+ if (urb->status) {
+- clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
+ dev_dbg(&iforce_usb->intf->dev, "urb->status %d, exiting\n",
+ urb->status);
++ iforce_clear_xmit_and_wake(iforce);
+ return;
+ }
+
+ __iforce_usb_xmit(iforce);
+
+- wake_up(&iforce->wait);
++ wake_up_all(&iforce->wait);
+ }
+
+ static int iforce_usb_probe(struct usb_interface *intf,
+diff --git a/drivers/input/joystick/iforce/iforce.h b/drivers/input/joystick/iforce/iforce.h
+index 6aa761ebbdf77..9ccb9107ccbef 100644
+--- a/drivers/input/joystick/iforce/iforce.h
++++ b/drivers/input/joystick/iforce/iforce.h
+@@ -119,6 +119,12 @@ static inline int iforce_get_id_packet(struct iforce *iforce, u8 id,
+ response_data, response_len);
+ }
+
++static inline void iforce_clear_xmit_and_wake(struct iforce *iforce)
++{
++ clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
++ wake_up_all(&iforce->wait);
++}
++
+ /* Public functions */
+ /* iforce-main.c */
+ int iforce_init_device(struct device *parent, u16 bustype,
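
The iforce changes above all target one lost-wakeup race: code sleeping on
iforce->wait for the transmitter to go idle tests IFORCE_XMIT_RUNNING, but
several completion and error paths cleared the bit without waking anyone, and
the paths that did wake used wake_up(), which releases only a single waiter.
The new helper pairs the clear with a wake_up_all() at every "transmitter went
idle" site, including the error paths. The sleeping side then follows the
standard wait_event pattern (a sketch; the real wait sites live in the parts
of the driver not shown in this diff):

	/* Works only because the writer does clear_bit() followed by
	 * wake_up_all() on the same waitqueue, which is exactly what
	 * iforce_clear_xmit_and_wake() guarantees. */
	wait_event_interruptible(iforce->wait,
		!test_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags));
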
+diff --git a/drivers/input/misc/rk805-pwrkey.c b/drivers/input/misc/rk805-pwrkey.c
+index 3fb64dbda1a21..76873aa005b41 100644
+--- a/drivers/input/misc/rk805-pwrkey.c
++++ b/drivers/input/misc/rk805-pwrkey.c
+@@ -98,6 +98,7 @@ static struct platform_driver rk805_pwrkey_driver = {
+ };
+ module_platform_driver(rk805_pwrkey_driver);
+
++MODULE_ALIAS("platform:rk805-pwrkey");
+ MODULE_AUTHOR("Joseph Chen <chenjh@rock-chips.com>");
+ MODULE_DESCRIPTION("RK805 PMIC Power Key driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 0834d5f866fd8..39d2b03e26317 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1416,42 +1416,37 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ {
+ int ret;
+ struct device *dev = ir->dev;
+- char *data;
+-
+- data = kzalloc(USB_CTRL_MSG_SZ, GFP_KERNEL);
+- if (!data) {
+- dev_err(dev, "%s: memory allocation failed!", __func__);
+- return;
+- }
++ char data[USB_CTRL_MSG_SZ];
+
+ /*
+ * This is a strange one. Windows issues a set address to the device
+ * on the receive control pipe and expect a certain value pair back
+ */
+- ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
+- USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
+- data, USB_CTRL_MSG_SZ, 3000);
++ ret = usb_control_msg_recv(ir->usbdev, 0, USB_REQ_SET_ADDRESS,
++ USB_DIR_IN | USB_TYPE_VENDOR,
++ 0, 0, data, USB_CTRL_MSG_SZ, 3000,
++ GFP_KERNEL);
+ dev_dbg(dev, "set address - ret = %d", ret);
+ dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
+ data[0], data[1]);
+
+ /* set feature: bit rate 38400 bps */
+- ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+- USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+- 0xc04e, 0x0000, NULL, 0, 3000);
++ ret = usb_control_msg_send(ir->usbdev, 0,
++ USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
++ 0xc04e, 0x0000, NULL, 0, 3000, GFP_KERNEL);
+
+ dev_dbg(dev, "set feature - ret = %d", ret);
+
+ /* bRequest 4: set char length to 8 bits */
+- ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+- 4, USB_TYPE_VENDOR,
+- 0x0808, 0x0000, NULL, 0, 3000);
++ ret = usb_control_msg_send(ir->usbdev, 0,
++ 4, USB_TYPE_VENDOR,
++ 0x0808, 0x0000, NULL, 0, 3000, GFP_KERNEL);
+ dev_dbg(dev, "set char length - retB = %d", ret);
+
+ /* bRequest 2: set handshaking to use DTR/DSR */
+- ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+- 2, USB_TYPE_VENDOR,
+- 0x0000, 0x0100, NULL, 0, 3000);
++ ret = usb_control_msg_send(ir->usbdev, 0,
++ 2, USB_TYPE_VENDOR,
++ 0x0000, 0x0100, NULL, 0, 3000, GFP_KERNEL);
+ dev_dbg(dev, "set handshake - retC = %d", ret);
+
+ /* device resume */
+@@ -1459,8 +1454,6 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+
+ /* get hw/sw revision? */
+ mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
+-
+- kfree(data);
+ }
+
+ static void mceusb_gen2_init(struct mceusb_dev *ir)
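
Besides dropping the kzalloc(), the conversion above moves mceusb onto the
usb_control_msg_recv()/usb_control_msg_send() wrappers. These accept on-stack
buffers because they internally bounce the data through a freshly allocated
DMA-safe buffer, and usb_control_msg_recv() additionally turns short reads
into errors, so callers get a simple 0-or-negative result. The general shape
for a driver that previously open-coded usb_rcvctrlpipe() (REQUEST and udev
are placeholders):

	u8 buf[2];	/* stack memory is fine with the _recv/_send wrappers */
	int ret;

	ret = usb_control_msg_recv(udev, 0 /* endpoint */, REQUEST,
				   USB_DIR_IN | USB_TYPE_VENDOR,
				   0 /* value */, 0 /* index */,
				   buf, sizeof(buf), 3000 /* ms */, GFP_KERNEL);
	if (ret)	/* 0 on success; short transfers come back as errors */
		return ret;
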
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 93ebd174d8487..6e312ac856686 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1943,7 +1943,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
+ of_property_read_u32(dev->of_node, "qcom,nsessions", &sessions);
+
+ spin_lock_irqsave(&cctx->lock, flags);
+- sess = &cctx->session[cctx->sesscount];
++ if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) {
++ dev_err(&pdev->dev, "too many sessions\n");
++ spin_unlock_irqrestore(&cctx->lock, flags);
++ return -ENOSPC;
++ }
++ sess = &cctx->session[cctx->sesscount++];
+ sess->used = false;
+ sess->valid = true;
+ sess->dev = dev;
+@@ -1956,13 +1961,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
+ struct fastrpc_session_ctx *dup_sess;
+
+ for (i = 1; i < sessions; i++) {
+- if (cctx->sesscount++ >= FASTRPC_MAX_SESSIONS)
++ if (cctx->sesscount >= FASTRPC_MAX_SESSIONS)
+ break;
+- dup_sess = &cctx->session[cctx->sesscount];
++ dup_sess = &cctx->session[cctx->sesscount++];
+ memcpy(dup_sess, sess, sizeof(*dup_sess));
+ }
+ }
+- cctx->sesscount++;
+ spin_unlock_irqrestore(&cctx->lock, flags);
+ rc = dma_set_mask(dev, DMA_BIT_MASK(32));
+ if (rc) {
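
Two related off-by-one hazards are closed in fastrpc_cb_probe(): the session
slot was taken from cctx->session[] before any bounds check, so firmware
describing more than FASTRPC_MAX_SESSIONS contexts wrote past the array, and
sesscount was bumped in the loop condition even when no slot was actually
consumed. The corrected shape (bounds check first, post-increment only when a
slot is really claimed) is the standard pattern for filling a fixed-size slot
table under a lock, condensed from the hunk above:

	spin_lock_irqsave(&cctx->lock, flags);
	if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) {
		spin_unlock_irqrestore(&cctx->lock, flags);
		return -ENOSPC;
	}
	sess = &cctx->session[cctx->sesscount++];	/* claim exactly one slot */
	/* initialize sess while still holding the lock, as the real code does */
	spin_unlock_irqrestore(&cctx->lock, flags);
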
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index c5f1df6ce4c0a..5e4e2d2182d91 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -949,15 +949,16 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
+
+ /* Erase init depends on CSD and SSR */
+ mmc_init_erase(card);
+-
+- /*
+- * Fetch switch information from card.
+- */
+- err = mmc_read_switch(card);
+- if (err)
+- return err;
+ }
+
++ /*
++ * Fetch switch information from card. Note, sd3_bus_mode can change if
++ * voltage switch outcome changes, so do this always.
++ */
++ err = mmc_read_switch(card);
++ if (err)
++ return err;
++
+ /*
+ * For SPI, enable CRC as appropriate.
+ * This CRC enable is located AFTER the reading of the
+@@ -1480,26 +1481,15 @@ retry:
+ if (!v18_fixup_failed && !mmc_host_is_spi(host) && mmc_host_uhs(host) &&
+ mmc_sd_card_using_v18(card) &&
+ host->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_180) {
+- /*
+- * Re-read switch information in case it has changed since
+- * oldcard was initialized.
+- */
+- if (oldcard) {
+- err = mmc_read_switch(card);
+- if (err)
+- goto free_card;
+- }
+- if (mmc_sd_card_using_v18(card)) {
+- if (mmc_host_set_uhs_voltage(host) ||
+- mmc_sd_init_uhs_card(card)) {
+- v18_fixup_failed = true;
+- mmc_power_cycle(host, ocr);
+- if (!oldcard)
+- mmc_remove_card(card);
+- goto retry;
+- }
+- goto done;
++ if (mmc_host_set_uhs_voltage(host) ||
++ mmc_sd_init_uhs_card(card)) {
++ v18_fixup_failed = true;
++ mmc_power_cycle(host, ocr);
++ if (!oldcard)
++ mmc_remove_card(card);
++ goto retry;
+ }
++ goto cont;
+ }
+
+ /* Initialization sequence for UHS-I cards */
+@@ -1534,7 +1524,7 @@ retry:
+ mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
+ }
+ }
+-
++cont:
+ if (!oldcard) {
+ /* Read/parse the extension registers. */
+ err = sd_read_ext_regs(card);
+@@ -1566,7 +1556,7 @@ retry:
+ err = -EINVAL;
+ goto free_card;
+ }
+-done:
++
+ host->card = card;
+ return 0;
+
+diff --git a/drivers/net/dsa/xrs700x/xrs700x.c b/drivers/net/dsa/xrs700x/xrs700x.c
+index 3887ed33c5fe2..fa622639d6401 100644
+--- a/drivers/net/dsa/xrs700x/xrs700x.c
++++ b/drivers/net/dsa/xrs700x/xrs700x.c
+@@ -109,6 +109,7 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
+ {
+ struct xrs700x_port *p = &priv->ports[port];
+ struct rtnl_link_stats64 stats;
++ unsigned long flags;
+ int i;
+
+ memset(&stats, 0, sizeof(stats));
+@@ -138,9 +139,9 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
+ */
+ stats.rx_packets += stats.multicast;
+
+- u64_stats_update_begin(&p->syncp);
++ flags = u64_stats_update_begin_irqsave(&p->syncp);
+ p->stats64 = stats;
+- u64_stats_update_end(&p->syncp);
++ u64_stats_update_end_irqrestore(&p->syncp, flags);
+
+ mutex_unlock(&p->mib_mutex);
+ }
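
This hunk and the matching ones in the network drivers below are the same
mechanical conversion: on 32-bit SMP kernels the u64_stats seqcount writer
must not be interrupted by a reader on the same CPU, or the reader spins
forever on an odd sequence count. Wherever the writer can run from (soft)irq
context, the _irqsave/_irq variants are required. The paired pattern, reduced
to a sketch:

	/* writer side, may run from irq context on 32-bit SMP */
	unsigned long flags;

	flags = u64_stats_update_begin_irqsave(&p->syncp);
	p->stats64 = stats;
	u64_stats_update_end_irqrestore(&p->syncp, flags);

	/* reader side: retry until an even, unchanged sequence is seen */
	struct rtnl_link_stats64 copy;
	unsigned int start;

	do {
		start = u64_stats_fetch_begin_irq(&p->syncp);
		copy = p->stats64;
	} while (u64_stats_fetch_retry_irq(&p->syncp, start));

On 64-bit kernels the seqcount compiles away entirely, so the change costs
nothing where the problem cannot occur.
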
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 9e6de2f968fa3..6dae768671e3d 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1919,7 +1919,7 @@ static void gmac_get_stats64(struct net_device *netdev,
+
+ /* Racing with RX NAPI */
+ do {
+- start = u64_stats_fetch_begin(&port->rx_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
+
+ stats->rx_packets = port->stats.rx_packets;
+ stats->rx_bytes = port->stats.rx_bytes;
+@@ -1931,11 +1931,11 @@ static void gmac_get_stats64(struct net_device *netdev,
+ stats->rx_crc_errors = port->stats.rx_crc_errors;
+ stats->rx_frame_errors = port->stats.rx_frame_errors;
+
+- } while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
+
+ /* Racing with MIB and TX completion interrupts */
+ do {
+- start = u64_stats_fetch_begin(&port->ir_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
+
+ stats->tx_errors = port->stats.tx_errors;
+ stats->tx_packets = port->stats.tx_packets;
+@@ -1945,15 +1945,15 @@ static void gmac_get_stats64(struct net_device *netdev,
+ stats->rx_missed_errors = port->stats.rx_missed_errors;
+ stats->rx_fifo_errors = port->stats.rx_fifo_errors;
+
+- } while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
+
+ /* Racing with hard_start_xmit */
+ do {
+- start = u64_stats_fetch_begin(&port->tx_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
+
+ stats->tx_dropped = port->stats.tx_dropped;
+
+- } while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
+
+ stats->rx_dropped += stats->rx_missed_errors;
+ }
+@@ -2031,18 +2031,18 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ /* Racing with MIB interrupt */
+ do {
+ p = values;
+- start = u64_stats_fetch_begin(&port->ir_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
+
+ for (i = 0; i < RX_STATS_NUM; i++)
+ *p++ = port->hw_stats[i];
+
+- } while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
+ values = p;
+
+ /* Racing with RX NAPI */
+ do {
+ p = values;
+- start = u64_stats_fetch_begin(&port->rx_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
+
+ for (i = 0; i < RX_STATUS_NUM; i++)
+ *p++ = port->rx_stats[i];
+@@ -2050,13 +2050,13 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ *p++ = port->rx_csum_stats[i];
+ *p++ = port->rx_napi_exits;
+
+- } while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
+ values = p;
+
+ /* Racing with TX start_xmit */
+ do {
+ p = values;
+- start = u64_stats_fetch_begin(&port->tx_stats_syncp);
++ start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
+
+ for (i = 0; i < TX_MAX_FRAGS; i++) {
+ *values++ = port->tx_frag_stats[i];
+@@ -2065,7 +2065,7 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
+ *values++ = port->tx_frags_linearized;
+ *values++ = port->tx_hw_csummed;
+
+- } while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
++ } while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
+ }
+
+ static int gmac_get_ksettings(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/fungible/funeth/funeth_txrx.h b/drivers/net/ethernet/fungible/funeth/funeth_txrx.h
+index 8708e2895946d..6b125ed04bbad 100644
+--- a/drivers/net/ethernet/fungible/funeth/funeth_txrx.h
++++ b/drivers/net/ethernet/fungible/funeth/funeth_txrx.h
+@@ -205,9 +205,9 @@ struct funeth_rxq {
+
+ #define FUN_QSTAT_READ(q, seq, stats_copy) \
+ do { \
+- seq = u64_stats_fetch_begin(&(q)->syncp); \
++ seq = u64_stats_fetch_begin_irq(&(q)->syncp); \
+ stats_copy = (q)->stats; \
+- } while (u64_stats_fetch_retry(&(q)->syncp, (seq)))
++ } while (u64_stats_fetch_retry_irq(&(q)->syncp, (seq)))
+
+ #define FUN_INT_NAME_LEN (IFNAMSIZ + 16)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index 50b384910c839..7b9a2d9d96243 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -177,14 +177,14 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ struct gve_rx_ring *rx = &priv->rx[ring];
+
+ start =
+- u64_stats_fetch_begin(&priv->rx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ tmp_rx_pkts = rx->rpackets;
+ tmp_rx_bytes = rx->rbytes;
+ tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
+ tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
+ tmp_rx_desc_err_dropped_pkt =
+ rx->rx_desc_err_dropped_pkt;
+- } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ start));
+ rx_pkts += tmp_rx_pkts;
+ rx_bytes += tmp_rx_bytes;
+@@ -198,10 +198,10 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ if (priv->tx) {
+ do {
+ start =
+- u64_stats_fetch_begin(&priv->tx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ tmp_tx_pkts = priv->tx[ring].pkt_done;
+ tmp_tx_bytes = priv->tx[ring].bytes_done;
+- } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ start));
+ tx_pkts += tmp_tx_pkts;
+ tx_bytes += tmp_tx_bytes;
+@@ -259,13 +259,13 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ data[i++] = rx->fill_cnt - rx->cnt;
+ do {
+ start =
+- u64_stats_fetch_begin(&priv->rx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ tmp_rx_bytes = rx->rbytes;
+ tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
+ tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
+ tmp_rx_desc_err_dropped_pkt =
+ rx->rx_desc_err_dropped_pkt;
+- } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ start));
+ data[i++] = tmp_rx_bytes;
+ data[i++] = rx->rx_cont_packet_cnt;
+@@ -331,9 +331,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ }
+ do {
+ start =
+- u64_stats_fetch_begin(&priv->tx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ tmp_tx_bytes = tx->bytes_done;
+- } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ start));
+ data[i++] = tmp_tx_bytes;
+ data[i++] = tx->wake_queue;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 6cafee55efc32..044db3ebb071c 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -51,10 +51,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
+ do {
+ start =
+- u64_stats_fetch_begin(&priv->rx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
+ packets = priv->rx[ring].rpackets;
+ bytes = priv->rx[ring].rbytes;
+- } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
+ start));
+ s->rx_packets += packets;
+ s->rx_bytes += bytes;
+@@ -64,10 +64,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ do {
+ start =
+- u64_stats_fetch_begin(&priv->tx[ring].statss);
++ u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
+ packets = priv->tx[ring].pkt_done;
+ bytes = priv->tx[ring].bytes_done;
+- } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
++ } while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
+ start));
+ s->tx_packets += packets;
+ s->tx_bytes += bytes;
+@@ -1274,9 +1274,9 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ }
+
+ do {
+- start = u64_stats_fetch_begin(&priv->tx[idx].statss);
++ start = u64_stats_fetch_begin_irq(&priv->tx[idx].statss);
+ tx_bytes = priv->tx[idx].bytes_done;
+- } while (u64_stats_fetch_retry(&priv->tx[idx].statss, start));
++ } while (u64_stats_fetch_retry_irq(&priv->tx[idx].statss, start));
+ stats[stats_idx++] = (struct stats) {
+ .stat_name = cpu_to_be32(TX_WAKE_CNT),
+ .value = cpu_to_be64(priv->tx[idx].wake_queue),
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+index a866bea651103..e5828a658caf4 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+@@ -74,14 +74,14 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
+ unsigned int start;
+
+ do {
+- start = u64_stats_fetch_begin(&rxq_stats->syncp);
++ start = u64_stats_fetch_begin_irq(&rxq_stats->syncp);
+ stats->pkts = rxq_stats->pkts;
+ stats->bytes = rxq_stats->bytes;
+ stats->errors = rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
+- } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&rxq_stats->syncp, start));
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 5051cdff2384b..3b6c7b5857376 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -99,14 +99,14 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
+ unsigned int start;
+
+ do {
+- start = u64_stats_fetch_begin(&txq_stats->syncp);
++ start = u64_stats_fetch_begin_irq(&txq_stats->syncp);
+ stats->pkts = txq_stats->pkts;
+ stats->bytes = txq_stats->bytes;
+ stats->tx_busy = txq_stats->tx_busy;
+ stats->tx_wake = txq_stats->tx_wake;
+ stats->tx_dropped = txq_stats->tx_dropped;
+ stats->big_frags_pkts = txq_stats->big_frags_pkts;
+- } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&txq_stats->syncp, start));
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
+index 5fdf9b7179f55..5a1027b072155 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
+@@ -75,6 +75,7 @@ struct mlxbf_gige {
+ struct net_device *netdev;
+ struct platform_device *pdev;
+ void __iomem *mdio_io;
++ void __iomem *clk_io;
+ struct mii_bus *mdiobus;
+ spinlock_t lock; /* for packet processing indices */
+ u16 rx_q_entries;
+@@ -137,7 +138,8 @@ enum mlxbf_gige_res {
+ MLXBF_GIGE_RES_MDIO9,
+ MLXBF_GIGE_RES_GPIO0,
+ MLXBF_GIGE_RES_LLU,
+- MLXBF_GIGE_RES_PLU
++ MLXBF_GIGE_RES_PLU,
++ MLXBF_GIGE_RES_CLK
+ };
+
+ /* Version of register data returned by mlxbf_gige_get_regs() */
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+index 2e6c1b7af0964..85155cd9405c5 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+@@ -22,10 +22,23 @@
+ #include <linux/property.h>
+
+ #include "mlxbf_gige.h"
++#include "mlxbf_gige_regs.h"
+
+ #define MLXBF_GIGE_MDIO_GW_OFFSET 0x0
+ #define MLXBF_GIGE_MDIO_CFG_OFFSET 0x4
+
++#define MLXBF_GIGE_MDIO_FREQ_REFERENCE 156250000ULL
++#define MLXBF_GIGE_MDIO_COREPLL_CONST 16384ULL
++#define MLXBF_GIGE_MDC_CLK_NS 400
++#define MLXBF_GIGE_MDIO_PLL_I1CLK_REG1 0x4
++#define MLXBF_GIGE_MDIO_PLL_I1CLK_REG2 0x8
++#define MLXBF_GIGE_MDIO_CORE_F_SHIFT 0
++#define MLXBF_GIGE_MDIO_CORE_F_MASK GENMASK(25, 0)
++#define MLXBF_GIGE_MDIO_CORE_R_SHIFT 26
++#define MLXBF_GIGE_MDIO_CORE_R_MASK GENMASK(31, 26)
++#define MLXBF_GIGE_MDIO_CORE_OD_SHIFT 0
++#define MLXBF_GIGE_MDIO_CORE_OD_MASK GENMASK(3, 0)
++
+ /* Support clause 22 */
+ #define MLXBF_GIGE_MDIO_CL22_ST1 0x1
+ #define MLXBF_GIGE_MDIO_CL22_WRITE 0x1
+@@ -50,27 +63,76 @@
+ #define MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(23, 16)
+ #define MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(31, 24)
+
++#define MLXBF_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \
++ FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \
++ FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \
++ FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \
++ FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13))
++
++#define MLXBF_GIGE_BF2_COREPLL_ADDR 0x02800c30
++#define MLXBF_GIGE_BF2_COREPLL_SIZE 0x0000000c
++
++static struct resource corepll_params[] = {
++ [MLXBF_GIGE_VERSION_BF2] = {
++ .start = MLXBF_GIGE_BF2_COREPLL_ADDR,
++ .end = MLXBF_GIGE_BF2_COREPLL_ADDR + MLXBF_GIGE_BF2_COREPLL_SIZE - 1,
++ .name = "COREPLL_RES"
++ },
++};
++
++/* Returns core clock i1clk in Hz */
++static u64 calculate_i1clk(struct mlxbf_gige *priv)
++{
++ u8 core_od, core_r;
++ u64 freq_output;
++ u32 reg1, reg2;
++ u32 core_f;
++
++ reg1 = readl(priv->clk_io + MLXBF_GIGE_MDIO_PLL_I1CLK_REG1);
++ reg2 = readl(priv->clk_io + MLXBF_GIGE_MDIO_PLL_I1CLK_REG2);
++
++ core_f = (reg1 & MLXBF_GIGE_MDIO_CORE_F_MASK) >>
++ MLXBF_GIGE_MDIO_CORE_F_SHIFT;
++ core_r = (reg1 & MLXBF_GIGE_MDIO_CORE_R_MASK) >>
++ MLXBF_GIGE_MDIO_CORE_R_SHIFT;
++ core_od = (reg2 & MLXBF_GIGE_MDIO_CORE_OD_MASK) >>
++ MLXBF_GIGE_MDIO_CORE_OD_SHIFT;
++
++ /* Compute PLL output frequency as follows:
++ *
++ * CORE_F / 16384
++ * freq_output = freq_reference * ----------------------------
++ * (CORE_R + 1) * (CORE_OD + 1)
++ */
++ freq_output = div_u64((MLXBF_GIGE_MDIO_FREQ_REFERENCE * core_f),
++ MLXBF_GIGE_MDIO_COREPLL_CONST);
++ freq_output = div_u64(freq_output, (core_r + 1) * (core_od + 1));
++
++ return freq_output;
++}
++
+ /* Formula for encoding the MDIO period. The encoded value is
+ * passed to the MDIO config register.
+ *
+- * mdc_clk = 2*(val + 1)*i1clk
++ * mdc_clk = 2*(val + 1)*(core clock in sec)
+ *
+- * 400 ns = 2*(val + 1)*(((1/430)*1000) ns)
++ * i1clk is in Hz:
++ * 400 ns = 2*(val + 1)*(1/i1clk)
+ *
+- * val = (((400 * 430 / 1000) / 2) - 1)
++ * val = (((400/10^9) / (1/i1clk) / 2) - 1)
++ * val = (400/2 * i1clk)/10^9 - 1
+ */
+-#define MLXBF_GIGE_I1CLK_MHZ 430
+-#define MLXBF_GIGE_MDC_CLK_NS 400
++static u8 mdio_period_map(struct mlxbf_gige *priv)
++{
++ u8 mdio_period;
++ u64 i1clk;
+
+-#define MLXBF_GIGE_MDIO_PERIOD (((MLXBF_GIGE_MDC_CLK_NS * MLXBF_GIGE_I1CLK_MHZ / 1000) / 2) - 1)
++ i1clk = calculate_i1clk(priv);
+
+-#define MLXBF_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \
+- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \
+- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \
+- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, \
+- MLXBF_GIGE_MDIO_PERIOD) | \
+- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \
+- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13))
++ mdio_period = div_u64((MLXBF_GIGE_MDC_CLK_NS >> 1) * i1clk, 1000000000) - 1;
++
++ return mdio_period;
++}
+
+ static u32 mlxbf_gige_mdio_create_cmd(u16 data, int phy_add,
+ int phy_reg, u32 opcode)
+@@ -124,9 +186,9 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
+ int phy_reg, u16 val)
+ {
+ struct mlxbf_gige *priv = bus->priv;
++ u32 temp;
+ u32 cmd;
+ int ret;
+- u32 temp;
+
+ if (phy_reg & MII_ADDR_C45)
+ return -EOPNOTSUPP;
+@@ -144,18 +206,44 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
+ return ret;
+ }
+
++static void mlxbf_gige_mdio_cfg(struct mlxbf_gige *priv)
++{
++ u8 mdio_period;
++ u32 val;
++
++ mdio_period = mdio_period_map(priv);
++
++ val = MLXBF_GIGE_MDIO_CFG_VAL;
++ val |= FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period);
++ writel(val, priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET);
++}
++
+ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
+ {
+ struct device *dev = &pdev->dev;
++ struct resource *res;
+ int ret;
+
+ priv->mdio_io = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MDIO9);
+ if (IS_ERR(priv->mdio_io))
+ return PTR_ERR(priv->mdio_io);
+
+- /* Configure mdio parameters */
+- writel(MLXBF_GIGE_MDIO_CFG_VAL,
+- priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET);
++ /* The clk resource is shared with other drivers, so we cannot
++ * use devm_platform_ioremap_resource() here.
++ */
++ res = platform_get_resource(pdev, IORESOURCE_MEM, MLXBF_GIGE_RES_CLK);
++ if (!res) {
++ /* For backward compatibility with older ACPI tables, also keep
++ * CLK resource internal to the driver.
++ */
++ res = &corepll_params[MLXBF_GIGE_VERSION_BF2];
++ }
++
++ priv->clk_io = devm_ioremap(dev, res->start, resource_size(res));
++ if (IS_ERR(priv->clk_io))
++ return PTR_ERR(priv->clk_io);
++
++ mlxbf_gige_mdio_cfg(priv);
+
+ priv->mdiobus = devm_mdiobus_alloc(dev);
+ if (!priv->mdiobus) {
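
Instead of hard-coding a 430 MHz i1clk, the MDC divider is now derived from
the live core PLL settings. Working the two formulas through with example
register values (hypothetical, purely for illustration): CORE_F = 90112,
CORE_R = 0, CORE_OD = 15 gives

	i1clk = 156250000 * (90112 / 16384) / ((0 + 1) * (15 + 1))
	      = 156250000 * 5.5 / 16
	      = 53710937 Hz (~53.7 MHz)

and the period encoding then comes out as

	val = (400 / 2) * 53710937 / 10^9 - 1 = 9

so mdc_clk = 2 * (9 + 1) / 53.7 MHz, about 372 ns, just inside the 400 ns
target. The old fixed MLXBF_GIGE_MDIO_PERIOD evaluated to
((400 * 430 / 1000) / 2) - 1 = 85, which is only correct on parts whose PLL
actually runs i1clk at 430 MHz.
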
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
+index 5fb33c9294bf9..7be3a793984d5 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
+@@ -8,6 +8,8 @@
+ #ifndef __MLXBF_GIGE_REGS_H__
+ #define __MLXBF_GIGE_REGS_H__
+
++#define MLXBF_GIGE_VERSION 0x0000
++#define MLXBF_GIGE_VERSION_BF2 0x0
+ #define MLXBF_GIGE_STATUS 0x0010
+ #define MLXBF_GIGE_STATUS_READY BIT(0)
+ #define MLXBF_GIGE_INT_STATUS 0x0028
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index fe663b0ab7086..68d87e61bdc05 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -423,7 +423,8 @@ mlxsw_sp_span_gretap4_route(const struct net_device *to_dev,
+
+ parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
+ ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
+- 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0);
++ 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0,
++ 0);
+
+ rt = ip_route_output_key(tun->net, &fl4);
+ if (IS_ERR(rt))
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
+index 6dea7f8c14814..51f8a08163777 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
+@@ -425,7 +425,8 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx)
+ lan966x_ifh_get_src_port(skb->data, &src_port);
+ lan966x_ifh_get_timestamp(skb->data, &timestamp);
+
+- WARN_ON(src_port >= lan966x->num_phys_ports);
++ if (WARN_ON(src_port >= lan966x->num_phys_ports))
++ goto free_skb;
+
+ skb->dev = lan966x->ports[src_port]->dev;
+ skb_pull(skb, IFH_LEN * sizeof(u32));
+@@ -449,6 +450,8 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx)
+
+ return skb;
+
++free_skb:
++ kfree_skb(skb);
+ unmap_page:
+ dma_unmap_page(lan966x->dev, (dma_addr_t)db->dataptr,
+ FDMA_DCB_STATUS_BLOCKL(db->status),
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+index 304f84aadc36b..21844beba72df 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+@@ -113,6 +113,8 @@ static void sparx5_xtr_grp(struct sparx5 *sparx5, u8 grp, bool byte_swap)
+ /* This assumes STATUS_WORD_POS == 1, Status
+ * just after last data
+ */
++ if (!byte_swap)
++ val = ntohl((__force __be32)val);
+ byte_cnt -= (4 - XTR_VALID_BYTES(val));
+ eof_flag = true;
+ break;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
+index 3206ba83b1aaa..de2ef5bf8c694 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
+@@ -127,10 +127,11 @@ static int nfp_policer_validate(const struct flow_action *action,
+ return -EOPNOTSUPP;
+ }
+
+- if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
++ if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE &&
++ act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
+ act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
+ NL_SET_ERR_MSG_MOD(extack,
+- "Offload not supported when conform action is not pipe or ok");
++ "Offload not supported when conform action is not continue, pipe or ok");
+ return -EOPNOTSUPP;
+ }
+
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index 4e56a99087fab..32d46f07ea851 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1629,21 +1629,21 @@ static void nfp_net_stat64(struct net_device *netdev,
+ unsigned int start;
+
+ do {
+- start = u64_stats_fetch_begin(&r_vec->rx_sync);
++ start = u64_stats_fetch_begin_irq(&r_vec->rx_sync);
+ data[0] = r_vec->rx_pkts;
+ data[1] = r_vec->rx_bytes;
+ data[2] = r_vec->rx_drops;
+- } while (u64_stats_fetch_retry(&r_vec->rx_sync, start));
++ } while (u64_stats_fetch_retry_irq(&r_vec->rx_sync, start));
+ stats->rx_packets += data[0];
+ stats->rx_bytes += data[1];
+ stats->rx_dropped += data[2];
+
+ do {
+- start = u64_stats_fetch_begin(&r_vec->tx_sync);
++ start = u64_stats_fetch_begin_irq(&r_vec->tx_sync);
+ data[0] = r_vec->tx_pkts;
+ data[1] = r_vec->tx_bytes;
+ data[2] = r_vec->tx_errors;
+- } while (u64_stats_fetch_retry(&r_vec->tx_sync, start));
++ } while (u64_stats_fetch_retry_irq(&r_vec->tx_sync, start));
+ stats->tx_packets += data[0];
+ stats->tx_bytes += data[1];
+ stats->tx_errors += data[2];
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index e6ee45afd80c7..2d7d30ec54301 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -494,7 +494,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ unsigned int start;
+
+ do {
+- start = u64_stats_fetch_begin(&nn->r_vecs[i].rx_sync);
++ start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].rx_sync);
+ data[0] = nn->r_vecs[i].rx_pkts;
+ tmp[0] = nn->r_vecs[i].hw_csum_rx_ok;
+ tmp[1] = nn->r_vecs[i].hw_csum_rx_inner_ok;
+@@ -502,10 +502,10 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ tmp[3] = nn->r_vecs[i].hw_csum_rx_error;
+ tmp[4] = nn->r_vecs[i].rx_replace_buf_alloc_fail;
+ tmp[5] = nn->r_vecs[i].hw_tls_rx;
+- } while (u64_stats_fetch_retry(&nn->r_vecs[i].rx_sync, start));
++ } while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].rx_sync, start));
+
+ do {
+- start = u64_stats_fetch_begin(&nn->r_vecs[i].tx_sync);
++ start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].tx_sync);
+ data[1] = nn->r_vecs[i].tx_pkts;
+ data[2] = nn->r_vecs[i].tx_busy;
+ tmp[6] = nn->r_vecs[i].hw_csum_tx;
+@@ -515,7 +515,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
+ tmp[10] = nn->r_vecs[i].hw_tls_tx;
+ tmp[11] = nn->r_vecs[i].tls_tx_fallback;
+ tmp[12] = nn->r_vecs[i].tls_tx_no_fallback;
+- } while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
++ } while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].tx_sync, start));
+
+ data += NN_RVEC_PER_Q_STATS;
+
+diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+index bc70c6abd6a5b..58cf7cc54f408 100644
+--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
++++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+@@ -1273,7 +1273,7 @@ static int ofdpa_port_ipv4_neigh(struct ofdpa_port *ofdpa_port,
+ bool removing;
+ int err = 0;
+
+- entry = kzalloc(sizeof(*entry), GFP_KERNEL);
++ entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+ if (!entry)
+ return -ENOMEM;
+
+diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
+index 3bf20211cceb4..3829c2805b16c 100644
+--- a/drivers/net/ethernet/smsc/smsc911x.c
++++ b/drivers/net/ethernet/smsc/smsc911x.c
+@@ -1037,6 +1037,8 @@ static int smsc911x_mii_probe(struct net_device *dev)
+ return ret;
+ }
+
++ /* Indicate that the MAC is responsible for managing PHY PM */
++ phydev->mac_managed_pm = true;
+ phy_attached_info(phydev);
+
+ phy_set_max_speed(phydev, SPEED_100);
+@@ -2587,6 +2589,8 @@ static int smsc911x_suspend(struct device *dev)
+ if (netif_running(ndev)) {
+ netif_stop_queue(ndev);
+ netif_device_detach(ndev);
++ if (!device_may_wakeup(dev))
++ phy_stop(ndev->phydev);
+ }
+
+ /* enable wake on LAN, energy detection and the external PME
+@@ -2628,6 +2632,8 @@ static int smsc911x_resume(struct device *dev)
+ if (netif_running(ndev)) {
+ netif_device_attach(ndev);
+ netif_start_queue(ndev);
++ if (!device_may_wakeup(dev))
++ phy_start(ndev->phydev);
+ }
+
+ return 0;
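
The three smsc911x hunks cooperate: setting phydev->mac_managed_pm tells the
MDIO bus PM callbacks not to suspend/resume the PHY behind the MAC's back,
and in exchange the MAC driver stops and restarts the PHY itself whenever
wake-on-LAN is not armed. Reduced to its skeleton:

	/* probe: the MAC claims responsibility for PHY power management */
	phydev->mac_managed_pm = true;

	/* suspend */
	if (netif_running(ndev) && !device_may_wakeup(dev))
		phy_stop(ndev->phydev);

	/* resume */
	if (netif_running(ndev) && !device_may_wakeup(dev))
		phy_start(ndev->phydev);

Without the flag, mdio_bus_phy_suspend() and the MAC's own handling could
both act on the PHY and resume it into an inconsistent state.
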
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 6afdf1622944e..5cf218c674a5a 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -1310,10 +1310,11 @@ static void adf7242_remove(struct spi_device *spi)
+
+ debugfs_remove_recursive(lp->debugfs_root);
+
++ ieee802154_unregister_hw(lp->hw);
++
+ cancel_delayed_work_sync(&lp->work);
+ destroy_workqueue(lp->wqueue);
+
+- ieee802154_unregister_hw(lp->hw);
+ mutex_destroy(&lp->bmux);
+ ieee802154_free_hw(lp->hw);
+ }
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index e470e3398abc2..9a1a5b2036240 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -67,10 +67,10 @@ nsim_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
+ unsigned int start;
+
+ do {
+- start = u64_stats_fetch_begin(&ns->syncp);
++ start = u64_stats_fetch_begin_irq(&ns->syncp);
+ stats->tx_bytes = ns->tx_bytes;
+ stats->tx_packets = ns->tx_packets;
+- } while (u64_stats_fetch_retry(&ns->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&ns->syncp, start));
+ }
+
+ static int
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 22139901f01c7..34483a4bd688a 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -2838,12 +2838,18 @@ static int lan8814_config_init(struct phy_device *phydev)
+ return 0;
+ }
+
++/* There is deliberately no 'lan8814_take_coma_mode' counterpart called on
++ * suspend: the coma-mode GPIO line can be shared, so putting one PHY back
++ * into coma mode would drag all the other PHYs along with it, which is
++ * wrong.
++ */
+ static int lan8814_release_coma_mode(struct phy_device *phydev)
+ {
+ struct gpio_desc *gpiod;
+
+ gpiod = devm_gpiod_get_optional(&phydev->mdio.dev, "coma-mode",
+- GPIOD_OUT_HIGH_OPEN_DRAIN);
++ GPIOD_OUT_HIGH_OPEN_DRAIN |
++ GPIOD_FLAGS_BIT_NONEXCLUSIVE);
+ if (IS_ERR(gpiod))
+ return PTR_ERR(gpiod);
+
+diff --git a/drivers/peci/controller/peci-aspeed.c b/drivers/peci/controller/peci-aspeed.c
+index 1925ddc13f002..731c5d8f75c66 100644
+--- a/drivers/peci/controller/peci-aspeed.c
++++ b/drivers/peci/controller/peci-aspeed.c
+@@ -523,7 +523,7 @@ static int aspeed_peci_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->base);
+
+ priv->irq = platform_get_irq(pdev, 0);
+- if (!priv->irq)
++ if (priv->irq < 0)
+ return priv->irq;
+
+ ret = devm_request_irq(&pdev->dev, priv->irq, aspeed_peci_irq_handler,
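
The peci-aspeed fix above reflects the platform_get_irq() contract: it
returns a negative errno on failure (and never 0 on modern kernels), so
testing for a zero return misses every error, including -EPROBE_DEFER.
Sketch:

    #include <linux/platform_device.h>

    static int demo_probe_irq(struct platform_device *pdev, int *irq_out)
    {
            int irq = platform_get_irq(pdev, 0);

            /* Negative means failure; propagate it (e.g. -EPROBE_DEFER) */
            if (irq < 0)
                    return irq;

            *irq_out = irq;
            return 0;
    }
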
+diff --git a/drivers/platform/mellanox/mlxreg-lc.c b/drivers/platform/mellanox/mlxreg-lc.c
+index 55834ccb4ac7c..e578c7bc060bb 100644
+--- a/drivers/platform/mellanox/mlxreg-lc.c
++++ b/drivers/platform/mellanox/mlxreg-lc.c
+@@ -460,8 +460,6 @@ static int mlxreg_lc_power_on_off(struct mlxreg_lc *mlxreg_lc, u8 action)
+ u32 regval;
+ int err;
+
+- mutex_lock(&mlxreg_lc->lock);
+-
+ err = regmap_read(mlxreg_lc->par_regmap, mlxreg_lc->data->reg_pwr, &regval);
+ if (err)
+ goto regmap_read_fail;
+@@ -474,7 +472,6 @@ static int mlxreg_lc_power_on_off(struct mlxreg_lc *mlxreg_lc, u8 action)
+ err = regmap_write(mlxreg_lc->par_regmap, mlxreg_lc->data->reg_pwr, regval);
+
+ regmap_read_fail:
+- mutex_unlock(&mlxreg_lc->lock);
+ return err;
+ }
+
+@@ -491,8 +488,6 @@ static int mlxreg_lc_enable_disable(struct mlxreg_lc *mlxreg_lc, bool action)
+ * line card which has already been enabled. Disabling does not affect the disabled line
+ * card.
+ */
+- mutex_lock(&mlxreg_lc->lock);
+-
+ err = regmap_read(mlxreg_lc->par_regmap, mlxreg_lc->data->reg_ena, &regval);
+ if (err)
+ goto regmap_read_fail;
+@@ -505,7 +500,6 @@ static int mlxreg_lc_enable_disable(struct mlxreg_lc *mlxreg_lc, bool action)
+ err = regmap_write(mlxreg_lc->par_regmap, mlxreg_lc->data->reg_ena, regval);
+
+ regmap_read_fail:
+- mutex_unlock(&mlxreg_lc->lock);
+ return err;
+ }
+
+@@ -537,6 +531,15 @@ mlxreg_lc_sn4800_c16_config_init(struct mlxreg_lc *mlxreg_lc, void *regmap,
+
+ static void
+ mlxreg_lc_state_update(struct mlxreg_lc *mlxreg_lc, enum mlxreg_lc_state state, u8 action)
++{
++ if (action)
++ mlxreg_lc->state |= state;
++ else
++ mlxreg_lc->state &= ~state;
++}
++
++static void
++mlxreg_lc_state_update_locked(struct mlxreg_lc *mlxreg_lc, enum mlxreg_lc_state state, u8 action)
+ {
+ mutex_lock(&mlxreg_lc->lock);
+
+@@ -560,8 +563,11 @@ static int mlxreg_lc_event_handler(void *handle, enum mlxreg_hotplug_kind kind,
+ dev_info(mlxreg_lc->dev, "linecard#%d state %d event kind %d action %d\n",
+ mlxreg_lc->data->slot, mlxreg_lc->state, kind, action);
+
+- if (!(mlxreg_lc->state & MLXREG_LC_INITIALIZED))
++ mutex_lock(&mlxreg_lc->lock);
++ if (!(mlxreg_lc->state & MLXREG_LC_INITIALIZED)) {
++ mutex_unlock(&mlxreg_lc->lock);
+ return 0;
++ }
+
+ switch (kind) {
+ case MLXREG_HOTPLUG_LC_SYNCED:
+@@ -574,7 +580,7 @@ static int mlxreg_lc_event_handler(void *handle, enum mlxreg_hotplug_kind kind,
+ if (!(mlxreg_lc->state & MLXREG_LC_POWERED) && action) {
+ err = mlxreg_lc_power_on_off(mlxreg_lc, 1);
+ if (err)
+- return err;
++ goto mlxreg_lc_power_on_off_fail;
+ }
+ /* In case line card is configured - enable it. */
+ if (mlxreg_lc->state & MLXREG_LC_CONFIGURED && action)
+@@ -588,12 +594,13 @@ static int mlxreg_lc_event_handler(void *handle, enum mlxreg_hotplug_kind kind,
+ /* In case line card is configured - enable it. */
+ if (mlxreg_lc->state & MLXREG_LC_CONFIGURED)
+ err = mlxreg_lc_enable_disable(mlxreg_lc, 1);
++ mutex_unlock(&mlxreg_lc->lock);
+ return err;
+ }
+ err = mlxreg_lc_create_static_devices(mlxreg_lc, mlxreg_lc->main_devs,
+ mlxreg_lc->main_devs_num);
+ if (err)
+- return err;
++ goto mlxreg_lc_create_static_devices_fail;
+
+ /* In case line card is already in ready state - enable it. */
+ if (mlxreg_lc->state & MLXREG_LC_CONFIGURED)
+@@ -620,6 +627,10 @@ static int mlxreg_lc_event_handler(void *handle, enum mlxreg_hotplug_kind kind,
+ break;
+ }
+
++mlxreg_lc_power_on_off_fail:
++mlxreg_lc_create_static_devices_fail:
++ mutex_unlock(&mlxreg_lc->lock);
++
+ return err;
+ }
+
+@@ -665,7 +676,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent,
+ if (err)
+ goto mlxreg_lc_create_static_devices_failed;
+
+- mlxreg_lc_state_update(mlxreg_lc, MLXREG_LC_POWERED, 1);
++ mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_POWERED, 1);
+ }
+
+ /* Verify if line card is synchronized. */
+@@ -676,7 +687,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent,
+ /* Power on line card if necessary. */
+ if (regval & mlxreg_lc->data->mask) {
+ mlxreg_lc->state |= MLXREG_LC_SYNCED;
+- mlxreg_lc_state_update(mlxreg_lc, MLXREG_LC_SYNCED, 1);
++ mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_SYNCED, 1);
+ if (mlxreg_lc->state & ~MLXREG_LC_POWERED) {
+ err = mlxreg_lc_power_on_off(mlxreg_lc, 1);
+ if (err)
+@@ -684,7 +695,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent,
+ }
+ }
+
+- mlxreg_lc_state_update(mlxreg_lc, MLXREG_LC_INITIALIZED, 1);
++ mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_INITIALIZED, 1);
+
+ return 0;
+
+@@ -863,7 +874,6 @@ static int mlxreg_lc_probe(struct platform_device *pdev)
+ if (err) {
+ dev_err(&pdev->dev, "Failed to sync regmap for client %s at bus %d at addr 0x%02x\n",
+ data->hpdev.brdinfo->type, data->hpdev.nr, data->hpdev.brdinfo->addr);
+- err = PTR_ERR(regmap);
+ goto regcache_sync_fail;
+ }
+
+@@ -905,6 +915,8 @@ static int mlxreg_lc_remove(struct platform_device *pdev)
+ struct mlxreg_core_data *data = dev_get_platdata(&pdev->dev);
+ struct mlxreg_lc *mlxreg_lc = platform_get_drvdata(pdev);
+
++ mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_INITIALIZED, 0);
++
+ /*
+ * Probing and removal are invoked by hotplug events raised upon line card insertion and
+ * removal. If the probing procedure fails, all data is cleared. However, the hotplug event still
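
The mlxreg-lc rework above follows a common kernel locking split: the plain
helper assumes the caller already holds the lock, while a separate *_locked
variant takes and releases it. That lets mlxreg_lc_event_handler() hold the
mutex across a whole sequence of checks and updates without recursive
locking. A minimal sketch of the same split, with illustrative names:

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct demo_dev {
            struct mutex lock;
            unsigned long state;
    };

    /* Caller must hold demo_dev::lock */
    static void demo_state_update(struct demo_dev *d, unsigned long bit,
                                  bool set)
    {
            if (set)
                    d->state |= bit;
            else
                    d->state &= ~bit;
    }

    /* Convenience wrapper for callers that do not hold the lock */
    static void demo_state_update_locked(struct demo_dev *d, unsigned long bit,
                                         bool set)
    {
            mutex_lock(&d->lock);
            demo_state_update(d, bit, set);
            mutex_unlock(&d->lock);
    }
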
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index 154317e9910d2..5c757c7f64dee 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -232,7 +232,7 @@ static void pmc_power_off(void)
+ pm1_cnt_port = acpi_base_addr + PM1_CNT;
+
+ pm1_cnt_value = inl(pm1_cnt_port);
+- pm1_cnt_value &= SLEEP_TYPE_MASK;
++ pm1_cnt_value &= ~SLEEP_TYPE_MASK;
+ pm1_cnt_value |= SLEEP_TYPE_S5;
+ pm1_cnt_value |= SLEEP_ENABLE;
+
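
The pmc_atom one-liner above is the classic read-modify-write bug: to replace
a register bitfield you must clear it with the inverted mask before OR-ing in
the new value; '&= MASK' instead preserves the old field and clears everything
else. A runnable sketch with illustrative register values:

    #include <stdint.h>
    #include <stdio.h>

    #define SLEEP_TYPE_MASK 0x1c00u /* illustrative field mask */
    #define SLEEP_TYPE_S5   0x1c00u /* illustrative S5 encoding */
    #define SLEEP_ENABLE    0x2000u

    int main(void)
    {
            uint32_t pm1_cnt = 0x0408; /* pretend current register value */

            pm1_cnt &= ~SLEEP_TYPE_MASK; /* clear the old sleep-type field */
            pm1_cnt |= SLEEP_TYPE_S5;    /* then set the new value */
            pm1_cnt |= SLEEP_ENABLE;

            printf("PM1_CNT = 0x%04x\n", pm1_cnt);
            return 0;
    }
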
+diff --git a/drivers/platform/x86/x86-android-tablets.c b/drivers/platform/x86/x86-android-tablets.c
+index 4803759774358..4acd6fa8d43b8 100644
+--- a/drivers/platform/x86/x86-android-tablets.c
++++ b/drivers/platform/x86/x86-android-tablets.c
+@@ -663,9 +663,23 @@ static const struct x86_i2c_client_info chuwi_hi8_i2c_clients[] __initconst = {
+ },
+ };
+
++static int __init chuwi_hi8_init(void)
++{
++ /*
++ * Avoid the acpi_unregister_gsi() call in x86_acpi_irq_helper_get()
++ * breaking the touchscreen and logging various errors when the Windows
++ * BIOS is used.
++ */
++ if (acpi_dev_present("MSSL0001", NULL, 1))
++ return -ENODEV;
++
++ return 0;
++}
++
+ static const struct x86_dev_info chuwi_hi8_info __initconst = {
+ .i2c_client_info = chuwi_hi8_i2c_clients,
+ .i2c_client_count = ARRAY_SIZE(chuwi_hi8_i2c_clients),
++ .init = chuwi_hi8_init,
+ };
+
+ #define CZC_EC_EXTRA_PORT 0x68
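
The chuwi_hi8_init() hook above shows the veto pattern used by
x86-android-tablets: a board init callback returns -ENODEV when the firmware
already exposes a proper device (here the MSSL0001 touchscreen), so the
hand-rolled instantiation is skipped. Sketch:

    #include <linux/acpi.h>
    #include <linux/errno.h>
    #include <linux/init.h>

    static int __init demo_board_init(void)
    {
            /* Bail out when ACPI already describes the device */
            if (acpi_dev_present("MSSL0001", NULL, 1))
                    return -ENODEV;

            return 0;
    }
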
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index b5ec7726592c8..71d2931cb885c 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -167,7 +167,7 @@ struct qcom_swrm_ctrl {
+ u8 wcmd_id;
+ struct qcom_swrm_port_config pconfig[QCOM_SDW_MAX_PORTS];
+ struct sdw_stream_runtime *sruntime[SWRM_MAX_DAIS];
+- enum sdw_slave_status status[SDW_MAX_DEVICES];
++ enum sdw_slave_status status[SDW_MAX_DEVICES + 1];
+ int (*reg_read)(struct qcom_swrm_ctrl *ctrl, int reg, u32 *val);
+ int (*reg_write)(struct qcom_swrm_ctrl *ctrl, int reg, int val);
+ u32 slave_status;
+@@ -411,7 +411,7 @@ static int qcom_swrm_get_alert_slave_dev_num(struct qcom_swrm_ctrl *ctrl)
+
+ ctrl->reg_read(ctrl, SWRM_MCP_SLV_STATUS, &val);
+
+- for (dev_num = 0; dev_num < SDW_MAX_DEVICES; dev_num++) {
++ for (dev_num = 0; dev_num <= SDW_MAX_DEVICES; dev_num++) {
+ status = (val >> (dev_num * SWRM_MCP_SLV_STATUS_SZ));
+
+ if ((status & SWRM_MCP_SLV_STATUS_MASK) == SDW_SLAVE_ALERT) {
+@@ -431,7 +431,7 @@ static void qcom_swrm_get_device_status(struct qcom_swrm_ctrl *ctrl)
+ ctrl->reg_read(ctrl, SWRM_MCP_SLV_STATUS, &val);
+ ctrl->slave_status = val;
+
+- for (i = 0; i < SDW_MAX_DEVICES; i++) {
++ for (i = 0; i <= SDW_MAX_DEVICES; i++) {
+ u32 s;
+
+ s = (val >> (i * 2));
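
The qcom SoundWire hunks above fix an off-by-one: device numbers are 1-based
(0 is reserved), so an array indexed directly by device number needs
SDW_MAX_DEVICES + 1 slots and the scan loops must be inclusive. A compilable
sketch; the constant mirrors <linux/soundwire/sdw.h>:

    #define SDW_MAX_DEVICES 11 /* mirrors <linux/soundwire/sdw.h> */

    enum demo_status { DEMO_UNATTACHED, DEMO_ATTACHED, DEMO_ALERT };

    /* One slot per device number, including the highest one */
    static enum demo_status status[SDW_MAX_DEVICES + 1];

    static void demo_update_all(unsigned int val)
    {
            int i;

            for (i = 0; i <= SDW_MAX_DEVICES; i++)
                    status[i] = (enum demo_status)((val >> (i * 2)) & 0x3);
    }
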
+diff --git a/drivers/staging/r8188eu/os_dep/os_intfs.c b/drivers/staging/r8188eu/os_dep/os_intfs.c
+index cac9553666e6d..aa100b5141e1e 100644
+--- a/drivers/staging/r8188eu/os_dep/os_intfs.c
++++ b/drivers/staging/r8188eu/os_dep/os_intfs.c
+@@ -18,6 +18,7 @@ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Realtek Wireless Lan Driver");
+ MODULE_AUTHOR("Realtek Semiconductor Corp.");
+ MODULE_VERSION(DRIVERVERSION);
++MODULE_FIRMWARE("rtlwifi/rtl8188eufw.bin");
+
+ #define CONFIG_BR_EXT_BRNAME "br0"
+ #define RTW_NOTCH_FILTER 0 /* 0:Disable, 1:Enable, */
+diff --git a/drivers/staging/r8188eu/os_dep/usb_intf.c b/drivers/staging/r8188eu/os_dep/usb_intf.c
+index 68869c5daeff8..e5dc977d2fa21 100644
+--- a/drivers/staging/r8188eu/os_dep/usb_intf.c
++++ b/drivers/staging/r8188eu/os_dep/usb_intf.c
+@@ -28,6 +28,7 @@ static struct usb_device_id rtw_usb_id_tbl[] = {
+ /*=== Realtek demoboard ===*/
+ {USB_DEVICE(USB_VENDER_ID_REALTEK, 0x8179)}, /* 8188EUS */
+ {USB_DEVICE(USB_VENDER_ID_REALTEK, 0x0179)}, /* 8188ETV */
++ {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill USB-N150 Nano */
+ /*=== Customer ID ===*/
+ /****** 8188EUS ********/
+ {USB_DEVICE(0x07B8, 0x8179)}, /* Abocom - Abocom */
+diff --git a/drivers/staging/rtl8712/rtl8712_cmd.c b/drivers/staging/rtl8712/rtl8712_cmd.c
+index 2326aae6709e2..bb7db96ed8219 100644
+--- a/drivers/staging/rtl8712/rtl8712_cmd.c
++++ b/drivers/staging/rtl8712/rtl8712_cmd.c
+@@ -117,34 +117,6 @@ static void r871x_internal_cmd_hdl(struct _adapter *padapter, u8 *pbuf)
+ kfree(pdrvcmd->pbuf);
+ }
+
+-static u8 read_macreg_hdl(struct _adapter *padapter, u8 *pbuf)
+-{
+- void (*pcmd_callback)(struct _adapter *dev, struct cmd_obj *pcmd);
+- struct cmd_obj *pcmd = (struct cmd_obj *)pbuf;
+-
+- /* invoke cmd->callback function */
+- pcmd_callback = cmd_callback[pcmd->cmdcode].callback;
+- if (!pcmd_callback)
+- r8712_free_cmd_obj(pcmd);
+- else
+- pcmd_callback(padapter, pcmd);
+- return H2C_SUCCESS;
+-}
+-
+-static u8 write_macreg_hdl(struct _adapter *padapter, u8 *pbuf)
+-{
+- void (*pcmd_callback)(struct _adapter *dev, struct cmd_obj *pcmd);
+- struct cmd_obj *pcmd = (struct cmd_obj *)pbuf;
+-
+- /* invoke cmd->callback function */
+- pcmd_callback = cmd_callback[pcmd->cmdcode].callback;
+- if (!pcmd_callback)
+- r8712_free_cmd_obj(pcmd);
+- else
+- pcmd_callback(padapter, pcmd);
+- return H2C_SUCCESS;
+-}
+-
+ static u8 read_bbreg_hdl(struct _adapter *padapter, u8 *pbuf)
+ {
+ struct cmd_obj *pcmd = (struct cmd_obj *)pbuf;
+@@ -213,14 +185,6 @@ static struct cmd_obj *cmd_hdl_filter(struct _adapter *padapter,
+ pcmd_r = NULL;
+
+ switch (pcmd->cmdcode) {
+- case GEN_CMD_CODE(_Read_MACREG):
+- read_macreg_hdl(padapter, (u8 *)pcmd);
+- pcmd_r = pcmd;
+- break;
+- case GEN_CMD_CODE(_Write_MACREG):
+- write_macreg_hdl(padapter, (u8 *)pcmd);
+- pcmd_r = pcmd;
+- break;
+ case GEN_CMD_CODE(_Read_BBREG):
+ read_bbreg_hdl(padapter, (u8 *)pcmd);
+ break;
+diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
+index e92c658dba1c6..3f7827d72c480 100644
+--- a/drivers/thunderbolt/ctl.c
++++ b/drivers/thunderbolt/ctl.c
+@@ -407,7 +407,7 @@ static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
+
+ static int tb_async_error(const struct ctl_pkg *pkg)
+ {
+- const struct cfg_error_pkg *error = (const struct cfg_error_pkg *)pkg;
++ const struct cfg_error_pkg *error = pkg->buffer;
+
+ if (pkg->frame.eof != TB_CFG_PKG_ERROR)
+ return false;
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 561e1d77240e2..64f0aec7e70ae 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -3781,14 +3781,18 @@ int tb_switch_pcie_l1_enable(struct tb_switch *sw)
+ */
+ int tb_switch_xhci_connect(struct tb_switch *sw)
+ {
+- bool usb_port1, usb_port3, xhci_port1, xhci_port3;
+ struct tb_port *port1, *port3;
+ int ret;
+
++ if (sw->generation != 3)
++ return 0;
++
+ port1 = &sw->ports[1];
+ port3 = &sw->ports[3];
+
+ if (tb_switch_is_alpine_ridge(sw)) {
++ bool usb_port1, usb_port3, xhci_port1, xhci_port3;
++
+ usb_port1 = tb_lc_is_usb_plugged(port1);
+ usb_port3 = tb_lc_is_usb_plugged(port3);
+ xhci_port1 = tb_lc_is_xhci_connected(port1);
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index caa5c14ed57f0..01c112e2e2142 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -248,7 +248,7 @@ struct gsm_mux {
+ bool constipated; /* Asked by remote to shut up */
+ bool has_devices; /* Devices were registered */
+
+- spinlock_t tx_lock;
++ struct mutex tx_mutex;
+ unsigned int tx_bytes; /* TX data outstanding */
+ #define TX_THRESH_HI 8192
+ #define TX_THRESH_LO 2048
+@@ -256,7 +256,7 @@ struct gsm_mux {
+ struct list_head tx_data_list; /* Pending data packets */
+
+ /* Control messages */
+- struct timer_list kick_timer; /* Kick TX queuing on timeout */
++ struct delayed_work kick_timeout; /* Kick TX queuing on timeout */
+ struct timer_list t2_timer; /* Retransmit timer for commands */
+ int cretries; /* Command retry counter */
+ struct gsm_control *pending_cmd;/* Our current pending command */
+@@ -680,7 +680,6 @@ static int gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+ struct gsm_msg *msg;
+ u8 *dp;
+ int ocr;
+- unsigned long flags;
+
+ msg = gsm_data_alloc(gsm, addr, 0, control);
+ if (!msg)
+@@ -702,10 +701,10 @@ static int gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+
+ gsm_print_packet("Q->", addr, cr, control, NULL, 0);
+
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ list_add_tail(&msg->list, &gsm->tx_ctrl_list);
+ gsm->tx_bytes += msg->len;
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+ gsmld_write_trigger(gsm);
+
+ return 0;
+@@ -730,7 +729,7 @@ static void gsm_dlci_clear_queues(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ spin_unlock_irqrestore(&dlci->lock, flags);
+
+ /* Clear data packets in MUX write queue */
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ list_for_each_entry_safe(msg, nmsg, &gsm->tx_data_list, list) {
+ if (msg->addr != addr)
+ continue;
+@@ -738,7 +737,7 @@ static void gsm_dlci_clear_queues(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ list_del(&msg->list);
+ kfree(msg);
+ }
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+ }
+
+ /**
+@@ -1009,7 +1008,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ gsm->tx_bytes += msg->len;
+
+ gsmld_write_trigger(gsm);
+- mod_timer(&gsm->kick_timer, jiffies + 10 * gsm->t1 * HZ / 100);
++ schedule_delayed_work(&gsm->kick_timeout, 10 * gsm->t1 * HZ / 100);
+ }
+
+ /**
+@@ -1024,10 +1023,9 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+
+ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ {
+- unsigned long flags;
+- spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
++ mutex_lock(&dlci->gsm->tx_mutex);
+ __gsm_data_queue(dlci, msg);
+- spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
++ mutex_unlock(&dlci->gsm->tx_mutex);
+ }
+
+ /**
+@@ -1039,7 +1037,7 @@ static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ * is data. Keep to the MRU of the mux. This path handles the usual tty
+ * interface which is a byte stream with optional modem data.
+ *
+- * Caller must hold the tx_lock of the mux.
++ * Caller must hold the tx_mutex of the mux.
+ */
+
+ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+@@ -1099,7 +1097,7 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ * is data. Keep to the MRU of the mux. This path handles framed data
+ * queued as skbuffs to the DLCI.
+ *
+- * Caller must hold the tx_lock of the mux.
++ * Caller must hold the tx_mutex of the mux.
+ */
+
+ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+@@ -1115,7 +1113,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ if (dlci->adaption == 4)
+ overhead = 1;
+
+- /* dlci->skb is locked by tx_lock */
++ /* dlci->skb is locked by tx_mutex */
+ if (dlci->skb == NULL) {
+ dlci->skb = skb_dequeue_tail(&dlci->skb_list);
+ if (dlci->skb == NULL)
+@@ -1169,7 +1167,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ * Push an empty frame in to the transmit queue to update the modem status
+ * bits and to transmit an optional break.
+ *
+- * Caller must hold the tx_lock of the mux.
++ * Caller must hold the tx_mutex of the mux.
+ */
+
+ static int gsm_dlci_modem_output(struct gsm_mux *gsm, struct gsm_dlci *dlci,
+@@ -1283,13 +1281,12 @@ static int gsm_dlci_data_sweep(struct gsm_mux *gsm)
+
+ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ {
+- unsigned long flags;
+ int sweep;
+
+ if (dlci->constipated)
+ return;
+
+- spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
++ mutex_lock(&dlci->gsm->tx_mutex);
+ /* If we have nothing running then we need to fire up */
+ sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
+ if (dlci->gsm->tx_bytes == 0) {
+@@ -1300,7 +1297,7 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ }
+ if (sweep)
+ gsm_dlci_data_sweep(dlci->gsm);
+- spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
++ mutex_unlock(&dlci->gsm->tx_mutex);
+ }
+
+ /*
+@@ -1984,24 +1981,23 @@ static void gsm_dlci_command(struct gsm_dlci *dlci, const u8 *data, int len)
+ }
+
+ /**
+- * gsm_kick_timer - transmit if possible
+- * @t: timer contained in our gsm object
++ * gsm_kick_timeout - transmit if possible
++ * @work: work contained in our gsm object
+ *
+ * Transmit data from DLCIs if the queue is empty. We can't rely on
+ * a tty wakeup except when we filled the pipe so we need to fire off
+ * new data ourselves in other cases.
+ */
+-static void gsm_kick_timer(struct timer_list *t)
++static void gsm_kick_timeout(struct work_struct *work)
+ {
+- struct gsm_mux *gsm = from_timer(gsm, t, kick_timer);
+- unsigned long flags;
++ struct gsm_mux *gsm = container_of(work, struct gsm_mux, kick_timeout.work);
+ int sent = 0;
+
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ /* If we have nothing running then we need to fire up */
+ if (gsm->tx_bytes < TX_THRESH_LO)
+ sent = gsm_dlci_data_sweep(gsm);
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+
+ if (sent && debug & 4)
+ pr_info("%s TX queue stalled\n", __func__);
+@@ -2458,7 +2454,7 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ }
+
+ /* Finish outstanding timers, making sure they are done */
+- del_timer_sync(&gsm->kick_timer);
++ cancel_delayed_work_sync(&gsm->kick_timeout);
+ del_timer_sync(&gsm->t2_timer);
+
+ /* Finish writing to ldisc */
+@@ -2501,13 +2497,6 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ if (dlci == NULL)
+ return -ENOMEM;
+
+- timer_setup(&gsm->kick_timer, gsm_kick_timer, 0);
+- timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+- INIT_WORK(&gsm->tx_work, gsmld_write_task);
+- init_waitqueue_head(&gsm->event);
+- spin_lock_init(&gsm->control_lock);
+- spin_lock_init(&gsm->tx_lock);
+-
+ if (gsm->encoding == 0)
+ gsm->receive = gsm0_receive;
+ else
+@@ -2538,6 +2527,7 @@ static void gsm_free_mux(struct gsm_mux *gsm)
+ break;
+ }
+ }
++ mutex_destroy(&gsm->tx_mutex);
+ mutex_destroy(&gsm->mutex);
+ kfree(gsm->txframe);
+ kfree(gsm->buf);
+@@ -2609,9 +2599,15 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ }
+ spin_lock_init(&gsm->lock);
+ mutex_init(&gsm->mutex);
++ mutex_init(&gsm->tx_mutex);
+ kref_init(&gsm->ref);
+ INIT_LIST_HEAD(&gsm->tx_ctrl_list);
+ INIT_LIST_HEAD(&gsm->tx_data_list);
++ INIT_DELAYED_WORK(&gsm->kick_timeout, gsm_kick_timeout);
++ timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
++ INIT_WORK(&gsm->tx_work, gsmld_write_task);
++ init_waitqueue_head(&gsm->event);
++ spin_lock_init(&gsm->control_lock);
+
+ gsm->t1 = T1;
+ gsm->t2 = T2;
+@@ -2636,6 +2632,7 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ }
+ spin_unlock(&gsm_mux_lock);
+ if (i == MAX_MUX) {
++ mutex_destroy(&gsm->tx_mutex);
+ mutex_destroy(&gsm->mutex);
+ kfree(gsm->txframe);
+ kfree(gsm->buf);
+@@ -2791,17 +2788,16 @@ static void gsmld_write_trigger(struct gsm_mux *gsm)
+ static void gsmld_write_task(struct work_struct *work)
+ {
+ struct gsm_mux *gsm = container_of(work, struct gsm_mux, tx_work);
+- unsigned long flags;
+ int i, ret;
+
+ /* All outstanding control channel and control messages and one data
+ * frame is sent.
+ */
+ ret = -ENODEV;
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ if (gsm->tty)
+ ret = gsm_data_kick(gsm);
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+
+ if (ret >= 0)
+ for (i = 0; i < NUM_DLCI; i++)
+@@ -2858,7 +2854,8 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ flags = *fp++;
+ switch (flags) {
+ case TTY_NORMAL:
+- gsm->receive(gsm, *cp);
++ if (gsm->receive)
++ gsm->receive(gsm, *cp);
+ break;
+ case TTY_OVERRUN:
+ case TTY_BREAK:
+@@ -2946,10 +2943,6 @@ static int gsmld_open(struct tty_struct *tty)
+
+ gsmld_attach_gsm(tty, gsm);
+
+- timer_setup(&gsm->kick_timer, gsm_kick_timer, 0);
+- timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+- INIT_WORK(&gsm->tx_work, gsmld_write_task);
+-
+ return 0;
+ }
+
+@@ -3012,7 +3005,6 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ const unsigned char *buf, size_t nr)
+ {
+ struct gsm_mux *gsm = tty->disc_data;
+- unsigned long flags;
+ int space;
+ int ret;
+
+@@ -3020,13 +3012,13 @@ static ssize_t gsmld_write(struct tty_struct *tty, struct file *file,
+ return -ENODEV;
+
+ ret = -ENOBUFS;
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ space = tty_write_room(tty);
+ if (space >= nr)
+ ret = tty->ops->write(tty, buf, nr);
+ else
+ set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+
+ return ret;
+ }
+@@ -3323,14 +3315,13 @@ static struct tty_ldisc_ops tty_ldisc_packet = {
+ static void gsm_modem_upd_via_data(struct gsm_dlci *dlci, u8 brk)
+ {
+ struct gsm_mux *gsm = dlci->gsm;
+- unsigned long flags;
+
+ if (dlci->state != DLCI_OPEN || dlci->adaption != 2)
+ return;
+
+- spin_lock_irqsave(&gsm->tx_lock, flags);
++ mutex_lock(&gsm->tx_mutex);
+ gsm_dlci_modem_output(gsm, dlci, brk);
+- spin_unlock_irqrestore(&gsm->tx_lock, flags);
++ mutex_unlock(&gsm->tx_mutex);
+ }
+
+ /**
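
The n_gsm conversion above is driven by context requirements: the TX path
ends up calling tty->ops->write(), which may sleep, so the spinlock becomes a
mutex and the kick timer becomes delayed work so that the handler runs in
process context where sleeping is legal. A minimal sketch of the resulting
shape, with illustrative names:

    #include <linux/mutex.h>
    #include <linux/workqueue.h>

    struct demo_mux {
            struct mutex tx_mutex;
            struct delayed_work kick_timeout;
    };

    static void demo_kick_timeout(struct work_struct *work)
    {
            struct demo_mux *mux = container_of(work, struct demo_mux,
                                                kick_timeout.work);

            mutex_lock(&mux->tx_mutex);
            /* ... flush queued frames; may sleep here ... */
            mutex_unlock(&mux->tx_mutex);
    }

    static void demo_mux_init(struct demo_mux *mux)
    {
            mutex_init(&mux->tx_mutex);
            INIT_DELAYED_WORK(&mux->kick_timeout, demo_kick_timeout);
    }
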
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index dd1c7e4bd1c95..400a1686a6b26 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -296,9 +296,6 @@ static int atmel_config_rs485(struct uart_port *port,
+
+ mode = atmel_uart_readl(port, ATMEL_US_MR);
+
+- /* Resetting serial mode to RS232 (0x0) */
+- mode &= ~ATMEL_US_USMODE;
+-
+ if (rs485conf->flags & SER_RS485_ENABLED) {
+ dev_dbg(port->dev, "Setting UART to RS485\n");
+ if (rs485conf->flags & SER_RS485_RX_DURING_TX)
+@@ -308,6 +305,7 @@ static int atmel_config_rs485(struct uart_port *port,
+
+ atmel_uart_writel(port, ATMEL_US_TTGR,
+ rs485conf->delay_rts_after_send);
++ mode &= ~ATMEL_US_USMODE;
+ mode |= ATMEL_US_USMODE_RS485;
+ } else {
+ dev_dbg(port->dev, "Setting UART to RS232\n");
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 561d6d0b7c945..2945c1b890880 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1381,9 +1381,9 @@ static int lpuart_config_rs485(struct uart_port *port,
+ * Note: UART is assumed to be active high.
+ */
+ if (rs485->flags & SER_RS485_RTS_ON_SEND)
+- modem &= ~UARTMODEM_TXRTSPOL;
+- else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
+ modem |= UARTMODEM_TXRTSPOL;
++ else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
++ modem &= ~UARTMODEM_TXRTSPOL;
+ }
+
+ writeb(modem, sport->port.membase + UARTMODEM);
+@@ -2182,6 +2182,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ /* wait for the transmit engine to complete */
++ lpuart32_write(&sport->port, 0, UARTMODIR);
+ lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+
+ /* disable transmit and receive */
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 6eaf8eb846619..b8f5bc19416d9 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4662,9 +4662,11 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op)
+ console_lock();
+ if (vc->vc_mode != KD_TEXT)
+ rc = -EINVAL;
+- else if (vc->vc_sw->con_font_set)
++ else if (vc->vc_sw->con_font_set) {
++ if (vc_is_sel(vc))
++ clear_selection();
+ rc = vc->vc_sw->con_font_set(vc, &font, op->flags);
+- else
++ } else
+ rc = -ENOSYS;
+ console_unlock();
+ kfree(font.data);
+@@ -4691,9 +4693,11 @@ static int con_font_default(struct vc_data *vc, struct console_font_op *op)
+ console_unlock();
+ return -EINVAL;
+ }
+- if (vc->vc_sw->con_font_default)
++ if (vc->vc_sw->con_font_default) {
++ if (vc_is_sel(vc))
++ clear_selection();
+ rc = vc->vc_sw->con_font_default(vc, &font, s);
+- else
++ } else
+ rc = -ENOSYS;
+ console_unlock();
+ if (!rc) {
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index d21b69997e750..5adcb349718c3 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -1530,7 +1530,8 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ TRB_LEN(le32_to_cpu(trb->length));
+
+ if (priv_req->num_of_trb > 1 &&
+- le32_to_cpu(trb->control) & TRB_SMM)
++ le32_to_cpu(trb->control) & TRB_SMM &&
++ le32_to_cpu(trb->control) & TRB_CHAIN)
+ transfer_end = true;
+
+ cdns3_ep_inc_deq(priv_ep);
+@@ -1690,6 +1691,7 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
+ ep_cfg &= ~EP_CFG_ENABLE;
+ writel(ep_cfg, &priv_dev->regs->ep_cfg);
+ priv_ep->flags &= ~EP_QUIRK_ISO_OUT_EN;
++ priv_ep->flags |= EP_UPDATE_EP_TRBADDR;
+ }
+ cdns3_transfer_completed(priv_dev, priv_ep);
+ } else if (!(priv_ep->flags & EP_STALLED) &&
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 9b9aea24d58c4..f3c6aad277895 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1810,6 +1810,9 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x09d8, 0x0320), /* Elatec GmbH TWN3 */
+ .driver_info = NO_UNION_NORMAL, /* has misplaced union descriptor */
+ },
++ { USB_DEVICE(0x0c26, 0x0020), /* Icom ICF3400 Series */
++ .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
++ },
+ { USB_DEVICE(0x0ca6, 0xa050), /* Castles VEGA3000 */
+ .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
+ },
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 68e9121c18788..dfef85a18eb55 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6048,6 +6048,11 @@ re_enumerate:
+ * the reset is over (using their post_reset method).
+ *
+ * Return: The same as for usb_reset_and_verify_device().
++ * However, if a reset is already in progress (for instance, if a
++ * driver doesn't have pre_ or post_reset() callbacks, and while
++ * being unbound or re-bound during the ongoing reset its disconnect()
++ * or probe() routine tries to perform a second, nested reset), the
++ * routine returns -EINPROGRESS.
+ *
+ * Note:
+ * The caller must own the device lock. For example, it's safe to use
+@@ -6081,6 +6086,10 @@ int usb_reset_device(struct usb_device *udev)
+ return -EISDIR;
+ }
+
++ if (udev->reset_in_progress)
++ return -EINPROGRESS;
++ udev->reset_in_progress = 1;
++
+ port_dev = hub->ports[udev->portnum - 1];
+
+ /*
+@@ -6145,6 +6154,7 @@ int usb_reset_device(struct usb_device *udev)
+
+ usb_autosuspend_device(udev);
+ memalloc_noio_restore(noio_flag);
++ udev->reset_in_progress = 0;
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(usb_reset_device);
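
The usb_reset_device() change above guards against re-entry: a nested reset
can happen when a driver without pre_/post_reset callbacks is unbound and
re-bound during the reset and its disconnect() or probe() resets the device
again. A minimal sketch of the flag-based guard, with illustrative types:

    #include <errno.h>

    struct demo_udev {
            int reset_in_progress;
    };

    static int demo_do_reset(struct demo_udev *udev)
    {
            (void)udev; /* placeholder for the actual reset sequence */
            return 0;
    }

    static int demo_reset_device(struct demo_udev *udev)
    {
            int ret;

            if (udev->reset_in_progress)
                    return -EINPROGRESS;
            udev->reset_in_progress = 1;

            ret = demo_do_reset(udev); /* may unbind/rebind drivers */

            udev->reset_in_progress = 0;
            return ret;
    }
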
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index c8ba87df7abef..fd0ccf6f3ec5a 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -154,9 +154,9 @@ static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
+ } else if (hsotg->plat && hsotg->plat->phy_init) {
+ ret = hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
+ } else {
+- ret = phy_power_on(hsotg->phy);
++ ret = phy_init(hsotg->phy);
+ if (ret == 0)
+- ret = phy_init(hsotg->phy);
++ ret = phy_power_on(hsotg->phy);
+ }
+
+ return ret;
+@@ -188,9 +188,9 @@ static int __dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
+ } else if (hsotg->plat && hsotg->plat->phy_exit) {
+ ret = hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
+ } else {
+- ret = phy_exit(hsotg->phy);
++ ret = phy_power_off(hsotg->phy);
+ if (ret == 0)
+- ret = phy_power_off(hsotg->phy);
++ ret = phy_exit(hsotg->phy);
+ }
+ if (ret)
+ return ret;
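
The dwc2 hunks above restore the ordering the generic PHY framework expects:
phy_init() before phy_power_on() when enabling, and phy_power_off() before
phy_exit() when disabling. Sketch:

    #include <linux/phy/phy.h>

    static int demo_phy_enable(struct phy *phy)
    {
            int ret = phy_init(phy);

            if (ret)
                    return ret;

            ret = phy_power_on(phy);
            if (ret)
                    phy_exit(phy); /* unwind init on failure */

            return ret;
    }

    static void demo_phy_disable(struct phy *phy)
    {
            phy_power_off(phy);
            phy_exit(phy);
    }
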
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index ba2fa91be1d64..1db9f51f98aef 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -833,15 +833,16 @@ static void dwc3_core_exit(struct dwc3 *dwc)
+ {
+ dwc3_event_buffers_cleanup(dwc);
+
++ usb_phy_set_suspend(dwc->usb2_phy, 1);
++ usb_phy_set_suspend(dwc->usb3_phy, 1);
++ phy_power_off(dwc->usb2_generic_phy);
++ phy_power_off(dwc->usb3_generic_phy);
++
+ usb_phy_shutdown(dwc->usb2_phy);
+ usb_phy_shutdown(dwc->usb3_phy);
+ phy_exit(dwc->usb2_generic_phy);
+ phy_exit(dwc->usb3_generic_phy);
+
+- usb_phy_set_suspend(dwc->usb2_phy, 1);
+- usb_phy_set_suspend(dwc->usb3_phy, 1);
+- phy_power_off(dwc->usb2_generic_phy);
+- phy_power_off(dwc->usb3_generic_phy);
+ dwc3_clk_disable(dwc);
+ reset_control_assert(dwc->reset);
+ }
+@@ -1844,16 +1845,16 @@ err5:
+ dwc3_debugfs_exit(dwc);
+ dwc3_event_buffers_cleanup(dwc);
+
+- usb_phy_shutdown(dwc->usb2_phy);
+- usb_phy_shutdown(dwc->usb3_phy);
+- phy_exit(dwc->usb2_generic_phy);
+- phy_exit(dwc->usb3_generic_phy);
+-
+ usb_phy_set_suspend(dwc->usb2_phy, 1);
+ usb_phy_set_suspend(dwc->usb3_phy, 1);
+ phy_power_off(dwc->usb2_generic_phy);
+ phy_power_off(dwc->usb3_generic_phy);
+
++ usb_phy_shutdown(dwc->usb2_phy);
++ usb_phy_shutdown(dwc->usb3_phy);
++ phy_exit(dwc->usb2_generic_phy);
++ phy_exit(dwc->usb3_generic_phy);
++
+ dwc3_ulpi_exit(dwc);
+
+ err4:
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 6b018048fe2e1..4ee4ca09873af 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -44,6 +44,7 @@
+ #define PCI_DEVICE_ID_INTEL_ADLP 0x51ee
+ #define PCI_DEVICE_ID_INTEL_ADLM 0x54ee
+ #define PCI_DEVICE_ID_INTEL_ADLS 0x7ae1
++#define PCI_DEVICE_ID_INTEL_RPL 0x460e
+ #define PCI_DEVICE_ID_INTEL_RPLS 0x7a61
+ #define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1
+ #define PCI_DEVICE_ID_INTEL_MTL 0x7e7e
+@@ -456,6 +457,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLS),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL),
++ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
++
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPLS),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 52d5a7c81362a..886fab0008a75 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2538,9 +2538,6 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+
+ is_on = !!is_on;
+
+- if (dwc->pullups_connected == is_on)
+- return 0;
+-
+ dwc->softconnect = is_on;
+
+ /*
+@@ -2565,6 +2562,11 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ return 0;
+ }
+
++ if (dwc->pullups_connected == is_on) {
++ pm_runtime_put(dwc->dev);
++ return 0;
++ }
++
+ if (!is_on) {
+ ret = dwc3_gadget_soft_disconnect(dwc);
+ } else {
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index f56c30cf151e4..06b3d988fbf32 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -11,8 +11,13 @@
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+
++#include "../host/xhci-plat.h"
+ #include "core.h"
+
++static const struct xhci_plat_priv dwc3_xhci_plat_priv = {
++ .quirks = XHCI_SKIP_PHY_INIT,
++};
++
+ static void dwc3_host_fill_xhci_irq_res(struct dwc3 *dwc,
+ int irq, char *name)
+ {
+@@ -92,6 +97,11 @@ int dwc3_host_init(struct dwc3 *dwc)
+ goto err;
+ }
+
++ ret = platform_device_add_data(xhci, &dwc3_xhci_plat_priv,
++ sizeof(dwc3_xhci_plat_priv));
++ if (ret)
++ goto err;
++
+ memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));
+
+ if (dwc->usb3_lpm_capable)
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 1905a8d8e0c9f..08726e4c68a56 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -291,6 +291,12 @@ static struct usb_endpoint_descriptor ss_ep_int_desc = {
+ .bInterval = 4,
+ };
+
++static struct usb_ss_ep_comp_descriptor ss_ep_int_desc_comp = {
++ .bLength = sizeof(ss_ep_int_desc_comp),
++ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
++ .wBytesPerInterval = cpu_to_le16(6),
++};
++
+ /* Audio Streaming OUT Interface - Alt0 */
+ static struct usb_interface_descriptor std_as_out_if0_desc = {
+ .bLength = sizeof std_as_out_if0_desc,
+@@ -604,7 +610,8 @@ static struct usb_descriptor_header *ss_audio_desc[] = {
+ (struct usb_descriptor_header *)&in_feature_unit_desc,
+ (struct usb_descriptor_header *)&io_out_ot_desc,
+
+- (struct usb_descriptor_header *)&ss_ep_int_desc,
++ (struct usb_descriptor_header *)&ss_ep_int_desc,
++ (struct usb_descriptor_header *)&ss_ep_int_desc_comp,
+
+ (struct usb_descriptor_header *)&std_as_out_if0_desc,
+ (struct usb_descriptor_header *)&std_as_out_if1_desc,
+@@ -800,6 +807,7 @@ static void setup_headers(struct f_uac2_opts *opts,
+ struct usb_ss_ep_comp_descriptor *epout_desc_comp = NULL;
+ struct usb_ss_ep_comp_descriptor *epin_desc_comp = NULL;
+ struct usb_ss_ep_comp_descriptor *epin_fback_desc_comp = NULL;
++ struct usb_ss_ep_comp_descriptor *ep_int_desc_comp = NULL;
+ struct usb_endpoint_descriptor *epout_desc;
+ struct usb_endpoint_descriptor *epin_desc;
+ struct usb_endpoint_descriptor *epin_fback_desc;
+@@ -827,6 +835,7 @@ static void setup_headers(struct f_uac2_opts *opts,
+ epin_fback_desc = &ss_epin_fback_desc;
+ epin_fback_desc_comp = &ss_epin_fback_desc_comp;
+ ep_int_desc = &ss_ep_int_desc;
++ ep_int_desc_comp = &ss_ep_int_desc_comp;
+ }
+
+ i = 0;
+@@ -855,8 +864,11 @@ static void setup_headers(struct f_uac2_opts *opts,
+ if (EPOUT_EN(opts))
+ headers[i++] = USBDHDR(&io_out_ot_desc);
+
+- if (FUOUT_EN(opts) || FUIN_EN(opts))
++ if (FUOUT_EN(opts) || FUIN_EN(opts)) {
+ headers[i++] = USBDHDR(ep_int_desc);
++ if (ep_int_desc_comp)
++ headers[i++] = USBDHDR(ep_int_desc_comp);
++ }
+
+ if (EPOUT_EN(opts)) {
+ headers[i++] = USBDHDR(&std_as_out_if0_desc);
+diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
+index b859a158a4140..e122050eebaf1 100644
+--- a/drivers/usb/gadget/function/storage_common.c
++++ b/drivers/usb/gadget/function/storage_common.c
+@@ -294,8 +294,10 @@ EXPORT_SYMBOL_GPL(fsg_lun_fsync_sub);
+ void store_cdrom_address(u8 *dest, int msf, u32 addr)
+ {
+ if (msf) {
+- /* Convert to Minutes-Seconds-Frames */
+- addr >>= 2; /* Convert to 2048-byte frames */
++ /*
++ * Convert to Minutes-Seconds-Frames.
++ * Sector size is already set to 2048 bytes.
++ */
+ addr += 2*75; /* Lead-in occupies 2 seconds */
+ dest[3] = addr % 75; /* Frames */
+ addr /= 75;
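
The storage_common fix above drops an 'addr >>= 2' that wrongly rescaled an
address already expressed in 2048-byte sectors. The remaining arithmetic is
the standard LBA-to-MSF conversion: 75 frames per second, 60 seconds per
minute, offset by the 2-second lead-in. A runnable sketch:

    #include <stdint.h>
    #include <stdio.h>

    static void lba_to_msf(uint32_t addr, uint8_t msf[3])
    {
            addr += 2 * 75;     /* lead-in occupies 2 seconds */
            msf[2] = addr % 75; /* frames */
            addr /= 75;
            msf[1] = addr % 60; /* seconds */
            msf[0] = addr / 60; /* minutes */
    }

    int main(void)
    {
            uint8_t msf[3];

            lba_to_msf(300, msf);
            printf("%02u:%02u:%02u\n", msf[0], msf[1], msf[2]);
            return 0;
    }
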
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index cafcf260394cd..c63c0c2cf649d 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -736,7 +736,10 @@ int usb_gadget_disconnect(struct usb_gadget *gadget)
+ ret = gadget->ops->pullup(gadget, 0);
+ if (!ret) {
+ gadget->connected = 0;
+- gadget->udc->driver->disconnect(gadget);
++ mutex_lock(&udc_lock);
++ if (gadget->udc->driver)
++ gadget->udc->driver->disconnect(gadget);
++ mutex_unlock(&udc_lock);
+ }
+
+ out:
+@@ -1489,7 +1492,6 @@ static int gadget_bind_driver(struct device *dev)
+
+ usb_gadget_udc_set_speed(udc, driver->max_speed);
+
+- mutex_lock(&udc_lock);
+ ret = driver->bind(udc->gadget, driver);
+ if (ret)
+ goto err_bind;
+@@ -1499,7 +1501,6 @@ static int gadget_bind_driver(struct device *dev)
+ goto err_start;
+ usb_gadget_enable_async_callbacks(udc);
+ usb_udc_connect_control(udc);
+- mutex_unlock(&udc_lock);
+
+ kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+ return 0;
+@@ -1512,6 +1513,7 @@ static int gadget_bind_driver(struct device *dev)
+ dev_err(&udc->dev, "failed to start %s: %d\n",
+ driver->function, ret);
+
++ mutex_lock(&udc_lock);
+ udc->driver = NULL;
+ driver->is_bound = false;
+ mutex_unlock(&udc_lock);
+@@ -1529,7 +1531,6 @@ static void gadget_unbind_driver(struct device *dev)
+
+ kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+
+- mutex_lock(&udc_lock);
+ usb_gadget_disconnect(gadget);
+ usb_gadget_disable_async_callbacks(udc);
+ if (gadget->irq)
+@@ -1537,6 +1538,7 @@ static void gadget_unbind_driver(struct device *dev)
+ udc->driver->unbind(gadget);
+ usb_gadget_udc_stop(udc);
+
++ mutex_lock(&udc_lock);
+ driver->is_bound = false;
+ udc->driver = NULL;
+ mutex_unlock(&udc_lock);
+@@ -1612,7 +1614,7 @@ static ssize_t soft_connect_store(struct device *dev,
+ struct usb_udc *udc = container_of(dev, struct usb_udc, dev);
+ ssize_t ret;
+
+- mutex_lock(&udc_lock);
++ device_lock(&udc->gadget->dev);
+ if (!udc->driver) {
+ dev_err(dev, "soft-connect without a gadget driver\n");
+ ret = -EOPNOTSUPP;
+@@ -1633,7 +1635,7 @@ static ssize_t soft_connect_store(struct device *dev,
+
+ ret = n;
+ out:
+- mutex_unlock(&udc_lock);
++ device_unlock(&udc->gadget->dev);
+ return ret;
+ }
+ static DEVICE_ATTR_WO(soft_connect);
+@@ -1652,11 +1654,15 @@ static ssize_t function_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+ struct usb_udc *udc = container_of(dev, struct usb_udc, dev);
+- struct usb_gadget_driver *drv = udc->driver;
++ struct usb_gadget_driver *drv;
++ int rc = 0;
+
+- if (!drv || !drv->function)
+- return 0;
+- return scnprintf(buf, PAGE_SIZE, "%s\n", drv->function);
++ mutex_lock(&udc_lock);
++ drv = udc->driver;
++ if (drv && drv->function)
++ rc = scnprintf(buf, PAGE_SIZE, "%s\n", drv->function);
++ mutex_unlock(&udc_lock);
++ return rc;
+ }
+ static DEVICE_ATTR_RO(function);
+
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 0fdc014c94011..4619d5e89d5be 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -652,7 +652,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd)
+ * It will release and re-acquire the lock while calling ACPI
+ * method.
+ */
+-void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
++static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd,
+ u16 index, bool on, unsigned long *flags)
+ __must_hold(&xhci->lock)
+ {
+@@ -1648,6 +1648,17 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+
+ status = bus_state->resuming_ports;
+
++ /*
++ * SS devices are only visible to the roothub after link training completes.
++ * Keep polling the roothubs for a grace period after xHC start.
++ */
++ if (xhci->run_graceperiod) {
++ if (time_before(jiffies, xhci->run_graceperiod))
++ status = 1;
++ else
++ xhci->run_graceperiod = 0;
++ }
++
+ mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
+
+ /* For each port, did anything change? If so, set that bit in buf. */
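
The xhci hunks above (see also the xhci.c change further down) add a 500 ms
grace period after controller start during which the roothubs keep being
polled, since SuperSpeed devices only become visible once link training
completes. The jiffies pattern used, as a sketch with illustrative names:

    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct demo_hc {
            unsigned long run_graceperiod; /* 0 = no grace period active */
    };

    static void demo_hc_started(struct demo_hc *hc)
    {
            hc->run_graceperiod = jiffies + msecs_to_jiffies(500);
    }

    static bool demo_hc_in_graceperiod(struct demo_hc *hc)
    {
            if (!hc->run_graceperiod)
                    return false;

            /* time_before() is safe across jiffies wraparound */
            if (time_before(jiffies, hc->run_graceperiod))
                    return true;

            hc->run_graceperiod = 0;
            return false;
    }
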
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 06a6b19acaae6..579899eb24c15 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -425,7 +425,6 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+
+ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ {
+- u32 extra_cs_count;
+ u32 start_ss, last_ss;
+ u32 start_cs, last_cs;
+
+@@ -461,18 +460,12 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ if (last_cs > 7)
+ return -ESCH_CS_OVERFLOW;
+
+- if (sch_ep->ep_type == ISOC_IN_EP)
+- extra_cs_count = (last_cs == 7) ? 1 : 2;
+- else /* ep_type : INTR IN / INTR OUT */
+- extra_cs_count = 1;
+-
+- cs_count += extra_cs_count;
+ if (cs_count > 7)
+ cs_count = 7; /* HW limit */
+
+ sch_ep->cs_count = cs_count;
+- /* one for ss, the other for idle */
+- sch_ep->num_budget_microframes = cs_count + 2;
++ /* ss, idle are ignored */
++ sch_ep->num_budget_microframes = cs_count;
+
+ /*
+ * if interval=1, maxp >752, num_budget_microframes is larger
+@@ -771,8 +764,8 @@ int xhci_mtk_drop_ep(struct usb_hcd *hcd, struct usb_device *udev,
+ if (ret)
+ return ret;
+
+- if (ep->hcpriv)
+- drop_ep_quirk(hcd, udev, ep);
++ /* needn't check @ep->hcpriv, xhci_endpoint_disable set it NULL */
++ drop_ep_quirk(hcd, udev, ep);
+
+ return 0;
+ }
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 044855818cb11..a8641b6536eea 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -398,12 +398,17 @@ static int xhci_plat_remove(struct platform_device *dev)
+ pm_runtime_get_sync(&dev->dev);
+ xhci->xhc_state |= XHCI_STATE_REMOVING;
+
+- usb_remove_hcd(shared_hcd);
+- xhci->shared_hcd = NULL;
++ if (shared_hcd) {
++ usb_remove_hcd(shared_hcd);
++ xhci->shared_hcd = NULL;
++ }
++
+ usb_phy_shutdown(hcd->usb_phy);
+
+ usb_remove_hcd(hcd);
+- usb_put_hcd(shared_hcd);
++
++ if (shared_hcd)
++ usb_put_hcd(shared_hcd);
+
+ clk_disable_unprepare(clk);
+ clk_disable_unprepare(reg_clk);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 65858f6074377..38649284ff889 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -151,9 +151,11 @@ int xhci_start(struct xhci_hcd *xhci)
+ xhci_err(xhci, "Host took too long to start, "
+ "waited %u microseconds.\n",
+ XHCI_MAX_HALT_USEC);
+- if (!ret)
++ if (!ret) {
+ /* clear state flags. Including dying, halted or removing */
+ xhci->xhc_state = 0;
++ xhci->run_graceperiod = jiffies + msecs_to_jiffies(500);
++ }
+
+ return ret;
+ }
+@@ -791,8 +793,6 @@ static void xhci_stop(struct usb_hcd *hcd)
+ void xhci_shutdown(struct usb_hcd *hcd)
+ {
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+- unsigned long flags;
+- int i;
+
+ if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+ usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
+@@ -808,21 +808,12 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ del_timer_sync(&xhci->shared_hcd->rh_timer);
+ }
+
+- spin_lock_irqsave(&xhci->lock, flags);
++ spin_lock_irq(&xhci->lock);
+ xhci_halt(xhci);
+-
+- /* Power off USB2 ports*/
+- for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
+- xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags);
+-
+- /* Power off USB3 ports*/
+- for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
+- xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags);
+-
+ /* Workaround for spurious wakeups at shutdown with HSW */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+- spin_unlock_irqrestore(&xhci->lock, flags);
++ spin_unlock_irq(&xhci->lock);
+
+ xhci_cleanup_msix(xhci);
+
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 1960b47acfb28..7caa0db5e826d 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1826,7 +1826,7 @@ struct xhci_hcd {
+
+ /* Host controller watchdog timer structures */
+ unsigned int xhc_state;
+-
++ unsigned long run_graceperiod;
+ u32 command;
+ struct s3_save s3;
+ /* Host controller is dying - not responding to commands. "I'm not dead yet!"
+@@ -2196,8 +2196,6 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
+ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+ int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
+ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd);
+-void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index,
+- bool on, unsigned long *flags);
+
+ void xhci_hc_died(struct xhci_hcd *xhci);
+
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index 4d61df6a9b5c8..70693cae83efb 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -86,7 +86,7 @@ config USB_MUSB_TUSB6010
+ tristate "TUSB6010"
+ depends on HAS_IOMEM
+ depends on ARCH_OMAP2PLUS || COMPILE_TEST
+- depends on NOP_USB_XCEIV = USB_MUSB_HDRC # both built-in or both modules
++ depends on NOP_USB_XCEIV!=m || USB_MUSB_HDRC=m
+
+ config USB_MUSB_OMAP2PLUS
+ tristate "OMAP2430 and onwards"
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 2798fca712612..af01a462cc43c 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -97,7 +97,10 @@ struct ch341_private {
+ u8 mcr;
+ u8 msr;
+ u8 lcr;
++
+ unsigned long quirks;
++ u8 version;
++
+ unsigned long break_end;
+ };
+
+@@ -250,8 +253,12 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ /*
+ * CH341A buffers data until a full endpoint-size packet (32 bytes)
+ * has been received unless bit 7 is set.
++ *
++ * At least one device with version 0x27 appears to have this bit
++ * inverted.
+ */
+- val |= BIT(7);
++ if (priv->version > 0x27)
++ val |= BIT(7);
+
+ r = ch341_control_out(dev, CH341_REQ_WRITE_REG,
+ CH341_REG_DIVISOR << 8 | CH341_REG_PRESCALER,
+@@ -265,6 +272,9 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ * (stop bits, parity and word length). Version 0x30 and above use
+ * CH341_REG_LCR only and CH341_REG_LCR2 is always set to zero.
+ */
++ if (priv->version < 0x30)
++ return 0;
++
+ r = ch341_control_out(dev, CH341_REQ_WRITE_REG,
+ CH341_REG_LCR2 << 8 | CH341_REG_LCR, lcr);
+ if (r)
+@@ -308,7 +318,9 @@ static int ch341_configure(struct usb_device *dev, struct ch341_private *priv)
+ r = ch341_control_in(dev, CH341_REQ_READ_VERSION, 0, 0, buffer, size);
+ if (r)
+ return r;
+- dev_dbg(&dev->dev, "Chip version: 0x%02x\n", buffer[0]);
++
++ priv->version = buffer[0];
++ dev_dbg(&dev->dev, "Chip version: 0x%02x\n", priv->version);
+
+ r = ch341_control_out(dev, CH341_REQ_SERIAL_INIT, 0, 0);
+ if (r < 0)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c374620a486f0..a34957c4b64c0 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -130,6 +130,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x83AA) }, /* Mark-10 Digital Force Gauge */
+ { USB_DEVICE(0x10C4, 0x83D8) }, /* DekTec DTA Plus VHF/UHF Booster/Attenuator */
+ { USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
++ { USB_DEVICE(0x10C4, 0x8414) }, /* Decagon USB Cable Adapter */
+ { USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */
+ { USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */
+ { USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index d5a3986dfee75..52d59be920342 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1045,6 +1045,8 @@ static const struct usb_device_id id_table_combined[] = {
+ /* IDS GmbH devices */
+ { USB_DEVICE(IDS_VID, IDS_SI31A_PID) },
+ { USB_DEVICE(IDS_VID, IDS_CM31A_PID) },
++ /* Omron devices */
++ { USB_DEVICE(OMRON_VID, OMRON_CS1W_CIF31_PID) },
+ /* U-Blox devices */
+ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
+ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 4e92c165c86bf..31c8ccabbbb78 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -661,6 +661,12 @@
+ #define INFINEON_TRIBOARD_TC1798_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */
+ #define INFINEON_TRIBOARD_TC2X7_PID 0x0043 /* DAS JTAG TriBoard TC2X7 V1.0 */
+
++/*
++ * Omron Corporation (https://www.omron.com)
++ */
++ #define OMRON_VID 0x0590
++ #define OMRON_CS1W_CIF31_PID 0x00b2
++
+ /*
+ * Acton Research Corp.
+ */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index de59fa919540a..a5e8374a8d710 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -253,6 +253,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_BG96 0x0296
+ #define QUECTEL_PRODUCT_EP06 0x0306
+ #define QUECTEL_PRODUCT_EM05G 0x030a
++#define QUECTEL_PRODUCT_EM060K 0x030b
+ #define QUECTEL_PRODUCT_EM12 0x0512
+ #define QUECTEL_PRODUCT_RM500Q 0x0800
+ #define QUECTEL_PRODUCT_EC200S_CN 0x6002
+@@ -438,6 +439,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_MV31_2_RMNET 0x00b9
+ #define CINTERION_PRODUCT_MV32_WA 0x00f1
+ #define CINTERION_PRODUCT_MV32_WB 0x00f2
++#define CINTERION_PRODUCT_MV32_WA_RMNET 0x00f3
++#define CINTERION_PRODUCT_MV32_WB_RMNET 0x00f4
+
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID 0x0b3c
+@@ -573,6 +576,10 @@ static void option_instat_callback(struct urb *urb);
+ #define WETELECOM_PRODUCT_6802 0x6802
+ #define WETELECOM_PRODUCT_WMD300 0x6803
+
++/* OPPO products */
++#define OPPO_VENDOR_ID 0x22d9
++#define OPPO_PRODUCT_R11 0x276c
++
+
+ /* Device flags */
+
+@@ -1138,6 +1145,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ .driver_info = RSVD(6) | ZLP },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
+ .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+@@ -1993,8 +2003,12 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0)},
+ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
+ .driver_info = RSVD(3)},
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA_RMNET, 0xff),
++ .driver_info = RSVD(0) },
+ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
+ .driver_info = RSVD(3)},
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB_RMNET, 0xff),
++ .driver_info = RSVD(0) },
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ .driver_info = RSVD(4) },
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+@@ -2155,6 +2169,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */
++ { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 1a05e3dcfec8a..4993227ab2930 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2294,6 +2294,13 @@ UNUSUAL_DEV( 0x1e74, 0x4621, 0x0000, 0x0000,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ),
+
++/* Reported by Witold Lipieta <witold.lipieta@thaumatec.com> */
++UNUSUAL_DEV( 0x1fc9, 0x0117, 0x0100, 0x0100,
++ "NXP Semiconductors",
++ "PN7462AU",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_RESIDUE ),
++
+ /* Supplied with some Castlewood ORB removable drives */
+ UNUSUAL_DEV( 0x2027, 0xa001, 0x0000, 0x9999,
+ "Double-H Technology",
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index c1d8c23baa399..de66a2949e33b 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -99,8 +99,8 @@ static int dp_altmode_configure(struct dp_altmode *dp, u8 con)
+ case DP_STATUS_CON_UFP_D:
+ case DP_STATUS_CON_BOTH: /* NOTE: First acting as DP source */
+ conf |= DP_CONF_UFP_U_AS_UFP_D;
+- pin_assign = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo) &
+- DP_CAP_UFP_D_PIN_ASSIGN(dp->port->vdo);
++ pin_assign = DP_CAP_PIN_ASSIGN_UFP_D(dp->alt->vdo) &
++ DP_CAP_PIN_ASSIGN_DFP_D(dp->port->vdo);
+ break;
+ default:
+ break;
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 47b733f78fb0d..a8e273fe204ab 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -571,9 +571,11 @@ err_unregister_switch:
+
+ static int is_memory(struct acpi_resource *res, void *data)
+ {
+- struct resource r;
++ struct resource_win win = {};
++ struct resource *r = &win.res;
+
+- return !acpi_dev_resource_memory(res, &r);
++ return !(acpi_dev_resource_memory(res, r) ||
++ acpi_dev_resource_address_space(res, &win));
+ }
+
+ /* IOM ACPI IDs and IOM_PORT_STATUS_OFFSET */
+@@ -583,6 +585,9 @@ static const struct acpi_device_id iom_acpi_ids[] = {
+
+ /* AlderLake */
+ { "INTC1079", 0x160, },
++
++ /* Meteor Lake */
++ { "INTC107A", 0x160, },
+ {}
+ };
+
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 3bc2f4ebd1feb..984a13a9efc22 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -6191,6 +6191,13 @@ static int tcpm_psy_set_prop(struct power_supply *psy,
+ struct tcpm_port *port = power_supply_get_drvdata(psy);
+ int ret;
+
++ /*
++ * All the properties below are related to USB PD. The check needs to be
++ * property specific when a non-pd related property is added.
++ */
++ if (!port->pd_supported)
++ return -EOPNOTSUPP;
++
+ switch (psp) {
+ case POWER_SUPPLY_PROP_ONLINE:
+ ret = tcpm_psy_set_online(port, val);
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 1aea46493b852..7f2624f427241 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1200,32 +1200,6 @@ out_unlock:
+ return ret;
+ }
+
+-static void ucsi_unregister_connectors(struct ucsi *ucsi)
+-{
+- struct ucsi_connector *con;
+- int i;
+-
+- if (!ucsi->connector)
+- return;
+-
+- for (i = 0; i < ucsi->cap.num_connectors; i++) {
+- con = &ucsi->connector[i];
+-
+- if (!con->wq)
+- break;
+-
+- cancel_work_sync(&con->work);
+- ucsi_unregister_partner(con);
+- ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON);
+- ucsi_unregister_port_psy(con);
+- destroy_workqueue(con->wq);
+- typec_unregister_port(con->port);
+- }
+-
+- kfree(ucsi->connector);
+- ucsi->connector = NULL;
+-}
+-
+ /**
+ * ucsi_init - Initialize UCSI interface
+ * @ucsi: UCSI to be initialized
+@@ -1234,6 +1208,7 @@ static void ucsi_unregister_connectors(struct ucsi *ucsi)
+ */
+ static int ucsi_init(struct ucsi *ucsi)
+ {
++ struct ucsi_connector *con;
+ u64 command;
+ int ret;
+ int i;
+@@ -1264,7 +1239,7 @@ static int ucsi_init(struct ucsi *ucsi)
+ }
+
+ /* Allocate the connectors. Released in ucsi_unregister() */
+- ucsi->connector = kcalloc(ucsi->cap.num_connectors,
++ ucsi->connector = kcalloc(ucsi->cap.num_connectors + 1,
+ sizeof(*ucsi->connector), GFP_KERNEL);
+ if (!ucsi->connector) {
+ ret = -ENOMEM;
+@@ -1288,7 +1263,15 @@ static int ucsi_init(struct ucsi *ucsi)
+ return 0;
+
+ err_unregister:
+- ucsi_unregister_connectors(ucsi);
++ for (con = ucsi->connector; con->port; con++) {
++ ucsi_unregister_partner(con);
++ ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON);
++ ucsi_unregister_port_psy(con);
++ if (con->wq)
++ destroy_workqueue(con->wq);
++ typec_unregister_port(con->port);
++ con->port = NULL;
++ }
+
+ err_reset:
+ memset(&ucsi->cap, 0, sizeof(ucsi->cap));
+@@ -1402,6 +1385,7 @@ EXPORT_SYMBOL_GPL(ucsi_register);
+ void ucsi_unregister(struct ucsi *ucsi)
+ {
+ u64 cmd = UCSI_SET_NOTIFICATION_ENABLE;
++ int i;
+
+ /* Make sure that we are not in the middle of driver initialization */
+ cancel_delayed_work_sync(&ucsi->work);
+@@ -1409,7 +1393,18 @@ void ucsi_unregister(struct ucsi *ucsi)
+ /* Disable notifications */
+ ucsi->ops->async_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd));
+
+- ucsi_unregister_connectors(ucsi);
++ for (i = 0; i < ucsi->cap.num_connectors; i++) {
++ cancel_work_sync(&ucsi->connector[i].work);
++ ucsi_unregister_partner(&ucsi->connector[i]);
++ ucsi_unregister_altmodes(&ucsi->connector[i],
++ UCSI_RECIPIENT_CON);
++ ucsi_unregister_port_psy(&ucsi->connector[i]);
++ if (ucsi->connector[i].wq)
++ destroy_workqueue(ucsi->connector[i].wq);
++ typec_unregister_port(ucsi->connector[i].port);
++ }
++
++ kfree(ucsi->connector);
+ }
+ EXPORT_SYMBOL_GPL(ucsi_unregister);
+
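The reworked error path above relies on a sentinel: ucsi_init() now allocates num_connectors + 1 zeroed entries, so the unwind loop can stop at the first connector with no port instead of carrying a separate count. A minimal userspace sketch of that pattern, with hypothetical names (struct conn is not the UCSI type):

#include <stdio.h>
#include <stdlib.h>

struct conn { void *port; };

int main(void)
{
        int num = 3;
        /* +1 zeroed slot acts as the terminator, as in the kcalloc() above */
        struct conn *c = calloc(num + 1, sizeof(*c));
        if (!c)
                return 1;

        c[0].port = &num;       /* pretend two ports registered, then fail */
        c[1].port = &num;

        for (struct conn *p = c; p->port; p++) {  /* unwind stops at slot 2 */
                printf("unregister %td\n", p - c);
                p->port = NULL;
        }
        free(c);
        return 0;
}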
+diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
+index 738029de3c672..e1ec725c2819d 100644
+--- a/drivers/xen/grant-table.c
++++ b/drivers/xen/grant-table.c
+@@ -1047,6 +1047,9 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
+ size_t size;
+ int i, ret;
+
++ if (args->nr_pages < 0 || args->nr_pages > (INT_MAX >> PAGE_SHIFT))
++ return -ENOMEM;
++
+ size = args->nr_pages << PAGE_SHIFT;
+ if (args->coherent)
+ args->vaddr = dma_alloc_coherent(args->dev, size,
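The gnttab_dma_alloc_pages() hunk above guards the byte-count computation: nr_pages << PAGE_SHIFT can wrap for large or negative nr_pages. A minimal userspace sketch of the same guard, assuming 4 KiB pages (PAGE_SHIFT 12) and a hypothetical checked_size() helper:

#include <limits.h>
#include <stdio.h>

#define PAGE_SHIFT 12                           /* assumed 4 KiB pages */

static int checked_size(int nr_pages, size_t *out)
{
        if (nr_pages < 0 || nr_pages > (INT_MAX >> PAGE_SHIFT))
                return -1;                      /* byte count would overflow */
        *out = (size_t)nr_pages << PAGE_SHIFT;
        return 0;
}

int main(void)
{
        size_t size;

        printf("%d\n", checked_size(1 << 10, &size));  /* 0: 4 MiB fits */
        printf("%d\n", checked_size(INT_MAX, &size));  /* -1: rejected  */
        return 0;
}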
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 6cba2c6de2f96..2ad58c4652084 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -111,6 +111,7 @@ struct cachefiles_cache {
+ char *tag; /* cache binding tag */
+ refcount_t unbind_pincount;/* refcount to do daemon unbind */
+ struct xarray reqs; /* xarray of pending on-demand requests */
++ unsigned long req_id_next;
+ struct xarray ondemand_ids; /* xarray for ondemand_id allocation */
+ u32 ondemand_id_next;
+ };
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index 1fee702d55293..0254ed39f68ce 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -158,9 +158,13 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
+
+ /* fail OPEN request if daemon reports an error */
+ if (size < 0) {
+- if (!IS_ERR_VALUE(size))
+- size = -EINVAL;
+- req->error = size;
++ if (!IS_ERR_VALUE(size)) {
++ req->error = -EINVAL;
++ ret = -EINVAL;
++ } else {
++ req->error = size;
++ ret = 0;
++ }
+ goto out;
+ }
+
+@@ -238,14 +242,19 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+ unsigned long id = 0;
+ size_t n;
+ int ret = 0;
+- XA_STATE(xas, &cache->reqs, 0);
++ XA_STATE(xas, &cache->reqs, cache->req_id_next);
+
+ /*
+- * Search for a request that has not ever been processed, to prevent
+- * requests from being processed repeatedly.
++ * Cyclically search for a request that has never been processed,
++ * to prevent requests from being processed repeatedly, and make
++ * request distribution fair.
+ */
+ xa_lock(&cache->reqs);
+ req = xas_find_marked(&xas, UINT_MAX, CACHEFILES_REQ_NEW);
++ if (!req && cache->req_id_next > 0) {
++ xas_set(&xas, 0);
++ req = xas_find_marked(&xas, cache->req_id_next - 1, CACHEFILES_REQ_NEW);
++ }
+ if (!req) {
+ xa_unlock(&cache->reqs);
+ return 0;
+@@ -260,6 +269,7 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+ }
+
+ xas_clear_mark(&xas, CACHEFILES_REQ_NEW);
++ cache->req_id_next = xas.xa_index + 1;
+ xa_unlock(&cache->reqs);
+
+ id = xas.xa_index;
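The cachefiles change above turns the request scan into a cyclic one: resume at req_id_next, and if nothing is marked from there on, wrap and re-scan [0, req_id_next). A minimal sketch of the same wrap-around scan over a plain array standing in for the xarray (hypothetical names throughout; -1 marks an empty slot):

#include <stdio.h>

static int find_from(const int *slots, int n, int start)
{
        for (int i = start; i < n; i++)
                if (slots[i] >= 0)
                        return i;
        return -1;
}

static int cyclic_find(const int *slots, int n, int *next)
{
        int i = find_from(slots, n, *next);

        if (i < 0 && *next > 0)                 /* wrap: re-scan [0, *next) */
                i = find_from(slots, *next, 0);
        if (i >= 0)
                *next = i + 1;                  /* fairness: resume past hit */
        return i;
}

int main(void)
{
        int slots[] = { 7, -1, 9, -1 }, next = 2;

        printf("%d\n", cyclic_find(slots, 4, &next));   /* 2           */
        printf("%d\n", cyclic_find(slots, 4, &next));   /* 0 (wrapped) */
        return 0;
}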
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index c7614ade875b5..ba58d7fd54f9e 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -964,16 +964,17 @@ SMB2_negotiate(const unsigned int xid,
+ } else if (rc != 0)
+ goto neg_exit;
+
++ rc = -EIO;
+ if (strcmp(server->vals->version_string,
+ SMB3ANY_VERSION_STRING) == 0) {
+ if (rsp->DialectRevision == cpu_to_le16(SMB20_PROT_ID)) {
+ cifs_server_dbg(VFS,
+ "SMB2 dialect returned but not requested\n");
+- return -EIO;
++ goto neg_exit;
+ } else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+ cifs_server_dbg(VFS,
+ "SMB2.1 dialect returned but not requested\n");
+- return -EIO;
++ goto neg_exit;
+ } else if (rsp->DialectRevision == cpu_to_le16(SMB311_PROT_ID)) {
+ /* ops set to 3.0 by default, so update */
+ server->ops = &smb311_operations;
+@@ -984,7 +985,7 @@ SMB2_negotiate(const unsigned int xid,
+ if (rsp->DialectRevision == cpu_to_le16(SMB20_PROT_ID)) {
+ cifs_server_dbg(VFS,
+ "SMB2 dialect returned but not requested\n");
+- return -EIO;
++ goto neg_exit;
+ } else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+ /* ops set to 3.0 by default, so update */
+ server->ops = &smb21_operations;
+@@ -998,7 +999,7 @@ SMB2_negotiate(const unsigned int xid,
+ /* if requested single dialect ensure returned dialect matched */
+ cifs_server_dbg(VFS, "Invalid 0x%x dialect returned: not requested\n",
+ le16_to_cpu(rsp->DialectRevision));
+- return -EIO;
++ goto neg_exit;
+ }
+
+ cifs_dbg(FYI, "mode 0x%x\n", rsp->SecurityMode);
+@@ -1016,9 +1017,10 @@ SMB2_negotiate(const unsigned int xid,
+ else {
+ cifs_server_dbg(VFS, "Invalid dialect returned by server 0x%x\n",
+ le16_to_cpu(rsp->DialectRevision));
+- rc = -EIO;
+ goto neg_exit;
+ }
++
++ rc = 0;
+ server->dialect = le16_to_cpu(rsp->DialectRevision);
+
+ /*
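The SMB2_negotiate() rework above presets rc = -EIO once and routes every invalid-dialect branch through the common neg_exit label, so the function's cleanup can no longer be skipped by an early return. A toy sketch of that error-path shape (the real exit label frees the response buffer; here a printf stands in):

#include <stdio.h>

static int negotiate(int dialect, int requested)
{
        int rc = -5;            /* preset failure; stands in for rc = -EIO */

        if (dialect != requested)
                goto neg_exit;  /* was: return -EIO, which skipped cleanup */

        rc = 0;                 /* success only once fully validated */
neg_exit:
        printf("cleanup runs, rc=%d\n", rc);
        return rc;
}

int main(void)
{
        negotiate(0x0311, 0x0311);      /* cleanup runs, rc=0  */
        negotiate(0x0202, 0x0311);      /* cleanup runs, rc=-5 */
        return 0;
}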
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 7424cf234ae03..ed352c00330cd 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -398,6 +398,9 @@ enum bpf_type_flag {
+ /* DYNPTR points to a ringbuf record. */
+ DYNPTR_TYPE_RINGBUF = BIT(9 + BPF_BASE_TYPE_BITS),
+
++ /* Size is known at compile time. */
++ MEM_FIXED_SIZE = BIT(10 + BPF_BASE_TYPE_BITS),
++
+ __BPF_TYPE_FLAG_MAX,
+ __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
+ };
+@@ -461,6 +464,8 @@ enum bpf_arg_type {
+ * all bytes or clear them in error case.
+ */
+ ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | ARG_PTR_TO_MEM,
++ /* Pointer to valid memory of size known at compile time. */
++ ARG_PTR_TO_FIXED_SIZE_MEM = MEM_FIXED_SIZE | ARG_PTR_TO_MEM,
+
+ /* This must be the last entry. Its purpose is to ensure the enum is
+ * wide enough to hold the higher bits reserved for bpf_type_flag.
+@@ -526,6 +531,14 @@ struct bpf_func_proto {
+ u32 *arg5_btf_id;
+ };
+ u32 *arg_btf_id[5];
++ struct {
++ size_t arg1_size;
++ size_t arg2_size;
++ size_t arg3_size;
++ size_t arg4_size;
++ size_t arg5_size;
++ };
++ size_t arg_size[5];
+ };
+ int *ret_btf_id; /* return value btf_id */
+ bool (*allowed)(const struct bpf_prog *prog);
+diff --git a/include/linux/platform_data/x86/pmc_atom.h b/include/linux/platform_data/x86/pmc_atom.h
+index 6807839c718bd..ea01dd80153b3 100644
+--- a/include/linux/platform_data/x86/pmc_atom.h
++++ b/include/linux/platform_data/x86/pmc_atom.h
+@@ -7,6 +7,8 @@
+ #ifndef PMC_ATOM_H
+ #define PMC_ATOM_H
+
++#include <linux/bits.h>
++
+ /* ValleyView Power Control Unit PCI Device ID */
+ #define PCI_DEVICE_ID_VLV_PMC 0x0F1C
+ /* CherryTrail Power Control Unit PCI Device ID */
+@@ -139,9 +141,9 @@
+ #define ACPI_MMIO_REG_LEN 0x100
+
+ #define PM1_CNT 0x4
+-#define SLEEP_TYPE_MASK 0xFFFFECFF
++#define SLEEP_TYPE_MASK GENMASK(12, 10)
+ #define SLEEP_TYPE_S5 0x1C00
+-#define SLEEP_ENABLE 0x2000
++#define SLEEP_ENABLE BIT(13)
+
+ extern int pmc_atom_read(int offset, u32 *value);
+
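For reference, GENMASK(12, 10) expands to 0x1c00 — the same value as SLEEP_TYPE_S5 — and BIT(13) to the old 0x2000 SLEEP_ENABLE constant, which is what makes the switch to <linux/bits.h> macros self-documenting. A userspace sketch of the expansions (local 32-bit re-definitions for illustration, not the kernel header):

#include <stdio.h>
#include <stdint.h>

#define BIT(n)        (UINT32_C(1) << (n))
#define GENMASK(h, l) ((~UINT32_C(0) << (l)) & (~UINT32_C(0) >> (31 - (h))))

int main(void)
{
        printf("0x%x\n", (unsigned)GENMASK(12, 10)); /* 0x1c00, matches S5 */
        printf("0x%x\n", (unsigned)BIT(13));         /* 0x2000, old enable */
        return 0;
}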
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 60bee864d8977..1a664ab2ebc66 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -575,6 +575,7 @@ struct usb3_lpm_parameters {
+ * @devaddr: device address, XHCI: assigned by HW, others: same as devnum
+ * @can_submit: URBs may be submitted
+ * @persist_enabled: USB_PERSIST enabled for this device
++ * @reset_in_progress: the device is being reset
+ * @have_langid: whether string_langid is valid
+ * @authorized: policy has said we can use it;
+ * (user space) policy determines if we authorize this device to be
+@@ -661,6 +662,7 @@ struct usb_device {
+
+ unsigned can_submit:1;
+ unsigned persist_enabled:1;
++ unsigned reset_in_progress:1;
+ unsigned have_langid:1;
+ unsigned authorized:1;
+ unsigned authenticated:1;
+diff --git a/include/linux/usb/typec_dp.h b/include/linux/usb/typec_dp.h
+index cfb916cccd316..8d09c2f0a9b80 100644
+--- a/include/linux/usb/typec_dp.h
++++ b/include/linux/usb/typec_dp.h
+@@ -73,6 +73,11 @@ enum {
+ #define DP_CAP_USB BIT(7)
+ #define DP_CAP_DFP_D_PIN_ASSIGN(_cap_) (((_cap_) & GENMASK(15, 8)) >> 8)
+ #define DP_CAP_UFP_D_PIN_ASSIGN(_cap_) (((_cap_) & GENMASK(23, 16)) >> 16)
++/* Get pin assignment taking plug & receptacle into consideration */
++#define DP_CAP_PIN_ASSIGN_UFP_D(_cap_) ((_cap_ & DP_CAP_RECEPTACLE) ? \
++ DP_CAP_UFP_D_PIN_ASSIGN(_cap_) : DP_CAP_DFP_D_PIN_ASSIGN(_cap_))
++#define DP_CAP_PIN_ASSIGN_DFP_D(_cap_) ((_cap_ & DP_CAP_RECEPTACLE) ? \
++ DP_CAP_DFP_D_PIN_ASSIGN(_cap_) : DP_CAP_UFP_D_PIN_ASSIGN(_cap_))
+
+ /* DisplayPort Status Update VDO bits */
+ #define DP_STATUS_CONNECTION(_status_) ((_status_) & 3)
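The two new macros above mirror the pin-assignment fields depending on whether the capability describes a receptacle or a plug (captive cable), where the data roles are reversed. A userspace sketch with local re-definitions; DP_CAP_RECEPTACLE is assumed to be BIT(6) as in the mainline header, and the example field values are made up:

#include <stdio.h>

#define BIT(n)                  (1u << (n))
#define GENMASK(h, l)           ((~0u << (l)) & (~0u >> (31 - (h))))
#define DP_CAP_RECEPTACLE       BIT(6)          /* assumed bit position */
#define DFP_D_PIN_ASSIGN(c)     (((c) & GENMASK(15, 8)) >> 8)
#define UFP_D_PIN_ASSIGN(c)     (((c) & GENMASK(23, 16)) >> 16)
#define PIN_ASSIGN_UFP_D(c)     (((c) & DP_CAP_RECEPTACLE) ? \
                                 UFP_D_PIN_ASSIGN(c) : DFP_D_PIN_ASSIGN(c))

int main(void)
{
        unsigned recep = DP_CAP_RECEPTACLE | (0xau << 16) | (0x5u << 8);
        unsigned plug  = (0xau << 16) | (0x5u << 8);

        /* receptacle reads the UFP_D field (0xa); a plug is mirrored (0x5) */
        printf("0x%x 0x%x\n", PIN_ASSIGN_UFP_D(recep), PIN_ASSIGN_UFP_D(plug));
        return 0;
}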
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 20f60d9da7418..cf1f22c01ed3d 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -246,7 +246,8 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
+ __be32 daddr, __be32 saddr,
+ __be32 key, __u8 tos,
+ struct net *net, int oif,
+- __u32 mark, __u32 tun_inner_hash)
++ __u32 mark, __u32 tun_inner_hash,
++ __u8 flow_flags)
+ {
+ memset(fl4, 0, sizeof(*fl4));
+
+@@ -263,6 +264,7 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
+ fl4->fl4_gre_key = key;
+ fl4->flowi4_mark = mark;
+ fl4->flowi4_multipath_hash = tun_inner_hash;
++ fl4->flowi4_flags = flow_flags;
+ }
+
+ int ip_tunnel_init(struct net_device *dev);
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 7a394f7c205c4..34dfa45ef4f3b 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -762,8 +762,10 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
+ pos++;
+ }
+ }
++
++ /* no link or prog match, skip the cgroup of this layer */
++ continue;
+ found:
+- BUG_ON(!cg);
+ progs = rcu_dereference_protected(
+ desc->bpf.effective[atype],
+ lockdep_is_held(&cgroup_mutex));
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index fb6bd57228a84..cf44ff50b1f23 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1005,7 +1005,7 @@ pure_initcall(bpf_jit_charge_init);
+
+ int bpf_jit_charge_modmem(u32 size)
+ {
+- if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) {
++ if (atomic_long_add_return(size, &bpf_jit_current) > READ_ONCE(bpf_jit_limit)) {
+ if (!bpf_capable()) {
+ atomic_long_sub(size, &bpf_jit_current);
+ return -EPERM;
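bpf_jit_limit is writable at runtime via sysctl, so the hunk above annotates the lockless read with READ_ONCE() to get a single, untorn load. A userspace approximation of the idea, using a volatile cast in place of the kernel macro (the charge()/limit names are hypothetical):

#include <stdio.h>

#define READ_ONCE(x)  (*(volatile __typeof__(x) *)&(x))

static long jit_limit = 1 << 20;        /* hypothetical default, sysctl-like */
static long jit_current;

static int charge(long size)
{
        if (jit_current + size > READ_ONCE(jit_limit))
                return -1;              /* over the (racily read) limit */
        jit_current += size;
        return 0;
}

int main(void)
{
        printf("%d\n", charge(4096));           /* 0  */
        printf("%d\n", charge(1L << 21));       /* -1 */
        return 0;
}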
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 82e83cfb4114a..dd0fc2a86ce17 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5153,7 +5153,7 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ {
+ switch (func_id) {
+ case BPF_FUNC_sys_bpf:
+- return &bpf_sys_bpf_proto;
++ return !perfmon_capable() ? NULL : &bpf_sys_bpf_proto;
+ case BPF_FUNC_btf_find_by_name_kind:
+ return &bpf_btf_find_by_name_kind_proto;
+ case BPF_FUNC_sys_close:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 0e45d405f151c..339147061127a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5533,17 +5533,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
+ type == ARG_CONST_SIZE_OR_ZERO;
+ }
+
+-static bool arg_type_is_alloc_size(enum bpf_arg_type type)
+-{
+- return type == ARG_CONST_ALLOC_SIZE_OR_ZERO;
+-}
+-
+-static bool arg_type_is_int_ptr(enum bpf_arg_type type)
+-{
+- return type == ARG_PTR_TO_INT ||
+- type == ARG_PTR_TO_LONG;
+-}
+-
+ static bool arg_type_is_release(enum bpf_arg_type type)
+ {
+ return type & OBJ_RELEASE;
+@@ -5847,6 +5836,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+ enum bpf_arg_type arg_type = fn->arg_type[arg];
+ enum bpf_reg_type type = reg->type;
++ u32 *arg_btf_id = NULL;
+ int err = 0;
+
+ if (arg_type == ARG_DONTCARE)
+@@ -5883,7 +5873,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ */
+ goto skip_type_check;
+
+- err = check_reg_type(env, regno, arg_type, fn->arg_btf_id[arg], meta);
++ /* arg_btf_id and arg_size are in a union. */
++ if (base_type(arg_type) == ARG_PTR_TO_BTF_ID)
++ arg_btf_id = fn->arg_btf_id[arg];
++
++ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+ if (err)
+ return err;
+
+@@ -5924,7 +5918,8 @@ skip_type_check:
+ meta->ref_obj_id = reg->ref_obj_id;
+ }
+
+- if (arg_type == ARG_CONST_MAP_PTR) {
++ switch (base_type(arg_type)) {
++ case ARG_CONST_MAP_PTR:
+ /* bpf_map_xxx(map_ptr) call: remember that map_ptr */
+ if (meta->map_ptr) {
+ /* Use map_uid (which is unique id of inner map) to reject:
+@@ -5949,7 +5944,8 @@ skip_type_check:
+ }
+ meta->map_ptr = reg->map_ptr;
+ meta->map_uid = reg->map_uid;
+- } else if (arg_type == ARG_PTR_TO_MAP_KEY) {
++ break;
++ case ARG_PTR_TO_MAP_KEY:
+ /* bpf_map_xxx(..., map_ptr, ..., key) call:
+ * check that [key, key + map->key_size) are within
+ * stack limits and initialized
+@@ -5966,7 +5962,8 @@ skip_type_check:
+ err = check_helper_mem_access(env, regno,
+ meta->map_ptr->key_size, false,
+ NULL);
+- } else if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE) {
++ break;
++ case ARG_PTR_TO_MAP_VALUE:
+ if (type_may_be_null(arg_type) && register_is_null(reg))
+ return 0;
+
+@@ -5982,14 +5979,16 @@ skip_type_check:
+ err = check_helper_mem_access(env, regno,
+ meta->map_ptr->value_size, false,
+ meta);
+- } else if (arg_type == ARG_PTR_TO_PERCPU_BTF_ID) {
++ break;
++ case ARG_PTR_TO_PERCPU_BTF_ID:
+ if (!reg->btf_id) {
+ verbose(env, "Helper has invalid btf_id in R%d\n", regno);
+ return -EACCES;
+ }
+ meta->ret_btf = reg->btf;
+ meta->ret_btf_id = reg->btf_id;
+- } else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
++ break;
++ case ARG_PTR_TO_SPIN_LOCK:
+ if (meta->func_id == BPF_FUNC_spin_lock) {
+ if (process_spin_lock(env, regno, true))
+ return -EACCES;
+@@ -6000,21 +5999,32 @@ skip_type_check:
+ verbose(env, "verifier internal error\n");
+ return -EFAULT;
+ }
+- } else if (arg_type == ARG_PTR_TO_TIMER) {
++ break;
++ case ARG_PTR_TO_TIMER:
+ if (process_timer_func(env, regno, meta))
+ return -EACCES;
+- } else if (arg_type == ARG_PTR_TO_FUNC) {
++ break;
++ case ARG_PTR_TO_FUNC:
+ meta->subprogno = reg->subprogno;
+- } else if (base_type(arg_type) == ARG_PTR_TO_MEM) {
++ break;
++ case ARG_PTR_TO_MEM:
+ /* The access to this pointer is only checked when we hit the
+ * next is_mem_size argument below.
+ */
+ meta->raw_mode = arg_type & MEM_UNINIT;
+- } else if (arg_type_is_mem_size(arg_type)) {
+- bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO);
+-
+- err = check_mem_size_reg(env, reg, regno, zero_size_allowed, meta);
+- } else if (arg_type_is_dynptr(arg_type)) {
++ if (arg_type & MEM_FIXED_SIZE) {
++ err = check_helper_mem_access(env, regno,
++ fn->arg_size[arg], false,
++ meta);
++ }
++ break;
++ case ARG_CONST_SIZE:
++ err = check_mem_size_reg(env, reg, regno, false, meta);
++ break;
++ case ARG_CONST_SIZE_OR_ZERO:
++ err = check_mem_size_reg(env, reg, regno, true, meta);
++ break;
++ case ARG_PTR_TO_DYNPTR:
+ if (arg_type & MEM_UNINIT) {
+ if (!is_dynptr_reg_valid_uninit(env, reg)) {
+ verbose(env, "Dynptr has to be an uninitialized dynptr\n");
+@@ -6048,21 +6058,31 @@ skip_type_check:
+ err_extra, arg + 1);
+ return -EINVAL;
+ }
+- } else if (arg_type_is_alloc_size(arg_type)) {
++ break;
++ case ARG_CONST_ALLOC_SIZE_OR_ZERO:
+ if (!tnum_is_const(reg->var_off)) {
+ verbose(env, "R%d is not a known constant'\n",
+ regno);
+ return -EACCES;
+ }
+ meta->mem_size = reg->var_off.value;
+- } else if (arg_type_is_int_ptr(arg_type)) {
++ err = mark_chain_precision(env, regno);
++ if (err)
++ return err;
++ break;
++ case ARG_PTR_TO_INT:
++ case ARG_PTR_TO_LONG:
++ {
+ int size = int_ptr_type_to_size(arg_type);
+
+ err = check_helper_mem_access(env, regno, size, false, meta);
+ if (err)
+ return err;
+ err = check_ptr_alignment(env, reg, 0, size, true);
+- } else if (arg_type == ARG_PTR_TO_CONST_STR) {
++ break;
++ }
++ case ARG_PTR_TO_CONST_STR:
++ {
+ struct bpf_map *map = reg->map_ptr;
+ int map_off;
+ u64 map_addr;
+@@ -6101,9 +6121,12 @@ skip_type_check:
+ verbose(env, "string is not zero-terminated\n");
+ return -EINVAL;
+ }
+- } else if (arg_type == ARG_PTR_TO_KPTR) {
++ break;
++ }
++ case ARG_PTR_TO_KPTR:
+ if (process_kptr_func(env, regno, meta))
+ return -EACCES;
++ break;
+ }
+
+ return err;
+@@ -6400,11 +6423,19 @@ static bool check_raw_mode_ok(const struct bpf_func_proto *fn)
+ return count <= 1;
+ }
+
+-static bool check_args_pair_invalid(enum bpf_arg_type arg_curr,
+- enum bpf_arg_type arg_next)
++static bool check_args_pair_invalid(const struct bpf_func_proto *fn, int arg)
+ {
+- return (base_type(arg_curr) == ARG_PTR_TO_MEM) !=
+- arg_type_is_mem_size(arg_next);
++ bool is_fixed = fn->arg_type[arg] & MEM_FIXED_SIZE;
++ bool has_size = fn->arg_size[arg] != 0;
++ bool is_next_size = false;
++
++ if (arg + 1 < ARRAY_SIZE(fn->arg_type))
++ is_next_size = arg_type_is_mem_size(fn->arg_type[arg + 1]);
++
++ if (base_type(fn->arg_type[arg]) != ARG_PTR_TO_MEM)
++ return is_next_size;
++
++ return has_size == is_next_size || is_next_size == is_fixed;
+ }
+
+ static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
+@@ -6415,11 +6446,11 @@ static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
+ * helper function specification.
+ */
+ if (arg_type_is_mem_size(fn->arg1_type) ||
+- base_type(fn->arg5_type) == ARG_PTR_TO_MEM ||
+- check_args_pair_invalid(fn->arg1_type, fn->arg2_type) ||
+- check_args_pair_invalid(fn->arg2_type, fn->arg3_type) ||
+- check_args_pair_invalid(fn->arg3_type, fn->arg4_type) ||
+- check_args_pair_invalid(fn->arg4_type, fn->arg5_type))
++ check_args_pair_invalid(fn, 0) ||
++ check_args_pair_invalid(fn, 1) ||
++ check_args_pair_invalid(fn, 2) ||
++ check_args_pair_invalid(fn, 3) ||
++ check_args_pair_invalid(fn, 4))
+ return false;
+
+ return true;
+@@ -6460,7 +6491,10 @@ static bool check_btf_id_ok(const struct bpf_func_proto *fn)
+ if (base_type(fn->arg_type[i]) == ARG_PTR_TO_BTF_ID && !fn->arg_btf_id[i])
+ return false;
+
+- if (base_type(fn->arg_type[i]) != ARG_PTR_TO_BTF_ID && fn->arg_btf_id[i])
++ if (base_type(fn->arg_type[i]) != ARG_PTR_TO_BTF_ID && fn->arg_btf_id[i] &&
++ /* arg_btf_id and arg_size are in a union. */
++ (base_type(fn->arg_type[i]) != ARG_PTR_TO_MEM ||
++ !(fn->arg_type[i] & MEM_FIXED_SIZE)))
+ return false;
+ }
+
+diff --git a/mm/pagewalk.c b/mm/pagewalk.c
+index 9b3db11a4d1db..fa7a3d21a7518 100644
+--- a/mm/pagewalk.c
++++ b/mm/pagewalk.c
+@@ -110,7 +110,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+ do {
+ again:
+ next = pmd_addr_end(addr, end);
+- if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
++ if (pmd_none(*pmd)) {
+ if (ops->pte_hole)
+ err = ops->pte_hole(addr, next, depth, walk);
+ if (err)
+@@ -171,7 +171,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+ do {
+ again:
+ next = pud_addr_end(addr, end);
+- if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
++ if (pud_none(*pud)) {
+ if (ops->pte_hole)
+ err = ops->pte_hole(addr, next, depth, walk);
+ if (err)
+@@ -366,19 +366,19 @@ static int __walk_page_range(unsigned long start, unsigned long end,
+ struct vm_area_struct *vma = walk->vma;
+ const struct mm_walk_ops *ops = walk->ops;
+
+- if (vma && ops->pre_vma) {
++ if (ops->pre_vma) {
+ err = ops->pre_vma(start, end, walk);
+ if (err)
+ return err;
+ }
+
+- if (vma && is_vm_hugetlb_page(vma)) {
++ if (is_vm_hugetlb_page(vma)) {
+ if (ops->hugetlb_entry)
+ err = walk_hugetlb_range(start, end, walk);
+ } else
+ err = walk_pgd_range(start, end, walk);
+
+- if (vma && ops->post_vma)
++ if (ops->post_vma)
+ ops->post_vma(walk);
+
+ return err;
+@@ -450,9 +450,13 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
+ if (!vma) { /* after the last vma */
+ walk.vma = NULL;
+ next = end;
++ if (ops->pte_hole)
++ err = ops->pte_hole(start, next, -1, &walk);
+ } else if (start < vma->vm_start) { /* outside vma */
+ walk.vma = NULL;
+ next = min(end, vma->vm_start);
++ if (ops->pte_hole)
++ err = ops->pte_hole(start, next, -1, &walk);
+ } else { /* inside vma */
+ walk.vma = vma;
+ next = min(end, vma->vm_end);
+@@ -470,9 +474,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
+ }
+ if (err < 0)
+ break;
+- }
+- if (walk.vma || walk.ops->pte_hole)
+ err = __walk_page_range(start, next, &walk);
++ }
+ if (err)
+ break;
+ } while (start = next, start < end);
+@@ -501,9 +504,9 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+ if (start >= end || !walk.mm)
+ return -EINVAL;
+
+- mmap_assert_locked(walk.mm);
++ mmap_assert_write_locked(walk.mm);
+
+- return __walk_page_range(start, end, &walk);
++ return walk_pgd_range(start, end, &walk);
+ }
+
+ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
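After this rework, walk_page_range() reports ranges with no VMA through ops->pte_hole (depth -1) and only descends page tables inside a VMA. A minimal sketch of that interval walk, with a sorted array standing in for the VMA tree and printf for the callbacks:

#include <stdio.h>

struct vma { unsigned long start, end; };

static void walk(unsigned long start, unsigned long end,
                 const struct vma *v, int n)
{
        int i = 0;

        while (start < end) {
                unsigned long next;

                while (i < n && v[i].end <= start)      /* skip passed VMAs */
                        i++;
                if (i == n || start < v[i].start) {     /* no VMA here */
                        next = (i < n && v[i].start < end) ? v[i].start : end;
                        printf("pte_hole [%lx, %lx)\n", start, next);
                } else {                                /* inside a VMA */
                        next = v[i].end < end ? v[i].end : end;
                        printf("walk     [%lx, %lx)\n", start, next);
                }
                start = next;
        }
}

int main(void)
{
        const struct vma v[] = { { 0x1000, 0x3000 }, { 0x5000, 0x6000 } };

        walk(0x0, 0x8000, v, 2);
        return 0;
}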
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index eea3d28d173c2..8adab455a68b3 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -152,13 +152,13 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ {
+ const struct ptdump_range *range = st->range;
+
+- mmap_read_lock(mm);
++ mmap_write_lock(mm);
+ while (range->start != range->end) {
+ walk_page_range_novma(mm, range->start, range->end,
+ &ptdump_ops, pgd, st);
+ range++;
+ }
+- mmap_read_unlock(mm);
++ mmap_write_unlock(mm);
+
+ /* Flush out the last page */
+ st->note_page(st, 0, -1, 0);
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 77c3adf40e504..dbd4b6f9b0e79 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -420,6 +420,28 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
+ }
+ EXPORT_SYMBOL(kmem_cache_create);
+
++#ifdef SLAB_SUPPORTS_SYSFS
++/*
++ * For a given kmem_cache, kmem_cache_destroy() should only be called
++ * once or there will be a use-after-free problem. The actual deletion
++ * and release of the kobject do not need slab_mutex or cpu_hotplug_lock
++ * protection. So they are now done without holding those locks.
++ *
++ * Note that there will be a slight delay in the deletion of sysfs files
++ * if kmem_cache_release() is called indirectly from a work function.
++ */
++static void kmem_cache_release(struct kmem_cache *s)
++{
++ sysfs_slab_unlink(s);
++ sysfs_slab_release(s);
++}
++#else
++static void kmem_cache_release(struct kmem_cache *s)
++{
++ slab_kmem_cache_release(s);
++}
++#endif
++
+ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
+ {
+ LIST_HEAD(to_destroy);
+@@ -446,11 +468,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
+ list_for_each_entry_safe(s, s2, &to_destroy, list) {
+ debugfs_slab_release(s);
+ kfence_shutdown_cache(s);
+-#ifdef SLAB_SUPPORTS_SYSFS
+- sysfs_slab_release(s);
+-#else
+- slab_kmem_cache_release(s);
+-#endif
++ kmem_cache_release(s);
+ }
+ }
+
+@@ -465,20 +483,11 @@ static int shutdown_cache(struct kmem_cache *s)
+ list_del(&s->list);
+
+ if (s->flags & SLAB_TYPESAFE_BY_RCU) {
+-#ifdef SLAB_SUPPORTS_SYSFS
+- sysfs_slab_unlink(s);
+-#endif
+ list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
+ schedule_work(&slab_caches_to_rcu_destroy_work);
+ } else {
+ kfence_shutdown_cache(s);
+ debugfs_slab_release(s);
+-#ifdef SLAB_SUPPORTS_SYSFS
+- sysfs_slab_unlink(s);
+- sysfs_slab_release(s);
+-#else
+- slab_kmem_cache_release(s);
+-#endif
+ }
+
+ return 0;
+@@ -493,14 +502,16 @@ void slab_kmem_cache_release(struct kmem_cache *s)
+
+ void kmem_cache_destroy(struct kmem_cache *s)
+ {
++ int refcnt;
++
+ if (unlikely(!s) || !kasan_check_byte(s))
+ return;
+
+ cpus_read_lock();
+ mutex_lock(&slab_mutex);
+
+- s->refcount--;
+- if (s->refcount)
++ refcnt = --s->refcount;
++ if (refcnt)
+ goto out_unlock;
+
+ WARN(shutdown_cache(s),
+@@ -509,6 +520,8 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ out_unlock:
+ mutex_unlock(&slab_mutex);
+ cpus_read_unlock();
++ if (!refcnt && !(s->flags & SLAB_TYPESAFE_BY_RCU))
++ kmem_cache_release(s);
+ }
+ EXPORT_SYMBOL(kmem_cache_destroy);
+
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 7cb956d3abb26..2c320a8fe70d7 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3998,6 +3998,17 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, void *data,
+ }
+ }
+
++ if (i == ARRAY_SIZE(hci_cc_table)) {
++ /* Unknown opcode, assume byte 0 contains the status, so
++ * that e.g. __hci_cmd_sync() properly returns errors
++ * for vendor-specific commands sent by HCI drivers.
++ * If a vendor doesn't actually follow this convention we may
++ * need to introduce a vendor CC table in order to properly set
++ * the status.
++ */
++ *status = skb->data[0];
++ }
++
+ handle_cmd_cnt_and_timer(hdev, ev->ncmd);
+
+ hci_req_cmd_complete(hdev, *opcode, *status, req_complete,
+@@ -5557,7 +5568,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ */
+ hci_dev_clear_flag(hdev, HCI_LE_ADV);
+
+- conn = hci_lookup_le_connect(hdev);
++ conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, bdaddr);
+ if (!conn) {
+ /* In case of error status and there is no connection pending
+ * just unlock as there is nothing to cleanup.
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index b5e7d4b8ab24a..3b4cee67bbd60 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4452,9 +4452,11 @@ static int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ /* Cleanup hci_conn object if it cannot be cancelled as it
+ * likely means the controller and host stack are out of sync.
+ */
+- if (err)
++ if (err) {
++ hci_dev_lock(hdev);
+ hci_conn_failed(conn, err);
+-
++ hci_dev_unlock(hdev);
++ }
+ return err;
+ case BT_CONNECT2:
+ return hci_reject_conn_sync(hdev, conn, reason);
+@@ -4967,17 +4969,21 @@ int hci_suspend_sync(struct hci_dev *hdev)
+ /* Prevent disconnects from causing scanning to be re-enabled */
+ hci_pause_scan_sync(hdev);
+
+- /* Soft disconnect everything (power off) */
+- err = hci_disconnect_all_sync(hdev, HCI_ERROR_REMOTE_POWER_OFF);
+- if (err) {
+- /* Set state to BT_RUNNING so resume doesn't notify */
+- hdev->suspend_state = BT_RUNNING;
+- hci_resume_sync(hdev);
+- return err;
+- }
++ if (hci_conn_count(hdev)) {
++ /* Soft disconnect everything (power off) */
++ err = hci_disconnect_all_sync(hdev, HCI_ERROR_REMOTE_POWER_OFF);
++ if (err) {
++ /* Set state to BT_RUNNING so resume doesn't notify */
++ hdev->suspend_state = BT_RUNNING;
++ hci_resume_sync(hdev);
++ return err;
++ }
+
+- /* Update event mask so only the allowed event can wakeup the host */
+- hci_set_event_mask_sync(hdev);
++ /* Update event mask so only the allowed event can wake up the
++ * host.
++ */
++ hci_set_event_mask_sync(hdev);
++ }
+
+ /* Only configure accept list if disconnect succeeded and wake
+ * isn't being prevented.
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 84209e661171e..69ac686c7cae3 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -462,7 +462,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+
+ if (copied == len)
+ break;
+- } while (!sg_is_last(sge));
++ } while ((i != msg_rx->sg.end) && !sg_is_last(sge));
+
+ if (unlikely(peek)) {
+ msg_rx = sk_psock_next_msg(psock, msg_rx);
+@@ -472,7 +472,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ }
+
+ msg_rx->sg.start = i;
+- if (!sge->length && sg_is_last(sge)) {
++ if (!sge->length && (i == msg_rx->sg.end || sg_is_last(sge))) {
+ msg_rx = sk_psock_dequeue_msg(psock);
+ kfree_sk_msg(msg_rx);
+ }
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index f361d3d56be27..943edf4ad4db0 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -389,7 +389,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ dev_match = dev_match || (res.type == RTN_LOCAL &&
+ dev == net->loopback_dev);
+ if (dev_match) {
+- ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
++ ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ return ret;
+ }
+ if (no_addr)
+@@ -401,7 +401,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ ret = 0;
+ if (fib_lookup(net, &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE) == 0) {
+ if (res.type == RTN_UNICAST)
+- ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
++ ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
+ }
+ return ret;
+
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 5c58e21f724e9..f866d6282b2b3 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -609,7 +609,7 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
+ tunnel_id_to_key32(key->tun_id),
+ key->tos & ~INET_ECN_MASK, dev_net(dev), 0,
+- skb->mark, skb_get_hash(skb));
++ skb->mark, skb_get_hash(skb), key->flow_flags);
+ rt = ip_route_output_key(dev_net(dev), &fl4);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 94017a8c39945..1ad8809fc2e3b 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -295,7 +295,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+ ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
+ iph->saddr, tunnel->parms.o_key,
+ RT_TOS(iph->tos), dev_net(dev),
+- tunnel->parms.link, tunnel->fwmark, 0);
++ tunnel->parms.link, tunnel->fwmark, 0, 0);
+ rt = ip_route_output_key(tunnel->net, &fl4);
+
+ if (!IS_ERR(rt)) {
+@@ -570,7 +570,8 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ }
+ ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
+ tunnel_id_to_key32(key->tun_id), RT_TOS(tos),
+- dev_net(dev), 0, skb->mark, skb_get_hash(skb));
++ dev_net(dev), 0, skb->mark, skb_get_hash(skb),
++ key->flow_flags);
+ if (tunnel->encap.type != TUNNEL_ENCAP_NONE)
+ goto tx_error;
+
+@@ -728,7 +729,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
+ tunnel->parms.o_key, RT_TOS(tos),
+ dev_net(dev), tunnel->parms.link,
+- tunnel->fwmark, skb_get_hash(skb));
++ tunnel->fwmark, skb_get_hash(skb), 0);
+
+ if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0)
+ goto tx_error;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index b1637990d5708..e5435156e545d 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3630,11 +3630,11 @@ static void tcp_send_challenge_ack(struct sock *sk)
+
+ /* Then check host-wide RFC 5961 rate limit. */
+ now = jiffies / HZ;
+- if (now != challenge_timestamp) {
++ if (now != READ_ONCE(challenge_timestamp)) {
+ u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
+ u32 half = (ack_limit + 1) >> 1;
+
+- challenge_timestamp = now;
++ WRITE_ONCE(challenge_timestamp, now);
+ WRITE_ONCE(challenge_count, half + prandom_u32_max(ack_limit));
+ }
+ count = READ_ONCE(challenge_count);
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 71899e5a5a111..1215c863e1c41 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1412,12 +1412,6 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ psock->sk = csk;
+ psock->bpf_prog = prog;
+
+- err = strp_init(&psock->strp, csk, &cb);
+- if (err) {
+- kmem_cache_free(kcm_psockp, psock);
+- goto out;
+- }
+-
+ write_lock_bh(&csk->sk_callback_lock);
+
+ /* Check if sk_user_data is already by KCM or someone else.
+@@ -1425,13 +1419,18 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ */
+ if (csk->sk_user_data) {
+ write_unlock_bh(&csk->sk_callback_lock);
+- strp_stop(&psock->strp);
+- strp_done(&psock->strp);
+ kmem_cache_free(kcm_psockp, psock);
+ err = -EALREADY;
+ goto out;
+ }
+
++ err = strp_init(&psock->strp, csk, &cb);
++ if (err) {
++ write_unlock_bh(&csk->sk_callback_lock);
++ kmem_cache_free(kcm_psockp, psock);
++ goto out;
++ }
++
+ psock->save_data_ready = csk->sk_data_ready;
+ psock->save_write_space = csk->sk_write_space;
+ psock->save_state_change = csk->sk_state_change;
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 8ff547ff351ed..4e4c9df637354 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -534,6 +534,10 @@ int ieee80211_ibss_finish_csa(struct ieee80211_sub_if_data *sdata)
+
+ sdata_assert_lock(sdata);
+
++ /* When not connected/joined, sending CSA doesn't make sense. */
++ if (ifibss->state != IEEE80211_IBSS_MLME_JOINED)
++ return -ENOLINK;
++
+ /* update cfg80211 bss information with the new channel */
+ if (!is_zero_ether_addr(ifibss->bssid)) {
+ cbss = cfg80211_get_bss(sdata->local->hw.wiphy,
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index b698756887eb5..e692a2487eb5d 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -465,16 +465,19 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
+ scan_req = rcu_dereference_protected(local->scan_req,
+ lockdep_is_held(&local->mtx));
+
+- if (scan_req != local->int_scan_req) {
+- local->scan_info.aborted = aborted;
+- cfg80211_scan_done(scan_req, &local->scan_info);
+- }
+ RCU_INIT_POINTER(local->scan_req, NULL);
+ RCU_INIT_POINTER(local->scan_sdata, NULL);
+
+ local->scanning = 0;
+ local->scan_chandef.chan = NULL;
+
++ synchronize_rcu();
++
++ if (scan_req != local->int_scan_req) {
++ local->scan_info.aborted = aborted;
++ cfg80211_scan_done(scan_req, &local->scan_info);
++ }
++
+ /* Set power back to normal operating levels. */
+ ieee80211_hw_config(local, 0);
+
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index c0b2ce70e101c..fef9ad44d82ec 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2221,9 +2221,9 @@ static inline u64 sta_get_tidstats_msdu(struct ieee80211_sta_rx_stats *rxstats,
+ u64 value;
+
+ do {
+- start = u64_stats_fetch_begin(&rxstats->syncp);
++ start = u64_stats_fetch_begin_irq(&rxstats->syncp);
+ value = rxstats->msdu[tid];
+- } while (u64_stats_fetch_retry(&rxstats->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));
+
+ return value;
+ }
+@@ -2289,9 +2289,9 @@ static inline u64 sta_get_stats_bytes(struct ieee80211_sta_rx_stats *rxstats)
+ u64 value;
+
+ do {
+- start = u64_stats_fetch_begin(&rxstats->syncp);
++ start = u64_stats_fetch_begin_irq(&rxstats->syncp);
+ value = rxstats->bytes;
+- } while (u64_stats_fetch_retry(&rxstats->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));
+
+ return value;
+ }
+diff --git a/net/mac802154/rx.c b/net/mac802154/rx.c
+index b8ce84618a55b..c439125ef2b91 100644
+--- a/net/mac802154/rx.c
++++ b/net/mac802154/rx.c
+@@ -44,7 +44,7 @@ ieee802154_subif_frame(struct ieee802154_sub_if_data *sdata,
+
+ switch (mac_cb(skb)->dest.mode) {
+ case IEEE802154_ADDR_NONE:
+- if (mac_cb(skb)->dest.mode != IEEE802154_ADDR_NONE)
++ if (hdr->source.mode != IEEE802154_ADDR_NONE)
+ /* FIXME: check if we are PAN coordinator */
+ skb->pkt_type = PACKET_OTHERHOST;
+ else
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 35b5f806fdda1..b52afe316dc41 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -1079,9 +1079,9 @@ static void mpls_get_stats(struct mpls_dev *mdev,
+
+ p = per_cpu_ptr(mdev->stats, i);
+ do {
+- start = u64_stats_fetch_begin(&p->syncp);
++ start = u64_stats_fetch_begin_irq(&p->syncp);
+ local = p->stats;
+- } while (u64_stats_fetch_retry(&p->syncp, start));
++ } while (u64_stats_fetch_retry_irq(&p->syncp, start));
+
+ stats->rx_packets += local.rx_packets;
+ stats->rx_bytes += local.rx_bytes;
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 7e8a39a356271..6c9d153afbeee 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -1802,7 +1802,7 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ ovs_dp_reset_user_features(skb, info);
+ }
+
+- goto err_unlock_and_destroy_meters;
++ goto err_destroy_portids;
+ }
+
+ err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid,
+@@ -1817,6 +1817,8 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ ovs_notify(&dp_datapath_genl_family, reply, info);
+ return 0;
+
++err_destroy_portids:
++ kfree(rcu_dereference_raw(dp->upcall_portids));
+ err_unlock_and_destroy_meters:
+ ovs_unlock();
+ ovs_meters_exit(dp);
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index a64c3c1541118..b3596d4bd14a2 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1125,6 +1125,21 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ }
+ EXPORT_SYMBOL(dev_graft_qdisc);
+
++static void shutdown_scheduler_queue(struct net_device *dev,
++ struct netdev_queue *dev_queue,
++ void *_qdisc_default)
++{
++ struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++ struct Qdisc *qdisc_default = _qdisc_default;
++
++ if (qdisc) {
++ rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
++ dev_queue->qdisc_sleeping = qdisc_default;
++
++ qdisc_put(qdisc);
++ }
++}
++
+ static void attach_one_default_qdisc(struct net_device *dev,
+ struct netdev_queue *dev_queue,
+ void *_unused)
+@@ -1172,6 +1187,7 @@ static void attach_default_qdiscs(struct net_device *dev)
+ if (qdisc == &noop_qdisc) {
+ netdev_warn(dev, "default qdisc (%s) fail, fallback to %s\n",
+ default_qdisc_ops->id, noqueue_qdisc_ops.id);
++ netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+ dev->priv_flags |= IFF_NO_QUEUE;
+ netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+ qdisc = txq->qdisc_sleeping;
+@@ -1450,21 +1466,6 @@ void dev_init_scheduler(struct net_device *dev)
+ timer_setup(&dev->watchdog_timer, dev_watchdog, 0);
+ }
+
+-static void shutdown_scheduler_queue(struct net_device *dev,
+- struct netdev_queue *dev_queue,
+- void *_qdisc_default)
+-{
+- struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
+- struct Qdisc *qdisc_default = _qdisc_default;
+-
+- if (qdisc) {
+- rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
+- dev_queue->qdisc_sleeping = qdisc_default;
+-
+- qdisc_put(qdisc);
+- }
+-}
+-
+ void dev_shutdown(struct net_device *dev)
+ {
+ netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index 72102277449e1..36079fdde2cb5 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -356,6 +356,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ struct nlattr *tb[TCA_TBF_MAX + 1];
+ struct tc_tbf_qopt *qopt;
+ struct Qdisc *child = NULL;
++ struct Qdisc *old = NULL;
+ struct psched_ratecfg rate;
+ struct psched_ratecfg peak;
+ u64 max_size;
+@@ -447,7 +448,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ sch_tree_lock(sch);
+ if (child) {
+ qdisc_tree_flush_backlog(q->qdisc);
+- qdisc_put(q->qdisc);
++ old = q->qdisc;
+ q->qdisc = child;
+ }
+ q->limit = qopt->limit;
+@@ -467,6 +468,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));
+
+ sch_tree_unlock(sch);
++ qdisc_put(old);
+ err = 0;
+
+ tbf_offload_change(sch);
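The tbf_change() fix above is the classic defer-the-free pattern: swap the child qdisc while sch_tree_lock() is held, but call qdisc_put() — which may end up sleeping — only after unlock. A userspace sketch with a pthread mutex standing in for the qdisc tree lock and free() for qdisc_put():

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static void *current_child;

static void replace_child(void *new_child)
{
        void *old;

        pthread_mutex_lock(&tree_lock);
        old = current_child;            /* swap under the lock... */
        current_child = new_child;
        pthread_mutex_unlock(&tree_lock);

        free(old);                      /* ...release outside of it */
}

int main(void)
{
        replace_child(malloc(16));
        replace_child(malloc(16));
        replace_child(NULL);
        return 0;
}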
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 433bb5a7df31e..a51d5ed2ad764 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1812,7 +1812,6 @@ static void smc_listen_out_connected(struct smc_sock *new_smc)
+ {
+ struct sock *newsmcsk = &new_smc->sk;
+
+- sk_refcnt_debug_inc(newsmcsk);
+ if (newsmcsk->sk_state == SMC_INIT)
+ newsmcsk->sk_state = SMC_ACTIVE;
+
+diff --git a/net/wireless/debugfs.c b/net/wireless/debugfs.c
+index aab43469a2f04..0878b162890af 100644
+--- a/net/wireless/debugfs.c
++++ b/net/wireless/debugfs.c
+@@ -65,9 +65,10 @@ static ssize_t ht40allow_map_read(struct file *file,
+ {
+ struct wiphy *wiphy = file->private_data;
+ char *buf;
+- unsigned int offset = 0, buf_size = PAGE_SIZE, i, r;
++ unsigned int offset = 0, buf_size = PAGE_SIZE, i;
+ enum nl80211_band band;
+ struct ieee80211_supported_band *sband;
++ ssize_t r;
+
+ buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!buf)
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index f70112176b7c1..a71a8c6edf553 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -379,6 +379,16 @@ static void xp_check_dma_contiguity(struct xsk_dma_map *dma_map)
+
+ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_map)
+ {
++ if (!pool->unaligned) {
++ u32 i;
++
++ for (i = 0; i < pool->heads_cnt; i++) {
++ struct xdp_buff_xsk *xskb = &pool->heads[i];
++
++ xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
++ }
++ }
++
+ pool->dma_pages = kvcalloc(dma_map->dma_pages_cnt, sizeof(*pool->dma_pages), GFP_KERNEL);
+ if (!pool->dma_pages)
+ return -ENOMEM;
+@@ -428,12 +438,6 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
+
+ if (pool->unaligned)
+ xp_check_dma_contiguity(dma_map);
+- else
+- for (i = 0; i < pool->heads_cnt; i++) {
+- struct xdp_buff_xsk *xskb = &pool->heads[i];
+-
+- xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
+- }
+
+ err = xp_init_dma_info(pool, dma_map);
+ if (err) {
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index ec5a6247cd3e7..a9dbd99d9ee76 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -149,6 +149,16 @@ retry:
+ LANDLOCK_ACCESS_FS_READ_FILE)
+ /* clang-format on */
+
++/*
++ * All access rights that are denied by default, whether or not they are
++ * handled by a ruleset/layer. This must be ORed with all
++ * ruleset->fs_access_masks[] entries when we need to get the absolute
++ * handled access masks.
++ */
++/* clang-format off */
++#define ACCESS_INITIALLY_DENIED ( \
++ LANDLOCK_ACCESS_FS_REFER)
++/* clang-format on */
++
+ /*
+ * @path: Should have been checked by get_path_from_fd().
+ */
+@@ -167,7 +177,9 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+ return -EINVAL;
+
+ /* Transforms relative access rights to absolute ones. */
+- access_rights |= LANDLOCK_MASK_ACCESS_FS & ~ruleset->fs_access_masks[0];
++ access_rights |=
++ LANDLOCK_MASK_ACCESS_FS &
++ ~(ruleset->fs_access_masks[0] | ACCESS_INITIALLY_DENIED);
+ object = get_inode_object(d_backing_inode(path->dentry));
+ if (IS_ERR(object))
+ return PTR_ERR(object);
+@@ -277,23 +289,12 @@ static inline bool is_nouser_or_private(const struct dentry *dentry)
+ static inline access_mask_t
+ get_handled_accesses(const struct landlock_ruleset *const domain)
+ {
+- access_mask_t access_dom = 0;
+- unsigned long access_bit;
+-
+- for (access_bit = 0; access_bit < LANDLOCK_NUM_ACCESS_FS;
+- access_bit++) {
+- size_t layer_level;
++ access_mask_t access_dom = ACCESS_INITIALLY_DENIED;
++ size_t layer_level;
+
+- for (layer_level = 0; layer_level < domain->num_layers;
+- layer_level++) {
+- if (domain->fs_access_masks[layer_level] &
+- BIT_ULL(access_bit)) {
+- access_dom |= BIT_ULL(access_bit);
+- break;
+- }
+- }
+- }
+- return access_dom;
++ for (layer_level = 0; layer_level < domain->num_layers; layer_level++)
++ access_dom |= domain->fs_access_masks[layer_level];
++ return access_dom & LANDLOCK_MASK_ACCESS_FS;
+ }
+
+ static inline access_mask_t
+@@ -316,8 +317,13 @@ init_layer_masks(const struct landlock_ruleset *const domain,
+
+ for_each_set_bit(access_bit, &access_req,
+ ARRAY_SIZE(*layer_masks)) {
+- if (domain->fs_access_masks[layer_level] &
+- BIT_ULL(access_bit)) {
++ /*
++ * Artificially handle all access rights that are
++ * initially denied by default.
++ */
++ if (BIT_ULL(access_bit) &
++ (domain->fs_access_masks[layer_level] |
++ ACCESS_INITIALLY_DENIED)) {
+ (*layer_masks)[access_bit] |=
+ BIT_ULL(layer_level);
+ handled_accesses |= BIT_ULL(access_bit);
+@@ -857,10 +863,6 @@ static int current_check_refer_path(struct dentry *const old_dentry,
+ NULL, NULL);
+ }
+
+- /* Backward compatibility: no reparenting support. */
+- if (!(get_handled_accesses(dom) & LANDLOCK_ACCESS_FS_REFER))
+- return -EXDEV;
+-
+ access_request_parent1 |= LANDLOCK_ACCESS_FS_REFER;
+ access_request_parent2 |= LANDLOCK_ACCESS_FS_REFER;
+
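The simplified get_handled_accesses() above just ORs every layer's handled mask together with the always-denied set and clamps to the filesystem rights. A sketch with stand-in constants (LANDLOCK_MASK_ACCESS_FS is 0x3fff upstream, but treat both values here as assumptions):

#include <stdio.h>
#include <stdint.h>

#define MASK_ACCESS_FS        0x3fffu   /* assumed width of FS rights      */
#define ACCESS_INIT_DENIED    0x2000u   /* stands in for ..._FS_REFER      */

static uint32_t handled_accesses(const uint32_t *layers, int n)
{
        uint32_t dom = ACCESS_INIT_DENIED;

        for (int i = 0; i < n; i++)
                dom |= layers[i];       /* every layer's handled rights */
        return dom & MASK_ACCESS_FS;
}

int main(void)
{
        uint32_t layers[] = { 0x0005, 0x0018 };

        printf("0x%x\n", handled_accesses(layers, 2));  /* 0x201d */
        return 0;
}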
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 8cfdaee779050..55b3c49ba61de 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -20,6 +20,13 @@
+
+ static const struct snd_malloc_ops *snd_dma_get_ops(struct snd_dma_buffer *dmab);
+
++#ifdef CONFIG_SND_DMA_SGBUF
++static void *do_alloc_fallback_pages(struct device *dev, size_t size,
++ dma_addr_t *addr, bool wc);
++static void do_free_fallback_pages(void *p, size_t size, bool wc);
++static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size);
++#endif
++
+ /* a cast to gfp flag from the dev pointer; for CONTINUOUS and VMALLOC types */
+ static inline gfp_t snd_mem_get_gfp_flags(const struct snd_dma_buffer *dmab,
+ gfp_t default_gfp)
+@@ -269,16 +276,21 @@ EXPORT_SYMBOL(snd_sgbuf_get_chunk_size);
+ /*
+ * Continuous pages allocator
+ */
+-static void *snd_dma_continuous_alloc(struct snd_dma_buffer *dmab, size_t size)
++static void *do_alloc_pages(size_t size, dma_addr_t *addr, gfp_t gfp)
+ {
+- gfp_t gfp = snd_mem_get_gfp_flags(dmab, GFP_KERNEL);
+ void *p = alloc_pages_exact(size, gfp);
+
+ if (p)
+- dmab->addr = page_to_phys(virt_to_page(p));
++ *addr = page_to_phys(virt_to_page(p));
+ return p;
+ }
+
++static void *snd_dma_continuous_alloc(struct snd_dma_buffer *dmab, size_t size)
++{
++ return do_alloc_pages(size, &dmab->addr,
++ snd_mem_get_gfp_flags(dmab, GFP_KERNEL));
++}
++
+ static void snd_dma_continuous_free(struct snd_dma_buffer *dmab)
+ {
+ free_pages_exact(dmab->area, dmab->bytes);
+@@ -455,6 +467,25 @@ static const struct snd_malloc_ops snd_dma_dev_ops = {
+ /*
+ * Write-combined pages
+ */
++/* x86-specific allocations */
++#ifdef CONFIG_SND_DMA_SGBUF
++static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
++{
++ return do_alloc_fallback_pages(dmab->dev.dev, size, &dmab->addr, true);
++}
++
++static void snd_dma_wc_free(struct snd_dma_buffer *dmab)
++{
++ do_free_fallback_pages(dmab->area, dmab->bytes, true);
++}
++
++static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
++ struct vm_area_struct *area)
++{
++ area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
++ return snd_dma_continuous_mmap(dmab, area);
++}
++#else
+ static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
+ {
+ return dma_alloc_wc(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP);
+@@ -471,6 +502,7 @@ static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
+ return dma_mmap_wc(dmab->dev.dev, area,
+ dmab->area, dmab->addr, dmab->bytes);
+ }
++#endif /* CONFIG_SND_DMA_SGBUF */
+
+ static const struct snd_malloc_ops snd_dma_wc_ops = {
+ .alloc = snd_dma_wc_alloc,
+@@ -478,10 +510,6 @@ static const struct snd_malloc_ops snd_dma_wc_ops = {
+ .mmap = snd_dma_wc_mmap,
+ };
+
+-#ifdef CONFIG_SND_DMA_SGBUF
+-static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size);
+-#endif
+-
+ /*
+ * Non-contiguous pages allocator
+ */
+@@ -661,6 +689,37 @@ static const struct snd_malloc_ops snd_dma_sg_wc_ops = {
+ .get_chunk_size = snd_dma_noncontig_get_chunk_size,
+ };
+
++/* manual page allocations with wc setup */
++static void *do_alloc_fallback_pages(struct device *dev, size_t size,
++ dma_addr_t *addr, bool wc)
++{
++ gfp_t gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
++ void *p;
++
++ again:
++ p = do_alloc_pages(size, addr, gfp);
++ if (!p || (*addr + size - 1) & ~dev->coherent_dma_mask) {
++ if (IS_ENABLED(CONFIG_ZONE_DMA32) && !(gfp & GFP_DMA32)) {
++ gfp |= GFP_DMA32;
++ goto again;
++ }
++ if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & GFP_DMA)) {
++ gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
++ goto again;
++ }
++ }
++ if (p && wc)
++ set_memory_wc((unsigned long)(p), size >> PAGE_SHIFT);
++ return p;
++}
++
++static void do_free_fallback_pages(void *p, size_t size, bool wc)
++{
++ if (wc)
++ set_memory_wb((unsigned long)(p), size >> PAGE_SHIFT);
++ free_pages_exact(p, size);
++}
++
+ /* Fallback SG-buffer allocations for x86 */
+ struct snd_dma_sg_fallback {
+ size_t count;
+@@ -671,14 +730,11 @@ struct snd_dma_sg_fallback {
+ static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
+ struct snd_dma_sg_fallback *sgbuf)
+ {
++ bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
+ size_t i;
+
+- if (sgbuf->count && dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
+- set_pages_array_wb(sgbuf->pages, sgbuf->count);
+ for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++)
+- dma_free_coherent(dmab->dev.dev, PAGE_SIZE,
+- page_address(sgbuf->pages[i]),
+- sgbuf->addrs[i]);
++ do_free_fallback_pages(page_address(sgbuf->pages[i]), PAGE_SIZE, wc);
+ kvfree(sgbuf->pages);
+ kvfree(sgbuf->addrs);
+ kfree(sgbuf);
+@@ -690,6 +746,7 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
+ struct page **pages;
+ size_t i, count;
+ void *p;
++ bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
+
+ sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL);
+ if (!sgbuf)
+@@ -704,15 +761,13 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
+ goto error;
+
+ for (i = 0; i < count; sgbuf->count++, i++) {
+- p = dma_alloc_coherent(dmab->dev.dev, PAGE_SIZE,
+- &sgbuf->addrs[i], DEFAULT_GFP);
++ p = do_alloc_fallback_pages(dmab->dev.dev, PAGE_SIZE,
++ &sgbuf->addrs[i], wc);
+ if (!p)
+ goto error;
+ sgbuf->pages[i] = virt_to_page(p);
+ }
+
+- if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
+- set_pages_array_wc(pages, count);
+ p = vmap(pages, count, VM_MAP, PAGE_KERNEL);
+ if (!p)
+ goto error;
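do_alloc_fallback_pages() above retries with progressively narrower GFP zones (DMA32, then DMA) until the returned pages fall inside the device's coherent DMA mask. A userspace sketch of the cascade, with fixed per-zone address caps standing in for the real allocator:

#include <stdio.h>
#include <stdint.h>

enum zone { ZONE_NORMAL, ZONE_DMA32, ZONE_DMA };

/* pretend allocator: each zone hands out addresses below a fixed cap */
static uint64_t zone_alloc(enum zone z)
{
        switch (z) {
        case ZONE_NORMAL: return 0x1200000000ull;  /* above 4 GiB  */
        case ZONE_DMA32:  return 0x80000000ull;    /* below 4 GiB  */
        default:          return 0x100000ull;      /* below 16 MiB */
        }
}

static uint64_t alloc_for_mask(uint64_t dma_mask)
{
        enum zone z = ZONE_NORMAL;
        uint64_t addr;

again:
        addr = zone_alloc(z);
        if (addr & ~dma_mask) {         /* does not fit: narrow the zone */
                if (z == ZONE_NORMAL) { z = ZONE_DMA32; goto again; }
                if (z == ZONE_DMA32)  { z = ZONE_DMA;   goto again; }
                return 0;
        }
        return addr;
}

int main(void)
{
        /* 32-bit coherent mask: first try fails, DMA32 retry succeeds */
        printf("0x%llx\n", (unsigned long long)alloc_for_mask(0xffffffffull));
        return 0;
}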
+diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c
+index 1e3bf086f8671..07efb38f58ac1 100644
+--- a/sound/core/seq/oss/seq_oss_midi.c
++++ b/sound/core/seq/oss/seq_oss_midi.c
+@@ -270,7 +270,9 @@ snd_seq_oss_midi_clear_all(void)
+ void
+ snd_seq_oss_midi_setup(struct seq_oss_devinfo *dp)
+ {
++	spin_lock_irq(&register_lock);
+ dp->max_mididev = max_midi_devs;
++	spin_unlock_irq(&register_lock);
+ }
+
+ /*
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 2e9d695d336c9..2d707afa1ef1c 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -121,13 +121,13 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ spin_unlock_irqrestore(&clients_lock, flags);
+ #ifdef CONFIG_MODULES
+ if (!in_interrupt()) {
+- static char client_requested[SNDRV_SEQ_GLOBAL_CLIENTS];
+- static char card_requested[SNDRV_CARDS];
++ static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS);
++ static DECLARE_BITMAP(card_requested, SNDRV_CARDS);
++
+ if (clientid < SNDRV_SEQ_GLOBAL_CLIENTS) {
+ int idx;
+
+- if (!client_requested[clientid]) {
+- client_requested[clientid] = 1;
++ if (!test_and_set_bit(clientid, client_requested)) {
+ for (idx = 0; idx < 15; idx++) {
+ if (seq_client_load[idx] < 0)
+ break;
+@@ -142,10 +142,8 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ int card = (clientid - SNDRV_SEQ_GLOBAL_CLIENTS) /
+ SNDRV_SEQ_CLIENTS_PER_CARD;
+ if (card < snd_ecards_limit) {
+- if (! card_requested[card]) {
+- card_requested[card] = 1;
++ if (!test_and_set_bit(card, card_requested))
+ snd_request_card(card);
+- }
+ snd_seq_device_load_drivers();
+ }
+ }
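The bitmap conversion above makes the "request this client/card only once" logic atomic: test_and_set_bit() both checks and sets in one step, closing the race in the old char arrays. A userspace approximation using the GCC/Clang __atomic builtins (not <linux/bitmap.h>):

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static int test_and_set_bit(int nr, unsigned long *addr)
{
        unsigned long mask = 1UL << (nr % BITS_PER_LONG);
        unsigned long old = __atomic_fetch_or(&addr[nr / BITS_PER_LONG],
                                              mask, __ATOMIC_SEQ_CST);
        return (old & mask) != 0;
}

int main(void)
{
        unsigned long requested[2] = { 0 };  /* 2 * BITS_PER_LONG ids */

        for (int pass = 0; pass < 2; pass++)
                if (!test_and_set_bit(37, requested))
                        printf("pass %d: issuing request\n", pass);
        /* only pass 0 issues the request; pass 1 sees the bit set */
        return 0;
}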
+diff --git a/sound/hda/intel-nhlt.c b/sound/hda/intel-nhlt.c
+index 9db5ccd9aa2db..13bb0ccfb36c0 100644
+--- a/sound/hda/intel-nhlt.c
++++ b/sound/hda/intel-nhlt.c
+@@ -55,16 +55,22 @@ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt)
+
+ /* find max number of channels based on format_configuration */
+ if (fmt_configs->fmt_count) {
++ struct nhlt_fmt_cfg *fmt_cfg = fmt_configs->fmt_config;
++
+ dev_dbg(dev, "found %d format definitions\n",
+ fmt_configs->fmt_count);
+
+ for (i = 0; i < fmt_configs->fmt_count; i++) {
+ struct wav_fmt_ext *fmt_ext;
+
+- fmt_ext = &fmt_configs->fmt_config[i].fmt_ext;
++ fmt_ext = &fmt_cfg->fmt_ext;
+
+ if (fmt_ext->fmt.channels > max_ch)
+ max_ch = fmt_ext->fmt.channels;
++
++ /* Move to the next nhlt_fmt_cfg */
++ fmt_cfg = (struct nhlt_fmt_cfg *)(fmt_cfg->config.caps +
++ fmt_cfg->config.size);
+ }
+ dev_dbg(dev, "max channels found %d\n", max_ch);
+ } else {
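The NHLT fix above matters because nhlt_fmt_cfg entries are variable-length: indexing fmt_config[i] walks fixed-size strides, while the correct next record starts after the current record's capability payload. A sketch of that walk over a packed blob, with a hypothetical record layout (not the NHLT structs):

#include <stdio.h>
#include <stdint.h>

struct rec {
        uint8_t channels;
        uint8_t size;           /* bytes of caps[] that follow */
        uint8_t caps[];         /* flexible payload */
};

int main(void)
{
        /* two packed records: 2 ch / 1-byte caps, then 4 ch / 2-byte caps */
        uint8_t blob[] = { 2, 1, 0xaa,  4, 2, 0xbb, 0xcc };
        const uint8_t *p = blob, *end = blob + sizeof(blob);
        unsigned max_ch = 0;

        while (p < end) {
                const struct rec *r = (const struct rec *)p;

                if (r->channels > max_ch)
                        max_ch = r->channels;
                p = r->caps + r->size;  /* step over this record's payload */
        }
        printf("max channels %u\n", max_ch);    /* 4 */
        return 0;
}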
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b44b882f8378c..799f6bf266dd0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4689,6 +4689,48 @@ static void alc236_fixup_hp_mute_led_micmute_vref(struct hda_codec *codec,
+ alc236_fixup_hp_micmute_led_vref(codec, fix, action);
+ }
+
++static inline void alc298_samsung_write_coef_pack(struct hda_codec *codec,
++ const unsigned short coefs[2])
++{
++ alc_write_coef_idx(codec, 0x23, coefs[0]);
++ alc_write_coef_idx(codec, 0x25, coefs[1]);
++ alc_write_coef_idx(codec, 0x26, 0xb011);
++}
++
++struct alc298_samsung_amp_desc {
++ unsigned char nid;
++ unsigned short init_seq[2][2];
++};
++
++static void alc298_fixup_samsung_amp(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ int i, j;
++ static const unsigned short init_seq[][2] = {
++ { 0x19, 0x00 }, { 0x20, 0xc0 }, { 0x22, 0x44 }, { 0x23, 0x08 },
++ { 0x24, 0x85 }, { 0x25, 0x41 }, { 0x35, 0x40 }, { 0x36, 0x01 },
++ { 0x38, 0x81 }, { 0x3a, 0x03 }, { 0x3b, 0x81 }, { 0x40, 0x3e },
++ { 0x41, 0x07 }, { 0x400, 0x1 }
++ };
++ static const struct alc298_samsung_amp_desc amps[] = {
++ { 0x3a, { { 0x18, 0x1 }, { 0x26, 0x0 } } },
++ { 0x39, { { 0x18, 0x2 }, { 0x26, 0x1 } } }
++ };
++
++ if (action != HDA_FIXUP_ACT_INIT)
++ return;
++
++ for (i = 0; i < ARRAY_SIZE(amps); i++) {
++ alc_write_coef_idx(codec, 0x22, amps[i].nid);
++
++ for (j = 0; j < ARRAY_SIZE(amps[i].init_seq); j++)
++ alc298_samsung_write_coef_pack(codec, amps[i].init_seq[j]);
++
++ for (j = 0; j < ARRAY_SIZE(init_seq); j++)
++ alc298_samsung_write_coef_pack(codec, init_seq[j]);
++ }
++}
++
+ #if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+ struct hda_jack_callback *event)
+@@ -7000,6 +7042,7 @@ enum {
+ ALC236_FIXUP_HP_GPIO_LED,
+ ALC236_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++ ALC298_FIXUP_SAMSUNG_AMP,
+ ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -8365,6 +8408,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc236_fixup_hp_mute_led_micmute_vref,
+ },
++ [ALC298_FIXUP_SAMSUNG_AMP] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc298_fixup_samsung_amp,
++ .chained = true,
++ .chain_id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET
++ },
+ [ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -9307,13 +9356,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+- SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+- SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+- SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+- SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+- SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+- SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+@@ -9679,7 +9728,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+- {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++ {.id = ALC298_FIXUP_SAMSUNG_AMP, .name = "alc298-samsung-amp"},
+ {.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
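
For reference, the renamed fixup can still be forced by hand on Samsung machines missing from the quirk table, via the standard snd-hda-intel model override (the file path below is illustrative, not part of this patch):

    # /etc/modprobe.d/alc298-samsung.conf
    options snd-hda-intel model=alc298-samsung-amp
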
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 21a2ce8fa739d..45de42a027c54 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -4,7 +4,7 @@
+ *
+ * Copyright © 2017-2020 Mickaël Salaün <mic@digikod.net>
+ * Copyright © 2020 ANSSI
+- * Copyright © 2020-2021 Microsoft Corporation
++ * Copyright © 2020-2022 Microsoft Corporation
+ */
+
+ #define _GNU_SOURCE
+@@ -371,6 +371,13 @@ TEST_F_FORK(layout1, inval)
+ ASSERT_EQ(EINVAL, errno);
+ path_beneath.allowed_access &= ~LANDLOCK_ACCESS_FS_EXECUTE;
+
++ /* Tests with denied-by-default access right. */
++ path_beneath.allowed_access |= LANDLOCK_ACCESS_FS_REFER;
++ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath, 0));
++ ASSERT_EQ(EINVAL, errno);
++ path_beneath.allowed_access &= ~LANDLOCK_ACCESS_FS_REFER;
++
+ /* Test with unknown (64-bits) value. */
+ path_beneath.allowed_access |= (1ULL << 60);
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+@@ -1826,6 +1833,20 @@ TEST_F_FORK(layout1, link)
+ ASSERT_EQ(0, link(file1_s1d3, file2_s1d3));
+ }
+
++static int test_rename(const char *const oldpath, const char *const newpath)
++{
++ if (rename(oldpath, newpath))
++ return errno;
++ return 0;
++}
++
++static int test_exchange(const char *const oldpath, const char *const newpath)
++{
++ if (renameat2(AT_FDCWD, oldpath, AT_FDCWD, newpath, RENAME_EXCHANGE))
++ return errno;
++ return 0;
++}
++
+ TEST_F_FORK(layout1, rename_file)
+ {
+ const struct rule rules[] = {
+@@ -1867,10 +1888,10 @@ TEST_F_FORK(layout1, rename_file)
+ * to a different directory (which allows file removal).
+ */
+ ASSERT_EQ(-1, rename(file1_s2d1, file1_s1d3));
+- ASSERT_EQ(EXDEV, errno);
++ ASSERT_EQ(EACCES, errno);
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, file1_s2d1, AT_FDCWD, file1_s1d3,
+ RENAME_EXCHANGE));
+- ASSERT_EQ(EXDEV, errno);
++ ASSERT_EQ(EACCES, errno);
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s2d2, AT_FDCWD, file1_s1d3,
+ RENAME_EXCHANGE));
+ ASSERT_EQ(EXDEV, errno);
+@@ -1894,7 +1915,7 @@ TEST_F_FORK(layout1, rename_file)
+ ASSERT_EQ(EXDEV, errno);
+ ASSERT_EQ(0, unlink(file1_s1d3));
+ ASSERT_EQ(-1, rename(file1_s2d1, file1_s1d3));
+- ASSERT_EQ(EXDEV, errno);
++ ASSERT_EQ(EACCES, errno);
+
+ /* Exchanges and renames files with same parent. */
+ ASSERT_EQ(0, renameat2(AT_FDCWD, file2_s2d3, AT_FDCWD, file1_s2d3,
+@@ -2014,6 +2035,115 @@ TEST_F_FORK(layout1, reparent_refer)
+ ASSERT_EQ(0, rename(dir_s1d3, dir_s2d3));
+ }
+
++/* Checks renames beneath dir_s1d1. */
++static void refer_denied_by_default(struct __test_metadata *const _metadata,
++ const struct rule layer1[],
++ const int layer1_err,
++ const struct rule layer2[])
++{
++ int ruleset_fd;
++
++ ASSERT_EQ(0, unlink(file1_s1d2));
++
++ ruleset_fd = create_ruleset(_metadata, layer1[0].access, layer1);
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++ /*
++ * If the first layer handles LANDLOCK_ACCESS_FS_REFER (according to
++ * layer1_err), then it allows some different-parent renames and links.
++ */
++ ASSERT_EQ(layer1_err, test_rename(file1_s1d1, file1_s1d2));
++ if (layer1_err == 0)
++ ASSERT_EQ(layer1_err, test_rename(file1_s1d2, file1_s1d1));
++ ASSERT_EQ(layer1_err, test_exchange(file2_s1d1, file2_s1d2));
++ ASSERT_EQ(layer1_err, test_exchange(file2_s1d2, file2_s1d1));
++
++ ruleset_fd = create_ruleset(_metadata, layer2[0].access, layer2);
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++ /*
++ * Now, either the first or the second layer does not handle
++ * LANDLOCK_ACCESS_FS_REFER, which means that all different-parent
++ * renames and links are denied, making the layer that handles
++ * LANDLOCK_ACCESS_FS_REFER null and void.
++ */
++ ASSERT_EQ(EXDEV, test_rename(file1_s1d1, file1_s1d2));
++ ASSERT_EQ(EXDEV, test_exchange(file2_s1d1, file2_s1d2));
++ ASSERT_EQ(EXDEV, test_exchange(file2_s1d2, file2_s1d1));
++}
++
++const struct rule layer_dir_s1d1_refer[] = {
++ {
++ .path = dir_s1d1,
++ .access = LANDLOCK_ACCESS_FS_REFER,
++ },
++ {},
++};
++
++const struct rule layer_dir_s1d1_execute[] = {
++ {
++ /* Matches a parent directory. */
++ .path = dir_s1d1,
++ .access = LANDLOCK_ACCESS_FS_EXECUTE,
++ },
++ {},
++};
++
++const struct rule layer_dir_s2d1_execute[] = {
++ {
++ /* Does not match a parent directory. */
++ .path = dir_s2d1,
++ .access = LANDLOCK_ACCESS_FS_EXECUTE,
++ },
++ {},
++};
++
++/*
++ * Tests precedence over renames: denied by default for different parent
++ * directories, *with* a rule matching a parent directory, but not directly
++ * denying access (with neither MAKE_REG nor REMOVE).
++ */
++TEST_F_FORK(layout1, refer_denied_by_default1)
++{
++ refer_denied_by_default(_metadata, layer_dir_s1d1_refer, 0,
++ layer_dir_s1d1_execute);
++}
++
++/*
++ * Same test, but with the layer order reversed: the first
++ * layer does not handle LANDLOCK_ACCESS_FS_REFER.
++ */
++TEST_F_FORK(layout1, refer_denied_by_default2)
++{
++ refer_denied_by_default(_metadata, layer_dir_s1d1_execute, EXDEV,
++ layer_dir_s1d1_refer);
++}
++
++/*
++ * Tests precedence over renames: denied by default for different parent
++ * directories, *without* a rule matching a parent directory, but not directly
++ * denying access (with neither MAKE_REG nor REMOVE).
++ */
++TEST_F_FORK(layout1, refer_denied_by_default3)
++{
++ refer_denied_by_default(_metadata, layer_dir_s1d1_refer, 0,
++ layer_dir_s2d1_execute);
++}
++
++/*
++ * Same test, but with the layer order reversed: the first
++ * layer does not handle LANDLOCK_ACCESS_FS_REFER.
++ */
++TEST_F_FORK(layout1, refer_denied_by_default4)
++{
++ refer_denied_by_default(_metadata, layer_dir_s2d1_execute, EXDEV,
++ layer_dir_s1d1_refer);
++}
++
+ TEST_F_FORK(layout1, reparent_link)
+ {
+ const struct rule layer1[] = {
+@@ -2336,11 +2466,12 @@ TEST_F_FORK(layout1, reparent_exdev_layers_rename1)
+ ASSERT_EQ(EXDEV, errno);
+
+ /*
+- * However, moving the file2_s1d3 file below dir_s2d3 is allowed
+- * because it cannot inherit MAKE_REG nor MAKE_DIR rights (which are
+- * dedicated to directories).
++ * Moving the file2_s1d3 file below dir_s2d3 is denied because the
++ * second layer does not handle REFER, and an unhandled REFER is
++ * always denied by default.
+ */
+- ASSERT_EQ(0, rename(file2_s1d3, file1_s2d3));
++ ASSERT_EQ(-1, rename(file2_s1d3, file1_s2d3));
++ ASSERT_EQ(EXDEV, errno);
+ }
+
+ TEST_F_FORK(layout1, reparent_exdev_layers_rename2)
+@@ -2373,8 +2504,12 @@ TEST_F_FORK(layout1, reparent_exdev_layers_rename2)
+ ASSERT_EQ(EACCES, errno);
+ ASSERT_EQ(-1, rename(file1_s1d1, file1_s2d3));
+ ASSERT_EQ(EXDEV, errno);
+- /* Modify layout! */
+- ASSERT_EQ(0, rename(file2_s1d2, file1_s2d3));
++ /*
++ * Modifying the layout is now denied because the second layer does not
++ * handle REFER, and an unhandled REFER is always denied by default.
++ */
++ ASSERT_EQ(-1, rename(file2_s1d2, file1_s2d3));
++ ASSERT_EQ(EXDEV, errno);
+
+ /* Without REFER source, EACCES wins over EXDEV. */
+ ASSERT_EQ(-1, rename(dir_s1d1, file1_s2d2));
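
The denied-by-default semantics these tests pin down can be reproduced from userspace; a minimal sketch, assuming the 5.19 Landlock ABI from <linux/landlock.h> and placeholder paths (dir_a/dir_b are illustrative):

#define _GNU_SOURCE
#include <errno.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Handle EXECUTE only; LANDLOCK_ACCESS_FS_REFER stays unhandled. */
	struct landlock_ruleset_attr attr = {
		.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
	};
	int fd = syscall(SYS_landlock_create_ruleset, &attr, sizeof(attr), 0);

	if (fd < 0)
		return 1;
	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	syscall(SYS_landlock_restrict_self, fd, 0);
	close(fd);

	/* Different-parent rename: fails with EXDEV under this ruleset. */
	if (rename("dir_a/file", "dir_b/file") && errno == EXDEV)
		fprintf(stderr, "cross-directory rename denied by default\n");
	return 0;
}
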
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-15 10:29 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-15 10:29 UTC (permalink / raw
To: gentoo-commits
commit: b2d227b15318551f9ad9b71d723b3f45334d3356
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 15 10:28:54 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 15 10:28:54 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2d227b1
Linux patch 5.19.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-5.19.9.patch | 8234 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8238 insertions(+)
diff --git a/0000_README b/0000_README
index d9225608..341e7dca 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-5.19.8.patch
From: http://www.kernel.org
Desc: Linux 5.19.8
+Patch: 1008_linux-5.19.9.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-5.19.9.patch b/1008_linux-5.19.9.patch
new file mode 100644
index 00000000..f12fb560
--- /dev/null
+++ b/1008_linux-5.19.9.patch
@@ -0,0 +1,8234 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 33b04db8408f9..fda97b3fcf018 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -52,6 +52,8 @@ stable kernels.
+ | Allwinner | A64/R18 | UNKNOWN1 | SUN50I_ERRATUM_UNKNOWN1 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A510 | #2457168 | ARM64_ERRATUM_2457168 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A510 | #2064142 | ARM64_ERRATUM_2064142 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A510 | #2038923 | ARM64_ERRATUM_2038923 |
+diff --git a/Documentation/hwmon/asus_ec_sensors.rst b/Documentation/hwmon/asus_ec_sensors.rst
+index 78ca69eda8778..02f4ad314a1eb 100644
+--- a/Documentation/hwmon/asus_ec_sensors.rst
++++ b/Documentation/hwmon/asus_ec_sensors.rst
+@@ -13,12 +13,16 @@ Supported boards:
+ * ROG CROSSHAIR VIII FORMULA
+ * ROG CROSSHAIR VIII HERO
+ * ROG CROSSHAIR VIII IMPACT
++ * ROG MAXIMUS XI HERO
++ * ROG MAXIMUS XI HERO (WI-FI)
+ * ROG STRIX B550-E GAMING
+ * ROG STRIX B550-I GAMING
+ * ROG STRIX X570-E GAMING
+ * ROG STRIX X570-E GAMING WIFI II
+ * ROG STRIX X570-F GAMING
+ * ROG STRIX X570-I GAMING
++ * ROG STRIX Z690-A GAMING WIFI D4
++ * ROG ZENITH II EXTREME
+
+ Authors:
+ - Eugene Shalygin <eugene.shalygin@gmail.com>
+diff --git a/Makefile b/Makefile
+index e361c6230e9e5..1f27c4bd09e67 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+@@ -1286,8 +1286,7 @@ hdr-inst := -f $(srctree)/scripts/Makefile.headersinst obj
+
+ PHONY += headers
+ headers: $(version_h) scripts_unifdef uapi-asm-generic archheaders archscripts
+- $(if $(wildcard $(srctree)/arch/$(SRCARCH)/include/uapi/asm/Kbuild),, \
+- $(error Headers not exportable for the $(SRCARCH) architecture))
++ $(if $(filter um, $(SRCARCH)), $(error Headers not exportable for UML))
+ $(Q)$(MAKE) $(hdr-inst)=include/uapi
+ $(Q)$(MAKE) $(hdr-inst)=arch/$(SRCARCH)/include/uapi
+
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi b/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
+index ba621783acdbc..d6f364c6be94b 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
+@@ -76,8 +76,8 @@
+ regulators {
+ vdd_3v3: VDD_IO {
+ regulator-name = "VDD_IO";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -95,8 +95,8 @@
+
+ vddio_ddr: VDD_DDR {
+ regulator-name = "VDD_DDR";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -118,8 +118,8 @@
+
+ vdd_core: VDD_CORE {
+ regulator-name = "VDD_CORE";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-min-microvolt = <1250000>;
++ regulator-max-microvolt = <1250000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -160,8 +160,8 @@
+
+ LDO1 {
+ regulator-name = "LDO1";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+
+ regulator-state-standby {
+@@ -175,9 +175,8 @@
+
+ LDO2 {
+ regulator-name = "LDO2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
+- regulator-always-on;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <3300000>;
+
+ regulator-state-standby {
+ regulator-on-in-suspend;
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index 164201a8fbf2d..492456e195a37 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -197,8 +197,8 @@
+ regulators {
+ vdd_io_reg: VDD_IO {
+ regulator-name = "VDD_IO";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -216,8 +216,8 @@
+
+ VDD_DDR {
+ regulator-name = "VDD_DDR";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-min-microvolt = <1350000>;
++ regulator-max-microvolt = <1350000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -235,8 +235,8 @@
+
+ VDD_CORE {
+ regulator-name = "VDD_CORE";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-min-microvolt = <1250000>;
++ regulator-max-microvolt = <1250000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -258,7 +258,6 @@
+ regulator-max-microvolt = <1850000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+- regulator-always-on;
+
+ regulator-state-standby {
+ regulator-on-in-suspend;
+@@ -273,8 +272,8 @@
+
+ LDO1 {
+ regulator-name = "LDO1";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <2500000>;
++ regulator-max-microvolt = <2500000>;
+ regulator-always-on;
+
+ regulator-state-standby {
+@@ -288,8 +287,8 @@
+
+ LDO2 {
+ regulator-name = "LDO2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+
+ regulator-state-standby {
+diff --git a/arch/arm/boot/dts/at91-sama7g5ek.dts b/arch/arm/boot/dts/at91-sama7g5ek.dts
+index 103544620fd7c..b261b4da08502 100644
+--- a/arch/arm/boot/dts/at91-sama7g5ek.dts
++++ b/arch/arm/boot/dts/at91-sama7g5ek.dts
+@@ -244,8 +244,8 @@
+ regulators {
+ vdd_3v3: VDD_IO {
+ regulator-name = "VDD_IO";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -264,8 +264,8 @@
+
+ vddioddr: VDD_DDR {
+ regulator-name = "VDD_DDR";
+- regulator-min-microvolt = <1300000>;
+- regulator-max-microvolt = <1450000>;
++ regulator-min-microvolt = <1350000>;
++ regulator-max-microvolt = <1350000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -285,8 +285,8 @@
+
+ vddcore: VDD_CORE {
+ regulator-name = "VDD_CORE";
+- regulator-min-microvolt = <1100000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-min-microvolt = <1150000>;
++ regulator-max-microvolt = <1150000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-always-on;
+@@ -306,7 +306,7 @@
+ vddcpu: VDD_OTHER {
+ regulator-name = "VDD_OTHER";
+ regulator-min-microvolt = <1050000>;
+- regulator-max-microvolt = <1850000>;
++ regulator-max-microvolt = <1250000>;
+ regulator-initial-mode = <2>;
+ regulator-allowed-modes = <2>, <4>;
+ regulator-ramp-delay = <3125>;
+@@ -326,8 +326,8 @@
+
+ vldo1: LDO1 {
+ regulator-name = "LDO1";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <3700000>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+
+ regulator-state-standby {
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index 095c9143d99a3..6b791d515e294 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -51,16 +51,6 @@
+ vin-supply = <®_3p3v_s5>;
+ };
+
+- reg_3p3v_s0: regulator-3p3v-s0 {
+- compatible = "regulator-fixed";
+- regulator-name = "V_3V3_S0";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- regulator-always-on;
+- regulator-boot-on;
+- vin-supply = <®_3p3v_s5>;
+- };
+-
+ reg_3p3v_s5: regulator-3p3v-s5 {
+ compatible = "regulator-fixed";
+ regulator-name = "V_3V3_S5";
+@@ -259,7 +249,7 @@
+
+ /* default boot source: workaround #1 for errata ERR006282 */
+ smarc_flash: flash@0 {
+- compatible = "winbond,w25q16dw", "jedec,spi-nor";
++ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <20000000>;
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-vicut1.dtsi b/arch/arm/boot/dts/imx6qdl-vicut1.dtsi
+index a1676b5d2980f..c5a98b0110dd3 100644
+--- a/arch/arm/boot/dts/imx6qdl-vicut1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-vicut1.dtsi
+@@ -28,7 +28,7 @@
+ enable-gpios = <&gpio4 28 GPIO_ACTIVE_HIGH>;
+ };
+
+- backlight_led: backlight_led {
++ backlight_led: backlight-led {
+ compatible = "pwm-backlight";
+ pwms = <&pwm3 0 5000000 0>;
+ brightness-levels = <0 16 64 255>;
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index df6d673e83d56..f4501dea98b04 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -541,9 +541,41 @@ extern u32 at91_pm_suspend_in_sram_sz;
+
+ static int at91_suspend_finish(unsigned long val)
+ {
++ unsigned char modified_gray_code[] = {
++ 0x00, 0x01, 0x02, 0x03, 0x06, 0x07, 0x04, 0x05, 0x0c, 0x0d,
++ 0x0e, 0x0f, 0x0a, 0x0b, 0x08, 0x09, 0x18, 0x19, 0x1a, 0x1b,
++ 0x1e, 0x1f, 0x1c, 0x1d, 0x14, 0x15, 0x16, 0x17, 0x12, 0x13,
++ 0x10, 0x11,
++ };
++ unsigned int tmp, index;
+ int i;
+
+ if (soc_pm.data.mode == AT91_PM_BACKUP && soc_pm.data.ramc_phy) {
++ /*
++ * Bootloader will perform DDR recalibration and will try to
++ * restore the ZQ0SR0 with the value saved here. But the
++ * calibration is buggy and restoring some values from ZQ0SR0
++ * is forbidden and risky, thus we need to provide processed
++ * values for these (modified gray code values).
++ */
++ tmp = readl(soc_pm.data.ramc_phy + DDR3PHY_ZQ0SR0);
++
++ /* Store pull-down output impedance select. */
++ index = (tmp >> DDR3PHY_ZQ0SR0_PDO_OFF) & 0x1f;
++ soc_pm.bu->ddr_phy_calibration[0] = modified_gray_code[index];
++
++ /* Store pull-up output impedance select. */
++ index = (tmp >> DDR3PHY_ZQ0SR0_PUO_OFF) & 0x1f;
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++
++ /* Store pull-down on-die termination impedance select. */
++ index = (tmp >> DDR3PHY_ZQ0SR0_PDODT_OFF) & 0x1f;
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++
++ /* Store pull-up on-die termination impedance select. */
++ index = (tmp >> DDR3PHY_ZQ0SRO_PUODT_OFF) & 0x1f;
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++
+ /*
+ * The 1st 8 words of memory might get corrupted in the process
+ * of DDR PHY recalibration; it is saved here in securam and it
+@@ -1066,10 +1098,6 @@ static int __init at91_pm_backup_init(void)
+ of_scan_flat_dt(at91_pm_backup_scan_memcs, &located);
+ if (!located)
+ goto securam_fail;
+-
+- /* DDR3PHY_ZQ0SR0 */
+- soc_pm.bu->ddr_phy_calibration[0] = readl(soc_pm.data.ramc_phy +
+- 0x188);
+ }
+
+ return 0;
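
To make the table above concrete: each 5-bit impedance field read from ZQ0SR0 is used as an index into modified_gray_code[], and only the processed value is saved for the bootloader. A worked sketch (the raw field value is assumed, not from the patch):

	unsigned int index = 4;	/* 5-bit field extracted from ZQ0SR0 */
	unsigned char stored = modified_gray_code[index];	/* == 0x06 */
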
+diff --git a/arch/arm/mach-at91/pm_suspend.S b/arch/arm/mach-at91/pm_suspend.S
+index abe4ced33edaf..ffed4d9490428 100644
+--- a/arch/arm/mach-at91/pm_suspend.S
++++ b/arch/arm/mach-at91/pm_suspend.S
+@@ -172,9 +172,15 @@ sr_ena_2:
+ /* Put DDR PHY's DLL in bypass mode for non-backup modes. */
+ cmp r7, #AT91_PM_BACKUP
+ beq sr_ena_3
+- ldr tmp1, [r3, #DDR3PHY_PIR]
+- orr tmp1, tmp1, #DDR3PHY_PIR_DLLBYP
+- str tmp1, [r3, #DDR3PHY_PIR]
++
++ /* Disable DX DLLs. */
++ ldr tmp1, [r3, #DDR3PHY_DX0DLLCR]
++ orr tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
++ str tmp1, [r3, #DDR3PHY_DX0DLLCR]
++
++ ldr tmp1, [r3, #DDR3PHY_DX1DLLCR]
++ orr tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
++ str tmp1, [r3, #DDR3PHY_DX1DLLCR]
+
+ sr_ena_3:
+ /* Power down DDR PHY data receivers. */
+@@ -221,10 +227,14 @@ sr_ena_3:
+ bic tmp1, tmp1, #DDR3PHY_DSGCR_ODTPDD_ODT0
+ str tmp1, [r3, #DDR3PHY_DSGCR]
+
+- /* Take DDR PHY's DLL out of bypass mode. */
+- ldr tmp1, [r3, #DDR3PHY_PIR]
+- bic tmp1, tmp1, #DDR3PHY_PIR_DLLBYP
+- str tmp1, [r3, #DDR3PHY_PIR]
++ /* Enable DX DLLs. */
++ ldr tmp1, [r3, #DDR3PHY_DX0DLLCR]
++ bic tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
++ str tmp1, [r3, #DDR3PHY_DX0DLLCR]
++
++ ldr tmp1, [r3, #DDR3PHY_DX1DLLCR]
++ bic tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
++ str tmp1, [r3, #DDR3PHY_DX1DLLCR]
+
+ /* Enable quasi-dynamic programming. */
+ mov tmp1, #0
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 001eaba5a6b4b..cc1e7bb49d38b 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -914,6 +914,23 @@ config ARM64_ERRATUM_1902691
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_2457168
++ bool "Cortex-A510: 2457168: workaround for AMEVCNTR01 incrementing incorrectly"
++ depends on ARM64_AMU_EXTN
++ default y
++ help
++ This option adds the workaround for ARM Cortex-A510 erratum 2457168.
++
++ The AMU counter AMEVCNTR01 (constant counter) should increment at the same rate
++ as the system counter. On affected Cortex-A510 cores, AMEVCNTR01 increments
++ incorrectly, giving a significantly higher output value.
++
++ Work around this problem by returning 0 when reading the affected counter in
++ key locations, which results in disabling all users of this counter. The
++ effect is the same as firmware disabling the affected counters.
++
++ If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ bool "Cavium erratum 22375, 24313"
+ default y
+@@ -1867,6 +1884,8 @@ config ARM64_BTI_KERNEL
+ depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
+ depends on !CC_IS_GCC || GCC_VERSION >= 100100
++ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
++ depends on !CC_IS_GCC
+ # https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9
+ depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
+ depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds-65bb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds-65bb.dts
+index 40d34c8384a5e..b949cac037427 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds-65bb.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds-65bb.dts
+@@ -25,7 +25,6 @@
+ &enetc_port0 {
+ phy-handle = <&slot1_sgmii>;
+ phy-mode = "2500base-x";
+- managed = "in-band-status";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
+index 24737e89038a4..96cac0f969a77 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
+@@ -626,24 +626,28 @@
+ lan1: port@0 {
+ reg = <0>;
+ label = "lan1";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan2: port@1 {
+ reg = <1>;
+ label = "lan2";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan3: port@2 {
+ reg = <2>;
+ label = "lan3";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan4: port@3 {
+ reg = <3>;
+ label = "lan4";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index eafa88d980b32..c2d4da25482ff 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -32,10 +32,10 @@
+ };
+
+ /* Fixed clock dedicated to SPI CAN controller */
+- clk20m: oscillator {
++ clk40m: oscillator {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <20000000>;
++ clock-frequency = <40000000>;
+ };
+
+ gpio-keys {
+@@ -194,8 +194,8 @@
+
+ can1: can@0 {
+ compatible = "microchip,mcp251xfd";
+- clocks = <&clk20m>;
+- interrupts-extended = <&gpio1 6 IRQ_TYPE_EDGE_FALLING>;
++ clocks = <&clk40m>;
++ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_can1_int>;
+ reg = <0>;
+@@ -595,7 +595,7 @@
+ pinctrl-0 = <&pinctrl_gpio_9_dsi>, <&pinctrl_i2s_2_bclk_touch_reset>;
+ reg = <0x4a>;
+ /* Verdin I2S_2_BCLK (TOUCH_RESET#, SODIMM 42) */
+- reset-gpios = <&gpio3 23 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>;
+ status = "disabled";
+ };
+
+@@ -737,6 +737,7 @@
+ };
+
+ &usbphynop2 {
++ power-domains = <&pgc_otg2>;
+ vcc-supply = <®_vdd_3v3>;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+index 521215520a0f4..6630ec561dc25 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+@@ -770,10 +770,10 @@
+
+ pinctrl_sai2: sai2grp {
+ fsl,pins = <
+- MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI2_TX_SYNC
+- MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI2_TX_DATA00
+- MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI2_TX_BCLK
+- MX8MP_IOMUXC_SAI2_MCLK__AUDIOMIX_SAI2_MCLK
++ MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI2_TX_SYNC 0xd6
++ MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI2_TX_DATA00 0xd6
++ MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI2_TX_BCLK 0xd6
++ MX8MP_IOMUXC_SAI2_MCLK__AUDIOMIX_SAI2_MCLK 0xd6
+ >;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+index fb17e329cd370..f5323291a9b24 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+@@ -620,7 +620,7 @@
+ interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ reg = <0x4a>;
+ /* Verdin GPIO_2 (SODIMM 208) */
+- reset-gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
+ status = "disabled";
+ };
+ };
+@@ -697,7 +697,7 @@
+ pinctrl-0 = <&pinctrl_gpio_9_dsi>, <&pinctrl_i2s_2_bclk_touch_reset>;
+ reg = <0x4a>;
+ /* Verdin I2S_2_BCLK (TOUCH_RESET#, SODIMM 42) */
+- reset-gpios = <&gpio5 0 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
+index 899e8e7dbc24f..802ad6e5cef61 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
+@@ -204,7 +204,6 @@
+ reg = <0x51>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_rtc>;
+- interrupt-names = "irq";
+ interrupt-parent = <&gpio1>;
+ interrupts = <1 IRQ_TYPE_EDGE_FALLING>;
+ quartz-load-femtofarads = <7000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+index 7cbb0de060ddc..1c15726cff8bf 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+@@ -85,7 +85,7 @@
+ "renesas,rcar-gen4-hscif",
+ "renesas,hscif";
+ reg = <0 0xe6540000 0 96>;
+- interrupts = <GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 514>,
+ <&cpg CPG_CORE R8A779G0_CLK_S0D3_PER>,
+ <&scif_clk>;
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 5f4117dae8888..af137f91607da 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -656,6 +656,16 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
+ },
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_2457168
++ {
++ .desc = "ARM erratum 2457168",
++ .capability = ARM64_WORKAROUND_2457168,
++ .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
++
++ /* Cortex-A510 r0p0-r1p1 */
++ CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
++ },
++#endif
+ #ifdef CONFIG_ARM64_ERRATUM_2038923
+ {
+ .desc = "ARM erratum 2038923",
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index ebdfbd1cf207b..f34c9f8b9ee0a 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1798,7 +1798,10 @@ static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
+ pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
+ smp_processor_id());
+ cpumask_set_cpu(smp_processor_id(), &amu_cpus);
+- update_freq_counters_refs();
++
++ /* 0 reference values signal broken/disabled counters */
++ if (!this_cpu_has_cap(ARM64_WORKAROUND_2457168))
++ update_freq_counters_refs();
+ }
+ }
+
+diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
+index af5df48ba915b..2e248342476ea 100644
+--- a/arch/arm64/kernel/hibernate.c
++++ b/arch/arm64/kernel/hibernate.c
+@@ -300,6 +300,11 @@ static void swsusp_mte_restore_tags(void)
+ unsigned long pfn = xa_state.xa_index;
+ struct page *page = pfn_to_online_page(pfn);
+
++ /*
++ * It is not required to invoke page_kasan_tag_reset(page)
++ * at this point since the tags stored in page->flags are
++ * already restored.
++ */
+ mte_restore_page_tags(page_address(page), tags);
+
+ mte_free_tag_storage(tags);
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index b2b730233274b..f6b00743c3994 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -48,6 +48,15 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
+ if (!pte_is_tagged)
+ return;
+
++ page_kasan_tag_reset(page);
++ /*
++ * We need smp_wmb() in between setting the flags and clearing the
++ * tags because if another thread reads page->flags and builds a
++ * tagged address out of it, there is an actual dependency to the
++ * memory access, but on the current thread we do not guarantee that
++ * the new page->flags are visible before the tags were updated.
++ */
++ smp_wmb();
+ mte_clear_page_tags(page_address(page));
+ }
+
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 9ab78ad826e2a..707b5451929d4 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -310,12 +310,25 @@ core_initcall(init_amu_fie);
+
+ static void cpu_read_corecnt(void *val)
+ {
++ /*
++ * A value of 0 can be returned if the current CPU does not support AMUs
++ * or if the counter is disabled for this CPU. A return value of 0 at
++ * counter read is properly handled as an error case by the users of the
++ * counter.
++ */
+ *(u64 *)val = read_corecnt();
+ }
+
+ static void cpu_read_constcnt(void *val)
+ {
+- *(u64 *)val = read_constcnt();
++ /*
++ * Return 0 if the current CPU is affected by erratum 2457168. A value
++ * of 0 is also returned if the current CPU does not support AMUs or if
++ * the counter is disabled. A return value of 0 at counter read is
++ * properly handled as an error case by the users of the counter.
++ */
++ *(u64 *)val = this_cpu_has_cap(ARM64_WORKAROUND_2457168) ?
++ 0UL : read_constcnt();
+ }
+
+ static inline
+@@ -342,7 +355,22 @@ int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
+ */
+ bool cpc_ffh_supported(void)
+ {
+- return freq_counters_valid(get_cpu_with_amu_feat());
++ int cpu = get_cpu_with_amu_feat();
++
++ /*
++ * FFH is considered supported if there is at least one present CPU that
++ * supports AMUs. Using FFH to read core and reference counters for CPUs
++ * that do not support AMUs, have counters disabled, or are affected
++ * by errata results in a return value of 0.
++ *
++ * This is done to allow any enabled and valid counters to be read
++ * through FFH, knowing that potentially returning 0 as counter value is
++ * properly handled by the users of these counters.
++ */
++ if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
++ return false;
++
++ return true;
+ }
+
+ int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
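
The counters_read_on_cpu() path relies on exactly this contract; a condensed consumer-side sketch (helper name and error code chosen for illustration, not kernel code):

static int read_constant_counter(int cpu, u64 *val)
{
	u64 cnt = 0;

	/* cpu_read_constcnt() returns 0 for disabled or erratum-affected CPUs. */
	smp_call_function_single(cpu, cpu_read_constcnt, &cnt, 1);
	if (!cnt)
		return -EOPNOTSUPP;
	*val = cnt;
	return 0;
}
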
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index 24913271e898c..0dea80bf6de46 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
+
+ if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
+ set_bit(PG_mte_tagged, &to->flags);
++ page_kasan_tag_reset(to);
++ /*
++ * We need smp_wmb() in between setting the flags and clearing the
++ * tags because if another thread reads page->flags and builds a
++ * tagged address out of it, there is an actual dependency to the
++ * memory access, but on the current thread we do not guarantee that
++ * the new page->flags are visible before the tags were updated.
++ */
++ smp_wmb();
+ mte_copy_page_tags(kto, kfrom);
+ }
+ }
+diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
+index 4334dec93bd44..a9e50e930484a 100644
+--- a/arch/arm64/mm/mteswap.c
++++ b/arch/arm64/mm/mteswap.c
+@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
+ if (!tags)
+ return false;
+
++ page_kasan_tag_reset(page);
++ /*
++ * We need smp_wmb() in between setting the flags and clearing the
++ * tags because if another thread reads page->flags and builds a
++ * tagged address out of it, there is an actual dependency to the
++ * memory access, but on the current thread we do not guarantee that
++ * the new page->flags are visible before the tags were updated.
++ */
++ smp_wmb();
+ mte_restore_page_tags(page_address(page), tags);
+
+ return true;
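
The ordering argument in these comments is identical in all three call sites; a distilled writer-side sketch (names taken from the surrounding code, reader shown as a comment):

	/* Writer: publish page->flags before the tags are written. */
	page_kasan_tag_reset(page);	/* update the KASAN tag in page->flags */
	smp_wmb();			/* order the flags write before the tag writes */
	mte_restore_page_tags(page_address(page), tags);

	/*
	 * Reader: a thread that loads page->flags and builds a tagged pointer
	 * from it carries an address dependency into the later access, so the
	 * tags it sees match the flags it read.
	 */
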
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index 8809e14cf86a2..18999f46df19f 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -66,6 +66,7 @@ WORKAROUND_1902691
+ WORKAROUND_2038923
+ WORKAROUND_2064142
+ WORKAROUND_2077057
++WORKAROUND_2457168
+ WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ WORKAROUND_TSB_FLUSH_FAILURE
+ WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+diff --git a/arch/mips/loongson32/ls1c/board.c b/arch/mips/loongson32/ls1c/board.c
+index e9de6da0ce51f..9dcfe9de55b0a 100644
+--- a/arch/mips/loongson32/ls1c/board.c
++++ b/arch/mips/loongson32/ls1c/board.c
+@@ -15,7 +15,6 @@ static struct platform_device *ls1c_platform_devices[] __initdata = {
+ static int __init ls1c_platform_init(void)
+ {
+ ls1x_serial_set_uartclk(&ls1x_uart_pdev);
+- ls1x_rtc_set_extclk(&ls1x_rtc_pdev);
+
+ return platform_add_devices(ls1c_platform_devices,
+ ARRAY_SIZE(ls1c_platform_devices));
+diff --git a/arch/parisc/include/asm/bitops.h b/arch/parisc/include/asm/bitops.h
+index 56ffd260c669b..0ec9cfc5131fc 100644
+--- a/arch/parisc/include/asm/bitops.h
++++ b/arch/parisc/include/asm/bitops.h
+@@ -12,14 +12,6 @@
+ #include <asm/barrier.h>
+ #include <linux/atomic.h>
+
+-/* compiler build environment sanity checks: */
+-#if !defined(CONFIG_64BIT) && defined(__LP64__)
+-#error "Please use 'ARCH=parisc' to build the 32-bit kernel."
+-#endif
+-#if defined(CONFIG_64BIT) && !defined(__LP64__)
+-#error "Please use 'ARCH=parisc64' to build the 64-bit kernel."
+-#endif
+-
+ /* See http://marc.theaimsgroup.com/?t=108826637900003 for discussion
+ * on use of volatile and __*_bit() (set/clear/change):
+ * *_bit() want use of volatile.
+diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
+index e0a9e96576221..fd15fd4bbb61b 100644
+--- a/arch/parisc/kernel/head.S
++++ b/arch/parisc/kernel/head.S
+@@ -22,7 +22,7 @@
+ #include <linux/init.h>
+ #include <linux/pgtable.h>
+
+- .level PA_ASM_LEVEL
++ .level 1.1
+
+ __INITDATA
+ ENTRY(boot_args)
+@@ -70,6 +70,47 @@ $bss_loop:
+ stw,ma %arg2,4(%r1)
+ stw,ma %arg3,4(%r1)
+
++#if !defined(CONFIG_64BIT) && defined(CONFIG_PA20)
++ /* This 32-bit kernel was compiled for PA2.0 CPUs. Check current CPU
++ * and halt kernel if we detect a PA1.x CPU. */
++ ldi 32,%r10
++ mtctl %r10,%cr11
++ .level 2.0
++ mfctl,w %cr11,%r10
++ .level 1.1
++ comib,<>,n 0,%r10,$cpu_ok
++
++ load32 PA(msg1),%arg0
++ ldi msg1_end-msg1,%arg1
++$iodc_panic:
++ copy %arg0, %r10
++ copy %arg1, %r11
++ load32 PA(init_stack),%sp
++#define MEM_CONS 0x3A0
++ ldw MEM_CONS+32(%r0),%arg0 // HPA
++ ldi ENTRY_IO_COUT,%arg1
++ ldw MEM_CONS+36(%r0),%arg2 // SPA
++ ldw MEM_CONS+8(%r0),%arg3 // layers
++ load32 PA(__bss_start),%r1
++ stw %r1,-52(%sp) // arg4
++ stw %r0,-56(%sp) // arg5
++ stw %r10,-60(%sp) // arg6 = ptr to text
++ stw %r11,-64(%sp) // arg7 = len
++ stw %r0,-68(%sp) // arg8
++ load32 PA(.iodc_panic_ret), %rp
++ ldw MEM_CONS+40(%r0),%r1 // ENTRY_IODC
++ bv,n (%r1)
++.iodc_panic_ret:
++ b . /* wait endless with ... */
++ or %r10,%r10,%r10 /* qemu idle sleep */
++msg1: .ascii "Can't boot kernel which was built for PA8x00 CPUs on this machine.\r\n"
++msg1_end:
++
++$cpu_ok:
++#endif
++
++ .level PA_ASM_LEVEL
++
+ /* Initialize startup VM. Just map first 16/32 MB of memory */
+ load32 PA(swapper_pg_dir),%r4
+ mtctl %r4,%cr24 /* Initialize kernel root pointer */
+diff --git a/arch/riscv/boot/dts/microchip/mpfs.dtsi b/arch/riscv/boot/dts/microchip/mpfs.dtsi
+index 9f5bce1488d93..9bf37ef379509 100644
+--- a/arch/riscv/boot/dts/microchip/mpfs.dtsi
++++ b/arch/riscv/boot/dts/microchip/mpfs.dtsi
+@@ -161,7 +161,7 @@
+ ranges;
+
+ cctrllr: cache-controller@2010000 {
+- compatible = "sifive,fu540-c000-ccache", "cache";
++ compatible = "microchip,mpfs-ccache", "sifive,fu540-c000-ccache", "cache";
+ reg = <0x0 0x2010000 0x0 0x1000>;
+ cache-block-size = <64>;
+ cache-level = <2>;
+diff --git a/arch/s390/kernel/nmi.c b/arch/s390/kernel/nmi.c
+index 53ed3884fe644..5d66e3947070c 100644
+--- a/arch/s390/kernel/nmi.c
++++ b/arch/s390/kernel/nmi.c
+@@ -63,7 +63,7 @@ static inline unsigned long nmi_get_mcesa_size(void)
+ * structure. The structure is required for machine check happening
+ * early in the boot process.
+ */
+-static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE);
++static struct mcesa boot_mcesa __aligned(MCESA_MAX_SIZE);
+
+ void __init nmi_alloc_mcesa_early(u64 *mcesad)
+ {
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 0a37f5de28631..3e0361db963ef 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -486,6 +486,7 @@ static void __init setup_lowcore_dat_off(void)
+ put_abs_lowcore(restart_data, lc->restart_data);
+ put_abs_lowcore(restart_source, lc->restart_source);
+ put_abs_lowcore(restart_psw, lc->restart_psw);
++ put_abs_lowcore(mcesad, lc->mcesad);
+
+ lc->spinlock_lockval = arch_spin_lockval(0);
+ lc->spinlock_index = 0;
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index 4a23e52fe0ee1..ebc271bb6d8ed 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -195,7 +195,7 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+ void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
+ void snp_set_wakeup_secondary_cpu(void);
+ bool snp_init(struct boot_params *bp);
+-void snp_abort(void);
++void __init __noreturn snp_abort(void);
+ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, unsigned long *fw_err);
+ #else
+ static inline void sev_es_ist_enter(struct pt_regs *regs) { }
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index 4f84c3f11af5b..a428c62330d37 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -2112,7 +2112,7 @@ bool __init snp_init(struct boot_params *bp)
+ return true;
+ }
+
+-void __init snp_abort(void)
++void __init __noreturn snp_abort(void)
+ {
+ sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED);
+ }
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index 8a0ec929023bc..76617b1d2d47f 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -597,6 +597,9 @@ static int blk_add_partitions(struct gendisk *disk)
+ if (disk->flags & GENHD_FL_NO_PART)
+ return 0;
+
++ if (test_bit(GD_SUPPRESS_PART_SCAN, &disk->state))
++ return 0;
++
+ state = check_partition(disk);
+ if (!state)
+ return 0;
+diff --git a/drivers/base/driver.c b/drivers/base/driver.c
+index 15a75afe6b845..676b6275d5b53 100644
+--- a/drivers/base/driver.c
++++ b/drivers/base/driver.c
+@@ -63,6 +63,12 @@ int driver_set_override(struct device *dev, const char **override,
+ if (len >= (PAGE_SIZE - 1))
+ return -EINVAL;
+
++ /*
++ * Compute the real length of the string in case userspace sends us a
++ * bunch of \0 characters, as Python tends to do.
++ */
++ len = strlen(s);
++
+ if (!len) {
+ /* Empty string passed - clear override */
+ device_lock(dev);
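
The failure mode the strlen() call guards against is easy to see with a small sketch (buffer contents assumed):

	/* Userspace (e.g. Python) wrote "spi-foo" plus trailing NULs. */
	const char buf[] = "spi-foo\0\0";
	size_t count = sizeof(buf) - 1;	/* 9 bytes reach the sysfs store() */
	size_t len = strlen(buf);	/* 7: only the real string is compared */
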
+diff --git a/drivers/base/regmap/regmap-spi.c b/drivers/base/regmap/regmap-spi.c
+index 719323bc6c7f1..37ab23a9d0345 100644
+--- a/drivers/base/regmap/regmap-spi.c
++++ b/drivers/base/regmap/regmap-spi.c
+@@ -113,6 +113,7 @@ static const struct regmap_bus *regmap_get_spi_bus(struct spi_device *spi,
+ const struct regmap_config *config)
+ {
+ size_t max_size = spi_max_transfer_size(spi);
++ size_t max_msg_size, reg_reserve_size;
+ struct regmap_bus *bus;
+
+ if (max_size != SIZE_MAX) {
+@@ -120,9 +121,16 @@ static const struct regmap_bus *regmap_get_spi_bus(struct spi_device *spi,
+ if (!bus)
+ return ERR_PTR(-ENOMEM);
+
++ max_msg_size = spi_max_message_size(spi);
++ reg_reserve_size = config->reg_bits / BITS_PER_BYTE
++ + config->pad_bits / BITS_PER_BYTE;
++ if (max_size + reg_reserve_size > max_msg_size)
++ max_size -= reg_reserve_size;
++
+ bus->free_on_exit = true;
+ bus->max_raw_read = max_size;
+ bus->max_raw_write = max_size;
++
+ return bus;
+ }
+
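
The arithmetic above is easiest to follow with numbers; a worked example (controller limits assumed):

	size_t max_size = 256;		/* spi_max_transfer_size() */
	size_t max_msg_size = 256;	/* spi_max_message_size() */
	size_t reg_reserve_size = 8 / 8 + 0 / 8;	/* reg_bits = 8, pad_bits = 0 */

	if (max_size + reg_reserve_size > max_msg_size)
		max_size -= reg_reserve_size;	/* usable data bytes drop to 255 */
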
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 2cad427741647..f9fd1b6c15d42 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -532,7 +532,7 @@ static unsigned int __resolve_freq(struct cpufreq_policy *policy,
+
+ target_freq = clamp_val(target_freq, policy->min, policy->max);
+
+- if (!cpufreq_driver->target_index)
++ if (!policy->freq_table)
+ return target_freq;
+
+ idx = cpufreq_frequency_table_target(policy, target_freq, relation);
+diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
+index 4dde8edd53b62..3e8d4b51a8140 100644
+--- a/drivers/firmware/efi/capsule-loader.c
++++ b/drivers/firmware/efi/capsule-loader.c
+@@ -242,29 +242,6 @@ failed:
+ return ret;
+ }
+
+-/**
+- * efi_capsule_flush - called by file close or file flush
+- * @file: file pointer
+- * @id: not used
+- *
+- * If a capsule is being partially uploaded then calling this function
+- * will be treated as upload termination and will free those completed
+- * buffer pages and -ECANCELED will be returned.
+- **/
+-static int efi_capsule_flush(struct file *file, fl_owner_t id)
+-{
+- int ret = 0;
+- struct capsule_info *cap_info = file->private_data;
+-
+- if (cap_info->index > 0) {
+- pr_err("capsule upload not complete\n");
+- efi_free_all_buff_pages(cap_info);
+- ret = -ECANCELED;
+- }
+-
+- return ret;
+-}
+-
+ /**
+ * efi_capsule_release - called by file close
+ * @inode: not used
+@@ -277,6 +254,13 @@ static int efi_capsule_release(struct inode *inode, struct file *file)
+ {
+ struct capsule_info *cap_info = file->private_data;
+
++ if (cap_info->index > 0 &&
++ (cap_info->header.headersize == 0 ||
++ cap_info->count < cap_info->total_size)) {
++ pr_err("capsule upload not complete\n");
++ efi_free_all_buff_pages(cap_info);
++ }
++
+ kfree(cap_info->pages);
+ kfree(cap_info->phys);
+ kfree(file->private_data);
+@@ -324,7 +308,6 @@ static const struct file_operations efi_capsule_fops = {
+ .owner = THIS_MODULE,
+ .open = efi_capsule_open,
+ .write = efi_capsule_write,
+- .flush = efi_capsule_flush,
+ .release = efi_capsule_release,
+ .llseek = no_llseek,
+ };
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index d0537573501e9..2c67f71f23753 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -37,6 +37,13 @@ KBUILD_CFLAGS := $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \
+ $(call cc-option,-fno-addrsig) \
+ -D__DISABLE_EXPORTS
+
++#
++# struct randomization only makes sense for Linux internal types, which the EFI
++# stub code never touches, so let's turn off struct randomization for the stub
++# altogether
++#
++KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS), $(KBUILD_CFLAGS))
++
+ # remove SCS flags from all objects in this directory
+ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+ # disable LTO
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 3adebb63680e0..67d4a3c13ed19 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2482,12 +2482,14 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ if (!hive->reset_domain ||
+ !amdgpu_reset_get_reset_domain(hive->reset_domain)) {
+ r = -ENOENT;
++ amdgpu_put_xgmi_hive(hive);
+ goto init_failed;
+ }
+
+ /* Drop the early temporary reset domain we created for device */
+ amdgpu_reset_put_reset_domain(adev->reset_domain);
+ adev->reset_domain = hive->reset_domain;
++ amdgpu_put_xgmi_hive(hive);
+ }
+ }
+
+@@ -4473,8 +4475,6 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
+ retry:
+ amdgpu_amdkfd_pre_reset(adev);
+
+- amdgpu_amdkfd_pre_reset(adev);
+-
+ if (from_hypervisor)
+ r = amdgpu_virt_request_full_gpu(adev, true);
+ else
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index e9411c28d88ba..2b00f8fe15a89 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -2612,6 +2612,9 @@ static int psp_hw_fini(void *handle)
+ psp_rap_terminate(psp);
+ psp_dtm_terminate(psp);
+ psp_hdcp_terminate(psp);
++
++ if (adev->gmc.xgmi.num_physical_nodes > 1)
++ psp_xgmi_terminate(psp);
+ }
+
+ psp_asd_terminate(psp);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+index 1b108d03e7859..f2aebbf3fbe38 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+@@ -742,7 +742,7 @@ int amdgpu_xgmi_remove_device(struct amdgpu_device *adev)
+ amdgpu_put_xgmi_hive(hive);
+ }
+
+- return psp_xgmi_terminate(&adev->psp);
++ return 0;
+ }
+
+ static int amdgpu_xgmi_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *ras_block)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index a4a6751b1e449..30998ac47707c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -5090,9 +5090,12 @@ static void gfx_v11_0_update_coarse_grain_clock_gating(struct amdgpu_device *ade
+ data = REG_SET_FIELD(data, SDMA0_RLC_CGCG_CTRL, CGCG_INT_ENABLE, 1);
+ WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+
+- data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+- data = REG_SET_FIELD(data, SDMA1_RLC_CGCG_CTRL, CGCG_INT_ENABLE, 1);
+- WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
++ /* Some ASICs only have one SDMA instance; no need to configure SDMA1 */
++ if (adev->sdma.num_instances > 1) {
++ data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
++ data = REG_SET_FIELD(data, SDMA1_RLC_CGCG_CTRL, CGCG_INT_ENABLE, 1);
++ WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
++ }
+ } else {
+ /* Program RLC_CGCG_CGLS_CTRL */
+ def = data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL);
+@@ -5121,9 +5124,12 @@ static void gfx_v11_0_update_coarse_grain_clock_gating(struct amdgpu_device *ade
+ data &= ~SDMA0_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+ WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+
+- data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+- data &= ~SDMA1_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+- WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
++ /* Some ASICs only have one SDMA instance; no need to configure SDMA1 */
++ if (adev->sdma.num_instances > 1) {
++ data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
++ data &= ~SDMA1_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
++ WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
++ }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 5349ca4d19e38..6d8ff3b099422 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2587,7 +2587,8 @@ static void gfx_v9_0_constants_init(struct amdgpu_device *adev)
+
+ gfx_v9_0_tiling_mode_table_init(adev);
+
+- gfx_v9_0_setup_rb(adev);
++ if (adev->gfx.num_gfx_rings)
++ gfx_v9_0_setup_rb(adev);
+ gfx_v9_0_get_cu_info(adev, &adev->gfx.cu_info);
+ adev->gfx.config.db_debug2 = RREG32_SOC15(GC, 0, mmDB_DEBUG2);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+index 3f44a099c52a4..3e51e773f92be 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+@@ -176,6 +176,7 @@ static void mmhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
+ tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
+ WREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL2, tmp);
+
++ tmp = mmVM_L2_CNTL3_DEFAULT;
+ if (adev->gmc.translate_further) {
+ tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, 12);
+ tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index c7a592d68febf..275bfb8ca6f89 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -3188,7 +3188,7 @@ void crtc_debugfs_init(struct drm_crtc *crtc)
+ &crc_win_y_end_fops);
+ debugfs_create_file_unsafe("crc_win_update", 0644, dir, crtc,
+ &crc_win_update_fops);
+-
++ dput(dir);
+ }
+ #endif
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
+index 30c6f9cd717f3..27fbe906682f9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
+@@ -41,6 +41,12 @@
+ #define FN(reg_name, field) \
+ FD(reg_name##__##field)
+
++#include "logger_types.h"
++#undef DC_LOGGER
++#define DC_LOGGER \
++ CTX->logger
++#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
++
+ #define VBIOSSMC_MSG_TestMessage 0x1
+ #define VBIOSSMC_MSG_GetSmuVersion 0x2
+ #define VBIOSSMC_MSG_PowerUpGfx 0x3
+@@ -95,7 +101,13 @@ static int rn_vbios_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr,
+ uint32_t result;
+
+ result = rn_smu_wait_for_response(clk_mgr, 10, 200000);
+- ASSERT(result == VBIOSSMC_Result_OK);
++
++ if (result != VBIOSSMC_Result_OK)
++ smu_print("SMU Response was not OK. SMU response after wait received is: %d\n", result);
++
++ if (result == VBIOSSMC_Status_BUSY) {
++ return -1;
++ }
+
+ /* First clear response register */
+ REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Status_BUSY);
+@@ -176,6 +188,10 @@ int rn_vbios_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int reque
+ VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
+ khz_to_mhz_ceil(requested_dcfclk_khz));
+
++#ifdef DBG
++ smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
++#endif
++
+ return actual_dcfclk_set_mhz * 1000;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
+index 1cae01a91a69d..e4f96b6fd79d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
+@@ -41,6 +41,12 @@
+ #define FN(reg_name, field) \
+ FD(reg_name##__##field)
+
++#include "logger_types.h"
++#undef DC_LOGGER
++#define DC_LOGGER \
++ CTX->logger
++#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
++
+ #define VBIOSSMC_MSG_GetSmuVersion 0x2
+ #define VBIOSSMC_MSG_SetDispclkFreq 0x4
+ #define VBIOSSMC_MSG_SetDprefclkFreq 0x5
+@@ -96,6 +102,13 @@ static int dcn301_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr,
+
+ result = dcn301_smu_wait_for_response(clk_mgr, 10, 200000);
+
++ if (result != VBIOSSMC_Result_OK)
++ smu_print("SMU Response was not OK. SMU response after wait received is: %d\n", result);
++
++ if (result == VBIOSSMC_Status_BUSY) {
++ return -1;
++ }
++
+ /* First clear response register */
+ REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Status_BUSY);
+
+@@ -167,6 +180,10 @@ int dcn301_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
+ VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
+ khz_to_mhz_ceil(requested_dcfclk_khz));
+
++#ifdef DBG
++ smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
++#endif
++
+ return actual_dcfclk_set_mhz * 1000;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+index c5d7d075026f3..090b2c02aee17 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+@@ -40,6 +40,12 @@
+ #define FN(reg_name, field) \
+ FD(reg_name##__##field)
+
++#include "logger_types.h"
++#undef DC_LOGGER
++#define DC_LOGGER \
++ CTX->logger
++#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
++
+ #define VBIOSSMC_MSG_TestMessage 0x1
+ #define VBIOSSMC_MSG_GetSmuVersion 0x2
+ #define VBIOSSMC_MSG_PowerUpGfx 0x3
+@@ -102,7 +108,9 @@ static int dcn31_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr,
+ uint32_t result;
+
+ result = dcn31_smu_wait_for_response(clk_mgr, 10, 200000);
+- ASSERT(result == VBIOSSMC_Result_OK);
++
++ if (result != VBIOSSMC_Result_OK)
++ smu_print("SMU Response was not OK. SMU response after wait received is: %d\n", result);
+
+ if (result == VBIOSSMC_Status_BUSY) {
+ return -1;
+@@ -194,6 +202,10 @@ int dcn31_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int requeste
+ VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
+ khz_to_mhz_ceil(requested_dcfclk_khz));
+
++#ifdef DBG
++ smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
++#endif
++
+ return actual_dcfclk_set_mhz * 1000;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
+index 2600313fea579..925d6e13620ec 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
+@@ -70,6 +70,12 @@ static const struct IP_BASE NBIO_BASE = { { { { 0x00000000, 0x00000014, 0x00000D
+ #define REG_NBIO(reg_name) \
+ (NBIO_BASE.instance[0].segment[regBIF_BX_PF2_ ## reg_name ## _BASE_IDX] + regBIF_BX_PF2_ ## reg_name)
+
++#include "logger_types.h"
++#undef DC_LOGGER
++#define DC_LOGGER \
++ CTX->logger
++#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
++
+ #define mmMP1_C2PMSG_3 0x3B1050C
+
+ #define VBIOSSMC_MSG_TestMessage 0x01 ///< To check if PMFW is alive and responding. Requirement specified by PMFW team
+@@ -130,7 +136,9 @@ static int dcn315_smu_send_msg_with_param(
+ uint32_t result;
+
+ result = dcn315_smu_wait_for_response(clk_mgr, 10, 200000);
+- ASSERT(result == VBIOSSMC_Result_OK);
++
++ if (result != VBIOSSMC_Result_OK)
++ smu_print("SMU Response was not OK. SMU response after wait received is: %d\n", result);
+
+ if (result == VBIOSSMC_Status_BUSY) {
+ return -1;
+@@ -197,6 +205,10 @@ int dcn315_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
+ VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
+ khz_to_mhz_ceil(requested_dcfclk_khz));
+
++#ifdef DBG
++ smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
++#endif
++
+ return actual_dcfclk_set_mhz * 1000;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
+index dceec4b960527..457a9254ae1c8 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
+@@ -58,6 +58,12 @@ static const struct IP_BASE MP0_BASE = { { { { 0x00016000, 0x00DC0000, 0x00E0000
+ #define FN(reg_name, field) \
+ FD(reg_name##__##field)
+
++#include "logger_types.h"
++#undef DC_LOGGER
++#define DC_LOGGER \
++ CTX->logger
++#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
++
+ #define VBIOSSMC_MSG_TestMessage 0x01 ///< To check if PMFW is alive and responding. Requirement specified by PMFW team
+ #define VBIOSSMC_MSG_GetPmfwVersion 0x02 ///< Get PMFW version
+ #define VBIOSSMC_MSG_Spare0 0x03 ///< Spare0
+@@ -118,7 +124,9 @@ static int dcn316_smu_send_msg_with_param(
+ uint32_t result;
+
+ result = dcn316_smu_wait_for_response(clk_mgr, 10, 200000);
+- ASSERT(result == VBIOSSMC_Result_OK);
++
++ if (result != VBIOSSMC_Result_OK)
++ smu_print("SMU Response was not OK. SMU response after wait received is: %d\n", result);
+
+ if (result == VBIOSSMC_Status_BUSY) {
+ return -1;
+@@ -183,6 +191,10 @@ int dcn316_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
+ VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
+ khz_to_mhz_ceil(requested_dcfclk_khz));
+
++#ifdef DBG
++ smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
++#endif
++
+ return actual_dcfclk_set_mhz * 1000;
+ }
+
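
For reference, every dcfclk hunk in this series follows the same arithmetic: the requested kHz value is rounded up to whole MHz before being sent to the SMU, and the granted value is reported back in kHz. A minimal userspace sketch of that round trip (khz_to_mhz_ceil is an assumption here, modeled on the usual round-up helper in the clk_mgr headers):

    #include <stdio.h>

    /* Assumed helper: round a kHz value up to whole MHz. */
    static int khz_to_mhz_ceil(int khz)
    {
            return (khz + 999) / 1000;
    }

    int main(void)
    {
            int requested_dcfclk_khz = 400500;
            int actual_dcfclk_set_mhz = khz_to_mhz_ceil(requested_dcfclk_khz);

            /* 400500 kHz rounds up to 401 MHz; the driver reports 401000 kHz. */
            printf("requested %d kHz -> set %d MHz -> reported %d kHz\n",
                   requested_dcfclk_khz, actual_dcfclk_set_mhz,
                   actual_dcfclk_set_mhz * 1000);
            return 0;
    }
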
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 86d670c712867..ad068865ba206 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -168,21 +168,6 @@ void drm_gem_private_object_init(struct drm_device *dev,
+ }
+ EXPORT_SYMBOL(drm_gem_private_object_init);
+
+-static void
+-drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
+-{
+- /*
+- * Note: obj->dma_buf can't disappear as long as we still hold a
+- * handle reference in obj->handle_count.
+- */
+- mutex_lock(&filp->prime.lock);
+- if (obj->dma_buf) {
+- drm_prime_remove_buf_handle_locked(&filp->prime,
+- obj->dma_buf);
+- }
+- mutex_unlock(&filp->prime.lock);
+-}
+-
+ /**
+ * drm_gem_object_handle_free - release resources bound to userspace handles
+ * @obj: GEM object to clean up.
+@@ -253,7 +238,7 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
+ if (obj->funcs->close)
+ obj->funcs->close(obj, file_priv);
+
+- drm_gem_remove_prime_handles(obj, file_priv);
++ drm_prime_remove_buf_handle(&file_priv->prime, id);
+ drm_vma_node_revoke(&obj->vma_node, file_priv);
+
+ drm_gem_object_handle_put_unlocked(obj);
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index 1fbbc19f1ac09..7bb98e6a446d0 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -74,8 +74,8 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
+
+ void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
+ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
+-void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+- struct dma_buf *dma_buf);
++void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
++ uint32_t handle);
+
+ /* drm_drv.c */
+ struct drm_minor *drm_minor_acquire(unsigned int minor_id);
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index e3f09f18110c7..bd5366b16381b 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -190,29 +190,33 @@ static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpri
+ return -ENOENT;
+ }
+
+-void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+- struct dma_buf *dma_buf)
++void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
++ uint32_t handle)
+ {
+ struct rb_node *rb;
+
+- rb = prime_fpriv->dmabufs.rb_node;
++ mutex_lock(&prime_fpriv->lock);
++
++ rb = prime_fpriv->handles.rb_node;
+ while (rb) {
+ struct drm_prime_member *member;
+
+- member = rb_entry(rb, struct drm_prime_member, dmabuf_rb);
+- if (member->dma_buf == dma_buf) {
++ member = rb_entry(rb, struct drm_prime_member, handle_rb);
++ if (member->handle == handle) {
+ rb_erase(&member->handle_rb, &prime_fpriv->handles);
+ rb_erase(&member->dmabuf_rb, &prime_fpriv->dmabufs);
+
+- dma_buf_put(dma_buf);
++ dma_buf_put(member->dma_buf);
+ kfree(member);
+- return;
+- } else if (member->dma_buf < dma_buf) {
++ break;
++ } else if (member->handle < handle) {
+ rb = rb->rb_right;
+ } else {
+ rb = rb->rb_left;
+ }
+ }
++
++ mutex_unlock(&prime_fpriv->lock);
+ }
+
+ void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv)
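
The drm_prime change above re-keys the removal walk from dma_buf pointers to handles and takes prime_fpriv->lock internally; the loop itself is an ordinary binary-search descent of a tree ordered by handle. A minimal sketch of that descent over a hypothetical plain BST (illustrative names, not the kernel's rbtree API):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for struct drm_prime_member. */
    struct member {
            uint32_t handle;
            struct member *left, *right;
    };

    static struct member *find_by_handle(struct member *root, uint32_t handle)
    {
            while (root) {
                    if (root->handle == handle)
                            return root;   /* found: caller erases and frees */
                    else if (root->handle < handle)
                            root = root->right;
                    else
                            root = root->left;
            }
            return NULL;
    }

    int main(void)
    {
            struct member low  = { 10, NULL, NULL };
            struct member high = { 30, NULL, NULL };
            struct member root = { 20, &low, &high };

            printf("20 -> %s\n", find_by_handle(&root, 20) ? "found" : "missing");
            printf("25 -> %s\n", find_by_handle(&root, 25) ? "found" : "missing");
            return 0;
    }
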
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 0c5638f5b72bc..91caf4523b34d 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -478,6 +478,13 @@ init_bdb_block(struct drm_i915_private *i915,
+
+ block_size = get_blocksize(block);
+
++ /*
++ * Version number and new block size are considered
++ * part of the header for MIPI sequenece block v3+.
++	 * part of the header for MIPI sequence block v3+.
++ if (section_id == BDB_MIPI_SEQUENCE && *(const u8 *)block >= 3)
++ block_size += 5;
++
+ entry = kzalloc(struct_size(entry, data, max(min_size, block_size) + 3),
+ GFP_KERNEL);
+ if (!entry) {
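
The "+ 5" in the intel_bios.c hunk reflects the v3+ MIPI sequence block carrying a 1-byte version field plus (assuming the usual VBT layout) a 4-byte size field, both sitting outside the size that get_blocksize() reports. A sketch of the adjustment:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BDB_MIPI_SEQUENCE 53    /* assumed section id */

    /* Sketch: for v3+ MIPI sequence blocks, the 1-byte version and the
     * 4-byte size field are treated as header and added on top of the
     * reported block size. */
    static size_t effective_block_size(int section_id, const uint8_t *block,
                                       size_t block_size)
    {
            if (section_id == BDB_MIPI_SEQUENCE && block[0] >= 3)
                    return block_size + 5;
            return block_size;
    }

    int main(void)
    {
            uint8_t v3_block[5] = { 3, 0x10, 0x00, 0x00, 0x00 };

            printf("payload 0x10 -> copy %zu bytes\n",
                   effective_block_size(BDB_MIPI_SEQUENCE, v3_block, 0x10));
            return 0;
    }
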
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index 9feaf1a589f38..d213d8ad1ea53 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -671,6 +671,28 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp,
+ intel_dp_compute_rate(intel_dp, crtc_state->port_clock,
+ &link_bw, &rate_select);
+
++ /*
++ * WaEdpLinkRateDataReload
++ *
++	 * Parade PS8461E MUX (used on various TGL+ laptops) needs
++ * to snoop the link rates reported by the sink when we
++ * use LINK_RATE_SET in order to operate in jitter cleaning
++ * mode (as opposed to redriver mode). Unfortunately it
++ * loses track of the snooped link rates when powered down,
++ * so we need to make it re-snoop often. Without this high
++ * link rates are not stable.
++ */
++ if (!link_bw) {
++ struct intel_connector *connector = intel_dp->attached_connector;
++ __le16 sink_rates[DP_MAX_SUPPORTED_RATES];
++
++ drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] Reloading eDP link rates\n",
++ connector->base.base.id, connector->base.name);
++
++ drm_dp_dpcd_read(&intel_dp->aux, DP_SUPPORTED_LINK_RATES,
++ sink_rates, sizeof(sink_rates));
++ }
++
+ if (link_bw)
+ drm_dbg_kms(&i915->drm,
+ "[ENCODER:%d:%s] Using LINK_BW_SET value %02x\n",
+diff --git a/drivers/gpu/drm/i915/gt/intel_llc.c b/drivers/gpu/drm/i915/gt/intel_llc.c
+index 40e2e28ee6c75..bf01780e7ea56 100644
+--- a/drivers/gpu/drm/i915/gt/intel_llc.c
++++ b/drivers/gpu/drm/i915/gt/intel_llc.c
+@@ -12,6 +12,7 @@
+ #include "intel_llc.h"
+ #include "intel_mchbar_regs.h"
+ #include "intel_pcode.h"
++#include "intel_rps.h"
+
+ struct ia_constants {
+ unsigned int min_gpu_freq;
+@@ -55,9 +56,6 @@ static bool get_ia_constants(struct intel_llc *llc,
+ if (!HAS_LLC(i915) || IS_DGFX(i915))
+ return false;
+
+- if (rps->max_freq <= rps->min_freq)
+- return false;
+-
+ consts->max_ia_freq = cpu_max_MHz();
+
+ consts->min_ring_freq =
+@@ -65,13 +63,8 @@ static bool get_ia_constants(struct intel_llc *llc,
+ /* convert DDR frequency from units of 266.6MHz to bandwidth */
+ consts->min_ring_freq = mult_frac(consts->min_ring_freq, 8, 3);
+
+- consts->min_gpu_freq = rps->min_freq;
+- consts->max_gpu_freq = rps->max_freq;
+- if (GRAPHICS_VER(i915) >= 9) {
+- /* Convert GT frequency to 50 HZ units */
+- consts->min_gpu_freq /= GEN9_FREQ_SCALER;
+- consts->max_gpu_freq /= GEN9_FREQ_SCALER;
+- }
++ consts->min_gpu_freq = intel_rps_get_min_raw_freq(rps);
++ consts->max_gpu_freq = intel_rps_get_max_raw_freq(rps);
+
+ return true;
+ }
+@@ -131,6 +124,12 @@ static void gen6_update_ring_freq(struct intel_llc *llc)
+ if (!get_ia_constants(llc, &consts))
+ return;
+
++ /*
++ * Although this is unlikely on any platform during initialization,
++	 * let's ensure we don't accidentally get into an infinite loop.
++ */
++ if (consts.max_gpu_freq <= consts.min_gpu_freq)
++ return;
+ /*
+ * For each potential GPU frequency, load a ring frequency we'd like
+ * to use for memory access. We do this by specifying the IA frequency
+diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
+index 3476a11f294ce..7c068cc64c2fa 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rps.c
++++ b/drivers/gpu/drm/i915/gt/intel_rps.c
+@@ -2123,6 +2123,31 @@ u32 intel_rps_get_max_frequency(struct intel_rps *rps)
+ return intel_gpu_freq(rps, rps->max_freq_softlimit);
+ }
+
++/**
++ * intel_rps_get_max_raw_freq - returns the max frequency in some raw format.
++ * @rps: the intel_rps structure
++ *
++ * Returns the max frequency in a raw format. On newer platforms, raw is in
++ * units of 50 MHz.
++ */
++u32 intel_rps_get_max_raw_freq(struct intel_rps *rps)
++{
++ struct intel_guc_slpc *slpc = rps_to_slpc(rps);
++ u32 freq;
++
++ if (rps_uses_slpc(rps)) {
++ return DIV_ROUND_CLOSEST(slpc->rp0_freq,
++ GT_FREQUENCY_MULTIPLIER);
++ } else {
++ freq = rps->max_freq;
++ if (GRAPHICS_VER(rps_to_i915(rps)) >= 9) {
++ /* Convert GT frequency to 50 MHz units */
++ freq /= GEN9_FREQ_SCALER;
++ }
++ return freq;
++ }
++}
++
+ u32 intel_rps_get_rp0_frequency(struct intel_rps *rps)
+ {
+ struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+@@ -2211,6 +2236,31 @@ u32 intel_rps_get_min_frequency(struct intel_rps *rps)
+ return intel_gpu_freq(rps, rps->min_freq_softlimit);
+ }
+
++/**
++ * intel_rps_get_min_raw_freq - returns the min frequency in some raw format.
++ * @rps: the intel_rps structure
++ *
++ * Returns the min frequency in a raw format. On newer platforms, raw is in
++ * units of 50 MHz.
++ */
++u32 intel_rps_get_min_raw_freq(struct intel_rps *rps)
++{
++ struct intel_guc_slpc *slpc = rps_to_slpc(rps);
++ u32 freq;
++
++ if (rps_uses_slpc(rps)) {
++ return DIV_ROUND_CLOSEST(slpc->min_freq,
++ GT_FREQUENCY_MULTIPLIER);
++ } else {
++ freq = rps->min_freq;
++ if (GRAPHICS_VER(rps_to_i915(rps)) >= 9) {
++ /* Convert GT frequency to 50 MHz units */
++ freq /= GEN9_FREQ_SCALER;
++ }
++ return freq;
++ }
++}
++
+ static int set_min_freq(struct intel_rps *rps, u32 val)
+ {
+ int ret = 0;
+diff --git a/drivers/gpu/drm/i915/gt/intel_rps.h b/drivers/gpu/drm/i915/gt/intel_rps.h
+index 1e8d564913083..4509dfdc52e09 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rps.h
++++ b/drivers/gpu/drm/i915/gt/intel_rps.h
+@@ -37,8 +37,10 @@ u32 intel_rps_get_cagf(struct intel_rps *rps, u32 rpstat1);
+ u32 intel_rps_read_actual_frequency(struct intel_rps *rps);
+ u32 intel_rps_get_requested_frequency(struct intel_rps *rps);
+ u32 intel_rps_get_min_frequency(struct intel_rps *rps);
++u32 intel_rps_get_min_raw_freq(struct intel_rps *rps);
+ int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val);
+ u32 intel_rps_get_max_frequency(struct intel_rps *rps);
++u32 intel_rps_get_max_raw_freq(struct intel_rps *rps);
+ int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val);
+ u32 intel_rps_get_rp0_frequency(struct intel_rps *rps);
+ u32 intel_rps_get_rp1_frequency(struct intel_rps *rps);
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 429644d5ddc69..9fba16cb3f1e7 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1604,6 +1604,9 @@ int radeon_suspend_kms(struct drm_device *dev, bool suspend,
+ if (r) {
+ /* delay GPU reset to resume */
+ radeon_fence_driver_force_completion(rdev, i);
++ } else {
++ /* finish executing delayed work */
++ flush_delayed_work(&rdev->fence_drv[i].lockup_work);
+ }
+ }
+
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index 3633ab691662b..81e688975c6a7 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -54,6 +54,10 @@ static char *mutex_path_override;
+ /* ACPI mutex for locking access to the EC for the firmware */
+ #define ASUS_HW_ACCESS_MUTEX_ASMX "\\AMW0.ASMX"
+
++#define ASUS_HW_ACCESS_MUTEX_RMTW_ASMX "\\RMTW.ASMX"
++
++#define ASUS_HW_ACCESS_MUTEX_SB_PCI0_SBRG_SIO1_MUT0 "\\_SB_.PCI0.SBRG.SIO1.MUT0"
++
+ #define MAX_IDENTICAL_BOARD_VARIATIONS 3
+
+ /* Moniker for the ACPI global lock (':' is not allowed in ASL identifiers) */
+@@ -119,6 +123,18 @@ enum ec_sensors {
+ ec_sensor_temp_water_in,
+ /* "Water_Out" temperature sensor reading [℃] */
+ ec_sensor_temp_water_out,
++ /* "Water_Block_In" temperature sensor reading [℃] */
++ ec_sensor_temp_water_block_in,
++ /* "Water_Block_Out" temperature sensor reading [℃] */
++ ec_sensor_temp_water_block_out,
++ /* "T_sensor_2" temperature sensor reading [℃] */
++ ec_sensor_temp_t_sensor_2,
++ /* "Extra_1" temperature sensor reading [℃] */
++ ec_sensor_temp_sensor_extra_1,
++ /* "Extra_2" temperature sensor reading [℃] */
++ ec_sensor_temp_sensor_extra_2,
++ /* "Extra_3" temperature sensor reading [℃] */
++ ec_sensor_temp_sensor_extra_3,
+ };
+
+ #define SENSOR_TEMP_CHIPSET BIT(ec_sensor_temp_chipset)
+@@ -134,11 +150,19 @@ enum ec_sensors {
+ #define SENSOR_CURR_CPU BIT(ec_sensor_curr_cpu)
+ #define SENSOR_TEMP_WATER_IN BIT(ec_sensor_temp_water_in)
+ #define SENSOR_TEMP_WATER_OUT BIT(ec_sensor_temp_water_out)
++#define SENSOR_TEMP_WATER_BLOCK_IN BIT(ec_sensor_temp_water_block_in)
++#define SENSOR_TEMP_WATER_BLOCK_OUT BIT(ec_sensor_temp_water_block_out)
++#define SENSOR_TEMP_T_SENSOR_2 BIT(ec_sensor_temp_t_sensor_2)
++#define SENSOR_TEMP_SENSOR_EXTRA_1 BIT(ec_sensor_temp_sensor_extra_1)
++#define SENSOR_TEMP_SENSOR_EXTRA_2 BIT(ec_sensor_temp_sensor_extra_2)
++#define SENSOR_TEMP_SENSOR_EXTRA_3 BIT(ec_sensor_temp_sensor_extra_3)
+
+ enum board_family {
+ family_unknown,
+ family_amd_400_series,
+ family_amd_500_series,
++ family_intel_300_series,
++ family_intel_600_series
+ };
+
+ /* All the known sensors for ASUS EC controllers */
+@@ -195,15 +219,54 @@ static const struct ec_sensor_info sensors_family_amd_500[] = {
+ EC_SENSOR("Water_In", hwmon_temp, 1, 0x01, 0x00),
+ [ec_sensor_temp_water_out] =
+ EC_SENSOR("Water_Out", hwmon_temp, 1, 0x01, 0x01),
++ [ec_sensor_temp_water_block_in] =
++ EC_SENSOR("Water_Block_In", hwmon_temp, 1, 0x01, 0x02),
++ [ec_sensor_temp_water_block_out] =
++ EC_SENSOR("Water_Block_Out", hwmon_temp, 1, 0x01, 0x03),
++ [ec_sensor_temp_sensor_extra_1] =
++ EC_SENSOR("Extra_1", hwmon_temp, 1, 0x01, 0x09),
++ [ec_sensor_temp_t_sensor_2] =
++ EC_SENSOR("T_sensor_2", hwmon_temp, 1, 0x01, 0x0a),
++ [ec_sensor_temp_sensor_extra_2] =
++ EC_SENSOR("Extra_2", hwmon_temp, 1, 0x01, 0x0b),
++ [ec_sensor_temp_sensor_extra_3] =
++ EC_SENSOR("Extra_3", hwmon_temp, 1, 0x01, 0x0c),
++};
++
++static const struct ec_sensor_info sensors_family_intel_300[] = {
++ [ec_sensor_temp_chipset] =
++ EC_SENSOR("Chipset", hwmon_temp, 1, 0x00, 0x3a),
++ [ec_sensor_temp_cpu] = EC_SENSOR("CPU", hwmon_temp, 1, 0x00, 0x3b),
++ [ec_sensor_temp_mb] =
++ EC_SENSOR("Motherboard", hwmon_temp, 1, 0x00, 0x3c),
++ [ec_sensor_temp_t_sensor] =
++ EC_SENSOR("T_Sensor", hwmon_temp, 1, 0x00, 0x3d),
++ [ec_sensor_temp_vrm] = EC_SENSOR("VRM", hwmon_temp, 1, 0x00, 0x3e),
++ [ec_sensor_fan_cpu_opt] =
++ EC_SENSOR("CPU_Opt", hwmon_fan, 2, 0x00, 0xb0),
++ [ec_sensor_fan_vrm_hs] = EC_SENSOR("VRM HS", hwmon_fan, 2, 0x00, 0xb2),
++ [ec_sensor_fan_water_flow] =
++ EC_SENSOR("Water_Flow", hwmon_fan, 2, 0x00, 0xbc),
++ [ec_sensor_temp_water_in] =
++ EC_SENSOR("Water_In", hwmon_temp, 1, 0x01, 0x00),
++ [ec_sensor_temp_water_out] =
++ EC_SENSOR("Water_Out", hwmon_temp, 1, 0x01, 0x01),
++};
++
++static const struct ec_sensor_info sensors_family_intel_600[] = {
++ [ec_sensor_temp_t_sensor] =
++ EC_SENSOR("T_Sensor", hwmon_temp, 1, 0x00, 0x3d),
++ [ec_sensor_temp_vrm] = EC_SENSOR("VRM", hwmon_temp, 1, 0x00, 0x3e),
+ };
+
+ /* Shortcuts for common combinations */
+ #define SENSOR_SET_TEMP_CHIPSET_CPU_MB \
+ (SENSOR_TEMP_CHIPSET | SENSOR_TEMP_CPU | SENSOR_TEMP_MB)
+ #define SENSOR_SET_TEMP_WATER (SENSOR_TEMP_WATER_IN | SENSOR_TEMP_WATER_OUT)
++#define SENSOR_SET_WATER_BLOCK \
++ (SENSOR_TEMP_WATER_BLOCK_IN | SENSOR_TEMP_WATER_BLOCK_OUT)
+
+ struct ec_board_info {
+- const char *board_names[MAX_IDENTICAL_BOARD_VARIATIONS];
+ unsigned long sensors;
+ /*
+ * Defines which mutex to use for guarding access to the state and the
+@@ -216,121 +279,194 @@ struct ec_board_info {
+ enum board_family family;
+ };
+
+-static const struct ec_board_info board_info[] = {
+- {
+- .board_names = {"PRIME X470-PRO"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+- SENSOR_FAN_CPU_OPT |
+- SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+- .mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH,
+- .family = family_amd_400_series,
+- },
+- {
+- .board_names = {"PRIME X570-PRO"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+- SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ProArt X570-CREATOR WIFI"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+- SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT |
+- SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+- },
+- {
+- .board_names = {"Pro WS X570-ACE"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+- SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET |
+- SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG CROSSHAIR VIII DARK HERO"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR |
+- SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+- SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW |
+- SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {
+- "ROG CROSSHAIR VIII FORMULA",
+- "ROG CROSSHAIR VIII HERO",
+- "ROG CROSSHAIR VIII HERO (WI-FI)",
+- },
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR |
+- SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+- SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET |
+- SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG CROSSHAIR VIII IMPACT"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+- SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX B550-E GAMING"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+- SENSOR_FAN_CPU_OPT,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX B550-I GAMING"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+- SENSOR_FAN_VRM_HS | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX X570-E GAMING"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+- SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX X570-E GAMING WIFI II"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX X570-F GAMING"},
+- .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+- SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {
+- .board_names = {"ROG STRIX X570-I GAMING"},
+- .sensors = SENSOR_TEMP_T_SENSOR | SENSOR_FAN_VRM_HS |
+- SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+- SENSOR_IN_CPU_CORE,
+- .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+- .family = family_amd_500_series,
+- },
+- {}
++static const struct ec_board_info board_info_prime_x470_pro = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++ SENSOR_FAN_CPU_OPT |
++ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH,
++ .family = family_amd_400_series,
++};
++
++static const struct ec_board_info board_info_prime_x570_pro = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
++ SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_pro_art_x570_creator_wifi = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
++ SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT |
++ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_pro_ws_x570_ace = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
++ SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET |
++ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_crosshair_viii_dark_hero = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR |
++ SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
++ SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW |
++ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_crosshair_viii_hero = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR |
++ SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
++ SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET |
++ SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU |
++ SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_maximus_xi_hero = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR |
++ SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
++ SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_intel_300_series,
++};
++
++static const struct ec_board_info board_info_crosshair_viii_impact = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++ SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
++ SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_b550_e_gaming = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++ SENSOR_FAN_CPU_OPT,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_b550_i_gaming = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++ SENSOR_FAN_VRM_HS | SENSOR_CURR_CPU |
++ SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_x570_e_gaming = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++ SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
++ SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_x570_e_gaming_wifi_ii = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_CURR_CPU |
++ SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_x570_f_gaming = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
++ SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_x570_i_gaming = {
++ .sensors = SENSOR_TEMP_CHIPSET | SENSOR_TEMP_VRM |
++ SENSOR_TEMP_T_SENSOR |
++ SENSOR_FAN_VRM_HS | SENSOR_FAN_CHIPSET |
++ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
++ .family = family_amd_500_series,
++};
++
++static const struct ec_board_info board_info_strix_z690_a_gaming_wifi_d4 = {
++ .sensors = SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_RMTW_ASMX,
++ .family = family_intel_600_series,
++};
++
++static const struct ec_board_info board_info_zenith_ii_extreme = {
++ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_T_SENSOR |
++ SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
++ SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET | SENSOR_FAN_VRM_HS |
++ SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE |
++ SENSOR_SET_WATER_BLOCK |
++ SENSOR_TEMP_T_SENSOR_2 | SENSOR_TEMP_SENSOR_EXTRA_1 |
++ SENSOR_TEMP_SENSOR_EXTRA_2 | SENSOR_TEMP_SENSOR_EXTRA_3,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_SB_PCI0_SBRG_SIO1_MUT0,
++ .family = family_amd_500_series,
++};
++
++#define DMI_EXACT_MATCH_ASUS_BOARD_NAME(name, board_info) \
++ { \
++ .matches = { \
++ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, \
++ "ASUSTeK COMPUTER INC."), \
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, name), \
++ }, \
++ .driver_data = (void *)board_info, \
++ }
++
++static const struct dmi_system_id dmi_table[] = {
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X470-PRO",
++ &board_info_prime_x470_pro),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X570-PRO",
++ &board_info_prime_x570_pro),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ProArt X570-CREATOR WIFI",
++ &board_info_pro_art_x570_creator_wifi),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("Pro WS X570-ACE",
++ &board_info_pro_ws_x570_ace),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII DARK HERO",
++ &board_info_crosshair_viii_dark_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII FORMULA",
++ &board_info_crosshair_viii_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII HERO",
++ &board_info_crosshair_viii_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII HERO (WI-FI)",
++ &board_info_crosshair_viii_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG MAXIMUS XI HERO",
++ &board_info_maximus_xi_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG MAXIMUS XI HERO (WI-FI)",
++ &board_info_maximus_xi_hero),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII IMPACT",
++ &board_info_crosshair_viii_impact),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX B550-E GAMING",
++ &board_info_strix_b550_e_gaming),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX B550-I GAMING",
++ &board_info_strix_b550_i_gaming),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-E GAMING",
++ &board_info_strix_x570_e_gaming),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-E GAMING WIFI II",
++ &board_info_strix_x570_e_gaming_wifi_ii),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-F GAMING",
++ &board_info_strix_x570_f_gaming),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-I GAMING",
++ &board_info_strix_x570_i_gaming),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX Z690-A GAMING WIFI D4",
++ &board_info_strix_z690_a_gaming_wifi_d4),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG ZENITH II EXTREME",
++ &board_info_zenith_ii_extreme),
++ {},
+ };
+
+ struct ec_sensor {
+@@ -441,12 +577,12 @@ static int find_ec_sensor_index(const struct ec_sensors_data *ec,
+ return -ENOENT;
+ }
+
+-static int __init bank_compare(const void *a, const void *b)
++static int bank_compare(const void *a, const void *b)
+ {
+ return *((const s8 *)a) - *((const s8 *)b);
+ }
+
+-static void __init setup_sensor_data(struct ec_sensors_data *ec)
++static void setup_sensor_data(struct ec_sensors_data *ec)
+ {
+ struct ec_sensor *s = ec->sensors;
+ bool bank_found;
+@@ -478,7 +614,7 @@ static void __init setup_sensor_data(struct ec_sensors_data *ec)
+ sort(ec->banks, ec->nr_banks, 1, bank_compare, NULL);
+ }
+
+-static void __init fill_ec_registers(struct ec_sensors_data *ec)
++static void fill_ec_registers(struct ec_sensors_data *ec)
+ {
+ const struct ec_sensor_info *si;
+ unsigned int i, j, register_idx = 0;
+@@ -493,7 +629,7 @@ static void __init fill_ec_registers(struct ec_sensors_data *ec)
+ }
+ }
+
+-static int __init setup_lock_data(struct device *dev)
++static int setup_lock_data(struct device *dev)
+ {
+ const char *mutex_path;
+ int status;
+@@ -716,7 +852,7 @@ static umode_t asus_ec_hwmon_is_visible(const void *drvdata,
+ return find_ec_sensor_index(state, type, channel) >= 0 ? S_IRUGO : 0;
+ }
+
+-static int __init
++static int
+ asus_ec_hwmon_add_chan_info(struct hwmon_channel_info *asus_ec_hwmon_chan,
+ struct device *dev, int num,
+ enum hwmon_sensor_types type, u32 config)
+@@ -745,27 +881,15 @@ static struct hwmon_chip_info asus_ec_chip_info = {
+ .ops = &asus_ec_hwmon_ops,
+ };
+
+-static const struct ec_board_info * __init get_board_info(void)
++static const struct ec_board_info *get_board_info(void)
+ {
+- const char *dmi_board_vendor = dmi_get_system_info(DMI_BOARD_VENDOR);
+- const char *dmi_board_name = dmi_get_system_info(DMI_BOARD_NAME);
+- const struct ec_board_info *board;
+-
+- if (!dmi_board_vendor || !dmi_board_name ||
+- strcasecmp(dmi_board_vendor, "ASUSTeK COMPUTER INC."))
+- return NULL;
+-
+- for (board = board_info; board->sensors; board++) {
+- if (match_string(board->board_names,
+- MAX_IDENTICAL_BOARD_VARIATIONS,
+- dmi_board_name) >= 0)
+- return board;
+- }
++ const struct dmi_system_id *dmi_entry;
+
+- return NULL;
++ dmi_entry = dmi_first_match(dmi_table);
++ return dmi_entry ? dmi_entry->driver_data : NULL;
+ }
+
+-static int __init asus_ec_probe(struct platform_device *pdev)
++static int asus_ec_probe(struct platform_device *pdev)
+ {
+ const struct hwmon_channel_info **ptr_asus_ec_ci;
+ int nr_count[hwmon_max] = { 0 }, nr_types = 0;
+@@ -799,6 +923,12 @@ static int __init asus_ec_probe(struct platform_device *pdev)
+ case family_amd_500_series:
+ ec_data->sensors_info = sensors_family_amd_500;
+ break;
++ case family_intel_300_series:
++ ec_data->sensors_info = sensors_family_intel_300;
++ break;
++ case family_intel_600_series:
++ ec_data->sensors_info = sensors_family_intel_600;
++ break;
+ default:
+ dev_err(dev, "Unknown board family: %d",
+ ec_data->board_info->family);
+@@ -868,29 +998,37 @@ static int __init asus_ec_probe(struct platform_device *pdev)
+ return PTR_ERR_OR_ZERO(hwdev);
+ }
+
+-
+-static const struct acpi_device_id acpi_ec_ids[] = {
+- /* Embedded Controller Device */
+- { "PNP0C09", 0 },
+- {}
+-};
++MODULE_DEVICE_TABLE(dmi, dmi_table);
+
+ static struct platform_driver asus_ec_sensors_platform_driver = {
+ .driver = {
+ .name = "asus-ec-sensors",
+- .acpi_match_table = acpi_ec_ids,
+ },
++ .probe = asus_ec_probe,
+ };
+
+-MODULE_DEVICE_TABLE(acpi, acpi_ec_ids);
+-/*
+- * we use module_platform_driver_probe() rather than module_platform_driver()
+- * because the probe function (and its dependants) are marked with __init, which
+- * means we can't put it into the .probe member of the platform_driver struct
+- * above, and we can't mark the asus_ec_sensors_platform_driver object as __init
+- * because the object is referenced from the module exit code.
+- */
+-module_platform_driver_probe(asus_ec_sensors_platform_driver, asus_ec_probe);
++static struct platform_device *asus_ec_sensors_platform_device;
++
++static int __init asus_ec_init(void)
++{
++ asus_ec_sensors_platform_device =
++ platform_create_bundle(&asus_ec_sensors_platform_driver,
++ asus_ec_probe, NULL, 0, NULL, 0);
++
++ if (IS_ERR(asus_ec_sensors_platform_device))
++ return PTR_ERR(asus_ec_sensors_platform_device);
++
++ return 0;
++}
++
++static void __exit asus_ec_exit(void)
++{
++ platform_device_unregister(asus_ec_sensors_platform_device);
++ platform_driver_unregister(&asus_ec_sensors_platform_driver);
++}
++
++module_init(asus_ec_init);
++module_exit(asus_ec_exit);
+
+ module_param_named(mutex_path, mutex_path_override, charp, 0);
+ MODULE_PARM_DESC(mutex_path,
+diff --git a/drivers/hwmon/mr75203.c b/drivers/hwmon/mr75203.c
+index 26278b0f17a98..9259779cc2dff 100644
+--- a/drivers/hwmon/mr75203.c
++++ b/drivers/hwmon/mr75203.c
+@@ -68,8 +68,9 @@
+
+ /* VM Individual Macro Register */
+ #define VM_COM_REG_SIZE 0x200
+-#define VM_SDIF_DONE(n) (VM_COM_REG_SIZE + 0x34 + 0x200 * (n))
+-#define VM_SDIF_DATA(n) (VM_COM_REG_SIZE + 0x40 + 0x200 * (n))
++#define VM_SDIF_DONE(vm) (VM_COM_REG_SIZE + 0x34 + 0x200 * (vm))
++#define VM_SDIF_DATA(vm, ch) \
++ (VM_COM_REG_SIZE + 0x40 + 0x200 * (vm) + 0x4 * (ch))
+
+ /* SDA Slave Register */
+ #define IP_CTRL 0x00
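
With the extra channel parameter, VM_SDIF_DATA() becomes a two-dimensional stride: 0x200 per voltage monitor and 0x4 per channel, on top of the 0x40 offset inside the per-VM window. A worked example, taken straight from the macro above:

    #include <stdio.h>

    #define VM_COM_REG_SIZE 0x200
    #define VM_SDIF_DATA(vm, ch) \
            (VM_COM_REG_SIZE + 0x40 + 0x200 * (vm) + 0x4 * (ch))

    int main(void)
    {
            /* VM 1, channel 2: 0x200 + 0x40 + 0x200 + 0x8 = 0x448 */
            printf("VM_SDIF_DATA(1, 2) = 0x%x\n", VM_SDIF_DATA(1, 2));
            return 0;
    }
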
+@@ -115,6 +116,7 @@ struct pvt_device {
+ u32 t_num;
+ u32 p_num;
+ u32 v_num;
++ u32 c_num;
+ u32 ip_freq;
+ u8 *vm_idx;
+ };
+@@ -178,14 +180,15 @@ static int pvt_read_in(struct device *dev, u32 attr, int channel, long *val)
+ {
+ struct pvt_device *pvt = dev_get_drvdata(dev);
+ struct regmap *v_map = pvt->v_map;
++ u8 vm_idx, ch_idx;
+ u32 n, stat;
+- u8 vm_idx;
+ int ret;
+
+- if (channel >= pvt->v_num)
++ if (channel >= pvt->v_num * pvt->c_num)
+ return -EINVAL;
+
+- vm_idx = pvt->vm_idx[channel];
++ vm_idx = pvt->vm_idx[channel / pvt->c_num];
++ ch_idx = channel % pvt->c_num;
+
+ switch (attr) {
+ case hwmon_in_input:
+@@ -196,13 +199,23 @@ static int pvt_read_in(struct device *dev, u32 attr, int channel, long *val)
+ if (ret)
+ return ret;
+
+- ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx), &n);
++ ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx, ch_idx), &n);
+ if(ret < 0)
+ return ret;
+
+ n &= SAMPLE_DATA_MSK;
+- /* Convert the N bitstream count into voltage */
+- *val = (PVT_N_CONST * n - PVT_R_CONST) >> PVT_CONV_BITS;
++ /*
++ * Convert the N bitstream count into voltage.
++		 * To support negative voltage calculation on 64-bit machines,
++ * n must be cast to long, since n and *val differ both in
++ * signedness and in size.
++ * Division is used instead of right shift, because for signed
++ * numbers, the sign bit is used to fill the vacated bit
++ * positions, and if the number is negative, 1 is used.
++ * BIT(x) may not be used instead of (1 << x) because it's
++ * unsigned.
++ */
++ *val = (PVT_N_CONST * (long)n - PVT_R_CONST) / (1 << PVT_CONV_BITS);
+
+ return 0;
+ default:
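
The conversion comment above is easy to verify concretely: an arithmetic right shift of a negative value rounds toward negative infinity, division rounds toward zero, and without the (long) cast the multiply happens in unsigned arithmetic and wraps. A quick demonstration:

    #include <stdio.h>

    int main(void)
    {
            long v = -5;

            /* Arithmetic right shift (typical targets) rounds toward
             * -inf; division rounds toward zero, which is what the
             * voltage conversion wants. */
            printf("-5 >> 1 = %ld\n", v >> 1);   /* -3 */
            printf("-5 / 2  = %ld\n", v / 2);    /* -2 */

            /* With an unsigned 32-bit n, "n * 2 - 5" can never go
             * negative; it wraps instead, hence the cast to long in
             * the driver. */
            unsigned int n = 1;
            printf("unsigned wrap: %lu\n", (unsigned long)(n * 2 - 5));
            return 0;
    }
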
+@@ -375,6 +388,19 @@ static int pvt_init(struct pvt_device *pvt)
+ if (ret)
+ return ret;
+
++ val = (BIT(pvt->c_num) - 1) | VM_CH_INIT |
++ IP_POLL << SDIF_ADDR_SFT | SDIF_WRN_W | SDIF_PROG;
++ ret = regmap_write(v_map, SDIF_W, val);
++ if (ret < 0)
++ return ret;
++
++ ret = regmap_read_poll_timeout(v_map, SDIF_STAT,
++ val, !(val & SDIF_BUSY),
++ PVT_POLL_DELAY_US,
++ PVT_POLL_TIMEOUT_US);
++ if (ret)
++ return ret;
++
+ val = CFG1_VOL_MEAS_MODE | CFG1_PARALLEL_OUT |
+ CFG1_14_BIT | IP_CFG << SDIF_ADDR_SFT |
+ SDIF_WRN_W | SDIF_PROG;
+@@ -489,8 +515,8 @@ static int pvt_reset_control_deassert(struct device *dev, struct pvt_device *pvt
+
+ static int mr75203_probe(struct platform_device *pdev)
+ {
++ u32 ts_num, vm_num, pd_num, ch_num, val, index, i;
+ const struct hwmon_channel_info **pvt_info;
+- u32 ts_num, vm_num, pd_num, val, index, i;
+ struct device *dev = &pdev->dev;
+ u32 *temp_config, *in_config;
+ struct device *hwmon_dev;
+@@ -531,9 +557,11 @@ static int mr75203_probe(struct platform_device *pdev)
+ ts_num = (val & TS_NUM_MSK) >> TS_NUM_SFT;
+ pd_num = (val & PD_NUM_MSK) >> PD_NUM_SFT;
+ vm_num = (val & VM_NUM_MSK) >> VM_NUM_SFT;
++ ch_num = (val & CH_NUM_MSK) >> CH_NUM_SFT;
+ pvt->t_num = ts_num;
+ pvt->p_num = pd_num;
+ pvt->v_num = vm_num;
++ pvt->c_num = ch_num;
+ val = 0;
+ if (ts_num)
+ val++;
+@@ -570,7 +598,7 @@ static int mr75203_probe(struct platform_device *pdev)
+ }
+
+ if (vm_num) {
+- u32 num = vm_num;
++ u32 total_ch;
+
+ ret = pvt_get_regmap(pdev, "vm", pvt);
+ if (ret)
+@@ -584,30 +612,30 @@ static int mr75203_probe(struct platform_device *pdev)
+ ret = device_property_read_u8_array(dev, "intel,vm-map",
+ pvt->vm_idx, vm_num);
+ if (ret) {
+- num = 0;
++ /*
++		 * In case the intel,vm-map property is not defined, we
++ * assume incremental channel numbers.
++ */
++ for (i = 0; i < vm_num; i++)
++ pvt->vm_idx[i] = i;
+ } else {
+ for (i = 0; i < vm_num; i++)
+ if (pvt->vm_idx[i] >= vm_num ||
+ pvt->vm_idx[i] == 0xff) {
+- num = i;
++ pvt->v_num = i;
++ vm_num = i;
+ break;
+ }
+ }
+
+- /*
+- * Incase intel,vm-map property is not defined, we assume
+- * incremental channel numbers.
+- */
+- for (i = num; i < vm_num; i++)
+- pvt->vm_idx[i] = i;
+-
+- in_config = devm_kcalloc(dev, num + 1,
++ total_ch = ch_num * vm_num;
++ in_config = devm_kcalloc(dev, total_ch + 1,
+ sizeof(*in_config), GFP_KERNEL);
+ if (!in_config)
+ return -ENOMEM;
+
+- memset32(in_config, HWMON_I_INPUT, num);
+- in_config[num] = 0;
++ memset32(in_config, HWMON_I_INPUT, total_ch);
++ in_config[total_ch] = 0;
+ pvt_in.config = in_config;
+
+ pvt_info[index++] = &pvt_in;
+diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c
+index 8bd6435c13e82..2148fd543bb4b 100644
+--- a/drivers/hwmon/tps23861.c
++++ b/drivers/hwmon/tps23861.c
+@@ -489,18 +489,20 @@ static char *tps23861_port_poe_plus_status(struct tps23861_data *data, int port)
+
+ static int tps23861_port_resistance(struct tps23861_data *data, int port)
+ {
+- u16 regval;
++ unsigned int raw_val;
++ __le16 regval;
+
+ regmap_bulk_read(data->regmap,
+ PORT_1_RESISTANCE_LSB + PORT_N_RESISTANCE_LSB_OFFSET * (port - 1),
+ ®val,
+ 2);
+
+- switch (FIELD_GET(PORT_RESISTANCE_RSN_MASK, regval)) {
++ raw_val = le16_to_cpu(regval);
++ switch (FIELD_GET(PORT_RESISTANCE_RSN_MASK, raw_val)) {
+ case PORT_RESISTANCE_RSN_OTHER:
+- return (FIELD_GET(PORT_RESISTANCE_MASK, regval) * RESISTANCE_LSB) / 10000;
++ return (FIELD_GET(PORT_RESISTANCE_MASK, raw_val) * RESISTANCE_LSB) / 10000;
+ case PORT_RESISTANCE_RSN_LOW:
+- return (FIELD_GET(PORT_RESISTANCE_MASK, regval) * RESISTANCE_LSB_LOW) / 10000;
++ return (FIELD_GET(PORT_RESISTANCE_MASK, raw_val) * RESISTANCE_LSB_LOW) / 10000;
+ case PORT_RESISTANCE_RSN_SHORT:
+ case PORT_RESISTANCE_RSN_OPEN:
+ default:
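
The tps23861 fix reads the two resistance bytes into a __le16 and converts explicitly, so the FIELD_GET() masks operate on host byte order regardless of CPU endianness. A userspace sketch of the conversion (le16_to_cpu is modeled by hand here; the kernel provides it):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for le16_to_cpu(): assemble from explicit byte positions
     * so the result is host-order on any machine. */
    static uint16_t le16_to_cpu_sketch(const uint8_t *raw)
    {
            return (uint16_t)(raw[0] | (raw[1] << 8));
    }

    int main(void)
    {
            /* Bytes as regmap_bulk_read() stores them: LSB first. */
            uint8_t regval[2] = { 0x34, 0x12 };
            unsigned int raw_val = le16_to_cpu_sketch(regval);

            printf("raw_val = 0x%04x\n", raw_val);   /* 0x1234 */
            return 0;
    }
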
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index fabca5e51e3d4..4dd133eccfdfb 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1719,8 +1719,8 @@ cma_ib_id_from_event(struct ib_cm_id *cm_id,
+ }
+
+ if (!validate_net_dev(*net_dev,
+- (struct sockaddr *)&req->listen_addr_storage,
+- (struct sockaddr *)&req->src_addr_storage)) {
++ (struct sockaddr *)&req->src_addr_storage,
++ (struct sockaddr *)&req->listen_addr_storage)) {
+ id_priv = ERR_PTR(-EHOSTUNREACH);
+ goto err;
+ }
+diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
+index 186ed8859920c..d39e16c211e8a 100644
+--- a/drivers/infiniband/core/umem_odp.c
++++ b/drivers/infiniband/core/umem_odp.c
+@@ -462,7 +462,7 @@ retry:
+ mutex_unlock(&umem_odp->umem_mutex);
+
+ out_put_mm:
+- mmput(owning_mm);
++ mmput_async(owning_mm);
+ out_put_task:
+ if (owning_process)
+ put_task_struct(owning_process);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 2855e9ad4b328..1df076e70e293 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -730,7 +730,6 @@ struct hns_roce_caps {
+ u32 num_qps;
+ u32 num_pi_qps;
+ u32 reserved_qps;
+- int num_qpc_timer;
+ u32 num_srqs;
+ u32 max_wqes;
+ u32 max_srq_wrs;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index b354caeaa9b29..49edff989f1f1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1941,7 +1941,7 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+
+ caps->num_mtpts = HNS_ROCE_V2_MAX_MTPT_NUM;
+ caps->num_pds = HNS_ROCE_V2_MAX_PD_NUM;
+- caps->num_qpc_timer = HNS_ROCE_V2_MAX_QPC_TIMER_NUM;
++ caps->qpc_timer_bt_num = HNS_ROCE_V2_MAX_QPC_TIMER_BT_NUM;
+ caps->cqc_timer_bt_num = HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM;
+
+ caps->max_qp_init_rdma = HNS_ROCE_V2_MAX_QP_INIT_RDMA;
+@@ -2237,7 +2237,6 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
+ caps->max_rq_sg = le16_to_cpu(resp_a->max_rq_sg);
+ caps->max_rq_sg = roundup_pow_of_two(caps->max_rq_sg);
+ caps->max_extend_sg = le32_to_cpu(resp_a->max_extend_sg);
+- caps->num_qpc_timer = le16_to_cpu(resp_a->num_qpc_timer);
+ caps->max_srq_sges = le16_to_cpu(resp_a->max_srq_sges);
+ caps->max_srq_sges = roundup_pow_of_two(caps->max_srq_sges);
+ caps->num_aeq_vectors = resp_a->num_aeq_vectors;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 7ffb7824d2689..e4b640caee1b7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -36,11 +36,11 @@
+ #include <linux/bitops.h>
+
+ #define HNS_ROCE_V2_MAX_QP_NUM 0x1000
+-#define HNS_ROCE_V2_MAX_QPC_TIMER_NUM 0x200
+ #define HNS_ROCE_V2_MAX_WQE_NUM 0x8000
+ #define HNS_ROCE_V2_MAX_SRQ_WR 0x8000
+ #define HNS_ROCE_V2_MAX_SRQ_SGE 64
+ #define HNS_ROCE_V2_MAX_CQ_NUM 0x100000
++#define HNS_ROCE_V2_MAX_QPC_TIMER_BT_NUM 0x100
+ #define HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM 0x100
+ #define HNS_ROCE_V2_MAX_SRQ_NUM 0x100000
+ #define HNS_ROCE_V2_MAX_CQE_NUM 0x400000
+@@ -83,7 +83,7 @@
+
+ #define HNS_ROCE_V2_QPC_TIMER_ENTRY_SZ PAGE_SIZE
+ #define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ PAGE_SIZE
+-#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED 0xFFFFF000
++#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED 0xFFFF000
+ #define HNS_ROCE_V2_MAX_INNER_MTPT_NUM 2
+ #define HNS_ROCE_INVALID_LKEY 0x0
+ #define HNS_ROCE_INVALID_SGE_LENGTH 0x80000000
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index c8af4ebd7cbd3..4ccb217b2841d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -725,7 +725,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ ret = hns_roce_init_hem_table(hr_dev, &hr_dev->qpc_timer_table,
+ HEM_TYPE_QPC_TIMER,
+ hr_dev->caps.qpc_timer_entry_sz,
+- hr_dev->caps.num_qpc_timer, 1);
++ hr_dev->caps.qpc_timer_bt_num, 1);
+ if (ret) {
+ dev_err(dev,
+ "Failed to init QPC timer memory, aborting.\n");
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 48d3616a6d71d..7bee7f6c5e702 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -462,11 +462,8 @@ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
+ hr_qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge) +
+ hr_qp->rq.rsv_sge);
+
+- if (hr_dev->caps.max_rq_sg <= HNS_ROCE_SGE_IN_WQE)
+- hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz);
+- else
+- hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
+- hr_qp->rq.max_gs);
++ hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
++ hr_qp->rq.max_gs);
+
+ hr_qp->rq.wqe_cnt = cnt;
+ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE &&
+diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
+index daeab5daed5bc..d003ad864ee44 100644
+--- a/drivers/infiniband/hw/irdma/uk.c
++++ b/drivers/infiniband/hw/irdma/uk.c
+@@ -1005,6 +1005,7 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
+ int ret_code;
+ bool move_cq_head = true;
+ u8 polarity;
++ u8 op_type;
+ bool ext_valid;
+ __le64 *ext_cqe;
+
+@@ -1187,7 +1188,6 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
+ do {
+ __le64 *sw_wqe;
+ u64 wqe_qword;
+- u8 op_type;
+ u32 tail;
+
+ tail = qp->sq_ring.tail;
+@@ -1204,6 +1204,8 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
+ break;
+ }
+ } while (1);
++ if (op_type == IRDMA_OP_TYPE_BIND_MW && info->minor_err == FLUSH_PROT_ERR)
++ info->minor_err = FLUSH_MW_BIND_ERR;
+ qp->sq_flush_seen = true;
+ if (!IRDMA_RING_MORE_WORK(qp->sq_ring))
+ qp->sq_flush_complete = true;
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index ab3c5208a1231..f4d774451160d 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -590,11 +590,14 @@ static int irdma_wait_event(struct irdma_pci_f *rf,
+ cqp_error = cqp_request->compl_info.error;
+ if (cqp_error) {
+ err_code = -EIO;
+- if (cqp_request->compl_info.maj_err_code == 0xFFFF &&
+- cqp_request->compl_info.min_err_code == 0x8029) {
+- if (!rf->reset) {
+- rf->reset = true;
+- rf->gen_ops.request_reset(rf);
++ if (cqp_request->compl_info.maj_err_code == 0xFFFF) {
++ if (cqp_request->compl_info.min_err_code == 0x8002)
++ err_code = -EBUSY;
++ else if (cqp_request->compl_info.min_err_code == 0x8029) {
++ if (!rf->reset) {
++ rf->reset = true;
++ rf->gen_ops.request_reset(rf);
++ }
+ }
+ }
+ }
+@@ -2597,7 +2600,7 @@ void irdma_generate_flush_completions(struct irdma_qp *iwqp)
+ spin_unlock_irqrestore(&iwqp->lock, flags2);
+ spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
+ if (compl_generated)
+- irdma_comp_handler(iwqp->iwrcq);
++ irdma_comp_handler(iwqp->iwscq);
+ } else {
+ spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
+ mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush,
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 227a799385d1d..ab73d1715f991 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -39,15 +39,18 @@ static int irdma_query_device(struct ib_device *ibdev,
+ props->max_send_sge = hw_attrs->uk_attrs.max_hw_wq_frags;
+ props->max_recv_sge = hw_attrs->uk_attrs.max_hw_wq_frags;
+ props->max_cq = rf->max_cq - rf->used_cqs;
+- props->max_cqe = rf->max_cqe;
++ props->max_cqe = rf->max_cqe - 1;
+ props->max_mr = rf->max_mr - rf->used_mrs;
+ props->max_mw = props->max_mr;
+ props->max_pd = rf->max_pd - rf->used_pds;
+ props->max_sge_rd = hw_attrs->uk_attrs.max_hw_read_sges;
+ props->max_qp_rd_atom = hw_attrs->max_hw_ird;
+ props->max_qp_init_rd_atom = hw_attrs->max_hw_ord;
+- if (rdma_protocol_roce(ibdev, 1))
++ if (rdma_protocol_roce(ibdev, 1)) {
++ props->device_cap_flags |= IB_DEVICE_RC_RNR_NAK_GEN;
+ props->max_pkeys = IRDMA_PKEY_TBL_SZ;
++ }
++
+ props->max_ah = rf->max_ah;
+ props->max_mcast_grp = rf->max_mcg;
+ props->max_mcast_qp_attach = IRDMA_MAX_MGS_PER_CTX;
+@@ -3001,6 +3004,7 @@ static int irdma_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata)
+ struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc;
+ struct irdma_cqp_request *cqp_request;
+ struct cqp_cmds_info *cqp_info;
++ int status;
+
+ if (iwmr->type != IRDMA_MEMREG_TYPE_MEM) {
+ if (iwmr->region) {
+@@ -3031,8 +3035,11 @@ static int irdma_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata)
+ cqp_info->post_sq = 1;
+ cqp_info->in.u.dealloc_stag.dev = &iwdev->rf->sc_dev;
+ cqp_info->in.u.dealloc_stag.scratch = (uintptr_t)cqp_request;
+- irdma_handle_cqp_op(iwdev->rf, cqp_request);
++ status = irdma_handle_cqp_op(iwdev->rf, cqp_request);
+ irdma_put_cqp_request(&iwdev->rf->cqp, cqp_request);
++ if (status)
++ return status;
++
+ irdma_free_stag(iwdev, iwmr->stag);
+ done:
+ if (iwpbl->pbl_allocated)
+diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
+index 293ed709e5ed5..b4dc52392275b 100644
+--- a/drivers/infiniband/hw/mlx5/mad.c
++++ b/drivers/infiniband/hw/mlx5/mad.c
+@@ -166,6 +166,12 @@ static int process_pma_cmd(struct mlx5_ib_dev *dev, u32 port_num,
+ mdev = dev->mdev;
+ mdev_port_num = 1;
+ }
++ if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
++ /* set local port to one for Function-Per-Port HCA. */
++ mdev = dev->mdev;
++ mdev_port_num = 1;
++ }
++
+ /* Declaring support of extended counters */
+ if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO) {
+ struct ib_class_port_info cpi = {};
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 1f4e60257700e..7d47b521070b1 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -29,7 +29,7 @@ static struct page *siw_get_pblpage(struct siw_mem *mem, u64 addr, int *idx)
+ dma_addr_t paddr = siw_pbl_get_buffer(pbl, offset, NULL, idx);
+
+ if (paddr)
+- return virt_to_page(paddr);
++ return virt_to_page((void *)paddr);
+
+ return NULL;
+ }
+@@ -533,13 +533,23 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
+ kunmap_local(kaddr);
+ }
+ } else {
+- u64 va = sge->laddr + sge_off;
++ /*
++ * Cast to an uintptr_t to preserve all 64 bits
++ * in sge->laddr.
++ */
++ uintptr_t va = (uintptr_t)(sge->laddr + sge_off);
+
+- page_array[seg] = virt_to_page(va & PAGE_MASK);
++ /*
++			 * virt_to_page() takes a (void *) pointer,
++			 * so cast va to (void *); the pointer is
++			 * 64 bits wide on a 64-bit platform and
++			 * 32 bits wide on a 32-bit one.
++ */
++ page_array[seg] = virt_to_page((void *)(va & PAGE_MASK));
+ if (do_crc)
+ crypto_shash_update(
+ c_tx->mpa_crc_hd,
+- (void *)(uintptr_t)va,
++ (void *)va,
+ plen);
+ }
+
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 525f083fcaeb4..bf464400a4409 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -1004,7 +1004,8 @@ rtrs_clt_get_copy_req(struct rtrs_clt_path *alive_path,
+ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
+ struct rtrs_clt_io_req *req,
+ struct rtrs_rbuf *rbuf, bool fr_en,
+- u32 size, u32 imm, struct ib_send_wr *wr,
++ u32 count, u32 size, u32 imm,
++ struct ib_send_wr *wr,
+ struct ib_send_wr *tail)
+ {
+ struct rtrs_clt_path *clt_path = to_clt_path(con->c.path);
+@@ -1024,12 +1025,12 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
+ num_sge = 2;
+ ptail = tail;
+ } else {
+- for_each_sg(req->sglist, sg, req->sg_cnt, i) {
++ for_each_sg(req->sglist, sg, count, i) {
+ sge[i].addr = sg_dma_address(sg);
+ sge[i].length = sg_dma_len(sg);
+ sge[i].lkey = clt_path->s.dev->ib_pd->local_dma_lkey;
+ }
+- num_sge = 1 + req->sg_cnt;
++ num_sge = 1 + count;
+ }
+ sge[i].addr = req->iu->dma_addr;
+ sge[i].length = size;
+@@ -1142,7 +1143,7 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
+ */
+ rtrs_clt_update_all_stats(req, WRITE);
+
+- ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en,
++ ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en, count,
+ req->usr_len + sizeof(*msg),
+ imm, wr, &inv_wr);
+ if (ret) {
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 24024bce25664..ee4876bdce4ac 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -600,7 +600,7 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
+ struct sg_table *sgt = &srv_mr->sgt;
+ struct scatterlist *s;
+ struct ib_mr *mr;
+- int nr, chunks;
++ int nr, nr_sgt, chunks;
+
+ chunks = chunks_per_mr * mri;
+ if (!always_invalidate)
+@@ -615,19 +615,19 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
+ sg_set_page(s, srv->chunks[chunks + i],
+ max_chunk_size, 0);
+
+- nr = ib_dma_map_sg(srv_path->s.dev->ib_dev, sgt->sgl,
++ nr_sgt = ib_dma_map_sg(srv_path->s.dev->ib_dev, sgt->sgl,
+ sgt->nents, DMA_BIDIRECTIONAL);
+- if (nr < sgt->nents) {
+- err = nr < 0 ? nr : -EINVAL;
++ if (!nr_sgt) {
++ err = -EINVAL;
+ goto free_sg;
+ }
+ mr = ib_alloc_mr(srv_path->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
+- sgt->nents);
++ nr_sgt);
+ if (IS_ERR(mr)) {
+ err = PTR_ERR(mr);
+ goto unmap_sg;
+ }
+- nr = ib_map_mr_sg(mr, sgt->sgl, sgt->nents,
++ nr = ib_map_mr_sg(mr, sgt->sgl, nr_sgt,
+ NULL, max_chunk_size);
+ if (nr < 0 || nr < sgt->nents) {
+ err = nr < 0 ? nr : -EINVAL;
+@@ -646,7 +646,7 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
+ }
+ }
+ /* Eventually dma addr for each chunk can be cached */
+- for_each_sg(sgt->sgl, s, sgt->orig_nents, i)
++ for_each_sg(sgt->sgl, s, nr_sgt, i)
+ srv_path->dma_addr[chunks + i] = sg_dma_address(s);
+
+ ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 6058abf42ba74..3d9c108d73ad8 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -1962,7 +1962,8 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ if (scmnd) {
+ req = scsi_cmd_priv(scmnd);
+ scmnd = srp_claim_req(ch, req, NULL, scmnd);
+- } else {
++ }
++ if (!scmnd) {
+ shost_printk(KERN_ERR, target->scsi_host,
+ "Null scmnd for RSP w/tag %#016llx received on ch %td / QP %#x\n",
+ rsp->tag, ch - target->ch, ch->qp->qp_num);
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 840831d5d2ad9..a0924144bac80 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -874,7 +874,8 @@ static void build_completion_wait(struct iommu_cmd *cmd,
+ memset(cmd, 0, sizeof(*cmd));
+ cmd->data[0] = lower_32_bits(paddr) | CMD_COMPL_WAIT_STORE_MASK;
+ cmd->data[1] = upper_32_bits(paddr);
+- cmd->data[2] = data;
++ cmd->data[2] = lower_32_bits(data);
++ cmd->data[3] = upper_32_bits(data);
+ CMD_SET_TYPE(cmd, CMD_COMPL_WAIT);
+ }
+
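
The completion-wait fix stores the full 64-bit data value across two 32-bit command words instead of truncating it; lower_32_bits()/upper_32_bits() are plain masks and shifts. A sketch (the macros below are userspace stand-ins for the kernel helpers):

    #include <stdint.h>
    #include <stdio.h>

    /* Userspace stand-ins for the kernel's lower/upper_32_bits(). */
    #define lower_32_bits(n) ((uint32_t)((n) & 0xffffffffULL))
    #define upper_32_bits(n) ((uint32_t)((n) >> 32))

    int main(void)
    {
            uint64_t data = 0x1122334455667788ULL;

            /* data[2] and data[3] of the IOMMU command, as in the hunk. */
            printf("low  = 0x%08x\n", lower_32_bits(data));   /* 0x55667788 */
            printf("high = 0x%08x\n", upper_32_bits(data));   /* 0x11223344 */
            return 0;
    }
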
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index afb3efd565b78..f3e2689787ae5 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -786,6 +786,8 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
+ if (dev_state->domain == NULL)
+ goto out_free_states;
+
++ /* See iommu_is_default_domain() */
++ dev_state->domain->type = IOMMU_DOMAIN_IDENTITY;
+ amd_iommu_domain_direct_map(dev_state->domain);
+
+ ret = amd_iommu_domain_enable_v2(dev_state->domain, pasids);
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 64b14ac4c7b02..fc8c1420c0b69 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -2368,6 +2368,13 @@ static int dmar_device_hotplug(acpi_handle handle, bool insert)
+ if (!dmar_in_use())
+ return 0;
+
++ /*
++ * It's unlikely that any I/O board is hot added before the IOMMU
++ * subsystem is initialized.
++ */
++ if (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled)
++ return -EOPNOTSUPP;
++
+ if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {
+ tmp = handle;
+ } else {
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 5c0dce78586aa..40ac3a78d90ef 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -422,14 +422,36 @@ static inline int domain_pfn_supported(struct dmar_domain *domain,
+ return !(addr_width < BITS_PER_LONG && pfn >> addr_width);
+ }
+
++/*
++ * Calculate the Supported Adjusted Guest Address Widths of an IOMMU.
++ * Refer to 11.4.2 of the VT-d spec for the encoding of each bit of
++ * the returned SAGAW.
++ */
++static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
++{
++ unsigned long fl_sagaw, sl_sagaw;
++
++ fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
++ sl_sagaw = cap_sagaw(iommu->cap);
++
++ /* Second level only. */
++ if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++ return sl_sagaw;
++
++ /* First level only. */
++ if (!ecap_slts(iommu->ecap))
++ return fl_sagaw;
++
++ return fl_sagaw & sl_sagaw;
++}
++
+ static int __iommu_calculate_agaw(struct intel_iommu *iommu, int max_gaw)
+ {
+ unsigned long sagaw;
+ int agaw;
+
+- sagaw = cap_sagaw(iommu->cap);
+- for (agaw = width_to_agaw(max_gaw);
+- agaw >= 0; agaw--) {
++ sagaw = __iommu_calculate_sagaw(iommu);
++ for (agaw = width_to_agaw(max_gaw); agaw >= 0; agaw--) {
+ if (test_bit(agaw, &sagaw))
+ break;
+ }
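
Each set bit in a SAGAW mask stands for one supported adjusted guest address width (per the VT-d spec section the comment cites), so intersecting the first- and second-level masks leaves only the widths both page-table formats can handle. A small illustration with hypothetical capability values:

    #include <stdio.h>

    #define BIT(n) (1UL << (n))

    int main(void)
    {
            /* Hypothetical: first level supports the widths encoded by
             * bits 2 and 3, second level only the bit-2 width. */
            unsigned long fl_sagaw = BIT(2) | BIT(3);
            unsigned long sl_sagaw = BIT(2);

            /* With both formats in use, only the common width survives. */
            printf("sagaw = 0x%lx\n", fl_sagaw & sl_sagaw);   /* 0x4 */
            return 0;
    }
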
+@@ -3123,13 +3145,7 @@ static int __init init_dmars(void)
+
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
+- /*
+- * Call dmar_alloc_hwirq() with dmar_global_lock held,
+- * could cause possible lock race condition.
+- */
+- up_write(&dmar_global_lock);
+ ret = intel_svm_enable_prq(iommu);
+- down_write(&dmar_global_lock);
+ if (ret)
+ goto free_iommu;
+ }
+@@ -4035,7 +4051,6 @@ int __init intel_iommu_init(void)
+ force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) ||
+ platform_optin_force_iommu();
+
+- down_write(&dmar_global_lock);
+ if (dmar_table_init()) {
+ if (force_on)
+ panic("tboot: Failed to initialize DMAR table\n");
+@@ -4048,16 +4063,6 @@ int __init intel_iommu_init(void)
+ goto out_free_dmar;
+ }
+
+- up_write(&dmar_global_lock);
+-
+- /*
+- * The bus notifier takes the dmar_global_lock, so lockdep will
+- * complain later when we register it under the lock.
+- */
+- dmar_register_bus_notifier();
+-
+- down_write(&dmar_global_lock);
+-
+ if (!no_iommu)
+ intel_iommu_debugfs_init();
+
+@@ -4105,11 +4110,9 @@ int __init intel_iommu_init(void)
+ pr_err("Initialization failed\n");
+ goto out_free_dmar;
+ }
+- up_write(&dmar_global_lock);
+
+ init_iommu_pm_ops();
+
+- down_read(&dmar_global_lock);
+ for_each_active_iommu(iommu, drhd) {
+ /*
+ * The flush queue implementation does not perform
+@@ -4127,13 +4130,11 @@ int __init intel_iommu_init(void)
+ "%s", iommu->name);
+ iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL);
+ }
+- up_read(&dmar_global_lock);
+
+ bus_set_iommu(&pci_bus_type, &intel_iommu_ops);
+ if (si_domain && !hw_pass_through)
+ register_memory_notifier(&intel_iommu_memory_nb);
+
+- down_read(&dmar_global_lock);
+ if (probe_acpi_namespace_devices())
+ pr_warn("ACPI name space devices didn't probe correctly\n");
+
+@@ -4144,17 +4145,15 @@ int __init intel_iommu_init(void)
+
+ iommu_disable_protect_mem_regions(iommu);
+ }
+- up_read(&dmar_global_lock);
+-
+- pr_info("Intel(R) Virtualization Technology for Directed I/O\n");
+
+ intel_iommu_enabled = 1;
++ dmar_register_bus_notifier();
++ pr_info("Intel(R) Virtualization Technology for Directed I/O\n");
+
+ return 0;
+
+ out_free_dmar:
+ intel_iommu_free_dmars();
+- up_write(&dmar_global_lock);
+ return ret;
+ }
+
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 847ad47a2dfd3..f113833c3075c 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -3089,6 +3089,24 @@ out:
+ return ret;
+ }
+
++static bool iommu_is_default_domain(struct iommu_group *group)
++{
++ if (group->domain == group->default_domain)
++ return true;
++
++ /*
++ * If the default domain was set to identity and it is still an identity
++ * domain then we consider this a pass. This happens because
++ * amd_iommu_init_device() replaces the default identity domain with an
++ * identity domain that has a different configuration for AMDGPU.
++ */
++ if (group->default_domain &&
++ group->default_domain->type == IOMMU_DOMAIN_IDENTITY &&
++ group->domain && group->domain->type == IOMMU_DOMAIN_IDENTITY)
++ return true;
++ return false;
++}
++
+ /**
+ * iommu_device_use_default_domain() - Device driver wants to handle device
+ * DMA through the kernel DMA API.
+@@ -3107,8 +3125,7 @@ int iommu_device_use_default_domain(struct device *dev)
+
+ mutex_lock(&group->mutex);
+ if (group->owner_cnt) {
+- if (group->domain != group->default_domain ||
+- group->owner) {
++ if (group->owner || !iommu_is_default_domain(group)) {
+ ret = -EBUSY;
+ goto unlock_out;
+ }
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index 25be4b822aa07..bf340d779c10b 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -1006,7 +1006,18 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+ return iommu_fwspec_add_ids(dev, args->args, 1);
+ }
+
++static bool viommu_capable(enum iommu_cap cap)
++{
++ switch (cap) {
++ case IOMMU_CAP_CACHE_COHERENCY:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ static struct iommu_ops viommu_ops = {
++ .capable = viommu_capable,
+ .domain_alloc = viommu_domain_alloc,
+ .probe_device = viommu_probe_device,
+ .probe_finalize = viommu_probe_finalize,
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 91e7e80fce489..25d18b67a1620 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5647,6 +5647,7 @@ static int md_alloc(dev_t dev, char *name)
+ * removed (mddev_delayed_delete).
+ */
+ flush_workqueue(md_misc_wq);
++ flush_workqueue(md_rdev_misc_wq);
+
+ mutex_lock(&disks_mutex);
+ mddev = mddev_alloc(dev);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 6ba4c83fe5fc0..bff0bfd10e235 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1974,6 +1974,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ for (i = 0; i < BOND_MAX_ARP_TARGETS; i++)
+ new_slave->target_last_arp_rx[i] = new_slave->last_rx;
+
++ new_slave->last_tx = new_slave->last_rx;
++
+ if (bond->params.miimon && !bond->params.use_carrier) {
+ link_reporting = bond_check_dev_link(bond, slave_dev, 1);
+
+@@ -2857,8 +2859,11 @@ static void bond_arp_send(struct slave *slave, int arp_op, __be32 dest_ip,
+ return;
+ }
+
+- if (bond_handle_vlan(slave, tags, skb))
++ if (bond_handle_vlan(slave, tags, skb)) {
++ slave_update_last_tx(slave);
+ arp_xmit(skb);
++ }
++
+ return;
+ }
+
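The bonding changes above stop deriving probe freshness from dev_trans_start()
and instead stamp a per-slave timestamp at every ARP/NS probe transmit. A
minimal standalone model of that pattern, with jiffies reduced to a plain
counter and the helper names mirroring the slave_update_last_tx() and
slave_last_tx() pair the patch relies on:

#include <stdbool.h>
#include <stdio.h>

static unsigned long jiffies;	/* stand-in for the kernel tick counter */

struct slave {
	unsigned long last_tx;	/* stamped when we transmit a probe */
	unsigned long last_rx;
};

static void slave_update_last_tx(struct slave *s) { s->last_tx = jiffies; }
static unsigned long slave_last_tx(const struct slave *s) { return s->last_tx; }

static bool time_in_interval(unsigned long when, unsigned long delta)
{
	return jiffies - when <= delta;
}

int main(void)
{
	struct slave s = { 0 };

	jiffies = 100;
	slave_update_last_tx(&s);	/* done where the ARP/NS probe is sent */
	jiffies = 150;
	printf("fresh within 100 ticks: %d\n",
	       time_in_interval(slave_last_tx(&s), 100));	/* -> 1 */
	return 0;
}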
+@@ -3047,8 +3052,7 @@ static int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond,
+ curr_active_slave->last_link_up))
+ bond_validate_arp(bond, slave, tip, sip);
+ else if (curr_arp_slave && (arp->ar_op == htons(ARPOP_REPLY)) &&
+- bond_time_in_interval(bond,
+- dev_trans_start(curr_arp_slave->dev), 1))
++ bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
+ bond_validate_arp(bond, slave, sip, tip);
+
+ out_unlock:
+@@ -3076,8 +3080,10 @@ static void bond_ns_send(struct slave *slave, const struct in6_addr *daddr,
+ }
+
+ addrconf_addr_solict_mult(daddr, &mcaddr);
+- if (bond_handle_vlan(slave, tags, skb))
++ if (bond_handle_vlan(slave, tags, skb)) {
++ slave_update_last_tx(slave);
+ ndisc_send_skb(skb, &mcaddr, saddr);
++ }
+ }
+
+ static void bond_ns_send_all(struct bonding *bond, struct slave *slave)
+@@ -3134,6 +3140,9 @@ static void bond_ns_send_all(struct bonding *bond, struct slave *slave)
+ found:
+ if (!ipv6_dev_get_saddr(dev_net(dst->dev), dst->dev, &targets[i], 0, &saddr))
+ bond_ns_send(slave, &targets[i], &saddr, tags);
++ else
++ bond_ns_send(slave, &targets[i], &in6addr_any, tags);
++
+ dst_release(dst);
+ kfree(tags);
+ }
+@@ -3165,12 +3174,19 @@ static bool bond_has_this_ip6(struct bonding *bond, struct in6_addr *addr)
+ return ret;
+ }
+
+-static void bond_validate_ns(struct bonding *bond, struct slave *slave,
++static void bond_validate_na(struct bonding *bond, struct slave *slave,
+ struct in6_addr *saddr, struct in6_addr *daddr)
+ {
+ int i;
+
+- if (ipv6_addr_any(saddr) || !bond_has_this_ip6(bond, daddr)) {
++ /* Ignore NAs whose:
++ * 1. source address is the unspecified address, or
++ * 2. destination address is neither the all-nodes multicast
++ * address nor an address on the bond interface.
++ */
++ if (ipv6_addr_any(saddr) ||
++ (!ipv6_addr_equal(daddr, &in6addr_linklocal_allnodes) &&
++ !bond_has_this_ip6(bond, daddr))) {
+ slave_dbg(bond->dev, slave->dev, "%s: sip %pI6c tip %pI6c not found\n",
+ __func__, saddr, daddr);
+ return;
+@@ -3213,15 +3229,14 @@ static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond,
+ * see bond_arp_rcv().
+ */
+ if (bond_is_active_slave(slave))
+- bond_validate_ns(bond, slave, saddr, daddr);
++ bond_validate_na(bond, slave, saddr, daddr);
+ else if (curr_active_slave &&
+ time_after(slave_last_rx(bond, curr_active_slave),
+ curr_active_slave->last_link_up))
+- bond_validate_ns(bond, slave, saddr, daddr);
++ bond_validate_na(bond, slave, saddr, daddr);
+ else if (curr_arp_slave &&
+- bond_time_in_interval(bond,
+- dev_trans_start(curr_arp_slave->dev), 1))
+- bond_validate_ns(bond, slave, saddr, daddr);
++ bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
++ bond_validate_na(bond, slave, saddr, daddr);
+
+ out:
+ return RX_HANDLER_ANOTHER;
+@@ -3308,12 +3323,12 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
+ * so it can wait
+ */
+ bond_for_each_slave_rcu(bond, slave, iter) {
+- unsigned long trans_start = dev_trans_start(slave->dev);
++ unsigned long last_tx = slave_last_tx(slave);
+
+ bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+
+ if (slave->link != BOND_LINK_UP) {
+- if (bond_time_in_interval(bond, trans_start, 1) &&
++ if (bond_time_in_interval(bond, last_tx, 1) &&
+ bond_time_in_interval(bond, slave->last_rx, 1)) {
+
+ bond_propose_link_state(slave, BOND_LINK_UP);
+@@ -3338,7 +3353,7 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
+ * when the source ip is 0, so don't take the link down
+ * if we don't know our ip yet
+ */
+- if (!bond_time_in_interval(bond, trans_start, bond->params.missed_max) ||
++ if (!bond_time_in_interval(bond, last_tx, bond->params.missed_max) ||
+ !bond_time_in_interval(bond, slave->last_rx, bond->params.missed_max)) {
+
+ bond_propose_link_state(slave, BOND_LINK_DOWN);
+@@ -3404,7 +3419,7 @@ re_arm:
+ */
+ static int bond_ab_arp_inspect(struct bonding *bond)
+ {
+- unsigned long trans_start, last_rx;
++ unsigned long last_tx, last_rx;
+ struct list_head *iter;
+ struct slave *slave;
+ int commit = 0;
+@@ -3455,9 +3470,9 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ * - (more than missed_max*delta since receive AND
+ * the bond has an IP address)
+ */
+- trans_start = dev_trans_start(slave->dev);
++ last_tx = slave_last_tx(slave);
+ if (bond_is_active_slave(slave) &&
+- (!bond_time_in_interval(bond, trans_start, bond->params.missed_max) ||
++ (!bond_time_in_interval(bond, last_tx, bond->params.missed_max) ||
+ !bond_time_in_interval(bond, last_rx, bond->params.missed_max))) {
+ bond_propose_link_state(slave, BOND_LINK_DOWN);
+ commit++;
+@@ -3474,8 +3489,8 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ */
+ static void bond_ab_arp_commit(struct bonding *bond)
+ {
+- unsigned long trans_start;
+ struct list_head *iter;
++ unsigned long last_tx;
+ struct slave *slave;
+
+ bond_for_each_slave(bond, slave, iter) {
+@@ -3484,10 +3499,10 @@ static void bond_ab_arp_commit(struct bonding *bond)
+ continue;
+
+ case BOND_LINK_UP:
+- trans_start = dev_trans_start(slave->dev);
++ last_tx = slave_last_tx(slave);
+ if (rtnl_dereference(bond->curr_active_slave) != slave ||
+ (!rtnl_dereference(bond->curr_active_slave) &&
+- bond_time_in_interval(bond, trans_start, 1))) {
++ bond_time_in_interval(bond, last_tx, 1))) {
+ struct slave *current_arp_slave;
+
+ current_arp_slave = rtnl_dereference(bond->current_arp_slave);
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 6439b56f381f9..517bc3922ee24 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -16,11 +16,13 @@
+ #include <linux/iopoll.h>
+ #include <linux/mdio.h>
+ #include <linux/pci.h>
++#include <linux/time.h>
+ #include "felix.h"
+
+ #define VSC9959_NUM_PORTS 6
+
+ #define VSC9959_TAS_GCL_ENTRY_MAX 63
++#define VSC9959_TAS_MIN_GATE_LEN_NS 33
+ #define VSC9959_VCAP_POLICER_BASE 63
+ #define VSC9959_VCAP_POLICER_MAX 383
+ #define VSC9959_SWITCH_PCI_BAR 4
+@@ -1410,6 +1412,23 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ mdiobus_free(felix->imdio);
+ }
+
++/* The switch considers any frame (regardless of size) as eligible for
++ * transmission if the traffic class gate is open for at least 33 ns.
++ * Overruns are prevented by cropping an interval at the end of the gate time
++ * slot for which egress scheduling is blocked, but we need to still keep 33 ns
++ * available for one packet to be transmitted, otherwise the port tc will hang.
++ * This function returns the size of a gate interval that remains available for
++ * setting the guard band, after reserving the space for one egress frame.
++ */
++static u64 vsc9959_tas_remaining_gate_len_ps(u64 gate_len_ns)
++{
++ /* Gate always open */
++ if (gate_len_ns == U64_MAX)
++ return U64_MAX;
++
++ return (gate_len_ns - VSC9959_TAS_MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
++}
++
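A standalone sketch of the guard-band arithmetic above: reserve 33 ns of the
shortest gate-open interval for one frame, then derive max_sdu from whatever
remains. The 10 us window and the 8000 ps/byte figure (which corresponds to
1 Gbps) are assumptions for the example:

#include <stdint.h>
#include <stdio.h>

#define PSEC_PER_NSEC	1000ULL
#define MIN_GATE_LEN_NS	33ULL

static uint64_t remaining_gate_len_ps(uint64_t gate_len_ns)
{
	if (gate_len_ns == UINT64_MAX)	/* gate always open */
		return UINT64_MAX;
	return (gate_len_ns - MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
}

int main(void)
{
	uint64_t gate_ns = 10000;	/* hypothetical 10 us gate window */
	uint64_t picos_per_byte = 8000;	/* 1 Gbps */
	uint64_t rem_ps = remaining_gate_len_ps(gate_ns);

	/* mirrors: max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte) */
	printf("max_sdu = %llu bytes\n",
	       (unsigned long long)(rem_ps / picos_per_byte));	/* -> 1245 */
	return 0;
}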
+ /* Extract shortest continuous gate open intervals in ns for each traffic class
+ * of a cyclic tc-taprio schedule. If a gate is always open, the duration is
+ * considered U64_MAX. If the gate is always closed, it is considered 0.
+@@ -1471,6 +1490,65 @@ static void vsc9959_tas_min_gate_lengths(struct tc_taprio_qopt_offload *taprio,
+ min_gate_len[tc] = 0;
+ }
+
++/* ocelot_write_rix is a macro that concatenates QSYS_MAXSDU_CFG_* with _RSZ,
++ * so we need to spell out the register access for each traffic class in
++ * helper functions, to simplify the callers.
++ */
++static void vsc9959_port_qmaxsdu_set(struct ocelot *ocelot, int port, int tc,
++ u32 max_sdu)
++{
++ switch (tc) {
++ case 0:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0,
++ port);
++ break;
++ case 1:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1,
++ port);
++ break;
++ case 2:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2,
++ port);
++ break;
++ case 3:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3,
++ port);
++ break;
++ case 4:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4,
++ port);
++ break;
++ case 5:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5,
++ port);
++ break;
++ case 6:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6,
++ port);
++ break;
++ case 7:
++ ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7,
++ port);
++ break;
++ }
++}
++
++static u32 vsc9959_port_qmaxsdu_get(struct ocelot *ocelot, int port, int tc)
++{
++ switch (tc) {
++ case 0: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_0, port);
++ case 1: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_1, port);
++ case 2: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_2, port);
++ case 3: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_3, port);
++ case 4: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_4, port);
++ case 5: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_5, port);
++ case 6: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_6, port);
++ case 7: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_7, port);
++ default:
++ return 0;
++ }
++}
++
+ /* Update QSYS_PORT_MAX_SDU to make sure the static guard bands added by the
+ * switch (see the ALWAYS_GUARD_BAND_SCH_Q comment) are correct at all MTU
+ * values (the default value is 1518). Also, for traffic class windows smaller
+@@ -1527,11 +1605,16 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+
+ vsc9959_tas_min_gate_lengths(ocelot_port->taprio, min_gate_len);
+
++ mutex_lock(&ocelot->fwd_domain_lock);
++
+ for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
++ u64 remaining_gate_len_ps;
+ u32 max_sdu;
+
+- if (min_gate_len[tc] == U64_MAX /* Gate always open */ ||
+- min_gate_len[tc] * 1000 > needed_bit_time_ps) {
++ remaining_gate_len_ps =
++ vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);
++
++ if (remaining_gate_len_ps > needed_bit_time_ps) {
+ /* Setting QMAXSDU_CFG to 0 disables oversized frame
+ * dropping.
+ */
+@@ -1544,9 +1627,15 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+ /* If traffic class doesn't support a full MTU sized
+ * frame, make sure to enable oversize frame dropping
+ * for frames larger than the smallest that would fit.
++ *
++ * However, the exact same register, QSYS_QMAXSDU_CFG_*,
++ * controls not only oversized frame dropping, but also
++ * per-tc static guard band lengths, so it reduces the
++ * useful gate interval length. Therefore, be careful
++ * to calculate a guard band (and therefore max_sdu)
++ * that still leaves 33 ns available in the time slot.
+ */
+- max_sdu = div_u64(min_gate_len[tc] * 1000,
+- picos_per_byte);
++ max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte);
+ /* A TC gate may be completely closed, which is a
+ * special case where all packets are oversized.
+ * Any limit smaller than 64 octets accomplishes this
+@@ -1569,47 +1658,14 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+ max_sdu);
+ }
+
+- /* ocelot_write_rix is a macro that concatenates
+- * QSYS_MAXSDU_CFG_* with _RSZ, so we need to spell out
+- * the writes to each traffic class
+- */
+- switch (tc) {
+- case 0:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0,
+- port);
+- break;
+- case 1:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1,
+- port);
+- break;
+- case 2:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2,
+- port);
+- break;
+- case 3:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3,
+- port);
+- break;
+- case 4:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4,
+- port);
+- break;
+- case 5:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5,
+- port);
+- break;
+- case 6:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6,
+- port);
+- break;
+- case 7:
+- ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7,
+- port);
+- break;
+- }
++ vsc9959_port_qmaxsdu_set(ocelot, port, tc, max_sdu);
+ }
+
+ ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
++
++ ocelot->ops->cut_through_fwd(ocelot);
++
++ mutex_unlock(&ocelot->fwd_domain_lock);
+ }
+
+ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+@@ -1636,13 +1692,13 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+ break;
+ }
+
++ mutex_lock(&ocelot->tas_lock);
++
+ ocelot_rmw_rix(ocelot,
+ QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
+ QSYS_TAG_CONFIG_LINK_SPEED_M,
+ QSYS_TAG_CONFIG, port);
+
+- mutex_lock(&ocelot->tas_lock);
+-
+ if (ocelot_port->taprio)
+ vsc9959_tas_guard_bands_update(ocelot, port);
+
+@@ -2709,7 +2765,7 @@ static void vsc9959_cut_through_fwd(struct ocelot *ocelot)
+ {
+ struct felix *felix = ocelot_to_felix(ocelot);
+ struct dsa_switch *ds = felix->ds;
+- int port, other_port;
++ int tc, port, other_port;
+
+ lockdep_assert_held(&ocelot->fwd_domain_lock);
+
+@@ -2753,19 +2809,27 @@ static void vsc9959_cut_through_fwd(struct ocelot *ocelot)
+ min_speed = other_ocelot_port->speed;
+ }
+
+- /* Enable cut-through forwarding for all traffic classes. */
+- if (ocelot_port->speed == min_speed)
++ /* Enable cut-through forwarding for all traffic classes that
++ * don't have oversized dropping enabled, since this check is
++ * bypassed in cut-through mode.
++ */
++ if (ocelot_port->speed == min_speed) {
+ val = GENMASK(7, 0);
+
++ for (tc = 0; tc < OCELOT_NUM_TC; tc++)
++ if (vsc9959_port_qmaxsdu_get(ocelot, port, tc))
++ val &= ~BIT(tc);
++ }
++
+ set:
+ tmp = ocelot_read_rix(ocelot, ANA_CUT_THRU_CFG, port);
+ if (tmp == val)
+ continue;
+
+ dev_dbg(ocelot->dev,
+- "port %d fwd mask 0x%lx speed %d min_speed %d, %s cut-through forwarding\n",
++ "port %d fwd mask 0x%lx speed %d min_speed %d, %s cut-through forwarding on TC mask 0x%x\n",
+ port, mask, ocelot_port->speed, min_speed,
+- val ? "enabling" : "disabling");
++ val ? "enabling" : "disabling", val);
+
+ ocelot_write_rix(ocelot, val, ANA_CUT_THRU_CFG, port);
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 407fe8f340a06..c5b61bc80f783 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -1291,4 +1291,18 @@ int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+ struct i40e_cloud_filter *filter,
+ bool add);
++
++/**
++ * i40e_is_tc_mqprio_enabled - check if TC MQPRIO is enabled on PF
++ * @pf: pointer to a PF
++ *
++ * Check and return the value of the I40E_FLAG_TC_MQPRIO flag.
++ *
++ * Return: I40E_FLAG_TC_MQPRIO set state.
++ **/
++static inline u32 i40e_is_tc_mqprio_enabled(struct i40e_pf *pf)
++{
++ return pf->flags & I40E_FLAG_TC_MQPRIO;
++}
++
+ #endif /* _I40E_H_ */
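The new helper is a plain predicate wrapper, so call sites stop open-coding
the bitmask test and the flag's storage can change in one place. A sketch of
the pattern; the bit position chosen here is hypothetical:

#include <stdint.h>
#include <stdio.h>

#define FLAG_TC_MQPRIO	(1u << 4)	/* hypothetical bit position */

struct pf {
	uint32_t flags;
};

static inline uint32_t is_tc_mqprio_enabled(const struct pf *pf)
{
	return pf->flags & FLAG_TC_MQPRIO;
}

int main(void)
{
	struct pf pf = { .flags = FLAG_TC_MQPRIO };

	if (is_tc_mqprio_enabled(&pf))	/* was: pf->flags & FLAG_TC_MQPRIO */
		puts("mqprio on");
	return 0;
}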
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index ea2bb0140a6eb..10d7a982a5b9b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -177,6 +177,10 @@ void i40e_notify_client_of_netdev_close(struct i40e_vsi *vsi, bool reset)
+ "Cannot locate client instance close routine\n");
+ return;
+ }
++ if (!test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state)) {
++ dev_dbg(&pf->pdev->dev, "Client is not open, abort close\n");
++ return;
++ }
+ cdev->client->ops->close(&cdev->lan_info, cdev->client, reset);
+ clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state);
+ i40e_client_release_qvlist(&cdev->lan_info);
+@@ -429,7 +433,6 @@ void i40e_client_subtask(struct i40e_pf *pf)
+ /* Remove failed client instance */
+ clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
+ &cdev->state);
+- i40e_client_del_instance(pf);
+ return;
+ }
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 22a61802a4027..ed9984f1e1b9f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -4931,7 +4931,7 @@ static int i40e_set_channels(struct net_device *dev,
+ /* We do not support setting channels via ethtool when TCs are
+ * configured through mqprio
+ */
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ return -EINVAL;
+
+ /* verify they are not requesting separate vectors */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 71a8e1698ed48..1aaf0c5ddf6cf 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5339,7 +5339,7 @@ static u8 i40e_pf_get_num_tc(struct i40e_pf *pf)
+ u8 num_tc = 0;
+ struct i40e_dcbx_config *dcbcfg = &hw->local_dcbx_config;
+
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ return pf->vsi[pf->lan_vsi]->mqprio_qopt.qopt.num_tc;
+
+ /* If neither MQPRIO nor DCB is enabled, then always use single TC */
+@@ -5371,7 +5371,7 @@ static u8 i40e_pf_get_num_tc(struct i40e_pf *pf)
+ **/
+ static u8 i40e_pf_get_tc_map(struct i40e_pf *pf)
+ {
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ return i40e_mqprio_get_enabled_tc(pf);
+
+ /* If neither MQPRIO nor DCB is enabled for this PF then just return
+@@ -5468,7 +5468,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ int i;
+
+ /* There is no need to reset BW when mqprio mode is on. */
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ return 0;
+ if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+@@ -5540,7 +5540,7 @@ static void i40e_vsi_config_netdev_tc(struct i40e_vsi *vsi, u8 enabled_tc)
+ vsi->tc_config.tc_info[i].qoffset);
+ }
+
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ return;
+
+ /* Assign UP2TC map for the VSI */
+@@ -5701,7 +5701,7 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
+ ctxt.vf_num = 0;
+ ctxt.uplink_seid = vsi->uplink_seid;
+ ctxt.info = vsi->info;
+- if (vsi->back->flags & I40E_FLAG_TC_MQPRIO) {
++ if (i40e_is_tc_mqprio_enabled(pf)) {
+ ret = i40e_vsi_setup_queue_map_mqprio(vsi, &ctxt, enabled_tc);
+ if (ret)
+ goto out;
+@@ -6425,7 +6425,7 @@ int i40e_create_queue_channel(struct i40e_vsi *vsi,
+ pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+
+ if (vsi->type == I40E_VSI_MAIN) {
+- if (pf->flags & I40E_FLAG_TC_MQPRIO)
++ if (i40e_is_tc_mqprio_enabled(pf))
+ i40e_do_reset(pf, I40E_PF_RESET_FLAG, true);
+ else
+ i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
+@@ -6536,6 +6536,9 @@ static int i40e_configure_queue_channels(struct i40e_vsi *vsi)
+ vsi->tc_seid_map[i] = ch->seid;
+ }
+ }
++
++ /* reset to reconfigure TX queue contexts */
++ i40e_do_reset(vsi->back, I40E_PF_RESET_FLAG, true);
+ return ret;
+
+ err_free:
+@@ -7819,7 +7822,7 @@ static void *i40e_fwd_add(struct net_device *netdev, struct net_device *vdev)
+ netdev_info(netdev, "Macvlans are not supported when DCB is enabled\n");
+ return ERR_PTR(-EINVAL);
+ }
+- if ((pf->flags & I40E_FLAG_TC_MQPRIO)) {
++ if (i40e_is_tc_mqprio_enabled(pf)) {
+ netdev_info(netdev, "Macvlans are not supported when HW TC offload is on\n");
+ return ERR_PTR(-EINVAL);
+ }
+@@ -8072,7 +8075,7 @@ config_tc:
+ /* Quiesce VSI queues */
+ i40e_quiesce_vsi(vsi);
+
+- if (!hw && !(pf->flags & I40E_FLAG_TC_MQPRIO))
++ if (!hw && !i40e_is_tc_mqprio_enabled(pf))
+ i40e_remove_queue_channels(vsi);
+
+ /* Configure VSI for enabled TCs */
+@@ -8096,7 +8099,7 @@ config_tc:
+ "Setup channel (id:%u) utilizing num_queues %d\n",
+ vsi->seid, vsi->tc_config.tc_info[0].qcount);
+
+- if (pf->flags & I40E_FLAG_TC_MQPRIO) {
++ if (i40e_is_tc_mqprio_enabled(pf)) {
+ if (vsi->mqprio_qopt.max_rate[0]) {
+ u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
+
+@@ -10750,7 +10753,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ * unless I40E_FLAG_TC_MQPRIO was enabled or DCB
+ * is not supported with new link speed
+ */
+- if (pf->flags & I40E_FLAG_TC_MQPRIO) {
++ if (i40e_is_tc_mqprio_enabled(pf)) {
+ i40e_aq_set_dcb_parameters(hw, false, NULL);
+ } else {
+ if (I40E_IS_X710TL_DEVICE(hw->device_id) &&
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index af69ccc6e8d2f..07f1e209d524d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3689,7 +3689,8 @@ u16 i40e_lan_select_queue(struct net_device *netdev,
+ u8 prio;
+
+ /* is DCB enabled at all? */
+- if (vsi->tc_config.numtc == 1)
++ if (vsi->tc_config.numtc == 1 ||
++ i40e_is_tc_mqprio_enabled(vsi->back))
+ return netdev_pick_tx(netdev, skb, sb_dev);
+
+ prio = skb->priority;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 6d159334da9ec..981c43b204ff4 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2789,6 +2789,11 @@ static void iavf_reset_task(struct work_struct *work)
+ int i = 0, err;
+ bool running;
+
++ /* Detach interface to avoid subsequent NDO callbacks */
++ rtnl_lock();
++ netif_device_detach(netdev);
++ rtnl_unlock();
++
+ /* When device is being removed it doesn't make sense to run the reset
+ * task, just return in such a case.
+ */
+@@ -2796,7 +2801,7 @@ static void iavf_reset_task(struct work_struct *work)
+ if (adapter->state != __IAVF_REMOVE)
+ queue_work(iavf_wq, &adapter->reset_task);
+
+- return;
++ goto reset_finish;
+ }
+
+ while (!mutex_trylock(&adapter->client_lock))
+@@ -2866,7 +2871,6 @@ continue_reset:
+
+ if (running) {
+ netif_carrier_off(netdev);
+- netif_tx_stop_all_queues(netdev);
+ adapter->link_up = false;
+ iavf_napi_disable_all(adapter);
+ }
+@@ -2996,7 +3000,7 @@ continue_reset:
+ mutex_unlock(&adapter->client_lock);
+ mutex_unlock(&adapter->crit_lock);
+
+- return;
++ goto reset_finish;
+ reset_err:
+ if (running) {
+ set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+@@ -3007,6 +3011,10 @@ reset_err:
+ mutex_unlock(&adapter->client_lock);
+ mutex_unlock(&adapter->crit_lock);
+ dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
++reset_finish:
++ rtnl_lock();
++ netif_device_attach(netdev);
++ rtnl_unlock();
+ }
+
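The iavf edits bracket the whole reset with a netif detach/attach pair and
funnel every exit path, including the early returns, through one label so the
netdev is always re-attached. A standalone model of that goto bracket, with
stubs standing in for the rtnl_lock()/netif_device_detach()/attach() calls:

#include <stdbool.h>
#include <stdio.h>

static void detach(void) { puts("detached"); }	/* no NDO callbacks now */
static void attach(void) { puts("attached"); }

static int reset_task(bool removing, bool alloc_fail)
{
	int ret = 0;

	detach();

	if (removing) {
		ret = -1;	/* was a bare return: netdev stayed detached */
		goto reset_finish;
	}
	if (alloc_fail) {
		ret = -2;
		goto reset_finish;
	}
	puts("reset ok");
reset_finish:
	attach();	/* always re-attach, on every path */
	return ret;
}

int main(void)
{
	reset_task(false, false);
	reset_task(true, false);
	return 0;
}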
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 136d7911adb48..1e32438081780 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -7,18 +7,6 @@
+ #include "ice_dcb_lib.h"
+ #include "ice_sriov.h"
+
+-static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring)
+-{
+- rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL);
+- return !!rx_ring->xdp_buf;
+-}
+-
+-static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring)
+-{
+- rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
+- return !!rx_ring->rx_buf;
+-}
+-
+ /**
+ * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI
+ * @qs_cfg: gathered variables needed for PF->VSI queues assignment
+@@ -519,11 +507,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
+ ring->q_index, ring->q_vector->napi.napi_id);
+
+- kfree(ring->rx_buf);
+ ring->xsk_pool = ice_xsk_pool(ring);
+ if (ring->xsk_pool) {
+- if (!ice_alloc_rx_buf_zc(ring))
+- return -ENOMEM;
+ xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
+
+ ring->rx_buf_len =
+@@ -538,8 +523,6 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
+ ring->q_index);
+ } else {
+- if (!ice_alloc_rx_buf(ring))
+- return -ENOMEM;
+ if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
+ /* coverity[check_return] */
+ xdp_rxq_info_reg(&ring->xdp_rxq,
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 3d45e075204e3..4c6bb7482b362 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2898,10 +2898,18 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ if (xdp_ring_err)
+ NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
+ }
++ /* reallocate Rx queues that are used for zero-copy */
++ xdp_ring_err = ice_realloc_zc_buf(vsi, true);
++ if (xdp_ring_err)
++ NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed");
+ } else if (ice_is_xdp_ena_vsi(vsi) && !prog) {
+ xdp_ring_err = ice_destroy_xdp_rings(vsi);
+ if (xdp_ring_err)
+ NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed");
++ /* reallocate Rx queues that were used for zero-copy */
++ xdp_ring_err = ice_realloc_zc_buf(vsi, false);
++ if (xdp_ring_err)
++ NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Rx resources failed");
+ } else {
+ /* safe to call even when prog == vsi->xdp_prog as
+ * dev_xdp_install in net/core/dev.c incremented prog's
+@@ -3904,7 +3912,7 @@ static int ice_init_pf(struct ice_pf *pf)
+
+ pf->avail_rxqs = bitmap_zalloc(pf->max_pf_rxqs, GFP_KERNEL);
+ if (!pf->avail_rxqs) {
+- devm_kfree(ice_pf_to_dev(pf), pf->avail_txqs);
++ bitmap_free(pf->avail_txqs);
+ pf->avail_txqs = NULL;
+ return -ENOMEM;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index e48e29258450f..03ce85f6e6df8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -192,6 +192,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
+ if (err)
+ return err;
++ ice_clean_rx_ring(rx_ring);
+
+ ice_qvec_toggle_napi(vsi, q_vector, false);
+ ice_qp_clean_rings(vsi, q_idx);
+@@ -316,6 +317,62 @@ ice_xsk_pool_enable(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ return 0;
+ }
+
++/**
++ * ice_realloc_rx_xdp_bufs - reallocate for either XSK or normal buffer
++ * @rx_ring: Rx ring
++ * @pool_present: whether an XSK pool is present
++ *
++ * Try to allocate memory and return -ENOMEM if the allocation fails.
++ * On success, substitute the ring's buffer with the allocated one.
++ * Returns 0 on success, negative on failure
++ */
++static int
++ice_realloc_rx_xdp_bufs(struct ice_rx_ring *rx_ring, bool pool_present)
++{
++ size_t elem_size = pool_present ? sizeof(*rx_ring->xdp_buf) :
++ sizeof(*rx_ring->rx_buf);
++ void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL);
++
++ if (!sw_ring)
++ return -ENOMEM;
++
++ if (pool_present) {
++ kfree(rx_ring->rx_buf);
++ rx_ring->rx_buf = NULL;
++ rx_ring->xdp_buf = sw_ring;
++ } else {
++ kfree(rx_ring->xdp_buf);
++ rx_ring->xdp_buf = NULL;
++ rx_ring->rx_buf = sw_ring;
++ }
++
++ return 0;
++}
++
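The helper above follows an allocate-then-swap discipline: the replacement
array is allocated first, and the old one is freed only after the allocation
succeeded, so a failure leaves the ring usable. A standalone sketch; the
element sizes are made-up stand-ins for sizeof(*xdp_buf)/sizeof(*rx_buf):

#include <stdio.h>
#include <stdlib.h>

struct ring {
	size_t count;
	void *rx_buf;	/* normal Rx buffers */
	void *xdp_buf;	/* zero-copy buffers */
};

static int realloc_bufs(struct ring *r, int pool_present, size_t elem_size)
{
	void *sw_ring = calloc(r->count, elem_size);

	if (!sw_ring)
		return -1;	/* -ENOMEM; old buffers left untouched */

	if (pool_present) {
		free(r->rx_buf);
		r->rx_buf = NULL;
		r->xdp_buf = sw_ring;
	} else {
		free(r->xdp_buf);
		r->xdp_buf = NULL;
		r->rx_buf = sw_ring;
	}
	return 0;
}

int main(void)
{
	struct ring r = { .count = 512 };

	printf("to zero-copy: %d\n", realloc_bufs(&r, 1, 32));
	printf("back again:   %d\n", realloc_bufs(&r, 0, 16));
	free(r.rx_buf);
	return 0;
}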
++/**
++ * ice_realloc_zc_buf - reallocate XDP ZC queue pairs
++ * @vsi: Current VSI
++ * @zc: is zero copy set
++ *
++ * Reallocate buffers for the rx_rings that might be used by XSK.
++ * XDP requires more memory than rx_buf provides.
++ * Returns 0 on success, negative on failure
++ */
++int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc)
++{
++ struct ice_rx_ring *rx_ring;
++ unsigned long q;
++
++ for_each_set_bit(q, vsi->af_xdp_zc_qps,
++ max_t(int, vsi->alloc_txq, vsi->alloc_rxq)) {
++ rx_ring = vsi->rx_rings[q];
++ if (ice_realloc_rx_xdp_bufs(rx_ring, zc))
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
+ /**
+ * ice_xsk_pool_setup - enable/disable a buffer pool region depending on its state
+ * @vsi: Current VSI
+@@ -345,11 +402,17 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
+
+ if (if_running) {
++ struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
++
+ ret = ice_qp_dis(vsi, qid);
+ if (ret) {
+ netdev_err(vsi->netdev, "ice_qp_dis error = %d\n", ret);
+ goto xsk_pool_if_up;
+ }
++
++ ret = ice_realloc_rx_xdp_bufs(rx_ring, pool_present);
++ if (ret)
++ goto xsk_pool_if_up;
+ }
+
+ pool_failure = pool_present ? ice_xsk_pool_enable(vsi, pool, qid) :
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
+index 21faec8e97db1..4edbe81eb6460 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
+@@ -27,6 +27,7 @@ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
+ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
+ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
+ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget);
++int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc);
+ #else
+ static inline bool
+ ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
+@@ -72,5 +73,12 @@ ice_xsk_wakeup(struct net_device __always_unused *netdev,
+
+ static inline void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring) { }
+ static inline void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring) { }
++
++static inline int
++ice_realloc_zc_buf(struct ice_vsi __always_unused *vsi,
++ bool __always_unused zc)
++{
++ return 0;
++}
+ #endif /* CONFIG_XDP_SOCKETS */
+ #endif /* !_ICE_XSK_H_ */
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
+index dab8f3f771f84..cfe804bc8d205 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
+@@ -412,7 +412,7 @@ __mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
+ if (entry->hash != 0xffff) {
+ ppe->foe_table[entry->hash].ib1 &= ~MTK_FOE_IB1_STATE;
+ ppe->foe_table[entry->hash].ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE,
+- MTK_FOE_STATE_BIND);
++ MTK_FOE_STATE_UNBIND);
+ dma_wmb();
+ }
+ entry->hash = 0xffff;
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
+index 1f5cf1c9a9475..69ffce04d6306 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
+@@ -293,6 +293,9 @@ mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
+ if (!ppe)
+ return;
+
++ if (hash > MTK_PPE_HASH_MASK)
++ return;
++
+ now = (u16)jiffies;
+ diff = now - ppe->foe_check_time[hash];
+ if (diff < HZ / 10)
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 73f7962a37d33..c49062ad72c6c 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -243,13 +243,7 @@ static irqreturn_t meson_gxl_handle_interrupt(struct phy_device *phydev)
+ irq_status == INTSRC_ENERGY_DETECT)
+ return IRQ_HANDLED;
+
+- /* Give PHY some time before MAC starts sending data. This works
+- * around an issue where network doesn't come up properly.
+- */
+- if (!(irq_status & INTSRC_LINK_DOWN))
+- phy_queue_state_machine(phydev, msecs_to_jiffies(100));
+- else
+- phy_trigger_machine(phydev);
++ phy_trigger_machine(phydev);
+
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/phy/microchip_t1.c b/drivers/net/phy/microchip_t1.c
+index d4c93d59bc539..8569a545e0a3f 100644
+--- a/drivers/net/phy/microchip_t1.c
++++ b/drivers/net/phy/microchip_t1.c
+@@ -28,12 +28,16 @@
+
+ /* Interrupt Source Register */
+ #define LAN87XX_INTERRUPT_SOURCE (0x18)
++#define LAN87XX_INTERRUPT_SOURCE_2 (0x08)
+
+ /* Interrupt Mask Register */
+ #define LAN87XX_INTERRUPT_MASK (0x19)
+ #define LAN87XX_MASK_LINK_UP (0x0004)
+ #define LAN87XX_MASK_LINK_DOWN (0x0002)
+
++#define LAN87XX_INTERRUPT_MASK_2 (0x09)
++#define LAN87XX_MASK_COMM_RDY BIT(10)
++
+ /* MISC Control 1 Register */
+ #define LAN87XX_CTRL_1 (0x11)
+ #define LAN87XX_MASK_RGMII_TXC_DLY_EN (0x4000)
+@@ -424,17 +428,55 @@ static int lan87xx_phy_config_intr(struct phy_device *phydev)
+ int rc, val = 0;
+
+ if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+- /* unmask all source and clear them before enable */
+- rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, 0x7FFF);
++ /* clear all interrupts */
++ rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val);
++ if (rc < 0)
++ return rc;
++
+ rc = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE);
+- val = LAN87XX_MASK_LINK_UP | LAN87XX_MASK_LINK_DOWN;
++ if (rc < 0)
++ return rc;
++
++ rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_MASK_2, val);
++ if (rc < 0)
++ return rc;
++
++ rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_SOURCE_2, 0);
++ if (rc < 0)
++ return rc;
++
++ /* enable link-down and comm-ready interrupts */
++ val = LAN87XX_MASK_LINK_DOWN;
+ rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val);
++ if (rc < 0)
++ return rc;
++
++ val = LAN87XX_MASK_COMM_RDY;
++ rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_MASK_2, val);
+ } else {
+ rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val);
+- if (rc)
++ if (rc < 0)
+ return rc;
+
+ rc = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE);
++ if (rc < 0)
++ return rc;
++
++ rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_MASK_2, val);
++ if (rc < 0)
++ return rc;
++
++ rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_SOURCE_2, 0);
+ }
+
+ return rc < 0 ? rc : 0;
+@@ -444,6 +486,14 @@ static irqreturn_t lan87xx_handle_interrupt(struct phy_device *phydev)
+ {
+ int irq_status;
+
++ irq_status = access_ereg(phydev, PHYACC_ATTR_MODE_READ,
++ PHYACC_ATTR_BANK_MISC,
++ LAN87XX_INTERRUPT_SOURCE_2, 0);
++ if (irq_status < 0) {
++ phy_error(phydev);
++ return IRQ_NONE;
++ }
++
+ irq_status = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE);
+ if (irq_status < 0) {
+ phy_error(phydev);
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+index c62f299b9e0a8..d8a5dbf89a021 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+@@ -2403,7 +2403,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ /* Repeat initial/next rate.
+ * For legacy IL_NUMBER_TRY == 1, this loop will not execute.
+ * For HT IL_HT_NUMBER_TRY == 3, this executes twice. */
+- while (repeat_rate > 0) {
++ while (repeat_rate > 0 && idx < (LINK_QUAL_MAX_RETRY_NUM - 1)) {
+ if (is_legacy(tbl_type.lq_type)) {
+ if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ ant_toggle_cnt++;
+@@ -2422,8 +2422,6 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ cpu_to_le32(new_rate);
+ repeat_rate--;
+ idx++;
+- if (idx >= LINK_QUAL_MAX_RETRY_NUM)
+- goto out;
+ }
+
+ il4965_rs_get_tbl_info_from_mcs(new_rate, lq_sta->band,
+@@ -2468,7 +2466,6 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ repeat_rate--;
+ }
+
+-out:
+ lq_cmd->agg_params.agg_frame_cnt_limit = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+ lq_cmd->agg_params.agg_dis_start_th = LINK_QUAL_AGG_DISABLE_START_DEF;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+index b0f58bcf70cb0..106c88b723b90 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+@@ -345,7 +345,7 @@ int mt7921e_mac_reset(struct mt7921_dev *dev)
+
+ err = mt7921e_driver_own(dev);
+ if (err)
+- return err;
++ goto out;
+
+ err = mt7921_run_firmware(dev);
+ if (err)
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.h b/drivers/net/wireless/microchip/wilc1000/netdev.h
+index a067274c20144..bf001e9def6aa 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.h
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.h
+@@ -254,6 +254,7 @@ struct wilc {
+ u8 *rx_buffer;
+ u32 rx_buffer_offset;
+ u8 *tx_buffer;
++ u32 *vmm_table;
+
+ struct txq_handle txq[NQUEUES];
+ int txq_entries;
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index 7962c11cfe848..56f924a31bc66 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -27,6 +27,7 @@ struct wilc_sdio {
+ bool irq_gpio;
+ u32 block_size;
+ int has_thrpt_enh3;
++ u8 *cmd53_buf;
+ };
+
+ struct sdio_cmd52 {
+@@ -46,6 +47,7 @@ struct sdio_cmd53 {
+ u32 count: 9;
+ u8 *buffer;
+ u32 block_size;
++ bool use_global_buf;
+ };
+
+ static const struct wilc_hif_func wilc_hif_sdio;
+@@ -90,6 +92,8 @@ static int wilc_sdio_cmd53(struct wilc *wilc, struct sdio_cmd53 *cmd)
+ {
+ struct sdio_func *func = container_of(wilc->dev, struct sdio_func, dev);
+ int size, ret;
++ struct wilc_sdio *sdio_priv = wilc->bus_data;
++ u8 *buf = cmd->buffer;
+
+ sdio_claim_host(func);
+
+@@ -100,12 +104,23 @@ static int wilc_sdio_cmd53(struct wilc *wilc, struct sdio_cmd53 *cmd)
+ else
+ size = cmd->count;
+
++ if (cmd->use_global_buf) {
++ if (size > sizeof(u32))
++ return -EINVAL;
++
++ buf = sdio_priv->cmd53_buf;
++ }
++
+ if (cmd->read_write) { /* write */
+- ret = sdio_memcpy_toio(func, cmd->address,
+- (void *)cmd->buffer, size);
++ if (cmd->use_global_buf)
++ memcpy(buf, cmd->buffer, size);
++
++ ret = sdio_memcpy_toio(func, cmd->address, buf, size);
+ } else { /* read */
+- ret = sdio_memcpy_fromio(func, (void *)cmd->buffer,
+- cmd->address, size);
++ ret = sdio_memcpy_fromio(func, buf, cmd->address, size);
++
++ if (cmd->use_global_buf)
++ memcpy(cmd->buffer, buf, size);
+ }
+
+ sdio_release_host(func);
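The cmd53 changes route 4-byte register transfers through a long-lived heap
buffer, since an on-stack or caller-provided buffer is not guaranteed to be
DMA-safe. A minimal standalone model of that bounce-buffer pattern;
do_transfer() is a stub standing in for sdio_memcpy_toio()/fromio(), and the
fake register window exists only for the demo:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint8_t device_reg[4];	/* pretend 4-byte device register */

static void do_transfer(bool write, uint8_t *buf, size_t size)
{
	if (write)
		memcpy(device_reg, buf, size);
	else
		memcpy(buf, device_reg, size);
}

static int cmd53(bool write, uint8_t *user_buf, size_t size,
		 uint8_t *bounce, bool use_bounce)
{
	uint8_t *buf = user_buf;

	if (use_bounce) {
		if (size > sizeof(uint32_t))
			return -1;	/* -EINVAL in the patch */
		buf = bounce;
		if (write)
			memcpy(buf, user_buf, size);	/* in via bounce */
	}
	do_transfer(write, buf, size);
	if (use_bounce && !write)
		memcpy(user_buf, buf, size);	/* out via bounce */
	return 0;
}

int main(void)
{
	uint8_t *bounce = calloc(1, sizeof(uint32_t));	/* the cmd53_buf */
	uint32_t reg = 0xdeadbeef, readback = 0;

	if (!bounce)
		return 1;
	cmd53(true, (uint8_t *)&reg, sizeof(reg), bounce, true);
	cmd53(false, (uint8_t *)&readback, sizeof(readback), bounce, true);
	printf("readback=0x%08x\n", (unsigned)readback);
	free(bounce);
	return 0;
}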
+@@ -127,6 +142,12 @@ static int wilc_sdio_probe(struct sdio_func *func,
+ if (!sdio_priv)
+ return -ENOMEM;
+
++ sdio_priv->cmd53_buf = kzalloc(sizeof(u32), GFP_KERNEL);
++ if (!sdio_priv->cmd53_buf) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
+ ret = wilc_cfg80211_init(&wilc, &func->dev, WILC_HIF_SDIO,
+ &wilc_hif_sdio);
+ if (ret)
+@@ -160,6 +181,7 @@ dispose_irq:
+ irq_dispose_mapping(wilc->dev_irq_num);
+ wilc_netdev_cleanup(wilc);
+ free:
++ kfree(sdio_priv->cmd53_buf);
+ kfree(sdio_priv);
+ return ret;
+ }
+@@ -171,6 +193,7 @@ static void wilc_sdio_remove(struct sdio_func *func)
+
+ clk_disable_unprepare(wilc->rtc_clk);
+ wilc_netdev_cleanup(wilc);
++ kfree(sdio_priv->cmd53_buf);
+ kfree(sdio_priv);
+ }
+
+@@ -367,8 +390,9 @@ static int wilc_sdio_write_reg(struct wilc *wilc, u32 addr, u32 data)
+ cmd.address = WILC_SDIO_FBR_DATA_REG;
+ cmd.block_mode = 0;
+ cmd.increment = 1;
+- cmd.count = 4;
++ cmd.count = sizeof(u32);
+ cmd.buffer = (u8 *)&data;
++ cmd.use_global_buf = true;
+ cmd.block_size = sdio_priv->block_size;
+ ret = wilc_sdio_cmd53(wilc, &cmd);
+ if (ret)
+@@ -406,6 +430,7 @@ static int wilc_sdio_write(struct wilc *wilc, u32 addr, u8 *buf, u32 size)
+ nblk = size / block_size;
+ nleft = size % block_size;
+
++ cmd.use_global_buf = false;
+ if (nblk > 0) {
+ cmd.block_mode = 1;
+ cmd.increment = 1;
+@@ -484,8 +509,9 @@ static int wilc_sdio_read_reg(struct wilc *wilc, u32 addr, u32 *data)
+ cmd.address = WILC_SDIO_FBR_DATA_REG;
+ cmd.block_mode = 0;
+ cmd.increment = 1;
+- cmd.count = 4;
++ cmd.count = sizeof(u32);
+ cmd.buffer = (u8 *)data;
++ cmd.use_global_buf = true;
+
+ cmd.block_size = sdio_priv->block_size;
+ ret = wilc_sdio_cmd53(wilc, &cmd);
+@@ -527,6 +553,7 @@ static int wilc_sdio_read(struct wilc *wilc, u32 addr, u8 *buf, u32 size)
+ nblk = size / block_size;
+ nleft = size % block_size;
+
++ cmd.use_global_buf = false;
+ if (nblk > 0) {
+ cmd.block_mode = 1;
+ cmd.increment = 1;
+diff --git a/drivers/net/wireless/microchip/wilc1000/wlan.c b/drivers/net/wireless/microchip/wilc1000/wlan.c
+index 48441f0389ca1..0c8a571486d25 100644
+--- a/drivers/net/wireless/microchip/wilc1000/wlan.c
++++ b/drivers/net/wireless/microchip/wilc1000/wlan.c
+@@ -714,7 +714,7 @@ int wilc_wlan_handle_txq(struct wilc *wilc, u32 *txq_count)
+ int ret = 0;
+ int counter;
+ int timeout;
+- u32 vmm_table[WILC_VMM_TBL_SIZE];
++ u32 *vmm_table = wilc->vmm_table;
+ u8 ac_pkt_num_to_chip[NQUEUES] = {0, 0, 0, 0};
+ const struct wilc_hif_func *func;
+ int srcu_idx;
+@@ -1251,6 +1251,8 @@ void wilc_wlan_cleanup(struct net_device *dev)
+ while ((rqe = wilc_wlan_rxq_remove(wilc)))
+ kfree(rqe);
+
++ kfree(wilc->vmm_table);
++ wilc->vmm_table = NULL;
+ kfree(wilc->rx_buffer);
+ wilc->rx_buffer = NULL;
+ kfree(wilc->tx_buffer);
+@@ -1485,6 +1487,14 @@ int wilc_wlan_init(struct net_device *dev)
+ goto fail;
+ }
+
++ if (!wilc->vmm_table)
++ wilc->vmm_table = kzalloc(WILC_VMM_TBL_SIZE, GFP_KERNEL);
++
++ if (!wilc->vmm_table) {
++ ret = -ENOBUFS;
++ goto fail;
++ }
++
+ if (!wilc->tx_buffer)
+ wilc->tx_buffer = kmalloc(WILC_TX_BUFF_SIZE, GFP_KERNEL);
+
+@@ -1509,7 +1519,8 @@ int wilc_wlan_init(struct net_device *dev)
+ return 0;
+
+ fail:
+-
++ kfree(wilc->vmm_table);
++ wilc->vmm_table = NULL;
+ kfree(wilc->rx_buffer);
+ wilc->rx_buffer = NULL;
+ kfree(wilc->tx_buffer);
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index 990360d75cb64..e85b3c5d4acce 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -256,7 +256,6 @@ static void backend_disconnect(struct backend_info *be)
+ unsigned int queue_index;
+
+ xen_unregister_watchers(vif);
+- xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ #ifdef CONFIG_DEBUG_FS
+ xenvif_debugfs_delif(vif);
+ #endif /* CONFIG_DEBUG_FS */
+@@ -984,6 +983,7 @@ static int netback_remove(struct xenbus_device *dev)
+ struct backend_info *be = dev_get_drvdata(&dev->dev);
+
+ unregister_hotplug_status_watch(be);
++ xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
+ if (be->vif) {
+ kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
+ backend_disconnect(be);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 7a9e6ffa23429..daa0e160e1212 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -121,7 +121,6 @@ struct nvme_tcp_queue {
+ struct mutex send_mutex;
+ struct llist_head req_list;
+ struct list_head send_list;
+- bool more_requests;
+
+ /* recv state */
+ void *pdu;
+@@ -318,7 +317,7 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+ {
+ return !list_empty(&queue->send_list) ||
+- !llist_empty(&queue->req_list) || queue->more_requests;
++ !llist_empty(&queue->req_list);
+ }
+
+ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+@@ -337,9 +336,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ */
+ if (queue->io_cpu == raw_smp_processor_id() &&
+ sync && empty && mutex_trylock(&queue->send_mutex)) {
+- queue->more_requests = !last;
+ nvme_tcp_send_all(queue);
+- queue->more_requests = false;
+ mutex_unlock(&queue->send_mutex);
+ }
+
+@@ -1227,7 +1224,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ else if (unlikely(result < 0))
+ return;
+
+- if (!pending)
++ if (!pending || !queue->rd_enabled)
+ return;
+
+ } while (!time_after(jiffies, deadline)); /* quota is exhausted */
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index c27660a660d9a..a339719100051 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -735,6 +735,8 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
+
+ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
+ {
++ struct nvmet_ns *ns = req->ns;
++
+ if (!req->sq->sqhd_disabled)
+ nvmet_update_sq_head(req);
+ req->cqe->sq_id = cpu_to_le16(req->sq->qid);
+@@ -745,9 +747,9 @@ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
+
+ trace_nvmet_req_complete(req);
+
+- if (req->ns)
+- nvmet_put_namespace(req->ns);
+ req->ops->queue_response(req);
++ if (ns)
++ nvmet_put_namespace(ns);
+ }
+
+ void nvmet_req_complete(struct nvmet_req *req, u16 status)
+diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
+index 82b61acf7a72b..1956be87ac5ff 100644
+--- a/drivers/nvme/target/zns.c
++++ b/drivers/nvme/target/zns.c
+@@ -100,6 +100,7 @@ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+ struct nvme_id_ns_zns *id_zns;
+ u64 zsze;
+ u16 status;
++ u32 mar, mor;
+
+ if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+ req->error_loc = offsetof(struct nvme_identify, nsid);
+@@ -130,8 +131,20 @@ void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+ zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >>
+ req->ns->blksize_shift;
+ id_zns->lbafe[0].zsze = cpu_to_le64(zsze);
+- id_zns->mor = cpu_to_le32(bdev_max_open_zones(req->ns->bdev));
+- id_zns->mar = cpu_to_le32(bdev_max_active_zones(req->ns->bdev));
++
++ mor = bdev_max_open_zones(req->ns->bdev);
++ if (!mor)
++ mor = U32_MAX;
++ else
++ mor--;
++ id_zns->mor = cpu_to_le32(mor);
++
++ mar = bdev_max_active_zones(req->ns->bdev);
++ if (!mar)
++ mar = U32_MAX;
++ else
++ mar--;
++ id_zns->mar = cpu_to_le32(mar);
+
+ done:
+ status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
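The MOR/MAR conversion above maps the block layer's plain counts, where 0
means "no limit", onto the NVMe ZNS 0's-based encoding, where the stored value
is limit - 1 and all-ones means unlimited. A standalone sketch of the same
arithmetic:

#include <stdint.h>
#include <stdio.h>

static uint32_t to_zns_0s_based(uint32_t count)
{
	if (!count)			/* 0 == device reports no limit */
		return UINT32_MAX;	/* ZNS encoding for "unlimited" */
	return count - 1;		/* e.g. 14 open zones -> MOR = 13 */
}

int main(void)
{
	printf("max_open=0  -> MOR=0x%08x\n", (unsigned)to_zns_0s_based(0));
	printf("max_open=14 -> MOR=0x%08x\n", (unsigned)to_zns_0s_based(14));
	return 0;
}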
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index 9be007c9420f9..f69ab90b5e22d 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1380,15 +1380,17 @@ ccio_init_resource(struct resource *res, char *name, void __iomem *ioaddr)
+ }
+ }
+
+-static void __init ccio_init_resources(struct ioc *ioc)
++static int __init ccio_init_resources(struct ioc *ioc)
+ {
+ struct resource *res = ioc->mmio_region;
+ char *name = kmalloc(14, GFP_KERNEL);
+-
++ if (unlikely(!name))
++ return -ENOMEM;
+ snprintf(name, 14, "GSC Bus [%d/]", ioc->hw_path);
+
+ ccio_init_resource(res, name, &ioc->ioc_regs->io_io_low);
+ ccio_init_resource(res + 1, name, &ioc->ioc_regs->io_io_low_hv);
++ return 0;
+ }
+
+ static int new_ioc_area(struct resource *res, unsigned long size,
+@@ -1543,7 +1545,10 @@ static int __init ccio_probe(struct parisc_device *dev)
+ return -ENOMEM;
+ }
+ ccio_ioc_init(ioc);
+- ccio_init_resources(ioc);
++ if (ccio_init_resources(ioc)) {
++ kfree(ioc);
++ return -ENOMEM;
++ }
+ hppa_dma_ops = &ccio_ops;
+
+ hba = kzalloc(sizeof(*hba), GFP_KERNEL);
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 231d86d3949c0..1ec5baa673f92 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -467,7 +467,7 @@ static int pmu_sbi_get_ctrinfo(int nctr)
+ if (!pmu_ctr_list)
+ return -ENOMEM;
+
+- for (i = 0; i <= nctr; i++) {
++ for (i = 0; i < nctr; i++) {
+ ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_GET_INFO, i, 0, 0, 0, 0, 0);
+ if (ret.error)
+ /* The logical counter ids are not expected to be contiguous */
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 1e54a833f2cf0..a9daaf4d5aaab 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2732,13 +2732,18 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+ */
+ static int _regulator_handle_consumer_enable(struct regulator *regulator)
+ {
++ int ret;
+ struct regulator_dev *rdev = regulator->rdev;
+
+ lockdep_assert_held_once(&rdev->mutex.base);
+
+ regulator->enable_count++;
+- if (regulator->uA_load && regulator->enable_count == 1)
+- return drms_uA_update(rdev);
++ if (regulator->uA_load && regulator->enable_count == 1) {
++ ret = drms_uA_update(rdev);
++ if (ret)
++ regulator->enable_count--;
++ return ret;
++ }
+
+ return 0;
+ }
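The regulator fix pairs the refcount bump with a rollback when the load update
fails, so a failed enable no longer leaves a stale consumer count behind. A
standalone sketch, with update_load() stubbed in for drms_uA_update():

#include <stdio.h>

static int update_load_ret;	/* simulated drms_uA_update() result */
static int update_load(void) { return update_load_ret; }

static int handle_consumer_enable(int *enable_count, int uA_load)
{
	(*enable_count)++;
	if (uA_load && *enable_count == 1) {
		int ret = update_load();

		if (ret)
			(*enable_count)--;	/* the added rollback */
		return ret;
	}
	return 0;
}

int main(void)
{
	int count = 0;
	int ret;

	update_load_ret = -5;	/* first attempt fails */
	ret = handle_consumer_enable(&count, 100);
	printf("ret=%d count=%d\n", ret, count);	/* ret=-5 count=0 */
	update_load_ret = 0;	/* retry succeeds */
	ret = handle_consumer_enable(&count, 100);
	printf("ret=%d count=%d\n", ret, count);	/* ret=0 count=1 */
	return 0;
}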
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 750dd1e9f2cc7..2ddc431cbd337 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -8061,7 +8061,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ /* Allocate device driver memory */
+ rc = lpfc_mem_alloc(phba, SGL_ALIGN_SZ);
+ if (rc)
+- return -ENOMEM;
++ goto out_destroy_workqueue;
+
+ /* IF Type 2 ports get initialized now. */
+ if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >=
+@@ -8489,6 +8489,9 @@ out_free_bsmbx:
+ lpfc_destroy_bootstrap_mbox(phba);
+ out_free_mem:
+ lpfc_mem_free(phba);
++out_destroy_workqueue:
++ destroy_workqueue(phba->wq);
++ phba->wq = NULL;
+ return rc;
+ }
+
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 5b5885d9732b6..3e9b2b0099c7a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -5311,7 +5311,6 @@ megasas_alloc_fusion_context(struct megasas_instance *instance)
+ if (!fusion->log_to_span) {
+ dev_err(&instance->pdev->dev, "Failed from %s %d\n",
+ __func__, __LINE__);
+- kfree(instance->ctrl_context);
+ return -ENOMEM;
+ }
+ }
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 5e8887fa02c8a..e3b7ebf464244 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3670,6 +3670,7 @@ static struct fw_event_work *dequeue_next_fw_event(struct MPT3SAS_ADAPTER *ioc)
+ fw_event = list_first_entry(&ioc->fw_event_list,
+ struct fw_event_work, list);
+ list_del_init(&fw_event->list);
++ fw_event_work_put(fw_event);
+ }
+ spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+
+@@ -3751,7 +3752,6 @@ _scsih_fw_event_cleanup_queue(struct MPT3SAS_ADAPTER *ioc)
+ if (cancel_work_sync(&fw_event->work))
+ fw_event_work_put(fw_event);
+
+- fw_event_work_put(fw_event);
+ }
+ ioc->fw_events_cleanup = 0;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 2b2f682883752..62666df1a59eb 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -6935,14 +6935,8 @@ qlt_24xx_config_rings(struct scsi_qla_host *vha)
+
+ if (ha->flags.msix_enabled) {
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+- if (IS_QLA2071(ha)) {
+- /* 4 ports Baker: Enable Interrupt Handshake */
+- icb->msix_atio = 0;
+- icb->firmware_options_2 |= cpu_to_le32(BIT_26);
+- } else {
+- icb->msix_atio = cpu_to_le16(msix->entry);
+- icb->firmware_options_2 &= cpu_to_le32(~BIT_26);
+- }
++ icb->msix_atio = cpu_to_le16(msix->entry);
++ icb->firmware_options_2 &= cpu_to_le32(~BIT_26);
+ ql_dbg(ql_dbg_init, vha, 0xf072,
+ "Registering ICB vector 0x%x for atio que.\n",
+ msix->entry);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 78edb1ea4748d..f5c876d03c1ad 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -118,7 +118,7 @@ scsi_set_blocked(struct scsi_cmnd *cmd, int reason)
+ }
+ }
+
+-static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd)
++static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd, unsigned long msecs)
+ {
+ struct request *rq = scsi_cmd_to_rq(cmd);
+
+@@ -128,7 +128,12 @@ static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd)
+ } else {
+ WARN_ON_ONCE(true);
+ }
+- blk_mq_requeue_request(rq, true);
++
++ if (msecs) {
++ blk_mq_requeue_request(rq, false);
++ blk_mq_delay_kick_requeue_list(rq->q, msecs);
++ } else
++ blk_mq_requeue_request(rq, true);
+ }
+
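The reworked requeue helper chooses between an immediate kick of the queue and
a delayed one; the ALUA path passes the 1000 ms delay defined below. A
standalone model of that policy, with printf standing in for
blk_mq_requeue_request()/blk_mq_delay_kick_requeue_list():

#include <stdio.h>

#define ALUA_TRANSITION_REPREP_DELAY	1000	/* ms, as in the patch */

static void requeue_cmd(const char *cmd, unsigned long msecs)
{
	if (msecs) {
		/* kernel: blk_mq_requeue_request(rq, false);
		 *         blk_mq_delay_kick_requeue_list(rq->q, msecs); */
		printf("%s: parked, queue kicked after %lu ms\n", cmd, msecs);
	} else {
		/* kernel: blk_mq_requeue_request(rq, true); */
		printf("%s: requeued, queue kicked immediately\n", cmd);
	}
}

int main(void)
{
	requeue_cmd("READ(10)", 0);	/* ordinary reprep */
	requeue_cmd("READ(10)", ALUA_TRANSITION_REPREP_DELAY);
	return 0;
}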
+ /**
+@@ -658,14 +663,6 @@ static unsigned int scsi_rq_err_bytes(const struct request *rq)
+ return bytes;
+ }
+
+-/* Helper for scsi_io_completion() when "reprep" action required. */
+-static void scsi_io_completion_reprep(struct scsi_cmnd *cmd,
+- struct request_queue *q)
+-{
+- /* A new command will be prepared and issued. */
+- scsi_mq_requeue_cmd(cmd);
+-}
+-
+ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ {
+ struct request *req = scsi_cmd_to_rq(cmd);
+@@ -683,14 +680,21 @@ static bool scsi_cmd_runtime_exceeced(struct scsi_cmnd *cmd)
+ return false;
+ }
+
++/*
++ * When the ALUA transition state is returned, reprep the cmd so it
++ * uses the ALUA handler's transition timeout. Delay the reprep by
++ * 1 sec to avoid aggressively retrying the target while it is in
++ * that state.
++ */
++#define ALUA_TRANSITION_REPREP_DELAY 1000
++
+ /* Helper for scsi_io_completion() when special action required. */
+ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ {
+- struct request_queue *q = cmd->device->request_queue;
+ struct request *req = scsi_cmd_to_rq(cmd);
+ int level = 0;
+- enum {ACTION_FAIL, ACTION_REPREP, ACTION_RETRY,
+- ACTION_DELAYED_RETRY} action;
++ enum {ACTION_FAIL, ACTION_REPREP, ACTION_DELAYED_REPREP,
++ ACTION_RETRY, ACTION_DELAYED_RETRY} action;
+ struct scsi_sense_hdr sshdr;
+ bool sense_valid;
+ bool sense_current = true; /* false implies "deferred sense" */
+@@ -779,8 +783,8 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ action = ACTION_DELAYED_RETRY;
+ break;
+ case 0x0a: /* ALUA state transition */
+- blk_stat = BLK_STS_TRANSPORT;
+- fallthrough;
++ action = ACTION_DELAYED_REPREP;
++ break;
+ default:
+ action = ACTION_FAIL;
+ break;
+@@ -839,7 +843,10 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ return;
+ fallthrough;
+ case ACTION_REPREP:
+- scsi_io_completion_reprep(cmd, q);
++ scsi_mq_requeue_cmd(cmd, 0);
++ break;
++ case ACTION_DELAYED_REPREP:
++ scsi_mq_requeue_cmd(cmd, ALUA_TRANSITION_REPREP_DELAY);
+ break;
+ case ACTION_RETRY:
+ /* Retry the same command immediately */
+@@ -933,7 +940,7 @@ static int scsi_io_completion_nz_result(struct scsi_cmnd *cmd, int result,
+ * command block will be released and the queue function will be goosed. If we
+ * are not done then we have to figure out what to do next:
+ *
+- * a) We can call scsi_io_completion_reprep(). The request will be
++ * a) We can call scsi_mq_requeue_cmd(). The request will be
+ * unprepared and put back on the queue. Then a new command will
+ * be created for it. This should be used if we made forward
+ * progress, or if we want to switch from READ(10) to READ(6) for
+@@ -949,7 +956,6 @@ static int scsi_io_completion_nz_result(struct scsi_cmnd *cmd, int result,
+ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ {
+ int result = cmd->result;
+- struct request_queue *q = cmd->device->request_queue;
+ struct request *req = scsi_cmd_to_rq(cmd);
+ blk_status_t blk_stat = BLK_STS_OK;
+
+@@ -986,7 +992,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ * request just queue the command up again.
+ */
+ if (likely(result == 0))
+- scsi_io_completion_reprep(cmd, q);
++ scsi_mq_requeue_cmd(cmd, 0);
+ else
+ scsi_io_completion_action(cmd, result);
+ }
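
The scsi_lib.c hunks above fold the old scsi_io_completion_reprep() helper into scsi_mq_requeue_cmd() and give it an optional delay, so an ALUA transition is retried gently instead of immediately. A minimal sketch of the block-layer pattern involved, as hypothetical driver code rather than the patch itself:

#include <linux/blk-mq.h>

/* Sketch: requeue a request immediately, or after a grace period. */
static void requeue_now_or_later(struct request *rq, unsigned long msecs)
{
	if (msecs) {
		/* Park the request without kicking the requeue list... */
		blk_mq_requeue_request(rq, false);
		/* ...and re-run the list only once the delay expires. */
		blk_mq_delay_kick_requeue_list(rq->q, msecs);
	} else {
		/* Requeue and kick the list right away. */
		blk_mq_requeue_request(rq, true);
	}
}
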
+diff --git a/drivers/soc/bcm/brcmstb/pm/pm-arm.c b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+index 70ad0f3dce283..286f5d57c0cab 100644
+--- a/drivers/soc/bcm/brcmstb/pm/pm-arm.c
++++ b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+@@ -684,13 +684,14 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ const struct of_device_id *of_id = NULL;
+ struct device_node *dn;
+ void __iomem *base;
+- int ret, i;
++ int ret, i, s;
+
+ /* AON ctrl registers */
+ base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping AON_CTRL\n");
+- return PTR_ERR(base);
++ ret = PTR_ERR(base);
++ goto aon_err;
+ }
+ ctrl.aon_ctrl_base = base;
+
+@@ -700,8 +701,10 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ /* Assume standard offset */
+ ctrl.aon_sram = ctrl.aon_ctrl_base +
+ AON_CTRL_SYSTEM_DATA_RAM_OFS;
++ s = 0;
+ } else {
+ ctrl.aon_sram = base;
++ s = 1;
+ }
+
+ writel_relaxed(0, ctrl.aon_sram + AON_REG_PANIC);
+@@ -711,7 +714,8 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ (const void **)&ddr_phy_data);
+ if (IS_ERR(base)) {
+ pr_err("error mapping DDR PHY\n");
+- return PTR_ERR(base);
++ ret = PTR_ERR(base);
++ goto ddr_phy_err;
+ }
+ ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot;
+ ctrl.pll_status_offset = ddr_phy_data->pll_status_offset;
+@@ -731,17 +735,20 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ for_each_matching_node(dn, ddr_shimphy_dt_ids) {
+ i = ctrl.num_memc;
+ if (i >= MAX_NUM_MEMC) {
++ of_node_put(dn);
+ pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC);
+ break;
+ }
+
+ base = of_io_request_and_map(dn, 0, dn->full_name);
+ if (IS_ERR(base)) {
++ of_node_put(dn);
+ if (!ctrl.support_warm_boot)
+ break;
+
+ pr_err("error mapping DDR SHIMPHY %d\n", i);
+- return PTR_ERR(base);
++ ret = PTR_ERR(base);
++ goto ddr_shimphy_err;
+ }
+ ctrl.memcs[i].ddr_shimphy_base = base;
+ ctrl.num_memc++;
+@@ -752,14 +759,18 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ for_each_matching_node(dn, brcmstb_memc_of_match) {
+ base = of_iomap(dn, 0);
+ if (!base) {
++ of_node_put(dn);
+ pr_err("error mapping DDR Sequencer %d\n", i);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto brcmstb_memc_err;
+ }
+
+ of_id = of_match_node(brcmstb_memc_of_match, dn);
+ if (!of_id) {
+ iounmap(base);
+- return -EINVAL;
++ of_node_put(dn);
++ ret = -EINVAL;
++ goto brcmstb_memc_err;
+ }
+
+ ddr_seq_data = of_id->data;
+@@ -779,21 +790,24 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+ dn = of_find_matching_node(NULL, sram_dt_ids);
+ if (!dn) {
+ pr_err("SRAM not found\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto brcmstb_memc_err;
+ }
+
+ ret = brcmstb_init_sram(dn);
+ of_node_put(dn);
+ if (ret) {
+ pr_err("error setting up SRAM for PM\n");
+- return ret;
++ goto brcmstb_memc_err;
+ }
+
+ ctrl.pdev = pdev;
+
+ ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL);
+- if (!ctrl.s3_params)
+- return -ENOMEM;
++ if (!ctrl.s3_params) {
++ ret = -ENOMEM;
++ goto s3_params_err;
++ }
+ ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params,
+ sizeof(*ctrl.s3_params),
+ DMA_TO_DEVICE);
+@@ -813,7 +827,21 @@ static int brcmstb_pm_probe(struct platform_device *pdev)
+
+ out:
+ kfree(ctrl.s3_params);
+-
++s3_params_err:
++ iounmap(ctrl.boot_sram);
++brcmstb_memc_err:
++ for (i--; i >= 0; i--)
++ iounmap(ctrl.memcs[i].ddr_ctrl);
++ddr_shimphy_err:
++ for (i = 0; i < ctrl.num_memc; i++)
++ iounmap(ctrl.memcs[i].ddr_shimphy_base);
++
++ iounmap(ctrl.memcs[0].ddr_phy_base);
++ddr_phy_err:
++ iounmap(ctrl.aon_ctrl_base);
++ if (s)
++ iounmap(ctrl.aon_sram);
++aon_err:
+ pr_warn("PM: initialization failed with code %d\n", ret);
+
+ return ret;
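
The pm-arm.c hunks above convert brcmstb_pm_probe()'s early returns into a goto unwind ladder so every mapping taken before a failure is released (the new `s` flag plays the same role for the conditionally mapped SRAM). The idiom, reduced to a minimal sketch with hypothetical resources and addresses:

#include <linux/io.h>

static int example_probe(void)
{
	void __iomem *a, *b;
	int ret;

	a = ioremap(0x1000, 0x100);		/* hypothetical addresses */
	if (!a)
		return -ENOMEM;			/* nothing to undo yet */

	b = ioremap(0x2000, 0x100);
	if (!b) {
		ret = -ENOMEM;
		goto err_unmap_a;		/* undo only what succeeded */
	}

	return 0;

err_unmap_a:
	iounmap(a);
	return ret;
}
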
+diff --git a/drivers/soc/fsl/Kconfig b/drivers/soc/fsl/Kconfig
+index 07d52cafbb313..fcec6ed83d5e2 100644
+--- a/drivers/soc/fsl/Kconfig
++++ b/drivers/soc/fsl/Kconfig
+@@ -24,6 +24,7 @@ config FSL_MC_DPIO
+ tristate "QorIQ DPAA2 DPIO driver"
+ depends on FSL_MC_BUS
+ select SOC_BUS
++ select FSL_GUTS
+ select DIMLIB
+ help
+ Driver for the DPAA2 DPIO object. A DPIO provides queue and
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index 85aa86e1338af..5a3809f6a698f 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -333,6 +333,8 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ }
+ }
+
++ reset_control_assert(domain->reset);
++
+ /* Enable reset clocks for all devices in the domain */
+ ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
+ if (ret) {
+@@ -340,7 +342,8 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ goto out_regulator_disable;
+ }
+
+- reset_control_assert(domain->reset);
++	/* delay for the reset to propagate */
++ udelay(5);
+
+ if (domain->bits.pxx) {
+ /* request the domain to power up */
+diff --git a/drivers/soc/imx/imx8m-blk-ctrl.c b/drivers/soc/imx/imx8m-blk-ctrl.c
+index 7ebc28709e945..2782a7e0a8719 100644
+--- a/drivers/soc/imx/imx8m-blk-ctrl.c
++++ b/drivers/soc/imx/imx8m-blk-ctrl.c
+@@ -242,7 +242,6 @@ static int imx8m_blk_ctrl_probe(struct platform_device *pdev)
+ ret = PTR_ERR(domain->power_dev);
+ goto cleanup_pds;
+ }
+- dev_set_name(domain->power_dev, "%s", data->name);
+
+ domain->genpd.name = data->name;
+ domain->genpd.power_on = imx8m_blk_ctrl_power_on;
+diff --git a/drivers/spi/spi-bitbang-txrx.h b/drivers/spi/spi-bitbang-txrx.h
+index 267342dfa7388..2dcbe166df63e 100644
+--- a/drivers/spi/spi-bitbang-txrx.h
++++ b/drivers/spi/spi-bitbang-txrx.h
+@@ -116,6 +116,7 @@ bitbang_txrx_le_cpha0(struct spi_device *spi,
+ {
+ /* if (cpol == 0) this is SPI_MODE_0; else this is SPI_MODE_2 */
+
++ u8 rxbit = bits - 1;
+ u32 oldbit = !(word & 1);
+ /* clock starts at inactive polarity */
+ for (; likely(bits); bits--) {
+@@ -135,7 +136,7 @@ bitbang_txrx_le_cpha0(struct spi_device *spi,
+ /* sample LSB (from slave) on leading edge */
+ word >>= 1;
+ if ((flags & SPI_MASTER_NO_RX) == 0)
+- word |= getmiso(spi) << (bits - 1);
++ word |= getmiso(spi) << rxbit;
+ setsck(spi, cpol);
+ }
+ return word;
+@@ -148,6 +149,7 @@ bitbang_txrx_le_cpha1(struct spi_device *spi,
+ {
+ /* if (cpol == 0) this is SPI_MODE_1; else this is SPI_MODE_3 */
+
++ u8 rxbit = bits - 1;
+ u32 oldbit = !(word & 1);
+ /* clock starts at inactive polarity */
+ for (; likely(bits); bits--) {
+@@ -168,7 +170,7 @@ bitbang_txrx_le_cpha1(struct spi_device *spi,
+ /* sample LSB (from slave) on trailing edge */
+ word >>= 1;
+ if ((flags & SPI_MASTER_NO_RX) == 0)
+- word |= getmiso(spi) << (bits - 1);
++ word |= getmiso(spi) << rxbit;
+ }
+ return word;
+ }
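
The spi-bitbang-txrx.h fix above is subtle: `bits` is also the loop counter, so shifting each sampled bit by the changing `(bits - 1)` scattered received bits across positions that `word >>= 1` had already vacated, corrupting LSB-first reads. Hoisting `rxbit = bits - 1` out of the loop keeps every new bit at a fixed top position. The receive half as a standalone sketch, where sample_bit() is a hypothetical stand-in for getmiso():

#include <linux/types.h>

int sample_bit(void);	/* hypothetical stand-in for getmiso(spi) */

/* Sketch: assemble an LSB-first word, one sampled bit per clock. */
static u32 rx_word_lsb_first(unsigned int bits)
{
	const u8 rxbit = bits - 1;	/* fixed landing position, computed once */
	u32 word = 0;

	for (; bits; bits--) {
		word >>= 1;		/* older bits migrate toward the LSB */
		word |= (u32)sample_bit() << rxbit;	/* newest bit lands on top */
	}
	return word;			/* first sampled bit ends up in bit 0 */
}
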
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 1175f3a46859f..27295bda3e0bd 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/tee_drv.h>
++#include <linux/uaccess.h>
+ #include <linux/uio.h>
+ #include "tee_private.h"
+
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 80d4e0676083a..365489bf4b8c1 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -527,7 +527,7 @@ static void int3400_setup_gddv(struct int3400_thermal_priv *priv)
+ priv->data_vault = kmemdup(obj->package.elements[0].buffer.pointer,
+ obj->package.elements[0].buffer.length,
+ GFP_KERNEL);
+- if (!priv->data_vault)
++ if (ZERO_OR_NULL_PTR(priv->data_vault))
+ goto out_free;
+
+ bin_attr_data_vault.private = priv->data_vault;
+@@ -597,7 +597,7 @@ static int int3400_thermal_probe(struct platform_device *pdev)
+ goto free_imok;
+ }
+
+- if (priv->data_vault) {
++ if (!ZERO_OR_NULL_PTR(priv->data_vault)) {
+ result = sysfs_create_group(&pdev->dev.kobj,
+ &data_attribute_group);
+ if (result)
+@@ -615,7 +615,8 @@ static int int3400_thermal_probe(struct platform_device *pdev)
+ free_sysfs:
+ cleanup_odvp(priv);
+ if (priv->data_vault) {
+- sysfs_remove_group(&pdev->dev.kobj, &data_attribute_group);
++ if (!ZERO_OR_NULL_PTR(priv->data_vault))
++ sysfs_remove_group(&pdev->dev.kobj, &data_attribute_group);
+ kfree(priv->data_vault);
+ }
+ free_uuid:
+@@ -647,7 +648,7 @@ static int int3400_thermal_remove(struct platform_device *pdev)
+ if (!priv->rel_misc_dev_res)
+ acpi_thermal_rel_misc_device_remove(priv->adev->handle);
+
+- if (priv->data_vault)
++ if (!ZERO_OR_NULL_PTR(priv->data_vault))
+ sysfs_remove_group(&pdev->dev.kobj, &data_attribute_group);
+ sysfs_remove_group(&pdev->dev.kobj, &uuid_attribute_group);
+ sysfs_remove_group(&pdev->dev.kobj, &imok_attribute_group);
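
The int3400 hunks above swap plain NULL checks for ZERO_OR_NULL_PTR() because kmemdup() of a zero-length buffer (possible here when the ACPI package is empty) succeeds and returns ZERO_SIZE_PTR rather than NULL. A hedged sketch of the check, assuming src/len come from a parsed firmware table:

#include <linux/slab.h>

static void *dup_table(const void *src, size_t len)
{
	void *dup = kmemdup(src, len, GFP_KERNEL);	/* len may be 0 */

	/*
	 * Zero-size allocations return ZERO_SIZE_PTR ((void *)16), which
	 * is non-NULL but must never be dereferenced, so a plain !dup
	 * test would let it through. ZERO_OR_NULL_PTR() catches both.
	 */
	if (ZERO_OR_NULL_PTR(dup))
		return NULL;
	return dup;
}
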
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index a51ca56a0ebe7..829da9cb14a86 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8723,6 +8723,8 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
+ struct scsi_device *sdp;
+ unsigned long flags;
+ int ret, retries;
++ unsigned long deadline;
++ int32_t remaining;
+
+ spin_lock_irqsave(hba->host->host_lock, flags);
+ sdp = hba->ufs_device_wlun;
+@@ -8755,9 +8757,14 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
+ * callbacks hence set the RQF_PM flag so that it doesn't resume the
+ * already suspended children.
+ */
++ deadline = jiffies + 10 * HZ;
+ for (retries = 3; retries > 0; --retries) {
++ ret = -ETIMEDOUT;
++ remaining = deadline - jiffies;
++ if (remaining <= 0)
++ break;
+ ret = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr,
+- START_STOP_TIMEOUT, 0, 0, RQF_PM, NULL);
++ remaining / HZ, 0, 0, RQF_PM, NULL);
+ if (!scsi_status_is_check_condition(ret) ||
+ !scsi_sense_valid(&sshdr) ||
+ sshdr.sense_key != UNIT_ATTENTION)
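
The ufshcd hunk above bounds the whole SSU retry loop by a single 10-second deadline instead of granting each of the three attempts a full timeout. The pattern, sketched with a hypothetical issue_cmd() in place of scsi_execute():

#include <linux/jiffies.h>
#include <linux/errno.h>

int issue_cmd(long timeout_sec);	/* hypothetical command submission */

static int retry_within_deadline(void)
{
	unsigned long deadline = jiffies + 10 * HZ;
	int retries, ret = -ETIMEDOUT;

	for (retries = 3; retries > 0; --retries) {
		long remaining = deadline - jiffies;

		if (remaining <= 0)
			break;				/* budget spent across attempts */
		ret = issue_cmd(remaining / HZ);	/* shrinking per-try timeout */
		if (ret == 0)
			break;
	}
	return ret;
}
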
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index c13b9290e3575..d0057d18d2f4a 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -557,6 +557,18 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
+ ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
+ pages, NULL, NULL);
+ if (ret > 0) {
++ int i;
++
++ /*
++ * The zero page is always resident, so we don't need to pin it
++ * and it falls into our invalid/reserved test so we don't
++ * unpin in put_pfn(). Unpin all zero pages in the batch here.
++ */
++ for (i = 0 ; i < ret; i++) {
++ if (unlikely(is_zero_pfn(page_to_pfn(pages[i]))))
++ unpin_user_page(pages[i]);
++ }
++
+ *pfn = page_to_pfn(pages[0]);
+ goto done;
+ }
+diff --git a/drivers/video/fbdev/chipsfb.c b/drivers/video/fbdev/chipsfb.c
+index 393894af26f84..2b00a9d554fc0 100644
+--- a/drivers/video/fbdev/chipsfb.c
++++ b/drivers/video/fbdev/chipsfb.c
+@@ -430,6 +430,7 @@ static int chipsfb_pci_init(struct pci_dev *dp, const struct pci_device_id *ent)
+ err_release_fb:
+ framebuffer_release(p);
+ err_disable:
++ pci_disable_device(dp);
+ err_out:
+ return rc;
+ }
+diff --git a/drivers/video/fbdev/core/fbsysfs.c b/drivers/video/fbdev/core/fbsysfs.c
+index c2a60b187467e..4d7f63892dcc4 100644
+--- a/drivers/video/fbdev/core/fbsysfs.c
++++ b/drivers/video/fbdev/core/fbsysfs.c
+@@ -84,6 +84,10 @@ void framebuffer_release(struct fb_info *info)
+ if (WARN_ON(refcount_read(&info->count)))
+ return;
+
++#if IS_ENABLED(CONFIG_FB_BACKLIGHT)
++ mutex_destroy(&info->bl_curve_mutex);
++#endif
++
+ kfree(info->apertures);
+ kfree(info);
+ }
+diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
+index 292fcb0a24fc9..6ff237cee7f87 100644
+--- a/drivers/video/fbdev/omap/omapfb_main.c
++++ b/drivers/video/fbdev/omap/omapfb_main.c
+@@ -1643,14 +1643,14 @@ static int omapfb_do_probe(struct platform_device *pdev,
+ goto cleanup;
+ }
+ fbdev->int_irq = platform_get_irq(pdev, 0);
+- if (!fbdev->int_irq) {
++ if (fbdev->int_irq < 0) {
+ dev_err(&pdev->dev, "unable to get irq\n");
+ r = ENXIO;
+ goto cleanup;
+ }
+
+ fbdev->ext_irq = platform_get_irq(pdev, 1);
+- if (!fbdev->ext_irq) {
++ if (fbdev->ext_irq < 0) {
+ dev_err(&pdev->dev, "unable to get irq\n");
+ r = ENXIO;
+ goto cleanup;
+diff --git a/fs/afs/flock.c b/fs/afs/flock.c
+index c4210a3964d8b..bbcc5afd15760 100644
+--- a/fs/afs/flock.c
++++ b/fs/afs/flock.c
+@@ -76,7 +76,7 @@ void afs_lock_op_done(struct afs_call *call)
+ if (call->error == 0) {
+ spin_lock(&vnode->lock);
+ trace_afs_flock_ev(vnode, NULL, afs_flock_timestamp, 0);
+- vnode->locked_at = call->reply_time;
++ vnode->locked_at = call->issue_time;
+ afs_schedule_lock_extension(vnode);
+ spin_unlock(&vnode->lock);
+ }
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index 4943413d9c5f7..7d37f63ef0f09 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -131,7 +131,7 @@ bad:
+
+ static time64_t xdr_decode_expiry(struct afs_call *call, u32 expiry)
+ {
+- return ktime_divns(call->reply_time, NSEC_PER_SEC) + expiry;
++ return ktime_divns(call->issue_time, NSEC_PER_SEC) + expiry;
+ }
+
+ static void xdr_decode_AFSCallBack(const __be32 **_bp,
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index a6f25d9e75b52..28bdd0387e5ea 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -137,7 +137,6 @@ struct afs_call {
+ bool need_attention; /* T if RxRPC poked us */
+ bool async; /* T if asynchronous */
+ bool upgrade; /* T to request service upgrade */
+- bool have_reply_time; /* T if have got reply_time */
+ bool intr; /* T if interruptible */
+ bool unmarshalling_error; /* T if an unmarshalling error occurred */
+ u16 service_id; /* Actual service ID (after upgrade) */
+@@ -151,7 +150,7 @@ struct afs_call {
+ } __attribute__((packed));
+ __be64 tmp64;
+ };
+- ktime_t reply_time; /* Time of first reply packet */
++ ktime_t issue_time; /* Time of issue of operation */
+ };
+
+ struct afs_call_type {
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index a5434f3e57c68..e3de7fea36435 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -347,6 +347,7 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ if (call->max_lifespan)
+ rxrpc_kernel_set_max_life(call->net->socket, rxcall,
+ call->max_lifespan);
++ call->issue_time = ktime_get_real();
+
+ /* send the request */
+ iov[0].iov_base = call->request;
+@@ -497,12 +498,6 @@ static void afs_deliver_to_call(struct afs_call *call)
+ return;
+ }
+
+- if (!call->have_reply_time &&
+- rxrpc_kernel_get_reply_time(call->net->socket,
+- call->rxcall,
+- &call->reply_time))
+- call->have_reply_time = true;
+-
+ ret = call->type->deliver(call);
+ state = READ_ONCE(call->state);
+ if (ret == 0 && call->unmarshalling_error)
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index fdc7d675b4b0c..11571cca86c19 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -232,8 +232,7 @@ static void xdr_decode_YFSCallBack(const __be32 **_bp,
+ struct afs_callback *cb = &scb->callback;
+ ktime_t cb_expiry;
+
+- cb_expiry = call->reply_time;
+- cb_expiry = ktime_add(cb_expiry, xdr_to_u64(x->expiration_time) * 100);
++ cb_expiry = ktime_add(call->issue_time, xdr_to_u64(x->expiration_time) * 100);
+ cb->expires_at = ktime_divns(cb_expiry, NSEC_PER_SEC);
+ scb->have_cb = true;
+ *_bp += xdr_size(x);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 4d8acd7e63eb5..1bbc810574f22 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1065,8 +1065,6 @@ struct btrfs_fs_info {
+
+ spinlock_t zone_active_bgs_lock;
+ struct list_head zone_active_bgs;
+- /* Waiters when BTRFS_FS_NEED_ZONE_FINISH is set */
+- wait_queue_head_t zone_finish_wait;
+
+ #ifdef CONFIG_BTRFS_FS_REF_VERIFY
+ spinlock_t ref_verify_lock;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index a2505cfc6bc10..781952c5a5c23 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3173,7 +3173,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
+ init_waitqueue_head(&fs_info->transaction_blocked_wait);
+ init_waitqueue_head(&fs_info->async_submit_wait);
+ init_waitqueue_head(&fs_info->delayed_iputs_wait);
+- init_waitqueue_head(&fs_info->zone_finish_wait);
+
+ /* Usable values until the real ones are cached from the superblock */
+ fs_info->nodesize = 4096;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 61496ecb1e201..f79f8d7cffcf2 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1643,10 +1643,9 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
+ done_offset = end;
+
+ if (done_offset == start) {
+- struct btrfs_fs_info *info = inode->root->fs_info;
+-
+- wait_var_event(&info->zone_finish_wait,
+- !test_bit(BTRFS_FS_NEED_ZONE_FINISH, &info->flags));
++ wait_on_bit_io(&inode->root->fs_info->flags,
++ BTRFS_FS_NEED_ZONE_FINISH,
++ TASK_UNINTERRUPTIBLE);
+ continue;
+ }
+
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index b0c5b4738b1f7..17623e6410c5d 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -199,7 +199,7 @@ static u64 calc_chunk_size(const struct btrfs_fs_info *fs_info, u64 flags)
+ ASSERT(flags & BTRFS_BLOCK_GROUP_TYPE_MASK);
+
+ if (flags & BTRFS_BLOCK_GROUP_DATA)
+- return SZ_1G;
++ return BTRFS_MAX_DATA_CHUNK_SIZE;
+ else if (flags & BTRFS_BLOCK_GROUP_SYSTEM)
+ return SZ_32M;
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 3460fd6743807..16e01fbdcec83 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -5266,6 +5266,9 @@ static int decide_stripe_size_regular(struct alloc_chunk_ctl *ctl,
+ ctl->stripe_size);
+ }
+
++ /* Stripe size should not go beyond 1G. */
++ ctl->stripe_size = min_t(u64, ctl->stripe_size, SZ_1G);
++
+ /* Align to BTRFS_STRIPE_LEN */
+ ctl->stripe_size = round_down(ctl->stripe_size, BTRFS_STRIPE_LEN);
+ ctl->chunk_size = ctl->stripe_size * data_stripes;
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 31cb11daa8e82..1386362fad3b8 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -421,10 +421,19 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
+ * since btrfs adds the pages one by one to a bio, and btrfs cannot
+ * increase the metadata reservation even if it increases the number of
+ * extents, it is safe to stick with the limit.
++ *
++ * With zoned emulation, we can have a non-zoned device in zoned
++ * mode. In this case, there is no valid max zone append size, so
++ * use max_segments * PAGE_SIZE as a pseudo max_zone_append_size.
+ */
+- zone_info->max_zone_append_size =
+- min_t(u64, (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT,
+- (u64)bdev_max_segments(bdev) << PAGE_SHIFT);
++ if (bdev_is_zoned(bdev)) {
++ zone_info->max_zone_append_size = min_t(u64,
++ (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT,
++ (u64)bdev_max_segments(bdev) << PAGE_SHIFT);
++ } else {
++ zone_info->max_zone_append_size =
++ (u64)bdev_max_segments(bdev) << PAGE_SHIFT;
++ }
+ if (!IS_ALIGNED(nr_sectors, zone_sectors))
+ zone_info->nr_zones++;
+
+@@ -1178,7 +1187,7 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
+ * offset.
+ */
+ static int calculate_alloc_pointer(struct btrfs_block_group *cache,
+- u64 *offset_ret)
++ u64 *offset_ret, bool new)
+ {
+ struct btrfs_fs_info *fs_info = cache->fs_info;
+ struct btrfs_root *root;
+@@ -1188,6 +1197,21 @@ static int calculate_alloc_pointer(struct btrfs_block_group *cache,
+ int ret;
+ u64 length;
+
++ /*
++ * Avoid tree lookups for a new block group, there's no use for it.
++ * It must always be 0.
++ *
++ * Also, we have a lock chain of extent buffer lock -> chunk mutex.
++ * For new a block group, this function is called from
++ * btrfs_make_block_group() which is already taking the chunk mutex.
++ * Thus, we cannot call calculate_alloc_pointer() which takes extent
++ * buffer locks to avoid deadlock.
++ */
++ if (new) {
++ *offset_ret = 0;
++ return 0;
++ }
++
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+@@ -1323,6 +1347,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ else
+ num_conventional++;
+
++ /*
++ * Consider a zone as active if we can allow any number of
++ * active zones.
++ */
++ if (!device->zone_info->max_active_zones)
++ __set_bit(i, active);
++
+ if (!is_sequential) {
+ alloc_offsets[i] = WP_CONVENTIONAL;
+ continue;
+@@ -1389,45 +1420,23 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ __set_bit(i, active);
+ break;
+ }
+-
+- /*
+- * Consider a zone as active if we can allow any number of
+- * active zones.
+- */
+- if (!device->zone_info->max_active_zones)
+- __set_bit(i, active);
+ }
+
+ if (num_sequential > 0)
+ cache->seq_zone = true;
+
+ if (num_conventional > 0) {
+- /*
+- * Avoid calling calculate_alloc_pointer() for new BG. It
+- * is no use for new BG. It must be always 0.
+- *
+- * Also, we have a lock chain of extent buffer lock ->
+- * chunk mutex. For new BG, this function is called from
+- * btrfs_make_block_group() which is already taking the
+- * chunk mutex. Thus, we cannot call
+- * calculate_alloc_pointer() which takes extent buffer
+- * locks to avoid deadlock.
+- */
+-
+ /* Zone capacity is always zone size in emulation */
+ cache->zone_capacity = cache->length;
+- if (new) {
+- cache->alloc_offset = 0;
+- goto out;
+- }
+- ret = calculate_alloc_pointer(cache, &last_alloc);
+- if (ret || map->num_stripes == num_conventional) {
+- if (!ret)
+- cache->alloc_offset = last_alloc;
+- else
+- btrfs_err(fs_info,
++ ret = calculate_alloc_pointer(cache, &last_alloc, new);
++ if (ret) {
++ btrfs_err(fs_info,
+ "zoned: failed to determine allocation offset of bg %llu",
+- cache->start);
++ cache->start);
++ goto out;
++ } else if (map->num_stripes == num_conventional) {
++ cache->alloc_offset = last_alloc;
++ cache->zone_is_active = 1;
+ goto out;
+ }
+ }
+@@ -1495,13 +1504,6 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ goto out;
+ }
+
+- if (cache->zone_is_active) {
+- btrfs_get_block_group(cache);
+- spin_lock(&fs_info->zone_active_bgs_lock);
+- list_add_tail(&cache->active_bg_list, &fs_info->zone_active_bgs);
+- spin_unlock(&fs_info->zone_active_bgs_lock);
+- }
+-
+ out:
+ if (cache->alloc_offset > fs_info->zone_size) {
+ btrfs_err(fs_info,
+@@ -1526,10 +1528,16 @@ out:
+ ret = -EIO;
+ }
+
+- if (!ret)
++ if (!ret) {
+ cache->meta_write_pointer = cache->alloc_offset + cache->start;
+-
+- if (ret) {
++ if (cache->zone_is_active) {
++ btrfs_get_block_group(cache);
++ spin_lock(&fs_info->zone_active_bgs_lock);
++ list_add_tail(&cache->active_bg_list,
++ &fs_info->zone_active_bgs);
++ spin_unlock(&fs_info->zone_active_bgs_lock);
++ }
++ } else {
+ kfree(cache->physical_map);
+ cache->physical_map = NULL;
+ }
+@@ -2007,8 +2015,7 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ /* For active_bg_list */
+ btrfs_put_block_group(block_group);
+
+- clear_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags);
+- wake_up_all(&fs_info->zone_finish_wait);
++ clear_and_wake_up_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags);
+
+ return 0;
+ }
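
The zoned.c hunks above drop the private zone_finish_wait waitqueue in favour of the generic bit-wait API: run_delalloc_zoned() now sleeps in wait_on_bit_io() and do_zone_finish() wakes it with clear_and_wake_up_bit(), which combines the clear with the matching wake-up barrier. The pairing, as a minimal sketch on a hypothetical flags word:

#include <linux/sched.h>
#include <linux/wait_bit.h>

static unsigned long example_flags;	/* hypothetical state word */
#define EXAMPLE_NEED_FINISH 0		/* hypothetical bit number */

/* Waiter: block (uninterruptibly, as for I/O) until the bit clears. */
static void example_wait(void)
{
	wait_on_bit_io(&example_flags, EXAMPLE_NEED_FINISH,
		       TASK_UNINTERRUPTIBLE);
}

/* Finisher: clear the bit and wake all bit-waiters in one call. */
static void example_finish(void)
{
	clear_and_wake_up_bit(EXAMPLE_NEED_FINISH, &example_flags);
}
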
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index f5dcc4940b6da..9dfd2dd612c25 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -61,7 +61,6 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
+ nr_ioctl_req.Reserved = 0;
+ rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
+ fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
+- true /* is_fsctl */,
+ (char *)&nr_ioctl_req, sizeof(nr_ioctl_req),
+ CIFSMaxBufSize, NULL, NULL /* no return info */);
+ if (rc == -EOPNOTSUPP) {
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 3898ec2632dc4..e8a8daa82ed76 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -680,7 +680,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
+ struct cifs_ses *ses = tcon->ses;
+
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+- FSCTL_QUERY_NETWORK_INTERFACE_INFO, true /* is_fsctl */,
++ FSCTL_QUERY_NETWORK_INTERFACE_INFO,
+ NULL /* no data input */, 0 /* no data input */,
+ CIFSMaxBufSize, (char **)&out_buf, &ret_data_len);
+ if (rc == -EOPNOTSUPP) {
+@@ -1609,9 +1609,8 @@ SMB2_request_res_key(const unsigned int xid, struct cifs_tcon *tcon,
+ struct resume_key_req *res_key;
+
+ rc = SMB2_ioctl(xid, tcon, persistent_fid, volatile_fid,
+- FSCTL_SRV_REQUEST_RESUME_KEY, true /* is_fsctl */,
+- NULL, 0 /* no input */, CIFSMaxBufSize,
+- (char **)&res_key, &ret_data_len);
++ FSCTL_SRV_REQUEST_RESUME_KEY, NULL, 0 /* no input */,
++ CIFSMaxBufSize, (char **)&res_key, &ret_data_len);
+
+ if (rc == -EOPNOTSUPP) {
+ pr_warn_once("Server share %s does not support copy range\n", tcon->treeName);
+@@ -1753,7 +1752,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
+
+ rc = SMB2_ioctl_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
+- qi.info_type, true, buffer, qi.output_buffer_length,
++ qi.info_type, buffer, qi.output_buffer_length,
+ CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
+ MAX_SMB2_CLOSE_RESPONSE_SIZE);
+ free_req1_func = SMB2_ioctl_free;
+@@ -1929,9 +1928,8 @@ smb2_copychunk_range(const unsigned int xid,
+ retbuf = NULL;
+ rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
+- true /* is_fsctl */, (char *)pcchunk,
+- sizeof(struct copychunk_ioctl), CIFSMaxBufSize,
+- (char **)&retbuf, &ret_data_len);
++ (char *)pcchunk, sizeof(struct copychunk_ioctl),
++ CIFSMaxBufSize, (char **)&retbuf, &ret_data_len);
+ if (rc == 0) {
+ if (ret_data_len !=
+ sizeof(struct copychunk_ioctl_rsp)) {
+@@ -2091,7 +2089,6 @@ static bool smb2_set_sparse(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, FSCTL_SET_SPARSE,
+- true /* is_fctl */,
+ &setsparse, 1, CIFSMaxBufSize, NULL, NULL);
+ if (rc) {
+ tcon->broken_sparse_sup = true;
+@@ -2174,7 +2171,6 @@ smb2_duplicate_extents(const unsigned int xid,
+ rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ trgtfile->fid.volatile_fid,
+ FSCTL_DUPLICATE_EXTENTS_TO_FILE,
+- true /* is_fsctl */,
+ (char *)&dup_ext_buf,
+ sizeof(struct duplicate_extents_to_file),
+ CIFSMaxBufSize, NULL,
+@@ -2209,7 +2205,6 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ return SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+ FSCTL_SET_INTEGRITY_INFORMATION,
+- true /* is_fsctl */,
+ (char *)&integr_info,
+ sizeof(struct fsctl_set_integrity_information_req),
+ CIFSMaxBufSize, NULL,
+@@ -2262,7 +2257,6 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+ FSCTL_SRV_ENUMERATE_SNAPSHOTS,
+- true /* is_fsctl */,
+ NULL, 0 /* no input data */, max_response_size,
+ (char **)&retbuf,
+ &ret_data_len);
+@@ -2982,7 +2976,6 @@ smb2_get_dfs_refer(const unsigned int xid, struct cifs_ses *ses,
+ do {
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ FSCTL_DFS_GET_REFERRALS,
+- true /* is_fsctl */,
+ (char *)dfs_req, dfs_req_size, CIFSMaxBufSize,
+ (char **)&dfs_rsp, &dfs_rsp_size);
+ if (!is_retryable_error(rc))
+@@ -3189,8 +3182,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = SMB2_ioctl_init(tcon, server,
+ &rqst[1], fid.persistent_fid,
+- fid.volatile_fid, FSCTL_GET_REPARSE_POINT,
+- true /* is_fctl */, NULL, 0,
++ fid.volatile_fid, FSCTL_GET_REPARSE_POINT, NULL, 0,
+ CIFSMaxBufSize -
+ MAX_SMB2_CREATE_RESPONSE_SIZE -
+ MAX_SMB2_CLOSE_RESPONSE_SIZE);
+@@ -3370,8 +3362,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = SMB2_ioctl_init(tcon, server,
+ &rqst[1], COMPOUND_FID,
+- COMPOUND_FID, FSCTL_GET_REPARSE_POINT,
+- true /* is_fctl */, NULL, 0,
++ COMPOUND_FID, FSCTL_GET_REPARSE_POINT, NULL, 0,
+ CIFSMaxBufSize -
+ MAX_SMB2_CREATE_RESPONSE_SIZE -
+ MAX_SMB2_CLOSE_RESPONSE_SIZE);
+@@ -3599,26 +3590,43 @@ get_smb2_acl(struct cifs_sb_info *cifs_sb,
+ return pntsd;
+ }
+
++static long smb3_zero_data(struct file *file, struct cifs_tcon *tcon,
++ loff_t offset, loff_t len, unsigned int xid)
++{
++ struct cifsFileInfo *cfile = file->private_data;
++ struct file_zero_data_information fsctl_buf;
++
++ cifs_dbg(FYI, "Offset %lld len %lld\n", offset, len);
++
++ fsctl_buf.FileOffset = cpu_to_le64(offset);
++ fsctl_buf.BeyondFinalZero = cpu_to_le64(offset + len);
++
++ return SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
++ cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
++ (char *)&fsctl_buf,
++ sizeof(struct file_zero_data_information),
++ 0, NULL, NULL);
++}
++
+ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ loff_t offset, loff_t len, bool keep_size)
+ {
+ struct cifs_ses *ses = tcon->ses;
+- struct inode *inode;
+- struct cifsInodeInfo *cifsi;
++ struct inode *inode = file_inode(file);
++ struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ struct cifsFileInfo *cfile = file->private_data;
+- struct file_zero_data_information fsctl_buf;
+ long rc;
+ unsigned int xid;
+ __le64 eof;
+
+ xid = get_xid();
+
+- inode = d_inode(cfile->dentry);
+- cifsi = CIFS_I(inode);
+-
+ trace_smb3_zero_enter(xid, cfile->fid.persistent_fid, tcon->tid,
+ ses->Suid, offset, len);
+
++ inode_lock(inode);
++ filemap_invalidate_lock(inode->i_mapping);
++
+ /*
+ * We zero the range through ioctl, so we need remove the page caches
+ * first, otherwise the data may be inconsistent with the server.
+@@ -3626,26 +3634,12 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ truncate_pagecache_range(inode, offset, offset + len - 1);
+
+ /* if file not oplocked can't be sure whether asking to extend size */
+- if (!CIFS_CACHE_READ(cifsi))
+- if (keep_size == false) {
+- rc = -EOPNOTSUPP;
+- trace_smb3_zero_err(xid, cfile->fid.persistent_fid,
+- tcon->tid, ses->Suid, offset, len, rc);
+- free_xid(xid);
+- return rc;
+- }
+-
+- cifs_dbg(FYI, "Offset %lld len %lld\n", offset, len);
+-
+- fsctl_buf.FileOffset = cpu_to_le64(offset);
+- fsctl_buf.BeyondFinalZero = cpu_to_le64(offset + len);
++ rc = -EOPNOTSUPP;
++ if (keep_size == false && !CIFS_CACHE_READ(cifsi))
++ goto zero_range_exit;
+
+- rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+- cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA, true,
+- (char *)&fsctl_buf,
+- sizeof(struct file_zero_data_information),
+- 0, NULL, NULL);
+- if (rc)
++ rc = smb3_zero_data(file, tcon, offset, len, xid);
++ if (rc < 0)
+ goto zero_range_exit;
+
+ /*
+@@ -3658,6 +3652,8 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ }
+
+ zero_range_exit:
++ filemap_invalidate_unlock(inode->i_mapping);
++ inode_unlock(inode);
+ free_xid(xid);
+ if (rc)
+ trace_smb3_zero_err(xid, cfile->fid.persistent_fid, tcon->tid,
+@@ -3702,7 +3698,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
+- true /* is_fctl */, (char *)&fsctl_buf,
++ (char *)&fsctl_buf,
+ sizeof(struct file_zero_data_information),
+ CIFSMaxBufSize, NULL, NULL);
+ filemap_invalidate_unlock(inode->i_mapping);
+@@ -3764,7 +3760,7 @@ static int smb3_simple_fallocate_range(unsigned int xid,
+ in_data.length = cpu_to_le64(len);
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+- FSCTL_QUERY_ALLOCATED_RANGES, true,
++ FSCTL_QUERY_ALLOCATED_RANGES,
+ (char *)&in_data, sizeof(in_data),
+ 1024 * sizeof(struct file_allocated_range_buffer),
+ (char **)&out_data, &out_data_len);
+@@ -4085,7 +4081,7 @@ static loff_t smb3_llseek(struct file *file, struct cifs_tcon *tcon, loff_t offs
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+- FSCTL_QUERY_ALLOCATED_RANGES, true,
++ FSCTL_QUERY_ALLOCATED_RANGES,
+ (char *)&in_data, sizeof(in_data),
+ sizeof(struct file_allocated_range_buffer),
+ (char **)&out_data, &out_data_len);
+@@ -4145,7 +4141,7 @@ static int smb3_fiemap(struct cifs_tcon *tcon,
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+- FSCTL_QUERY_ALLOCATED_RANGES, true,
++ FSCTL_QUERY_ALLOCATED_RANGES,
+ (char *)&in_data, sizeof(in_data),
+ 1024 * sizeof(struct file_allocated_range_buffer),
+ (char **)&out_data, &out_data_len);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ba58d7fd54f9e..31d37afae741f 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1174,7 +1174,7 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ }
+
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+- FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
++ FSCTL_VALIDATE_NEGOTIATE_INFO,
+ (char *)pneg_inbuf, inbuflen, CIFSMaxBufSize,
+ (char **)&pneg_rsp, &rsplen);
+ if (rc == -EOPNOTSUPP) {
+@@ -3053,7 +3053,7 @@ int
+ SMB2_ioctl_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ struct smb_rqst *rqst,
+ u64 persistent_fid, u64 volatile_fid, u32 opcode,
+- bool is_fsctl, char *in_data, u32 indatalen,
++ char *in_data, u32 indatalen,
+ __u32 max_response_size)
+ {
+ struct smb2_ioctl_req *req;
+@@ -3128,10 +3128,8 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ req->hdr.CreditCharge =
+ cpu_to_le16(DIV_ROUND_UP(max(indatalen, max_response_size),
+ SMB2_MAX_BUFFER_SIZE));
+- if (is_fsctl)
+- req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+- else
+- req->Flags = 0;
++ /* always an FSCTL (for now) */
++ req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+
+ /* validate negotiate request must be signed - see MS-SMB2 3.2.5.5 */
+ if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO)
+@@ -3158,9 +3156,9 @@ SMB2_ioctl_free(struct smb_rqst *rqst)
+ */
+ int
+ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+- u64 volatile_fid, u32 opcode, bool is_fsctl,
+- char *in_data, u32 indatalen, u32 max_out_data_len,
+- char **out_data, u32 *plen /* returned data len */)
++ u64 volatile_fid, u32 opcode, char *in_data, u32 indatalen,
++ u32 max_out_data_len, char **out_data,
++ u32 *plen /* returned data len */)
+ {
+ struct smb_rqst rqst;
+ struct smb2_ioctl_rsp *rsp = NULL;
+@@ -3202,7 +3200,7 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+
+ rc = SMB2_ioctl_init(tcon, server,
+ &rqst, persistent_fid, volatile_fid, opcode,
+- is_fsctl, in_data, indatalen, max_out_data_len);
++ in_data, indatalen, max_out_data_len);
+ if (rc)
+ goto ioctl_exit;
+
+@@ -3294,7 +3292,7 @@ SMB2_set_compression(const unsigned int xid, struct cifs_tcon *tcon,
+ cpu_to_le16(COMPRESSION_FORMAT_DEFAULT);
+
+ rc = SMB2_ioctl(xid, tcon, persistent_fid, volatile_fid,
+- FSCTL_SET_COMPRESSION, true /* is_fsctl */,
++ FSCTL_SET_COMPRESSION,
+ (char *)&fsctl_input /* data input */,
+ 2 /* in data len */, CIFSMaxBufSize /* max out data */,
+ &ret_data /* out data */, NULL);
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index a69f1eed1cfe5..d57d7202dc367 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -147,13 +147,13 @@ extern int SMB2_open_init(struct cifs_tcon *tcon,
+ extern void SMB2_open_free(struct smb_rqst *rqst);
+ extern int SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon,
+ u64 persistent_fid, u64 volatile_fid, u32 opcode,
+- bool is_fsctl, char *in_data, u32 indatalen, u32 maxoutlen,
++ char *in_data, u32 indatalen, u32 maxoutlen,
+ char **out_data, u32 *plen /* returned data len */);
+ extern int SMB2_ioctl_init(struct cifs_tcon *tcon,
+ struct TCP_Server_Info *server,
+ struct smb_rqst *rqst,
+ u64 persistent_fid, u64 volatile_fid, u32 opcode,
+- bool is_fsctl, char *in_data, u32 indatalen,
++ char *in_data, u32 indatalen,
+ __u32 max_response_size);
+ extern void SMB2_ioctl_free(struct smb_rqst *rqst);
+ extern int SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 3dcf0b8b4e932..232cfdf095aeb 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -744,6 +744,28 @@ void debugfs_remove(struct dentry *dentry)
+ }
+ EXPORT_SYMBOL_GPL(debugfs_remove);
+
++/**
++ * debugfs_lookup_and_remove - lookup a directory or file and recursively remove it
++ * @name: a pointer to a string containing the name of the item to look up.
++ * @parent: a pointer to the parent dentry of the item.
++ *
++ * This is the equivalent of doing something like
++ * debugfs_remove(debugfs_lookup(..)) but with the proper reference counting
++ * handled for the directory being looked up.
++ */
++void debugfs_lookup_and_remove(const char *name, struct dentry *parent)
++{
++ struct dentry *dentry;
++
++ dentry = debugfs_lookup(name, parent);
++ if (!dentry)
++ return;
++
++ debugfs_remove(dentry);
++ dput(dentry);
++}
++EXPORT_SYMBOL_GPL(debugfs_lookup_and_remove);
++
+ /**
+ * debugfs_rename - rename a file/directory in the debugfs filesystem
+ * @old_dir: a pointer to the parent dentry for the renamed object. This
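
The new debugfs_lookup_and_remove() above closes a common leak: debugfs_lookup() grabs a reference on the dentry it returns, so the widespread debugfs_remove(debugfs_lookup(...)) pattern removed the file but never dropped that reference. Usage is a one-liner; a sketch with a hypothetical caller:

#include <linux/debugfs.h>

/* Sketch: remove a "stats" file created earlier under parent. */
static void example_cleanup(struct dentry *parent)
{
	/* Looks up, removes, and dput()s the lookup reference internally. */
	debugfs_lookup_and_remove("stats", parent);
}
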
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 8e01d89c3319e..b5fd9d71e67f1 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -222,8 +222,10 @@ static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio)
+
+ rreq = erofs_fscache_alloc_request(folio_mapping(folio),
+ folio_pos(folio), folio_size(folio));
+- if (IS_ERR(rreq))
++ if (IS_ERR(rreq)) {
++ ret = PTR_ERR(rreq);
+ goto out;
++ }
+
+ return erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
+ rreq, mdev.m_pa);
+@@ -301,8 +303,10 @@ static int erofs_fscache_read_folio(struct file *file, struct folio *folio)
+
+ rreq = erofs_fscache_alloc_request(folio_mapping(folio),
+ folio_pos(folio), folio_size(folio));
+- if (IS_ERR(rreq))
++ if (IS_ERR(rreq)) {
++ ret = PTR_ERR(rreq);
+ goto out_unlock;
++ }
+
+ pstart = mdev.m_pa + (pos - map.m_la);
+ return erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index cfee49d33b95a..a01cc82795a25 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -195,7 +195,6 @@ struct erofs_workgroup {
+ atomic_t refcount;
+ };
+
+-#if defined(CONFIG_SMP)
+ static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+ int val)
+ {
+@@ -224,34 +223,6 @@ static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
+ return atomic_cond_read_relaxed(&grp->refcount,
+ VAL != EROFS_LOCKED_MAGIC);
+ }
+-#else
+-static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+- int val)
+-{
+- preempt_disable();
+- /* no need to spin on UP platforms, let's just disable preemption. */
+- if (val != atomic_read(&grp->refcount)) {
+- preempt_enable();
+- return false;
+- }
+- return true;
+-}
+-
+-static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
+- int orig_val)
+-{
+- preempt_enable();
+-}
+-
+-static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
+-{
+- int v = atomic_read(&grp->refcount);
+-
+- /* workgroup is never freezed on uniprocessor systems */
+- DBG_BUGON(v == EROFS_LOCKED_MAGIC);
+- return v;
+-}
+-#endif /* !CONFIG_SMP */
+ #endif /* !CONFIG_EROFS_FS_ZIP */
+
+ /* we strictly follow PAGE_SIZE and no buffer head yet */
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 81d26abf486fa..da85b39791957 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -141,6 +141,8 @@ struct tracefs_mount_opts {
+ kuid_t uid;
+ kgid_t gid;
+ umode_t mode;
++ /* Opt_* bitfield. */
++ unsigned int opts;
+ };
+
+ enum {
+@@ -241,6 +243,7 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ kgid_t gid;
+ char *p;
+
++ opts->opts = 0;
+ opts->mode = TRACEFS_DEFAULT_MODE;
+
+ while ((p = strsep(&data, ",")) != NULL) {
+@@ -275,24 +278,36 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ * but traditionally tracefs has ignored all mount options
+ */
+ }
++
++ opts->opts |= BIT(token);
+ }
+
+ return 0;
+ }
+
+-static int tracefs_apply_options(struct super_block *sb)
++static int tracefs_apply_options(struct super_block *sb, bool remount)
+ {
+ struct tracefs_fs_info *fsi = sb->s_fs_info;
+ struct inode *inode = d_inode(sb->s_root);
+ struct tracefs_mount_opts *opts = &fsi->mount_opts;
+
+- inode->i_mode &= ~S_IALLUGO;
+- inode->i_mode |= opts->mode;
++ /*
++ * On remount, only reset mode/uid/gid if they were provided as mount
++ * options.
++ */
++
++ if (!remount || opts->opts & BIT(Opt_mode)) {
++ inode->i_mode &= ~S_IALLUGO;
++ inode->i_mode |= opts->mode;
++ }
+
+- inode->i_uid = opts->uid;
++ if (!remount || opts->opts & BIT(Opt_uid))
++ inode->i_uid = opts->uid;
+
+- /* Set all the group ids to the mount option */
+- set_gid(sb->s_root, opts->gid);
++ if (!remount || opts->opts & BIT(Opt_gid)) {
++ /* Set all the group ids to the mount option */
++ set_gid(sb->s_root, opts->gid);
++ }
+
+ return 0;
+ }
+@@ -307,7 +322,7 @@ static int tracefs_remount(struct super_block *sb, int *flags, char *data)
+ if (err)
+ goto fail;
+
+- tracefs_apply_options(sb);
++ tracefs_apply_options(sb, true);
+
+ fail:
+ return err;
+@@ -359,7 +374,7 @@ static int trace_fill_super(struct super_block *sb, void *data, int silent)
+
+ sb->s_op = &tracefs_super_operations;
+
+- tracefs_apply_options(sb);
++ tracefs_apply_options(sb, false);
+
+ return 0;
+
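
The tracefs hunks above make remounts non-destructive by remembering which mount options were actually given: each parsed token sets a bit in opts->opts, and tracefs_apply_options() only overrides uid/gid/mode on remount if the matching bit is set. The bookkeeping trick in isolation, with a hypothetical token enum:

#include <linux/bits.h>
#include <linux/types.h>

enum { Opt_uid, Opt_gid, Opt_mode };	/* hypothetical token values */

static unsigned int opts_seen;		/* one bit per option parsed */

static void note_token(int token)
{
	opts_seen |= BIT(token);
}

/* Apply an option on remount only if it was explicitly passed. */
static bool option_applies(int token, bool remount)
{
	return !remount || (opts_seen & BIT(token));
}
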
+diff --git a/include/kunit/test.h b/include/kunit/test.h
+index 8ffcd7de96070..648dbb00a3008 100644
+--- a/include/kunit/test.h
++++ b/include/kunit/test.h
+@@ -863,7 +863,7 @@ do { \
+
+ #define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...) \
+ KUNIT_BINARY_INT_ASSERTION(test, \
+- KUNIT_ASSERTION, \
++ KUNIT_EXPECTATION, \
+ left, <=, right, \
+ fmt, \
+ ##__VA_ARGS__)
+@@ -1153,7 +1153,7 @@ do { \
+
+ #define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...) \
+ KUNIT_BINARY_INT_ASSERTION(test, \
+- KUNIT_EXPECTATION, \
++ KUNIT_ASSERTION, \
+ left, <, right, \
+ fmt, \
+ ##__VA_ARGS__)
+@@ -1194,7 +1194,7 @@ do { \
+
+ #define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...) \
+ KUNIT_BINARY_INT_ASSERTION(test, \
+- KUNIT_EXPECTATION, \
++ KUNIT_ASSERTION, \
+ left, >, right, \
+ fmt, \
+ ##__VA_ARGS__)
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index badcc0e3418f2..262664107b839 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -136,6 +136,17 @@ BUFFER_FNS(Defer_Completion, defer_completion)
+
+ static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
+ {
++ /*
++ * If somebody else already set this uptodate, they will
++ * have done the memory barrier, and a reader will thus
++ * see *some* valid buffer state.
++ *
++ * Any other serialization (with IO errors or whatever that
++ * might clear the bit) has to come from other state (eg BH_Lock).
++ */
++ if (test_bit(BH_Uptodate, &bh->b_state))
++ return;
++
+ /*
+ * make it consistent with folio_mark_uptodate
+ * pairs with smp_load_acquire in buffer_uptodate
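
The buffer_head hunk above adds a test_bit() fast path: if the flag is already set, a previous setter has already paid for the release barrier, so re-running smp_mb__before_atomic() + set_bit() is pure overhead on hot paths. The idiom reduced to its core, on a hypothetical flag word:

#include <linux/atomic.h>
#include <linux/bitops.h>

static unsigned long example_state;	/* hypothetical flag word */
#define EXAMPLE_UPTODATE 0

static void example_set_once(void)
{
	/* Fast path: an earlier setter already provided the barrier. */
	if (test_bit(EXAMPLE_UPTODATE, &example_state))
		return;

	/*
	 * Slow path: order prior stores before publishing the flag;
	 * pairs with smp_load_acquire() on the reader side.
	 */
	smp_mb__before_atomic();
	set_bit(EXAMPLE_UPTODATE, &example_state);
}
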
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index c869f1e73d755..f60674692d365 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -91,6 +91,8 @@ struct dentry *debugfs_create_automount(const char *name,
+ void debugfs_remove(struct dentry *dentry);
+ #define debugfs_remove_recursive debugfs_remove
+
++void debugfs_lookup_and_remove(const char *name, struct dentry *parent);
++
+ const struct file_operations *debugfs_real_fops(const struct file *filp);
+
+ int debugfs_file_get(struct dentry *dentry);
+@@ -225,6 +227,10 @@ static inline void debugfs_remove(struct dentry *dentry)
+ static inline void debugfs_remove_recursive(struct dentry *dentry)
+ { }
+
++static inline void debugfs_lookup_and_remove(const char *name,
++ struct dentry *parent)
++{ }
++
+ const struct file_operations *debugfs_real_fops(const struct file *filp);
+
+ static inline int debugfs_file_get(struct dentry *dentry)
+diff --git a/include/linux/dmar.h b/include/linux/dmar.h
+index cbd714a198a0a..f3a3d95df5325 100644
+--- a/include/linux/dmar.h
++++ b/include/linux/dmar.h
+@@ -69,6 +69,7 @@ struct dmar_pci_notify_info {
+
+ extern struct rw_semaphore dmar_global_lock;
+ extern struct list_head dmar_drhd_units;
++extern int intel_iommu_enabled;
+
+ #define for_each_drhd_unit(drhd) \
+ list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \
+@@ -92,7 +93,8 @@ extern struct list_head dmar_drhd_units;
+ static inline bool dmar_rcu_check(void)
+ {
+ return rwsem_is_locked(&dmar_global_lock) ||
+- system_state == SYSTEM_BOOTING;
++ system_state == SYSTEM_BOOTING ||
++ (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled);
+ }
+
+ #define dmar_rcu_dereference(p) rcu_dereference_check((p), dmar_rcu_check())
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index eafa1d2489fda..4e94755098f19 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -406,4 +406,5 @@ LSM_HOOK(int, 0, perf_event_write, struct perf_event *event)
+ #ifdef CONFIG_IO_URING
+ LSM_HOOK(int, 0, uring_override_creds, const struct cred *new)
+ LSM_HOOK(int, 0, uring_sqpoll, void)
++LSM_HOOK(int, 0, uring_cmd, struct io_uring_cmd *ioucmd)
+ #endif /* CONFIG_IO_URING */
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index 91c8146649f59..b681cfce6190a 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1575,6 +1575,9 @@
+ * Check whether the current task is allowed to spawn a io_uring polling
+ * thread (IORING_SETUP_SQPOLL).
+ *
++ * @uring_cmd:
++ * Check whether the file_operations uring_cmd is allowed to run.
++ *
+ */
+ union security_list_options {
+ #define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__);
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 7fc4e9f49f542..3cc127bb5bfd4 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -2051,6 +2051,7 @@ static inline int security_perf_event_write(struct perf_event *event)
+ #ifdef CONFIG_SECURITY
+ extern int security_uring_override_creds(const struct cred *new);
+ extern int security_uring_sqpoll(void);
++extern int security_uring_cmd(struct io_uring_cmd *ioucmd);
+ #else
+ static inline int security_uring_override_creds(const struct cred *new)
+ {
+@@ -2060,6 +2061,10 @@ static inline int security_uring_sqpoll(void)
+ {
+ return 0;
+ }
++static inline int security_uring_cmd(struct io_uring_cmd *ioucmd)
++{
++ return 0;
++}
+ #endif /* CONFIG_SECURITY */
+ #endif /* CONFIG_IO_URING */
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 2f41364a6791e..63d0a21b63162 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2528,6 +2528,22 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
+ return skb_headlen(skb) + __skb_pagelen(skb);
+ }
+
++static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
++ int i, struct page *page,
++ int off, int size)
++{
++ skb_frag_t *frag = &shinfo->frags[i];
++
++ /*
++ * Propagate page pfmemalloc to the skb if we can. The problem is
++ * that not all callers have unique ownership of the page but rely
++ * on page_is_pfmemalloc doing the right thing(tm).
++ */
++ frag->bv_page = page;
++ frag->bv_offset = off;
++ skb_frag_size_set(frag, size);
++}
++
+ /**
+ * __skb_fill_page_desc - initialise a paged fragment in an skb
+ * @skb: buffer containing fragment to be initialised
+@@ -2544,17 +2560,7 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
+ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
+ struct page *page, int off, int size)
+ {
+- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+-
+- /*
+- * Propagate page pfmemalloc to the skb if we can. The problem is
+- * that not all callers have unique ownership of the page but rely
+- * on page_is_pfmemalloc doing the right thing(tm).
+- */
+- frag->bv_page = page;
+- frag->bv_offset = off;
+- skb_frag_size_set(frag, size);
+-
++ __skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size);
+ page = compound_head(page);
+ if (page_is_pfmemalloc(page))
+ skb->pfmemalloc = true;
+@@ -2581,6 +2587,27 @@ static inline void skb_fill_page_desc(struct sk_buff *skb, int i,
+ skb_shinfo(skb)->nr_frags = i + 1;
+ }
+
++/**
++ * skb_fill_page_desc_noacc - initialise a paged fragment in an skb
++ * @skb: buffer containing fragment to be initialised
++ * @i: paged fragment index to initialise
++ * @page: the page to use for this fragment
++ * @off: the offset to the data with @page
++ * @size: the length of the data
++ *
++ * Variant of skb_fill_page_desc() which does not deal with
++ * pfmemalloc, if page is not owned by us.
++ */
++static inline void skb_fill_page_desc_noacc(struct sk_buff *skb, int i,
++ struct page *page, int off,
++ int size)
++{
++ struct skb_shared_info *shinfo = skb_shinfo(skb);
++
++ __skb_fill_page_desc_noacc(shinfo, i, page, off, size);
++ shinfo->nr_frags = i + 1;
++}
++
+ void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
+ int size, unsigned int truesize);
+
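
The skbuff.h refactor above extracts the frag setup into __skb_fill_page_desc_noacc() and adds skb_fill_page_desc_noacc() for callers that attach pages they do not own and therefore must not let the page's pfmemalloc state taint the skb. A hedged usage sketch:

#include <linux/skbuff.h>

/*
 * Sketch: attach a caller-supplied page (e.g. a pinned user page in a
 * zerocopy path) as fragment i without pfmemalloc propagation.
 */
static void attach_foreign_page(struct sk_buff *skb, int i,
				struct page *page, int off, int len)
{
	skb_fill_page_desc_noacc(skb, i, page, off, len);
	/* Length/truesize accounting on the skb would follow here. */
}
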
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 81b9686a20799..2fb8232cff1d5 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -20,6 +20,9 @@ struct itimerspec64 {
+ struct timespec64 it_value;
+ };
+
++/* Parameters used to convert the timespec values: */
++#define PSEC_PER_NSEC 1000L
++
+ /* Located here for timespec[64]_valid_strict */
+ #define TIME64_MAX ((s64)~((u64)1 << 63))
+ #define TIME64_MIN (-TIME64_MAX - 1)
+diff --git a/include/linux/udp.h b/include/linux/udp.h
+index 254a2654400f8..e96da4157d04d 100644
+--- a/include/linux/udp.h
++++ b/include/linux/udp.h
+@@ -70,6 +70,7 @@ struct udp_sock {
+ * For encapsulation sockets.
+ */
+ int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
++ void (*encap_err_rcv)(struct sock *sk, struct sk_buff *skb, unsigned int udp_offset);
+ int (*encap_err_lookup)(struct sock *sk, struct sk_buff *skb);
+ void (*encap_destroy)(struct sock *sk);
+
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index cb904d356e31e..3b816ae8b1f3b 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -161,8 +161,9 @@ struct slave {
+ struct net_device *dev; /* first - useful for panic debug */
+ struct bonding *bond; /* our master */
+ int delay;
+- /* all three in jiffies */
++ /* all 4 in jiffies */
+ unsigned long last_link_up;
++ unsigned long last_tx;
+ unsigned long last_rx;
+ unsigned long target_last_arp_rx[BOND_MAX_ARP_TARGETS];
+ s8 link; /* one of BOND_LINK_XXXX */
+@@ -539,6 +540,16 @@ static inline unsigned long slave_last_rx(struct bonding *bond,
+ return slave->last_rx;
+ }
+
++static inline void slave_update_last_tx(struct slave *slave)
++{
++ WRITE_ONCE(slave->last_tx, jiffies);
++}
++
++static inline unsigned long slave_last_tx(struct slave *slave)
++{
++ return READ_ONCE(slave->last_tx);
++}
++
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ static inline netdev_tx_t bond_netpoll_send_skb(const struct slave *slave,
+ struct sk_buff *skb)
+diff --git a/include/net/udp_tunnel.h b/include/net/udp_tunnel.h
+index afc7ce713657b..72394f441dad8 100644
+--- a/include/net/udp_tunnel.h
++++ b/include/net/udp_tunnel.h
+@@ -67,6 +67,9 @@ static inline int udp_sock_create(struct net *net,
+ typedef int (*udp_tunnel_encap_rcv_t)(struct sock *sk, struct sk_buff *skb);
+ typedef int (*udp_tunnel_encap_err_lookup_t)(struct sock *sk,
+ struct sk_buff *skb);
++typedef void (*udp_tunnel_encap_err_rcv_t)(struct sock *sk,
++ struct sk_buff *skb,
++ unsigned int udp_offset);
+ typedef void (*udp_tunnel_encap_destroy_t)(struct sock *sk);
+ typedef struct sk_buff *(*udp_tunnel_gro_receive_t)(struct sock *sk,
+ struct list_head *head,
+@@ -80,6 +83,7 @@ struct udp_tunnel_sock_cfg {
+ __u8 encap_type;
+ udp_tunnel_encap_rcv_t encap_rcv;
+ udp_tunnel_encap_err_lookup_t encap_err_lookup;
++ udp_tunnel_encap_err_rcv_t encap_err_rcv;
+ udp_tunnel_encap_destroy_t encap_destroy;
+ udp_tunnel_gro_receive_t gro_receive;
+ udp_tunnel_gro_complete_t gro_complete;
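
The udp_tunnel.h additions above let a tunnel driver receive ICMP errors seen on its encap socket via the new encap_err_rcv callback. Registration goes through the existing udp_tunnel_sock_cfg; a hedged sketch with hypothetical handlers:

#include <net/udp_tunnel.h>

static int my_encap_rcv(struct sock *sk, struct sk_buff *skb);	/* hypothetical */
static void my_encap_err_rcv(struct sock *sk, struct sk_buff *skb,
			     unsigned int udp_offset);		/* hypothetical */

static void my_tunnel_setup(struct net *net, struct socket *sock)
{
	struct udp_tunnel_sock_cfg cfg = {
		.encap_type	= 1,			/* driver-specific */
		.encap_rcv	= my_encap_rcv,
		.encap_err_rcv	= my_encap_err_rcv,	/* new in this patch */
	};

	setup_udp_tunnel_sock(net, sock, &cfg);
}
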
+diff --git a/include/soc/at91/sama7-ddr.h b/include/soc/at91/sama7-ddr.h
+index 9e17247474fa9..6ce3bd22f6c69 100644
+--- a/include/soc/at91/sama7-ddr.h
++++ b/include/soc/at91/sama7-ddr.h
+@@ -38,6 +38,14 @@
+ #define DDR3PHY_DSGCR_ODTPDD_ODT0 (1 << 20) /* ODT[0] Power Down Driver */
+
+ #define DDR3PHY_ZQ0SR0 (0x188) /* ZQ status register 0 */
++#define DDR3PHY_ZQ0SR0_PDO_OFF (0) /* Pull-down output impedance select offset */
++#define DDR3PHY_ZQ0SR0_PUO_OFF (5) /* Pull-up output impedance select offset */
++#define DDR3PHY_ZQ0SR0_PDODT_OFF (10) /* Pull-down on-die termination impedance select offset */
++#define DDR3PHY_ZQ0SRO_PUODT_OFF (15) /* Pull-up on-die termination impedance select offset */
++
++#define DDR3PHY_DX0DLLCR (0x1CC) /* DDR3PHY DATX8 DLL Control Register */
++#define DDR3PHY_DX1DLLCR (0x20C) /* DDR3PHY DATX8 DLL Control Register */
++#define DDR3PHY_DXDLLCR_DLLDIS (1 << 31) /* DLL Disable */
+
+ /* UDDRC */
+ #define UDDRC_STAT (0x04) /* UDDRC Operating Mode Status Register */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index cd155b7e1346d..48833d0edd089 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -4878,6 +4878,10 @@ static int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
+ if (!req->file->f_op->uring_cmd)
+ return -EOPNOTSUPP;
+
++ ret = security_uring_cmd(ioucmd);
++ if (ret)
++ return ret;
++
+ if (ctx->flags & IORING_SETUP_SQE128)
+ issue_flags |= IO_URING_F_SQE128;
+ if (ctx->flags & IORING_SETUP_CQE32)
+@@ -8260,6 +8264,7 @@ static void io_queue_async(struct io_kiocb *req, int ret)
+
+ switch (io_arm_poll_handler(req, 0)) {
+ case IO_APOLL_READY:
++ io_kbuf_recycle(req, 0);
+ io_req_task_queue(req);
+ break;
+ case IO_APOLL_ABORTED:
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index ce95aee05e8ae..e702ca368539a 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2346,6 +2346,47 @@ int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
+ }
+ EXPORT_SYMBOL_GPL(task_cgroup_path);
+
++/**
++ * cgroup_attach_lock - Lock for ->attach()
++ * @lock_threadgroup: whether to down_write cgroup_threadgroup_rwsem
++ *
++ * cgroup migration sometimes needs to stabilize threadgroups against forks and
++ * exits by write-locking cgroup_threadgroup_rwsem. However, some ->attach()
++ * implementations (e.g. cpuset), also need to disable CPU hotplug.
++ * Unfortunately, letting ->attach() operations acquire cpus_read_lock() can
++ * lead to deadlocks.
++ *
++ * Bringing up a CPU may involve creating and destroying tasks which requires
++ * read-locking threadgroup_rwsem, so threadgroup_rwsem nests inside
++ * cpus_read_lock(). If we call an ->attach() which acquires the cpus lock while
++ * write-locking threadgroup_rwsem, the locking order is reversed and we end up
++ * waiting for an on-going CPU hotplug operation which in turn is waiting for
++ * the threadgroup_rwsem to be released to create new tasks. For more details:
++ *
++ * http://lkml.kernel.org/r/20220711174629.uehfmqegcwn2lqzu@wubuntu
++ *
++ * Resolve the situation by always acquiring cpus_read_lock() before optionally
++ * write-locking cgroup_threadgroup_rwsem. This allows ->attach() to assume that
++ * CPU hotplug is disabled on entry.
++ */
++static void cgroup_attach_lock(bool lock_threadgroup)
++{
++ cpus_read_lock();
++ if (lock_threadgroup)
++ percpu_down_write(&cgroup_threadgroup_rwsem);
++}
++
++/**
++ * cgroup_attach_unlock - Undo cgroup_attach_lock()
++ * @lock_threadgroup: whether to up_write cgroup_threadgroup_rwsem
++ */
++static void cgroup_attach_unlock(bool lock_threadgroup)
++{
++ if (lock_threadgroup)
++ percpu_up_write(&cgroup_threadgroup_rwsem);
++ cpus_read_unlock();
++}
++
+ /**
+ * cgroup_migrate_add_task - add a migration target task to a migration context
+ * @task: target task
+@@ -2822,8 +2863,7 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
+ }
+
+ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+- bool *locked)
+- __acquires(&cgroup_threadgroup_rwsem)
++ bool *threadgroup_locked)
+ {
+ struct task_struct *tsk;
+ pid_t pid;
+@@ -2840,12 +2880,8 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+ * Therefore, we can skip the global lock.
+ */
+ lockdep_assert_held(&cgroup_mutex);
+- if (pid || threadgroup) {
+- percpu_down_write(&cgroup_threadgroup_rwsem);
+- *locked = true;
+- } else {
+- *locked = false;
+- }
++ *threadgroup_locked = pid || threadgroup;
++ cgroup_attach_lock(*threadgroup_locked);
+
+ rcu_read_lock();
+ if (pid) {
+@@ -2876,17 +2912,14 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
+ goto out_unlock_rcu;
+
+ out_unlock_threadgroup:
+- if (*locked) {
+- percpu_up_write(&cgroup_threadgroup_rwsem);
+- *locked = false;
+- }
++ cgroup_attach_unlock(*threadgroup_locked);
++ *threadgroup_locked = false;
+ out_unlock_rcu:
+ rcu_read_unlock();
+ return tsk;
+ }
+
+-void cgroup_procs_write_finish(struct task_struct *task, bool locked)
+- __releases(&cgroup_threadgroup_rwsem)
++void cgroup_procs_write_finish(struct task_struct *task, bool threadgroup_locked)
+ {
+ struct cgroup_subsys *ss;
+ int ssid;
+@@ -2894,8 +2927,8 @@ void cgroup_procs_write_finish(struct task_struct *task, bool locked)
+ /* release reference from cgroup_procs_write_start() */
+ put_task_struct(task);
+
+- if (locked)
+- percpu_up_write(&cgroup_threadgroup_rwsem);
++ cgroup_attach_unlock(threadgroup_locked);
++
+ for_each_subsys(ss, ssid)
+ if (ss->post_attach)
+ ss->post_attach();
+@@ -2950,12 +2983,11 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ struct cgroup_subsys_state *d_css;
+ struct cgroup *dsct;
+ struct css_set *src_cset;
++ bool has_tasks;
+ int ret;
+
+ lockdep_assert_held(&cgroup_mutex);
+
+- percpu_down_write(&cgroup_threadgroup_rwsem);
+-
+ /* look up all csses currently attached to @cgrp's subtree */
+ spin_lock_irq(&css_set_lock);
+ cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp) {
+@@ -2966,6 +2998,15 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ }
+ spin_unlock_irq(&css_set_lock);
+
++ /*
++ * We need to write-lock threadgroup_rwsem while migrating tasks.
++ * However, if there are no source csets for @cgrp, changing its
++ * controllers won't produce any task migrations and the
++ * write-locking can be skipped safely.
++ */
++ has_tasks = !list_empty(&mgctx.preloaded_src_csets);
++ cgroup_attach_lock(has_tasks);
++
+ /* NULL dst indicates self on default hierarchy */
+ ret = cgroup_migrate_prepare_dst(&mgctx);
+ if (ret)
+@@ -2985,7 +3026,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ ret = cgroup_migrate_execute(&mgctx);
+ out_finish:
+ cgroup_migrate_finish(&mgctx);
+- percpu_up_write(&cgroup_threadgroup_rwsem);
++ cgroup_attach_unlock(has_tasks);
+ return ret;
+ }
+
+@@ -4933,13 +4974,13 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
+ struct task_struct *task;
+ const struct cred *saved_cred;
+ ssize_t ret;
+- bool locked;
++ bool threadgroup_locked;
+
+ dst_cgrp = cgroup_kn_lock_live(of->kn, false);
+ if (!dst_cgrp)
+ return -ENODEV;
+
+- task = cgroup_procs_write_start(buf, threadgroup, &locked);
++ task = cgroup_procs_write_start(buf, threadgroup, &threadgroup_locked);
+ ret = PTR_ERR_OR_ZERO(task);
+ if (ret)
+ goto out_unlock;
+@@ -4965,7 +5006,7 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
+ ret = cgroup_attach_task(dst_cgrp, task, threadgroup);
+
+ out_finish:
+- cgroup_procs_write_finish(task, locked);
++ cgroup_procs_write_finish(task, threadgroup_locked);
+ out_unlock:
+ cgroup_kn_unlock(of->kn);
+
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 58aadfda9b8b3..1f3a55297f39d 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2289,7 +2289,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ cgroup_taskset_first(tset, &css);
+ cs = css_cs(css);
+
+- cpus_read_lock();
++ lockdep_assert_cpus_held(); /* see cgroup_attach_lock() */
+ percpu_down_write(&cpuset_rwsem);
+
+ guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+@@ -2343,7 +2343,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ wake_up(&cpuset_attach_wq);
+
+ percpu_up_write(&cpuset_rwsem);
+- cpus_read_unlock();
+ }
+
+ /* The various types of files and directories in a cpuset file system */
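The cgroup and cpuset hunks above enforce a single lock-ordering rule: cpus_read_lock() is always taken before cgroup_threadgroup_rwsem, so ->attach() handlers may assume CPU hotplug is already disabled instead of taking the lock themselves. A minimal sketch of the resulting call pattern (kernel-internal APIs as named in the hunks; error handling omitted, example_attach is illustrative):

	/* Migration path: the hotplug read-lock always comes first. */
	cgroup_attach_lock(threadgroup_locked);	  /* cpus_read_lock(); maybe down_write */
	ret = cgroup_migrate_execute(&mgctx);	  /* may invoke ->attach() */
	cgroup_attach_unlock(threadgroup_locked); /* maybe up_write; cpus_read_unlock() */

	/* Inside an ->attach() handler: assert rather than acquire. */
	static void example_attach(struct cgroup_taskset *tset)
	{
		lockdep_assert_cpus_held();	/* guaranteed by cgroup_attach_lock() */
		percpu_down_write(&cpuset_rwsem);
		/* ... move tasks ... */
		percpu_up_write(&cpuset_rwsem);
	}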
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 5830dce6081b3..ce34d50f7a9bb 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -464,7 +464,10 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
+ }
+ }
+
+-#define slot_addr(start, idx) ((start) + ((idx) << IO_TLB_SHIFT))
++static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
++{
++ return start + (idx << IO_TLB_SHIFT);
++}
+
+ /*
+ * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
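Turning slot_addr() from a macro into an inline with phys_addr_t parameters forces the shift to happen in 64-bit arithmetic; with the macro, a caller passing a 32-bit index computed idx << IO_TLB_SHIFT at int width first and could truncate before the addition. A small userspace demonstration of the difference (IO_TLB_SHIFT is 11 in the kernel; the values here are only illustrative):

	#include <stdio.h>
	#include <stdint.h>

	#define IO_TLB_SHIFT 11
	#define slot_addr_macro(start, idx) ((start) + ((idx) << IO_TLB_SHIFT))

	static inline uint64_t slot_addr_inline(uint64_t start, uint64_t idx)
	{
		return start + (idx << IO_TLB_SHIFT);	/* idx already 64-bit here */
	}

	int main(void)
	{
		uint64_t start = 0x100000000ULL;
		unsigned int idx = 1u << 21;	/* idx << 11 needs 33 bits */

		/* The macro shifts in 32-bit width and wraps to 0. */
		printf("macro:  %#llx\n",
		       (unsigned long long)slot_addr_macro(start, idx));
		printf("inline: %#llx\n",
		       (unsigned long long)slot_addr_inline(start, idx));
		return 0;
	}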
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 9d44f2d46c696..d587c85f35b1e 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1225,6 +1225,7 @@ void mmput_async(struct mm_struct *mm)
+ schedule_work(&mm->async_put_work);
+ }
+ }
++EXPORT_SYMBOL_GPL(mmput_async);
+ #endif
+
+ /**
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 08350e35aba24..ca9d834d0b843 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1562,6 +1562,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ /* Ensure it is not in reserved area nor out of text */
+ if (!(core_kernel_text((unsigned long) p->addr) ||
+ is_module_text_address((unsigned long) p->addr)) ||
++ in_gate_area_no_mm((unsigned long) p->addr) ||
+ within_kprobe_blacklist((unsigned long) p->addr) ||
+ jump_label_text_reserved(p->addr, p->addr) ||
+ static_call_text_reserved(p->addr, p->addr) ||
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index bb3d63bdf4ae8..667876da8382d 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -416,7 +416,7 @@ void update_sched_domain_debugfs(void)
+ char buf[32];
+
+ snprintf(buf, sizeof(buf), "cpu%d", cpu);
+- debugfs_remove(debugfs_lookup(buf, sd_dentry));
++ debugfs_lookup_and_remove(buf, sd_dentry);
+ d_cpu = debugfs_create_dir(buf, sd_dentry);
+
+ i = 0;
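debugfs_lookup() returns the dentry with an extra reference that debugfs_remove() never drops, so the old lookup-then-remove pattern leaked a reference on every rebuild of the per-CPU directories. debugfs_lookup_and_remove() bundles the lookup, removal, and final dput(). Roughly (a sketch of the helper's effect, not its exact implementation):

	static void lookup_and_remove_sketch(const char *name, struct dentry *parent)
	{
		struct dentry *de = debugfs_lookup(name, parent);

		if (de) {
			debugfs_remove(de);
			dput(de);	/* drop the reference debugfs_lookup() took */
		}
	}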
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index cb866c3141af2..918730d749325 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -142,7 +142,8 @@ static bool check_user_trigger(struct trace_event_file *file)
+ {
+ struct event_trigger_data *data;
+
+- list_for_each_entry_rcu(data, &file->triggers, list) {
++ list_for_each_entry_rcu(data, &file->triggers, list,
++ lockdep_is_held(&event_mutex)) {
+ if (data->flags & EVENT_TRIGGER_FL_PROBE)
+ continue;
+ return true;
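check_user_trigger() walks an RCU-protected list while holding event_mutex rather than rcu_read_lock(), which trips CONFIG_PROVE_RCU_LIST. The optional fourth argument of list_for_each_entry_rcu() states the alternative protection. The same pattern on a hypothetical list (struct and variable names invented for illustration; kernel context assumed):

	struct item {
		struct list_head list;
		int val;
	};

	static LIST_HEAD(items);
	static DEFINE_MUTEX(items_mutex);

	/* Safe: items_mutex excludes updaters; the cond argument tells the
	 * RCU list debugging code so, instead of it warning about a missing
	 * rcu_read_lock(). */
	static bool items_nonempty(void)
	{
		struct item *it;

		lockdep_assert_held(&items_mutex);
		list_for_each_entry_rcu(it, &items, list,
					lockdep_is_held(&items_mutex))
			return true;
		return false;
	}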
+diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
+index 95b58bd757ce4..1e130da1b742c 100644
+--- a/kernel/trace/trace_preemptirq.c
++++ b/kernel/trace/trace_preemptirq.c
+@@ -95,14 +95,14 @@ __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
+ }
+
+ lockdep_hardirqs_on_prepare();
+- lockdep_hardirqs_on(CALLER_ADDR0);
++ lockdep_hardirqs_on(caller_addr);
+ }
+ EXPORT_SYMBOL(trace_hardirqs_on_caller);
+ NOKPROBE_SYMBOL(trace_hardirqs_on_caller);
+
+ __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
+ {
+- lockdep_hardirqs_off(CALLER_ADDR0);
++ lockdep_hardirqs_off(caller_addr);
+
+ if (!this_cpu_read(tracing_irq_cpu)) {
+ this_cpu_write(tracing_irq_cpu, 1);
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index a182f5ddaf68b..acd7cbb82e160 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1132,7 +1132,7 @@ EXPORT_SYMBOL(kmemleak_no_scan);
+ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
+ gfp_t gfp)
+ {
+- if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++ if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_alloc(__va(phys), size, min_count, gfp);
+ }
+ EXPORT_SYMBOL(kmemleak_alloc_phys);
+@@ -1146,7 +1146,7 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
+ */
+ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
+ {
+- if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++ if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_free_part(__va(phys), size);
+ }
+ EXPORT_SYMBOL(kmemleak_free_part_phys);
+@@ -1158,7 +1158,7 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
+ */
+ void __ref kmemleak_not_leak_phys(phys_addr_t phys)
+ {
+- if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++ if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_not_leak(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+@@ -1170,7 +1170,7 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+ */
+ void __ref kmemleak_ignore_phys(phys_addr_t phys)
+ {
+- if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
++ if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_ignore(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_ignore_phys);
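All four _phys wrappers shared the same gate: the previous lower bound against min_low_pfn could reject valid lowmem addresses on configurations where min_low_pfn lies above them, so objects registered through the _phys API were silently never tracked. The new predicate rejects only what __va() genuinely cannot map, i.e. highmem pages when CONFIG_HIGHMEM is set:

	/* Sketch of the new gate: track the object iff the linear mapping
	 * covers it, which is everything unless CONFIG_HIGHMEM is enabled. */
	static bool phys_obj_mappable(phys_addr_t phys)
	{
		return !IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn;
	}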
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index ff47790366497..f20f4373ff408 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -384,6 +384,7 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_
+ /* - Bridged-and-DNAT'ed traffic doesn't
+ * require ip_forwarding. */
+ if (rt->dst.dev == dev) {
++ skb_dst_drop(skb);
+ skb_dst_set(skb, &rt->dst);
+ goto bridged_dnat;
+ }
+@@ -413,6 +414,7 @@ bridged_dnat:
+ kfree_skb(skb);
+ return 0;
+ }
++ skb_dst_drop(skb);
+ skb_dst_set_noref(skb, &rt->dst);
+ }
+
+diff --git a/net/bridge/br_netfilter_ipv6.c b/net/bridge/br_netfilter_ipv6.c
+index e4e0c836c3f51..6b07f30675bb0 100644
+--- a/net/bridge/br_netfilter_ipv6.c
++++ b/net/bridge/br_netfilter_ipv6.c
+@@ -197,6 +197,7 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ kfree_skb(skb);
+ return 0;
+ }
++ skb_dst_drop(skb);
+ skb_dst_set_noref(skb, &rt->dst);
+ }
+
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 50f4faeea76cc..48e82438acb02 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -675,7 +675,7 @@ int __zerocopy_sg_from_iter(struct sock *sk, struct sk_buff *skb,
+ page_ref_sub(last_head, refs);
+ refs = 0;
+ }
+- skb_fill_page_desc(skb, frag++, head, start, size);
++ skb_fill_page_desc_noacc(skb, frag++, head, start, size);
+ }
+ if (refs)
+ page_ref_sub(last_head, refs);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index bebf58464d667..4b2b07a9422cf 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4179,9 +4179,8 @@ normal:
+ SKB_GSO_CB(nskb)->csum_start =
+ skb_headroom(nskb) + doffset;
+ } else {
+- skb_copy_bits(head_skb, offset,
+- skb_put(nskb, len),
+- len);
++ if (skb_copy_bits(head_skb, offset, skb_put(nskb, len), len))
++ goto err;
+ }
+ continue;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 3d446773ff2a5..ab03977b65781 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1015,7 +1015,7 @@ new_segment:
+ skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ } else {
+ get_page(page);
+- skb_fill_page_desc(skb, i, page, offset, copy);
++ skb_fill_page_desc_noacc(skb, i, page, offset, copy);
+ }
+
+ if (!(flags & MSG_NO_SHARED_FRAGS))
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index e5435156e545d..c30696eafc361 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2514,6 +2514,21 @@ static inline bool tcp_may_undo(const struct tcp_sock *tp)
+ return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp));
+ }
+
++static bool tcp_is_non_sack_preventing_reopen(struct sock *sk)
++{
++ struct tcp_sock *tp = tcp_sk(sk);
++
++ if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
++ /* Hold old state until something *above* high_seq
++ * is ACKed. For Reno it is MUST to prevent false
++ * fast retransmits (RFC2582). SACK TCP is safe. */
++ if (!tcp_any_retrans_done(sk))
++ tp->retrans_stamp = 0;
++ return true;
++ }
++ return false;
++}
++
+ /* People celebrate: "We love our President!" */
+ static bool tcp_try_undo_recovery(struct sock *sk)
+ {
+@@ -2536,14 +2551,8 @@ static bool tcp_try_undo_recovery(struct sock *sk)
+ } else if (tp->rack.reo_wnd_persist) {
+ tp->rack.reo_wnd_persist--;
+ }
+- if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
+- /* Hold old state until something *above* high_seq
+- * is ACKed. For Reno it is MUST to prevent false
+- * fast retransmits (RFC2582). SACK TCP is safe. */
+- if (!tcp_any_retrans_done(sk))
+- tp->retrans_stamp = 0;
++ if (tcp_is_non_sack_preventing_reopen(sk))
+ return true;
+- }
+ tcp_set_ca_state(sk, TCP_CA_Open);
+ tp->is_sack_reneg = 0;
+ return false;
+@@ -2579,6 +2588,8 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
+ NET_INC_STATS(sock_net(sk),
+ LINUX_MIB_TCPSPURIOUSRTOS);
+ inet_csk(sk)->icsk_retransmits = 0;
++ if (tcp_is_non_sack_preventing_reopen(sk))
++ return true;
+ if (frto_undo || tcp_is_sack(tp)) {
+ tcp_set_ca_state(sk, TCP_CA_Open);
+ tp->is_sack_reneg = 0;
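Hoisting the Reno guard into tcp_is_non_sack_preventing_reopen() lets tcp_try_undo_loss() apply the same protection as tcp_try_undo_recovery(): without SACK, the old state must be held until something above high_seq is cumulatively ACKed, or a spurious-RTO undo can re-trigger false fast retransmits. Both call sites now reduce to the same shape:

	if (tcp_is_non_sack_preventing_reopen(sk))
		return true;	/* hold state until above high_seq is ACKed */
	tcp_set_ca_state(sk, TCP_CA_Open);
	tp->is_sack_reneg = 0;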
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index aa9f2ec3dc468..01e1d36bdf135 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -781,6 +781,8 @@ int __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable)
+ */
+ if (tunnel) {
+ /* ...not for tunnels though: we don't have a sending socket */
++ if (udp_sk(sk)->encap_err_rcv)
++ udp_sk(sk)->encap_err_rcv(sk, skb, iph->ihl << 2);
+ goto out;
+ }
+ if (!inet->recverr) {
+diff --git a/net/ipv4/udp_tunnel_core.c b/net/ipv4/udp_tunnel_core.c
+index 8efaf8c3fe2a9..8242c8947340e 100644
+--- a/net/ipv4/udp_tunnel_core.c
++++ b/net/ipv4/udp_tunnel_core.c
+@@ -72,6 +72,7 @@ void setup_udp_tunnel_sock(struct net *net, struct socket *sock,
+
+ udp_sk(sk)->encap_type = cfg->encap_type;
+ udp_sk(sk)->encap_rcv = cfg->encap_rcv;
++ udp_sk(sk)->encap_err_rcv = cfg->encap_err_rcv;
+ udp_sk(sk)->encap_err_lookup = cfg->encap_err_lookup;
+ udp_sk(sk)->encap_destroy = cfg->encap_destroy;
+ udp_sk(sk)->gro_receive = cfg->gro_receive;
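Together with the __udp4_lib_err()/__udp6_lib_err() hunks, this gives UDP-encapsulated protocols a way to observe ICMP errors, which previously went nowhere because a tunnel socket has no application to deliver them to. A hypothetical tunnel driver would wire the callback up as below (my_tunnel_* names are invented; only the cfg fields and signatures shown in the hunks are assumed):

	static int my_tunnel_rcv(struct sock *sk, struct sk_buff *skb)
	{
		/* normal encapsulated-datagram path */
		return 0;
	}

	static void my_tunnel_err_rcv(struct sock *sk, struct sk_buff *skb,
				      unsigned int udp_offset)
	{
		/* inspect icmp_hdr(skb) / icmp6_hdr(skb), adjust path MTU, ... */
	}

	static void my_tunnel_open(struct net *net, struct socket *sock)
	{
		struct udp_tunnel_sock_cfg cfg = {
			.encap_type    = 1,	/* protocol-specific constant */
			.encap_rcv     = my_tunnel_rcv,
			.encap_err_rcv = my_tunnel_err_rcv,
		};

		setup_udp_tunnel_sock(net, sock, &cfg);
	}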
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index b738eb7e1cae8..04cf06866e765 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3557,11 +3557,15 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
+ fallthrough;
+ case NETDEV_UP:
+ case NETDEV_CHANGE:
+- if (dev->flags & IFF_SLAVE)
++ if (idev && idev->cnf.disable_ipv6)
+ break;
+
+- if (idev && idev->cnf.disable_ipv6)
++ if (dev->flags & IFF_SLAVE) {
++ if (event == NETDEV_UP && !IS_ERR_OR_NULL(idev) &&
++ dev->flags & IFF_UP && dev->flags & IFF_MULTICAST)
++ ipv6_mc_up(idev);
+ break;
++ }
+
+ if (event == NETDEV_UP) {
+ /* restore routes for permanent addresses */
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index 73aaabf0e9665..0b0e34ddc64e0 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -191,6 +191,11 @@ static int seg6_genl_sethmac(struct sk_buff *skb, struct genl_info *info)
+ goto out_unlock;
+ }
+
++ if (slen > nla_len(info->attrs[SEG6_ATTR_SECRET])) {
++ err = -EINVAL;
++ goto out_unlock;
++ }
++
+ if (hinfo) {
+ err = seg6_hmac_info_del(net, hmackeyid);
+ if (err)
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index e2f2e087a7531..40074bc7274ea 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -616,8 +616,11 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ }
+
+ /* Tunnels don't have an application socket: don't pass errors back */
+- if (tunnel)
++ if (tunnel) {
++ if (udp_sk(sk)->encap_err_rcv)
++ udp_sk(sk)->encap_err_rcv(sk, skb, offset);
+ goto out;
++ }
+
+ if (!np->recverr) {
+ if (!harderr || sk->sk_state != TCP_ESTABLISHED)
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index 1796c456ac98b..992decbcaa5c1 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -194,8 +194,9 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+
+ /* dcc_ip can be the internal OR external (NAT'ed) IP */
+ tuple = &ct->tuplehash[dir].tuple;
+- if (tuple->src.u3.ip != dcc_ip &&
+- tuple->dst.u3.ip != dcc_ip) {
++ if ((tuple->src.u3.ip != dcc_ip &&
++ ct->tuplehash[!dir].tuple.dst.u3.ip != dcc_ip) ||
++ dcc_port == 0) {
+ net_warn_ratelimited("Forged DCC command from %pI4: %pI4:%u\n",
+ &tuple->src.u3.ip,
+ &dcc_ip, dcc_port);
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index a63b51dceaf2c..a634c72b1ffcf 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -655,6 +655,37 @@ static bool tcp_in_window(struct nf_conn *ct,
+ tn->tcp_be_liberal)
+ res = true;
+ if (!res) {
++ bool seq_ok = before(seq, sender->td_maxend + 1);
++
++ if (!seq_ok) {
++ u32 overshot = end - sender->td_maxend + 1;
++ bool ack_ok;
++
++ ack_ok = after(sack, receiver->td_end - MAXACKWINDOW(sender) - 1);
++
++ if (in_recv_win &&
++ ack_ok &&
++ overshot <= receiver->td_maxwin &&
++ before(sack, receiver->td_end + 1)) {
++ /* Work around TCPs that send more bytes than allowed by
++ * the receive window.
++ *
++ * If the (marked as invalid) packet is allowed to pass by
++ * the ruleset and the peer acks this data, then it's possible
++ * that all future packets will trigger the 'ACK is over upper bound' check.
++ *
++ * Thus, if only the sequence check fails, update td_end so that a
++ * possible ACK for this data can update the internal state.
++ */
++ sender->td_end = end;
++ sender->flags |= IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED;
++
++ nf_ct_l4proto_log_invalid(skb, ct, hook_state,
++ "%u bytes more than expected", overshot);
++ return res;
++ }
++ }
++
+ nf_ct_l4proto_log_invalid(skb, ct, hook_state,
+ "%s",
+ before(seq, sender->td_maxend + 1) ?
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index bc690238a3c56..848cc81d69926 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2166,8 +2166,10 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ chain->flags |= NFT_CHAIN_BASE | flags;
+ basechain->policy = NF_ACCEPT;
+ if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
+- !nft_chain_offload_support(basechain))
++ !nft_chain_offload_support(basechain)) {
++ list_splice_init(&basechain->hook_list, &hook->list);
+ return -EOPNOTSUPP;
++ }
+
+ flow_block_init(&basechain->flow_block);
+
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 571436064cd6f..62c70709d7980 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -982,6 +982,7 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
+ /*
+ * peer_event.c
+ */
++void rxrpc_encap_err_rcv(struct sock *sk, struct sk_buff *skb, unsigned int udp_offset);
+ void rxrpc_error_report(struct sock *);
+ void rxrpc_peer_keepalive_worker(struct work_struct *);
+
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 96ecb7356c0fe..79bb02eb67b2b 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -137,6 +137,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+
+ tuncfg.encap_type = UDP_ENCAP_RXRPC;
+ tuncfg.encap_rcv = rxrpc_input_packet;
++ tuncfg.encap_err_rcv = rxrpc_encap_err_rcv;
+ tuncfg.sk_user_data = local;
+ setup_udp_tunnel_sock(net, local->socket, &tuncfg);
+
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index be032850ae8ca..32561e9567fe3 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -16,22 +16,105 @@
+ #include <net/sock.h>
+ #include <net/af_rxrpc.h>
+ #include <net/ip.h>
++#include <net/icmp.h>
+ #include "ar-internal.h"
+
++static void rxrpc_adjust_mtu(struct rxrpc_peer *, unsigned int);
+ static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *);
+ static void rxrpc_distribute_error(struct rxrpc_peer *, int,
+ enum rxrpc_call_completion);
+
+ /*
+- * Find the peer associated with an ICMP packet.
++ * Find the peer associated with an ICMPv4 packet.
+ */
+ static struct rxrpc_peer *rxrpc_lookup_peer_icmp_rcu(struct rxrpc_local *local,
+- const struct sk_buff *skb,
++ struct sk_buff *skb,
++ unsigned int udp_offset,
++ unsigned int *info,
+ struct sockaddr_rxrpc *srx)
+ {
+- struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
++ struct iphdr *ip, *ip0 = ip_hdr(skb);
++ struct icmphdr *icmp = icmp_hdr(skb);
++ struct udphdr *udp = (struct udphdr *)(skb->data + udp_offset);
+
+- _enter("");
++ _enter("%u,%u,%u", ip0->protocol, icmp->type, icmp->code);
++
++ switch (icmp->type) {
++ case ICMP_DEST_UNREACH:
++ *info = ntohs(icmp->un.frag.mtu);
++ fallthrough;
++ case ICMP_TIME_EXCEEDED:
++ case ICMP_PARAMETERPROB:
++ ip = (struct iphdr *)((void *)icmp + 8);
++ break;
++ default:
++ return NULL;
++ }
++
++ memset(srx, 0, sizeof(*srx));
++ srx->transport_type = local->srx.transport_type;
++ srx->transport_len = local->srx.transport_len;
++ srx->transport.family = local->srx.transport.family;
++
++ /* Can we see an ICMP4 packet on an ICMP6 listening socket? and vice
++ * versa?
++ */
++ switch (srx->transport.family) {
++ case AF_INET:
++ srx->transport_len = sizeof(srx->transport.sin);
++ srx->transport.family = AF_INET;
++ srx->transport.sin.sin_port = udp->dest;
++ memcpy(&srx->transport.sin.sin_addr, &ip->daddr,
++ sizeof(struct in_addr));
++ break;
++
++#ifdef CONFIG_AF_RXRPC_IPV6
++ case AF_INET6:
++ srx->transport_len = sizeof(srx->transport.sin);
++ srx->transport.family = AF_INET;
++ srx->transport.sin.sin_port = udp->dest;
++ memcpy(&srx->transport.sin.sin_addr, &ip->daddr,
++ sizeof(struct in_addr));
++ break;
++#endif
++
++ default:
++ WARN_ON_ONCE(1);
++ return NULL;
++ }
++
++ _net("ICMP {%pISp}", &srx->transport);
++ return rxrpc_lookup_peer_rcu(local, srx);
++}
++
++#ifdef CONFIG_AF_RXRPC_IPV6
++/*
++ * Find the peer associated with an ICMPv6 packet.
++ */
++static struct rxrpc_peer *rxrpc_lookup_peer_icmp6_rcu(struct rxrpc_local *local,
++ struct sk_buff *skb,
++ unsigned int udp_offset,
++ unsigned int *info,
++ struct sockaddr_rxrpc *srx)
++{
++ struct icmp6hdr *icmp = icmp6_hdr(skb);
++ struct ipv6hdr *ip, *ip0 = ipv6_hdr(skb);
++ struct udphdr *udp = (struct udphdr *)(skb->data + udp_offset);
++
++ _enter("%u,%u,%u", ip0->nexthdr, icmp->icmp6_type, icmp->icmp6_code);
++
++ switch (icmp->icmp6_type) {
++ case ICMPV6_DEST_UNREACH:
++ *info = ntohl(icmp->icmp6_mtu);
++ fallthrough;
++ case ICMPV6_PKT_TOOBIG:
++ case ICMPV6_TIME_EXCEED:
++ case ICMPV6_PARAMPROB:
++ ip = (struct ipv6hdr *)((void *)icmp + 8);
++ break;
++ default:
++ return NULL;
++ }
+
+ memset(srx, 0, sizeof(*srx));
+ srx->transport_type = local->srx.transport_type;
+@@ -41,6 +124,165 @@ static struct rxrpc_peer *rxrpc_lookup_peer_icmp_rcu(struct rxrpc_local *local,
+ /* Can we see an ICMP4 packet on an ICMP6 listening socket? and vice
+ * versa?
+ */
++ switch (srx->transport.family) {
++ case AF_INET:
++ _net("Rx ICMP6 on v4 sock");
++ srx->transport_len = sizeof(srx->transport.sin);
++ srx->transport.family = AF_INET;
++ srx->transport.sin.sin_port = udp->dest;
++ memcpy(&srx->transport.sin.sin_addr,
++ &ip->daddr.s6_addr32[3], sizeof(struct in_addr));
++ break;
++ case AF_INET6:
++ _net("Rx ICMP6");
++ srx->transport.sin.sin_port = udp->dest;
++ memcpy(&srx->transport.sin6.sin6_addr, &ip->daddr,
++ sizeof(struct in6_addr));
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return NULL;
++ }
++
++ _net("ICMP {%pISp}", &srx->transport);
++ return rxrpc_lookup_peer_rcu(local, srx);
++}
++#endif /* CONFIG_AF_RXRPC_IPV6 */
++
++/*
++ * Handle an error received on the local endpoint as a tunnel.
++ */
++void rxrpc_encap_err_rcv(struct sock *sk, struct sk_buff *skb,
++ unsigned int udp_offset)
++{
++ struct sock_extended_err ee;
++ struct sockaddr_rxrpc srx;
++ struct rxrpc_local *local;
++ struct rxrpc_peer *peer;
++ unsigned int info = 0;
++ int err;
++ u8 version = ip_hdr(skb)->version;
++ u8 type = icmp_hdr(skb)->type;
++ u8 code = icmp_hdr(skb)->code;
++
++ rcu_read_lock();
++ local = rcu_dereference_sk_user_data(sk);
++ if (unlikely(!local)) {
++ rcu_read_unlock();
++ return;
++ }
++
++ rxrpc_new_skb(skb, rxrpc_skb_received);
++
++ switch (ip_hdr(skb)->version) {
++ case IPVERSION:
++ peer = rxrpc_lookup_peer_icmp_rcu(local, skb, udp_offset,
++ &info, &srx);
++ break;
++#ifdef CONFIG_AF_RXRPC_IPV6
++ case 6:
++ peer = rxrpc_lookup_peer_icmp6_rcu(local, skb, udp_offset,
++ &info, &srx);
++ break;
++#endif
++ default:
++ rcu_read_unlock();
++ return;
++ }
++
++ if (peer && !rxrpc_get_peer_maybe(peer))
++ peer = NULL;
++ if (!peer) {
++ rcu_read_unlock();
++ return;
++ }
++
++ memset(&ee, 0, sizeof(ee));
++
++ switch (version) {
++ case IPVERSION:
++ switch (type) {
++ case ICMP_DEST_UNREACH:
++ switch (code) {
++ case ICMP_FRAG_NEEDED:
++ rxrpc_adjust_mtu(peer, info);
++ rcu_read_unlock();
++ rxrpc_put_peer(peer);
++ return;
++ default:
++ break;
++ }
++
++ err = EHOSTUNREACH;
++ if (code <= NR_ICMP_UNREACH) {
++ /* Might want to do something different with
++ * non-fatal errors
++ */
++ //harderr = icmp_err_convert[code].fatal;
++ err = icmp_err_convert[code].errno;
++ }
++ break;
++
++ case ICMP_TIME_EXCEEDED:
++ err = EHOSTUNREACH;
++ break;
++ default:
++ err = EPROTO;
++ break;
++ }
++
++ ee.ee_origin = SO_EE_ORIGIN_ICMP;
++ ee.ee_type = type;
++ ee.ee_code = code;
++ ee.ee_errno = err;
++ break;
++
++#ifdef CONFIG_AF_RXRPC_IPV6
++ case 6:
++ switch (type) {
++ case ICMPV6_PKT_TOOBIG:
++ rxrpc_adjust_mtu(peer, info);
++ rcu_read_unlock();
++ rxrpc_put_peer(peer);
++ return;
++ }
++
++ icmpv6_err_convert(type, code, &err);
++
++ if (err == EACCES)
++ err = EHOSTUNREACH;
++
++ ee.ee_origin = SO_EE_ORIGIN_ICMP6;
++ ee.ee_type = type;
++ ee.ee_code = code;
++ ee.ee_errno = err;
++ break;
++#endif
++ }
++
++ trace_rxrpc_rx_icmp(peer, &ee, &srx);
++
++ rxrpc_distribute_error(peer, err, RXRPC_CALL_NETWORK_ERROR);
++ rcu_read_unlock();
++ rxrpc_put_peer(peer);
++}
++
++/*
++ * Find the peer associated with a local error.
++ */
++static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
++ const struct sk_buff *skb,
++ struct sockaddr_rxrpc *srx)
++{
++ struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
++
++ _enter("");
++
++ memset(srx, 0, sizeof(*srx));
++ srx->transport_type = local->srx.transport_type;
++ srx->transport_len = local->srx.transport_len;
++ srx->transport.family = local->srx.transport.family;
++
+ switch (srx->transport.family) {
+ case AF_INET:
+ srx->transport_len = sizeof(srx->transport.sin);
+@@ -104,10 +346,8 @@ static struct rxrpc_peer *rxrpc_lookup_peer_icmp_rcu(struct rxrpc_local *local,
+ /*
+ * Handle an MTU/fragmentation problem.
+ */
+-static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, struct sock_exterr_skb *serr)
++static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
+ {
+- u32 mtu = serr->ee.ee_info;
+-
+ _net("Rx ICMP Fragmentation Needed (%d)", mtu);
+
+ /* wind down the local interface MTU */
+@@ -148,7 +388,7 @@ void rxrpc_error_report(struct sock *sk)
+ struct sock_exterr_skb *serr;
+ struct sockaddr_rxrpc srx;
+ struct rxrpc_local *local;
+- struct rxrpc_peer *peer;
++ struct rxrpc_peer *peer = NULL;
+ struct sk_buff *skb;
+
+ rcu_read_lock();
+@@ -172,41 +412,20 @@ void rxrpc_error_report(struct sock *sk)
+ }
+ rxrpc_new_skb(skb, rxrpc_skb_received);
+ serr = SKB_EXT_ERR(skb);
+- if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
+- _leave("UDP empty message");
+- rcu_read_unlock();
+- rxrpc_free_skb(skb, rxrpc_skb_freed);
+- return;
+- }
+
+- peer = rxrpc_lookup_peer_icmp_rcu(local, skb, &srx);
+- if (peer && !rxrpc_get_peer_maybe(peer))
+- peer = NULL;
+- if (!peer) {
+- rcu_read_unlock();
+- rxrpc_free_skb(skb, rxrpc_skb_freed);
+- _leave(" [no peer]");
+- return;
+- }
+-
+- trace_rxrpc_rx_icmp(peer, &serr->ee, &srx);
+-
+- if ((serr->ee.ee_origin == SO_EE_ORIGIN_ICMP &&
+- serr->ee.ee_type == ICMP_DEST_UNREACH &&
+- serr->ee.ee_code == ICMP_FRAG_NEEDED)) {
+- rxrpc_adjust_mtu(peer, serr);
+- rcu_read_unlock();
+- rxrpc_free_skb(skb, rxrpc_skb_freed);
+- rxrpc_put_peer(peer);
+- _leave(" [MTU update]");
+- return;
++ if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL) {
++ peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx);
++ if (peer && !rxrpc_get_peer_maybe(peer))
++ peer = NULL;
++ if (peer) {
++ trace_rxrpc_rx_icmp(peer, &serr->ee, &srx);
++ rxrpc_store_error(peer, serr);
++ }
+ }
+
+- rxrpc_store_error(peer, serr);
+ rcu_read_unlock();
+ rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_put_peer(peer);
+-
+ _leave("");
+ }
+
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index 08aab5c01437d..db47844f4ac99 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -540,7 +540,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
+ * directly into the target buffer.
+ */
+ sg = _sg;
+- nsg = skb_shinfo(skb)->nr_frags;
++ nsg = skb_shinfo(skb)->nr_frags + 1;
+ if (nsg <= 4) {
+ nsg = 4;
+ } else {
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index 3d061a13d7ed2..2829455211f8c 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -135,15 +135,15 @@ static void increment_one_qlen(u32 sfbhash, u32 slot, struct sfb_sched_data *q)
+ }
+ }
+
+-static void increment_qlen(const struct sk_buff *skb, struct sfb_sched_data *q)
++static void increment_qlen(const struct sfb_skb_cb *cb, struct sfb_sched_data *q)
+ {
+ u32 sfbhash;
+
+- sfbhash = sfb_hash(skb, 0);
++ sfbhash = cb->hashes[0];
+ if (sfbhash)
+ increment_one_qlen(sfbhash, 0, q);
+
+- sfbhash = sfb_hash(skb, 1);
++ sfbhash = cb->hashes[1];
+ if (sfbhash)
+ increment_one_qlen(sfbhash, 1, q);
+ }
+@@ -281,8 +281,10 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ {
+
+ struct sfb_sched_data *q = qdisc_priv(sch);
++ unsigned int len = qdisc_pkt_len(skb);
+ struct Qdisc *child = q->qdisc;
+ struct tcf_proto *fl;
++ struct sfb_skb_cb cb;
+ int i;
+ u32 p_min = ~0;
+ u32 minqlen = ~0;
+@@ -399,11 +401,12 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ }
+
+ enqueue:
++ memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
+ ret = qdisc_enqueue(skb, child, to_free);
+ if (likely(ret == NET_XMIT_SUCCESS)) {
+- qdisc_qstats_backlog_inc(sch, skb);
++ sch->qstats.backlog += len;
+ sch->q.qlen++;
+- increment_qlen(skb, q);
++ increment_qlen(&cb, q);
+ } else if (net_xmit_drop_count(ret)) {
+ q->stats.childdrop++;
+ qdisc_qstats_drop(sch);
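The key rule in sfb_enqueue() is that qdisc_enqueue() may free the skb (and a child qdisc may overwrite skb->cb), so everything needed afterwards must be captured first. The fix snapshots both the packet length and the SFB hashes before handing the skb down:

	unsigned int len = qdisc_pkt_len(skb);		/* capture before enqueue */
	struct sfb_skb_cb cb;

	memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));	/* skb->cb may not survive */
	ret = qdisc_enqueue(skb, child, to_free);	/* may free the skb */
	if (likely(ret == NET_XMIT_SUCCESS)) {
		sch->qstats.backlog += len;		/* never touch skb again */
		sch->q.qlen++;
		increment_qlen(&cb, q);
	}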
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index b9c71a304d399..0b941dd63d268 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -18,6 +18,7 @@
+ #include <linux/module.h>
+ #include <linux/spinlock.h>
+ #include <linux/rcupdate.h>
++#include <linux/time.h>
+ #include <net/netlink.h>
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+@@ -176,7 +177,7 @@ static ktime_t get_interval_end_time(struct sched_gate_list *sched,
+
+ static int length_to_duration(struct taprio_sched *q, int len)
+ {
+- return div_u64(len * atomic64_read(&q->picos_per_byte), 1000);
++ return div_u64(len * atomic64_read(&q->picos_per_byte), PSEC_PER_NSEC);
+ }
+
+ /* Returns the entry corresponding to next available interval. If
+@@ -551,7 +552,7 @@ static struct sk_buff *taprio_peek(struct Qdisc *sch)
+ static void taprio_set_budget(struct taprio_sched *q, struct sched_entry *entry)
+ {
+ atomic_set(&entry->budget,
+- div64_u64((u64)entry->interval * 1000,
++ div64_u64((u64)entry->interval * PSEC_PER_NSEC,
+ atomic64_read(&q->picos_per_byte)));
+ }
+
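picos_per_byte times a length in bytes yields picoseconds, and PSEC_PER_NSEC (1000) converts that to the nanoseconds taprio budgets in; spelling the constant out makes the unit conversion explicit. A quick userspace check of the arithmetic (1 Gb/s corresponds to 8000 ps per byte):

	#include <stdio.h>
	#include <stdint.h>

	#define PSEC_PER_NSEC 1000ULL

	int main(void)
	{
		uint64_t picos_per_byte = 8000;	/* 1 Gb/s */
		uint64_t len = 1500;		/* bytes */

		/* 1500 B * 8000 ps/B / 1000 ps/ns = 12000 ns */
		printf("tx duration: %llu ns\n",
		       (unsigned long long)(len * picos_per_byte / PSEC_PER_NSEC));
		return 0;
	}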
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index f40f6ed0fbdb4..1f3bb1f6b1f7b 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -755,6 +755,7 @@ int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk,
+ lnk->lgr = lgr;
+ smc_lgr_hold(lgr); /* lgr_put in smcr_link_clear() */
+ lnk->link_idx = link_idx;
++ lnk->wr_rx_id_compl = 0;
+ smc_ibdev_cnt_inc(lnk);
+ smcr_copy_dev_info_to_link(lnk);
+ atomic_set(&lnk->conn_cnt, 0);
+diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
+index 4cb03e9423648..7b43a78c7f73a 100644
+--- a/net/smc/smc_core.h
++++ b/net/smc/smc_core.h
+@@ -115,8 +115,10 @@ struct smc_link {
+ dma_addr_t wr_rx_dma_addr; /* DMA address of wr_rx_bufs */
+ dma_addr_t wr_rx_v2_dma_addr; /* DMA address of v2 rx buf*/
+ u64 wr_rx_id; /* seq # of last recv WR */
++ u64 wr_rx_id_compl; /* seq # of last completed WR */
+ u32 wr_rx_cnt; /* number of WR recv buffers */
+ unsigned long wr_rx_tstamp; /* jiffies when last buf rx */
++ wait_queue_head_t wr_rx_empty_wait; /* wait for RQ empty */
+
+ struct ib_reg_wr wr_reg; /* WR register memory region */
+ wait_queue_head_t wr_reg_wait; /* wait for wr_reg result */
+diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
+index 26f8f240d9e84..b0678a417e09d 100644
+--- a/net/smc/smc_wr.c
++++ b/net/smc/smc_wr.c
+@@ -454,6 +454,7 @@ static inline void smc_wr_rx_process_cqes(struct ib_wc wc[], int num)
+
+ for (i = 0; i < num; i++) {
+ link = wc[i].qp->qp_context;
++ link->wr_rx_id_compl = wc[i].wr_id;
+ if (wc[i].status == IB_WC_SUCCESS) {
+ link->wr_rx_tstamp = jiffies;
+ smc_wr_rx_demultiplex(&wc[i]);
+@@ -465,6 +466,8 @@ static inline void smc_wr_rx_process_cqes(struct ib_wc wc[], int num)
+ case IB_WC_RNR_RETRY_EXC_ERR:
+ case IB_WC_WR_FLUSH_ERR:
+ smcr_link_down_cond_sched(link);
++ if (link->wr_rx_id_compl == link->wr_rx_id)
++ wake_up(&link->wr_rx_empty_wait);
+ break;
+ default:
+ smc_wr_rx_post(link); /* refill WR RX */
+@@ -639,6 +642,7 @@ void smc_wr_free_link(struct smc_link *lnk)
+ return;
+ ibdev = lnk->smcibdev->ibdev;
+
++ smc_wr_drain_cq(lnk);
+ smc_wr_wakeup_reg_wait(lnk);
+ smc_wr_wakeup_tx_wait(lnk);
+
+@@ -889,6 +893,7 @@ int smc_wr_create_link(struct smc_link *lnk)
+ atomic_set(&lnk->wr_tx_refcnt, 0);
+ init_waitqueue_head(&lnk->wr_reg_wait);
+ atomic_set(&lnk->wr_reg_refcnt, 0);
++ init_waitqueue_head(&lnk->wr_rx_empty_wait);
+ return rc;
+
+ dma_unmap:
+diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h
+index a54e90a1110fd..45e9b894d3f8a 100644
+--- a/net/smc/smc_wr.h
++++ b/net/smc/smc_wr.h
+@@ -73,6 +73,11 @@ static inline void smc_wr_tx_link_put(struct smc_link *link)
+ wake_up_all(&link->wr_tx_wait);
+ }
+
++static inline void smc_wr_drain_cq(struct smc_link *lnk)
++{
++ wait_event(lnk->wr_rx_empty_wait, lnk->wr_rx_id_compl == lnk->wr_rx_id);
++}
++
+ static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
+ {
+ wake_up_all(&lnk->wr_tx_wait);
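wr_rx_id_compl and wr_rx_empty_wait together implement a drain barrier: the CQ handler records the id of the last completed receive WR and wakes the waiter once it catches up with wr_rx_id (the last posted one), and smc_wr_free_link() blocks on that condition before tearing the link down. The pattern reduces to:

	/* completion side (CQ handler) */
	link->wr_rx_id_compl = wc[i].wr_id;
	if (link->wr_rx_id_compl == link->wr_rx_id)
		wake_up(&link->wr_rx_empty_wait);

	/* teardown side */
	wait_event(lnk->wr_rx_empty_wait,
		   lnk->wr_rx_id_compl == lnk->wr_rx_id);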
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index 2f4d23238a7e3..9618e4429f0fe 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -160,7 +160,7 @@ static void map_set(u64 *up_map, int i, unsigned int v)
+
+ static int map_get(u64 up_map, int i)
+ {
+- return (up_map & (1 << i)) >> i;
++ return (up_map & (1ULL << i)) >> i;
+ }
+
+ static struct tipc_peer *peer_prev(struct tipc_peer *peer)
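In map_get() the constant 1 has type int, so 1 << i is undefined once i reaches the width of int, and the up/down state of peers with ids above 31 was read from the wrong (or an unspecified) bit; promoting to 1ULL keeps the whole expression 64-bit. A userspace illustration of what an int-width mask does to a high bit (the UB case itself is deliberately not executed):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int i = 40;
		uint64_t up_map = 1ULL << i;	/* peer #40 marked up */

		/* What an int-width mask degrades to: the bit cannot be
		 * represented, so the peer always looks down. */
		uint32_t narrow = (uint32_t)(1ULL << i);

		printf("64-bit test: %d\n",
		       (int)((up_map & (1ULL << i)) >> i));	/* prints 1 */
		printf("truncated 32-bit mask: %#x\n", narrow);	/* prints 0 */
		return 0;
	}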
+diff --git a/security/security.c b/security/security.c
+index 188b8f7822206..8b62654ff3f97 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2654,4 +2654,8 @@ int security_uring_sqpoll(void)
+ {
+ return call_int_hook(uring_sqpoll, 0);
+ }
++int security_uring_cmd(struct io_uring_cmd *ioucmd)
++{
++ return call_int_hook(uring_cmd, 0, ioucmd);
++}
+ #endif /* CONFIG_IO_URING */
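The call chain is io_uring_cmd() -> security_uring_cmd() -> call_int_hook(), which walks every LSM that registered a uring_cmd hook (SELinux and Smack below). A hypothetical minimal module would hook in as follows (mylsm_* names invented; only the hook plumbing shown in the hunks is assumed, and registration via security_add_hooks() is omitted):

	static int mylsm_uring_cmd(struct io_uring_cmd *ioucmd)
	{
		if (!ioucmd->file)
			return -EINVAL;
		return 0;	/* a real policy would check the target file */
	}

	static struct security_hook_list mylsm_hooks[] __lsm_ro_after_init = {
	#ifdef CONFIG_IO_URING
		LSM_HOOK_INIT(uring_cmd, mylsm_uring_cmd),
	#endif
	};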
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 1bbd53321d133..e90dfa36f79aa 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -91,6 +91,7 @@
+ #include <uapi/linux/mount.h>
+ #include <linux/fsnotify.h>
+ #include <linux/fanotify.h>
++#include <linux/io_uring.h>
+
+ #include "avc.h"
+ #include "objsec.h"
+@@ -6990,6 +6991,28 @@ static int selinux_uring_sqpoll(void)
+ return avc_has_perm(&selinux_state, sid, sid,
+ SECCLASS_IO_URING, IO_URING__SQPOLL, NULL);
+ }
++
++/**
++ * selinux_uring_cmd - check if IORING_OP_URING_CMD is allowed
++ * @ioucmd: the io_uring command structure
++ *
++ * Check to see if the current domain is allowed to execute an
++ * IORING_OP_URING_CMD against the device/file specified in @ioucmd.
++ *
++ */
++static int selinux_uring_cmd(struct io_uring_cmd *ioucmd)
++{
++ struct file *file = ioucmd->file;
++ struct inode *inode = file_inode(file);
++ struct inode_security_struct *isec = selinux_inode(inode);
++ struct common_audit_data ad;
++
++ ad.type = LSM_AUDIT_DATA_FILE;
++ ad.u.file = file;
++
++ return avc_has_perm(&selinux_state, current_sid(), isec->sid,
++ SECCLASS_IO_URING, IO_URING__CMD, &ad);
++}
+ #endif /* CONFIG_IO_URING */
+
+ /*
+@@ -7234,6 +7257,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
+ #ifdef CONFIG_IO_URING
+ LSM_HOOK_INIT(uring_override_creds, selinux_uring_override_creds),
+ LSM_HOOK_INIT(uring_sqpoll, selinux_uring_sqpoll),
++ LSM_HOOK_INIT(uring_cmd, selinux_uring_cmd),
+ #endif
+
+ /*
+diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
+index ff757ae5f2537..1c2f41ff4e551 100644
+--- a/security/selinux/include/classmap.h
++++ b/security/selinux/include/classmap.h
+@@ -253,7 +253,7 @@ const struct security_class_mapping secclass_map[] = {
+ { "anon_inode",
+ { COMMON_FILE_PERMS, NULL } },
+ { "io_uring",
+- { "override_creds", "sqpoll", NULL } },
++ { "override_creds", "sqpoll", "cmd", NULL } },
+ { NULL }
+ };
+
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 6207762dbdb13..b30e20f64471c 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -42,6 +42,7 @@
+ #include <linux/fs_context.h>
+ #include <linux/fs_parser.h>
+ #include <linux/watch_queue.h>
++#include <linux/io_uring.h>
+ #include "smack.h"
+
+ #define TRANS_TRUE "TRUE"
+@@ -4739,6 +4740,36 @@ static int smack_uring_sqpoll(void)
+ return -EPERM;
+ }
+
++/**
++ * smack_uring_cmd - check on file operations for io_uring
++ * @ioucmd: the command in question
++ *
++ * Make a best guess about whether an io_uring "command" should
++ * be allowed. Use the same logic used for determining if the
++ * file could be opened for read in the absence of better criteria.
++ */
++static int smack_uring_cmd(struct io_uring_cmd *ioucmd)
++{
++ struct file *file = ioucmd->file;
++ struct smk_audit_info ad;
++ struct task_smack *tsp;
++ struct inode *inode;
++ int rc;
++
++ if (!file)
++ return -EINVAL;
++
++ tsp = smack_cred(file->f_cred);
++ inode = file_inode(file);
++
++ smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_PATH);
++ smk_ad_setfield_u_fs_path(&ad, file->f_path);
++ rc = smk_tskacc(tsp, smk_of_inode(inode), MAY_READ, &ad);
++ rc = smk_bu_credfile(file->f_cred, file, MAY_READ, rc);
++
++ return rc;
++}
++
+ #endif /* CONFIG_IO_URING */
+
+ struct lsm_blob_sizes smack_blob_sizes __lsm_ro_after_init = {
+@@ -4896,6 +4927,7 @@ static struct security_hook_list smack_hooks[] __lsm_ro_after_init = {
+ #ifdef CONFIG_IO_URING
+ LSM_HOOK_INIT(uring_override_creds, smack_uring_override_creds),
+ LSM_HOOK_INIT(uring_sqpoll, smack_uring_sqpoll),
++ LSM_HOOK_INIT(uring_cmd, smack_uring_cmd),
+ #endif
+ };
+
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 55b3c49ba61de..244afc38ddcaa 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -535,10 +535,13 @@ static void *snd_dma_noncontig_alloc(struct snd_dma_buffer *dmab, size_t size)
+ dmab->dev.need_sync = dma_need_sync(dmab->dev.dev,
+ sg_dma_address(sgt->sgl));
+ p = dma_vmap_noncontiguous(dmab->dev.dev, size, sgt);
+- if (p)
++ if (p) {
+ dmab->private_data = sgt;
+- else
++ /* store the first page address for convenience */
++ dmab->addr = snd_sgbuf_get_addr(dmab, 0);
++ } else {
+ dma_free_noncontiguous(dmab->dev.dev, size, sgt, dmab->dev.dir);
++ }
+ return p;
+ }
+
+@@ -772,6 +775,8 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
+ if (!p)
+ goto error;
+ dmab->private_data = sgbuf;
++ /* store the first page address for convenience */
++ dmab->addr = snd_sgbuf_get_addr(dmab, 0);
+ return p;
+
+ error:
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 90c3a367d7de9..02df915eb3c66 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1672,14 +1672,14 @@ static int snd_pcm_oss_sync(struct snd_pcm_oss_file *pcm_oss_file)
+ runtime = substream->runtime;
+ if (atomic_read(&substream->mmap_count))
+ goto __direct;
+- err = snd_pcm_oss_make_ready(substream);
+- if (err < 0)
+- return err;
+ atomic_inc(&runtime->oss.rw_ref);
+ if (mutex_lock_interruptible(&runtime->oss.params_lock)) {
+ atomic_dec(&runtime->oss.rw_ref);
+ return -ERESTARTSYS;
+ }
++ err = snd_pcm_oss_make_ready_locked(substream);
++ if (err < 0)
++ goto unlock;
+ format = snd_pcm_oss_format_from(runtime->oss.format);
+ width = snd_pcm_format_physical_width(format);
+ if (runtime->oss.buffer_used > 0) {
+diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
+index 9b4a7cdb103ad..12f12a294df5a 100644
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -605,17 +605,18 @@ static unsigned int loopback_jiffies_timer_pos_update
+ cable->streams[SNDRV_PCM_STREAM_PLAYBACK];
+ struct loopback_pcm *dpcm_capt =
+ cable->streams[SNDRV_PCM_STREAM_CAPTURE];
+- unsigned long delta_play = 0, delta_capt = 0;
++ unsigned long delta_play = 0, delta_capt = 0, cur_jiffies;
+ unsigned int running, count1, count2;
+
++ cur_jiffies = jiffies;
+ running = cable->running ^ cable->pause;
+ if (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) {
+- delta_play = jiffies - dpcm_play->last_jiffies;
++ delta_play = cur_jiffies - dpcm_play->last_jiffies;
+ dpcm_play->last_jiffies += delta_play;
+ }
+
+ if (running & (1 << SNDRV_PCM_STREAM_CAPTURE)) {
+- delta_capt = jiffies - dpcm_capt->last_jiffies;
++ delta_capt = cur_jiffies - dpcm_capt->last_jiffies;
+ dpcm_capt->last_jiffies += delta_capt;
+ }
+
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index b2701a4452d86..48af77ae8020f 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -124,7 +124,7 @@ static int snd_emu10k1_pcm_channel_alloc(struct snd_emu10k1_pcm * epcm, int voic
+ epcm->voices[0]->epcm = epcm;
+ if (voices > 1) {
+ for (i = 1; i < voices; i++) {
+- epcm->voices[i] = &epcm->emu->voices[epcm->voices[0]->number + i];
++ epcm->voices[i] = &epcm->emu->voices[(epcm->voices[0]->number + i) % NUM_G];
+ epcm->voices[i]->epcm = epcm;
+ }
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index a77165bd92a98..b20694fd69dea 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1817,7 +1817,7 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+
+ /* use the non-cached pages in non-snoop mode */
+ if (!azx_snoop(chip))
+- azx_bus(chip)->dma_type = SNDRV_DMA_TYPE_DEV_WC;
++ azx_bus(chip)->dma_type = SNDRV_DMA_TYPE_DEV_WC_SG;
+
+ if (chip->driver_type == AZX_DRIVER_NVIDIA) {
+ dev_dbg(chip->card->dev, "Enable delay in RIRB handling\n");
+diff --git a/sound/soc/atmel/mchp-spdiftx.c b/sound/soc/atmel/mchp-spdiftx.c
+index d243800464352..bcca1cf3cd7b6 100644
+--- a/sound/soc/atmel/mchp-spdiftx.c
++++ b/sound/soc/atmel/mchp-spdiftx.c
+@@ -196,8 +196,7 @@ struct mchp_spdiftx_dev {
+ struct clk *pclk;
+ struct clk *gclk;
+ unsigned int fmt;
+- const struct mchp_i2s_caps *caps;
+- int gclk_enabled:1;
++ unsigned int gclk_enabled:1;
+ };
+
+ static inline int mchp_spdiftx_is_running(struct mchp_spdiftx_dev *dev)
+@@ -766,8 +765,6 @@ static const struct of_device_id mchp_spdiftx_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, mchp_spdiftx_dt_ids);
+ static int mchp_spdiftx_probe(struct platform_device *pdev)
+ {
+- struct device_node *np = pdev->dev.of_node;
+- const struct of_device_id *match;
+ struct mchp_spdiftx_dev *dev;
+ struct resource *mem;
+ struct regmap *regmap;
+@@ -781,11 +778,6 @@ static int mchp_spdiftx_probe(struct platform_device *pdev)
+ if (!dev)
+ return -ENOMEM;
+
+- /* Get hardware capabilities. */
+- match = of_match_node(mchp_spdiftx_dt_ids, np);
+- if (match)
+- dev->caps = match->data;
+-
+ /* Map I/O registers. */
+ base = devm_platform_get_and_ioremap_resource(pdev, 0, &mem);
+ if (IS_ERR(base))
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 4fade23887972..8cba3015398b7 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -1618,7 +1618,6 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ unsigned int current_plug_status;
+ unsigned int current_button_status;
+ unsigned int i;
+- int report = 0;
+
+ mutex_lock(&cs42l42->irq_lock);
+ if (cs42l42->suspended) {
+@@ -1713,13 +1712,15 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+
+ if (current_button_status & CS42L42_M_DETECT_TF_MASK) {
+ dev_dbg(cs42l42->dev, "Button released\n");
+- report = 0;
++ snd_soc_jack_report(cs42l42->jack, 0,
++ SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++ SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ } else if (current_button_status & CS42L42_M_DETECT_FT_MASK) {
+- report = cs42l42_handle_button_press(cs42l42);
+-
++ snd_soc_jack_report(cs42l42->jack,
++ cs42l42_handle_button_press(cs42l42),
++ SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++ SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ }
+- snd_soc_jack_report(cs42l42->jack, report, SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+- SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ }
+ }
+
+diff --git a/sound/soc/qcom/sm8250.c b/sound/soc/qcom/sm8250.c
+index 6e1184c8b672a..c48ac107810d4 100644
+--- a/sound/soc/qcom/sm8250.c
++++ b/sound/soc/qcom/sm8250.c
+@@ -270,6 +270,7 @@ static int sm8250_platform_probe(struct platform_device *pdev)
+ if (!card)
+ return -ENOMEM;
+
++ card->owner = THIS_MODULE;
+ /* Allocate the private data */
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+diff --git a/sound/soc/sof/Kconfig b/sound/soc/sof/Kconfig
+index 4542868cd730f..39216c09f1597 100644
+--- a/sound/soc/sof/Kconfig
++++ b/sound/soc/sof/Kconfig
+@@ -196,6 +196,7 @@ config SND_SOC_SOF_DEBUG_ENABLE_FIRMWARE_TRACE
+
+ config SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST
+ tristate "SOF enable IPC flood test"
++ depends on SND_SOC_SOF
+ select SND_SOC_SOF_CLIENT
+ help
+ This option enables a separate client device for IPC flood test
+@@ -214,6 +215,7 @@ config SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_NUM
+
+ config SND_SOC_SOF_DEBUG_IPC_MSG_INJECTOR
+ tristate "SOF enable IPC message injector"
++ depends on SND_SOC_SOF
+ select SND_SOC_SOF_CLIENT
+ help
+ This option enables the IPC message injector which can be used to send
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index d356743de2ff9..706d249a9ad6b 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -699,7 +699,7 @@ static bool check_delayed_register_option(struct snd_usb_audio *chip, int iface)
+ if (delayed_register[i] &&
+ sscanf(delayed_register[i], "%x:%x", &id, &inum) == 2 &&
+ id == chip->usb_id)
+- return inum != iface;
++ return iface < inum;
+ }
+
+ return false;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index f9c921683948d..ff2aa13b7b26f 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -758,7 +758,8 @@ bool snd_usb_endpoint_compatible(struct snd_usb_audio *chip,
+ * The endpoint needs to be closed via snd_usb_endpoint_close() later.
+ *
+ * Note that this function doesn't configure the endpoint. The substream
+- * needs to set it up later via snd_usb_endpoint_configure().
++ * needs to set it up later via snd_usb_endpoint_set_params() and
++ * snd_usb_endpoint_prepare().
+ */
+ struct snd_usb_endpoint *
+ snd_usb_endpoint_open(struct snd_usb_audio *chip,
+@@ -924,6 +925,8 @@ void snd_usb_endpoint_close(struct snd_usb_audio *chip,
+ endpoint_set_interface(chip, ep, false);
+
+ if (!--ep->opened) {
++ if (ep->clock_ref && !atomic_read(&ep->clock_ref->locked))
++ ep->clock_ref->rate = 0;
+ ep->iface = 0;
+ ep->altsetting = 0;
+ ep->cur_audiofmt = NULL;
+@@ -1290,12 +1293,13 @@ out_of_memory:
+ /*
+ * snd_usb_endpoint_set_params: configure an snd_usb_endpoint
+ *
++ * It's called from the hw_params callback.
+ * Determine the number of URBs to be used on this endpoint.
+ * An endpoint must be configured before it can be started.
+ * An endpoint that is already running can not be reconfigured.
+ */
+-static int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep)
+ {
+ const struct audioformat *fmt = ep->cur_audiofmt;
+ int err;
+@@ -1378,18 +1382,18 @@ static int init_sample_rate(struct snd_usb_audio *chip,
+ }
+
+ /*
+- * snd_usb_endpoint_configure: Configure the endpoint
++ * snd_usb_endpoint_prepare: Prepare the endpoint
+ *
+ * This function sets up the EP to be fully usable state.
+- * It's called either from hw_params or prepare callback.
++ * It's called from the prepare callback.
+ * The function checks need_setup flag, and performs nothing unless needed,
+ * so it's safe to call this multiple times.
+ *
+ * This returns zero if unchanged, 1 if the configuration has changed,
+ * or a negative error code.
+ */
+-int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep)
+ {
+ bool iface_first;
+ int err = 0;
+@@ -1410,9 +1414,6 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+ if (err < 0)
+ goto unlock;
+ }
+- err = snd_usb_endpoint_set_params(chip, ep);
+- if (err < 0)
+- goto unlock;
+ goto done;
+ }
+
+@@ -1440,10 +1441,6 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+ if (err < 0)
+ goto unlock;
+
+- err = snd_usb_endpoint_set_params(chip, ep);
+- if (err < 0)
+- goto unlock;
+-
+ err = snd_usb_select_mode_quirk(chip, ep->cur_audiofmt);
+ if (err < 0)
+ goto unlock;
+diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
+index 6a9af04cf175a..e67ea28faa54f 100644
+--- a/sound/usb/endpoint.h
++++ b/sound/usb/endpoint.h
+@@ -17,8 +17,10 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip,
+ bool is_sync_ep);
+ void snd_usb_endpoint_close(struct snd_usb_audio *chip,
+ struct snd_usb_endpoint *ep);
+-int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep);
++int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep);
++int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep);
+ int snd_usb_endpoint_get_clock_rate(struct snd_usb_audio *chip, int clock);
+
+ bool snd_usb_endpoint_compatible(struct snd_usb_audio *chip,
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index e692ae04436a5..02035b545f9dd 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -443,17 +443,17 @@ static int configure_endpoints(struct snd_usb_audio *chip,
+ if (stop_endpoints(subs, false))
+ sync_pending_stops(subs);
+ if (subs->sync_endpoint) {
+- err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
++ err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
+ if (err < 0)
+ return err;
+ }
+- err = snd_usb_endpoint_configure(chip, subs->data_endpoint);
++ err = snd_usb_endpoint_prepare(chip, subs->data_endpoint);
+ if (err < 0)
+ return err;
+ snd_usb_set_format_quirk(subs, subs->cur_audiofmt);
+ } else {
+ if (subs->sync_endpoint) {
+- err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
++ err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
+ if (err < 0)
+ return err;
+ }
+@@ -551,7 +551,13 @@ static int snd_usb_hw_params(struct snd_pcm_substream *substream,
+ subs->cur_audiofmt = fmt;
+ mutex_unlock(&chip->mutex);
+
+- ret = configure_endpoints(chip, subs);
++ if (subs->sync_endpoint) {
++ ret = snd_usb_endpoint_set_params(chip, subs->sync_endpoint);
++ if (ret < 0)
++ goto unlock;
++ }
++
++ ret = snd_usb_endpoint_set_params(chip, subs->data_endpoint);
+
+ unlock:
+ if (ret < 0)
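The net effect of the endpoint split is that buffer and URB sizing now happens once in hw_params, while the interface, rate, and mode setup stays in prepare, which may run repeatedly and is cheap when nothing changed. The resulting call order per substream (sync endpoint first, when present; a sketch of the flow established above):

	/* hw_params: size and allocate URBs, no interface changes */
	snd_usb_endpoint_set_params(chip, subs->sync_endpoint);
	snd_usb_endpoint_set_params(chip, subs->data_endpoint);

	/* prepare: bring the endpoint to a usable state */
	snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
	snd_usb_endpoint_prepare(chip, subs->data_endpoint);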
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 9bfead5efc4c1..5b4d8f5eade20 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1764,7 +1764,7 @@ bool snd_usb_registration_quirk(struct snd_usb_audio *chip, int iface)
+
+ for (q = registration_quirks; q->usb_id; q++)
+ if (chip->usb_id == q->usb_id)
+- return iface != q->interface;
++ return iface < q->interface;
+
+ /* Register as normal */
+ return false;
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index ceb93d798182c..f10f4e6d3fb85 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -495,6 +495,10 @@ static int __snd_usb_add_audio_stream(struct snd_usb_audio *chip,
+ return 0;
+ }
+ }
++
++ if (chip->card->registered)
++ chip->need_delayed_register = true;
++
+ /* look for an empty stream */
+ list_for_each_entry(as, &chip->pcm_list, list) {
+ if (as->fmt_type != fp->fmt_type)
+@@ -502,9 +506,6 @@ static int __snd_usb_add_audio_stream(struct snd_usb_audio *chip,
+ subs = &as->substream[stream];
+ if (subs->ep_num)
+ continue;
+- if (snd_device_get_state(chip->card, as->pcm) !=
+- SNDRV_DEV_BUILD)
+- chip->need_delayed_register = true;
+ err = snd_pcm_new_stream(as->pcm, stream, 1);
+ if (err < 0)
+ return err;
+@@ -1105,7 +1106,7 @@ static int __snd_usb_parse_audio_interface(struct snd_usb_audio *chip,
+ * Dallas DS4201 workaround: It presents 5 altsettings, but the last
+ * one misses syncpipe, and does not produce any sound.
+ */
+- if (chip->usb_id == USB_ID(0x04fa, 0x4201))
++ if (chip->usb_id == USB_ID(0x04fa, 0x4201) && num >= 4)
+ num = 4;
+
+ for (i = 0; i < num; i++) {
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index e6c98a6e3908e..6b1bafe267a42 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -486,6 +486,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ if (ops->idx)
+ ops->idx(evlist, evsel, mp, idx);
+
++ pr_debug("idx %d: mmapping fd %d\n", idx, *output);
+ if (ops->mmap(map, mp, *output, evlist_cpu) < 0)
+ return -1;
+
+@@ -494,6 +495,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ if (!idx)
+ perf_evlist__set_mmap_first(evlist, map, overwrite);
+ } else {
++ pr_debug("idx %d: set output fd %d -> %d\n", idx, fd, *output);
+ if (ioctl(fd, PERF_EVENT_IOC_SET_OUTPUT, *output) != 0)
+ return -1;
+
+@@ -519,6 +521,48 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ return 0;
+ }
+
++static int
++mmap_per_thread(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
++ struct perf_mmap_param *mp)
++{
++ int nr_threads = perf_thread_map__nr(evlist->threads);
++ int nr_cpus = perf_cpu_map__nr(evlist->all_cpus);
++ int cpu, thread, idx = 0;
++ int nr_mmaps = 0;
++
++ pr_debug("%s: nr cpu values (may include -1) %d nr threads %d\n",
++ __func__, nr_cpus, nr_threads);
++
++ /* per-thread mmaps */
++ for (thread = 0; thread < nr_threads; thread++, idx++) {
++ int output = -1;
++ int output_overwrite = -1;
++
++ if (mmap_per_evsel(evlist, ops, idx, mp, 0, thread, &output,
++ &output_overwrite, &nr_mmaps))
++ goto out_unmap;
++ }
++
++ /* system-wide mmaps i.e. per-cpu */
++ for (cpu = 1; cpu < nr_cpus; cpu++, idx++) {
++ int output = -1;
++ int output_overwrite = -1;
++
++ if (mmap_per_evsel(evlist, ops, idx, mp, cpu, 0, &output,
++ &output_overwrite, &nr_mmaps))
++ goto out_unmap;
++ }
++
++ if (nr_mmaps != evlist->nr_mmaps)
++ pr_err("Miscounted nr_mmaps %d vs %d\n", nr_mmaps, evlist->nr_mmaps);
++
++ return 0;
++
++out_unmap:
++ perf_evlist__munmap(evlist);
++ return -1;
++}
++
+ static int
+ mmap_per_cpu(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ struct perf_mmap_param *mp)
+@@ -528,6 +572,8 @@ mmap_per_cpu(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+ int nr_mmaps = 0;
+ int cpu, thread;
+
++ pr_debug("%s: nr cpu values %d nr threads %d\n", __func__, nr_cpus, nr_threads);
++
+ for (cpu = 0; cpu < nr_cpus; cpu++) {
+ int output = -1;
+ int output_overwrite = -1;
+@@ -569,6 +615,7 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ struct perf_evlist_mmap_ops *ops,
+ struct perf_mmap_param *mp)
+ {
++ const struct perf_cpu_map *cpus = evlist->all_cpus;
+ struct perf_evsel *evsel;
+
+ if (!ops || !ops->get || !ops->mmap)
+@@ -588,6 +635,9 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ if (evlist->pollfd.entries == NULL && perf_evlist__alloc_pollfd(evlist) < 0)
+ return -ENOMEM;
+
++ if (perf_cpu_map__empty(cpus))
++ return mmap_per_thread(evlist, ops, mp);
++
+ return mmap_per_cpu(evlist, ops, mp);
+ }
+
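
The new dispatch hinges on perf_cpu_map__empty(): in libperf, a CPU map holding only the dummy -1 entry means no specific CPUs were requested, so rings are set up per thread rather than per CPU. A minimal userspace sketch of that decision, using hypothetical stand-in types rather than the libperf API:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for libperf's CPU map (illustration only). */
struct cpu_map { int nr; const int *cpus; };

/* Mirrors the perf_cpu_map__empty() semantics assumed above: a map
 * with the single dummy entry -1 means "no specific CPUs". */
static bool cpu_map_empty(const struct cpu_map *m)
{
	return m->nr == 1 && m->cpus[0] == -1;
}

int main(void)
{
	static const int dummy[] = { -1 }, real[] = { 0, 1 };
	struct cpu_map a = { 1, dummy }, b = { 2, real };

	printf("%s\n", cpu_map_empty(&a) ? "mmap_per_thread" : "mmap_per_cpu");
	printf("%s\n", cpu_map_empty(&b) ? "mmap_per_thread" : "mmap_per_cpu");
	return 0;
}
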
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 31c719f99f66e..5d87e0b0d85f9 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -162,32 +162,34 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+
+ /*
+ * Unfortunately these have to be hard coded because the noreturn
+- * attribute isn't provided in ELF data.
++ * attribute isn't provided in ELF data. Keep 'em sorted.
+ */
+ static const char * const global_noreturns[] = {
++ "__invalid_creds",
++ "__module_put_and_kthread_exit",
++ "__reiserfs_panic",
+ "__stack_chk_fail",
+- "panic",
++ "__ubsan_handle_builtin_unreachable",
++ "cpu_bringup_and_idle",
++ "cpu_startup_entry",
+ "do_exit",
++ "do_group_exit",
+ "do_task_dead",
+- "kthread_exit",
+- "make_task_dead",
+- "__module_put_and_kthread_exit",
++ "ex_handler_msr_mce",
++ "fortify_panic",
+ "kthread_complete_and_exit",
+- "__reiserfs_panic",
++ "kthread_exit",
++ "kunit_try_catch_throw",
+ "lbug_with_loc",
+- "fortify_panic",
+- "usercopy_abort",
+ "machine_real_restart",
++ "make_task_dead",
++ "panic",
+ "rewind_stack_and_make_dead",
+- "kunit_try_catch_throw",
+- "xen_start_kernel",
+- "cpu_bringup_and_idle",
+- "do_group_exit",
++ "sev_es_terminate",
++ "snp_abort",
+ "stop_this_cpu",
+- "__invalid_creds",
+- "cpu_startup_entry",
+- "__ubsan_handle_builtin_unreachable",
+- "ex_handler_msr_mce",
++ "usercopy_abort",
++ "xen_start_kernel",
+ };
+
+ if (!func)
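
Sorting the table is primarily a maintenance aid (the comment's "Keep 'em sorted"), but a sorted array of names would also permit an O(log n) lookup via bsearch(3). A sketch under that assumption — not objtool's actual lookup code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* bsearch(3) passes the key as the first comparator argument. */
static int cmp_name(const void *key, const void *elem)
{
	return strcmp(key, *(const char * const *)elem);
}

static const char * const noreturns[] = {	/* must stay sorted */
	"__stack_chk_fail", "do_exit", "panic", "usercopy_abort",
};

int main(void)
{
	const char *name = "panic";
	int found = bsearch(name, noreturns,
			    sizeof(noreturns) / sizeof(noreturns[0]),
			    sizeof(noreturns[0]), cmp_name) != NULL;

	printf("%s is noreturn: %s\n", name, found ? "yes" : "no");
	return 0;
}
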
+diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
+index 68f681ad54c1e..777bdf182a582 100644
+--- a/tools/perf/arch/x86/util/evlist.c
++++ b/tools/perf/arch/x86/util/evlist.c
+@@ -8,8 +8,13 @@
+ #define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
+ #define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
+
+-int arch_evlist__add_default_attrs(struct evlist *evlist)
++int arch_evlist__add_default_attrs(struct evlist *evlist,
++ struct perf_event_attr *attrs,
++ size_t nr_attrs)
+ {
++ if (nr_attrs)
++ return __evlist__add_default_attrs(evlist, attrs, nr_attrs);
++
+ if (!pmu_have_event("cpu", "slots"))
+ return 0;
+
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 9a71f0330137e..68c878b4e5e4c 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1892,14 +1892,18 @@ static int record__synthesize(struct record *rec, bool tail)
+
+ err = perf_event__synthesize_bpf_events(session, process_synthesized_event,
+ machine, opts);
+- if (err < 0)
++ if (err < 0) {
+ pr_warning("Couldn't synthesize bpf events.\n");
++ err = 0;
++ }
+
+ if (rec->opts.synth & PERF_SYNTH_CGROUP) {
+ err = perf_event__synthesize_cgroups(tool, process_synthesized_event,
+ machine);
+- if (err < 0)
++ if (err < 0) {
+ pr_warning("Couldn't synthesize cgroup events.\n");
++ err = 0;
++ }
+ }
+
+ if (rec->opts.nr_threads_synthesize > 1) {
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index c689054002cca..26a572c160d6f 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -441,6 +441,9 @@ static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
+ struct perf_event_attr *attr = &evsel->core.attr;
+ bool allow_user_set;
+
++ if (evsel__is_dummy_event(evsel))
++ return 0;
++
+ if (perf_header__has_feat(&session->header, HEADER_STAT))
+ return 0;
+
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 5f0333a8acd8a..82e14faecc3e4 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -1778,6 +1778,9 @@ static int add_default_attributes(void)
+ (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
+ };
++
++ struct perf_event_attr default_null_attrs[] = {};
++
+ /* Set attrs if no event is selected and !null_run: */
+ if (stat_config.null_run)
+ return 0;
+@@ -1941,6 +1944,9 @@ setup_metrics:
+ free(str);
+ }
+
++ if (!stat_config.topdown_level)
++ stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
++
+ if (!evsel_list->core.nr_entries) {
+ if (target__has_cpu(&target))
+ default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
+@@ -1957,9 +1963,8 @@ setup_metrics:
+ }
+ if (evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
+ return -1;
+-
+- stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
+- if (arch_evlist__add_default_attrs(evsel_list) < 0)
++ /* Platform specific attrs */
++ if (evlist__add_default_attrs(evsel_list, default_null_attrs) < 0)
+ return -1;
+ }
+
+diff --git a/tools/perf/dlfilters/dlfilter-show-cycles.c b/tools/perf/dlfilters/dlfilter-show-cycles.c
+index 9eccc97bff82f..6d47298ebe9f6 100644
+--- a/tools/perf/dlfilters/dlfilter-show-cycles.c
++++ b/tools/perf/dlfilters/dlfilter-show-cycles.c
+@@ -98,9 +98,9 @@ int filter_event_early(void *data, const struct perf_dlfilter_sample *sample, vo
+ static void print_vals(__u64 cycles, __u64 delta)
+ {
+ if (delta)
+- printf("%10llu %10llu ", cycles, delta);
++ printf("%10llu %10llu ", (unsigned long long)cycles, (unsigned long long)delta);
+ else
+- printf("%10llu %10s ", cycles, "");
++ printf("%10llu %10s ", (unsigned long long)cycles, "");
+ }
+
+ int filter_event(void *data, const struct perf_dlfilter_sample *sample, void *ctx)
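
The casts above exist because %llu strictly requires an unsigned long long argument, while __u64 can be typedef'd differently in exported headers (e.g. unsigned long on some 64-bit userspace setups), which trips -Wformat. Two portable ways to print a fixed-width 64-bit value:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t cycles = 123456789;

	/* Option 1: cast so the argument always matches %llu (the fix above). */
	printf("%10llu\n", (unsigned long long)cycles);

	/* Option 2: let <inttypes.h> supply the right conversion specifier. */
	printf("%10" PRIu64 "\n", cycles);
	return 0;
}
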
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index 48af7d379d822..efa5f006b5c61 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -342,9 +342,14 @@ int __evlist__add_default_attrs(struct evlist *evlist, struct perf_event_attr *a
+ return evlist__add_attrs(evlist, attrs, nr_attrs);
+ }
+
+-__weak int arch_evlist__add_default_attrs(struct evlist *evlist __maybe_unused)
++__weak int arch_evlist__add_default_attrs(struct evlist *evlist,
++ struct perf_event_attr *attrs,
++ size_t nr_attrs)
+ {
+- return 0;
++ if (!nr_attrs)
++ return 0;
++
++ return __evlist__add_default_attrs(evlist, attrs, nr_attrs);
+ }
+
+ struct evsel *evlist__find_tracepoint_by_id(struct evlist *evlist, int id)
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index 1bde9ccf4e7da..129095c0fe6d3 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -107,10 +107,13 @@ static inline int evlist__add_default(struct evlist *evlist)
+ int __evlist__add_default_attrs(struct evlist *evlist,
+ struct perf_event_attr *attrs, size_t nr_attrs);
+
++int arch_evlist__add_default_attrs(struct evlist *evlist,
++ struct perf_event_attr *attrs,
++ size_t nr_attrs);
++
+ #define evlist__add_default_attrs(evlist, array) \
+- __evlist__add_default_attrs(evlist, array, ARRAY_SIZE(array))
++ arch_evlist__add_default_attrs(evlist, array, ARRAY_SIZE(array))
+
+-int arch_evlist__add_default_attrs(struct evlist *evlist);
+ struct evsel *arch_evlist__leader(struct list_head *list);
+
+ int evlist__add_dummy(struct evlist *evlist);
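
The arch hook relies on the linker's weak-symbol rules: util/evlist.c provides a __weak default, and an architecture overrides it simply by defining a strong symbol of the same name (as arch/x86/util/evlist.c does above). A minimal sketch of the pattern, assuming GCC or Clang:

#include <stdio.h>

/* Weak default: used only when no strong definition is linked in,
 * like the __weak stub in tools/perf/util/evlist.c. */
__attribute__((weak)) int arch_add_default_attrs(void)
{
	return 0;	/* generic no-op */
}

int main(void)
{
	/* Without an arch override in another object file, the stub runs. */
	printf("arch hook returned %d\n", arch_add_default_attrs());
	return 0;
}
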
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-20 12:00 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-20 12:00 UTC (permalink / raw
To: gentoo-commits
commit: a15baa1cdaa74379d95243035410d3a16ea473ff
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 20 12:00:09 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 20 12:00:09 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a15baa1c
Linux patch 5.19.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.19.10.patch | 1743 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1747 insertions(+)
diff --git a/0000_README b/0000_README
index 341e7dca..e710df97 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.19.9.patch
From: http://www.kernel.org
Desc: Linux 5.19.9
+Patch: 1009_linux-5.19.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.19.10.patch b/1009_linux-5.19.10.patch
new file mode 100644
index 00000000..ded561b4
--- /dev/null
+++ b/1009_linux-5.19.10.patch
@@ -0,0 +1,1743 @@
+diff --git a/Documentation/devicetree/bindings/iio/gyroscope/bosch,bmg160.yaml b/Documentation/devicetree/bindings/iio/gyroscope/bosch,bmg160.yaml
+index b6bbc312a7cf7..1414ba9977c16 100644
+--- a/Documentation/devicetree/bindings/iio/gyroscope/bosch,bmg160.yaml
++++ b/Documentation/devicetree/bindings/iio/gyroscope/bosch,bmg160.yaml
+@@ -24,8 +24,10 @@ properties:
+
+ interrupts:
+ minItems: 1
++ maxItems: 2
+ description:
+ Should be configured with type IRQ_TYPE_EDGE_RISING.
++ If two interrupts are provided, expected order is INT1 and INT2.
+
+ required:
+ - compatible
+diff --git a/Documentation/input/joydev/joystick.rst b/Documentation/input/joydev/joystick.rst
+index f615906a0821b..6d721396717a2 100644
+--- a/Documentation/input/joydev/joystick.rst
++++ b/Documentation/input/joydev/joystick.rst
+@@ -517,6 +517,7 @@ All I-Force devices are supported by the iforce module. This includes:
+ * AVB Mag Turbo Force
+ * AVB Top Shot Pegasus
+ * AVB Top Shot Force Feedback Racing Wheel
++* Boeder Force Feedback Wheel
+ * Logitech WingMan Force
+ * Logitech WingMan Force Wheel
+ * Guillemot Race Leader Force Feedback
+diff --git a/Makefile b/Makefile
+index 1f27c4bd09e67..33a9b6b547c47 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 62b5b07fa4e1c..ca64bf5f5b038 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -36,6 +36,7 @@ config LOONGARCH
+ select ARCH_INLINE_SPIN_UNLOCK_BH if !PREEMPTION
+ select ARCH_INLINE_SPIN_UNLOCK_IRQ if !PREEMPTION
+ select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION
++ select ARCH_KEEP_MEMBLOCK
+ select ARCH_MIGHT_HAVE_PC_PARPORT
+ select ARCH_MIGHT_HAVE_PC_SERIO
+ select ARCH_SPARSEMEM_ENABLE
+diff --git a/arch/loongarch/include/asm/acpi.h b/arch/loongarch/include/asm/acpi.h
+index 62044cd5b7bc5..825c2519b9d1f 100644
+--- a/arch/loongarch/include/asm/acpi.h
++++ b/arch/loongarch/include/asm/acpi.h
+@@ -15,7 +15,7 @@ extern int acpi_pci_disabled;
+ extern int acpi_noirq;
+
+ #define acpi_os_ioremap acpi_os_ioremap
+-void __init __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
++void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
+
+ static inline void disable_acpi(void)
+ {
+diff --git a/arch/loongarch/kernel/acpi.c b/arch/loongarch/kernel/acpi.c
+index bb729ee8a2370..796a24055a942 100644
+--- a/arch/loongarch/kernel/acpi.c
++++ b/arch/loongarch/kernel/acpi.c
+@@ -113,7 +113,7 @@ void __init __acpi_unmap_table(void __iomem *map, unsigned long size)
+ early_memunmap(map, size);
+ }
+
+-void __init __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
++void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
+ {
+ if (!memblock_is_memory(phys))
+ return ioremap(phys, size);
+diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
+index 7094a68c9b832..3c3fbff0b8f86 100644
+--- a/arch/loongarch/mm/init.c
++++ b/arch/loongarch/mm/init.c
+@@ -131,18 +131,6 @@ int arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
+ return ret;
+ }
+
+-#ifdef CONFIG_NUMA
+-int memory_add_physaddr_to_nid(u64 start)
+-{
+- int nid;
+-
+- nid = pa_to_nid(start);
+- return nid;
+-}
+-EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+-#endif
+-
+-#ifdef CONFIG_MEMORY_HOTREMOVE
+ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+ {
+ unsigned long start_pfn = start >> PAGE_SHIFT;
+@@ -154,6 +142,16 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+ page += vmem_altmap_offset(altmap);
+ __remove_pages(start_pfn, nr_pages, altmap);
+ }
++
++#ifdef CONFIG_NUMA
++int memory_add_physaddr_to_nid(u64 start)
++{
++ int nid;
++
++ nid = pa_to_nid(start);
++ return nid;
++}
++EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+ #endif
+ #endif
+
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 356226c7ebbdc..aa1ba803659cd 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5907,47 +5907,18 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+ const struct kvm_memory_slot *memslot,
+ int start_level)
+ {
+- bool flush = false;
+-
+ if (kvm_memslots_have_rmaps(kvm)) {
+ write_lock(&kvm->mmu_lock);
+- flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+- start_level, KVM_MAX_HUGEPAGE_LEVEL,
+- false);
++ slot_handle_level(kvm, memslot, slot_rmap_write_protect,
++ start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
+ write_unlock(&kvm->mmu_lock);
+ }
+
+ if (is_tdp_mmu_enabled(kvm)) {
+ read_lock(&kvm->mmu_lock);
+- flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
++ kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
+ read_unlock(&kvm->mmu_lock);
+ }
+-
+- /*
+- * Flush TLBs if any SPTEs had to be write-protected to ensure that
+- * guest writes are reflected in the dirty bitmap before the memslot
+- * update completes, i.e. before enabling dirty logging is visible to
+- * userspace.
+- *
+- * Perform the TLB flush outside the mmu_lock to reduce the amount of
+- * time the lock is held. However, this does mean that another CPU can
+- * now grab mmu_lock and encounter a write-protected SPTE while CPUs
+- * still have a writable mapping for the associated GFN in their TLB.
+- *
+- * This is safe but requires KVM to be careful when making decisions
+- * based on the write-protection status of an SPTE. Specifically, KVM
+- * also write-protects SPTEs to monitor changes to guest page tables
+- * during shadow paging, and must guarantee no CPUs can write to those
+- * page before the lock is dropped. As mentioned in the previous
+- * paragraph, a write-protected SPTE is no guarantee that CPU cannot
+- * perform writes. So to determine if a TLB flush is truly required, KVM
+- * will clear a separate software-only bit (MMU-writable) and skip the
+- * flush if-and-only-if this bit was already clear.
+- *
+- * See is_writable_pte() for more details.
+- */
+- if (flush)
+- kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+ }
+
+ /* Must be called with the mmu_lock held in write-mode. */
+@@ -6070,32 +6041,30 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
+ const struct kvm_memory_slot *memslot)
+ {
+- bool flush = false;
+-
+ if (kvm_memslots_have_rmaps(kvm)) {
+ write_lock(&kvm->mmu_lock);
+ /*
+ * Clear dirty bits only on 4k SPTEs since the legacy MMU only
+ * support dirty logging at a 4k granularity.
+ */
+- flush = slot_handle_level_4k(kvm, memslot, __rmap_clear_dirty, false);
++ slot_handle_level_4k(kvm, memslot, __rmap_clear_dirty, false);
+ write_unlock(&kvm->mmu_lock);
+ }
+
+ if (is_tdp_mmu_enabled(kvm)) {
+ read_lock(&kvm->mmu_lock);
+- flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot);
++ kvm_tdp_mmu_clear_dirty_slot(kvm, memslot);
+ read_unlock(&kvm->mmu_lock);
+ }
+
+ /*
++ * The caller will flush the TLBs after this function returns.
++ *
+ * It's also safe to flush TLBs out of mmu lock here as currently this
+ * function is only used for dirty logging, in which case flushing TLB
+ * out of mmu lock also guarantees no dirty pages will be lost in
+ * dirty_bitmap.
+ */
+- if (flush)
+- kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+ }
+
+ void kvm_mmu_zap_all(struct kvm *kvm)
+diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
+index f80dbb628df57..e09bdcf1e47c5 100644
+--- a/arch/x86/kvm/mmu/spte.h
++++ b/arch/x86/kvm/mmu/spte.h
+@@ -326,7 +326,7 @@ static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
+ }
+
+ /*
+- * An shadow-present leaf SPTE may be non-writable for 3 possible reasons:
++ * A shadow-present leaf SPTE may be non-writable for 4 possible reasons:
+ *
+ * 1. To intercept writes for dirty logging. KVM write-protects huge pages
+ * so that they can be split down into the dirty logging
+@@ -344,8 +344,13 @@ static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
+ * read-only memslot or guest memory backed by a read-only VMA. Writes to
+ * such pages are disallowed entirely.
+ *
+- * To keep track of why a given SPTE is write-protected, KVM uses 2
+- * software-only bits in the SPTE:
++ * 4. To emulate the Accessed bit for SPTEs without A/D bits. Note, in this
++ * case, the SPTE is access-protected, not just write-protected!
++ *
++ * For cases #1 and #4, KVM can safely make such SPTEs writable without taking
++ * mmu_lock as capturing the Accessed/Dirty state doesn't require taking it.
++ * To differentiate #1 and #4 from #2 and #3, KVM uses two software-only bits
++ * in the SPTE:
+ *
+ * shadow_mmu_writable_mask, aka MMU-writable -
+ * Cleared on SPTEs that KVM is currently write-protecting for shadow paging
+@@ -374,7 +379,8 @@ static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
+ * shadow page tables between vCPUs. Write-protecting an SPTE for dirty logging
+ * (which does not clear the MMU-writable bit), does not flush TLBs before
+ * dropping the lock, as it only needs to synchronize guest writes with the
+- * dirty bitmap.
++ * dirty bitmap. Similarly, making the SPTE inaccessible (and non-writable) for
++ * access-tracking via the clear_young() MMU notifier also does not flush TLBs.
+ *
+ * So, there is the problem: clearing the MMU-writable bit can encounter a
+ * write-protected SPTE while CPUs still have writable mappings for that SPTE
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 55de0d1981e52..5b36866528568 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -12265,6 +12265,50 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
+ } else {
+ kvm_mmu_slot_remove_write_access(kvm, new, PG_LEVEL_4K);
+ }
++
++ /*
++ * Unconditionally flush the TLBs after enabling dirty logging.
++ * A flush is almost always going to be necessary (see below),
++ * and unconditionally flushing allows the helpers to omit
++ * the subtly complex checks when removing write access.
++ *
++ * Do the flush outside of mmu_lock to reduce the amount of
++ * time mmu_lock is held. Flushing after dropping mmu_lock is
++ * safe as KVM only needs to guarantee the slot is fully
++ * write-protected before returning to userspace, i.e. before
++ * userspace can consume the dirty status.
++ *
++ * Flushing outside of mmu_lock requires KVM to be careful when
++ * making decisions based on writable status of an SPTE, e.g. a
++ * !writable SPTE doesn't guarantee a CPU can't perform writes.
++ *
++ * Specifically, KVM also write-protects guest page tables to
++ * monitor changes when using shadow paging, and must guarantee
++ * no CPUs can write to those page before mmu_lock is dropped.
++ * Because CPUs may have stale TLB entries at this point, a
++ * !writable SPTE doesn't guarantee CPUs can't perform writes.
++ *
++ * KVM also allows making SPTES writable outside of mmu_lock,
++ * e.g. to allow dirty logging without taking mmu_lock.
++ *
++ * To handle these scenarios, KVM uses a separate software-only
++ * bit (MMU-writable) to track if a SPTE is !writable due to
++ * a guest page table being write-protected (KVM clears the
++ * MMU-writable flag when write-protecting for shadow paging).
++ *
++ * The use of MMU-writable is also the primary motivation for
++ * the unconditional flush. Because KVM must guarantee that a
++ * CPU doesn't contain stale, writable TLB entries for a
++ * !MMU-writable SPTE, KVM must flush if it encounters any
++ * MMU-writable SPTE regardless of whether the actual hardware
++ * writable bit was set. I.e. KVM is almost guaranteed to need
++ * to flush, while unconditionally flushing allows the "remove
++ * write access" helpers to ignore MMU-writable entirely.
++ *
++ * See is_writable_pte() for more details (the case involving
++ * access-tracked SPTEs is particularly relevant).
++ */
++ kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+ }
+ }
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index c2d4947844250..510cdec375c4d 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -416,6 +416,16 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+ {
+ int i;
+
++#ifdef CONFIG_X86
++ /*
++ * IRQ override isn't needed on modern AMD Zen systems and
++ * this override breaks active low IRQs on AMD Ryzen 6000 and
++ * newer systems. Skip it.
++ */
++ if (boot_cpu_has(X86_FEATURE_ZEN))
++ return false;
++#endif
++
+ for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
+ const struct irq_override_cmp *entry = &skip_override_table[i];
+
+diff --git a/drivers/gpio/gpio-104-dio-48e.c b/drivers/gpio/gpio-104-dio-48e.c
+index f118ad9bcd33d..0e95351d47d49 100644
+--- a/drivers/gpio/gpio-104-dio-48e.c
++++ b/drivers/gpio/gpio-104-dio-48e.c
+@@ -271,6 +271,7 @@ static void dio48e_irq_mask(struct irq_data *data)
+ dio48egpio->irq_mask &= ~BIT(0);
+ else
+ dio48egpio->irq_mask &= ~BIT(1);
++ gpiochip_disable_irq(chip, offset);
+
+ if (!dio48egpio->irq_mask)
+ /* disable interrupts */
+@@ -298,6 +299,7 @@ static void dio48e_irq_unmask(struct irq_data *data)
+ iowrite8(0x00, dio48egpio->base + 0xB);
+ }
+
++ gpiochip_enable_irq(chip, offset);
+ if (offset == 19)
+ dio48egpio->irq_mask |= BIT(0);
+ else
+@@ -320,12 +322,14 @@ static int dio48e_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ return 0;
+ }
+
+-static struct irq_chip dio48e_irqchip = {
++static const struct irq_chip dio48e_irqchip = {
+ .name = "104-dio-48e",
+ .irq_ack = dio48e_irq_ack,
+ .irq_mask = dio48e_irq_mask,
+ .irq_unmask = dio48e_irq_unmask,
+- .irq_set_type = dio48e_irq_set_type
++ .irq_set_type = dio48e_irq_set_type,
++ .flags = IRQCHIP_IMMUTABLE,
++ GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+
+ static irqreturn_t dio48e_irq_handler(int irq, void *dev_id)
+@@ -414,7 +418,7 @@ static int dio48e_probe(struct device *dev, unsigned int id)
+ dio48egpio->chip.set_multiple = dio48e_gpio_set_multiple;
+
+ girq = &dio48egpio->chip.irq;
+- girq->chip = &dio48e_irqchip;
++ gpio_irq_chip_set_chip(girq, &dio48e_irqchip);
+ /* This will let us handle the parent IRQ in the driver */
+ girq->parent_handler = NULL;
+ girq->num_parents = 0;
+diff --git a/drivers/gpio/gpio-104-idio-16.c b/drivers/gpio/gpio-104-idio-16.c
+index 45f7ad8573e19..a8b7c8eafac5a 100644
+--- a/drivers/gpio/gpio-104-idio-16.c
++++ b/drivers/gpio/gpio-104-idio-16.c
+@@ -150,10 +150,11 @@ static void idio_16_irq_mask(struct irq_data *data)
+ {
+ struct gpio_chip *chip = irq_data_get_irq_chip_data(data);
+ struct idio_16_gpio *const idio16gpio = gpiochip_get_data(chip);
+- const unsigned long mask = BIT(irqd_to_hwirq(data));
++ const unsigned long offset = irqd_to_hwirq(data);
+ unsigned long flags;
+
+- idio16gpio->irq_mask &= ~mask;
++ idio16gpio->irq_mask &= ~BIT(offset);
++ gpiochip_disable_irq(chip, offset);
+
+ if (!idio16gpio->irq_mask) {
+ raw_spin_lock_irqsave(&idio16gpio->lock, flags);
+@@ -168,11 +169,12 @@ static void idio_16_irq_unmask(struct irq_data *data)
+ {
+ struct gpio_chip *chip = irq_data_get_irq_chip_data(data);
+ struct idio_16_gpio *const idio16gpio = gpiochip_get_data(chip);
+- const unsigned long mask = BIT(irqd_to_hwirq(data));
++ const unsigned long offset = irqd_to_hwirq(data);
+ const unsigned long prev_irq_mask = idio16gpio->irq_mask;
+ unsigned long flags;
+
+- idio16gpio->irq_mask |= mask;
++ gpiochip_enable_irq(chip, offset);
++ idio16gpio->irq_mask |= BIT(offset);
+
+ if (!prev_irq_mask) {
+ raw_spin_lock_irqsave(&idio16gpio->lock, flags);
+@@ -193,12 +195,14 @@ static int idio_16_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ return 0;
+ }
+
+-static struct irq_chip idio_16_irqchip = {
++static const struct irq_chip idio_16_irqchip = {
+ .name = "104-idio-16",
+ .irq_ack = idio_16_irq_ack,
+ .irq_mask = idio_16_irq_mask,
+ .irq_unmask = idio_16_irq_unmask,
+- .irq_set_type = idio_16_irq_set_type
++ .irq_set_type = idio_16_irq_set_type,
++ .flags = IRQCHIP_IMMUTABLE,
++ GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+
+ static irqreturn_t idio_16_irq_handler(int irq, void *dev_id)
+@@ -275,7 +279,7 @@ static int idio_16_probe(struct device *dev, unsigned int id)
+ idio16gpio->out_state = 0xFFFF;
+
+ girq = &idio16gpio->chip.irq;
+- girq->chip = &idio_16_irqchip;
++ gpio_irq_chip_set_chip(girq, &idio_16_irqchip);
+ /* This will let us handle the parent IRQ in the driver */
+ girq->parent_handler = NULL;
+ girq->num_parents = 0;
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index 8943cea927642..a2e505a7545cd 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -373,6 +373,13 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
+ }
+ }
+
++static void gpio_mockup_debugfs_cleanup(void *data)
++{
++ struct gpio_mockup_chip *chip = data;
++
++ debugfs_remove_recursive(chip->dbg_dir);
++}
++
+ static void gpio_mockup_dispose_mappings(void *data)
+ {
+ struct gpio_mockup_chip *chip = data;
+@@ -455,7 +462,7 @@ static int gpio_mockup_probe(struct platform_device *pdev)
+
+ gpio_mockup_debugfs_setup(dev, chip);
+
+- return 0;
++ return devm_add_action_or_reset(dev, gpio_mockup_debugfs_cleanup, chip);
+ }
+
+ static const struct of_device_id gpio_mockup_of_match[] = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+index ecada5eadfe35..e325150879df7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+@@ -66,10 +66,15 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
+ return true;
+ case CHIP_SIENNA_CICHLID:
+ if (strnstr(atom_ctx->vbios_version, "D603",
++ sizeof(atom_ctx->vbios_version))) {
++ if (strnstr(atom_ctx->vbios_version, "D603GLXE",
+ sizeof(atom_ctx->vbios_version)))
+- return true;
+- else
++ return false;
++ else
++ return true;
++ } else {
+ return false;
++ }
+ default:
+ return false;
+ }
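
Truth-tabled, the nested strnstr() checks above accept any vbios_version containing "D603" except the "D603GLXE" variant, so the block collapses to one boolean expression. A hedged simplification (strstr stands in for the kernel's bounded strnstr), not the committed code:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool fru_supported(const char *vbios_version)
{
	/* Accept "D603" boards, excluding the "D603GLXE" variant. */
	return strstr(vbios_version, "D603") &&
	       !strstr(vbios_version, "D603GLXE");
}

int main(void)
{
	printf("%d %d %d\n",
	       fru_supported("D603X"),		/* 1 */
	       fru_supported("D603GLXE"),	/* 0 */
	       fru_supported("D702"));		/* 0 */
	return 0;
}
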
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 2b00f8fe15a89..b19bf0c3f3737 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -2372,7 +2372,7 @@ static int psp_load_smu_fw(struct psp_context *psp)
+ static bool fw_load_skip_check(struct psp_context *psp,
+ struct amdgpu_firmware_info *ucode)
+ {
+- if (!ucode->fw)
++ if (!ucode->fw || !ucode->ucode_size)
+ return true;
+
+ if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 9cde13b07dd26..d9a5209aa8433 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -382,11 +382,27 @@ static int smu_v13_0_7_append_powerplay_table(struct smu_context *smu)
+ return 0;
+ }
+
++static int smu_v13_0_7_get_pptable_from_pmfw(struct smu_context *smu,
++ void **table,
++ uint32_t *size)
++{
++ struct smu_table_context *smu_table = &smu->smu_table;
++ void *combo_pptable = smu_table->combo_pptable;
++ int ret = 0;
++
++ ret = smu_cmn_get_combo_pptable(smu);
++ if (ret)
++ return ret;
++
++ *table = combo_pptable;
++ *size = sizeof(struct smu_13_0_7_powerplay_table);
++
++ return 0;
++}
+
+ static int smu_v13_0_7_setup_pptable(struct smu_context *smu)
+ {
+ struct smu_table_context *smu_table = &smu->smu_table;
+- void *combo_pptable = smu_table->combo_pptable;
+ struct amdgpu_device *adev = smu->adev;
+ int ret = 0;
+
+@@ -395,18 +411,11 @@ static int smu_v13_0_7_setup_pptable(struct smu_context *smu)
+ * be used directly by driver. To get the raw pptable, we need to
+ * rely on the combo pptable (and its relevant SMU message).
+ */
+- if (adev->scpm_enabled) {
+- ret = smu_cmn_get_combo_pptable(smu);
+- if (ret)
+- return ret;
+-
+- smu->smu_table.power_play_table = combo_pptable;
+- smu->smu_table.power_play_table_size = sizeof(struct smu_13_0_7_powerplay_table);
+- } else {
+- ret = smu_v13_0_setup_pptable(smu);
+- if (ret)
+- return ret;
+- }
++ ret = smu_v13_0_7_get_pptable_from_pmfw(smu,
++ &smu_table->power_play_table,
++ &smu_table->power_play_table_size);
++ if (ret)
++ return ret;
+
+ ret = smu_v13_0_7_store_powerplay_table(smu);
+ if (ret)
+diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
+index a92ffde53f0b3..db2f847c8535f 100644
+--- a/drivers/gpu/drm/msm/msm_rd.c
++++ b/drivers/gpu/drm/msm/msm_rd.c
+@@ -196,6 +196,9 @@ static int rd_open(struct inode *inode, struct file *file)
+ file->private_data = rd;
+ rd->open = true;
+
++ /* Reset fifo to clear any previously unread data: */
++ rd->fifo.head = rd->fifo.tail = 0;
++
+ /* the parsing tools need to know gpu-id to know which
+ * register database to load.
+ *
+diff --git a/drivers/hid/intel-ish-hid/ishtp-hid.h b/drivers/hid/intel-ish-hid/ishtp-hid.h
+index 6a5cc11aefd89..35dddc5015b37 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-hid.h
++++ b/drivers/hid/intel-ish-hid/ishtp-hid.h
+@@ -105,7 +105,7 @@ struct report_list {
+ * @multi_packet_cnt: Count of fragmented packet count
+ *
+ * This structure is used to store completion flags and per client data like
+- * like report description, number of HID devices etc.
++ * report description, number of HID devices etc.
+ */
+ struct ishtp_cl_data {
+ /* completion flags */
+diff --git a/drivers/hid/intel-ish-hid/ishtp/client.c b/drivers/hid/intel-ish-hid/ishtp/client.c
+index 405e0d5212cc8..df0a825694f52 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/client.c
++++ b/drivers/hid/intel-ish-hid/ishtp/client.c
+@@ -626,13 +626,14 @@ static void ishtp_cl_read_complete(struct ishtp_cl_rb *rb)
+ }
+
+ /**
+- * ipc_tx_callback() - IPC tx callback function
++ * ipc_tx_send() - IPC tx send function
+ * @prm: Pointer to client device instance
+ *
+- * Send message over IPC either first time or on callback on previous message
+- * completion
++ * Send a message over IPC. The message is split into fragments
++ * if its size exceeds the IPC FIFO size, and all fragments are
++ * sent one by one.
+ */
+-static void ipc_tx_callback(void *prm)
++static void ipc_tx_send(void *prm)
+ {
+ struct ishtp_cl *cl = prm;
+ struct ishtp_cl_tx_ring *cl_msg;
+@@ -677,32 +678,41 @@ static void ipc_tx_callback(void *prm)
+ list);
+ rem = cl_msg->send_buf.size - cl->tx_offs;
+
+- ishtp_hdr.host_addr = cl->host_client_id;
+- ishtp_hdr.fw_addr = cl->fw_client_id;
+- ishtp_hdr.reserved = 0;
+- pmsg = cl_msg->send_buf.data + cl->tx_offs;
++ while (rem > 0) {
++ ishtp_hdr.host_addr = cl->host_client_id;
++ ishtp_hdr.fw_addr = cl->fw_client_id;
++ ishtp_hdr.reserved = 0;
++ pmsg = cl_msg->send_buf.data + cl->tx_offs;
++
++ if (rem <= dev->mtu) {
++ /* Last fragment or only one packet */
++ ishtp_hdr.length = rem;
++ ishtp_hdr.msg_complete = 1;
++ /* Submit to IPC queue with no callback */
++ ishtp_write_message(dev, &ishtp_hdr, pmsg);
++ cl->tx_offs = 0;
++ cl->sending = 0;
+
+- if (rem <= dev->mtu) {
+- ishtp_hdr.length = rem;
+- ishtp_hdr.msg_complete = 1;
+- cl->sending = 0;
+- list_del_init(&cl_msg->list); /* Must be before write */
+- spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+- /* Submit to IPC queue with no callback */
+- ishtp_write_message(dev, &ishtp_hdr, pmsg);
+- spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
+- list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
+- ++cl->tx_ring_free_size;
+- spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
+- tx_free_flags);
+- } else {
+- /* Send IPC fragment */
+- spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+- cl->tx_offs += dev->mtu;
+- ishtp_hdr.length = dev->mtu;
+- ishtp_hdr.msg_complete = 0;
+- ishtp_send_msg(dev, &ishtp_hdr, pmsg, ipc_tx_callback, cl);
++ break;
++ } else {
++ /* Send IPC fragment */
++ ishtp_hdr.length = dev->mtu;
++ ishtp_hdr.msg_complete = 0;
++ /* All fragments are submitted to the IPC queue with no callback */
++ ishtp_write_message(dev, &ishtp_hdr, pmsg);
++ cl->tx_offs += dev->mtu;
++ rem = cl_msg->send_buf.size - cl->tx_offs;
++ }
+ }
++
++ list_del_init(&cl_msg->list);
++ spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
++
++ spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
++ list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
++ ++cl->tx_ring_free_size;
++ spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
++ tx_free_flags);
+ }
+
+ /**
+@@ -720,7 +730,7 @@ static void ishtp_cl_send_msg_ipc(struct ishtp_device *dev,
+ return;
+
+ cl->tx_offs = 0;
+- ipc_tx_callback(cl);
++ ipc_tx_send(cl);
+ ++cl->send_msg_cnt_ipc;
+ }
+
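
The rewrite replaces the callback-chained transmit with a synchronous loop that walks the buffer in MTU-sized slices, marking only the final slice msg_complete. The slicing itself reduces to the following illustrative sketch:

#include <stddef.h>
#include <stdio.h>

/* Illustration of the fragmentation loop in ipc_tx_send(): emit
 * MTU-sized slices and flag only the last one as complete. */
static void send_fragments(size_t size, size_t mtu)
{
	size_t off = 0;

	while (off < size) {
		size_t rem = size - off;
		size_t len = rem <= mtu ? rem : mtu;

		printf("offset %zu len %zu complete=%d\n",
		       off, len, rem <= mtu);
		off += len;
	}
}

int main(void)
{
	send_fragments(100, 32);	/* 32 + 32 + 32 + 4 */
	return 0;
}
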
+diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
+index d003ad864ee44..a6e5d350a94ce 100644
+--- a/drivers/infiniband/hw/irdma/uk.c
++++ b/drivers/infiniband/hw/irdma/uk.c
+@@ -497,7 +497,8 @@ int irdma_uk_send(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info,
+ FIELD_PREP(IRDMAQPSQ_IMMDATA, info->imm_data));
+ i = 0;
+ } else {
+- qp->wqe_ops.iw_set_fragment(wqe, 0, op_info->sg_list,
++ qp->wqe_ops.iw_set_fragment(wqe, 0,
++ frag_cnt ? op_info->sg_list : NULL,
+ qp->swqe_polarity);
+ i = 1;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 08371a80fdc26..be189e0525de6 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -523,6 +523,10 @@ repoll:
+ "Requestor" : "Responder", cq->mcq.cqn);
+ mlx5_ib_dbg(dev, "syndrome 0x%x, vendor syndrome 0x%x\n",
+ err_cqe->syndrome, err_cqe->vendor_err_synd);
++ if (wc->status != IB_WC_WR_FLUSH_ERR &&
++ (*cur_qp)->type == MLX5_IB_QPT_REG_UMR)
++ dev->umrc.state = MLX5_UMR_STATE_RECOVER;
++
+ if (opcode == MLX5_CQE_REQ_ERR) {
+ wq = &(*cur_qp)->sq;
+ wqe_ctr = be16_to_cpu(cqe64->wqe_counter);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 63c89a72cc352..bb13164124fdb 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4336,7 +4336,7 @@ static int mlx5r_probe(struct auxiliary_device *adev,
+ dev->mdev = mdev;
+ dev->num_ports = num_ports;
+
+- if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_init_enabled(mdev))
++ if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_get_roce_state(mdev))
+ profile = &raw_eth_profile;
+ else
+ profile = &pf_profile;
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 998b67509a533..c2cca032a6ed4 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -717,13 +717,24 @@ struct mlx5_ib_umr_context {
+ struct completion done;
+ };
+
++enum {
++ MLX5_UMR_STATE_UNINIT,
++ MLX5_UMR_STATE_ACTIVE,
++ MLX5_UMR_STATE_RECOVER,
++ MLX5_UMR_STATE_ERR,
++};
++
+ struct umr_common {
+ struct ib_pd *pd;
+ struct ib_cq *cq;
+ struct ib_qp *qp;
+- /* control access to UMR QP
++ /* Protects from UMR QP overflow
+ */
+ struct semaphore sem;
++ /* Protects from using UMR while the UMR is not active
++ */
++ struct mutex lock;
++ unsigned int state;
+ };
+
+ struct mlx5_cache_ent {
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 3a48364c09181..d5105b5c9979b 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -176,6 +176,8 @@ int mlx5r_umr_resource_init(struct mlx5_ib_dev *dev)
+ dev->umrc.pd = pd;
+
+ sema_init(&dev->umrc.sem, MAX_UMR_WR);
++ mutex_init(&dev->umrc.lock);
++ dev->umrc.state = MLX5_UMR_STATE_ACTIVE;
+
+ return 0;
+
+@@ -190,11 +192,38 @@ destroy_pd:
+
+ void mlx5r_umr_resource_cleanup(struct mlx5_ib_dev *dev)
+ {
++ if (dev->umrc.state == MLX5_UMR_STATE_UNINIT)
++ return;
+ ib_destroy_qp(dev->umrc.qp);
+ ib_free_cq(dev->umrc.cq);
+ ib_dealloc_pd(dev->umrc.pd);
+ }
+
++static int mlx5r_umr_recover(struct mlx5_ib_dev *dev)
++{
++ struct umr_common *umrc = &dev->umrc;
++ struct ib_qp_attr attr;
++ int err;
++
++ attr.qp_state = IB_QPS_RESET;
++ err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
++ if (err) {
++ mlx5_ib_dbg(dev, "Couldn't modify UMR QP\n");
++ goto err;
++ }
++
++ err = mlx5r_umr_qp_rst2rts(dev, umrc->qp);
++ if (err)
++ goto err;
++
++ umrc->state = MLX5_UMR_STATE_ACTIVE;
++ return 0;
++
++err:
++ umrc->state = MLX5_UMR_STATE_ERR;
++ return err;
++}
++
+ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+ struct mlx5r_umr_wqe *wqe, bool with_data)
+ {
+@@ -231,7 +260,7 @@ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+
+ id.ib_cqe = cqe;
+ mlx5r_finish_wqe(qp, ctrl, seg, size, cur_edge, idx, id.wr_id, 0,
+- MLX5_FENCE_MODE_NONE, MLX5_OPCODE_UMR);
++ MLX5_FENCE_MODE_INITIATOR_SMALL, MLX5_OPCODE_UMR);
+
+ mlx5r_ring_db(qp, 1, ctrl);
+
+@@ -270,17 +299,49 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
+ mlx5r_umr_init_context(&umr_context);
+
+ down(&umrc->sem);
+- err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context.cqe, wqe,
+- with_data);
+- if (err)
+- mlx5_ib_warn(dev, "UMR post send failed, err %d\n", err);
+- else {
+- wait_for_completion(&umr_context.done);
+- if (umr_context.status != IB_WC_SUCCESS) {
+- mlx5_ib_warn(dev, "reg umr failed (%u)\n",
+- umr_context.status);
++ while (true) {
++ mutex_lock(&umrc->lock);
++ if (umrc->state == MLX5_UMR_STATE_ERR) {
++ mutex_unlock(&umrc->lock);
+ err = -EFAULT;
++ break;
++ }
++
++ if (umrc->state == MLX5_UMR_STATE_RECOVER) {
++ mutex_unlock(&umrc->lock);
++ usleep_range(3000, 5000);
++ continue;
++ }
++
++ err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context.cqe, wqe,
++ with_data);
++ mutex_unlock(&umrc->lock);
++ if (err) {
++ mlx5_ib_warn(dev, "UMR post send failed, err %d\n",
++ err);
++ break;
+ }
++
++ wait_for_completion(&umr_context.done);
++
++ if (umr_context.status == IB_WC_SUCCESS)
++ break;
++
++ if (umr_context.status == IB_WC_WR_FLUSH_ERR)
++ continue;
++
++ WARN_ON_ONCE(1);
++ mlx5_ib_warn(dev,
++ "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs\n",
++ umr_context.status);
++ mutex_lock(&umrc->lock);
++ err = mlx5r_umr_recover(dev);
++ mutex_unlock(&umrc->lock);
++ if (err)
++ mlx5_ib_warn(dev, "couldn't recover UMR, err %d\n",
++ err);
++ err = -EFAULT;
++ break;
+ }
+ up(&umrc->sem);
+ return err;
+diff --git a/drivers/input/joystick/iforce/iforce-main.c b/drivers/input/joystick/iforce/iforce-main.c
+index b2a68bc9f0b4d..b86de1312512b 100644
+--- a/drivers/input/joystick/iforce/iforce-main.c
++++ b/drivers/input/joystick/iforce/iforce-main.c
+@@ -50,6 +50,7 @@ static struct iforce_device iforce_device[] = {
+ { 0x046d, 0xc291, "Logitech WingMan Formula Force", btn_wheel, abs_wheel, ff_iforce },
+ { 0x05ef, 0x020a, "AVB Top Shot Pegasus", btn_joystick_avb, abs_avb_pegasus, ff_iforce },
+ { 0x05ef, 0x8884, "AVB Mag Turbo Force", btn_wheel, abs_wheel, ff_iforce },
++ { 0x05ef, 0x8886, "Boeder Force Feedback Wheel", btn_wheel, abs_wheel, ff_iforce },
+ { 0x05ef, 0x8888, "AVB Top Shot Force Feedback Racing Wheel", btn_wheel, abs_wheel, ff_iforce }, //?
+ { 0x061c, 0xc0a4, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce }, //?
+ { 0x061c, 0xc084, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce },
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index aa45a9fee6a01..3020ddc1ca48b 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -95,6 +95,7 @@ static const struct goodix_chip_data gt9x_chip_data = {
+
+ static const struct goodix_chip_id goodix_chip_ids[] = {
+ { .id = "1151", .data = >1x_chip_data },
++ { .id = "1158", .data = >1x_chip_data },
+ { .id = "5663", .data = >1x_chip_data },
+ { .id = "5688", .data = >1x_chip_data },
+ { .id = "917S", .data = >1x_chip_data },
+@@ -1514,6 +1515,7 @@ MODULE_DEVICE_TABLE(acpi, goodix_acpi_match);
+ #ifdef CONFIG_OF
+ static const struct of_device_id goodix_of_match[] = {
+ { .compatible = "goodix,gt1151" },
++ { .compatible = "goodix,gt1158" },
+ { .compatible = "goodix,gt5663" },
+ { .compatible = "goodix,gt5688" },
+ { .compatible = "goodix,gt911" },
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 40ac3a78d90ef..c0464959cbcdb 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -168,38 +168,6 @@ static phys_addr_t root_entry_uctp(struct root_entry *re)
+ return re->hi & VTD_PAGE_MASK;
+ }
+
+-static inline void context_clear_pasid_enable(struct context_entry *context)
+-{
+- context->lo &= ~(1ULL << 11);
+-}
+-
+-static inline bool context_pasid_enabled(struct context_entry *context)
+-{
+- return !!(context->lo & (1ULL << 11));
+-}
+-
+-static inline void context_set_copied(struct context_entry *context)
+-{
+- context->hi |= (1ull << 3);
+-}
+-
+-static inline bool context_copied(struct context_entry *context)
+-{
+- return !!(context->hi & (1ULL << 3));
+-}
+-
+-static inline bool __context_present(struct context_entry *context)
+-{
+- return (context->lo & 1);
+-}
+-
+-bool context_present(struct context_entry *context)
+-{
+- return context_pasid_enabled(context) ?
+- __context_present(context) :
+- __context_present(context) && !context_copied(context);
+-}
+-
+ static inline void context_set_present(struct context_entry *context)
+ {
+ context->lo |= 1;
+@@ -247,6 +215,26 @@ static inline void context_clear_entry(struct context_entry *context)
+ context->hi = 0;
+ }
+
++static inline bool context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
++{
++ if (!iommu->copied_tables)
++ return false;
++
++ return test_bit(((long)bus << 8) | devfn, iommu->copied_tables);
++}
++
++static inline void
++set_context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
++{
++ set_bit(((long)bus << 8) | devfn, iommu->copied_tables);
++}
++
++static inline void
++clear_context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
++{
++ clear_bit(((long)bus << 8) | devfn, iommu->copied_tables);
++}
++
+ /*
+ * This domain is a statically identity mapping domain.
+ * 1. This domain creates a static 1:1 mapping to all usable memory.
+@@ -644,6 +632,13 @@ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+ struct context_entry *context;
+ u64 *entry;
+
++ /*
++ * Unless the caller requested to allocate a new entry,
++ * returning a copied context entry makes no sense.
++ */
++ if (!alloc && context_copied(iommu, bus, devfn))
++ return NULL;
++
+ entry = &root->lo;
+ if (sm_supported(iommu)) {
+ if (devfn >= 0x80) {
+@@ -1770,6 +1765,11 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
+ iommu->domain_ids = NULL;
+ }
+
++ if (iommu->copied_tables) {
++ bitmap_free(iommu->copied_tables);
++ iommu->copied_tables = NULL;
++ }
++
+ g_iommus[iommu->seq_id] = NULL;
+
+ /* free context mapping */
+@@ -1978,7 +1978,7 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ goto out_unlock;
+
+ ret = 0;
+- if (context_present(context))
++ if (context_present(context) && !context_copied(iommu, bus, devfn))
+ goto out_unlock;
+
+ /*
+@@ -1990,7 +1990,7 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ * in-flight DMA will exist, and we don't need to worry anymore
+ * hereafter.
+ */
+- if (context_copied(context)) {
++ if (context_copied(iommu, bus, devfn)) {
+ u16 did_old = context_domain_id(context);
+
+ if (did_old < cap_ndoms(iommu->cap)) {
+@@ -2001,6 +2001,8 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ iommu->flush.flush_iotlb(iommu, did_old, 0, 0,
+ DMA_TLB_DSI_FLUSH);
+ }
++
++ clear_context_copied(iommu, bus, devfn);
+ }
+
+ context_clear_entry(context);
+@@ -2783,32 +2785,14 @@ static int copy_context_table(struct intel_iommu *iommu,
+ /* Now copy the context entry */
+ memcpy(&ce, old_ce + idx, sizeof(ce));
+
+- if (!__context_present(&ce))
++ if (!context_present(&ce))
+ continue;
+
+ did = context_domain_id(&ce);
+ if (did >= 0 && did < cap_ndoms(iommu->cap))
+ set_bit(did, iommu->domain_ids);
+
+- /*
+- * We need a marker for copied context entries. This
+- * marker needs to work for the old format as well as
+- * for extended context entries.
+- *
+- * Bit 67 of the context entry is used. In the old
+- * format this bit is available to software, in the
+- * extended format it is the PGE bit, but PGE is ignored
+- * by HW if PASIDs are disabled (and thus still
+- * available).
+- *
+- * So disable PASIDs first and then mark the entry
+- * copied. This means that we don't copy PASID
+- * translations from the old kernel, but this is fine as
+- * faults there are not fatal.
+- */
+- context_clear_pasid_enable(&ce);
+- context_set_copied(&ce);
+-
++ set_context_copied(iommu, bus, devfn);
+ new_ce[idx] = ce;
+ }
+
+@@ -2835,8 +2819,8 @@ static int copy_translation_tables(struct intel_iommu *iommu)
+ bool new_ext, ext;
+
+ rtaddr_reg = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
+- ext = !!(rtaddr_reg & DMA_RTADDR_RTT);
+- new_ext = !!ecap_ecs(iommu->ecap);
++ ext = !!(rtaddr_reg & DMA_RTADDR_SMT);
++ new_ext = !!sm_supported(iommu);
+
+ /*
+ * The RTT bit can only be changed when translation is disabled,
+@@ -2847,6 +2831,10 @@ static int copy_translation_tables(struct intel_iommu *iommu)
+ if (new_ext != ext)
+ return -EINVAL;
+
++ iommu->copied_tables = bitmap_zalloc(BIT_ULL(16), GFP_KERNEL);
++ if (!iommu->copied_tables)
++ return -ENOMEM;
++
+ old_rt_phys = rtaddr_reg & VTD_PAGE_MASK;
+ if (!old_rt_phys)
+ return -EINVAL;
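
The new tracking replaces a bit stolen from the context entry with a dedicated 2^16-bit bitmap, indexed by the PCI source ID ((bus << 8) | devfn) — hence bitmap_zalloc(BIT_ULL(16), ...). A userspace sketch of the same indexing, with open-coded stand-ins for the kernel's set_bit()/test_bit():

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define NR_IDS		(1u << 16)	/* one bit per (bus, devfn) pair */

static unsigned long copied[NR_IDS / BITS_PER_LONG];

static void set_copied(unsigned bus, unsigned devfn)
{
	unsigned idx = (bus << 8) | devfn;

	copied[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
}

static int is_copied(unsigned bus, unsigned devfn)
{
	unsigned idx = (bus << 8) | devfn;

	return !!(copied[idx / BITS_PER_LONG] & (1UL << (idx % BITS_PER_LONG)));
}

int main(void)
{
	set_copied(0x12, 0x34);
	printf("%d %d\n", is_copied(0x12, 0x34), is_copied(0x12, 0x35));	/* 1 0 */
	return 0;
}
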
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index c28f8cc00d1cf..a9cc85882b315 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -18076,16 +18076,20 @@ static void tg3_shutdown(struct pci_dev *pdev)
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct tg3 *tp = netdev_priv(dev);
+
++ tg3_reset_task_cancel(tp);
++
+ rtnl_lock();
++
+ netif_device_detach(dev);
+
+ if (netif_running(dev))
+ dev_close(dev);
+
+- if (system_state == SYSTEM_POWER_OFF)
+- tg3_power_down(tp);
++ tg3_power_down(tp);
+
+ rtnl_unlock();
++
++ pci_disable_device(pdev);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+index cfb8bedba5124..079fa44ada71e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+@@ -289,6 +289,10 @@ int mlx5_cmd_init_hca(struct mlx5_core_dev *dev, uint32_t *sw_owner_id)
+ sw_owner_id[i]);
+ }
+
++ if (MLX5_CAP_GEN_2_MAX(dev, sw_vhca_id_valid) &&
++ dev->priv.sw_vhca_id > 0)
++ MLX5_SET(init_hca_in, in, sw_vhca_id, dev->priv.sw_vhca_id);
++
+ return mlx5_cmd_exec_in(dev, init_hca, in);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 616207c3b187a..6c8bb74bd8fc6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -90,6 +90,8 @@ module_param_named(prof_sel, prof_sel, uint, 0444);
+ MODULE_PARM_DESC(prof_sel, "profile selector. Valid range 0 - 2");
+
+ static u32 sw_owner_id[4];
++#define MAX_SW_VHCA_ID (BIT(__mlx5_bit_sz(cmd_hca_cap_2, sw_vhca_id)) - 1)
++static DEFINE_IDA(sw_vhca_ida);
+
+ enum {
+ MLX5_ATOMIC_REQ_MODE_BE = 0x0,
+@@ -499,6 +501,49 @@ static int max_uc_list_get_devlink_param(struct mlx5_core_dev *dev)
+ return err;
+ }
+
++bool mlx5_is_roce_on(struct mlx5_core_dev *dev)
++{
++ struct devlink *devlink = priv_to_devlink(dev);
++ union devlink_param_value val;
++ int err;
++
++ err = devlink_param_driverinit_value_get(devlink,
++ DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
++ &val);
++
++ if (!err)
++ return val.vbool;
++
++ mlx5_core_dbg(dev, "Failed to get param. err = %d\n", err);
++ return MLX5_CAP_GEN(dev, roce);
++}
++EXPORT_SYMBOL(mlx5_is_roce_on);
++
++static int handle_hca_cap_2(struct mlx5_core_dev *dev, void *set_ctx)
++{
++ void *set_hca_cap;
++ int err;
++
++ if (!MLX5_CAP_GEN_MAX(dev, hca_cap_2))
++ return 0;
++
++ err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL_2);
++ if (err)
++ return err;
++
++ if (!MLX5_CAP_GEN_2_MAX(dev, sw_vhca_id_valid) ||
++ !(dev->priv.sw_vhca_id > 0))
++ return 0;
++
++ set_hca_cap = MLX5_ADDR_OF(set_hca_cap_in, set_ctx,
++ capability);
++ memcpy(set_hca_cap, dev->caps.hca[MLX5_CAP_GENERAL_2]->cur,
++ MLX5_ST_SZ_BYTES(cmd_hca_cap_2));
++ MLX5_SET(cmd_hca_cap_2, set_hca_cap, sw_vhca_id_valid, 1);
++
++ return set_caps(dev, set_ctx, MLX5_CAP_GENERAL_2);
++}
++
+ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+ {
+ struct mlx5_profile *prof = &dev->profile;
+@@ -577,7 +622,8 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+ MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix));
+
+ if (MLX5_CAP_GEN(dev, roce_rw_supported))
+- MLX5_SET(cmd_hca_cap, set_hca_cap, roce, mlx5_is_roce_init_enabled(dev));
++ MLX5_SET(cmd_hca_cap, set_hca_cap, roce,
++ mlx5_is_roce_on(dev));
+
+ max_uc_list = max_uc_list_get_devlink_param(dev);
+ if (max_uc_list > 0)
+@@ -603,7 +649,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+ */
+ static bool is_roce_fw_disabled(struct mlx5_core_dev *dev)
+ {
+- return (MLX5_CAP_GEN(dev, roce_rw_supported) && !mlx5_is_roce_init_enabled(dev)) ||
++ return (MLX5_CAP_GEN(dev, roce_rw_supported) && !mlx5_is_roce_on(dev)) ||
+ (!MLX5_CAP_GEN(dev, roce_rw_supported) && !MLX5_CAP_GEN(dev, roce));
+ }
+
+@@ -669,6 +715,13 @@ static int set_hca_cap(struct mlx5_core_dev *dev)
+ goto out;
+ }
+
++ memset(set_ctx, 0, set_sz);
++ err = handle_hca_cap_2(dev, set_ctx);
++ if (err) {
++ mlx5_core_err(dev, "handle_hca_cap_2 failed\n");
++ goto out;
++ }
++
+ out:
+ kfree(set_ctx);
+ return err;
+@@ -1512,6 +1565,18 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
+ if (err)
+ goto err_hca_caps;
+
++ /* The conjunction of sw_vhca_id with sw_owner_id will be a globally
++ * unique id per function that uses mlx5_core.
++ * Those values are supplied to FW as part of the init HCA command, to
++ * be used by both driver and FW where applicable.
++ */
++ dev->priv.sw_vhca_id = ida_alloc_range(&sw_vhca_ida, 1,
++ MAX_SW_VHCA_ID,
++ GFP_KERNEL);
++ if (dev->priv.sw_vhca_id < 0)
++ mlx5_core_err(dev, "failed to allocate sw_vhca_id, err=%d\n",
++ dev->priv.sw_vhca_id);
++
+ return 0;
+
+ err_hca_caps:
+@@ -1537,6 +1602,9 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_priv *priv = &dev->priv;
+
++ if (priv->sw_vhca_id > 0)
++ ida_free(&sw_vhca_ida, dev->priv.sw_vhca_id);
++
+ mlx5_hca_caps_free(dev);
+ mlx5_adev_cleanup(dev);
+ mlx5_pagealloc_cleanup(dev);
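
MAX_SW_VHCA_ID above derives the IDA's upper bound from the width of the device's sw_vhca_id field: BIT(n) - 1 is the largest value an n-bit field can hold, and id 0 is kept reserved (hence the "> 0" checks). A tiny illustration with a hypothetical 16-bit field — the real width comes from __mlx5_bit_sz():

#include <stdio.h>

#define FIELD_BITS	16u				/* hypothetical field width */
#define MAX_ID		((1u << FIELD_BITS) - 1)	/* BIT(n) - 1 */

int main(void)
{
	/* 0 stays reserved as "unassigned", so usable ids are 1..MAX_ID. */
	printf("usable ids: 1..%u\n", MAX_ID);
	return 0;
}
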
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index ac020cb780727..d5c3173250309 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -1086,9 +1086,17 @@ int mlx5_nic_vport_affiliate_multiport(struct mlx5_core_dev *master_mdev,
+ goto free;
+
+ MLX5_SET(modify_nic_vport_context_in, in, field_select.affiliation, 1);
+- MLX5_SET(modify_nic_vport_context_in, in,
+- nic_vport_context.affiliated_vhca_id,
+- MLX5_CAP_GEN(master_mdev, vhca_id));
++ if (MLX5_CAP_GEN_2(master_mdev, sw_vhca_id_valid)) {
++ MLX5_SET(modify_nic_vport_context_in, in,
++ nic_vport_context.vhca_id_type, VHCA_ID_TYPE_SW);
++ MLX5_SET(modify_nic_vport_context_in, in,
++ nic_vport_context.affiliated_vhca_id,
++ MLX5_CAP_GEN_2(master_mdev, sw_vhca_id));
++ } else {
++ MLX5_SET(modify_nic_vport_context_in, in,
++ nic_vport_context.affiliated_vhca_id,
++ MLX5_CAP_GEN(master_mdev, vhca_id));
++ }
+ MLX5_SET(modify_nic_vport_context_in, in,
+ nic_vport_context.affiliation_criteria,
+ MLX5_CAP_GEN(port_mdev, affiliate_nic_vport_criteria));
+diff --git a/drivers/net/ieee802154/cc2520.c b/drivers/net/ieee802154/cc2520.c
+index 1e1f40f628a02..c69b87d3837da 100644
+--- a/drivers/net/ieee802154/cc2520.c
++++ b/drivers/net/ieee802154/cc2520.c
+@@ -504,6 +504,7 @@ cc2520_tx(struct ieee802154_hw *hw, struct sk_buff *skb)
+ goto err_tx;
+
+ if (status & CC2520_STATUS_TX_UNDERFLOW) {
++ rc = -EINVAL;
+ dev_err(&priv->spi->dev, "cc2520 tx underflow exception\n");
+ goto err_tx;
+ }
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 2de09ad5bac03..e11f70911acc1 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -777,6 +777,13 @@ static const struct usb_device_id products[] = {
+ },
+ #endif
+
++/* Lenovo ThinkPad OneLink+ Dock (based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3054, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* ThinkPad USB-C Dock (based on Realtek RTL8153) */
+ {
+ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3062, USB_CLASS_COMM,
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index d142ac8fcf6e2..688905ea0a6d3 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -770,6 +770,7 @@ enum rtl8152_flags {
+ RX_EPROTO,
+ };
+
++#define DEVICE_ID_THINKPAD_ONELINK_PLUS_DOCK 0x3054
+ #define DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2 0x3082
+ #define DEVICE_ID_THINKPAD_USB_C_DONGLE 0x720c
+ #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2 0xa387
+@@ -9581,6 +9582,7 @@ static bool rtl8152_supports_lenovo_macpassthru(struct usb_device *udev)
+
+ if (vendor_id == VENDOR_ID_LENOVO) {
+ switch (product_id) {
++ case DEVICE_ID_THINKPAD_ONELINK_PLUS_DOCK:
+ case DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2:
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2:
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3:
+@@ -9828,6 +9830,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927),
+ REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101),
+ REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f),
++ REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3054),
+ REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3062),
+ REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3069),
+ REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3082),
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 73d9fcba3b1c0..9f6614f7dbeb1 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3517,6 +3517,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(0xc0a9, 0x540a), /* Crucial P2 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
++ { PCI_DEVICE(0x1d97, 0x2263), /* Lexar NM610 */
++ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
+ .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index dc3b4dc8fe08b..a3694a32f6d52 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1506,6 +1506,9 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ goto done;
+
+ switch (sk->sk_state) {
++ case TCP_FIN_WAIT2:
++ case TCP_LAST_ACK:
++ break;
+ case TCP_FIN_WAIT1:
+ case TCP_CLOSE_WAIT:
+ case TCP_CLOSE:
+diff --git a/drivers/peci/cpu.c b/drivers/peci/cpu.c
+index 68eb61c65d345..de4a7b3e5966e 100644
+--- a/drivers/peci/cpu.c
++++ b/drivers/peci/cpu.c
+@@ -188,8 +188,6 @@ static void adev_release(struct device *dev)
+ {
+ struct auxiliary_device *adev = to_auxiliary_dev(dev);
+
+- auxiliary_device_uninit(adev);
+-
+ kfree(adev->name);
+ kfree(adev);
+ }
+@@ -234,6 +232,7 @@ static void unregister_adev(void *_adev)
+ struct auxiliary_device *adev = _adev;
+
+ auxiliary_device_delete(adev);
++ auxiliary_device_uninit(adev);
+ }
+
+ static int devm_adev_add(struct device *dev, int idx)
+diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
+index 513de1f54e2d7..933b96e243b84 100644
+--- a/drivers/perf/arm_pmu_platform.c
++++ b/drivers/perf/arm_pmu_platform.c
+@@ -117,7 +117,7 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
+
+ if (num_irqs == 1) {
+ int irq = platform_get_irq(pdev, 0);
+- if (irq && irq_is_percpu_devid(irq))
++ if ((irq > 0) && irq_is_percpu_devid(irq))
+ return pmu_parse_percpu_irq(pmu, irq);
+ }
+
+diff --git a/drivers/platform/surface/surface_aggregator_registry.c b/drivers/platform/surface/surface_aggregator_registry.c
+index ce2bd88feeaa8..08019c6ccc9ca 100644
+--- a/drivers/platform/surface/surface_aggregator_registry.c
++++ b/drivers/platform/surface/surface_aggregator_registry.c
+@@ -556,6 +556,9 @@ static const struct acpi_device_id ssam_platform_hub_match[] = {
+ /* Surface Laptop Go 1 */
+ { "MSHW0118", (unsigned long)ssam_node_group_slg1 },
+
++ /* Surface Laptop Go 2 */
++ { "MSHW0290", (unsigned long)ssam_node_group_slg1 },
++
+ /* Surface Laptop Studio */
+ { "MSHW0123", (unsigned long)ssam_node_group_sls },
+
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 9c6943e401a6c..0fbcaffabbfc7 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -99,6 +99,7 @@ static const struct key_entry acer_wmi_keymap[] __initconst = {
+ {KE_KEY, 0x22, {KEY_PROG2} }, /* Arcade */
+ {KE_KEY, 0x23, {KEY_PROG3} }, /* P_Key */
+ {KE_KEY, 0x24, {KEY_PROG4} }, /* Social networking_Key */
++ {KE_KEY, 0x27, {KEY_HELP} },
+ {KE_KEY, 0x29, {KEY_PROG3} }, /* P_Key for TM8372 */
+ {KE_IGNORE, 0x41, {KEY_MUTE} },
+ {KE_IGNORE, 0x42, {KEY_PREVIOUSSONG} },
+@@ -112,7 +113,13 @@ static const struct key_entry acer_wmi_keymap[] __initconst = {
+ {KE_IGNORE, 0x48, {KEY_VOLUMEUP} },
+ {KE_IGNORE, 0x49, {KEY_VOLUMEDOWN} },
+ {KE_IGNORE, 0x4a, {KEY_VOLUMEDOWN} },
+- {KE_IGNORE, 0x61, {KEY_SWITCHVIDEOMODE} },
++ /*
++ * 0x61 is KEY_SWITCHVIDEOMODE. Usually this is a duplicate input event
++ * with the "Video Bus" input device events. But sometimes it is not
++ * a dup. Map it to KEY_UNKNOWN instead of using KE_IGNORE so that
++ * udev/hwdb can override it on systems where it is not a dup.
++ */
++ {KE_KEY, 0x61, {KEY_UNKNOWN} },
+ {KE_IGNORE, 0x62, {KEY_BRIGHTNESSUP} },
+ {KE_IGNORE, 0x63, {KEY_BRIGHTNESSDOWN} },
+ {KE_KEY, 0x64, {KEY_SWITCHVIDEOMODE} }, /* Display Switch */
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 62ce198a34631..a0f31624aee97 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -107,7 +107,7 @@ module_param(fnlock_default, bool, 0444);
+ #define WMI_EVENT_MASK 0xFFFF
+
+ #define FAN_CURVE_POINTS 8
+-#define FAN_CURVE_BUF_LEN (FAN_CURVE_POINTS * 2)
++#define FAN_CURVE_BUF_LEN 32
+ #define FAN_CURVE_DEV_CPU 0x00
+ #define FAN_CURVE_DEV_GPU 0x01
+ /* Mask to determine if setting temperature or percentage */
+@@ -2208,8 +2208,10 @@ static int fan_curve_get_factory_default(struct asus_wmi *asus, u32 fan_dev)
+ curves = &asus->custom_fan_curves[fan_idx];
+ err = asus_wmi_evaluate_method_buf(asus->dsts_id, fan_dev, mode, buf,
+ FAN_CURVE_BUF_LEN);
+- if (err)
++ if (err) {
++ pr_warn("%s (0x%08x) failed: %d\n", __func__, fan_dev, err);
+ return err;
++ }
+
+ fan_curve_copy_from_buf(curves, buf);
+ curves->device_id = fan_dev;
+@@ -2227,9 +2229,6 @@ static int fan_curve_check_present(struct asus_wmi *asus, bool *available,
+
+ err = fan_curve_get_factory_default(asus, fan_dev);
+ if (err) {
+- pr_debug("fan_curve_get_factory_default(0x%08x) failed: %d\n",
+- fan_dev, err);
+- /* Don't cause probe to fail on devices without fan-curves */
+ return 0;
+ }
+
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 4051c8cd0cd8a..23ab3b048d9be 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -62,6 +62,13 @@ UNUSUAL_DEV(0x0984, 0x0301, 0x0128, 0x0128,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_IGNORE_UAS),
+
++/* Reported-by: Tom Hu <huxiaoying@kylinos.cn> */
++UNUSUAL_DEV(0x0b05, 0x1932, 0x0000, 0x9999,
++ "ASUS",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_UAS),
++
+ /* Reported-by: David Webb <djw@noc.ac.uk> */
+ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ "Seagate",
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 5fcf89faa31ab..d72626d71258f 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -196,7 +196,6 @@
+ #define ecap_dis(e) (((e) >> 27) & 0x1)
+ #define ecap_nest(e) (((e) >> 26) & 0x1)
+ #define ecap_mts(e) (((e) >> 25) & 0x1)
+-#define ecap_ecs(e) (((e) >> 24) & 0x1)
+ #define ecap_iotlb_offset(e) ((((e) >> 8) & 0x3ff) * 16)
+ #define ecap_max_iotlb_offset(e) (ecap_iotlb_offset(e) + 16)
+ #define ecap_coherent(e) ((e) & 0x1)
+@@ -264,7 +263,6 @@
+ #define DMA_GSTS_CFIS (((u32)1) << 23)
+
+ /* DMA_RTADDR_REG */
+-#define DMA_RTADDR_RTT (((u64)1) << 11)
+ #define DMA_RTADDR_SMT (((u64)1) << 10)
+
+ /* CCMD_REG */
+@@ -579,6 +577,7 @@ struct intel_iommu {
+
+ #ifdef CONFIG_INTEL_IOMMU
+ unsigned long *domain_ids; /* bitmap of domains */
++ unsigned long *copied_tables; /* bitmap of copied tables */
+ spinlock_t lock; /* protect context, domain ids */
+ struct root_entry *root_entry; /* virtual address */
+
+@@ -692,6 +691,11 @@ static inline int nr_pte_to_next_page(struct dma_pte *pte)
+ (struct dma_pte *)ALIGN((unsigned long)pte, VTD_PAGE_SIZE) - pte;
+ }
+
++static inline bool context_present(struct context_entry *context)
++{
++ return (context->lo & 1);
++}
++
+ extern struct dmar_drhd_unit * dmar_find_matched_drhd_unit(struct pci_dev *dev);
+
+ extern int dmar_enable_qi(struct intel_iommu *iommu);
+@@ -776,7 +780,6 @@ static inline void intel_iommu_debugfs_init(void) {}
+ #endif /* CONFIG_INTEL_IOMMU_DEBUGFS */
+
+ extern const struct attribute_group *intel_iommu_groups[];
+-bool context_present(struct context_entry *context);
+ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+ u8 devfn, int alloc);
+
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index b0b4ac92354a2..b3ea245faa515 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -606,6 +606,7 @@ struct mlx5_priv {
+ spinlock_t ctx_lock;
+ struct mlx5_adev **adev;
+ int adev_idx;
++ int sw_vhca_id;
+ struct mlx5_events *events;
+
+ struct mlx5_flow_steering *steering;
+@@ -1274,16 +1275,17 @@ enum {
+ MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
+ };
+
+-static inline bool mlx5_is_roce_init_enabled(struct mlx5_core_dev *dev)
++bool mlx5_is_roce_on(struct mlx5_core_dev *dev);
++
++static inline bool mlx5_get_roce_state(struct mlx5_core_dev *dev)
+ {
+- struct devlink *devlink = priv_to_devlink(dev);
+- union devlink_param_value val;
+- int err;
+-
+- err = devlink_param_driverinit_value_get(devlink,
+- DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
+- &val);
+- return err ? MLX5_CAP_GEN(dev, roce) : val.vbool;
++ if (MLX5_CAP_GEN(dev, roce_rw_supported))
++ return MLX5_CAP_GEN(dev, roce);
++
++ /* If RoCE cap is read-only in FW, get RoCE state from devlink
++ * in order to support RoCE enable/disable feature
++ */
++ return mlx5_is_roce_on(dev);
+ }
+
+ #endif /* MLX5_DRIVER_H */
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index fd7d083a34d33..6d57e5ec9718d 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1804,7 +1804,14 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
+ u8 max_reformat_remove_size[0x8];
+ u8 max_reformat_remove_offset[0x8];
+
+- u8 reserved_at_c0[0x740];
++ u8 reserved_at_c0[0x160];
++
++ u8 reserved_at_220[0x1];
++ u8 sw_vhca_id_valid[0x1];
++ u8 sw_vhca_id[0xe];
++ u8 reserved_at_230[0x10];
++
++ u8 reserved_at_240[0x5c0];
+ };
+
+ enum mlx5_ifc_flow_destination_type {
+@@ -3715,6 +3722,11 @@ struct mlx5_ifc_rmpc_bits {
+ struct mlx5_ifc_wq_bits wq;
+ };
+
++enum {
++ VHCA_ID_TYPE_HW = 0,
++ VHCA_ID_TYPE_SW = 1,
++};
++
+ struct mlx5_ifc_nic_vport_context_bits {
+ u8 reserved_at_0[0x5];
+ u8 min_wqe_inline_mode[0x3];
+@@ -3731,8 +3743,8 @@ struct mlx5_ifc_nic_vport_context_bits {
+ u8 event_on_mc_address_change[0x1];
+ u8 event_on_uc_address_change[0x1];
+
+- u8 reserved_at_40[0xc];
+-
++ u8 vhca_id_type[0x1];
++ u8 reserved_at_41[0xb];
+ u8 affiliation_criteria[0x4];
+ u8 affiliated_vhca_id[0x10];
+
+@@ -7189,7 +7201,12 @@ struct mlx5_ifc_init_hca_in_bits {
+ u8 reserved_at_20[0x10];
+ u8 op_mod[0x10];
+
+- u8 reserved_at_40[0x40];
++ u8 reserved_at_40[0x20];
++
++ u8 reserved_at_60[0x2];
++ u8 sw_vhca_id[0xe];
++ u8 reserved_at_70[0x10];
++
+ u8 sw_owner_id[4][0x20];
+ };
+
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index cbdf0e2bc5ae0..d0fb74b0db1d5 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -4420,6 +4420,22 @@ static int set_exp_feature(struct sock *sk, struct hci_dev *hdev,
+ MGMT_STATUS_NOT_SUPPORTED);
+ }
+
++static u32 get_params_flags(struct hci_dev *hdev,
++ struct hci_conn_params *params)
++{
++ u32 flags = hdev->conn_flags;
++
++ /* Devices using RPAs can only be programmed in the acceptlist if
++ * LL Privacy has been enable otherwise they cannot mark
++ * HCI_CONN_FLAG_REMOTE_WAKEUP.
++ */
++ if ((flags & HCI_CONN_FLAG_REMOTE_WAKEUP) && !use_ll_privacy(hdev) &&
++	    hci_find_irk_by_addr(hdev, &params->addr, params->addr_type))
++ flags &= ~HCI_CONN_FLAG_REMOTE_WAKEUP;
++
++ return flags;
++}
++
+ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ u16 data_len)
+ {
+@@ -4451,10 +4467,10 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ } else {
+ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+ le_addr_type(cp->addr.type));
+-
+ if (!params)
+ goto done;
+
++ supported_flags = get_params_flags(hdev, params);
+ current_flags = params->flags;
+ }
+
+@@ -4523,38 +4539,35 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ bt_dev_warn(hdev, "No such BR/EDR device %pMR (0x%x)",
+ &cp->addr.bdaddr, cp->addr.type);
+ }
+- } else {
+- params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+- le_addr_type(cp->addr.type));
+- if (params) {
+- /* Devices using RPAs can only be programmed in the
+- * acceptlist LL Privacy has been enable otherwise they
+- * cannot mark HCI_CONN_FLAG_REMOTE_WAKEUP.
+- */
+- if ((current_flags & HCI_CONN_FLAG_REMOTE_WAKEUP) &&
+- !use_ll_privacy(hdev) &&
+-			    hci_find_irk_by_addr(hdev, &params->addr,
+- params->addr_type)) {
+- bt_dev_warn(hdev,
+- "Cannot set wakeable for RPA");
+- goto unlock;
+- }
+
+- params->flags = current_flags;
+- status = MGMT_STATUS_SUCCESS;
++ goto unlock;
++ }
+
+- /* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
+- * has been set.
+- */
+- if (params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY)
+- hci_update_passive_scan(hdev);
+- } else {
+- bt_dev_warn(hdev, "No such LE device %pMR (0x%x)",
+- &cp->addr.bdaddr,
+- le_addr_type(cp->addr.type));
+- }
++ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
++ le_addr_type(cp->addr.type));
++ if (!params) {
++ bt_dev_warn(hdev, "No such LE device %pMR (0x%x)",
++ &cp->addr.bdaddr, le_addr_type(cp->addr.type));
++ goto unlock;
++ }
++
++ supported_flags = get_params_flags(hdev, params);
++
++ if ((supported_flags | current_flags) != supported_flags) {
++ bt_dev_warn(hdev, "Bad flag given (0x%x) vs supported (0x%0x)",
++ current_flags, supported_flags);
++ goto unlock;
+ }
+
++ params->flags = current_flags;
++ status = MGMT_STATUS_SUCCESS;
++
++ /* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
++ * has been set.
++ */
++ if (params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY)
++ hci_update_passive_scan(hdev);
++
+ unlock:
+ hci_dev_unlock(hdev);
+
+diff --git a/net/dsa/tag_hellcreek.c b/net/dsa/tag_hellcreek.c
+index eb204ad36eeec..846588c0070a5 100644
+--- a/net/dsa/tag_hellcreek.c
++++ b/net/dsa/tag_hellcreek.c
+@@ -45,7 +45,7 @@ static struct sk_buff *hellcreek_rcv(struct sk_buff *skb,
+
+ skb->dev = dsa_master_find_slave(dev, 0, port);
+ if (!skb->dev) {
+- netdev_warn(dev, "Failed to get source port: %d\n", port);
++ netdev_warn_once(dev, "Failed to get source port: %d\n", port);
+ return NULL;
+ }
+
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-23 12:38 Mike Pagano
From: Mike Pagano @ 2022-09-23 12:38 UTC
To: gentoo-commits
commit: b3e6664fbe92c3787a56b3f0f64a08c2ff24f6ee
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 23 12:38:23 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 23 12:38:23 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b3e6664f
Linux patch 5.19.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-5.19.11.patch | 1231 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1235 insertions(+)
diff --git a/0000_README b/0000_README
index e710df97..d3eec191 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-5.19.10.patch
From: http://www.kernel.org
Desc: Linux 5.19.10
+Patch: 1010_linux-5.19.11.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-5.19.11.patch b/1010_linux-5.19.11.patch
new file mode 100644
index 00000000..a5ff5cbf
--- /dev/null
+++ b/1010_linux-5.19.11.patch
@@ -0,0 +1,1231 @@
+diff --git a/Documentation/devicetree/bindings/interrupt-controller/apple,aic.yaml b/Documentation/devicetree/bindings/interrupt-controller/apple,aic.yaml
+index 85c85b694217c..e18107eafe7cc 100644
+--- a/Documentation/devicetree/bindings/interrupt-controller/apple,aic.yaml
++++ b/Documentation/devicetree/bindings/interrupt-controller/apple,aic.yaml
+@@ -96,7 +96,7 @@ properties:
+ Documentation/devicetree/bindings/arm/cpus.yaml).
+
+ required:
+- - fiq-index
++ - apple,fiq-index
+ - cpus
+
+ required:
+diff --git a/Makefile b/Makefile
+index 33a9b6b547c47..01463a22926d5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index cd2b3fe156724..c68c3581483ac 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -225,8 +225,18 @@ config MLONGCALLS
+ Enabling this option will probably slow down your kernel.
+
+ config 64BIT
+- def_bool "$(ARCH)" = "parisc64"
++ def_bool y if "$(ARCH)" = "parisc64"
++ bool "64-bit kernel" if "$(ARCH)" = "parisc"
+ depends on PA8X00
++ help
++ Enable this if you want to support 64bit kernel on PA-RISC platform.
++
++ At the moment, only people willing to use more than 2GB of RAM,
++ or having a 64bit-only capable PA-RISC machine should say Y here.
++
++ Since there is no 64bit userland on PA-RISC, there is no point to
++ enable this option otherwise. The 64bit kernel is significantly bigger
++ and slower than the 32bit one.
+
+ choice
+ prompt "Kernel page size"
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 27fb1357ad4b8..cc6fbcb6d2521 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -338,7 +338,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+
+ while (!blk_try_enter_queue(q, pm)) {
+ if (flags & BLK_MQ_REQ_NOWAIT)
+- return -EBUSY;
++ return -EAGAIN;
+
+ /*
+ * read pair of barrier in blk_freeze_queue_start(), we need to
+@@ -368,7 +368,7 @@ int __bio_queue_enter(struct request_queue *q, struct bio *bio)
+ if (test_bit(GD_DEAD, &disk->state))
+ goto dead;
+ bio_wouldblock_error(bio);
+- return -EBUSY;
++ return -EAGAIN;
+ }
+
+ /*
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index 09b7e1200c0f4..20e42144065b8 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -311,6 +311,11 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
+ struct blk_plug plug;
+ int ret = 0;
+
++ /* make sure that "len << SECTOR_SHIFT" doesn't overflow */
++ if (max_sectors > UINT_MAX >> SECTOR_SHIFT)
++ max_sectors = UINT_MAX >> SECTOR_SHIFT;
++ max_sectors &= ~bs_mask;
++
+ if (max_sectors == 0)
+ return -EOPNOTSUPP;
+ if ((sector | nr_sects) & bs_mask)
+@@ -324,10 +329,10 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
+
+ bio = blk_next_bio(bio, bdev, 0, REQ_OP_SECURE_ERASE, gfp);
+ bio->bi_iter.bi_sector = sector;
+- bio->bi_iter.bi_size = len;
++ bio->bi_iter.bi_size = len << SECTOR_SHIFT;
+
+- sector += len << SECTOR_SHIFT;
+- nr_sects -= len << SECTOR_SHIFT;
++ sector += len;
++ nr_sects -= len;
+ if (!nr_sects) {
+ ret = submit_bio_wait(bio);
+ bio_put(bio);
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index a964e25ea6206..763256efddc2b 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -172,6 +172,7 @@ static int mpc8xxx_irq_set_type(struct irq_data *d, unsigned int flow_type)
+
+ switch (flow_type) {
+ case IRQ_TYPE_EDGE_FALLING:
++ case IRQ_TYPE_LEVEL_LOW:
+ raw_spin_lock_irqsave(&mpc8xxx_gc->lock, flags);
+ gc->write_reg(mpc8xxx_gc->regs + GPIO_ICR,
+ gc->read_reg(mpc8xxx_gc->regs + GPIO_ICR)
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index e342a6dc4c6c1..bb953f6478647 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -418,11 +418,11 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type)
+ goto out;
+ } else {
+ bank->toggle_edge_mode |= mask;
+- level |= mask;
++ level &= ~mask;
+
+ /*
+ * Determine gpio state. If 1 next interrupt should be
+- * falling otherwise rising.
++ * low otherwise high.
+ */
+ data = readl(bank->reg_base + bank->gpio_regs->ext_port);
+ if (data & mask)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 67d4a3c13ed19..929f8b75bfaee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2391,8 +2391,16 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ }
+ adev->ip_blocks[i].status.sw = true;
+
+- /* need to do gmc hw init early so we can allocate gpu mem */
+- if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
++ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON) {
++ /* need to do common hw init early so everything is set up for gmc */
++ r = adev->ip_blocks[i].version->funcs->hw_init((void *)adev);
++ if (r) {
++ DRM_ERROR("hw_init %d failed %d\n", i, r);
++ goto init_failed;
++ }
++ adev->ip_blocks[i].status.hw = true;
++ } else if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
++ /* need to do gmc hw init early so we can allocate gpu mem */
+ /* Try to reserve bad pages early */
+ if (amdgpu_sriov_vf(adev))
+ amdgpu_virt_exchange_data(adev);
+@@ -3078,8 +3086,8 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
+ int i, r;
+
+ static enum amd_ip_block_type ip_order[] = {
+- AMD_IP_BLOCK_TYPE_GMC,
+ AMD_IP_BLOCK_TYPE_COMMON,
++ AMD_IP_BLOCK_TYPE_GMC,
+ AMD_IP_BLOCK_TYPE_PSP,
+ AMD_IP_BLOCK_TYPE_IH,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+index f49db13b3fbee..0debdbcf46310 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+@@ -380,6 +380,7 @@ static void nbio_v2_3_enable_aspm(struct amdgpu_device *adev,
+ WREG32_PCIE(smnPCIE_LC_CNTL, data);
+ }
+
++#ifdef CONFIG_PCIEASPM
+ static void nbio_v2_3_program_ltr(struct amdgpu_device *adev)
+ {
+ uint32_t def, data;
+@@ -401,9 +402,11 @@ static void nbio_v2_3_program_ltr(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnBIF_CFG_DEV0_EPF0_DEVICE_CNTL2, data);
+ }
++#endif
+
+ static void nbio_v2_3_program_aspm(struct amdgpu_device *adev)
+ {
++#ifdef CONFIG_PCIEASPM
+ uint32_t def, data;
+
+ def = data = RREG32_PCIE(smnPCIE_LC_CNTL);
+@@ -459,7 +462,10 @@ static void nbio_v2_3_program_aspm(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL6, data);
+
+- nbio_v2_3_program_ltr(adev);
++ /* Don't bother about LTR if LTR is not enabled
++ * in the path */
++ if (adev->pdev->ltr_path)
++ nbio_v2_3_program_ltr(adev);
+
+ def = data = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP3);
+ data |= 0x5DE0 << RCC_BIF_STRAP3__STRAP_VLINK_ASPM_IDLE_TIMER__SHIFT;
+@@ -483,6 +489,7 @@ static void nbio_v2_3_program_aspm(struct amdgpu_device *adev)
+ data &= ~PCIE_LC_CNTL3__LC_DSC_DONT_ENTER_L23_AFTER_PME_ACK_MASK;
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL3, data);
++#endif
+ }
+
+ static void nbio_v2_3_apply_lc_spc_mode_wa(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
+index f7f6ddebd3e49..37615a77287bc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
+@@ -282,6 +282,7 @@ static void nbio_v6_1_init_registers(struct amdgpu_device *adev)
+ mmBIF_BX_DEV0_EPF0_VF0_HDP_MEM_COHERENCY_FLUSH_CNTL) << 2;
+ }
+
++#ifdef CONFIG_PCIEASPM
+ static void nbio_v6_1_program_ltr(struct amdgpu_device *adev)
+ {
+ uint32_t def, data;
+@@ -303,9 +304,11 @@ static void nbio_v6_1_program_ltr(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnBIF_CFG_DEV0_EPF0_DEVICE_CNTL2, data);
+ }
++#endif
+
+ static void nbio_v6_1_program_aspm(struct amdgpu_device *adev)
+ {
++#ifdef CONFIG_PCIEASPM
+ uint32_t def, data;
+
+ def = data = RREG32_PCIE(smnPCIE_LC_CNTL);
+@@ -361,7 +364,10 @@ static void nbio_v6_1_program_aspm(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL6, data);
+
+- nbio_v6_1_program_ltr(adev);
++ /* Don't bother about LTR if LTR is not enabled
++ * in the path */
++ if (adev->pdev->ltr_path)
++ nbio_v6_1_program_ltr(adev);
+
+ def = data = RREG32_PCIE(smnRCC_BIF_STRAP3);
+ data |= 0x5DE0 << RCC_BIF_STRAP3__STRAP_VLINK_ASPM_IDLE_TIMER__SHIFT;
+@@ -385,6 +391,7 @@ static void nbio_v6_1_program_aspm(struct amdgpu_device *adev)
+ data &= ~PCIE_LC_CNTL3__LC_DSC_DONT_ENTER_L23_AFTER_PME_ACK_MASK;
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL3, data);
++#endif
+ }
+
+ const struct amdgpu_nbio_funcs nbio_v6_1_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+index 11848d1e238b6..19455a7259391 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+@@ -673,6 +673,7 @@ struct amdgpu_nbio_ras nbio_v7_4_ras = {
+ };
+
+
++#ifdef CONFIG_PCIEASPM
+ static void nbio_v7_4_program_ltr(struct amdgpu_device *adev)
+ {
+ uint32_t def, data;
+@@ -694,9 +695,11 @@ static void nbio_v7_4_program_ltr(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnBIF_CFG_DEV0_EPF0_DEVICE_CNTL2, data);
+ }
++#endif
+
+ static void nbio_v7_4_program_aspm(struct amdgpu_device *adev)
+ {
++#ifdef CONFIG_PCIEASPM
+ uint32_t def, data;
+
+ if (adev->ip_versions[NBIO_HWIP][0] == IP_VERSION(7, 4, 4))
+@@ -755,7 +758,10 @@ static void nbio_v7_4_program_aspm(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL6, data);
+
+- nbio_v7_4_program_ltr(adev);
++ /* Don't bother about LTR if LTR is not enabled
++ * in the path */
++ if (adev->pdev->ltr_path)
++ nbio_v7_4_program_ltr(adev);
+
+ def = data = RREG32_PCIE(smnRCC_BIF_STRAP3);
+ data |= 0x5DE0 << RCC_BIF_STRAP3__STRAP_VLINK_ASPM_IDLE_TIMER__SHIFT;
+@@ -779,6 +785,7 @@ static void nbio_v7_4_program_aspm(struct amdgpu_device *adev)
+ data &= ~PCIE_LC_CNTL3__LC_DSC_DONT_ENTER_L23_AFTER_PME_ACK_MASK;
+ if (def != data)
+ WREG32_PCIE(smnPCIE_LC_CNTL3, data);
++#endif
+ }
+
+ const struct amdgpu_nbio_funcs nbio_v7_4_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 65181efba50ec..56424f75dd2cc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -1504,6 +1504,11 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
+ WREG32_SDMA(i, mmSDMA0_CNTL, temp);
+
+ if (!amdgpu_sriov_vf(adev)) {
++ ring = &adev->sdma.instance[i].ring;
++ adev->nbio.funcs->sdma_doorbell_range(adev, i,
++ ring->use_doorbell, ring->doorbell_index,
++ adev->doorbell_index.sdma_doorbell_range);
++
+ /* unhalt engine */
+ temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
+ temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index fde6154f20096..183024d7c184e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1211,25 +1211,6 @@ static int soc15_common_sw_fini(void *handle)
+ return 0;
+ }
+
+-static void soc15_doorbell_range_init(struct amdgpu_device *adev)
+-{
+- int i;
+- struct amdgpu_ring *ring;
+-
+- /* sdma/ih doorbell range are programed by hypervisor */
+- if (!amdgpu_sriov_vf(adev)) {
+- for (i = 0; i < adev->sdma.num_instances; i++) {
+- ring = &adev->sdma.instance[i].ring;
+- adev->nbio.funcs->sdma_doorbell_range(adev, i,
+- ring->use_doorbell, ring->doorbell_index,
+- adev->doorbell_index.sdma_doorbell_range);
+- }
+-
+- adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
+- adev->irq.ih.doorbell_index);
+- }
+-}
+-
+ static int soc15_common_hw_init(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -1249,12 +1230,6 @@ static int soc15_common_hw_init(void *handle)
+
+ /* enable the doorbell aperture */
+ soc15_enable_doorbell_aperture(adev, true);
+- /* HW doorbell routing policy: doorbell writing not
+- * in SDMA/IH/MM/ACV range will be routed to CP. So
+- * we need to init SDMA/IH/MM/ACV doorbell range prior
+- * to CP ip block init and ring test.
+- */
+- soc15_doorbell_range_init(adev);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+index 03b7066471f9a..1e83db0c5438d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+@@ -289,6 +289,10 @@ static int vega10_ih_irq_init(struct amdgpu_device *adev)
+ }
+ }
+
++ if (!amdgpu_sriov_vf(adev))
++ adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
++ adev->irq.ih.doorbell_index);
++
+ pci_set_master(adev->pdev);
+
+ /* enable interrupts */
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index 2022ffbb8dba5..59dfca093155c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -340,6 +340,10 @@ static int vega20_ih_irq_init(struct amdgpu_device *adev)
+ }
+ }
+
++ if (!amdgpu_sriov_vf(adev))
++ adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
++ adev->irq.ih.doorbell_index);
++
+ pci_set_master(adev->pdev);
+
+ /* enable interrupts */
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 19bf717fd4cb6..5508ebb9eb434 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1629,6 +1629,8 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
+ /* FIXME: initialize from VBT */
+ vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
+
++ vdsc_cfg->pic_height = crtc_state->hw.adjusted_mode.crtc_vdisplay;
++
+ ret = intel_dsc_compute_params(crtc_state);
+ if (ret)
+ return ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 41aaa6c98114f..fe8b6b72970a2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1379,6 +1379,7 @@ static int intel_dp_dsc_compute_params(struct intel_encoder *encoder,
+ * DP_DSC_RC_BUF_SIZE for this.
+ */
+ vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
++ vdsc_cfg->pic_height = crtc_state->hw.adjusted_mode.crtc_vdisplay;
+
+ /*
+ * Slice Height of 8 works for all currently available panels. So start
+diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
+index 43e1bbc1e3035..ca530f0733e0e 100644
+--- a/drivers/gpu/drm/i915/display/intel_vdsc.c
++++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
+@@ -460,7 +460,6 @@ int intel_dsc_compute_params(struct intel_crtc_state *pipe_config)
+ u8 i = 0;
+
+ vdsc_cfg->pic_width = pipe_config->hw.adjusted_mode.crtc_hdisplay;
+- vdsc_cfg->pic_height = pipe_config->hw.adjusted_mode.crtc_vdisplay;
+ vdsc_cfg->slice_width = DIV_ROUND_UP(vdsc_cfg->pic_width,
+ pipe_config->dsc.slice_count);
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+index 9feda105f9131..a7acffbf15d1f 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+@@ -235,6 +235,14 @@ struct intel_guc {
+ * @shift: Right shift value for the gpm timestamp
+ */
+ u32 shift;
++
++ /**
++ * @last_stat_jiffies: jiffies at last actual stats collection time
++ * We use this timestamp to ensure we don't oversample the
++ * stats because runtime power management events can trigger
++ * stats collection at much higher rates than required.
++ */
++ unsigned long last_stat_jiffies;
+ } timestamp;
+
+ #ifdef CONFIG_DRM_I915_SELFTEST
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index 26a051ef119df..d7e4681d7297c 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1365,6 +1365,8 @@ static void __update_guc_busyness_stats(struct intel_guc *guc)
+ unsigned long flags;
+ ktime_t unused;
+
++ guc->timestamp.last_stat_jiffies = jiffies;
++
+ spin_lock_irqsave(&guc->timestamp.lock, flags);
+
+ guc_update_pm_timestamp(guc, &unused);
+@@ -1436,7 +1438,23 @@ void intel_guc_busyness_park(struct intel_gt *gt)
+ if (!guc_submission_initialized(guc))
+ return;
+
+- cancel_delayed_work(&guc->timestamp.work);
++ /*
++ * There is a race with suspend flow where the worker runs after suspend
++ * and causes an unclaimed register access warning. Cancel the worker
++ * synchronously here.
++ */
++ cancel_delayed_work_sync(&guc->timestamp.work);
++
++ /*
++ * Before parking, we should sample engine busyness stats if we need to.
++ * We can skip it if we are less than half a ping from the last time we
++ * sampled the busyness stats.
++ */
++ if (guc->timestamp.last_stat_jiffies &&
++ !time_after(jiffies, guc->timestamp.last_stat_jiffies +
++ (guc->timestamp.ping_delay / 2)))
++ return;
++
+ __update_guc_busyness_stats(guc);
+ }
+
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 4f5a51bb9e1e4..e77956ae88a4b 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -1849,14 +1849,14 @@
+
+ #define GT0_PERF_LIMIT_REASONS _MMIO(0x1381a8)
+ #define GT0_PERF_LIMIT_REASONS_MASK 0xde3
+-#define PROCHOT_MASK REG_BIT(1)
+-#define THERMAL_LIMIT_MASK REG_BIT(2)
+-#define RATL_MASK REG_BIT(6)
+-#define VR_THERMALERT_MASK REG_BIT(7)
+-#define VR_TDC_MASK REG_BIT(8)
+-#define POWER_LIMIT_4_MASK REG_BIT(9)
+-#define POWER_LIMIT_1_MASK REG_BIT(11)
+-#define POWER_LIMIT_2_MASK REG_BIT(12)
++#define PROCHOT_MASK REG_BIT(0)
++#define THERMAL_LIMIT_MASK REG_BIT(1)
++#define RATL_MASK REG_BIT(5)
++#define VR_THERMALERT_MASK REG_BIT(6)
++#define VR_TDC_MASK REG_BIT(7)
++#define POWER_LIMIT_4_MASK REG_BIT(8)
++#define POWER_LIMIT_1_MASK REG_BIT(10)
++#define POWER_LIMIT_2_MASK REG_BIT(11)
+
+ #define CHV_CLK_CTL1 _MMIO(0x101100)
+ #define VLV_CLK_CTL2 _MMIO(0x101104)
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 16460b169ed21..2a32729a74b51 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -1870,12 +1870,13 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
+ enum dma_resv_usage usage;
+ int idx;
+
+- obj->read_domains = 0;
+ if (flags & EXEC_OBJECT_WRITE) {
+ usage = DMA_RESV_USAGE_WRITE;
+ obj->write_domain = I915_GEM_DOMAIN_RENDER;
++ obj->read_domains = 0;
+ } else {
+ usage = DMA_RESV_USAGE_READ;
++ obj->write_domain = 0;
+ }
+
+ dma_fence_array_for_each(curr, idx, fence)
+diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
+index 8640a8a8a4691..44aa526294439 100644
+--- a/drivers/gpu/drm/meson/meson_plane.c
++++ b/drivers/gpu/drm/meson/meson_plane.c
+@@ -168,7 +168,7 @@ static void meson_plane_atomic_update(struct drm_plane *plane,
+
+ /* Enable OSD and BLK0, set max global alpha */
+ priv->viu.osd1_ctrl_stat = OSD_ENABLE |
+- (0xFF << OSD_GLOBAL_ALPHA_SHIFT) |
++ (0x100 << OSD_GLOBAL_ALPHA_SHIFT) |
+ OSD_BLK0_ENABLE;
+
+ priv->viu.osd1_ctrl_stat2 = readl(priv->io_base +
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index bb7e109534de1..d4b907889a21d 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -94,7 +94,7 @@ static void meson_viu_set_g12a_osd1_matrix(struct meson_drm *priv,
+ priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_COEF11_12));
+ writel(((m[9] & 0x1fff) << 16) | (m[10] & 0x1fff),
+ priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_COEF20_21));
+- writel((m[11] & 0x1fff) << 16,
++ writel((m[11] & 0x1fff),
+ priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_COEF22));
+
+ writel(((m[18] & 0xfff) << 16) | (m[19] & 0xfff),
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index a189982601a48..e8040defe6073 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -1270,7 +1270,8 @@ static const struct panel_desc innolux_n116bca_ea1 = {
+ },
+ .delay = {
+ .hpd_absent = 200,
+- .prepare_to_enable = 80,
++ .enable = 80,
++ .disable = 50,
+ .unprepare = 500,
+ },
+ };
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index d6e831576cd2b..88271f04615b0 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -1436,11 +1436,15 @@ static void rk3568_set_intf_mux(struct vop2_video_port *vp, int id,
+ die &= ~RK3568_SYS_DSP_INFACE_EN_HDMI_MUX;
+ die |= RK3568_SYS_DSP_INFACE_EN_HDMI |
+ FIELD_PREP(RK3568_SYS_DSP_INFACE_EN_HDMI_MUX, vp->id);
++ dip &= ~RK3568_DSP_IF_POL__HDMI_PIN_POL;
++ dip |= FIELD_PREP(RK3568_DSP_IF_POL__HDMI_PIN_POL, polflags);
+ break;
+ case ROCKCHIP_VOP2_EP_EDP0:
+ die &= ~RK3568_SYS_DSP_INFACE_EN_EDP_MUX;
+ die |= RK3568_SYS_DSP_INFACE_EN_EDP |
+ FIELD_PREP(RK3568_SYS_DSP_INFACE_EN_EDP_MUX, vp->id);
++ dip &= ~RK3568_DSP_IF_POL__EDP_PIN_POL;
++ dip |= FIELD_PREP(RK3568_DSP_IF_POL__EDP_PIN_POL, polflags);
+ break;
+ case ROCKCHIP_VOP2_EP_MIPI0:
+ die &= ~RK3568_SYS_DSP_INFACE_EN_MIPI0_MUX;
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index fc8c1420c0b69..64b14ac4c7b02 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -2368,13 +2368,6 @@ static int dmar_device_hotplug(acpi_handle handle, bool insert)
+ if (!dmar_in_use())
+ return 0;
+
+- /*
+- * It's unlikely that any I/O board is hot added before the IOMMU
+- * subsystem is initialized.
+- */
+- if (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled)
+- return -EOPNOTSUPP;
+-
+ if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {
+ tmp = handle;
+ } else {
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c0464959cbcdb..861a239d905a4 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3133,7 +3133,13 @@ static int __init init_dmars(void)
+
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
++ /*
++ * Call dmar_alloc_hwirq() with dmar_global_lock held,
++ * could cause possible lock race condition.
++ */
++ up_write(&dmar_global_lock);
+ ret = intel_svm_enable_prq(iommu);
++ down_write(&dmar_global_lock);
+ if (ret)
+ goto free_iommu;
+ }
+@@ -4039,6 +4045,7 @@ int __init intel_iommu_init(void)
+ force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) ||
+ platform_optin_force_iommu();
+
++ down_write(&dmar_global_lock);
+ if (dmar_table_init()) {
+ if (force_on)
+ panic("tboot: Failed to initialize DMAR table\n");
+@@ -4051,6 +4058,16 @@ int __init intel_iommu_init(void)
+ goto out_free_dmar;
+ }
+
++ up_write(&dmar_global_lock);
++
++ /*
++ * The bus notifier takes the dmar_global_lock, so lockdep will
++ * complain later when we register it under the lock.
++ */
++ dmar_register_bus_notifier();
++
++ down_write(&dmar_global_lock);
++
+ if (!no_iommu)
+ intel_iommu_debugfs_init();
+
+@@ -4098,9 +4115,11 @@ int __init intel_iommu_init(void)
+ pr_err("Initialization failed\n");
+ goto out_free_dmar;
+ }
++ up_write(&dmar_global_lock);
+
+ init_iommu_pm_ops();
+
++ down_read(&dmar_global_lock);
+ for_each_active_iommu(iommu, drhd) {
+ /*
+ * The flush queue implementation does not perform
+@@ -4118,11 +4137,13 @@ int __init intel_iommu_init(void)
+ "%s", iommu->name);
+ iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL);
+ }
++ up_read(&dmar_global_lock);
+
+ bus_set_iommu(&pci_bus_type, &intel_iommu_ops);
+ if (si_domain && !hw_pass_through)
+ register_memory_notifier(&intel_iommu_memory_nb);
+
++ down_read(&dmar_global_lock);
+ if (probe_acpi_namespace_devices())
+ pr_warn("ACPI name space devices didn't probe correctly\n");
+
+@@ -4133,15 +4154,17 @@ int __init intel_iommu_init(void)
+
+ iommu_disable_protect_mem_regions(iommu);
+ }
++ up_read(&dmar_global_lock);
+
+- intel_iommu_enabled = 1;
+- dmar_register_bus_notifier();
+ pr_info("Intel(R) Virtualization Technology for Directed I/O\n");
+
++ intel_iommu_enabled = 1;
++
+ return 0;
+
+ out_free_dmar:
+ intel_iommu_free_dmars();
++ up_write(&dmar_global_lock);
+ return ret;
+ }
+
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 520ed965bb7a4..583ca847a39cb 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -314,7 +314,7 @@ static int unflatten_dt_nodes(const void *blob,
+ for (offset = 0;
+ offset >= 0 && depth >= initial_depth;
+ offset = fdt_next_node(blob, offset, &depth)) {
+- if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH))
++ if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH - 1))
+ continue;
+
+ if (!IS_ENABLED(CONFIG_OF_KOBJ) &&
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index f69ab90b5e22d..6052f264bbb0a 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1546,6 +1546,7 @@ static int __init ccio_probe(struct parisc_device *dev)
+ }
+ ccio_ioc_init(ioc);
+ if (ccio_init_resources(ioc)) {
++ iounmap(ioc->ioc_regs);
+ kfree(ioc);
+ return -ENOMEM;
+ }
+diff --git a/drivers/pinctrl/qcom/pinctrl-sc8180x.c b/drivers/pinctrl/qcom/pinctrl-sc8180x.c
+index 6bec7f1431348..704a99d2f93ce 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sc8180x.c
++++ b/drivers/pinctrl/qcom/pinctrl-sc8180x.c
+@@ -530,10 +530,10 @@ DECLARE_MSM_GPIO_PINS(187);
+ DECLARE_MSM_GPIO_PINS(188);
+ DECLARE_MSM_GPIO_PINS(189);
+
+-static const unsigned int sdc2_clk_pins[] = { 190 };
+-static const unsigned int sdc2_cmd_pins[] = { 191 };
+-static const unsigned int sdc2_data_pins[] = { 192 };
+-static const unsigned int ufs_reset_pins[] = { 193 };
++static const unsigned int ufs_reset_pins[] = { 190 };
++static const unsigned int sdc2_clk_pins[] = { 191 };
++static const unsigned int sdc2_cmd_pins[] = { 192 };
++static const unsigned int sdc2_data_pins[] = { 193 };
+
+ enum sc8180x_functions {
+ msm_mux_adsp_ext,
+@@ -1582,7 +1582,7 @@ static const int sc8180x_acpi_reserved_gpios[] = {
+ static const struct msm_gpio_wakeirq_map sc8180x_pdc_map[] = {
+ { 3, 31 }, { 5, 32 }, { 8, 33 }, { 9, 34 }, { 10, 100 }, { 12, 104 },
+ { 24, 37 }, { 26, 38 }, { 27, 41 }, { 28, 42 }, { 30, 39 }, { 36, 43 },
+- { 37, 43 }, { 38, 45 }, { 39, 118 }, { 39, 125 }, { 41, 47 },
++ { 37, 44 }, { 38, 45 }, { 39, 118 }, { 39, 125 }, { 41, 47 },
+ { 42, 48 }, { 46, 50 }, { 47, 49 }, { 48, 51 }, { 49, 53 }, { 50, 52 },
+ { 51, 116 }, { 51, 123 }, { 53, 54 }, { 54, 55 }, { 55, 56 },
+ { 56, 57 }, { 58, 58 }, { 60, 60 }, { 68, 62 }, { 70, 63 }, { 76, 86 },
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
+index 21054fcacd345..18088f6f44b23 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-a100-r.c
+@@ -98,7 +98,7 @@ MODULE_DEVICE_TABLE(of, a100_r_pinctrl_match);
+ static struct platform_driver a100_r_pinctrl_driver = {
+ .probe = a100_r_pinctrl_probe,
+ .driver = {
+- .name = "sun50iw10p1-r-pinctrl",
++ .name = "sun50i-a100-r-pinctrl",
+ .of_match_table = a100_r_pinctrl_match,
+ },
+ };
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 386bb523c69ea..bdc3efdb12219 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -707,9 +707,6 @@ cifs_readv_from_socket(struct TCP_Server_Info *server, struct msghdr *smb_msg)
+ int length = 0;
+ int total_read;
+
+- smb_msg->msg_control = NULL;
+- smb_msg->msg_controllen = 0;
+-
+ for (total_read = 0; msg_data_left(smb_msg); total_read += length) {
+ try_to_freeze();
+
+@@ -765,7 +762,7 @@ int
+ cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
+ unsigned int to_read)
+ {
+- struct msghdr smb_msg;
++ struct msghdr smb_msg = {};
+ struct kvec iov = {.iov_base = buf, .iov_len = to_read};
+ iov_iter_kvec(&smb_msg.msg_iter, READ, &iov, 1, to_read);
+
+@@ -775,15 +772,13 @@ cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
+ ssize_t
+ cifs_discard_from_socket(struct TCP_Server_Info *server, size_t to_read)
+ {
+- struct msghdr smb_msg;
++ struct msghdr smb_msg = {};
+
+ /*
+ * iov_iter_discard already sets smb_msg.type and count and iov_offset
+ * and cifs_readv_from_socket sets msg_control and msg_controllen
+ * so little to initialize in struct msghdr
+ */
+- smb_msg.msg_name = NULL;
+- smb_msg.msg_namelen = 0;
+ iov_iter_discard(&smb_msg.msg_iter, READ, to_read);
+
+ return cifs_readv_from_socket(server, &smb_msg);
+@@ -793,7 +788,7 @@ int
+ cifs_read_page_from_socket(struct TCP_Server_Info *server, struct page *page,
+ unsigned int page_offset, unsigned int to_read)
+ {
+- struct msghdr smb_msg;
++ struct msghdr smb_msg = {};
+ struct bio_vec bv = {
+ .bv_page = page, .bv_len = to_read, .bv_offset = page_offset};
+ iov_iter_bvec(&smb_msg.msg_iter, READ, &bv, 1, to_read);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 0f03c0bfdf280..02dd591acabb3 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3327,6 +3327,9 @@ static ssize_t __cifs_writev(
+
+ ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from)
+ {
++ struct file *file = iocb->ki_filp;
++
++ cifs_revalidate_mapping(file->f_inode);
+ return __cifs_writev(iocb, from, true);
+ }
+
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index bfc9bd55870a0..8adc0f2a59518 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -196,10 +196,6 @@ smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
+
+ *sent = 0;
+
+- smb_msg->msg_name = (struct sockaddr *) &server->dstaddr;
+- smb_msg->msg_namelen = sizeof(struct sockaddr);
+- smb_msg->msg_control = NULL;
+- smb_msg->msg_controllen = 0;
+ if (server->noblocksnd)
+ smb_msg->msg_flags = MSG_DONTWAIT + MSG_NOSIGNAL;
+ else
+@@ -311,7 +307,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ sigset_t mask, oldmask;
+ size_t total_len = 0, sent, size;
+ struct socket *ssocket = server->ssocket;
+- struct msghdr smb_msg;
++ struct msghdr smb_msg = {};
+ __be32 rfc1002_marker;
+
+ if (cifs_rdma_enabled(server)) {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 8f8cd6e2d4dbc..597e3ce3f148a 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -604,6 +604,31 @@ static inline gfp_t nfs_io_gfp_mask(void)
+ return GFP_KERNEL;
+ }
+
++/*
++ * Special version of should_remove_suid() that ignores capabilities.
++ */
++static inline int nfs_should_remove_suid(const struct inode *inode)
++{
++ umode_t mode = inode->i_mode;
++ int kill = 0;
++
++ /* suid always must be killed */
++ if (unlikely(mode & S_ISUID))
++ kill = ATTR_KILL_SUID;
++
++ /*
++ * sgid without any exec bits is just a mandatory locking mark; leave
++ * it alone. If some exec bits are set, it's a real sgid; kill it.
++ */
++ if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
++ kill |= ATTR_KILL_SGID;
++
++ if (unlikely(kill && S_ISREG(mode)))
++ return kill;
++
++ return 0;
++}
++
+ /* unlink.c */
+ extern struct rpc_task *
+ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 068c45b3bc1ab..6dab9e4083729 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -78,10 +78,15 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+
+ status = nfs4_call_sync(server->client, server, msg,
+ &args.seq_args, &res.seq_res, 0);
+- if (status == 0)
++ if (status == 0) {
++ if (nfs_should_remove_suid(inode)) {
++ spin_lock(&inode->i_lock);
++ nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE);
++ spin_unlock(&inode->i_lock);
++ }
+ status = nfs_post_op_update_inode_force_wcc(inode,
+ res.falloc_fattr);
+-
++ }
+ if (msg->rpc_proc == &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE])
+ trace_nfs4_fallocate(inode, &args, status);
+ else
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 6ab5eeb000dc0..5e4bacb77bfc7 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -1051,22 +1051,31 @@ static void nfs_fill_super(struct super_block *sb, struct nfs_fs_context *ctx)
+ if (ctx->bsize)
+ sb->s_blocksize = nfs_block_size(ctx->bsize, &sb->s_blocksize_bits);
+
+- if (server->nfs_client->rpc_ops->version != 2) {
+- /* The VFS shouldn't apply the umask to mode bits. We will do
+- * so ourselves when necessary.
++ switch (server->nfs_client->rpc_ops->version) {
++ case 2:
++ sb->s_time_gran = 1000;
++ sb->s_time_min = 0;
++ sb->s_time_max = U32_MAX;
++ break;
++ case 3:
++ /*
++ * The VFS shouldn't apply the umask to mode bits.
++ * We will do so ourselves when necessary.
+ */
+ sb->s_flags |= SB_POSIXACL;
+ sb->s_time_gran = 1;
+- sb->s_export_op = &nfs_export_ops;
+- } else
+- sb->s_time_gran = 1000;
+-
+- if (server->nfs_client->rpc_ops->version != 4) {
+ sb->s_time_min = 0;
+ sb->s_time_max = U32_MAX;
+- } else {
++ sb->s_export_op = &nfs_export_ops;
++ break;
++ case 4:
++ sb->s_flags |= SB_POSIXACL;
++ sb->s_time_gran = 1;
+ sb->s_time_min = S64_MIN;
+ sb->s_time_max = S64_MAX;
++ if (server->caps & NFS_CAP_ATOMIC_OPEN_V1)
++ sb->s_export_op = &nfs_export_ops;
++ break;
+ }
+
+ sb->s_magic = NFS_SUPER_MAGIC;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 5d7e1c2061842..4212473c69ee9 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1497,31 +1497,6 @@ void nfs_commit_prepare(struct rpc_task *task, void *calldata)
+ NFS_PROTO(data->inode)->commit_rpc_prepare(task, data);
+ }
+
+-/*
+- * Special version of should_remove_suid() that ignores capabilities.
+- */
+-static int nfs_should_remove_suid(const struct inode *inode)
+-{
+- umode_t mode = inode->i_mode;
+- int kill = 0;
+-
+- /* suid always must be killed */
+- if (unlikely(mode & S_ISUID))
+- kill = ATTR_KILL_SUID;
+-
+- /*
+- * sgid without any exec bits is just a mandatory locking mark; leave
+- * it alone. If some exec bits are set, it's a real sgid; kill it.
+- */
+- if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
+- kill |= ATTR_KILL_SGID;
+-
+- if (unlikely(kill && S_ISREG(mode)))
+- return kill;
+-
+- return 0;
+-}
+-
+ static void nfs_writeback_check_extend(struct nfs_pgio_header *hdr,
+ struct nfs_fattr *fattr)
+ {
+diff --git a/include/linux/dmar.h b/include/linux/dmar.h
+index f3a3d95df5325..cbd714a198a0a 100644
+--- a/include/linux/dmar.h
++++ b/include/linux/dmar.h
+@@ -69,7 +69,6 @@ struct dmar_pci_notify_info {
+
+ extern struct rw_semaphore dmar_global_lock;
+ extern struct list_head dmar_drhd_units;
+-extern int intel_iommu_enabled;
+
+ #define for_each_drhd_unit(drhd) \
+ list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \
+@@ -93,8 +92,7 @@ extern int intel_iommu_enabled;
+ static inline bool dmar_rcu_check(void)
+ {
+ return rwsem_is_locked(&dmar_global_lock) ||
+- system_state == SYSTEM_BOOTING ||
+- (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled);
++ system_state == SYSTEM_BOOTING;
+ }
+
+ #define dmar_rcu_dereference(p) rcu_dereference_check((p), dmar_rcu_check())
+diff --git a/include/linux/of_device.h b/include/linux/of_device.h
+index 1d7992a02e36e..1a803e4335d30 100644
+--- a/include/linux/of_device.h
++++ b/include/linux/of_device.h
+@@ -101,8 +101,9 @@ static inline struct device_node *of_cpu_device_node_get(int cpu)
+ }
+
+ static inline int of_dma_configure_id(struct device *dev,
+- struct device_node *np,
+- bool force_dma)
++ struct device_node *np,
++ bool force_dma,
++ const u32 *id)
+ {
+ return 0;
+ }
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index c39d910d4b454..9ca397eed1638 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1195,6 +1195,8 @@ int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk);
+
+ static inline int xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
+ {
++ if (!sk_fullsock(osk))
++ return 0;
+ sk->sk_policy[0] = NULL;
+ sk->sk_policy[1] = NULL;
+ if (unlikely(osk->sk_policy[0] || osk->sk_policy[1]))
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 48833d0edd089..602da2cfd57c8 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -5061,7 +5061,8 @@ done:
+ req_set_fail(req);
+ __io_req_complete(req, issue_flags, ret, 0);
+ /* put file to avoid an attempt to IOPOLL the req */
+- io_put_file(req->file);
++ if (!(req->flags & REQ_F_FIXED_FILE))
++ io_put_file(req->file);
+ req->file = NULL;
+ return 0;
+ }
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index afc6c0e9c966e..f93983910b5e1 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -59,6 +59,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ int retval = 0;
+
+ mutex_lock(&cgroup_mutex);
++ cpus_read_lock();
+ percpu_down_write(&cgroup_threadgroup_rwsem);
+ for_each_root(root) {
+ struct cgroup *from_cgrp;
+@@ -72,6 +73,7 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
+ break;
+ }
+ percpu_up_write(&cgroup_threadgroup_rwsem);
++ cpus_read_unlock();
+ mutex_unlock(&cgroup_mutex);
+
+ return retval;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index da8b3cc67234d..028eb28c7882d 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1704,7 +1704,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ tcp_hdr(skb)->source, tcp_hdr(skb)->dest,
+ arg->uid);
+ security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4));
+- rt = ip_route_output_key(net, &fl4);
++ rt = ip_route_output_flow(net, &fl4, sk);
+ if (IS_ERR(rt))
+ return;
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 586c102ce152d..9fd92e263d0a3 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -819,6 +819,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
+ inet_twsk(sk)->tw_priority : sk->sk_priority;
+ transmit_time = tcp_transmit_time(sk);
++ xfrm_sk_clone_policy(ctl_sk, sk);
+ }
+ ip_send_unicast_reply(ctl_sk,
+ skb, &TCP_SKB_CB(skb)->header.h4.opt,
+@@ -827,6 +828,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ transmit_time);
+
+ ctl_sk->sk_mark = 0;
++ xfrm_sk_free_policy(ctl_sk);
+ sock_net_set(ctl_sk, &init_net);
+ __TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+ __TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index be09941fe6d9a..5eabe746cfa76 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -952,7 +952,10 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
+ * Underlying function will use this to retrieve the network
+ * namespace
+ */
+- dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
++ if (sk && sk->sk_state != TCP_TIME_WAIT)
++ dst = ip6_dst_lookup_flow(net, sk, &fl6, NULL); /*sk's xfrm_policy can be referred*/
++ else
++ dst = ip6_dst_lookup_flow(net, ctl_sk, &fl6, NULL);
+ if (!IS_ERR(dst)) {
+ skb_dst_set(buff, dst);
+ ip6_xmit(ctl_sk, buff, &fl6, fl6.flowi6_mark, NULL,
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c1a01947530f0..db8c0de1de422 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2858,6 +2858,9 @@ int rpc_clnt_test_and_add_xprt(struct rpc_clnt *clnt,
+
+ task = rpc_call_null_helper(clnt, xprt, NULL, RPC_TASK_ASYNC,
+ &rpc_cb_add_xprt_call_ops, data);
++ if (IS_ERR(task))
++ return PTR_ERR(task);
++
+ data->xps->xps_nunique_destaddr_xprts++;
+ rpc_put_task(task);
+ success:
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 53b024cea3b3e..5ecafffe7ce59 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1179,11 +1179,8 @@ xprt_request_dequeue_receive_locked(struct rpc_task *task)
+ {
+ struct rpc_rqst *req = task->tk_rqstp;
+
+- if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate)) {
++ if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate))
+ xprt_request_rb_remove(req->rq_xprt, req);
+- xdr_free_bvec(&req->rq_rcv_buf);
+- req->rq_private_buf.bvec = NULL;
+- }
+ }
+
+ /**
+@@ -1221,6 +1218,8 @@ void xprt_complete_rqst(struct rpc_task *task, int copied)
+
+ xprt->stat.recvs++;
+
++ xdr_free_bvec(&req->rq_rcv_buf);
++ req->rq_private_buf.bvec = NULL;
+ req->rq_private_buf.len = copied;
+ /* Ensure all writes are done before we update */
+ /* req->rq_reply_bytes_recvd */
+@@ -1453,6 +1452,7 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ xprt_request_dequeue_transmit_locked(task);
+ xprt_request_dequeue_receive_locked(task);
+ spin_unlock(&xprt->queue_lock);
++ xdr_free_bvec(&req->rq_rcv_buf);
+ }
+ }
+
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 61df4d33c48ff..7f340f18599c9 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -209,6 +209,7 @@ struct sigmatel_spec {
+
+ /* beep widgets */
+ hda_nid_t anabeep_nid;
++ bool beep_power_on;
+
+ /* SPDIF-out mux */
+ const char * const *spdif_labels;
+@@ -4443,6 +4444,28 @@ static int stac_suspend(struct hda_codec *codec)
+
+ return 0;
+ }
++
++static int stac_check_power_status(struct hda_codec *codec, hda_nid_t nid)
++{
++#ifdef CONFIG_SND_HDA_INPUT_BEEP
++ struct sigmatel_spec *spec = codec->spec;
++#endif
++ int ret = snd_hda_gen_check_power_status(codec, nid);
++
++#ifdef CONFIG_SND_HDA_INPUT_BEEP
++ if (nid == spec->gen.beep_nid && codec->beep) {
++ if (codec->beep->enabled != spec->beep_power_on) {
++ spec->beep_power_on = codec->beep->enabled;
++ if (spec->beep_power_on)
++ snd_hda_power_up_pm(codec);
++ else
++ snd_hda_power_down_pm(codec);
++ }
++ ret |= spec->beep_power_on;
++ }
++#endif
++ return ret;
++}
+ #else
+ #define stac_suspend NULL
+ #endif /* CONFIG_PM */
+@@ -4455,6 +4478,7 @@ static const struct hda_codec_ops stac_patch_ops = {
+ .unsol_event = snd_hda_jack_unsol_event,
+ #ifdef CONFIG_PM
+ .suspend = stac_suspend,
++ .check_power_status = stac_check_power_status,
+ #endif
+ };
+
+diff --git a/tools/include/uapi/asm/errno.h b/tools/include/uapi/asm/errno.h
+index d30439b4b8ab4..869379f91fe48 100644
+--- a/tools/include/uapi/asm/errno.h
++++ b/tools/include/uapi/asm/errno.h
+@@ -9,8 +9,8 @@
+ #include "../../../arch/alpha/include/uapi/asm/errno.h"
+ #elif defined(__mips__)
+ #include "../../../arch/mips/include/uapi/asm/errno.h"
+-#elif defined(__xtensa__)
+-#include "../../../arch/xtensa/include/uapi/asm/errno.h"
++#elif defined(__hppa__)
++#include "../../../arch/parisc/include/uapi/asm/errno.h"
+ #else
+ #include <asm-generic/errno.h>
+ #endif
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-23 12:50 Mike Pagano
From: Mike Pagano @ 2022-09-23 12:50 UTC
To: gentoo-commits
commit: 3c105a885991f3208dbf51c98306253e1f13b642
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 23 12:49:21 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 23 12:49:21 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3c105a88
Remove duplicate patch
Removed:
2700_revert-drm-i915-dma-resv-obj-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
2700_revert-drm-i915-dma-resv-obj-fix.patch | 107 ----------------------------
2 files changed, 111 deletions(-)
diff --git a/0000_README b/0000_README
index d3eec191..c5e283b0 100644
--- a/0000_README
+++ b/0000_README
@@ -107,10 +107,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2700_revert-drm-i915-dma-resv-obj-fix.patch
-From: https://bugs.gentoo.org/866023
-Desc: Revert Revert for drm i915 thanks to Luigi 'Comio' Mantellini
-
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2700_revert-drm-i915-dma-resv-obj-fix.patch b/2700_revert-drm-i915-dma-resv-obj-fix.patch
deleted file mode 100644
index a9fcaf4a..00000000
--- a/2700_revert-drm-i915-dma-resv-obj-fix.patch
+++ /dev/null
@@ -1,107 +0,0 @@
-From d481c481ca7813d688ffcb1c5418b48f83d945c1 Mon Sep 17 00:00:00 2001
-From: Luigi 'Comio' Mantellini <luigi.mantellini@gmail.com>
-Date: Sun, 28 Aug 2022 09:17:35 +0200
-Subject: [PATCH] Revert "drm/i915: Individualize fences before adding to
- dma_resv obj"
-
-This reverts commit 842d9346b2fdda4d2fb8ccb5b87faef1ac01ab51.
----
- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +-
- drivers/gpu/drm/i915/i915_vma.c | 48 ++++++++-----------
- 2 files changed, 21 insertions(+), 30 deletions(-)
-
-diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
-index 30fe847c6664..c326bd2b444f 100644
---- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
-+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
-@@ -999,8 +999,7 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
- }
- }
-
-- /* Reserve enough slots to accommodate composite fences */
-- err = dma_resv_reserve_fences(vma->obj->base.resv, eb->num_batches);
-+ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
- if (err)
- return err;
-
-diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
-index 16460b169ed2..e71826f0e4b1 100644
---- a/drivers/gpu/drm/i915/i915_vma.c
-+++ b/drivers/gpu/drm/i915/i915_vma.c
-@@ -23,7 +23,6 @@
- */
-
- #include <linux/sched/mm.h>
--#include <linux/dma-fence-array.h>
- #include <drm/drm_gem.h>
-
- #include "display/intel_frontbuffer.h"
-@@ -1839,21 +1838,6 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
- if (unlikely(err))
- return err;
-
-- /*
-- * Reserve fences slot early to prevent an allocation after preparing
-- * the workload and associating fences with dma_resv.
-- */
-- if (fence && !(flags & __EXEC_OBJECT_NO_RESERVE)) {
-- struct dma_fence *curr;
-- int idx;
--
-- dma_fence_array_for_each(curr, idx, fence)
-- ;
-- err = dma_resv_reserve_fences(vma->obj->base.resv, idx);
-- if (unlikely(err))
-- return err;
-- }
--
- if (flags & EXEC_OBJECT_WRITE) {
- struct intel_frontbuffer *front;
-
-@@ -1863,23 +1847,31 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
- i915_active_add_request(&front->write, rq);
- intel_frontbuffer_put(front);
- }
-- }
-
-- if (fence) {
-- struct dma_fence *curr;
-- enum dma_resv_usage usage;
-- int idx;
-+ if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
-+ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
-+ if (unlikely(err))
-+ return err;
-+ }
-
-- obj->read_domains = 0;
-- if (flags & EXEC_OBJECT_WRITE) {
-- usage = DMA_RESV_USAGE_WRITE;
-+ if (fence) {
-+ dma_resv_add_fence(vma->obj->base.resv, fence,
-+ DMA_RESV_USAGE_WRITE);
- obj->write_domain = I915_GEM_DOMAIN_RENDER;
-- } else {
-- usage = DMA_RESV_USAGE_READ;
-+ obj->read_domains = 0;
-+ }
-+ } else {
-+ if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
-+ err = dma_resv_reserve_fences(vma->obj->base.resv, 1);
-+ if (unlikely(err))
-+ return err;
- }
-
-- dma_fence_array_for_each(curr, idx, fence)
-- dma_resv_add_fence(vma->obj->base.resv, curr, usage);
-+ if (fence) {
-+ dma_resv_add_fence(vma->obj->base.resv, fence,
-+ DMA_RESV_USAGE_READ);
-+ obj->write_domain = 0;
-+ }
- }
-
- if (flags & EXEC_OBJECT_NEEDS_FENCE && vma->fence)
---
-2.37.2
-
^ permalink raw reply related [flat|nested] 27+ messages in thread
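One idiom worth noting in the reverted hunks above: dma_fence_array_for_each is run with an empty body purely so that idx is left holding the fence count, which then sizes the dma_resv_reserve_fences() call. A minimal standalone illustration of that count-by-iterating pattern (the macro and names below are stand-ins, not the kernel's):

#include <stdio.h>

/* Count-by-iterating: run the loop with an empty body so the index
 * variable ends up holding the element count, the same trick the
 * reverted i915 hunk used to size its dma_resv fence reservation. */
#define array_for_each(elem, idx, arr, n) \
	for ((idx) = 0; (idx) < (n) && ((elem) = (arr)[(idx)], 1); (idx)++)

int main(void)
{
	int fences[] = { 10, 20, 30 };
	int curr = 0, idx;

	array_for_each(curr, idx, fences, 3)
		;	/* empty body: only the final idx matters */
	(void)curr;
	printf("reserving %d fence slots\n", idx);	/* prints 3 */
	return 0;
}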
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-27 12:02 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-27 12:02 UTC (permalink / raw
To: gentoo-commits
commit: 50ea097c7e3b862632af3b0c99cb2467857c7fa7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 27 12:01:09 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 27 12:01:09 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=50ea097c
ACPI: processor_idle: Skip dummy wait for processors based on the Zen microarchitecture
See: https://lkml.org/lkml/2022/9/21/74
See: https://www.phoronix.com/news/Linux-AMD-Old-Chipset-WA
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch | 142 +++++++++++++++++++++
2 files changed, 146 insertions(+)
diff --git a/0000_README b/0000_README
index c5e283b0..034f4583 100644
--- a/0000_README
+++ b/0000_README
@@ -134,3 +134,7 @@ Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incl
Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
From: https://gitweb.gentoo.org/proj/linux-patches.git/
Desc: Set defaults for BMQ. Add archs as people test, default to N
+
+Patch: 5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
+From: https://lkml.org/lkml/2022/9/21/74
+Desc: ACPI: processor_idle: Skip dummy wait for processors based on the Zen microarchitecture
diff --git a/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch b/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
new file mode 100644
index 00000000..e0f6ebe8
--- /dev/null
+++ b/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
@@ -0,0 +1,142 @@
+linux-kernel.vger.kernel.org archive mirror
+ help / color / mirror / Atom feed
+From: K Prateek Nayak <kprateek.nayak@amd.com>
+To: <linux-kernel@vger.kernel.org>
+Cc: <rafael@kernel.org>, <lenb@kernel.org>,
+ <linux-acpi@vger.kernel.org>, <linux-pm@vger.kernel.org>,
+ <dave.hansen@linux.intel.com>, <bp@alien8.de>,
+ <tglx@linutronix.de>, <andi@lisas.de>, <puwen@hygon.cn>,
+ <mario.limonciello@amd.com>, <peterz@infradead.org>,
+ <rui.zhang@intel.com>, <gpiccoli@igalia.com>,
+ <daniel.lezcano@linaro.org>, <ananth.narayan@amd.com>,
+ <gautham.shenoy@amd.com>,
+ K Prateek Nayak <kprateek.nayak@amd.com>,
+ "Calvin Ong" <calvin.ong@amd.com>, <stable@vger.kernel.org>,
+ <regressions@lists.linux.dev>
+Subject: [PATCH] ACPI: processor_idle: Skip dummy wait for processors based on the Zen microarchitecture
+Date: Wed, 21 Sep 2022 12:06:38 +0530 [thread overview]
+Message-ID: <20220921063638.2489-1-kprateek.nayak@amd.com> (raw)
+
+Processors based on the Zen microarchitecture support IOPORT based deeper
+C-states. The idle driver reads the acpi_gbl_FADT.xpm_timer_block.address
+in the IOPORT based C-state exit path which is claimed to be a
+"Dummy wait op" and has been around since ACPI introduction to Linux
+dating back to Andy Grover's Mar 14, 2002 posting [1].
+The comment above the dummy operation was elaborated by Andreas Mohr back
+in 2006 in commit b488f02156d3d ("ACPI: restore comment justifying 'extra'
+P_LVLx access") [2] where the commit log claims:
+"this dummy read was about: STPCLK# doesn't get asserted in time on
+(some) chipsets, which is why we need to have a dummy I/O read to delay
+further instruction processing until the CPU is fully stopped."
+
+However, sampling certain workloads with IBS on AMD Zen3 system shows
+that a significant amount of time is spent in the dummy op, which
+incorrectly gets accounted as C-State residency. A large C-State
+residency value can prime the cpuidle governor to recommend a deeper
+C-State during the subsequent idle instances, starting a vicious cycle,
+leading to performance degradation on workloads that rapidly switch
+between busy and idle phases.
+
+One such workload is tbench where a massive performance degradation can
+be observed during certain runs. Following are some statistics gathered
+by running tbench with 128 clients, on a dual socket (2 x 64C/128T) Zen3
+system with the baseline kernel, baseline kernel keeping C2 disabled,
+and baseline kernel with this patch applied keeping C2 enabled:
+
+baseline kernel was tip:sched/core at
+commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU on
+wakelist if wakee cpu is idle")
+
+Kernel : baseline baseline + C2 disabled baseline + patch
+
+Min (MB/s) : 2215.06 33072.10 (+1393.05%) 33016.10 (+1390.52%)
+Max (MB/s) : 32938.80 34399.10 34774.50
+Median (MB/s) : 32191.80 33476.60 33805.70
+AMean (MB/s) : 22448.55 33649.27 (+49.89%) 33865.43 (+50.85%)
+AMean Stddev : 17526.70 680.14 880.72
+AMean CoefVar : 78.07% 2.02% 2.60%
+
+The data shows there are edge cases that can cause massive regressions
+in case of tbench. Profiling the bad runs with IBS shows a significant
+amount of time being spent in acpi_idle_do_entry method:
+
+Overhead Command Shared Object Symbol
+ 74.76% swapper [kernel.kallsyms] [k] acpi_idle_do_entry
+ 0.71% tbench [kernel.kallsyms] [k] update_sd_lb_stats.constprop.0
+ 0.69% tbench_srv [kernel.kallsyms] [k] update_sd_lb_stats.constprop.0
+ 0.49% swapper [kernel.kallsyms] [k] psi_group_change
+ ...
+
+Annotation of acpi_idle_do_entry method reveals almost all the time in
+acpi_idle_do_entry is spent on the port I/O in wait_for_freeze():
+
+ 0.14 │ in (%dx),%al # <------ First "in" corresponding to inb(cx->address)
+ 0.51 │ mov 0x144d64d(%rip),%rax
+ 0.00 │ test $0x80000000,%eax
+ │ ↓ jne 62 # <------ Skip if running in guest
+ 0.00 │ mov 0x19800c3(%rip),%rdx
+ 99.33 │ in (%dx),%eax # <------ Second "in" corresponding to inl(acpi_gbl_FADT.xpm_timer_block.address)
+ 0.00 │62: mov -0x8(%rbp),%r12
+ 0.00 │ leave
+ 0.00 │ ← ret
+
+This overhead is reflected in the C2 residency on the test system where
+C2 is an IOPORT based C-State. The total C-state residency reported by
+"cpupower idle-info" on CPU0 for good and bad case over the 80s tbench
+run is as follows (all numbers are in microseconds):
+
+ Good Run Bad Run
+ (Baseline)
+
+POLL: 43338 6231 (-85.62%)
+C1 (MWAIT Based): 23576156 363861 (-98.45%)
+C2 (IOPORT Based): 10781218 77027280 (+614.45%)
+
+The larger residency value in bad case leads to the system recommending
+C2 state again for subsequent idle instances. The pattern lasts till the
+end of the tbench run. Following is the breakdown of "entry_method"
+passed to acpi_idle_do_entry during good run and bad run:
+
+ Good Run Bad Run
+ (Baseline)
+
+Number of times acpi_idle_do_entry was called: 6149573 6149050 (-0.01%)
+ |-> Number of times entry_method was "ACPI_CSTATE_FFH": 6141494 88144 (-98.56%)
+ |-> Number of times entry_method was "ACPI_CSTATE_HALT": 0 0 (+0.00%)
+ |-> Number of times entry_method was "ACPI_CSTATE_SYSTEMIO": 8079 6060906 (+74920.49%)
+
+For processors based on the Zen microarchitecture, this dummy wait op is
+unnecessary and can be skipped when choosing IOPORT based C-States to
+avoid polluting the C-state residency information.
+
+Link: https://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux-fullhistory.git/commit/?id=972c16130d9dc182cedcdd408408d9eacc7d6a2d [1]
+Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b488f02156d3deb08f5ad7816d565c370a8cc6f1 [2]
+
+Suggested-by: Calvin Ong <calvin.ong@amd.com>
+Cc: stable@vger.kernel.org
+Cc: regressions@lists.linux.dev
+Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
+---
+ drivers/acpi/processor_idle.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 16a1663d02d4..18850aa2b79b 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -528,8 +528,11 @@ static int acpi_idle_bm_check(void)
+ static void wait_for_freeze(void)
+ {
+ #ifdef CONFIG_X86
+- /* No delay is needed if we are in guest */
+- if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ /*
++ * No delay is needed if we are in guest or on a processor
++ * based on the Zen microarchitecture.
++ */
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR) || boot_cpu_has(X86_FEATURE_ZEN))
+ return;
+ #endif
+ /* Dummy wait op - must do something useless after P_LVL2 read
+--
+2.25.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
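X86_FEATURE_ZEN, which the patch above keys off, is a synthetic flag the kernel sets for AMD parts based on the Zen microarchitecture (families 0x17 and 0x19 at the time); it is not exported verbatim to userspace. A rough userspace approximation, useful when checking whether a machine falls in the class this patch targets (the family test here is an assumption, not the kernel's exact logic):

#include <stdio.h>
#include <string.h>

/* Approximates the kernel's synthetic X86_FEATURE_ZEN flag from userspace:
 * vendor AuthenticAMD and CPU family >= 0x17 (23, the first Zen family).
 * Illustrative only; the in-kernel check is authoritative. */
int main(void)
{
	FILE *f = fopen("/proc/cpuinfo", "r");
	char line[256];
	int amd = 0, family = 0;

	if (!f) {
		perror("/proc/cpuinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "AuthenticAMD"))
			amd = 1;
		if (!strncmp(line, "cpu family", 10))
			sscanf(line, "cpu family : %d", &family);
	}
	fclose(f);
	puts(amd && family >= 0x17 ? "likely Zen-based: dummy wait skipped"
				   : "not a Zen-family CPU");
	return 0;
}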
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-27 12:09 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-27 12:09 UTC (permalink / raw
To: gentoo-commits
commit: ce86a2a64aefdebb1acbe42d2557bc55ba71ec46
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 27 12:09:22 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 27 12:09:22 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ce86a2a6
Update patch directly from Linus' tree
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 2 +-
5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch | 202 +++++++--------------
2 files changed, 70 insertions(+), 134 deletions(-)
diff --git a/0000_README b/0000_README
index 034f4583..591733a1 100644
--- a/0000_README
+++ b/0000_README
@@ -136,5 +136,5 @@ From: https://gitweb.gentoo.org/proj/linux-patches.git/
Desc: Set defaults for BMQ. Add archs as people test, default to N
Patch: 5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
-From: https://lkml.org/lkml/2022/9/21/74
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/acpi/processor_idle.c?id=e400ad8b7e6a1b9102123c6240289a811501f7d9
Desc: ACPI: processor_idle: Skip dummy wait for processors based on the Zen microarchitecture
diff --git a/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch b/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
index e0f6ebe8..f4db4c95 100644
--- a/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
+++ b/5030_ACPI-Skip-dummy-wait-for-Zen-processors.patch
@@ -1,142 +1,78 @@
-linux-kernel.vger.kernel.org archive mirror
- help / color / mirror / Atom feed
-From: K Prateek Nayak <kprateek.nayak@amd.com>
-To: <linux-kernel@vger.kernel.org>
-Cc: <rafael@kernel.org>, <lenb@kernel.org>,
- <linux-acpi@vger.kernel.org>, <linux-pm@vger.kernel.org>,
- <dave.hansen@linux.intel.com>, <bp@alien8.de>,
- <tglx@linutronix.de>, <andi@lisas.de>, <puwen@hygon.cn>,
- <mario.limonciello@amd.com>, <peterz@infradead.org>,
- <rui.zhang@intel.com>, <gpiccoli@igalia.com>,
- <daniel.lezcano@linaro.org>, <ananth.narayan@amd.com>,
- <gautham.shenoy@amd.com>,
- K Prateek Nayak <kprateek.nayak@amd.com>,
- "Calvin Ong" <calvin.ong@amd.com>, <stable@vger.kernel.org>,
- <regressions@lists.linux.dev>
-Subject: [PATCH] ACPI: processor_idle: Skip dummy wait for processors based on the Zen microarchitecture
-Date: Wed, 21 Sep 2022 12:06:38 +0530 [thread overview]
-Message-ID: <20220921063638.2489-1-kprateek.nayak@amd.com> (raw)
-
-Processors based on the Zen microarchitecture support IOPORT based deeper
-C-states. The idle driver reads the acpi_gbl_FADT.xpm_timer_block.address
-in the IOPORT based C-state exit path which is claimed to be a
-"Dummy wait op" and has been around since ACPI introduction to Linux
-dating back to Andy Grover's Mar 14, 2002 posting [1].
-The comment above the dummy operation was elaborated by Andreas Mohr back
-in 2006 in commit b488f02156d3d ("ACPI: restore comment justifying 'extra'
-P_LVLx access") [2] where the commit log claims:
-"this dummy read was about: STPCLK# doesn't get asserted in time on
-(some) chipsets, which is why we need to have a dummy I/O read to delay
-further instruction processing until the CPU is fully stopped."
-
-However, sampling certain workloads with IBS on AMD Zen3 system shows
-that a significant amount of time is spent in the dummy op, which
-incorrectly gets accounted as C-State residency. A large C-State
-residency value can prime the cpuidle governor to recommend a deeper
-C-State during the subsequent idle instances, starting a vicious cycle,
-leading to performance degradation on workloads that rapidly switch
-between busy and idle phases.
-
-One such workload is tbench where a massive performance degradation can
-be observed during certain runs. Following are some statistics gathered
-by running tbench with 128 clients, on a dual socket (2 x 64C/128T) Zen3
-system with the baseline kernel, baseline kernel keeping C2 disabled,
-and baseline kernel with this patch applied keeping C2 enabled:
-
-baseline kernel was tip:sched/core at
-commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU on
-wakelist if wakee cpu is idle")
-
-Kernel : baseline baseline + C2 disabled baseline + patch
-
-Min (MB/s) : 2215.06 33072.10 (+1393.05%) 33016.10 (+1390.52%)
-Max (MB/s) : 32938.80 34399.10 34774.50
-Median (MB/s) : 32191.80 33476.60 33805.70
-AMean (MB/s) : 22448.55 33649.27 (+49.89%) 33865.43 (+50.85%)
-AMean Stddev : 17526.70 680.14 880.72
-AMean CoefVar : 78.07% 2.02% 2.60%
-
-The data shows there are edge cases that can cause massive regressions
-in case of tbench. Profiling the bad runs with IBS shows a significant
-amount of time being spent in acpi_idle_do_entry method:
-
-Overhead Command Shared Object Symbol
- 74.76% swapper [kernel.kallsyms] [k] acpi_idle_do_entry
- 0.71% tbench [kernel.kallsyms] [k] update_sd_lb_stats.constprop.0
- 0.69% tbench_srv [kernel.kallsyms] [k] update_sd_lb_stats.constprop.0
- 0.49% swapper [kernel.kallsyms] [k] psi_group_change
- ...
-
-Annotation of acpi_idle_do_entry method reveals almost all the time in
-acpi_idle_do_entry is spent on the port I/O in wait_for_freeze():
-
- 0.14 │ in (%dx),%al # <------ First "in" corresponding to inb(cx->address)
- 0.51 │ mov 0x144d64d(%rip),%rax
- 0.00 │ test $0x80000000,%eax
- │ ↓ jne 62 # <------ Skip if running in guest
- 0.00 │ mov 0x19800c3(%rip),%rdx
- 99.33 │ in (%dx),%eax # <------ Second "in" corresponding to inl(acpi_gbl_FADT.xpm_timer_block.address)
- 0.00 │62: mov -0x8(%rbp),%r12
- 0.00 │ leave
- 0.00 │ ← ret
-
-This overhead is reflected in the C2 residency on the test system where
-C2 is an IOPORT based C-State. The total C-state residency reported by
-"cpupower idle-info" on CPU0 for good and bad case over the 80s tbench
-run is as follows (all numbers are in microseconds):
-
- Good Run Bad Run
- (Baseline)
-
-POLL: 43338 6231 (-85.62%)
-C1 (MWAIT Based): 23576156 363861 (-98.45%)
-C2 (IOPORT Based): 10781218 77027280 (+614.45%)
-
-The larger residency value in bad case leads to the system recommending
-C2 state again for subsequent idle instances. The pattern lasts till the
-end of the tbench run. Following is the breakdown of "entry_method"
-passed to acpi_idle_do_entry during good run and bad run:
-
- Good Run Bad Run
- (Baseline)
-
-Number of times acpi_idle_do_entry was called: 6149573 6149050 (-0.01%)
- |-> Number of times entry_method was "ACPI_CSTATE_FFH": 6141494 88144 (-98.56%)
- |-> Number of times entry_method was "ACPI_CSTATE_HALT": 0 0 (+0.00%)
- |-> Number of times entry_method was "ACPI_CSTATE_SYSTEMIO": 8079 6060906 (+74920.49%)
-
-For processors based on the Zen microarchitecture, this dummy wait op is
-unnecessary and can be skipped when choosing IOPORT based C-States to
-avoid polluting the C-state residency information.
-
-Link: https://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux-fullhistory.git/commit/?id=972c16130d9dc182cedcdd408408d9eacc7d6a2d [1]
-Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b488f02156d3deb08f5ad7816d565c370a8cc6f1 [2]
-
-Suggested-by: Calvin Ong <calvin.ong@amd.com>
-Cc: stable@vger.kernel.org
-Cc: regressions@lists.linux.dev
-Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
+From e400ad8b7e6a1b9102123c6240289a811501f7d9 Mon Sep 17 00:00:00 2001
+From: Dave Hansen <dave.hansen@intel.com>
+Date: Thu, 22 Sep 2022 11:47:45 -0700
+Subject: ACPI: processor idle: Practically limit "Dummy wait" workaround to
+ old Intel systems
+
+Old, circa 2002 chipsets have a bug: they don't go idle when they are
+supposed to. So, a workaround was added to slow the CPU down and
+ensure that the CPU waits a bit for the chipset to actually go idle.
+This workaround is ancient and has been in place in some form since
+the original kernel ACPI implementation.
+
+But, this workaround is very painful on modern systems. The "inl()"
+can take thousands of cycles (see Link: for some more detailed
+numbers and some fun kernel archaeology).
+
+First and foremost, modern systems should not be using this code.
+Typical Intel systems have not used it in over a decade because it is
+horribly inferior to MWAIT-based idle.
+
+Despite this, people do seem to be tripping over this workaround on
+AMD system today.
+
+Limit the "dummy wait" workaround to Intel systems. Keep Modern AMD
+systems from tripping over the workaround. Remotely modern Intel
+systems use intel_idle instead of this code and will, in practice,
+remain unaffected by the dummy wait.
+
+Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
+Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
+Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
+Link: https://lore.kernel.org/all/20220921063638.2489-1-kprateek.nayak@amd.com/
+Link: https://lkml.kernel.org/r/20220922184745.3252932-1-dave.hansen@intel.com
---
- drivers/acpi/processor_idle.c | 7 +++++--
- 1 file changed, 5 insertions(+), 2 deletions(-)
+ drivers/acpi/processor_idle.c | 23 ++++++++++++++++++++---
+ 1 file changed, 20 insertions(+), 3 deletions(-)
+
+(limited to 'drivers/acpi/processor_idle.c')
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
-index 16a1663d02d4..18850aa2b79b 100644
+index 16a1663d02d46..9f40917c49efb 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
-@@ -528,8 +528,11 @@ static int acpi_idle_bm_check(void)
- static void wait_for_freeze(void)
- {
- #ifdef CONFIG_X86
-- /* No delay is needed if we are in guest */
-- if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+@@ -531,10 +531,27 @@ static void wait_for_freeze(void)
+ /* No delay is needed if we are in guest */
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ return;
+ /*
-+ * No delay is needed if we are in guest or on a processor
-+ * based on the Zen microarchitecture.
++ * Modern (>=Nehalem) Intel systems use ACPI via intel_idle,
++ * not this code. Assume that any Intel systems using this
++ * are ancient and may need the dummy wait. This also assumes
++ * that the motivating chipset issue was Intel-only.
+ */
-+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR) || boot_cpu_has(X86_FEATURE_ZEN))
- return;
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++ return;
#endif
- /* Dummy wait op - must do something useless after P_LVL2 read
+- /* Dummy wait op - must do something useless after P_LVL2 read
+- because chipsets cannot guarantee that STPCLK# signal
+- gets asserted in time to freeze execution properly. */
++ /*
++ * Dummy wait op - must do something useless after P_LVL2 read
++ * because chipsets cannot guarantee that STPCLK# signal gets
++ * asserted in time to freeze execution properly
++ *
++ * This workaround has been in place since the original ACPI
++ * implementation was merged, circa 2002.
++ *
++ * If a profile is pointing to this instruction, please first
++ * consider moving your system to a more modern idle
++ * mechanism.
++ */
+ inl(acpi_gbl_FADT.xpm_timer_block.address);
+ }
+
--
-2.25.1
+cgit
^ permalink raw reply related [flat|nested] 27+ messages in thread
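Condensing the two hunks for the x86 case, the post-patch behavior of wait_for_freeze() reduces to the sketch below (kernel helpers are stubbed out; drivers/acpi/processor_idle.c remains the authoritative code):

#include <stdbool.h>

/* Stand-ins for kernel facilities, for illustration only. */
static bool hypervisor_guest;	/* boot_cpu_has(X86_FEATURE_HYPERVISOR) */
static bool vendor_intel;	/* boot_cpu_data.x86_vendor == X86_VENDOR_INTEL */
static unsigned long pm_timer_port;	/* acpi_gbl_FADT.xpm_timer_block.address */

static unsigned int inl(unsigned long port) { (void)port; return 0; }

/* Post-patch logic: the expensive dummy PM-timer read is reached only on
 * Intel bare metal, where the circa-2002 STPCLK# erratum may apply. */
static void wait_for_freeze(void)
{
	if (hypervisor_guest)
		return;		/* no delay needed in a guest */
	if (!vendor_intel)
		return;		/* AMD, Hygon, etc. skip the workaround */
	inl(pm_timer_port);	/* dummy wait op */
}

int main(void)
{
	wait_for_freeze();
	return 0;
}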
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-09-28 9:55 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-09-28 9:55 UTC (permalink / raw
To: gentoo-commits
commit: 27a162cbb4e6bf6258462f89e5da2c02364e125e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 28 09:55:39 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 28 09:55:39 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=27a162cb
Linux patch 5.19.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-5.19.12.patch | 9776 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9780 insertions(+)
diff --git a/0000_README b/0000_README
index 591733a1..05763bb8 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-5.19.11.patch
From: http://www.kernel.org
Desc: Linux 5.19.11
+Patch: 1011_linux-5.19.12.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-5.19.12.patch b/1011_linux-5.19.12.patch
new file mode 100644
index 00000000..8c6e32f4
--- /dev/null
+++ b/1011_linux-5.19.12.patch
@@ -0,0 +1,9776 @@
+diff --git a/Makefile b/Makefile
+index 01463a22926d5..7df4c195c8ab2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/lan966x.dtsi b/arch/arm/boot/dts/lan966x.dtsi
+index 38e90a31d2dd1..25c19f9d0a12f 100644
+--- a/arch/arm/boot/dts/lan966x.dtsi
++++ b/arch/arm/boot/dts/lan966x.dtsi
+@@ -515,13 +515,13 @@
+
+ phy0: ethernet-phy@1 {
+ reg = <1>;
+- interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+ };
+
+ phy1: ethernet-phy@2 {
+ reg = <2>;
+- interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-mx8menlo.dts b/arch/arm64/boot/dts/freescale/imx8mm-mx8menlo.dts
+index 92eaf4ef45638..57ecdfa0dfc09 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-mx8menlo.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-mx8menlo.dts
+@@ -152,11 +152,11 @@
+ * CPLD_reset is RESET_SOFT in schematic
+ */
+ gpio-line-names =
+- "CPLD_D[1]", "CPLD_int", "CPLD_reset", "",
+- "", "CPLD_D[0]", "", "",
+- "", "", "", "CPLD_D[2]",
+- "CPLD_D[3]", "CPLD_D[4]", "CPLD_D[5]", "CPLD_D[6]",
+- "CPLD_D[7]", "", "", "",
++ "CPLD_D[6]", "CPLD_int", "CPLD_reset", "",
++ "", "CPLD_D[7]", "", "",
++ "", "", "", "CPLD_D[5]",
++ "CPLD_D[4]", "CPLD_D[3]", "CPLD_D[2]", "CPLD_D[1]",
++ "CPLD_D[0]", "", "", "",
+ "", "", "", "",
+ "", "", "", "KBD_intK",
+ "", "", "", "";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts b/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts
+index 286d2df01cfa7..7e0aeb2db3054 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts
+@@ -5,7 +5,6 @@
+
+ /dts-v1/;
+
+-#include <dt-bindings/phy/phy-imx8-pcie.h>
+ #include "imx8mm-tqma8mqml.dtsi"
+ #include "mba8mx.dtsi"
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi
+index 16ee9b5179e6e..f649dfacb4b69 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi
+@@ -3,6 +3,7 @@
+ * Copyright 2020-2021 TQ-Systems GmbH
+ */
+
++#include <dt-bindings/phy/phy-imx8-pcie.h>
+ #include "imx8mm.dtsi"
+
+ / {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index c2d4da25482ff..44b473494d0f5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -359,8 +359,8 @@
+ nxp,dvs-standby-voltage = <850000>;
+ regulator-always-on;
+ regulator-boot-on;
+- regulator-max-microvolt = <950000>;
+- regulator-min-microvolt = <850000>;
++ regulator-max-microvolt = <1050000>;
++ regulator-min-microvolt = <805000>;
+ regulator-name = "On-module +VDD_ARM (BUCK2)";
+ regulator-ramp-delay = <3125>;
+ };
+@@ -368,8 +368,8 @@
+ reg_vdd_dram: BUCK3 {
+ regulator-always-on;
+ regulator-boot-on;
+- regulator-max-microvolt = <950000>;
+- regulator-min-microvolt = <850000>;
++ regulator-max-microvolt = <1000000>;
++ regulator-min-microvolt = <805000>;
+ regulator-name = "On-module +VDD_GPU_VPU_DDR (BUCK3)";
+ };
+
+@@ -408,7 +408,7 @@
+ reg_vdd_snvs: LDO2 {
+ regulator-always-on;
+ regulator-boot-on;
+- regulator-max-microvolt = <900000>;
++ regulator-max-microvolt = <800000>;
+ regulator-min-microvolt = <800000>;
+ regulator-name = "On-module +V0.8_SNVS (LDO2)";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index e41e1d56f980d..7bd4eecd592ef 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -672,7 +672,6 @@
+ <&clk IMX8MN_CLK_GPU_SHADER>,
+ <&clk IMX8MN_CLK_GPU_BUS_ROOT>,
+ <&clk IMX8MN_CLK_GPU_AHB>;
+- resets = <&src IMX8MQ_RESET_GPU_RESET>;
+ };
+
+ pgc_dispmix: power-domain@3 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+index 6630ec561dc25..211e6a1b296e1 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+@@ -123,8 +123,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_reg_can>;
+ regulator-name = "can2_stby";
+- gpio = <&gpio3 19 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
++ gpio = <&gpio3 19 GPIO_ACTIVE_LOW>;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+@@ -484,35 +483,40 @@
+ lan1: port@0 {
+ reg = <0>;
+ label = "lan1";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan2: port@1 {
+ reg = <1>;
+ label = "lan2";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan3: port@2 {
+ reg = <2>;
+ label = "lan3";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan4: port@3 {
+ reg = <3>;
+ label = "lan4";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+ lan5: port@4 {
+ reg = <4>;
+ label = "lan5";
++ phy-mode = "internal";
+ local-mac-address = [00 00 00 00 00 00];
+ };
+
+- port@6 {
+- reg = <6>;
++ port@5 {
++ reg = <5>;
+ label = "cpu";
+ ethernet = <&fec>;
+ phy-mode = "rgmii-id";
+diff --git a/arch/arm64/boot/dts/freescale/imx8ulp.dtsi b/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
+index 09f7364dd1d05..1cd389b1b95d6 100644
+--- a/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
+@@ -172,6 +172,7 @@
+ compatible = "fsl,imx8ulp-pcc3";
+ reg = <0x292d0000 0x10000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ tpm5: tpm@29340000 {
+@@ -270,6 +271,7 @@
+ compatible = "fsl,imx8ulp-pcc4";
+ reg = <0x29800000 0x10000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ lpi2c6: i2c@29840000 {
+@@ -414,6 +416,7 @@
+ compatible = "fsl,imx8ulp-pcc5";
+ reg = <0x2da70000 0x10000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/px30-engicam-px30-core.dtsi b/arch/arm64/boot/dts/rockchip/px30-engicam-px30-core.dtsi
+index 7249871530ab9..5eecbefa8a336 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-engicam-px30-core.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-engicam-px30-core.dtsi
+@@ -2,8 +2,8 @@
+ /*
+ * Copyright (c) 2020 Fuzhou Rockchip Electronics Co., Ltd
+ * Copyright (c) 2020 Engicam srl
+- * Copyright (c) 2020 Amarula Solutons
+- * Copyright (c) 2020 Amarula Solutons(India)
++ * Copyright (c) 2020 Amarula Solutions
++ * Copyright (c) 2020 Amarula Solutions(India)
+ */
+
+ #include <dt-bindings/gpio/gpio.h>
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts b/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
+index 31ebb4e5fd330..0f9cc042d9bf0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
+@@ -88,3 +88,8 @@
+ };
+ };
+ };
++
++&wlan_host_wake_l {
++ /* Kevin has an external pull up, but Bob does not. */
++ rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
++};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
+index 50d459ee4831c..af5810e5f5b79 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
+@@ -244,6 +244,14 @@
+ &edp {
+ status = "okay";
+
++ /*
++ * eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
++ * set this here, because rk3399-gru.dtsi ensures we can generate this
++ * off GPLL=600MHz, whereas some other RK3399 boards may not.
++ */
++ assigned-clocks = <&cru PCLK_EDP>;
++ assigned-clock-rates = <24000000>;
++
+ ports {
+ edp_out: port@1 {
+ reg = <1>;
+@@ -578,6 +586,7 @@ ap_i2c_tp: &i2c5 {
+ };
+
+ wlan_host_wake_l: wlan-host-wake-l {
++ /* Kevin has an external pull up, but Bob does not */
+ rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index b1ac3a89f259c..aa3e21bd6c8f4 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -62,7 +62,6 @@
+ vcc5v0_host: vcc5v0-host-regulator {
+ compatible = "regulator-fixed";
+ gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+- enable-active-low;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc5v0_host_en>;
+ regulator-name = "vcc5v0_host";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
+index fa953b7366421..fdbfdf3634e43 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
+@@ -163,7 +163,6 @@
+
+ vcc3v3_sd: vcc3v3_sd {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpio = <&gpio0 RK_PA5 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc_sd_h>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-b.dts b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-b.dts
+index 02d5f5a8ca036..528bb4e8ac776 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-b.dts
+@@ -506,7 +506,7 @@
+ disable-wp;
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc0_bus4 &sdmmc0_clk &sdmmc0_cmd &sdmmc0_det>;
+- sd-uhs-sdr104;
++ sd-uhs-sdr50;
+ vmmc-supply = <&vcc3v3_sd>;
+ vqmmc-supply = <&vccio_sd>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts b/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
+index 622be8be9813d..282f5c74d5cda 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
+@@ -618,7 +618,7 @@
+ };
+
+ &usb2phy0_otg {
+- vbus-supply = <&vcc5v0_usb_otg>;
++ phy-supply = <&vcc5v0_usb_otg>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+index 0813c0c5abded..26912f02684ce 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+@@ -543,7 +543,7 @@
+ };
+
+ &usb2phy0_otg {
+- vbus-supply = <&vcc5v0_usb_otg>;
++ phy-supply = <&vcc5v0_usb_otg>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 707b5451929d4..d4abb948eb14e 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -251,7 +251,7 @@ static void amu_fie_setup(const struct cpumask *cpus)
+ for_each_cpu(cpu, cpus) {
+ if (!freq_counters_valid(cpu) ||
+ freq_inv_set_max_ratio(cpu,
+- cpufreq_get_hw_max_freq(cpu) * 1000,
++ cpufreq_get_hw_max_freq(cpu) * 1000ULL,
+ arch_timer_get_rate()))
+ return;
+ }
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index 7a623684d9b5e..2d5a0bcb0cec1 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -50,6 +50,7 @@ struct clk *clk_get_io(void)
+ {
+ return &cpu_clk_generic[2];
+ }
++EXPORT_SYMBOL_GPL(clk_get_io);
+
+ struct clk *clk_get_ppe(void)
+ {
+diff --git a/arch/mips/loongson32/common/platform.c b/arch/mips/loongson32/common/platform.c
+index 794c96c2a4cdd..311dc1580bbde 100644
+--- a/arch/mips/loongson32/common/platform.c
++++ b/arch/mips/loongson32/common/platform.c
+@@ -98,7 +98,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ if (plat_dat->bus_id) {
+ __raw_writel(__raw_readl(LS1X_MUX_CTRL0) | GMAC1_USE_UART1 |
+ GMAC1_USE_UART0, LS1X_MUX_CTRL0);
+- switch (plat_dat->interface) {
++ switch (plat_dat->phy_interface) {
+ case PHY_INTERFACE_MODE_RGMII:
+ val &= ~(GMAC1_USE_TXCLK | GMAC1_USE_PWM23);
+ break;
+@@ -107,12 +107,12 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ break;
+ default:
+ pr_err("unsupported mii mode %d\n",
+- plat_dat->interface);
++ plat_dat->phy_interface);
+ return -ENOTSUPP;
+ }
+ val &= ~GMAC1_SHUT;
+ } else {
+- switch (plat_dat->interface) {
++ switch (plat_dat->phy_interface) {
+ case PHY_INTERFACE_MODE_RGMII:
+ val &= ~(GMAC0_USE_TXCLK | GMAC0_USE_PWM01);
+ break;
+@@ -121,7 +121,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ break;
+ default:
+ pr_err("unsupported mii mode %d\n",
+- plat_dat->interface);
++ plat_dat->phy_interface);
+ return -ENOTSUPP;
+ }
+ val &= ~GMAC0_SHUT;
+@@ -131,7 +131,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
+ plat_dat = dev_get_platdata(&pdev->dev);
+
+ val &= ~PHY_INTF_SELI;
+- if (plat_dat->interface == PHY_INTERFACE_MODE_RMII)
++ if (plat_dat->phy_interface == PHY_INTERFACE_MODE_RMII)
+ val |= 0x4 << PHY_INTF_SELI_SHIFT;
+ __raw_writel(val, LS1X_MUX_CTRL1);
+
+@@ -146,9 +146,9 @@ static struct plat_stmmacenet_data ls1x_eth0_pdata = {
+ .bus_id = 0,
+ .phy_addr = -1,
+ #if defined(CONFIG_LOONGSON1_LS1B)
+- .interface = PHY_INTERFACE_MODE_MII,
++ .phy_interface = PHY_INTERFACE_MODE_MII,
+ #elif defined(CONFIG_LOONGSON1_LS1C)
+- .interface = PHY_INTERFACE_MODE_RMII,
++ .phy_interface = PHY_INTERFACE_MODE_RMII,
+ #endif
+ .mdio_bus_data = &ls1x_mdio_bus_data,
+ .dma_cfg = &ls1x_eth_dma_cfg,
+@@ -186,7 +186,7 @@ struct platform_device ls1x_eth0_pdev = {
+ static struct plat_stmmacenet_data ls1x_eth1_pdata = {
+ .bus_id = 1,
+ .phy_addr = -1,
+- .interface = PHY_INTERFACE_MODE_MII,
++ .phy_interface = PHY_INTERFACE_MODE_MII,
+ .mdio_bus_data = &ls1x_mdio_bus_data,
+ .dma_cfg = &ls1x_eth_dma_cfg,
+ .has_gmac = 1,
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index fcbb81feb7ad8..1f02f15569749 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -361,6 +361,7 @@ config RISCV_ISA_C
+ config RISCV_ISA_SVPBMT
+ bool "SVPBMT extension support"
+ depends on 64BIT && MMU
++ depends on !XIP_KERNEL
+ select RISCV_ALTERNATIVE
+ default y
+ help
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index 5a2de6b6f8822..5c591123c4409 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -124,6 +124,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ if (restore_altstack(&frame->uc.uc_stack))
+ goto badframe;
+
++ regs->cause = -1UL;
++
+ return regs->a0;
+
+ badframe:
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index e0de60e503b98..d9e023c78f568 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -33,7 +33,7 @@
+ #include "um_arch.h"
+
+ #define DEFAULT_COMMAND_LINE_ROOT "root=98:0"
+-#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty"
++#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty0"
+
+ /* Changed in add_arg and setup_arch, which run before SMP is started */
+ static char __initdata command_line[COMMAND_LINE_SIZE] = { 0 };
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 4c0e812f2f044..19c04412f6e16 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -713,6 +713,7 @@ struct kvm_vcpu_arch {
+ struct fpu_guest guest_fpu;
+
+ u64 xcr0;
++ u64 guest_supported_xcr0;
+
+ struct kvm_pio_request pio;
+ void *pio_data;
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index de6d44e07e348..3ab498165639f 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -283,7 +283,6 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ {
+ struct kvm_lapic *apic = vcpu->arch.apic;
+ struct kvm_cpuid_entry2 *best;
+- u64 guest_supported_xcr0;
+
+ best = kvm_find_cpuid_entry(vcpu, 1, 0);
+ if (best && apic) {
+@@ -295,10 +294,16 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ kvm_apic_set_version(vcpu);
+ }
+
+- guest_supported_xcr0 =
++ vcpu->arch.guest_supported_xcr0 =
+ cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
+
+- vcpu->arch.guest_fpu.fpstate->user_xfeatures = guest_supported_xcr0;
++ /*
++ * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if
++ * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't
++ * supported by the host.
++ */
++ vcpu->arch.guest_fpu.fpstate->user_xfeatures = vcpu->arch.guest_supported_xcr0 |
++ XFEATURE_MASK_FPSSE;
+
+ kvm_update_pv_runtime(vcpu);
+
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 09fa8a94807bf..0c4a866813b31 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -4134,6 +4134,9 @@ static int em_xsetbv(struct x86_emulate_ctxt *ctxt)
+ {
+ u32 eax, ecx, edx;
+
++ if (!(ctxt->ops->get_cr(ctxt, 4) & X86_CR4_OSXSAVE))
++ return emulate_ud(ctxt);
++
+ eax = reg_read(ctxt, VCPU_REGS_RAX);
+ edx = reg_read(ctxt, VCPU_REGS_RDX);
+ ecx = reg_read(ctxt, VCPU_REGS_RCX);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 5b36866528568..8c2815151864b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1025,15 +1025,10 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);
+
+-static inline u64 kvm_guest_supported_xcr0(struct kvm_vcpu *vcpu)
+-{
+- return vcpu->arch.guest_fpu.fpstate->user_xfeatures;
+-}
+-
+ #ifdef CONFIG_X86_64
+ static inline u64 kvm_guest_supported_xfd(struct kvm_vcpu *vcpu)
+ {
+- return kvm_guest_supported_xcr0(vcpu) & XFEATURE_MASK_USER_DYNAMIC;
++ return vcpu->arch.guest_supported_xcr0 & XFEATURE_MASK_USER_DYNAMIC;
+ }
+ #endif
+
+@@ -1056,7 +1051,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
+ * saving. However, xcr0 bit 0 is always set, even if the
+ * emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
+ */
+- valid_bits = kvm_guest_supported_xcr0(vcpu) | XFEATURE_MASK_FP;
++ valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
+ if (xcr0 & ~valid_bits)
+ return 1;
+
+@@ -1084,6 +1079,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
+
+ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
+ {
++ /* Note, #UD due to CR4.OSXSAVE=0 has priority over the intercept. */
+ if (static_call(kvm_x86_get_cpl)(vcpu) != 0 ||
+ __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) {
+ kvm_inject_gp(vcpu, 0);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index cc6fbcb6d2521..7743c68177e89 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -284,49 +284,6 @@ void blk_queue_start_drain(struct request_queue *q)
+ wake_up_all(&q->mq_freeze_wq);
+ }
+
+-/**
+- * blk_cleanup_queue - shutdown a request queue
+- * @q: request queue to shutdown
+- *
+- * Mark @q DYING, drain all pending requests, mark @q DEAD, destroy and
+- * put it. All future requests will be failed immediately with -ENODEV.
+- *
+- * Context: can sleep
+- */
+-void blk_cleanup_queue(struct request_queue *q)
+-{
+- /* cannot be called from atomic context */
+- might_sleep();
+-
+- WARN_ON_ONCE(blk_queue_registered(q));
+-
+- /* mark @q DYING, no new request or merges will be allowed afterwards */
+- blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+- blk_queue_start_drain(q);
+-
+- blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q);
+- blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q);
+-
+- /*
+- * Drain all requests queued before DYING marking. Set DEAD flag to
+- * prevent that blk_mq_run_hw_queues() accesses the hardware queues
+- * after draining finished.
+- */
+- blk_freeze_queue(q);
+-
+- blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
+-
+- blk_sync_queue(q);
+- if (queue_is_mq(q)) {
+- blk_mq_cancel_work_sync(q);
+- blk_mq_exit_queue(q);
+- }
+-
+- /* @q is and will stay empty, shutdown and put */
+- blk_put_queue(q);
+-}
+-EXPORT_SYMBOL(blk_cleanup_queue);
+-
+ /**
+ * blk_queue_enter() - try to increase q->q_usage_counter
+ * @q: request queue pointer
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 61f179e5f151a..28adb01f64419 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -116,7 +116,6 @@ static const char *const blk_queue_flag_name[] = {
+ QUEUE_FLAG_NAME(NOXMERGES),
+ QUEUE_FLAG_NAME(ADD_RANDOM),
+ QUEUE_FLAG_NAME(SAME_FORCE),
+- QUEUE_FLAG_NAME(DEAD),
+ QUEUE_FLAG_NAME(INIT_DONE),
+ QUEUE_FLAG_NAME(STABLE_WRITES),
+ QUEUE_FLAG_NAME(POLL),
+@@ -151,11 +150,10 @@ static ssize_t queue_state_write(void *data, const char __user *buf,
+ char opbuf[16] = { }, *op;
+
+ /*
+- * The "state" attribute is removed after blk_cleanup_queue() has called
+- * blk_mq_free_queue(). Return if QUEUE_FLAG_DEAD has been set to avoid
+- * triggering a use-after-free.
++ * The "state" attribute is removed when the queue is removed. Don't
++ * allow setting the state on a dying queue to avoid a use-after-free.
+ */
+- if (blk_queue_dead(q))
++ if (blk_queue_dying(q))
+ return -ENOENT;
+
+ if (count >= sizeof(opbuf)) {
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 0a299941c622e..69d0a58f9e2f1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3896,7 +3896,7 @@ static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
+ q->queuedata = queuedata;
+ ret = blk_mq_init_allocated_queue(set, q);
+ if (ret) {
+- blk_cleanup_queue(q);
++ blk_put_queue(q);
+ return ERR_PTR(ret);
+ }
+ return q;
+@@ -3908,6 +3908,35 @@ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
+ }
+ EXPORT_SYMBOL(blk_mq_init_queue);
+
++/**
++ * blk_mq_destroy_queue - shutdown a request queue
++ * @q: request queue to shutdown
++ *
++ * This shuts down a request queue allocated by blk_mq_init_queue() and drops
++ * the initial reference. All future requests will failed with -ENODEV.
++ *
++ * Context: can sleep
++ */
++void blk_mq_destroy_queue(struct request_queue *q)
++{
++ WARN_ON_ONCE(!queue_is_mq(q));
++ WARN_ON_ONCE(blk_queue_registered(q));
++
++ might_sleep();
++
++ blk_queue_flag_set(QUEUE_FLAG_DYING, q);
++ blk_queue_start_drain(q);
++ blk_freeze_queue(q);
++
++ blk_sync_queue(q);
++ blk_mq_cancel_work_sync(q);
++ blk_mq_exit_queue(q);
++
++ /* @q is and will stay empty, shutdown and put */
++ blk_put_queue(q);
++}
++EXPORT_SYMBOL(blk_mq_destroy_queue);
++
+ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
+ struct lock_class_key *lkclass)
+ {
+@@ -3920,13 +3949,23 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
+
+ disk = __alloc_disk_node(q, set->numa_node, lkclass);
+ if (!disk) {
+- blk_cleanup_queue(q);
++ blk_mq_destroy_queue(q);
+ return ERR_PTR(-ENOMEM);
+ }
++ set_bit(GD_OWNS_QUEUE, &disk->state);
+ return disk;
+ }
+ EXPORT_SYMBOL(__blk_mq_alloc_disk);
+
++struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
++ struct lock_class_key *lkclass)
++{
++ if (!blk_get_queue(q))
++ return NULL;
++ return __alloc_disk_node(q, NUMA_NO_NODE, lkclass);
++}
++EXPORT_SYMBOL(blk_mq_alloc_disk_for_queue);
++
+ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ struct blk_mq_tag_set *set, struct request_queue *q,
+ int hctx_idx, int node)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 9b905e9443e49..84d7f87015673 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -748,11 +748,6 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
+ * decremented with blk_put_queue(). Once the refcount reaches 0 this function
+ * is called.
+ *
+- * For drivers that have a request_queue on a gendisk and added with
+- * __device_add_disk() the refcount to request_queue will reach 0 with
+- * the last put_disk() called by the driver. For drivers which don't use
+- * __device_add_disk() this happens with blk_cleanup_queue().
+- *
+ * Drivers exist which depend on the release of the request_queue to be
+ * synchronous, it should not be deferred.
+ *
+diff --git a/block/blk.h b/block/blk.h
+index 434017701403f..0d6668663ab5d 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -411,6 +411,9 @@ int bdev_resize_partition(struct gendisk *disk, int partno, sector_t start,
+ sector_t length);
+ void blk_drop_partitions(struct gendisk *disk);
+
++struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
++ struct lock_class_key *lkclass);
++
+ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+ struct page *page, unsigned int len, unsigned int offset,
+ unsigned int max_sectors, bool *same_page);
+diff --git a/block/bsg-lib.c b/block/bsg-lib.c
+index acfe1357bf6c4..fd4cd5e682826 100644
+--- a/block/bsg-lib.c
++++ b/block/bsg-lib.c
+@@ -324,7 +324,7 @@ void bsg_remove_queue(struct request_queue *q)
+ container_of(q->tag_set, struct bsg_set, tag_set);
+
+ bsg_unregister_queue(bset->bd);
+- blk_cleanup_queue(q);
++ blk_mq_destroy_queue(q);
+ blk_mq_free_tag_set(&bset->tag_set);
+ kfree(bset);
+ }
+@@ -399,7 +399,7 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
+
+ return q;
+ out_cleanup_queue:
+- blk_cleanup_queue(q);
++ blk_mq_destroy_queue(q);
+ out_queue:
+ blk_mq_free_tag_set(set);
+ out_tag_set:
+diff --git a/block/genhd.c b/block/genhd.c
+index 278227ba1d531..a39c416d658fd 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -617,13 +617,14 @@ void del_gendisk(struct gendisk *disk)
+ * Fail any new I/O.
+ */
+ set_bit(GD_DEAD, &disk->state);
++ if (test_bit(GD_OWNS_QUEUE, &disk->state))
++ blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+ set_capacity(disk, 0);
+
+ /*
+ * Prevent new I/O from crossing bio_queue_enter().
+ */
+ blk_queue_start_drain(q);
+- blk_mq_freeze_queue_wait(q);
+
+ if (!(disk->flags & GENHD_FL_HIDDEN)) {
+ sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
+@@ -647,6 +648,8 @@ void del_gendisk(struct gendisk *disk)
+ pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
+ device_del(disk_to_dev(disk));
+
++ blk_mq_freeze_queue_wait(q);
++
+ blk_throtl_cancel_bios(disk->queue);
+
+ blk_sync_queue(q);
+@@ -663,11 +666,16 @@ void del_gendisk(struct gendisk *disk)
+ blk_mq_unquiesce_queue(q);
+
+ /*
+- * Allow using passthrough request again after the queue is torn down.
++ * If the disk does not own the queue, allow using passthrough requests
++ * again. Else leave the queue frozen to fail all I/O.
+ */
+- blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
+- __blk_mq_unfreeze_queue(q, true);
+-
++ if (!test_bit(GD_OWNS_QUEUE, &disk->state)) {
++ blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
++ __blk_mq_unfreeze_queue(q, true);
++ } else {
++ if (queue_is_mq(q))
++ blk_mq_exit_queue(q);
++ }
+ }
+ EXPORT_SYMBOL(del_gendisk);
+
+@@ -1151,6 +1159,18 @@ static void disk_release(struct device *dev)
+ might_sleep();
+ WARN_ON_ONCE(disk_live(disk));
+
++ /*
++ * To undo the all initialization from blk_mq_init_allocated_queue in
++ * case of a probe failure where add_disk is never called we have to
++ * call blk_mq_exit_queue here. We can't do this for the more common
++ * teardown case (yet) as the tagset can be gone by the time the disk
++ * is released once it was added.
++ */
++ if (queue_is_mq(disk->queue) &&
++ test_bit(GD_OWNS_QUEUE, &disk->state) &&
++ !test_bit(GD_ADDED, &disk->state))
++ blk_mq_exit_queue(disk->queue);
++
+ blkcg_exit_queue(disk->queue);
+
+ disk_release_events(disk);
+@@ -1338,12 +1358,9 @@ struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
+ {
+ struct gendisk *disk;
+
+- if (!blk_get_queue(q))
+- return NULL;
+-
+ disk = kzalloc_node(sizeof(struct gendisk), GFP_KERNEL, node_id);
+ if (!disk)
+- goto out_put_queue;
++ return NULL;
+
+ disk->bdi = bdi_alloc(node_id);
+ if (!disk->bdi)
+@@ -1387,11 +1404,8 @@ out_free_bdi:
+ bdi_put(disk->bdi);
+ out_free_disk:
+ kfree(disk);
+-out_put_queue:
+- blk_put_queue(q);
+ return NULL;
+ }
+-EXPORT_SYMBOL(__alloc_disk_node);
+
+ struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass)
+ {
+@@ -1404,9 +1418,10 @@ struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass)
+
+ disk = __alloc_disk_node(q, node, lkclass);
+ if (!disk) {
+- blk_cleanup_queue(q);
++ blk_put_queue(q);
+ return NULL;
+ }
++ set_bit(GD_OWNS_QUEUE, &disk->state);
+ return disk;
+ }
+ EXPORT_SYMBOL(__blk_alloc_disk);
+@@ -1418,6 +1433,9 @@ EXPORT_SYMBOL(__blk_alloc_disk);
+ * This decrements the refcount for the struct gendisk. When this reaches 0
+ * we'll have disk_release() called.
+ *
++ * Note: for blk-mq disk put_disk must be called before freeing the tag_set
++ * when handling probe errors (that is before add_disk() is called).
++ *
+ * Context: Any context, but the last reference must not be dropped from
+ * atomic context.
+ */
+@@ -1439,7 +1457,6 @@ EXPORT_SYMBOL(put_disk);
+ */
+ void blk_cleanup_disk(struct gendisk *disk)
+ {
+- blk_cleanup_queue(disk->queue);
+ put_disk(disk);
+ }
+ EXPORT_SYMBOL(blk_cleanup_disk);
+diff --git a/certs/Kconfig b/certs/Kconfig
+index bf9b511573d75..1f109b0708778 100644
+--- a/certs/Kconfig
++++ b/certs/Kconfig
+@@ -43,7 +43,7 @@ config SYSTEM_TRUSTED_KEYRING
+ bool "Provide system-wide ring of trusted keys"
+ depends on KEYS
+ depends on ASYMMETRIC_KEY_TYPE
+- depends on X509_CERTIFICATE_PARSER
++ depends on X509_CERTIFICATE_PARSER = y
+ help
+ Provide a system keyring to which trusted keys can be added. Keys in
+ the keyring are considered to be trusted. Keys may be added at will
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index e232cc4fd444b..c6e41ee18aaa2 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -2045,7 +2045,6 @@ static void atari_floppy_cleanup(void)
+ if (!unit[i].disk[type])
+ continue;
+ del_gendisk(unit[i].disk[type]);
+- blk_cleanup_queue(unit[i].disk[type]->queue);
+ put_disk(unit[i].disk[type]);
+ }
+ blk_mq_free_tag_set(&unit[i].tag_set);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index a59910ef948e9..1c036ef686fbb 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -2062,7 +2062,6 @@ static void loop_remove(struct loop_device *lo)
+ {
+ /* Make this loop device unreachable from pathname. */
+ del_gendisk(lo->lo_disk);
+- blk_cleanup_queue(lo->lo_disk->queue);
+ blk_mq_free_tag_set(&lo->tag_set);
+
+ mutex_lock(&loop_ctl_mutex);
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index 6699e4b2f7f43..06994a35acc7a 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -3677,7 +3677,6 @@ static int mtip_block_shutdown(struct driver_data *dd)
+ if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
+ del_gendisk(dd->disk);
+
+- blk_cleanup_queue(dd->queue);
+ blk_mq_free_tag_set(&dd->tags);
+ put_disk(dd->disk);
+ return 0;
+@@ -4040,7 +4039,6 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ dev_info(&dd->pdev->dev, "device %s surprise removal\n",
+ dd->disk->disk_name);
+
+- blk_cleanup_queue(dd->queue);
+ blk_mq_free_tag_set(&dd->tags);
+
+ /* De-initialize the protocol layer. */
+diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
+index 409c76b81aed4..a4470374f54fc 100644
+--- a/drivers/block/rnbd/rnbd-clt.c
++++ b/drivers/block/rnbd/rnbd-clt.c
+@@ -1755,7 +1755,7 @@ static void rnbd_destroy_sessions(void)
+ list_for_each_entry_safe(dev, tn, &sess->devs_list, list) {
+ /*
+ * Here unmap happens in parallel for only one reason:
+- * blk_cleanup_queue() takes around half a second, so
++ * del_gendisk() takes around half a second, so
+ * on huge amount of devices the whole module unload
+ * procedure takes minutes.
+ */
+diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
+index 63b4f6431d2e6..75057dbbcfbea 100644
+--- a/drivers/block/sx8.c
++++ b/drivers/block/sx8.c
+@@ -1536,7 +1536,7 @@ err_out_free_majors:
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+- blk_cleanup_queue(host->oob_q);
++ blk_mq_destroy_queue(host->oob_q);
+ blk_mq_free_tag_set(&host->tag_set);
+ err_out_dma_free:
+ dma_free_coherent(&pdev->dev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+@@ -1570,7 +1570,7 @@ static void carm_remove_one (struct pci_dev *pdev)
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+- blk_cleanup_queue(host->oob_q);
++ blk_mq_destroy_queue(host->oob_q);
+ blk_mq_free_tag_set(&host->tag_set);
+ dma_free_coherent(&pdev->dev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+ iounmap(host->mmio);
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index d756423e0059a..59d6d5faf7396 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -1107,7 +1107,6 @@ static void virtblk_remove(struct virtio_device *vdev)
+ flush_work(&vblk->config_work);
+
+ del_gendisk(vblk->disk);
+- blk_cleanup_queue(vblk->disk->queue);
+ blk_mq_free_tag_set(&vblk->tag_set);
+
+ mutex_lock(&vblk->vdev_mutex);
+diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
+index 7a6ed83481b8d..18ad43d9933ec 100644
+--- a/drivers/block/z2ram.c
++++ b/drivers/block/z2ram.c
+@@ -384,7 +384,6 @@ static void __exit z2_exit(void)
+
+ for (i = 0; i < Z2MINOR_COUNT; i++) {
+ del_gendisk(z2ram_gendisk[i]);
+- blk_cleanup_queue(z2ram_gendisk[i]->queue);
+ put_disk(z2ram_gendisk[i]);
+ }
+ blk_mq_free_tag_set(&tag_set);
+diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
+index 8e78b37d0f6a4..f4cc90ea6198e 100644
+--- a/drivers/cdrom/gdrom.c
++++ b/drivers/cdrom/gdrom.c
+@@ -831,7 +831,6 @@ probe_fail_no_mem:
+
+ static int remove_gdrom(struct platform_device *devptr)
+ {
+- blk_cleanup_queue(gd.gdrom_rq);
+ blk_mq_free_tag_set(&gd.tag_set);
+ free_irq(HW_EVENT_GDROM_CMD, &gd);
+ free_irq(HW_EVENT_GDROM_DMA, &gd);
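[Note: the block-driver hunks above (ataflop, loop, mtip32xx, rnbd, sx8, virtio_blk, z2ram, gdrom) all follow one pattern from the block-layer rework backported here: the request queue's lifetime is now tied to the gendisk, so the final put_disk() releases the queue and the explicit blk_cleanup_queue() after del_gendisk() becomes redundant. Standalone queues that never had a gendisk, such as sx8's out-of-band queue, are torn down with blk_mq_destroy_queue() instead. A minimal sketch of the resulting teardown order, with hypothetical driver names; the exact ordering of the tag-set free varies by driver:

	static void example_remove(struct example_device *dev)
	{
		del_gendisk(dev->disk);             /* unlink, quiesce and drain */
		put_disk(dev->disk);                /* last ref also frees the queue */
		blk_mq_free_tag_set(&dev->tag_set); /* tag set released last */
	}
]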
+diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c
+index cb6401c9e9a4f..acf31cc1dbcca 100644
+--- a/drivers/dax/hmem/device.c
++++ b/drivers/dax/hmem/device.c
+@@ -15,6 +15,7 @@ void hmem_register_device(int target_nid, struct resource *r)
+ .start = r->start,
+ .end = r->end,
+ .flags = IORESOURCE_MEM,
++ .desc = IORES_DESC_SOFT_RESERVED,
+ };
+ struct platform_device *pdev;
+ struct memregion_info info;
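[Note: a one-line fix as I read it: the hmem platform-device resource is now inserted with the IORES_DESC_SOFT_RESERVED descriptor, so the range keeps its "Soft Reserved" identity in the resource tree rather than appearing as anonymous reserved memory to later consumers of the iomem hierarchy.]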
+diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
+index d4f1e4e9603a4..85e00701473cb 100644
+--- a/drivers/dma/ti/k3-udma-private.c
++++ b/drivers/dma/ti/k3-udma-private.c
+@@ -31,14 +31,14 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
+ }
+
+ pdev = of_find_device_by_node(udma_node);
++ if (np != udma_node)
++ of_node_put(udma_node);
++
+ if (!pdev) {
+ pr_debug("UDMA device not found\n");
+ return ERR_PTR(-EPROBE_DEFER);
+ }
+
+- if (np != udma_node)
+- of_node_put(udma_node);
+-
+ ud = platform_get_drvdata(pdev);
+ if (!ud) {
+ pr_debug("UDMA has not been probed\n");
+diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c
+index 673f3eb498f43..e9afa8cab7309 100644
+--- a/drivers/firmware/arm_scmi/reset.c
++++ b/drivers/firmware/arm_scmi/reset.c
+@@ -166,9 +166,13 @@ static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
+ struct scmi_xfer *t;
+ struct scmi_msg_reset_domain_reset *dom;
+ struct scmi_reset_info *pi = ph->get_priv(ph);
+- struct reset_dom_info *rdom = pi->dom_info + domain;
++ struct reset_dom_info *rdom;
+
+- if (rdom->async_reset)
++ if (domain >= pi->num_domains)
++ return -EINVAL;
++
++ rdom = pi->dom_info + domain;
++ if (rdom->async_reset && flags & AUTONOMOUS_RESET)
+ flags |= ASYNCHRONOUS_RESET;
+
+ ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
+@@ -180,7 +184,7 @@ static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
+ dom->flags = cpu_to_le32(flags);
+ dom->reset_state = cpu_to_le32(state);
+
+- if (rdom->async_reset)
++ if (flags & ASYNCHRONOUS_RESET)
+ ret = ph->xops->do_xfer_with_response(ph, t);
+ else
+ ret = ph->xops->do_xfer(ph, t);
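[Note: two hardening tweaks to scmi_domain_reset() here: the caller-supplied domain index is validated against num_domains before it is used to index the per-domain array, and the asynchronous path is taken only when the caller actually requested an autonomous reset and the domain supports it, with the later transfer choice keyed off the resulting flags rather than the capability bit alone.]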
+diff --git a/drivers/firmware/efi/libstub/secureboot.c b/drivers/firmware/efi/libstub/secureboot.c
+index 8a18930f3eb69..516f4f0069bd2 100644
+--- a/drivers/firmware/efi/libstub/secureboot.c
++++ b/drivers/firmware/efi/libstub/secureboot.c
+@@ -14,7 +14,7 @@
+
+ /* SHIM variables */
+ static const efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
+-static const efi_char16_t shim_MokSBState_name[] = L"MokSBState";
++static const efi_char16_t shim_MokSBState_name[] = L"MokSBStateRT";
+
+ static efi_status_t get_var(efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
+ unsigned long *data_size, void *data)
+@@ -43,8 +43,8 @@ enum efi_secureboot_mode efi_get_secureboot(void)
+
+ /*
+ * See if a user has put the shim into insecure mode. If so, and if the
+- * variable doesn't have the runtime attribute set, we might as well
+- * honor that.
++ * variable doesn't have the non-volatile attribute set, we might as
++ * well honor that.
+ */
+ size = sizeof(moksbstate);
+ status = get_efi_var(shim_MokSBState_name, &shim_guid,
+@@ -53,7 +53,7 @@ enum efi_secureboot_mode efi_get_secureboot(void)
+ /* If it fails, we don't care why. Default to secure */
+ if (status != EFI_SUCCESS)
+ goto secure_boot_enabled;
+- if (!(attr & EFI_VARIABLE_RUNTIME_ACCESS) && moksbstate == 1)
++ if (!(attr & EFI_VARIABLE_NON_VOLATILE) && moksbstate == 1)
+ return efi_secureboot_mode_disabled;
+
+ secure_boot_enabled:
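[Note: the secure-boot check now reads the shim's MokSBStateRT mirror rather than MokSBState itself. The mirror is the volatile, runtime-created copy, so by honoring the "insecure" flag only when the EFI_VARIABLE_NON_VOLATILE attribute is clear, the stub ignores a persistent variable that could have been planted from the running OS and trusts only what the shim set up for this boot.]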
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 05ae8bcc9d671..9780f32a9f243 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -517,6 +517,13 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ hdr->ramdisk_image = 0;
+ hdr->ramdisk_size = 0;
+
++ /*
++ * Disregard any setup data that was provided by the bootloader:
++ * setup_data could be pointing anywhere, and we have no way of
++ * authenticating or validating the payload.
++ */
++ hdr->setup_data = 0;
++
+ efi_stub_entry(handle, sys_table_arg, boot_params);
+ /* not reached */
+
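[Note: zeroing hdr->setup_data in the PE entry path discards whatever setup_data chain the bootloader handed over; as the comment in the hunk says, the stub has no way to authenticate that memory, so under secure boot it is safer to drop it along with the ramdisk fields above it.]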
+diff --git a/drivers/gpio/gpio-ixp4xx.c b/drivers/gpio/gpio-ixp4xx.c
+index 312309be0287d..56656fb519f85 100644
+--- a/drivers/gpio/gpio-ixp4xx.c
++++ b/drivers/gpio/gpio-ixp4xx.c
+@@ -63,6 +63,14 @@ static void ixp4xx_gpio_irq_ack(struct irq_data *d)
+ __raw_writel(BIT(d->hwirq), g->base + IXP4XX_REG_GPIS);
+ }
+
++static void ixp4xx_gpio_mask_irq(struct irq_data *d)
++{
++ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++
++ irq_chip_mask_parent(d);
++ gpiochip_disable_irq(gc, d->hwirq);
++}
++
+ static void ixp4xx_gpio_irq_unmask(struct irq_data *d)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -72,6 +80,7 @@ static void ixp4xx_gpio_irq_unmask(struct irq_data *d)
+ if (!(g->irq_edge & BIT(d->hwirq)))
+ ixp4xx_gpio_irq_ack(d);
+
++ gpiochip_enable_irq(gc, d->hwirq);
+ irq_chip_unmask_parent(d);
+ }
+
+@@ -149,12 +158,14 @@ static int ixp4xx_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ return irq_chip_set_type_parent(d, IRQ_TYPE_LEVEL_HIGH);
+ }
+
+-static struct irq_chip ixp4xx_gpio_irqchip = {
++static const struct irq_chip ixp4xx_gpio_irqchip = {
+ .name = "IXP4GPIO",
+ .irq_ack = ixp4xx_gpio_irq_ack,
+- .irq_mask = irq_chip_mask_parent,
++ .irq_mask = ixp4xx_gpio_mask_irq,
+ .irq_unmask = ixp4xx_gpio_irq_unmask,
+ .irq_set_type = ixp4xx_gpio_irq_set_type,
++ .flags = IRQCHIP_IMMUTABLE,
++ GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+
+ static int ixp4xx_gpio_child_to_parent_hwirq(struct gpio_chip *gc,
+@@ -263,7 +274,7 @@ static int ixp4xx_gpio_probe(struct platform_device *pdev)
+ g->gc.owner = THIS_MODULE;
+
+ girq = &g->gc.irq;
+- girq->chip = &ixp4xx_gpio_irqchip;
++ gpio_irq_chip_set_chip(girq, &ixp4xx_gpio_irqchip);
+ girq->fwnode = g->fwnode;
+ girq->parent_domain = parent;
+ girq->child_to_parent_hwirq = ixp4xx_gpio_child_to_parent_hwirq;
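[Note: this conversion, repeated for gpio-mt7621 below, is the standard move to an immutable GPIO irqchip: the chip becomes const and flagged IRQCHIP_IMMUTABLE, the resource-helpers macro supplies the request/release callbacks, and the mask/unmask paths take over the gpiochip bookkeeping that the core used to patch in behind the driver's back. A generic sketch of the recipe, with hypothetical names:

	static void example_gpio_irq_mask(struct irq_data *d)
	{
		struct gpio_chip *gc = irq_data_get_irq_chip_data(d);

		irq_chip_mask_parent(d);
		gpiochip_disable_irq(gc, irqd_to_hwirq(d));
	}

	static void example_gpio_irq_unmask(struct irq_data *d)
	{
		struct gpio_chip *gc = irq_data_get_irq_chip_data(d);

		gpiochip_enable_irq(gc, irqd_to_hwirq(d));
		irq_chip_unmask_parent(d);
	}

	static const struct irq_chip example_gpio_irqchip = {
		.name		= "example-gpio",
		.irq_mask	= example_gpio_irq_mask,
		.irq_unmask	= example_gpio_irq_unmask,
		.flags		= IRQCHIP_IMMUTABLE,
		GPIOCHIP_IRQ_RESOURCE_HELPERS,
	};

	/* at probe time: gpio_irq_chip_set_chip(girq, &example_gpio_irqchip);
	 * instead of assigning girq->chip directly */
]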
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index a2e505a7545cd..523dfd17dd922 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -533,8 +533,10 @@ static int __init gpio_mockup_register_chip(int idx)
+ }
+
+ fwnode = fwnode_create_software_node(properties, NULL);
+- if (IS_ERR(fwnode))
++ if (IS_ERR(fwnode)) {
++ kfree_strarray(line_names, ngpio);
+ return PTR_ERR(fwnode);
++ }
+
+ pdevinfo.name = "gpio-mockup";
+ pdevinfo.id = idx;
+@@ -597,9 +599,9 @@ static int __init gpio_mockup_init(void)
+
+ static void __exit gpio_mockup_exit(void)
+ {
++ gpio_mockup_unregister_pdevs();
+ debugfs_remove_recursive(gpio_mockup_dbg_dir);
+ platform_driver_unregister(&gpio_mockup_driver);
+- gpio_mockup_unregister_pdevs();
+ }
+
+ module_init(gpio_mockup_init);
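[Note: two error-handling fixes in gpio-mockup: the line_names array is now freed when software-node creation fails instead of leaking, and module exit unregisters the mock platform devices first, while the driver and its debugfs directory still exist, so device teardown runs against live infrastructure rather than racing its removal.]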
+diff --git a/drivers/gpio/gpio-mt7621.c b/drivers/gpio/gpio-mt7621.c
+index d8a26e503ca5d..f163f5ca857be 100644
+--- a/drivers/gpio/gpio-mt7621.c
++++ b/drivers/gpio/gpio-mt7621.c
+@@ -112,6 +112,8 @@ mediatek_gpio_irq_unmask(struct irq_data *d)
+ unsigned long flags;
+ u32 rise, fall, high, low;
+
++ gpiochip_enable_irq(gc, d->hwirq);
++
+ spin_lock_irqsave(&rg->lock, flags);
+ rise = mtk_gpio_r32(rg, GPIO_REG_REDGE);
+ fall = mtk_gpio_r32(rg, GPIO_REG_FEDGE);
+@@ -143,6 +145,8 @@ mediatek_gpio_irq_mask(struct irq_data *d)
+ mtk_gpio_w32(rg, GPIO_REG_HLVL, high & ~BIT(pin));
+ mtk_gpio_w32(rg, GPIO_REG_LLVL, low & ~BIT(pin));
+ spin_unlock_irqrestore(&rg->lock, flags);
++
++ gpiochip_disable_irq(gc, d->hwirq);
+ }
+
+ static int
+@@ -204,6 +208,16 @@ mediatek_gpio_xlate(struct gpio_chip *chip,
+ return gpio % MTK_BANK_WIDTH;
+ }
+
++static const struct irq_chip mt7621_irq_chip = {
++ .name = "mt7621-gpio",
++ .irq_mask_ack = mediatek_gpio_irq_mask,
++ .irq_mask = mediatek_gpio_irq_mask,
++ .irq_unmask = mediatek_gpio_irq_unmask,
++ .irq_set_type = mediatek_gpio_irq_type,
++ .flags = IRQCHIP_IMMUTABLE,
++ GPIOCHIP_IRQ_RESOURCE_HELPERS,
++};
++
+ static int
+ mediatek_gpio_bank_probe(struct device *dev, int bank)
+ {
+@@ -238,11 +252,6 @@ mediatek_gpio_bank_probe(struct device *dev, int bank)
+ return -ENOMEM;
+
+ rg->chip.offset = bank * MTK_BANK_WIDTH;
+- rg->irq_chip.name = dev_name(dev);
+- rg->irq_chip.irq_unmask = mediatek_gpio_irq_unmask;
+- rg->irq_chip.irq_mask = mediatek_gpio_irq_mask;
+- rg->irq_chip.irq_mask_ack = mediatek_gpio_irq_mask;
+- rg->irq_chip.irq_set_type = mediatek_gpio_irq_type;
+
+ if (mtk->gpio_irq) {
+ struct gpio_irq_chip *girq;
+@@ -262,7 +271,7 @@ mediatek_gpio_bank_probe(struct device *dev, int bank)
+ }
+
+ girq = &rg->chip.irq;
+- girq->chip = &rg->irq_chip;
++ gpio_irq_chip_set_chip(girq, &mt7621_irq_chip);
+ /* This will let us handle the parent IRQ in the driver */
+ girq->parent_handler = NULL;
+ girq->num_parents = 0;
+diff --git a/drivers/gpio/gpio-tqmx86.c b/drivers/gpio/gpio-tqmx86.c
+index fa4bc7481f9a6..e739dcea61b23 100644
+--- a/drivers/gpio/gpio-tqmx86.c
++++ b/drivers/gpio/gpio-tqmx86.c
+@@ -307,6 +307,8 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ girq->default_type = IRQ_TYPE_NONE;
+ girq->handler = handle_simple_irq;
+ girq->init_valid_mask = tqmx86_init_irq_valid_mask;
++
++ irq_domain_set_pm_device(girq->domain, dev);
+ }
+
+ ret = devm_gpiochip_add_data(dev, chip, gpio);
+@@ -315,8 +317,6 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ goto out_pm_dis;
+ }
+
+- irq_domain_set_pm_device(girq->domain, dev);
+-
+ dev_info(dev, "GPIO functionality initialized with %d pins\n",
+ chip->ngpio);
+
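[Note: in gpio-tqmx86 the irq_domain_set_pm_device() call moves from the unconditional path after chip registration into the branch that actually configures the IRQ chip, so it is no longer reached when the device has no interrupt wired up and girq was never set up.]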
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index b26e643383762..21fee9ed7f0d2 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -1975,7 +1975,6 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ ret = -ENODEV;
+ goto out_free_le;
+ }
+- le->irq = irq;
+
+ if (eflags & GPIOEVENT_REQUEST_RISING_EDGE)
+ irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
+@@ -1989,7 +1988,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ init_waitqueue_head(&le->wait);
+
+ /* Request a thread to read the events */
+- ret = request_threaded_irq(le->irq,
++ ret = request_threaded_irq(irq,
+ lineevent_irq_handler,
+ lineevent_irq_thread,
+ irqflags,
+@@ -1998,6 +1997,8 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ if (ret)
+ goto out_free_le;
+
++ le->irq = irq;
++
+ fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
+ if (fd < 0) {
+ ret = fd;
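[Note: the lineevent fix is an ownership-ordering one: le->irq is recorded only after request_threaded_irq() succeeds, so the shared out_free_le error path, which frees the IRQ only when le->irq is set, can no longer try to release an interrupt that was never requested. The general shape, as a fragment with names taken from the hunk:

	ret = request_threaded_irq(irq, hard_handler, thread_handler,
				   irqflags, label, le);
	if (ret)
		goto out_free_le;  /* le->irq still 0: cleanup skips free_irq() */

	le->irq = irq;             /* published only once we own it */
]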
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 4dfd6724b3caa..0a8c15c3a04c3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -35,6 +35,8 @@
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ #include <drm/drm_crtc_helper.h>
++#include <drm/drm_damage_helper.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_edid.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_fb_helper.h>
+@@ -495,6 +497,12 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
+ .create_handle = drm_gem_fb_create_handle,
+ };
+
++static const struct drm_framebuffer_funcs amdgpu_fb_funcs_atomic = {
++ .destroy = drm_gem_fb_destroy,
++ .create_handle = drm_gem_fb_create_handle,
++ .dirty = drm_atomic_helper_dirtyfb,
++};
++
+ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
+ uint64_t bo_flags)
+ {
+@@ -1069,7 +1077,10 @@ static int amdgpu_display_gem_fb_verify_and_init(struct drm_device *dev,
+ if (ret)
+ goto err;
+
+- ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
++ if (drm_drv_uses_atomic_modeset(dev))
++ ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs_atomic);
++ else
++ ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
+ if (ret)
+ goto err;
+
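[Note: for amdgpu, atomic-modeset devices now get a framebuffer-funcs variant whose .dirty hook is drm_atomic_helper_dirtyfb, so frontbuffer-rendering userspace that reports damage actually reaches the atomic helpers; non-atomic devices keep the old funcs without a dirty callback.]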
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index b19bf0c3f3737..79ce654bd3dad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -748,7 +748,7 @@ static int psp_tmr_init(struct psp_context *psp)
+ }
+
+ pptr = amdgpu_sriov_vf(psp->adev) ? &tmr_buf : NULL;
+- ret = amdgpu_bo_create_kernel(psp->adev, tmr_size, PSP_TMR_SIZE(psp->adev),
++ ret = amdgpu_bo_create_kernel(psp->adev, tmr_size, PSP_TMR_ALIGNMENT,
+ AMDGPU_GEM_DOMAIN_VRAM,
+ &psp->tmr_bo, &psp->tmr_mc_addr, pptr);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index e431f49949319..cd366c7f311fd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -36,6 +36,7 @@
+ #define PSP_CMD_BUFFER_SIZE 0x1000
+ #define PSP_1_MEG 0x100000
+ #define PSP_TMR_SIZE(adev) ((adev)->asic_type == CHIP_ALDEBARAN ? 0x800000 : 0x400000)
++#define PSP_TMR_ALIGNMENT 0x100000
+ #define PSP_FW_NAME_LEN 0x24
+
+ enum psp_shared_mem_size {
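[Note: the TMR allocation previously used PSP_TMR_SIZE() as its alignment, over-aligning the buffer to its full size; the new PSP_TMR_ALIGNMENT constant pins the requirement to 1 MiB (0x100000) regardless of whether the ASIC needs a 4 MiB or 8 MiB TMR.]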
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index dac202ae864dd..9193ca5d6fe7a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1805,7 +1805,8 @@ static void amdgpu_ras_log_on_err_counter(struct amdgpu_device *adev)
+ amdgpu_ras_query_error_status(adev, &info);
+
+ if (adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
+- adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
++ adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4) &&
++ adev->ip_versions[MP0_HWIP][0] != IP_VERSION(13, 0, 0)) {
+ if (amdgpu_ras_reset_error_status(adev, info.head.block))
+ dev_warn(adev->dev, "Failed to reset error counter and error status");
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+index cdc0c97798483..6c1fd471a4c7d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+@@ -28,6 +28,14 @@
+ #include "nbio/nbio_7_7_0_sh_mask.h"
+ #include <uapi/linux/kfd_ioctl.h>
+
++static void nbio_v7_7_remap_hdp_registers(struct amdgpu_device *adev)
++{
++ WREG32_SOC15(NBIO, 0, regBIF_BX0_REMAP_HDP_MEM_FLUSH_CNTL,
++ adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL);
++ WREG32_SOC15(NBIO, 0, regBIF_BX0_REMAP_HDP_REG_FLUSH_CNTL,
++ adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_REG_FLUSH_CNTL);
++}
++
+ static u32 nbio_v7_7_get_rev_id(struct amdgpu_device *adev)
+ {
+ u32 tmp;
+@@ -237,4 +245,5 @@ const struct amdgpu_nbio_funcs nbio_v7_7_funcs = {
+ .ih_doorbell_range = nbio_v7_7_ih_doorbell_range,
+ .ih_control = nbio_v7_7_ih_control,
+ .init_registers = nbio_v7_7_init_registers,
++ .remap_hdp_registers = nbio_v7_7_remap_hdp_registers,
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index f47d82da115c9..42a567e71439b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -6651,8 +6651,7 @@ static double CalculateUrgentLatency(
+ return ret;
+ }
+
+-
+-static void UseMinimumDCFCLK(
++static noinline_for_stack void UseMinimumDCFCLK(
+ struct display_mode_lib *mode_lib,
+ int MaxInterDCNTileRepeaters,
+ int MaxPrefetchMode,
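[Note: marking UseMinimumDCFCLK() noinline_for_stack keeps its large local-variable footprint in its own frame instead of letting the compiler inline it into the already stack-heavy mode-support path. A toy illustration of the annotation, with hypothetical names:

	static noinline_for_stack void huge_helper(struct display_mode_lib *mode_lib)
	{
		double scratch[128] = { 0.0 };  /* large locals stay in this frame */

		consume(mode_lib, scratch);     /* hypothetical consumer */
	}
]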
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+index e4b9fd31223c9..40a672236198e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+@@ -261,33 +261,13 @@ static void CalculateRowBandwidth(
+
+ static void CalculateFlipSchedule(
+ struct display_mode_lib *mode_lib,
++ unsigned int k,
+ double HostVMInefficiencyFactor,
+ double UrgentExtraLatency,
+ double UrgentLatency,
+- unsigned int GPUVMMaxPageTableLevels,
+- bool HostVMEnable,
+- unsigned int HostVMMaxNonCachedPageTableLevels,
+- bool GPUVMEnable,
+- double HostVMMinPageSize,
+ double PDEAndMetaPTEBytesPerFrame,
+ double MetaRowBytes,
+- double DPTEBytesPerRow,
+- double BandwidthAvailableForImmediateFlip,
+- unsigned int TotImmediateFlipBytes,
+- enum source_format_class SourcePixelFormat,
+- double LineTime,
+- double VRatio,
+- double VRatioChroma,
+- double Tno_bw,
+- bool DCCEnable,
+- unsigned int dpte_row_height,
+- unsigned int meta_row_height,
+- unsigned int dpte_row_height_chroma,
+- unsigned int meta_row_height_chroma,
+- double *DestinationLinesToRequestVMInImmediateFlip,
+- double *DestinationLinesToRequestRowInImmediateFlip,
+- double *final_flip_bw,
+- bool *ImmediateFlipSupportedForPipe);
++ double DPTEBytesPerRow);
+ static double CalculateWriteBackDelay(
+ enum source_format_class WritebackPixelFormat,
+ double WritebackHRatio,
+@@ -321,64 +301,28 @@ static void CalculateVupdateAndDynamicMetadataParameters(
+ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+ struct display_mode_lib *mode_lib,
+ unsigned int PrefetchMode,
+- unsigned int NumberOfActivePlanes,
+- unsigned int MaxLineBufferLines,
+- unsigned int LineBufferSize,
+- unsigned int WritebackInterfaceBufferSize,
+ double DCFCLK,
+ double ReturnBW,
+- bool SynchronizedVBlank,
+- unsigned int dpte_group_bytes[],
+- unsigned int MetaChunkSize,
+ double UrgentLatency,
+ double ExtraLatency,
+- double WritebackLatency,
+- double WritebackChunkSize,
+ double SOCCLK,
+- double DRAMClockChangeLatency,
+- double SRExitTime,
+- double SREnterPlusExitTime,
+- double SRExitZ8Time,
+- double SREnterPlusExitZ8Time,
+ double DCFCLKDeepSleep,
+ unsigned int DETBufferSizeY[],
+ unsigned int DETBufferSizeC[],
+ unsigned int SwathHeightY[],
+ unsigned int SwathHeightC[],
+- unsigned int LBBitPerPixel[],
+ double SwathWidthY[],
+ double SwathWidthC[],
+- double HRatio[],
+- double HRatioChroma[],
+- unsigned int vtaps[],
+- unsigned int VTAPsChroma[],
+- double VRatio[],
+- double VRatioChroma[],
+- unsigned int HTotal[],
+- double PixelClock[],
+- unsigned int BlendingAndTiming[],
+ unsigned int DPPPerPlane[],
+ double BytePerPixelDETY[],
+ double BytePerPixelDETC[],
+- double DSTXAfterScaler[],
+- double DSTYAfterScaler[],
+- bool WritebackEnable[],
+- enum source_format_class WritebackPixelFormat[],
+- double WritebackDestinationWidth[],
+- double WritebackDestinationHeight[],
+- double WritebackSourceHeight[],
+ bool UnboundedRequestEnabled,
+ int unsigned CompressedBufferSizeInkByte,
+ enum clock_change_support *DRAMClockChangeSupport,
+- double *UrgentWatermark,
+- double *WritebackUrgentWatermark,
+- double *DRAMClockChangeWatermark,
+- double *WritebackDRAMClockChangeWatermark,
+ double *StutterExitWatermark,
+ double *StutterEnterPlusExitWatermark,
+ double *Z8StutterExitWatermark,
+- double *Z8StutterEnterPlusExitWatermark,
+- double *MinActiveDRAMClockChangeLatencySupported);
++ double *Z8StutterEnterPlusExitWatermark);
+
+ static void CalculateDCFCLKDeepSleep(
+ struct display_mode_lib *mode_lib,
+@@ -2914,33 +2858,13 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+ CalculateFlipSchedule(
+ mode_lib,
++ k,
+ HostVMInefficiencyFactor,
+ v->UrgentExtraLatency,
+ v->UrgentLatency,
+- v->GPUVMMaxPageTableLevels,
+- v->HostVMEnable,
+- v->HostVMMaxNonCachedPageTableLevels,
+- v->GPUVMEnable,
+- v->HostVMMinPageSize,
+ v->PDEAndMetaPTEBytesFrame[k],
+ v->MetaRowByte[k],
+- v->PixelPTEBytesPerRow[k],
+- v->BandwidthAvailableForImmediateFlip,
+- v->TotImmediateFlipBytes,
+- v->SourcePixelFormat[k],
+- v->HTotal[k] / v->PixelClock[k],
+- v->VRatio[k],
+- v->VRatioChroma[k],
+- v->Tno_bw[k],
+- v->DCCEnable[k],
+- v->dpte_row_height[k],
+- v->meta_row_height[k],
+- v->dpte_row_height_chroma[k],
+- v->meta_row_height_chroma[k],
+- &v->DestinationLinesToRequestVMInImmediateFlip[k],
+- &v->DestinationLinesToRequestRowInImmediateFlip[k],
+- &v->final_flip_bw[k],
+- &v->ImmediateFlipSupportedForPipe[k]);
++ v->PixelPTEBytesPerRow[k]);
+ }
+
+ v->total_dcn_read_bw_with_flip = 0.0;
+@@ -3027,64 +2951,28 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ CalculateWatermarksAndDRAMSpeedChangeSupport(
+ mode_lib,
+ PrefetchMode,
+- v->NumberOfActivePlanes,
+- v->MaxLineBufferLines,
+- v->LineBufferSize,
+- v->WritebackInterfaceBufferSize,
+ v->DCFCLK,
+ v->ReturnBW,
+- v->SynchronizedVBlank,
+- v->dpte_group_bytes,
+- v->MetaChunkSize,
+ v->UrgentLatency,
+ v->UrgentExtraLatency,
+- v->WritebackLatency,
+- v->WritebackChunkSize,
+ v->SOCCLK,
+- v->DRAMClockChangeLatency,
+- v->SRExitTime,
+- v->SREnterPlusExitTime,
+- v->SRExitZ8Time,
+- v->SREnterPlusExitZ8Time,
+ v->DCFCLKDeepSleep,
+ v->DETBufferSizeY,
+ v->DETBufferSizeC,
+ v->SwathHeightY,
+ v->SwathHeightC,
+- v->LBBitPerPixel,
+ v->SwathWidthY,
+ v->SwathWidthC,
+- v->HRatio,
+- v->HRatioChroma,
+- v->vtaps,
+- v->VTAPsChroma,
+- v->VRatio,
+- v->VRatioChroma,
+- v->HTotal,
+- v->PixelClock,
+- v->BlendingAndTiming,
+ v->DPPPerPlane,
+ v->BytePerPixelDETY,
+ v->BytePerPixelDETC,
+- v->DSTXAfterScaler,
+- v->DSTYAfterScaler,
+- v->WritebackEnable,
+- v->WritebackPixelFormat,
+- v->WritebackDestinationWidth,
+- v->WritebackDestinationHeight,
+- v->WritebackSourceHeight,
+ v->UnboundedRequestEnabled,
+ v->CompressedBufferSizeInkByte,
+ &DRAMClockChangeSupport,
+- &v->UrgentWatermark,
+- &v->WritebackUrgentWatermark,
+- &v->DRAMClockChangeWatermark,
+- &v->WritebackDRAMClockChangeWatermark,
+ &v->StutterExitWatermark,
+ &v->StutterEnterPlusExitWatermark,
+ &v->Z8StutterExitWatermark,
+- &v->Z8StutterEnterPlusExitWatermark,
+- &v->MinActiveDRAMClockChangeLatencySupported);
++ &v->Z8StutterEnterPlusExitWatermark);
+
+ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+ if (v->WritebackEnable[k] == true) {
+@@ -3696,61 +3584,43 @@ static void CalculateRowBandwidth(
+
+ static void CalculateFlipSchedule(
+ struct display_mode_lib *mode_lib,
++ unsigned int k,
+ double HostVMInefficiencyFactor,
+ double UrgentExtraLatency,
+ double UrgentLatency,
+- unsigned int GPUVMMaxPageTableLevels,
+- bool HostVMEnable,
+- unsigned int HostVMMaxNonCachedPageTableLevels,
+- bool GPUVMEnable,
+- double HostVMMinPageSize,
+ double PDEAndMetaPTEBytesPerFrame,
+ double MetaRowBytes,
+- double DPTEBytesPerRow,
+- double BandwidthAvailableForImmediateFlip,
+- unsigned int TotImmediateFlipBytes,
+- enum source_format_class SourcePixelFormat,
+- double LineTime,
+- double VRatio,
+- double VRatioChroma,
+- double Tno_bw,
+- bool DCCEnable,
+- unsigned int dpte_row_height,
+- unsigned int meta_row_height,
+- unsigned int dpte_row_height_chroma,
+- unsigned int meta_row_height_chroma,
+- double *DestinationLinesToRequestVMInImmediateFlip,
+- double *DestinationLinesToRequestRowInImmediateFlip,
+- double *final_flip_bw,
+- bool *ImmediateFlipSupportedForPipe)
++ double DPTEBytesPerRow)
+ {
++ struct vba_vars_st *v = &mode_lib->vba;
+ double min_row_time = 0.0;
+ unsigned int HostVMDynamicLevelsTrips;
+ double TimeForFetchingMetaPTEImmediateFlip;
+ double TimeForFetchingRowInVBlankImmediateFlip;
+ double ImmediateFlipBW;
++ double LineTime = v->HTotal[k] / v->PixelClock[k];
+
+- if (GPUVMEnable == true && HostVMEnable == true) {
+- HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels;
++ if (v->GPUVMEnable == true && v->HostVMEnable == true) {
++ HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels;
+ } else {
+ HostVMDynamicLevelsTrips = 0;
+ }
+
+- if (GPUVMEnable == true || DCCEnable == true) {
+- ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + DPTEBytesPerRow) * BandwidthAvailableForImmediateFlip / TotImmediateFlipBytes;
++ if (v->GPUVMEnable == true || v->DCCEnable[k] == true) {
++ ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + DPTEBytesPerRow) * v->BandwidthAvailableForImmediateFlip / v->TotImmediateFlipBytes;
+ }
+
+- if (GPUVMEnable == true) {
++ if (v->GPUVMEnable == true) {
+ TimeForFetchingMetaPTEImmediateFlip = dml_max3(
+- Tno_bw + PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / ImmediateFlipBW,
+- UrgentExtraLatency + UrgentLatency * (GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1),
++ v->Tno_bw[k] + PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / ImmediateFlipBW,
++ UrgentExtraLatency + UrgentLatency * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1),
+ LineTime / 4.0);
+ } else {
+ TimeForFetchingMetaPTEImmediateFlip = 0;
+ }
+
+- *DestinationLinesToRequestVMInImmediateFlip = dml_ceil(4.0 * (TimeForFetchingMetaPTEImmediateFlip / LineTime), 1) / 4.0;
+- if ((GPUVMEnable == true || DCCEnable == true)) {
++ v->DestinationLinesToRequestVMInImmediateFlip[k] = dml_ceil(4.0 * (TimeForFetchingMetaPTEImmediateFlip / LineTime), 1) / 4.0;
++ if ((v->GPUVMEnable == true || v->DCCEnable[k] == true)) {
+ TimeForFetchingRowInVBlankImmediateFlip = dml_max3(
+ (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / ImmediateFlipBW,
+ UrgentLatency * (HostVMDynamicLevelsTrips + 1),
+@@ -3759,54 +3629,54 @@ static void CalculateFlipSchedule(
+ TimeForFetchingRowInVBlankImmediateFlip = 0;
+ }
+
+- *DestinationLinesToRequestRowInImmediateFlip = dml_ceil(4.0 * (TimeForFetchingRowInVBlankImmediateFlip / LineTime), 1) / 4.0;
++ v->DestinationLinesToRequestRowInImmediateFlip[k] = dml_ceil(4.0 * (TimeForFetchingRowInVBlankImmediateFlip / LineTime), 1) / 4.0;
+
+- if (GPUVMEnable == true) {
+- *final_flip_bw = dml_max(
+- PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / (*DestinationLinesToRequestVMInImmediateFlip * LineTime),
+- (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (*DestinationLinesToRequestRowInImmediateFlip * LineTime));
+- } else if ((GPUVMEnable == true || DCCEnable == true)) {
+- *final_flip_bw = (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (*DestinationLinesToRequestRowInImmediateFlip * LineTime);
++ if (v->GPUVMEnable == true) {
++ v->final_flip_bw[k] = dml_max(
++ PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / (v->DestinationLinesToRequestVMInImmediateFlip[k] * LineTime),
++ (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (v->DestinationLinesToRequestRowInImmediateFlip[k] * LineTime));
++ } else if ((v->GPUVMEnable == true || v->DCCEnable[k] == true)) {
++ v->final_flip_bw[k] = (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (v->DestinationLinesToRequestRowInImmediateFlip[k] * LineTime);
+ } else {
+- *final_flip_bw = 0;
++ v->final_flip_bw[k] = 0;
+ }
+
+- if (SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 || SourcePixelFormat == dm_rgbe_alpha) {
+- if (GPUVMEnable == true && DCCEnable != true) {
+- min_row_time = dml_min(dpte_row_height * LineTime / VRatio, dpte_row_height_chroma * LineTime / VRatioChroma);
+- } else if (GPUVMEnable != true && DCCEnable == true) {
+- min_row_time = dml_min(meta_row_height * LineTime / VRatio, meta_row_height_chroma * LineTime / VRatioChroma);
++ if (v->SourcePixelFormat[k] == dm_420_8 || v->SourcePixelFormat[k] == dm_420_10 || v->SourcePixelFormat[k] == dm_rgbe_alpha) {
++ if (v->GPUVMEnable == true && v->DCCEnable[k] != true) {
++ min_row_time = dml_min(v->dpte_row_height[k] * LineTime / v->VRatio[k], v->dpte_row_height_chroma[k] * LineTime / v->VRatioChroma[k]);
++ } else if (v->GPUVMEnable != true && v->DCCEnable[k] == true) {
++ min_row_time = dml_min(v->meta_row_height[k] * LineTime / v->VRatio[k], v->meta_row_height_chroma[k] * LineTime / v->VRatioChroma[k]);
+ } else {
+ min_row_time = dml_min4(
+- dpte_row_height * LineTime / VRatio,
+- meta_row_height * LineTime / VRatio,
+- dpte_row_height_chroma * LineTime / VRatioChroma,
+- meta_row_height_chroma * LineTime / VRatioChroma);
++ v->dpte_row_height[k] * LineTime / v->VRatio[k],
++ v->meta_row_height[k] * LineTime / v->VRatio[k],
++ v->dpte_row_height_chroma[k] * LineTime / v->VRatioChroma[k],
++ v->meta_row_height_chroma[k] * LineTime / v->VRatioChroma[k]);
+ }
+ } else {
+- if (GPUVMEnable == true && DCCEnable != true) {
+- min_row_time = dpte_row_height * LineTime / VRatio;
+- } else if (GPUVMEnable != true && DCCEnable == true) {
+- min_row_time = meta_row_height * LineTime / VRatio;
++ if (v->GPUVMEnable == true && v->DCCEnable[k] != true) {
++ min_row_time = v->dpte_row_height[k] * LineTime / v->VRatio[k];
++ } else if (v->GPUVMEnable != true && v->DCCEnable[k] == true) {
++ min_row_time = v->meta_row_height[k] * LineTime / v->VRatio[k];
+ } else {
+- min_row_time = dml_min(dpte_row_height * LineTime / VRatio, meta_row_height * LineTime / VRatio);
++ min_row_time = dml_min(v->dpte_row_height[k] * LineTime / v->VRatio[k], v->meta_row_height[k] * LineTime / v->VRatio[k]);
+ }
+ }
+
+- if (*DestinationLinesToRequestVMInImmediateFlip >= 32 || *DestinationLinesToRequestRowInImmediateFlip >= 16
++ if (v->DestinationLinesToRequestVMInImmediateFlip[k] >= 32 || v->DestinationLinesToRequestRowInImmediateFlip[k] >= 16
+ || TimeForFetchingMetaPTEImmediateFlip + 2 * TimeForFetchingRowInVBlankImmediateFlip > min_row_time) {
+- *ImmediateFlipSupportedForPipe = false;
++ v->ImmediateFlipSupportedForPipe[k] = false;
+ } else {
+- *ImmediateFlipSupportedForPipe = true;
++ v->ImmediateFlipSupportedForPipe[k] = true;
+ }
+
+ #ifdef __DML_VBA_DEBUG__
+- dml_print("DML::%s: DestinationLinesToRequestVMInImmediateFlip = %f\n", __func__, *DestinationLinesToRequestVMInImmediateFlip);
+- dml_print("DML::%s: DestinationLinesToRequestRowInImmediateFlip = %f\n", __func__, *DestinationLinesToRequestRowInImmediateFlip);
++ dml_print("DML::%s: DestinationLinesToRequestVMInImmediateFlip = %f\n", __func__, v->DestinationLinesToRequestVMInImmediateFlip[k]);
++ dml_print("DML::%s: DestinationLinesToRequestRowInImmediateFlip = %f\n", __func__, v->DestinationLinesToRequestRowInImmediateFlip[k]);
+ dml_print("DML::%s: TimeForFetchingMetaPTEImmediateFlip = %f\n", __func__, TimeForFetchingMetaPTEImmediateFlip);
+ dml_print("DML::%s: TimeForFetchingRowInVBlankImmediateFlip = %f\n", __func__, TimeForFetchingRowInVBlankImmediateFlip);
+ dml_print("DML::%s: min_row_time = %f\n", __func__, min_row_time);
+- dml_print("DML::%s: ImmediateFlipSupportedForPipe = %d\n", __func__, *ImmediateFlipSupportedForPipe);
++ dml_print("DML::%s: ImmediateFlipSupportedForPipe = %d\n", __func__, v->ImmediateFlipSupportedForPipe[k]);
+ #endif
+
+ }
+@@ -5397,33 +5267,13 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ for (k = 0; k < v->NumberOfActivePlanes; k++) {
+ CalculateFlipSchedule(
+ mode_lib,
++ k,
+ HostVMInefficiencyFactor,
+ v->ExtraLatency,
+ v->UrgLatency[i],
+- v->GPUVMMaxPageTableLevels,
+- v->HostVMEnable,
+- v->HostVMMaxNonCachedPageTableLevels,
+- v->GPUVMEnable,
+- v->HostVMMinPageSize,
+ v->PDEAndMetaPTEBytesPerFrame[i][j][k],
+ v->MetaRowBytes[i][j][k],
+- v->DPTEBytesPerRow[i][j][k],
+- v->BandwidthAvailableForImmediateFlip,
+- v->TotImmediateFlipBytes,
+- v->SourcePixelFormat[k],
+- v->HTotal[k] / v->PixelClock[k],
+- v->VRatio[k],
+- v->VRatioChroma[k],
+- v->Tno_bw[k],
+- v->DCCEnable[k],
+- v->dpte_row_height[k],
+- v->meta_row_height[k],
+- v->dpte_row_height_chroma[k],
+- v->meta_row_height_chroma[k],
+- &v->DestinationLinesToRequestVMInImmediateFlip[k],
+- &v->DestinationLinesToRequestRowInImmediateFlip[k],
+- &v->final_flip_bw[k],
+- &v->ImmediateFlipSupportedForPipe[k]);
++ v->DPTEBytesPerRow[i][j][k]);
+ }
+ v->total_dcn_read_bw_with_flip = 0.0;
+ for (k = 0; k < v->NumberOfActivePlanes; k++) {
+@@ -5481,64 +5331,28 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ CalculateWatermarksAndDRAMSpeedChangeSupport(
+ mode_lib,
+ v->PrefetchModePerState[i][j],
+- v->NumberOfActivePlanes,
+- v->MaxLineBufferLines,
+- v->LineBufferSize,
+- v->WritebackInterfaceBufferSize,
+ v->DCFCLKState[i][j],
+ v->ReturnBWPerState[i][j],
+- v->SynchronizedVBlank,
+- v->dpte_group_bytes,
+- v->MetaChunkSize,
+ v->UrgLatency[i],
+ v->ExtraLatency,
+- v->WritebackLatency,
+- v->WritebackChunkSize,
+ v->SOCCLKPerState[i],
+- v->DRAMClockChangeLatency,
+- v->SRExitTime,
+- v->SREnterPlusExitTime,
+- v->SRExitZ8Time,
+- v->SREnterPlusExitZ8Time,
+ v->ProjectedDCFCLKDeepSleep[i][j],
+ v->DETBufferSizeYThisState,
+ v->DETBufferSizeCThisState,
+ v->SwathHeightYThisState,
+ v->SwathHeightCThisState,
+- v->LBBitPerPixel,
+ v->SwathWidthYThisState,
+ v->SwathWidthCThisState,
+- v->HRatio,
+- v->HRatioChroma,
+- v->vtaps,
+- v->VTAPsChroma,
+- v->VRatio,
+- v->VRatioChroma,
+- v->HTotal,
+- v->PixelClock,
+- v->BlendingAndTiming,
+ v->NoOfDPPThisState,
+ v->BytePerPixelInDETY,
+ v->BytePerPixelInDETC,
+- v->DSTXAfterScaler,
+- v->DSTYAfterScaler,
+- v->WritebackEnable,
+- v->WritebackPixelFormat,
+- v->WritebackDestinationWidth,
+- v->WritebackDestinationHeight,
+- v->WritebackSourceHeight,
+ UnboundedRequestEnabledThisState,
+ CompressedBufferSizeInkByteThisState,
+ &v->DRAMClockChangeSupport[i][j],
+- &v->UrgentWatermark,
+- &v->WritebackUrgentWatermark,
+- &v->DRAMClockChangeWatermark,
+- &v->WritebackDRAMClockChangeWatermark,
+- &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+- &v->MinActiveDRAMClockChangeLatencySupported);
++ &dummy);
+ }
+ }
+
+@@ -5663,64 +5477,28 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+ struct display_mode_lib *mode_lib,
+ unsigned int PrefetchMode,
+- unsigned int NumberOfActivePlanes,
+- unsigned int MaxLineBufferLines,
+- unsigned int LineBufferSize,
+- unsigned int WritebackInterfaceBufferSize,
+ double DCFCLK,
+ double ReturnBW,
+- bool SynchronizedVBlank,
+- unsigned int dpte_group_bytes[],
+- unsigned int MetaChunkSize,
+ double UrgentLatency,
+ double ExtraLatency,
+- double WritebackLatency,
+- double WritebackChunkSize,
+ double SOCCLK,
+- double DRAMClockChangeLatency,
+- double SRExitTime,
+- double SREnterPlusExitTime,
+- double SRExitZ8Time,
+- double SREnterPlusExitZ8Time,
+ double DCFCLKDeepSleep,
+ unsigned int DETBufferSizeY[],
+ unsigned int DETBufferSizeC[],
+ unsigned int SwathHeightY[],
+ unsigned int SwathHeightC[],
+- unsigned int LBBitPerPixel[],
+ double SwathWidthY[],
+ double SwathWidthC[],
+- double HRatio[],
+- double HRatioChroma[],
+- unsigned int vtaps[],
+- unsigned int VTAPsChroma[],
+- double VRatio[],
+- double VRatioChroma[],
+- unsigned int HTotal[],
+- double PixelClock[],
+- unsigned int BlendingAndTiming[],
+ unsigned int DPPPerPlane[],
+ double BytePerPixelDETY[],
+ double BytePerPixelDETC[],
+- double DSTXAfterScaler[],
+- double DSTYAfterScaler[],
+- bool WritebackEnable[],
+- enum source_format_class WritebackPixelFormat[],
+- double WritebackDestinationWidth[],
+- double WritebackDestinationHeight[],
+- double WritebackSourceHeight[],
+ bool UnboundedRequestEnabled,
+ int unsigned CompressedBufferSizeInkByte,
+ enum clock_change_support *DRAMClockChangeSupport,
+- double *UrgentWatermark,
+- double *WritebackUrgentWatermark,
+- double *DRAMClockChangeWatermark,
+- double *WritebackDRAMClockChangeWatermark,
+ double *StutterExitWatermark,
+ double *StutterEnterPlusExitWatermark,
+ double *Z8StutterExitWatermark,
+- double *Z8StutterEnterPlusExitWatermark,
+- double *MinActiveDRAMClockChangeLatencySupported)
++ double *Z8StutterEnterPlusExitWatermark)
+ {
+ struct vba_vars_st *v = &mode_lib->vba;
+ double EffectiveLBLatencyHidingY;
+@@ -5740,103 +5518,103 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+ double TotalPixelBW = 0.0;
+ int k, j;
+
+- *UrgentWatermark = UrgentLatency + ExtraLatency;
++ v->UrgentWatermark = UrgentLatency + ExtraLatency;
+
+ #ifdef __DML_VBA_DEBUG__
+ dml_print("DML::%s: UrgentLatency = %f\n", __func__, UrgentLatency);
+ dml_print("DML::%s: ExtraLatency = %f\n", __func__, ExtraLatency);
+- dml_print("DML::%s: UrgentWatermark = %f\n", __func__, *UrgentWatermark);
++ dml_print("DML::%s: UrgentWatermark = %f\n", __func__, v->UrgentWatermark);
+ #endif
+
+- *DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark;
++ v->DRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->UrgentWatermark;
+
+ #ifdef __DML_VBA_DEBUG__
+- dml_print("DML::%s: DRAMClockChangeLatency = %f\n", __func__, DRAMClockChangeLatency);
+- dml_print("DML::%s: DRAMClockChangeWatermark = %f\n", __func__, *DRAMClockChangeWatermark);
++ dml_print("DML::%s: v->DRAMClockChangeLatency = %f\n", __func__, v->DRAMClockChangeLatency);
++ dml_print("DML::%s: DRAMClockChangeWatermark = %f\n", __func__, v->DRAMClockChangeWatermark);
+ #endif
+
+ v->TotalActiveWriteback = 0;
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
+- if (WritebackEnable[k] == true) {
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
++ if (v->WritebackEnable[k] == true) {
+ v->TotalActiveWriteback = v->TotalActiveWriteback + 1;
+ }
+ }
+
+ if (v->TotalActiveWriteback <= 1) {
+- *WritebackUrgentWatermark = WritebackLatency;
++ v->WritebackUrgentWatermark = v->WritebackLatency;
+ } else {
+- *WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
++ v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+ }
+
+ if (v->TotalActiveWriteback <= 1) {
+- *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency;
++ v->WritebackDRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->WritebackLatency;
+ } else {
+- *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
++ v->WritebackDRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+ }
+
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+ TotalPixelBW = TotalPixelBW
+- + DPPPerPlane[k] * (SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k] + SwathWidthC[k] * BytePerPixelDETC[k] * VRatioChroma[k])
+- / (HTotal[k] / PixelClock[k]);
++ + DPPPerPlane[k] * (SwathWidthY[k] * BytePerPixelDETY[k] * v->VRatio[k] + SwathWidthC[k] * BytePerPixelDETC[k] * v->VRatioChroma[k])
++ / (v->HTotal[k] / v->PixelClock[k]);
+ }
+
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+ double EffectiveDETBufferSizeY = DETBufferSizeY[k];
+
+ v->LBLatencyHidingSourceLinesY = dml_min(
+- (double) MaxLineBufferLines,
+- dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1);
++ (double) v->MaxLineBufferLines,
++ dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1);
+
+ v->LBLatencyHidingSourceLinesC = dml_min(
+- (double) MaxLineBufferLines,
+- dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1);
++ (double) v->MaxLineBufferLines,
++ dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1);
+
+- EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]);
++ EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]);
+
+- EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]);
++ EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]);
+
+ if (UnboundedRequestEnabled) {
+ EffectiveDETBufferSizeY = EffectiveDETBufferSizeY
+- + CompressedBufferSizeInkByte * 1024 * SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k] / (HTotal[k] / PixelClock[k]) / TotalPixelBW;
++ + CompressedBufferSizeInkByte * 1024 * SwathWidthY[k] * BytePerPixelDETY[k] * v->VRatio[k] / (v->HTotal[k] / v->PixelClock[k]) / TotalPixelBW;
+ }
+
+ LinesInDETY[k] = (double) EffectiveDETBufferSizeY / BytePerPixelDETY[k] / SwathWidthY[k];
+ LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]);
+- FullDETBufferingTimeY = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k];
++ FullDETBufferingTimeY = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k];
+ if (BytePerPixelDETC[k] > 0) {
+ LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
+ LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]);
+- FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k];
++ FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k];
+ } else {
+ LinesInDETC = 0;
+ FullDETBufferingTimeC = 999999;
+ }
+
+ ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY
+- - ((double) DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k] / PixelClock[k] - *UrgentWatermark - *DRAMClockChangeWatermark;
++ - ((double) v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) * v->HTotal[k] / v->PixelClock[k] - v->UrgentWatermark - v->DRAMClockChangeWatermark;
+
+- if (NumberOfActivePlanes > 1) {
++ if (v->NumberOfActivePlanes > 1) {
+ ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY
+- - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k];
++ - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k];
+ }
+
+ if (BytePerPixelDETC[k] > 0) {
+ ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC
+- - ((double) DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k] / PixelClock[k] - *UrgentWatermark - *DRAMClockChangeWatermark;
++ - ((double) v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) * v->HTotal[k] / v->PixelClock[k] - v->UrgentWatermark - v->DRAMClockChangeWatermark;
+
+- if (NumberOfActivePlanes > 1) {
++ if (v->NumberOfActivePlanes > 1) {
+ ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC
+- - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k];
++ - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k];
+ }
+ v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC);
+ } else {
+ v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY;
+ }
+
+- if (WritebackEnable[k] == true) {
+- WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024
+- / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4);
+- if (WritebackPixelFormat[k] == dm_444_64) {
++ if (v->WritebackEnable[k] == true) {
++ WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024
++ / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4);
++ if (v->WritebackPixelFormat[k] == dm_444_64) {
+ WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2;
+ }
+ WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark;
+@@ -5846,14 +5624,14 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+
+ v->MinActiveDRAMClockChangeMargin = 999999;
+ PlaneWithMinActiveDRAMClockChangeMargin = 0;
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+ if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) {
+ v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k];
+- if (BlendingAndTiming[k] == k) {
++ if (v->BlendingAndTiming[k] == k) {
+ PlaneWithMinActiveDRAMClockChangeMargin = k;
+ } else {
+- for (j = 0; j < NumberOfActivePlanes; ++j) {
+- if (BlendingAndTiming[k] == j) {
++ for (j = 0; j < v->NumberOfActivePlanes; ++j) {
++ if (v->BlendingAndTiming[k] == j) {
+ PlaneWithMinActiveDRAMClockChangeMargin = j;
+ }
+ }
+@@ -5861,11 +5639,11 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+ }
+ }
+
+- *MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency;
++ v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->DRAMClockChangeLatency ;
+
+ SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999;
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
+- if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin)
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
++ if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin)
+ && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) {
+ SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k];
+ }
+@@ -5873,25 +5651,25 @@ static void CalculateWatermarksAndDRAMSpeedChangeSupport(
+
+ v->TotalNumberOfActiveOTG = 0;
+
+- for (k = 0; k < NumberOfActivePlanes; ++k) {
+- if (BlendingAndTiming[k] == k) {
++ for (k = 0; k < v->NumberOfActivePlanes; ++k) {
++ if (v->BlendingAndTiming[k] == k) {
+ v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1;
+ }
+ }
+
+ if (v->MinActiveDRAMClockChangeMargin > 0 && PrefetchMode == 0) {
+ *DRAMClockChangeSupport = dm_dram_clock_change_vactive;
+- } else if ((SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1
++ } else if ((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1
+ || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0) {
+ *DRAMClockChangeSupport = dm_dram_clock_change_vblank;
+ } else {
+ *DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
+ }
+
+- *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
+- *StutterEnterPlusExitWatermark = (SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep);
+- *Z8StutterExitWatermark = SRExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep;
+- *Z8StutterEnterPlusExitWatermark = SREnterPlusExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep;
++ *StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
++ *StutterEnterPlusExitWatermark = (v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep);
++ *Z8StutterExitWatermark = v->SRExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep;
++ *Z8StutterEnterPlusExitWatermark = v->SREnterPlusExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep;
+
+ #ifdef __DML_VBA_DEBUG__
+ dml_print("DML::%s: StutterExitWatermark = %f\n", __func__, *StutterExitWatermark);
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index 64a38f08f4974..5a51be753e87f 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -1603,6 +1603,7 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
+ struct fixed31_32 lut2;
+ struct fixed31_32 delta_lut;
+ struct fixed31_32 delta_index;
++ const struct fixed31_32 one = dc_fixpt_from_int(1);
+
+ i = 0;
+ /* fixed_pt library has problems handling too small values */
+@@ -1631,6 +1632,9 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
+ } else
+ hw_x = coordinates_x[i].x;
+
++ if (dc_fixpt_le(one, hw_x))
++ hw_x = one;
++
+ norm_x = dc_fixpt_mul(norm_factor, hw_x);
+ index = dc_fixpt_floor(norm_x);
+ if (index < 0 || index > 255)
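[Note: interpolate_user_regamma() now clamps hw_x to 1.0 in fixed-point before computing norm_x, so inputs at or beyond the end of the curve cannot push the interpolation index past the 256-entry LUT that the subsequent range check guards.]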
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 32bb6b1d95261..d13e455c8827e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -368,6 +368,17 @@ static void sienna_cichlid_check_bxco_support(struct smu_context *smu)
+ smu_baco->platform_support =
+ (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
+ false;
++
++ /*
++ * Disable BACO entry/exit completely on below SKUs to
++ * avoid hardware intermittent failures.
++ */
++ if (((adev->pdev->device == 0x73A1) &&
++ (adev->pdev->revision == 0x00)) ||
++ ((adev->pdev->device == 0x73BF) &&
++ (adev->pdev->revision == 0xCF)))
++ smu_baco->platform_support = false;
++
+ }
+ }
+
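[Note: the sienna_cichlid change is a plain quirk: BACO support is force-disabled for two specific device/revision pairs (0x73A1 rev 0x00 and 0x73BF rev 0xCF) that the comment says fail intermittently in hardware, overriding whatever the strap register advertised.]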
+diff --git a/drivers/gpu/drm/gma500/cdv_device.c b/drivers/gpu/drm/gma500/cdv_device.c
+index dd32b484dd825..ce96234f3df20 100644
+--- a/drivers/gpu/drm/gma500/cdv_device.c
++++ b/drivers/gpu/drm/gma500/cdv_device.c
+@@ -581,11 +581,9 @@ static const struct psb_offset cdv_regmap[2] = {
+ static int cdv_chip_setup(struct drm_device *dev)
+ {
+ struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
+- struct pci_dev *pdev = to_pci_dev(dev->dev);
+ INIT_WORK(&dev_priv->hotplug_work, cdv_hotplug_work_func);
+
+- if (pci_enable_msi(pdev))
+- dev_warn(dev->dev, "Enabling MSI failed!\n");
++ dev_priv->use_msi = true;
+ dev_priv->regmap = cdv_regmap;
+ gma_get_core_freq(dev);
+ psb_intel_opregion_init(dev);
+diff --git a/drivers/gpu/drm/gma500/gem.c b/drivers/gpu/drm/gma500/gem.c
+index dffe37490206d..4b7627a726378 100644
+--- a/drivers/gpu/drm/gma500/gem.c
++++ b/drivers/gpu/drm/gma500/gem.c
+@@ -112,12 +112,12 @@ static void psb_gem_free_object(struct drm_gem_object *obj)
+ {
+ struct psb_gem_object *pobj = to_psb_gem_object(obj);
+
+- drm_gem_object_release(obj);
+-
+ /* Undo the mmap pin if we are destroying the object */
+ if (pobj->mmapping)
+ psb_gem_unpin(pobj);
+
++ drm_gem_object_release(obj);
++
+ WARN_ON(pobj->in_gart && !pobj->stolen);
+
+ release_resource(&pobj->resource);
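[Note: in psb_gem_free_object() the mmap unpin now runs before drm_gem_object_release(), so the pin teardown still operates on a live GEM object rather than one whose backing state was already released.]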
+diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
+index 34ec3fca09ba6..12287c9bb4d80 100644
+--- a/drivers/gpu/drm/gma500/gma_display.c
++++ b/drivers/gpu/drm/gma500/gma_display.c
+@@ -531,15 +531,18 @@ int gma_crtc_page_flip(struct drm_crtc *crtc,
+ WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+
+ gma_crtc->page_flip_event = event;
++ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ /* Call this locked if we want an event at vblank interrupt. */
+ ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
+ if (ret) {
+- gma_crtc->page_flip_event = NULL;
+- drm_crtc_vblank_put(crtc);
++ spin_lock_irqsave(&dev->event_lock, flags);
++ if (gma_crtc->page_flip_event) {
++ gma_crtc->page_flip_event = NULL;
++ drm_crtc_vblank_put(crtc);
++ }
++ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+-
+- spin_unlock_irqrestore(&dev->event_lock, flags);
+ } else {
+ ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
+ }
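[Note: the page-flip fix drops dev->event_lock before calling mode_set_base(), which is not safe to run under that spinlock, and re-takes it only on failure; the NULL check before undoing the flip guards against the vblank interrupt having already consumed page_flip_event in the window where the lock was released.]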
+diff --git a/drivers/gpu/drm/gma500/oaktrail_device.c b/drivers/gpu/drm/gma500/oaktrail_device.c
+index 5923a9c893122..f90e628cb482c 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_device.c
++++ b/drivers/gpu/drm/gma500/oaktrail_device.c
+@@ -501,12 +501,9 @@ static const struct psb_offset oaktrail_regmap[2] = {
+ static int oaktrail_chip_setup(struct drm_device *dev)
+ {
+ struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
+- struct pci_dev *pdev = to_pci_dev(dev->dev);
+ int ret;
+
+- if (pci_enable_msi(pdev))
+- dev_warn(dev->dev, "Enabling MSI failed!\n");
+-
++ dev_priv->use_msi = true;
+ dev_priv->regmap = oaktrail_regmap;
+
+ ret = mid_chip_setup(dev);
+diff --git a/drivers/gpu/drm/gma500/power.c b/drivers/gpu/drm/gma500/power.c
+index b91de6d36e412..66873085d4505 100644
+--- a/drivers/gpu/drm/gma500/power.c
++++ b/drivers/gpu/drm/gma500/power.c
+@@ -139,8 +139,6 @@ static void gma_suspend_pci(struct pci_dev *pdev)
+ dev_priv->regs.saveBSM = bsm;
+ pci_read_config_dword(pdev, 0xFC, &vbt);
+ dev_priv->regs.saveVBT = vbt;
+- pci_read_config_dword(pdev, PSB_PCIx_MSI_ADDR_LOC, &dev_priv->msi_addr);
+- pci_read_config_dword(pdev, PSB_PCIx_MSI_DATA_LOC, &dev_priv->msi_data);
+
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, PCI_D3hot);
+@@ -168,9 +166,6 @@ static bool gma_resume_pci(struct pci_dev *pdev)
+ pci_restore_state(pdev);
+ pci_write_config_dword(pdev, 0x5c, dev_priv->regs.saveBSM);
+ pci_write_config_dword(pdev, 0xFC, dev_priv->regs.saveVBT);
+- /* restoring MSI address and data in PCIx space */
+- pci_write_config_dword(pdev, PSB_PCIx_MSI_ADDR_LOC, dev_priv->msi_addr);
+- pci_write_config_dword(pdev, PSB_PCIx_MSI_DATA_LOC, dev_priv->msi_data);
+ ret = pci_enable_device(pdev);
+
+ if (ret != 0)
+@@ -223,8 +218,7 @@ int gma_power_resume(struct device *_dev)
+ mutex_lock(&power_mutex);
+ gma_resume_pci(pdev);
+ gma_resume_display(pdev);
+- gma_irq_preinstall(dev);
+- gma_irq_postinstall(dev);
++ gma_irq_install(dev);
+ mutex_unlock(&power_mutex);
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c
+index 1d8744f3e7020..54e756b486060 100644
+--- a/drivers/gpu/drm/gma500/psb_drv.c
++++ b/drivers/gpu/drm/gma500/psb_drv.c
+@@ -383,7 +383,7 @@ static int psb_driver_load(struct drm_device *dev, unsigned long flags)
+ PSB_WVDC32(0xFFFFFFFF, PSB_INT_MASK_R);
+ spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);
+
+- gma_irq_install(dev, pdev->irq);
++ gma_irq_install(dev);
+
+ dev->max_vblank_count = 0xffffff; /* only 24 bits of frame count */
+
+diff --git a/drivers/gpu/drm/gma500/psb_drv.h b/drivers/gpu/drm/gma500/psb_drv.h
+index 0ddfec1a0851d..4c3fc5eaf6ad5 100644
+--- a/drivers/gpu/drm/gma500/psb_drv.h
++++ b/drivers/gpu/drm/gma500/psb_drv.h
+@@ -490,6 +490,7 @@ struct drm_psb_private {
+ int rpm_enabled;
+
+ /* MID specific */
++ bool use_msi;
+ bool has_gct;
+ struct oaktrail_gct_data gct_data;
+
+@@ -499,10 +500,6 @@ struct drm_psb_private {
+ /* Register state */
+ struct psb_save_area regs;
+
+- /* MSI reg save */
+- uint32_t msi_addr;
+- uint32_t msi_data;
+-
+ /* Hotplug handling */
+ struct work_struct hotplug_work;
+
+diff --git a/drivers/gpu/drm/gma500/psb_irq.c b/drivers/gpu/drm/gma500/psb_irq.c
+index e6e6d61bbeab6..038f18ed0a95e 100644
+--- a/drivers/gpu/drm/gma500/psb_irq.c
++++ b/drivers/gpu/drm/gma500/psb_irq.c
+@@ -316,17 +316,24 @@ void gma_irq_postinstall(struct drm_device *dev)
+ spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);
+ }
+
+-int gma_irq_install(struct drm_device *dev, unsigned int irq)
++int gma_irq_install(struct drm_device *dev)
+ {
++ struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
++ struct pci_dev *pdev = to_pci_dev(dev->dev);
+ int ret;
+
+- if (irq == IRQ_NOTCONNECTED)
++ if (dev_priv->use_msi && pci_enable_msi(pdev)) {
++ dev_warn(dev->dev, "Enabling MSI failed!\n");
++ dev_priv->use_msi = false;
++ }
++
++ if (pdev->irq == IRQ_NOTCONNECTED)
+ return -ENOTCONN;
+
+ gma_irq_preinstall(dev);
+
+ /* PCI devices require shared interrupts. */
+- ret = request_irq(irq, gma_irq_handler, IRQF_SHARED, dev->driver->name, dev);
++ ret = request_irq(pdev->irq, gma_irq_handler, IRQF_SHARED, dev->driver->name, dev);
+ if (ret)
+ return ret;
+
+@@ -369,6 +376,8 @@ void gma_irq_uninstall(struct drm_device *dev)
+ spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);
+
+ free_irq(pdev->irq, dev);
++ if (dev_priv->use_msi)
++ pci_disable_msi(pdev);
+ }
+
+ int gma_crtc_enable_vblank(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/gma500/psb_irq.h b/drivers/gpu/drm/gma500/psb_irq.h
+index b51e395194fff..7648f69824a5d 100644
+--- a/drivers/gpu/drm/gma500/psb_irq.h
++++ b/drivers/gpu/drm/gma500/psb_irq.h
+@@ -17,7 +17,7 @@ struct drm_device;
+
+ void gma_irq_preinstall(struct drm_device *dev);
+ void gma_irq_postinstall(struct drm_device *dev);
+-int gma_irq_install(struct drm_device *dev, unsigned int irq);
++int gma_irq_install(struct drm_device *dev);
+ void gma_irq_uninstall(struct drm_device *dev);
+
+ int gma_crtc_enable_vblank(struct drm_crtc *crtc);
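[Note: the gma500 series above centralizes interrupt setup: chip-setup code now only records a use_msi policy flag, and gma_irq_install()/gma_irq_uninstall() own enabling and disabling MSI around request_irq()/free_irq(), taking the IRQ number from the PCI device itself. That also lets suspend/resume drop the hand-rolled save/restore of the MSI address/data config words, since re-running gma_irq_install() re-enables MSI from scratch. A condensed sketch of the install path, close to the hunk, with a hypothetical handler name:

	int example_irq_install(struct drm_device *dev)
	{
		struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
		struct pci_dev *pdev = to_pci_dev(dev->dev);

		if (dev_priv->use_msi && pci_enable_msi(pdev)) {
			dev_warn(dev->dev, "Enabling MSI failed!\n");
			dev_priv->use_msi = false;  /* fall back to legacy INTx */
		}

		if (pdev->irq == IRQ_NOTCONNECTED)
			return -ENOTCONN;

		return request_irq(pdev->irq, example_irq_handler, IRQF_SHARED,
				   dev->driver->name, dev);
	}
]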
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/Kconfig b/drivers/gpu/drm/hisilicon/hibmc/Kconfig
+index 073adfe438ddd..4e41c144a2902 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/Kconfig
++++ b/drivers/gpu/drm/hisilicon/hibmc/Kconfig
+@@ -2,6 +2,7 @@
+ config DRM_HISI_HIBMC
+ tristate "DRM Support for Hisilicon Hibmc"
+ depends on DRM && PCI && (ARM64 || COMPILE_TEST)
++ depends on MMU
+ select DRM_KMS_HELPER
+ select DRM_VRAM_HELPER
+ select DRM_TTM
+diff --git a/drivers/gpu/drm/i915/display/g4x_dp.c b/drivers/gpu/drm/i915/display/g4x_dp.c
+index 5a957acebfd62..82ad8fe7440c0 100644
+--- a/drivers/gpu/drm/i915/display/g4x_dp.c
++++ b/drivers/gpu/drm/i915/display/g4x_dp.c
+@@ -395,26 +395,8 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
+ intel_dotclock_calculate(pipe_config->port_clock,
+ &pipe_config->dp_m_n);
+
+- if (intel_dp_is_edp(intel_dp) && dev_priv->vbt.edp.bpp &&
+- pipe_config->pipe_bpp > dev_priv->vbt.edp.bpp) {
+- /*
+- * This is a big fat ugly hack.
+- *
+- * Some machines in UEFI boot mode provide us a VBT that has 18
+- * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
+- * unknown we fail to light up. Yet the same BIOS boots up with
+- * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
+- * max, not what it tells us to use.
+- *
+- * Note: This will still be broken if the eDP panel is not lit
+- * up by the BIOS, and thus we can't get the mode at module
+- * load.
+- */
+- drm_dbg_kms(&dev_priv->drm,
+- "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+- pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
+- dev_priv->vbt.edp.bpp = pipe_config->pipe_bpp;
+- }
++ if (intel_dp_is_edp(intel_dp))
++ intel_edp_fixup_vbt_bpp(encoder, pipe_config->pipe_bpp);
+ }
+
+ static void
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 5508ebb9eb434..f416499dad6f3 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1864,7 +1864,8 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
++ struct intel_connector *connector = intel_dsi->attached_connector;
++ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+ u32 tlpx_ns;
+ u32 prepare_cnt, exit_zero_cnt, clk_zero_cnt, trail_cnt;
+ u32 ths_prepare_ns, tclk_trail_ns;
+@@ -2051,6 +2052,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
+ /* attach connector to encoder */
+ intel_connector_attach_encoder(intel_connector, encoder);
+
++ intel_bios_init_panel(dev_priv, &intel_connector->panel);
++
+ mutex_lock(&dev->mode_config.mutex);
+ intel_panel_add_vbt_lfp_fixed_mode(intel_connector);
+ mutex_unlock(&dev->mode_config.mutex);
+@@ -2064,13 +2067,20 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
+
+ intel_backlight_setup(intel_connector, INVALID_PIPE);
+
+- if (dev_priv->vbt.dsi.config->dual_link)
++ if (intel_connector->panel.vbt.dsi.config->dual_link)
+ intel_dsi->ports = BIT(PORT_A) | BIT(PORT_B);
+ else
+ intel_dsi->ports = BIT(port);
+
+- intel_dsi->dcs_backlight_ports = dev_priv->vbt.dsi.bl_ports;
+- intel_dsi->dcs_cabc_ports = dev_priv->vbt.dsi.cabc_ports;
++ if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
++ intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
++
++ intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
++
++ if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
++ intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
++
++ intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ struct intel_dsi_host *host;
+diff --git a/drivers/gpu/drm/i915/display/intel_backlight.c b/drivers/gpu/drm/i915/display/intel_backlight.c
+index 3e200a2e4ba29..5182bb66bd289 100644
+--- a/drivers/gpu/drm/i915/display/intel_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_backlight.c
+@@ -1158,9 +1158,10 @@ static u32 vlv_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ return DIV_ROUND_CLOSEST(clock, pwm_freq_hz * mul);
+ }
+
+-static u16 get_vbt_pwm_freq(struct drm_i915_private *dev_priv)
++static u16 get_vbt_pwm_freq(struct intel_connector *connector)
+ {
+- u16 pwm_freq_hz = dev_priv->vbt.backlight.pwm_freq_hz;
++ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
++ u16 pwm_freq_hz = connector->panel.vbt.backlight.pwm_freq_hz;
+
+ if (pwm_freq_hz) {
+ drm_dbg_kms(&dev_priv->drm,
+@@ -1180,7 +1181,7 @@ static u32 get_backlight_max_vbt(struct intel_connector *connector)
+ {
+ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ struct intel_panel *panel = &connector->panel;
+- u16 pwm_freq_hz = get_vbt_pwm_freq(dev_priv);
++ u16 pwm_freq_hz = get_vbt_pwm_freq(connector);
+ u32 pwm;
+
+ if (!panel->backlight.pwm_funcs->hz_to_pwm) {
+@@ -1217,11 +1218,11 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector)
+ * against this by letting the minimum be at most (arbitrarily chosen)
+ * 25% of the max.
+ */
+- min = clamp_t(int, dev_priv->vbt.backlight.min_brightness, 0, 64);
+- if (min != dev_priv->vbt.backlight.min_brightness) {
++ min = clamp_t(int, connector->panel.vbt.backlight.min_brightness, 0, 64);
++ if (min != connector->panel.vbt.backlight.min_brightness) {
+ drm_dbg_kms(&dev_priv->drm,
+ "clamping VBT min backlight %d/255 to %d/255\n",
+- dev_priv->vbt.backlight.min_brightness, min);
++ connector->panel.vbt.backlight.min_brightness, min);
+ }
+
+ /* vbt value is a coefficient in range [0..255] */
+@@ -1410,7 +1411,7 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
+ struct intel_panel *panel = &connector->panel;
+ u32 pwm_ctl, val;
+
+- panel->backlight.controller = dev_priv->vbt.backlight.controller;
++ panel->backlight.controller = connector->panel.vbt.backlight.controller;
+
+ pwm_ctl = intel_de_read(dev_priv,
+ BXT_BLC_PWM_CTL(panel->backlight.controller));
+@@ -1483,7 +1484,7 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector,
+ u32 level;
+
+ /* Get the right PWM chip for DSI backlight according to VBT */
+- if (dev_priv->vbt.dsi.config->pwm_blc == PPS_BLC_PMIC) {
++ if (connector->panel.vbt.dsi.config->pwm_blc == PPS_BLC_PMIC) {
+ panel->backlight.pwm = pwm_get(dev->dev, "pwm_pmic_backlight");
+ desc = "PMIC";
+ } else {
+@@ -1512,11 +1513,11 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector,
+
+ drm_dbg_kms(&dev_priv->drm, "PWM already enabled at freq %ld, VBT freq %d, level %d\n",
+ NSEC_PER_SEC / (unsigned long)panel->backlight.pwm_state.period,
+- get_vbt_pwm_freq(dev_priv), level);
++ get_vbt_pwm_freq(connector), level);
+ } else {
+ /* Set period from VBT frequency, leave other settings at 0. */
+ panel->backlight.pwm_state.period =
+- NSEC_PER_SEC / get_vbt_pwm_freq(dev_priv);
++ NSEC_PER_SEC / get_vbt_pwm_freq(connector);
+ }
+
+ drm_info(&dev_priv->drm, "Using %s PWM for LCD backlight control\n",
+@@ -1601,7 +1602,7 @@ int intel_backlight_setup(struct intel_connector *connector, enum pipe pipe)
+ struct intel_panel *panel = &connector->panel;
+ int ret;
+
+- if (!dev_priv->vbt.backlight.present) {
++ if (!connector->panel.vbt.backlight.present) {
+ if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) {
+ drm_dbg_kms(&dev_priv->drm,
+ "no backlight present per VBT, but present per quirk\n");
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 91caf4523b34d..b5de61fe9cc67 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -682,7 +682,8 @@ static int get_panel_type(struct drm_i915_private *i915)
+
+ /* Parse general panel options */
+ static void
+-parse_panel_options(struct drm_i915_private *i915)
++parse_panel_options(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_lvds_options *lvds_options;
+ int panel_type;
+@@ -692,11 +693,11 @@ parse_panel_options(struct drm_i915_private *i915)
+ if (!lvds_options)
+ return;
+
+- i915->vbt.lvds_dither = lvds_options->pixel_dither;
++ panel->vbt.lvds_dither = lvds_options->pixel_dither;
+
+ panel_type = get_panel_type(i915);
+
+- i915->vbt.panel_type = panel_type;
++ panel->vbt.panel_type = panel_type;
+
+ drrs_mode = (lvds_options->dps_panel_type_bits
+ >> (panel_type * 2)) & MODE_MASK;
+@@ -707,16 +708,16 @@ parse_panel_options(struct drm_i915_private *i915)
+ */
+ switch (drrs_mode) {
+ case 0:
+- i915->vbt.drrs_type = DRRS_TYPE_STATIC;
++ panel->vbt.drrs_type = DRRS_TYPE_STATIC;
+ drm_dbg_kms(&i915->drm, "DRRS supported mode is static\n");
+ break;
+ case 2:
+- i915->vbt.drrs_type = DRRS_TYPE_SEAMLESS;
++ panel->vbt.drrs_type = DRRS_TYPE_SEAMLESS;
+ drm_dbg_kms(&i915->drm,
+ "DRRS supported mode is seamless\n");
+ break;
+ default:
+- i915->vbt.drrs_type = DRRS_TYPE_NONE;
++ panel->vbt.drrs_type = DRRS_TYPE_NONE;
+ drm_dbg_kms(&i915->drm,
+ "DRRS not supported (VBT input)\n");
+ break;
+@@ -725,13 +726,14 @@ parse_panel_options(struct drm_i915_private *i915)
+
+ static void
+ parse_lfp_panel_dtd(struct drm_i915_private *i915,
++ struct intel_panel *panel,
+ const struct bdb_lvds_lfp_data *lvds_lfp_data,
+ const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs)
+ {
+ const struct lvds_dvo_timing *panel_dvo_timing;
+ const struct lvds_fp_timing *fp_timing;
+ struct drm_display_mode *panel_fixed_mode;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+
+ panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data,
+ lvds_lfp_data_ptrs,
+@@ -743,7 +745,7 @@ parse_lfp_panel_dtd(struct drm_i915_private *i915,
+
+ fill_detail_timing_data(panel_fixed_mode, panel_dvo_timing);
+
+- i915->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
++ panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
+
+ drm_dbg_kms(&i915->drm,
+ "Found panel mode in BIOS VBT legacy lfp table: " DRM_MODE_FMT "\n",
+@@ -756,20 +758,21 @@ parse_lfp_panel_dtd(struct drm_i915_private *i915,
+ /* check the resolution, just to be sure */
+ if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
+ fp_timing->y_res == panel_fixed_mode->vdisplay) {
+- i915->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
++ panel->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
+ drm_dbg_kms(&i915->drm,
+ "VBT initial LVDS value %x\n",
+- i915->vbt.bios_lvds_val);
++ panel->vbt.bios_lvds_val);
+ }
+ }
+
+ static void
+-parse_lfp_data(struct drm_i915_private *i915)
++parse_lfp_data(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_lvds_lfp_data *data;
+ const struct bdb_lvds_lfp_data_tail *tail;
+ const struct bdb_lvds_lfp_data_ptrs *ptrs;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+
+ ptrs = find_section(i915, BDB_LVDS_LFP_DATA_PTRS);
+ if (!ptrs)
+@@ -779,24 +782,25 @@ parse_lfp_data(struct drm_i915_private *i915)
+ if (!data)
+ return;
+
+- if (!i915->vbt.lfp_lvds_vbt_mode)
+- parse_lfp_panel_dtd(i915, data, ptrs);
++ if (!panel->vbt.lfp_lvds_vbt_mode)
++ parse_lfp_panel_dtd(i915, panel, data, ptrs);
+
+ tail = get_lfp_data_tail(data, ptrs);
+ if (!tail)
+ return;
+
+ if (i915->vbt.version >= 188) {
+- i915->vbt.seamless_drrs_min_refresh_rate =
++ panel->vbt.seamless_drrs_min_refresh_rate =
+ tail->seamless_drrs_min_refresh_rate[panel_type];
+ drm_dbg_kms(&i915->drm,
+ "Seamless DRRS min refresh rate: %d Hz\n",
+- i915->vbt.seamless_drrs_min_refresh_rate);
++ panel->vbt.seamless_drrs_min_refresh_rate);
+ }
+ }
+
+ static void
+-parse_generic_dtd(struct drm_i915_private *i915)
++parse_generic_dtd(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_generic_dtd *generic_dtd;
+ const struct generic_dtd_entry *dtd;
+@@ -831,14 +835,14 @@ parse_generic_dtd(struct drm_i915_private *i915)
+
+ num_dtd = (get_blocksize(generic_dtd) -
+ sizeof(struct bdb_generic_dtd)) / generic_dtd->gdtd_size;
+- if (i915->vbt.panel_type >= num_dtd) {
++ if (panel->vbt.panel_type >= num_dtd) {
+ drm_err(&i915->drm,
+ "Panel type %d not found in table of %d DTD's\n",
+- i915->vbt.panel_type, num_dtd);
++ panel->vbt.panel_type, num_dtd);
+ return;
+ }
+
+- dtd = &generic_dtd->dtd[i915->vbt.panel_type];
++ dtd = &generic_dtd->dtd[panel->vbt.panel_type];
+
+ panel_fixed_mode = kzalloc(sizeof(*panel_fixed_mode), GFP_KERNEL);
+ if (!panel_fixed_mode)
+@@ -881,15 +885,16 @@ parse_generic_dtd(struct drm_i915_private *i915)
+ "Found panel mode in BIOS VBT generic dtd table: " DRM_MODE_FMT "\n",
+ DRM_MODE_ARG(panel_fixed_mode));
+
+- i915->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
++ panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
+ }
+
+ static void
+-parse_lfp_backlight(struct drm_i915_private *i915)
++parse_lfp_backlight(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_lfp_backlight_data *backlight_data;
+ const struct lfp_backlight_data_entry *entry;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+ u16 level;
+
+ backlight_data = find_section(i915, BDB_LVDS_BACKLIGHT);
+@@ -905,15 +910,15 @@ parse_lfp_backlight(struct drm_i915_private *i915)
+
+ entry = &backlight_data->data[panel_type];
+
+- i915->vbt.backlight.present = entry->type == BDB_BACKLIGHT_TYPE_PWM;
+- if (!i915->vbt.backlight.present) {
++ panel->vbt.backlight.present = entry->type == BDB_BACKLIGHT_TYPE_PWM;
++ if (!panel->vbt.backlight.present) {
+ drm_dbg_kms(&i915->drm,
+ "PWM backlight not present in VBT (type %u)\n",
+ entry->type);
+ return;
+ }
+
+- i915->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
++ panel->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
+ if (i915->vbt.version >= 191) {
+ size_t exp_size;
+
+@@ -928,13 +933,13 @@ parse_lfp_backlight(struct drm_i915_private *i915)
+ const struct lfp_backlight_control_method *method;
+
+ method = &backlight_data->backlight_control[panel_type];
+- i915->vbt.backlight.type = method->type;
+- i915->vbt.backlight.controller = method->controller;
++ panel->vbt.backlight.type = method->type;
++ panel->vbt.backlight.controller = method->controller;
+ }
+ }
+
+- i915->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
+- i915->vbt.backlight.active_low_pwm = entry->active_low_pwm;
++ panel->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
++ panel->vbt.backlight.active_low_pwm = entry->active_low_pwm;
+
+ if (i915->vbt.version >= 234) {
+ u16 min_level;
+@@ -955,28 +960,29 @@ parse_lfp_backlight(struct drm_i915_private *i915)
+ drm_warn(&i915->drm, "Brightness min level > 255\n");
+ level = 255;
+ }
+- i915->vbt.backlight.min_brightness = min_level;
++ panel->vbt.backlight.min_brightness = min_level;
+
+- i915->vbt.backlight.brightness_precision_bits =
++ panel->vbt.backlight.brightness_precision_bits =
+ backlight_data->brightness_precision_bits[panel_type];
+ } else {
+ level = backlight_data->level[panel_type];
+- i915->vbt.backlight.min_brightness = entry->min_brightness;
++ panel->vbt.backlight.min_brightness = entry->min_brightness;
+ }
+
+ drm_dbg_kms(&i915->drm,
+ "VBT backlight PWM modulation frequency %u Hz, "
+ "active %s, min brightness %u, level %u, controller %u\n",
+- i915->vbt.backlight.pwm_freq_hz,
+- i915->vbt.backlight.active_low_pwm ? "low" : "high",
+- i915->vbt.backlight.min_brightness,
++ panel->vbt.backlight.pwm_freq_hz,
++ panel->vbt.backlight.active_low_pwm ? "low" : "high",
++ panel->vbt.backlight.min_brightness,
+ level,
+- i915->vbt.backlight.controller);
++ panel->vbt.backlight.controller);
+ }
+
+ /* Try to find sdvo panel data */
+ static void
+-parse_sdvo_panel_data(struct drm_i915_private *i915)
++parse_sdvo_panel_data(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_sdvo_panel_dtds *dtds;
+ struct drm_display_mode *panel_fixed_mode;
+@@ -1009,7 +1015,7 @@ parse_sdvo_panel_data(struct drm_i915_private *i915)
+
+ fill_detail_timing_data(panel_fixed_mode, &dtds->dtds[index]);
+
+- i915->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
++ panel->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
+
+ drm_dbg_kms(&i915->drm,
+ "Found SDVO panel mode in BIOS VBT tables: " DRM_MODE_FMT "\n",
+@@ -1188,6 +1194,17 @@ parse_driver_features(struct drm_i915_private *i915)
+ driver->lvds_config != BDB_DRIVER_FEATURE_INT_SDVO_LVDS)
+ i915->vbt.int_lvds_support = 0;
+ }
++}
++
++static void
++parse_panel_driver_features(struct drm_i915_private *i915,
++ struct intel_panel *panel)
++{
++ const struct bdb_driver_features *driver;
++
++ driver = find_section(i915, BDB_DRIVER_FEATURES);
++ if (!driver)
++ return;
+
+ if (i915->vbt.version < 228) {
+ drm_dbg_kms(&i915->drm, "DRRS State Enabled:%d\n",
+@@ -1199,17 +1216,18 @@ parse_driver_features(struct drm_i915_private *i915)
+ * driver->drrs_enabled=false
+ */
+ if (!driver->drrs_enabled)
+- i915->vbt.drrs_type = DRRS_TYPE_NONE;
++ panel->vbt.drrs_type = DRRS_TYPE_NONE;
+
+- i915->vbt.psr.enable = driver->psr_enabled;
++ panel->vbt.psr.enable = driver->psr_enabled;
+ }
+ }
+
+ static void
+-parse_power_conservation_features(struct drm_i915_private *i915)
++parse_power_conservation_features(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_lfp_power *power;
+- u8 panel_type = i915->vbt.panel_type;
++ u8 panel_type = panel->vbt.panel_type;
+
+ if (i915->vbt.version < 228)
+ return;
+@@ -1218,7 +1236,7 @@ parse_power_conservation_features(struct drm_i915_private *i915)
+ if (!power)
+ return;
+
+- i915->vbt.psr.enable = power->psr & BIT(panel_type);
++ panel->vbt.psr.enable = power->psr & BIT(panel_type);
+
+ /*
+ * If DRRS is not supported, drrs_type has to be set to 0.
+@@ -1227,19 +1245,20 @@ parse_power_conservation_features(struct drm_i915_private *i915)
+ * power->drrs & BIT(panel_type)=false
+ */
+ if (!(power->drrs & BIT(panel_type)))
+- i915->vbt.drrs_type = DRRS_TYPE_NONE;
++ panel->vbt.drrs_type = DRRS_TYPE_NONE;
+
+ if (i915->vbt.version >= 232)
+- i915->vbt.edp.hobl = power->hobl & BIT(panel_type);
++ panel->vbt.edp.hobl = power->hobl & BIT(panel_type);
+ }
+
+ static void
+-parse_edp(struct drm_i915_private *i915)
++parse_edp(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_edp *edp;
+ const struct edp_power_seq *edp_pps;
+ const struct edp_fast_link_params *edp_link_params;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+
+ edp = find_section(i915, BDB_EDP);
+ if (!edp)
+@@ -1247,13 +1266,13 @@ parse_edp(struct drm_i915_private *i915)
+
+ switch ((edp->color_depth >> (panel_type * 2)) & 3) {
+ case EDP_18BPP:
+- i915->vbt.edp.bpp = 18;
++ panel->vbt.edp.bpp = 18;
+ break;
+ case EDP_24BPP:
+- i915->vbt.edp.bpp = 24;
++ panel->vbt.edp.bpp = 24;
+ break;
+ case EDP_30BPP:
+- i915->vbt.edp.bpp = 30;
++ panel->vbt.edp.bpp = 30;
+ break;
+ }
+
+@@ -1261,14 +1280,14 @@ parse_edp(struct drm_i915_private *i915)
+ edp_pps = &edp->power_seqs[panel_type];
+ edp_link_params = &edp->fast_link_params[panel_type];
+
+- i915->vbt.edp.pps = *edp_pps;
++ panel->vbt.edp.pps = *edp_pps;
+
+ switch (edp_link_params->rate) {
+ case EDP_RATE_1_62:
+- i915->vbt.edp.rate = DP_LINK_BW_1_62;
++ panel->vbt.edp.rate = DP_LINK_BW_1_62;
+ break;
+ case EDP_RATE_2_7:
+- i915->vbt.edp.rate = DP_LINK_BW_2_7;
++ panel->vbt.edp.rate = DP_LINK_BW_2_7;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1279,13 +1298,13 @@ parse_edp(struct drm_i915_private *i915)
+
+ switch (edp_link_params->lanes) {
+ case EDP_LANE_1:
+- i915->vbt.edp.lanes = 1;
++ panel->vbt.edp.lanes = 1;
+ break;
+ case EDP_LANE_2:
+- i915->vbt.edp.lanes = 2;
++ panel->vbt.edp.lanes = 2;
+ break;
+ case EDP_LANE_4:
+- i915->vbt.edp.lanes = 4;
++ panel->vbt.edp.lanes = 4;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1296,16 +1315,16 @@ parse_edp(struct drm_i915_private *i915)
+
+ switch (edp_link_params->preemphasis) {
+ case EDP_PREEMPHASIS_NONE:
+- i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
++ panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
+ break;
+ case EDP_PREEMPHASIS_3_5dB:
+- i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
++ panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
+ break;
+ case EDP_PREEMPHASIS_6dB:
+- i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
++ panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
+ break;
+ case EDP_PREEMPHASIS_9_5dB:
+- i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
++ panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1316,16 +1335,16 @@ parse_edp(struct drm_i915_private *i915)
+
+ switch (edp_link_params->vswing) {
+ case EDP_VSWING_0_4V:
+- i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
++ panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
+ break;
+ case EDP_VSWING_0_6V:
+- i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
++ panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
+ break;
+ case EDP_VSWING_0_8V:
+- i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
++ panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
+ break;
+ case EDP_VSWING_1_2V:
+- i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
++ panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1339,24 +1358,25 @@ parse_edp(struct drm_i915_private *i915)
+
+ /* Don't read from VBT if module parameter has valid value*/
+ if (i915->params.edp_vswing) {
+- i915->vbt.edp.low_vswing =
++ panel->vbt.edp.low_vswing =
+ i915->params.edp_vswing == 1;
+ } else {
+ vswing = (edp->edp_vswing_preemph >> (panel_type * 4)) & 0xF;
+- i915->vbt.edp.low_vswing = vswing == 0;
++ panel->vbt.edp.low_vswing = vswing == 0;
+ }
+ }
+
+- i915->vbt.edp.drrs_msa_timing_delay =
++ panel->vbt.edp.drrs_msa_timing_delay =
+ (edp->sdrrs_msa_timing_delay >> (panel_type * 2)) & 3;
+ }
+
+ static void
+-parse_psr(struct drm_i915_private *i915)
++parse_psr(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_psr *psr;
+ const struct psr_table *psr_table;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+
+ psr = find_section(i915, BDB_PSR);
+ if (!psr) {
+@@ -1366,11 +1386,11 @@ parse_psr(struct drm_i915_private *i915)
+
+ psr_table = &psr->psr_table[panel_type];
+
+- i915->vbt.psr.full_link = psr_table->full_link;
+- i915->vbt.psr.require_aux_wakeup = psr_table->require_aux_to_wakeup;
++ panel->vbt.psr.full_link = psr_table->full_link;
++ panel->vbt.psr.require_aux_wakeup = psr_table->require_aux_to_wakeup;
+
+ /* Allowed VBT values goes from 0 to 15 */
+- i915->vbt.psr.idle_frames = psr_table->idle_frames < 0 ? 0 :
++ panel->vbt.psr.idle_frames = psr_table->idle_frames < 0 ? 0 :
+ psr_table->idle_frames > 15 ? 15 : psr_table->idle_frames;
+
+ /*
+@@ -1381,13 +1401,13 @@ parse_psr(struct drm_i915_private *i915)
+ (DISPLAY_VER(i915) >= 9 && !IS_BROXTON(i915))) {
+ switch (psr_table->tp1_wakeup_time) {
+ case 0:
+- i915->vbt.psr.tp1_wakeup_time_us = 500;
++ panel->vbt.psr.tp1_wakeup_time_us = 500;
+ break;
+ case 1:
+- i915->vbt.psr.tp1_wakeup_time_us = 100;
++ panel->vbt.psr.tp1_wakeup_time_us = 100;
+ break;
+ case 3:
+- i915->vbt.psr.tp1_wakeup_time_us = 0;
++ panel->vbt.psr.tp1_wakeup_time_us = 0;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1395,19 +1415,19 @@ parse_psr(struct drm_i915_private *i915)
+ psr_table->tp1_wakeup_time);
+ fallthrough;
+ case 2:
+- i915->vbt.psr.tp1_wakeup_time_us = 2500;
++ panel->vbt.psr.tp1_wakeup_time_us = 2500;
+ break;
+ }
+
+ switch (psr_table->tp2_tp3_wakeup_time) {
+ case 0:
+- i915->vbt.psr.tp2_tp3_wakeup_time_us = 500;
++ panel->vbt.psr.tp2_tp3_wakeup_time_us = 500;
+ break;
+ case 1:
+- i915->vbt.psr.tp2_tp3_wakeup_time_us = 100;
++ panel->vbt.psr.tp2_tp3_wakeup_time_us = 100;
+ break;
+ case 3:
+- i915->vbt.psr.tp2_tp3_wakeup_time_us = 0;
++ panel->vbt.psr.tp2_tp3_wakeup_time_us = 0;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1415,12 +1435,12 @@ parse_psr(struct drm_i915_private *i915)
+ psr_table->tp2_tp3_wakeup_time);
+ fallthrough;
+ case 2:
+- i915->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
++ panel->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
+ break;
+ }
+ } else {
+- i915->vbt.psr.tp1_wakeup_time_us = psr_table->tp1_wakeup_time * 100;
+- i915->vbt.psr.tp2_tp3_wakeup_time_us = psr_table->tp2_tp3_wakeup_time * 100;
++ panel->vbt.psr.tp1_wakeup_time_us = psr_table->tp1_wakeup_time * 100;
++ panel->vbt.psr.tp2_tp3_wakeup_time_us = psr_table->tp2_tp3_wakeup_time * 100;
+ }
+
+ if (i915->vbt.version >= 226) {
+@@ -1442,62 +1462,66 @@ parse_psr(struct drm_i915_private *i915)
+ wakeup_time = 2500;
+ break;
+ }
+- i915->vbt.psr.psr2_tp2_tp3_wakeup_time_us = wakeup_time;
++ panel->vbt.psr.psr2_tp2_tp3_wakeup_time_us = wakeup_time;
+ } else {
+ /* Reusing PSR1 wakeup time for PSR2 in older VBTs */
+- i915->vbt.psr.psr2_tp2_tp3_wakeup_time_us = i915->vbt.psr.tp2_tp3_wakeup_time_us;
++ panel->vbt.psr.psr2_tp2_tp3_wakeup_time_us = panel->vbt.psr.tp2_tp3_wakeup_time_us;
+ }
+ }
+
+ static void parse_dsi_backlight_ports(struct drm_i915_private *i915,
+- u16 version, enum port port)
++ struct intel_panel *panel,
++ enum port port)
+ {
+- if (!i915->vbt.dsi.config->dual_link || version < 197) {
+- i915->vbt.dsi.bl_ports = BIT(port);
+- if (i915->vbt.dsi.config->cabc_supported)
+- i915->vbt.dsi.cabc_ports = BIT(port);
++ enum port port_bc = DISPLAY_VER(i915) >= 11 ? PORT_B : PORT_C;
++
++ if (!panel->vbt.dsi.config->dual_link || i915->vbt.version < 197) {
++ panel->vbt.dsi.bl_ports = BIT(port);
++ if (panel->vbt.dsi.config->cabc_supported)
++ panel->vbt.dsi.cabc_ports = BIT(port);
+
+ return;
+ }
+
+- switch (i915->vbt.dsi.config->dl_dcs_backlight_ports) {
++ switch (panel->vbt.dsi.config->dl_dcs_backlight_ports) {
+ case DL_DCS_PORT_A:
+- i915->vbt.dsi.bl_ports = BIT(PORT_A);
++ panel->vbt.dsi.bl_ports = BIT(PORT_A);
+ break;
+ case DL_DCS_PORT_C:
+- i915->vbt.dsi.bl_ports = BIT(PORT_C);
++ panel->vbt.dsi.bl_ports = BIT(port_bc);
+ break;
+ default:
+ case DL_DCS_PORT_A_AND_C:
+- i915->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(PORT_C);
++ panel->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(port_bc);
+ break;
+ }
+
+- if (!i915->vbt.dsi.config->cabc_supported)
++ if (!panel->vbt.dsi.config->cabc_supported)
+ return;
+
+- switch (i915->vbt.dsi.config->dl_dcs_cabc_ports) {
++ switch (panel->vbt.dsi.config->dl_dcs_cabc_ports) {
+ case DL_DCS_PORT_A:
+- i915->vbt.dsi.cabc_ports = BIT(PORT_A);
++ panel->vbt.dsi.cabc_ports = BIT(PORT_A);
+ break;
+ case DL_DCS_PORT_C:
+- i915->vbt.dsi.cabc_ports = BIT(PORT_C);
++ panel->vbt.dsi.cabc_ports = BIT(port_bc);
+ break;
+ default:
+ case DL_DCS_PORT_A_AND_C:
+- i915->vbt.dsi.cabc_ports =
+- BIT(PORT_A) | BIT(PORT_C);
++ panel->vbt.dsi.cabc_ports =
++ BIT(PORT_A) | BIT(port_bc);
+ break;
+ }
+ }
+
+ static void
+-parse_mipi_config(struct drm_i915_private *i915)
++parse_mipi_config(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ const struct bdb_mipi_config *start;
+ const struct mipi_config *config;
+ const struct mipi_pps_data *pps;
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+ enum port port;
+
+ /* parse MIPI blocks only if LFP type is MIPI */
+@@ -1505,7 +1529,7 @@ parse_mipi_config(struct drm_i915_private *i915)
+ return;
+
+ /* Initialize this to undefined indicating no generic MIPI support */
+- i915->vbt.dsi.panel_id = MIPI_DSI_UNDEFINED_PANEL_ID;
++ panel->vbt.dsi.panel_id = MIPI_DSI_UNDEFINED_PANEL_ID;
+
+ /* Block #40 is already parsed and panel_fixed_mode is
+ * stored in i915->lfp_lvds_vbt_mode
+@@ -1532,17 +1556,17 @@ parse_mipi_config(struct drm_i915_private *i915)
+ pps = &start->pps[panel_type];
+
+ /* store as of now full data. Trim when we realise all is not needed */
+- i915->vbt.dsi.config = kmemdup(config, sizeof(struct mipi_config), GFP_KERNEL);
+- if (!i915->vbt.dsi.config)
++ panel->vbt.dsi.config = kmemdup(config, sizeof(struct mipi_config), GFP_KERNEL);
++ if (!panel->vbt.dsi.config)
+ return;
+
+- i915->vbt.dsi.pps = kmemdup(pps, sizeof(struct mipi_pps_data), GFP_KERNEL);
+- if (!i915->vbt.dsi.pps) {
+- kfree(i915->vbt.dsi.config);
++ panel->vbt.dsi.pps = kmemdup(pps, sizeof(struct mipi_pps_data), GFP_KERNEL);
++ if (!panel->vbt.dsi.pps) {
++ kfree(panel->vbt.dsi.config);
+ return;
+ }
+
+- parse_dsi_backlight_ports(i915, i915->vbt.version, port);
++ parse_dsi_backlight_ports(i915, panel, port);
+
+ /* FIXME is the 90 vs. 270 correct? */
+ switch (config->rotation) {
+@@ -1551,25 +1575,25 @@ parse_mipi_config(struct drm_i915_private *i915)
+ * Most (all?) VBTs claim 0 degrees despite having
+ * an upside down panel, thus we do not trust this.
+ */
+- i915->vbt.dsi.orientation =
++ panel->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_UNKNOWN;
+ break;
+ case ENABLE_ROTATION_90:
+- i915->vbt.dsi.orientation =
++ panel->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_RIGHT_UP;
+ break;
+ case ENABLE_ROTATION_180:
+- i915->vbt.dsi.orientation =
++ panel->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP;
+ break;
+ case ENABLE_ROTATION_270:
+- i915->vbt.dsi.orientation =
++ panel->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_LEFT_UP;
+ break;
+ }
+
+ /* We have mandatory mipi config blocks. Initialize as generic panel */
+- i915->vbt.dsi.panel_id = MIPI_DSI_GENERIC_PANEL_ID;
++ panel->vbt.dsi.panel_id = MIPI_DSI_GENERIC_PANEL_ID;
+ }
+
+ /* Find the sequence block and size for the given panel. */
+@@ -1732,13 +1756,14 @@ static int goto_next_sequence_v3(const u8 *data, int index, int total)
+ * Get len of pre-fixed deassert fragment from a v1 init OTP sequence,
+ * skip all delay + gpio operands and stop at the first DSI packet op.
+ */
+-static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915)
++static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+- const u8 *data = i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
++ const u8 *data = panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
+ int index, len;
+
+ if (drm_WARN_ON(&i915->drm,
+- !data || i915->vbt.dsi.seq_version != 1))
++ !data || panel->vbt.dsi.seq_version != 1))
+ return 0;
+
+ /* index = 1 to skip sequence byte */
+@@ -1766,7 +1791,8 @@ static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915)
+ * these devices we split the init OTP sequence into a deassert sequence and
+ * the actual init OTP part.
+ */
+-static void fixup_mipi_sequences(struct drm_i915_private *i915)
++static void fixup_mipi_sequences(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+ u8 *init_otp;
+ int len;
+@@ -1776,18 +1802,18 @@ static void fixup_mipi_sequences(struct drm_i915_private *i915)
+ return;
+
+ /* Limit this to v1 vid-mode sequences */
+- if (i915->vbt.dsi.config->is_cmd_mode ||
+- i915->vbt.dsi.seq_version != 1)
++ if (panel->vbt.dsi.config->is_cmd_mode ||
++ panel->vbt.dsi.seq_version != 1)
+ return;
+
+ /* Only do this if there are otp and assert seqs and no deassert seq */
+- if (!i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] ||
+- !i915->vbt.dsi.sequence[MIPI_SEQ_ASSERT_RESET] ||
+- i915->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET])
++ if (!panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] ||
++ !panel->vbt.dsi.sequence[MIPI_SEQ_ASSERT_RESET] ||
++ panel->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET])
+ return;
+
+ /* The deassert-sequence ends at the first DSI packet */
+- len = get_init_otp_deassert_fragment_len(i915);
++ len = get_init_otp_deassert_fragment_len(i915, panel);
+ if (!len)
+ return;
+
+@@ -1795,25 +1821,26 @@ static void fixup_mipi_sequences(struct drm_i915_private *i915)
+ "Using init OTP fragment to deassert reset\n");
+
+ /* Copy the fragment, update seq byte and terminate it */
+- init_otp = (u8 *)i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
+- i915->vbt.dsi.deassert_seq = kmemdup(init_otp, len + 1, GFP_KERNEL);
+- if (!i915->vbt.dsi.deassert_seq)
++ init_otp = (u8 *)panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
++ panel->vbt.dsi.deassert_seq = kmemdup(init_otp, len + 1, GFP_KERNEL);
++ if (!panel->vbt.dsi.deassert_seq)
+ return;
+- i915->vbt.dsi.deassert_seq[0] = MIPI_SEQ_DEASSERT_RESET;
+- i915->vbt.dsi.deassert_seq[len] = MIPI_SEQ_ELEM_END;
++ panel->vbt.dsi.deassert_seq[0] = MIPI_SEQ_DEASSERT_RESET;
++ panel->vbt.dsi.deassert_seq[len] = MIPI_SEQ_ELEM_END;
+ /* Use the copy for deassert */
+- i915->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET] =
+- i915->vbt.dsi.deassert_seq;
++ panel->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET] =
++ panel->vbt.dsi.deassert_seq;
+ /* Replace the last byte of the fragment with init OTP seq byte */
+ init_otp[len - 1] = MIPI_SEQ_INIT_OTP;
+ /* And make MIPI_MIPI_SEQ_INIT_OTP point to it */
+- i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1;
++ panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1;
+ }
+
+ static void
+-parse_mipi_sequence(struct drm_i915_private *i915)
++parse_mipi_sequence(struct drm_i915_private *i915,
++ struct intel_panel *panel)
+ {
+- int panel_type = i915->vbt.panel_type;
++ int panel_type = panel->vbt.panel_type;
+ const struct bdb_mipi_sequence *sequence;
+ const u8 *seq_data;
+ u32 seq_size;
+@@ -1821,7 +1848,7 @@ parse_mipi_sequence(struct drm_i915_private *i915)
+ int index = 0;
+
+ /* Only our generic panel driver uses the sequence block. */
+- if (i915->vbt.dsi.panel_id != MIPI_DSI_GENERIC_PANEL_ID)
++ if (panel->vbt.dsi.panel_id != MIPI_DSI_GENERIC_PANEL_ID)
+ return;
+
+ sequence = find_section(i915, BDB_MIPI_SEQUENCE);
+@@ -1867,7 +1894,7 @@ parse_mipi_sequence(struct drm_i915_private *i915)
+ drm_dbg_kms(&i915->drm,
+ "Unsupported sequence %u\n", seq_id);
+
+- i915->vbt.dsi.sequence[seq_id] = data + index;
++ panel->vbt.dsi.sequence[seq_id] = data + index;
+
+ if (sequence->version >= 3)
+ index = goto_next_sequence_v3(data, index, seq_size);
+@@ -1880,18 +1907,18 @@ parse_mipi_sequence(struct drm_i915_private *i915)
+ }
+ }
+
+- i915->vbt.dsi.data = data;
+- i915->vbt.dsi.size = seq_size;
+- i915->vbt.dsi.seq_version = sequence->version;
++ panel->vbt.dsi.data = data;
++ panel->vbt.dsi.size = seq_size;
++ panel->vbt.dsi.seq_version = sequence->version;
+
+- fixup_mipi_sequences(i915);
++ fixup_mipi_sequences(i915, panel);
+
+ drm_dbg(&i915->drm, "MIPI related VBT parsing complete\n");
+ return;
+
+ err:
+ kfree(data);
+- memset(i915->vbt.dsi.sequence, 0, sizeof(i915->vbt.dsi.sequence));
++ memset(panel->vbt.dsi.sequence, 0, sizeof(panel->vbt.dsi.sequence));
+ }
+
+ static void
+@@ -2645,15 +2672,6 @@ init_vbt_defaults(struct drm_i915_private *i915)
+ {
+ i915->vbt.crt_ddc_pin = GMBUS_PIN_VGADDC;
+
+- /* Default to having backlight */
+- i915->vbt.backlight.present = true;
+-
+- /* LFP panel data */
+- i915->vbt.lvds_dither = 1;
+-
+- /* SDVO panel data */
+- i915->vbt.sdvo_lvds_vbt_mode = NULL;
+-
+ /* general features */
+ i915->vbt.int_tv_support = 1;
+ i915->vbt.int_crt_support = 1;
+@@ -2673,6 +2691,17 @@ init_vbt_defaults(struct drm_i915_private *i915)
+ i915->vbt.lvds_ssc_freq);
+ }
+
++/* Common defaults which may be overridden by VBT. */
++static void
++init_vbt_panel_defaults(struct intel_panel *panel)
++{
++ /* Default to having backlight */
++ panel->vbt.backlight.present = true;
++
++ /* LFP panel data */
++ panel->vbt.lvds_dither = true;
++}
++
+ /* Defaults to initialize only if there is no VBT. */
+ static void
+ init_vbt_missing_defaults(struct drm_i915_private *i915)
+@@ -2959,17 +2988,7 @@ void intel_bios_init(struct drm_i915_private *i915)
+ /* Grab useful general definitions */
+ parse_general_features(i915);
+ parse_general_definitions(i915);
+- parse_panel_options(i915);
+- parse_generic_dtd(i915);
+- parse_lfp_data(i915);
+- parse_lfp_backlight(i915);
+- parse_sdvo_panel_data(i915);
+ parse_driver_features(i915);
+- parse_power_conservation_features(i915);
+- parse_edp(i915);
+- parse_psr(i915);
+- parse_mipi_config(i915);
+- parse_mipi_sequence(i915);
+
+ /* Depends on child device list */
+ parse_compression_parameters(i915);
+@@ -2988,6 +3007,24 @@ out:
+ kfree(oprom_vbt);
+ }
+
++void intel_bios_init_panel(struct drm_i915_private *i915,
++ struct intel_panel *panel)
++{
++ init_vbt_panel_defaults(panel);
++
++ parse_panel_options(i915, panel);
++ parse_generic_dtd(i915, panel);
++ parse_lfp_data(i915, panel);
++ parse_lfp_backlight(i915, panel);
++ parse_sdvo_panel_data(i915, panel);
++ parse_panel_driver_features(i915, panel);
++ parse_power_conservation_features(i915, panel);
++ parse_edp(i915, panel);
++ parse_psr(i915, panel);
++ parse_mipi_config(i915, panel);
++ parse_mipi_sequence(i915, panel);
++}
++
+ /**
+ * intel_bios_driver_remove - Free any resources allocated by intel_bios_init()
+ * @i915: i915 device instance
+@@ -3007,19 +3044,22 @@ void intel_bios_driver_remove(struct drm_i915_private *i915)
+ list_del(&entry->node);
+ kfree(entry);
+ }
++}
+
+- kfree(i915->vbt.sdvo_lvds_vbt_mode);
+- i915->vbt.sdvo_lvds_vbt_mode = NULL;
+- kfree(i915->vbt.lfp_lvds_vbt_mode);
+- i915->vbt.lfp_lvds_vbt_mode = NULL;
+- kfree(i915->vbt.dsi.data);
+- i915->vbt.dsi.data = NULL;
+- kfree(i915->vbt.dsi.pps);
+- i915->vbt.dsi.pps = NULL;
+- kfree(i915->vbt.dsi.config);
+- i915->vbt.dsi.config = NULL;
+- kfree(i915->vbt.dsi.deassert_seq);
+- i915->vbt.dsi.deassert_seq = NULL;
++void intel_bios_fini_panel(struct intel_panel *panel)
++{
++ kfree(panel->vbt.sdvo_lvds_vbt_mode);
++ panel->vbt.sdvo_lvds_vbt_mode = NULL;
++ kfree(panel->vbt.lfp_lvds_vbt_mode);
++ panel->vbt.lfp_lvds_vbt_mode = NULL;
++ kfree(panel->vbt.dsi.data);
++ panel->vbt.dsi.data = NULL;
++ kfree(panel->vbt.dsi.pps);
++ panel->vbt.dsi.pps = NULL;
++ kfree(panel->vbt.dsi.config);
++ panel->vbt.dsi.config = NULL;
++ kfree(panel->vbt.dsi.deassert_seq);
++ panel->vbt.dsi.deassert_seq = NULL;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.h b/drivers/gpu/drm/i915/display/intel_bios.h
+index 4709c4d298059..86129f015718d 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.h
++++ b/drivers/gpu/drm/i915/display/intel_bios.h
+@@ -36,6 +36,7 @@ struct drm_i915_private;
+ struct intel_bios_encoder_data;
+ struct intel_crtc_state;
+ struct intel_encoder;
++struct intel_panel;
+ enum port;
+
+ enum intel_backlight_type {
+@@ -230,6 +231,9 @@ struct mipi_pps_data {
+ } __packed;
+
+ void intel_bios_init(struct drm_i915_private *dev_priv);
++void intel_bios_init_panel(struct drm_i915_private *dev_priv,
++ struct intel_panel *panel);
++void intel_bios_fini_panel(struct intel_panel *panel);
+ void intel_bios_driver_remove(struct drm_i915_private *dev_priv);
+ bool intel_bios_is_valid_vbt(const void *buf, size_t size);
+ bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 9e6fa59eabba7..333871cf3a2c5 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3433,26 +3433,8 @@ static void intel_ddi_get_config(struct intel_encoder *encoder,
+ pipe_config->has_audio =
+ intel_ddi_is_audio_enabled(dev_priv, cpu_transcoder);
+
+- if (encoder->type == INTEL_OUTPUT_EDP && dev_priv->vbt.edp.bpp &&
+- pipe_config->pipe_bpp > dev_priv->vbt.edp.bpp) {
+- /*
+- * This is a big fat ugly hack.
+- *
+- * Some machines in UEFI boot mode provide us a VBT that has 18
+- * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
+- * unknown we fail to light up. Yet the same BIOS boots up with
+- * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
+- * max, not what it tells us to use.
+- *
+- * Note: This will still be broken if the eDP panel is not lit
+- * up by the BIOS, and thus we can't get the mode at module
+- * load.
+- */
+- drm_dbg_kms(&dev_priv->drm,
+- "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+- pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
+- dev_priv->vbt.edp.bpp = pipe_config->pipe_bpp;
+- }
++ if (encoder->type == INTEL_OUTPUT_EDP)
++ intel_edp_fixup_vbt_bpp(encoder, pipe_config->pipe_bpp);
+
+ ddi_dotclock_get(pipe_config);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+index 85f58dd3df722..b490acd0ab691 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+@@ -1062,17 +1062,18 @@ bool is_hobl_buf_trans(const struct intel_ddi_buf_trans *table)
+
+ static bool use_edp_hobl(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+ struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
++ struct intel_connector *connector = intel_dp->attached_connector;
+
+- return i915->vbt.edp.hobl && !intel_dp->hobl_failed;
++ return connector->panel.vbt.edp.hobl && !intel_dp->hobl_failed;
+ }
+
+ static bool use_edp_low_vswing(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++ struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
++ struct intel_connector *connector = intel_dp->attached_connector;
+
+- return i915->vbt.edp.low_vswing;
++ return connector->panel.vbt.edp.low_vswing;
+ }
+
+ static const struct intel_ddi_buf_trans *
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index 408152f9f46a4..e2561c5d4953c 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -279,6 +279,73 @@ struct intel_panel_bl_funcs {
+ u32 (*hz_to_pwm)(struct intel_connector *connector, u32 hz);
+ };
+
++enum drrs_type {
++ DRRS_TYPE_NONE,
++ DRRS_TYPE_STATIC,
++ DRRS_TYPE_SEAMLESS,
++};
++
++struct intel_vbt_panel_data {
++ struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
++ struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
++
++ /* Feature bits */
++ unsigned int panel_type:4;
++ unsigned int lvds_dither:1;
++ unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
++
++ u8 seamless_drrs_min_refresh_rate;
++ enum drrs_type drrs_type;
++
++ struct {
++ int rate;
++ int lanes;
++ int preemphasis;
++ int vswing;
++ int bpp;
++ struct edp_power_seq pps;
++ u8 drrs_msa_timing_delay;
++ bool low_vswing;
++ bool initialized;
++ bool hobl;
++ } edp;
++
++ struct {
++ bool enable;
++ bool full_link;
++ bool require_aux_wakeup;
++ int idle_frames;
++ int tp1_wakeup_time_us;
++ int tp2_tp3_wakeup_time_us;
++ int psr2_tp2_tp3_wakeup_time_us;
++ } psr;
++
++ struct {
++ u16 pwm_freq_hz;
++ u16 brightness_precision_bits;
++ bool present;
++ bool active_low_pwm;
++ u8 min_brightness; /* min_brightness/255 of max */
++ u8 controller; /* brightness controller number */
++ enum intel_backlight_type type;
++ } backlight;
++
++ /* MIPI DSI */
++ struct {
++ u16 panel_id;
++ struct mipi_config *config;
++ struct mipi_pps_data *pps;
++ u16 bl_ports;
++ u16 cabc_ports;
++ u8 seq_version;
++ u32 size;
++ u8 *data;
++ const u8 *sequence[MIPI_SEQ_MAX];
++ u8 *deassert_seq; /* Used by fixup_mipi_sequences() */
++ enum drm_panel_orientation orientation;
++ } dsi;
++};
++
+ struct intel_panel {
+ struct list_head fixed_modes;
+
+@@ -318,6 +385,8 @@ struct intel_panel {
+ const struct intel_panel_bl_funcs *pwm_funcs;
+ void (*power)(struct intel_connector *, bool enable);
+ } backlight;
++
++ struct intel_vbt_panel_data vbt;
+ };
+
+ struct intel_digital_port;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index fe8b6b72970a2..0efec6023fbe8 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1246,11 +1246,12 @@ static int intel_dp_max_bpp(struct intel_dp *intel_dp,
+ if (intel_dp_is_edp(intel_dp)) {
+ /* Get bpp from vbt only for panels that dont have bpp in edid */
+ if (intel_connector->base.display_info.bpc == 0 &&
+- dev_priv->vbt.edp.bpp && dev_priv->vbt.edp.bpp < bpp) {
++ intel_connector->panel.vbt.edp.bpp &&
++ intel_connector->panel.vbt.edp.bpp < bpp) {
+ drm_dbg_kms(&dev_priv->drm,
+ "clamping bpp for eDP panel to BIOS-provided %i\n",
+- dev_priv->vbt.edp.bpp);
+- bpp = dev_priv->vbt.edp.bpp;
++ intel_connector->panel.vbt.edp.bpp);
++ bpp = intel_connector->panel.vbt.edp.bpp;
+ }
+ }
+
+@@ -1907,7 +1908,7 @@ intel_dp_drrs_compute_config(struct intel_connector *connector,
+ }
+
+ if (IS_IRONLAKE(i915) || IS_SANDYBRIDGE(i915) || IS_IVYBRIDGE(i915))
+- pipe_config->msa_timing_delay = i915->vbt.edp.drrs_msa_timing_delay;
++ pipe_config->msa_timing_delay = connector->panel.vbt.edp.drrs_msa_timing_delay;
+
+ pipe_config->has_drrs = true;
+
+@@ -2737,6 +2738,33 @@ static void intel_edp_mso_mode_fixup(struct intel_connector *connector,
+ DRM_MODE_ARG(mode));
+ }
+
++void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp)
++{
++ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
++ struct intel_connector *connector = intel_dp->attached_connector;
++
++ if (connector->panel.vbt.edp.bpp && pipe_bpp > connector->panel.vbt.edp.bpp) {
++ /*
++ * This is a big fat ugly hack.
++ *
++ * Some machines in UEFI boot mode provide us a VBT that has 18
++ * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
++ * unknown we fail to light up. Yet the same BIOS boots up with
++ * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
++ * max, not what it tells us to use.
++ *
++ * Note: This will still be broken if the eDP panel is not lit
++ * up by the BIOS, and thus we can't get the mode at module
++ * load.
++ */
++ drm_dbg_kms(&dev_priv->drm,
++ "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
++ pipe_bpp, connector->panel.vbt.edp.bpp);
++ connector->panel.vbt.edp.bpp = pipe_bpp;
++ }
++}
++
+ static void intel_edp_mso_init(struct intel_dp *intel_dp)
+ {
+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+@@ -5212,8 +5240,10 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
+ }
+ intel_connector->edid = edid;
+
++ intel_bios_init_panel(dev_priv, &intel_connector->panel);
++
+ intel_panel_add_edid_fixed_modes(intel_connector,
+- dev_priv->vbt.drrs_type != DRRS_TYPE_NONE);
++ intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
+
+ /* MSO requires information from the EDID */
+ intel_edp_mso_init(intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index d457e17bdc57e..a54902c713a34 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -29,6 +29,7 @@ struct link_config_limits {
+ int min_bpp, max_bpp;
+ };
+
++void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp);
+ void intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
+ struct intel_crtc_state *pipe_config,
+ struct link_config_limits *limits);
+@@ -63,6 +64,7 @@ enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *dig_port,
+ void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state,
+ const struct drm_connector_state *conn_state);
+ void intel_edp_backlight_off(const struct drm_connector_state *conn_state);
++void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp);
+ void intel_dp_mst_suspend(struct drm_i915_private *dev_priv);
+ void intel_dp_mst_resume(struct drm_i915_private *dev_priv);
+ int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+index fb6cf30ee6281..c92d5bb2326a3 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+@@ -370,7 +370,7 @@ static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector,
+ int ret;
+
+ ret = drm_edp_backlight_init(&intel_dp->aux, &panel->backlight.edp.vesa.info,
+- i915->vbt.backlight.pwm_freq_hz, intel_dp->edp_dpcd,
++ panel->vbt.backlight.pwm_freq_hz, intel_dp->edp_dpcd,
+ &current_level, &current_mode);
+ if (ret < 0)
+ return ret;
+@@ -454,7 +454,7 @@ int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector)
+ case INTEL_DP_AUX_BACKLIGHT_OFF:
+ return -ENODEV;
+ case INTEL_DP_AUX_BACKLIGHT_AUTO:
+- switch (i915->vbt.backlight.type) {
++ switch (panel->vbt.backlight.type) {
+ case INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE:
+ try_vesa_interface = true;
+ break;
+@@ -466,7 +466,7 @@ int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector)
+ }
+ break;
+ case INTEL_DP_AUX_BACKLIGHT_ON:
+- if (i915->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE)
++ if (panel->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE)
+ try_intel_interface = true;
+
+ try_vesa_interface = true;
+diff --git a/drivers/gpu/drm/i915/display/intel_drrs.c b/drivers/gpu/drm/i915/display/intel_drrs.c
+index 166caf293f7bc..7da4a9cbe4ba4 100644
+--- a/drivers/gpu/drm/i915/display/intel_drrs.c
++++ b/drivers/gpu/drm/i915/display/intel_drrs.c
+@@ -217,9 +217,6 @@ static void intel_drrs_frontbuffer_update(struct drm_i915_private *dev_priv,
+ {
+ struct intel_crtc *crtc;
+
+- if (dev_priv->vbt.drrs_type != DRRS_TYPE_SEAMLESS)
+- return;
+-
+ for_each_intel_crtc(&dev_priv->drm, crtc) {
+ unsigned int frontbuffer_bits;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi.c b/drivers/gpu/drm/i915/display/intel_dsi.c
+index 389a8c24cdc1e..35e121cd226c5 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi.c
+@@ -102,7 +102,7 @@ intel_dsi_get_panel_orientation(struct intel_connector *connector)
+ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ enum drm_panel_orientation orientation;
+
+- orientation = dev_priv->vbt.dsi.orientation;
++ orientation = connector->panel.vbt.dsi.orientation;
+ if (orientation != DRM_MODE_PANEL_ORIENTATION_UNKNOWN)
+ return orientation;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
+index 7d234429e71ef..1bc7118c56a2a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
+@@ -160,12 +160,10 @@ static void dcs_enable_backlight(const struct intel_crtc_state *crtc_state,
+ static int dcs_setup_backlight(struct intel_connector *connector,
+ enum pipe unused)
+ {
+- struct drm_device *dev = connector->base.dev;
+- struct drm_i915_private *dev_priv = to_i915(dev);
+ struct intel_panel *panel = &connector->panel;
+
+- if (dev_priv->vbt.backlight.brightness_precision_bits > 8)
+- panel->backlight.max = (1 << dev_priv->vbt.backlight.brightness_precision_bits) - 1;
++ if (panel->vbt.backlight.brightness_precision_bits > 8)
++ panel->backlight.max = (1 << panel->vbt.backlight.brightness_precision_bits) - 1;
+ else
+ panel->backlight.max = PANEL_PWM_MAX_VALUE;
+
+@@ -185,11 +183,10 @@ static const struct intel_panel_bl_funcs dcs_bl_funcs = {
+ int intel_dsi_dcs_init_backlight_funcs(struct intel_connector *intel_connector)
+ {
+ struct drm_device *dev = intel_connector->base.dev;
+- struct drm_i915_private *dev_priv = to_i915(dev);
+ struct intel_encoder *encoder = intel_attached_encoder(intel_connector);
+ struct intel_panel *panel = &intel_connector->panel;
+
+- if (dev_priv->vbt.backlight.type != INTEL_BACKLIGHT_DSI_DCS)
++ if (panel->vbt.backlight.type != INTEL_BACKLIGHT_DSI_DCS)
+ return -ENODEV;
+
+ if (drm_WARN_ON(dev, encoder->type != INTEL_OUTPUT_DSI))
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index dd24aef925f2e..75e8cc4337c93 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -240,9 +240,10 @@ static const u8 *mipi_exec_delay(struct intel_dsi *intel_dsi, const u8 *data)
+ return data;
+ }
+
+-static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
++static void vlv_exec_gpio(struct intel_connector *connector,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
++ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ struct gpio_map *map;
+ u16 pconf0, padval;
+ u32 tmp;
+@@ -256,7 +257,7 @@ static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
+
+ map = &vlv_gpio_table[gpio_index];
+
+- if (dev_priv->vbt.dsi.seq_version >= 3) {
++ if (connector->panel.vbt.dsi.seq_version >= 3) {
+ /* XXX: this assumes vlv_gpio_table only has NC GPIOs. */
+ port = IOSF_PORT_GPIO_NC;
+ } else {
+@@ -287,14 +288,15 @@ static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
+ vlv_iosf_sb_put(dev_priv, BIT(VLV_IOSF_SB_GPIO));
+ }
+
+-static void chv_exec_gpio(struct drm_i915_private *dev_priv,
++static void chv_exec_gpio(struct intel_connector *connector,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
++ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ u16 cfg0, cfg1;
+ u16 family_num;
+ u8 port;
+
+- if (dev_priv->vbt.dsi.seq_version >= 3) {
++ if (connector->panel.vbt.dsi.seq_version >= 3) {
+ if (gpio_index >= CHV_GPIO_IDX_START_SE) {
+ /* XXX: it's unclear whether 255->57 is part of SE. */
+ gpio_index -= CHV_GPIO_IDX_START_SE;
+@@ -340,9 +342,10 @@ static void chv_exec_gpio(struct drm_i915_private *dev_priv,
+ vlv_iosf_sb_put(dev_priv, BIT(VLV_IOSF_SB_GPIO));
+ }
+
+-static void bxt_exec_gpio(struct drm_i915_private *dev_priv,
++static void bxt_exec_gpio(struct intel_connector *connector,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
++ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ /* XXX: this table is a quick ugly hack. */
+ static struct gpio_desc *bxt_gpio_table[U8_MAX + 1];
+ struct gpio_desc *gpio_desc = bxt_gpio_table[gpio_index];
+@@ -366,9 +369,11 @@ static void bxt_exec_gpio(struct drm_i915_private *dev_priv,
+ gpiod_set_value(gpio_desc, value);
+ }
+
+-static void icl_exec_gpio(struct drm_i915_private *dev_priv,
++static void icl_exec_gpio(struct intel_connector *connector,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
++ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
++
+ drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n");
+ }
+
+@@ -376,18 +381,19 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
++ struct intel_connector *connector = intel_dsi->attached_connector;
+ u8 gpio_source, gpio_index = 0, gpio_number;
+ bool value;
+
+ drm_dbg_kms(&dev_priv->drm, "\n");
+
+- if (dev_priv->vbt.dsi.seq_version >= 3)
++ if (connector->panel.vbt.dsi.seq_version >= 3)
+ gpio_index = *data++;
+
+ gpio_number = *data++;
+
+ /* gpio source in sequence v2 only */
+- if (dev_priv->vbt.dsi.seq_version == 2)
++ if (connector->panel.vbt.dsi.seq_version == 2)
+ gpio_source = (*data >> 1) & 3;
+ else
+ gpio_source = 0;
+@@ -396,13 +402,13 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
+ value = *data++ & 1;
+
+ if (DISPLAY_VER(dev_priv) >= 11)
+- icl_exec_gpio(dev_priv, gpio_source, gpio_index, value);
++ icl_exec_gpio(connector, gpio_source, gpio_index, value);
+ else if (IS_VALLEYVIEW(dev_priv))
+- vlv_exec_gpio(dev_priv, gpio_source, gpio_number, value);
++ vlv_exec_gpio(connector, gpio_source, gpio_number, value);
+ else if (IS_CHERRYVIEW(dev_priv))
+- chv_exec_gpio(dev_priv, gpio_source, gpio_number, value);
++ chv_exec_gpio(connector, gpio_source, gpio_number, value);
+ else
+- bxt_exec_gpio(dev_priv, gpio_source, gpio_index, value);
++ bxt_exec_gpio(connector, gpio_source, gpio_index, value);
+
+ return data;
+ }
+@@ -585,14 +591,15 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ enum mipi_seq seq_id)
+ {
+ struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
++ struct intel_connector *connector = intel_dsi->attached_connector;
+ const u8 *data;
+ fn_mipi_elem_exec mipi_elem_exec;
+
+ if (drm_WARN_ON(&dev_priv->drm,
+- seq_id >= ARRAY_SIZE(dev_priv->vbt.dsi.sequence)))
++ seq_id >= ARRAY_SIZE(connector->panel.vbt.dsi.sequence)))
+ return;
+
+- data = dev_priv->vbt.dsi.sequence[seq_id];
++ data = connector->panel.vbt.dsi.sequence[seq_id];
+ if (!data)
+ return;
+
+@@ -605,7 +612,7 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ data++;
+
+ /* Skip Size of Sequence. */
+- if (dev_priv->vbt.dsi.seq_version >= 3)
++ if (connector->panel.vbt.dsi.seq_version >= 3)
+ data += 4;
+
+ while (1) {
+@@ -621,7 +628,7 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ mipi_elem_exec = NULL;
+
+ /* Size of Operation. */
+- if (dev_priv->vbt.dsi.seq_version >= 3)
++ if (connector->panel.vbt.dsi.seq_version >= 3)
+ operation_size = *data++;
+
+ if (mipi_elem_exec) {
+@@ -669,10 +676,10 @@ void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi,
+
+ void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec)
+ {
+- struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
++ struct intel_connector *connector = intel_dsi->attached_connector;
+
+ /* For v3 VBTs in vid-mode the delays are part of the VBT sequences */
+- if (is_vid_mode(intel_dsi) && dev_priv->vbt.dsi.seq_version >= 3)
++ if (is_vid_mode(intel_dsi) && connector->panel.vbt.dsi.seq_version >= 3)
+ return;
+
+ msleep(msec);
+@@ -734,9 +741,10 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
+- struct mipi_pps_data *pps = dev_priv->vbt.dsi.pps;
+- struct drm_display_mode *mode = dev_priv->vbt.lfp_lvds_vbt_mode;
++ struct intel_connector *connector = intel_dsi->attached_connector;
++ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
++ struct mipi_pps_data *pps = connector->panel.vbt.dsi.pps;
++ struct drm_display_mode *mode = connector->panel.vbt.lfp_lvds_vbt_mode;
+ u16 burst_mode_ratio;
+ enum port port;
+
+@@ -872,7 +880,8 @@ void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
++ struct intel_connector *connector = intel_dsi->attached_connector;
++ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+ enum gpiod_flags flags = panel_is_on ? GPIOD_OUT_HIGH : GPIOD_OUT_LOW;
+ bool want_backlight_gpio = false;
+ bool want_panel_gpio = false;
+@@ -927,7 +936,8 @@ void intel_dsi_vbt_gpio_cleanup(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
++ struct intel_connector *connector = intel_dsi->attached_connector;
++ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+
+ if (intel_dsi->gpio_panel) {
+ gpiod_put(intel_dsi->gpio_panel);
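+
The common thread in the i915 hunks above is moving VBT-derived panel data from the device-global dev_priv->vbt into per-connector storage (connector->panel.vbt), so each panel carries its own settings. A toy userspace model of that refactor, with made-up types that are not the real i915 structures:

#include <stdio.h>

/* Toy model of the dev_priv->vbt -> connector->panel.vbt move: the
 * config is resolved through the connector, so two panels on one
 * device no longer share a single settings block. Illustrative only. */
struct panel_cfg { int seq_version; };
struct connector { struct panel_cfg vbt; };
struct gpu_dev   { struct connector *connectors[2]; };

static int seq_version(const struct connector *c)
{
        return c->vbt.seq_version;      /* was: dev->vbt.dsi.seq_version */
}

int main(void)
{
        struct connector a = { .vbt = { .seq_version = 3 } };
        struct connector b = { .vbt = { .seq_version = 2 } };
        struct gpu_dev dev = { .connectors = { &a, &b } };

        printf("%d %d\n", seq_version(dev.connectors[0]),
                          seq_version(dev.connectors[1]));
        return 0;
}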
+diff --git a/drivers/gpu/drm/i915/display/intel_lvds.c b/drivers/gpu/drm/i915/display/intel_lvds.c
+index e8478161f8b9b..9f250a70519aa 100644
+--- a/drivers/gpu/drm/i915/display/intel_lvds.c
++++ b/drivers/gpu/drm/i915/display/intel_lvds.c
+@@ -809,7 +809,7 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
+ else
+ val &= ~(LVDS_DETECTED | LVDS_PIPE_SEL_MASK);
+ if (val == 0)
+- val = dev_priv->vbt.bios_lvds_val;
++ val = connector->panel.vbt.bios_lvds_val;
+
+ return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
+ }
+@@ -967,9 +967,11 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ }
+ intel_connector->edid = edid;
+
++ intel_bios_init_panel(dev_priv, &intel_connector->panel);
++
+ /* Try EDID first */
+ intel_panel_add_edid_fixed_modes(intel_connector,
+- dev_priv->vbt.drrs_type != DRRS_TYPE_NONE);
++ intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
+
+ /* Failed to get EDID, what about VBT? */
+ if (!intel_panel_preferred_fixed_mode(intel_connector))
+diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c
+index d1d1b59102d69..d055e41185582 100644
+--- a/drivers/gpu/drm/i915/display/intel_panel.c
++++ b/drivers/gpu/drm/i915/display/intel_panel.c
+@@ -75,9 +75,8 @@ const struct drm_display_mode *
+ intel_panel_downclock_mode(struct intel_connector *connector,
+ const struct drm_display_mode *adjusted_mode)
+ {
+- struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *fixed_mode, *best_mode = NULL;
+- int min_vrefresh = i915->vbt.seamless_drrs_min_refresh_rate;
++ int min_vrefresh = connector->panel.vbt.seamless_drrs_min_refresh_rate;
+ int max_vrefresh = drm_mode_vrefresh(adjusted_mode);
+
+ /* pick the fixed_mode with the lowest refresh rate */
+@@ -113,13 +112,11 @@ int intel_panel_get_modes(struct intel_connector *connector)
+
+ enum drrs_type intel_panel_drrs_type(struct intel_connector *connector)
+ {
+- struct drm_i915_private *i915 = to_i915(connector->base.dev);
+-
+ if (list_empty(&connector->panel.fixed_modes) ||
+ list_is_singular(&connector->panel.fixed_modes))
+ return DRRS_TYPE_NONE;
+
+- return i915->vbt.drrs_type;
++ return connector->panel.vbt.drrs_type;
+ }
+
+ int intel_panel_compute_config(struct intel_connector *connector,
+@@ -260,7 +257,7 @@ void intel_panel_add_vbt_lfp_fixed_mode(struct intel_connector *connector)
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *mode;
+
+- mode = i915->vbt.lfp_lvds_vbt_mode;
++ mode = connector->panel.vbt.lfp_lvds_vbt_mode;
+ if (!mode)
+ return;
+
+@@ -274,7 +271,7 @@ void intel_panel_add_vbt_sdvo_fixed_mode(struct intel_connector *connector)
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *mode;
+
+- mode = i915->vbt.sdvo_lvds_vbt_mode;
++ mode = connector->panel.vbt.sdvo_lvds_vbt_mode;
+ if (!mode)
+ return;
+
+@@ -639,6 +636,8 @@ void intel_panel_fini(struct intel_connector *connector)
+
+ intel_backlight_destroy(panel);
+
++ intel_bios_fini_panel(panel);
++
+ list_for_each_entry_safe(fixed_mode, next, &panel->fixed_modes, head) {
+ list_del(&fixed_mode->head);
+ drm_mode_destroy(connector->base.dev, fixed_mode);
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index 5a598dd060391..a226e4e5c5698 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -209,7 +209,8 @@ static int
+ bxt_power_sequencer_idx(struct intel_dp *intel_dp)
+ {
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+- int backlight_controller = dev_priv->vbt.backlight.controller;
++ struct intel_connector *connector = intel_dp->attached_connector;
++ int backlight_controller = connector->panel.vbt.backlight.controller;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+@@ -1159,53 +1160,84 @@ intel_pps_verify_state(struct intel_dp *intel_dp)
+ }
+ }
+
+-static void pps_init_delays(struct intel_dp *intel_dp)
++static void pps_init_delays_cur(struct intel_dp *intel_dp,
++ struct edp_power_seq *cur)
+ {
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+- struct edp_power_seq cur, vbt, spec,
+- *final = &intel_dp->pps.pps_delays;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+- /* already initialized? */
+- if (final->t11_t12 != 0)
+- return;
++ intel_pps_readout_hw_state(intel_dp, cur);
++
++ intel_pps_dump_state(intel_dp, "cur", cur);
++}
+
+- intel_pps_readout_hw_state(intel_dp, &cur);
++static void pps_init_delays_vbt(struct intel_dp *intel_dp,
++ struct edp_power_seq *vbt)
++{
++ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
++ struct intel_connector *connector = intel_dp->attached_connector;
+
+- intel_pps_dump_state(intel_dp, "cur", &cur);
++ *vbt = connector->panel.vbt.edp.pps;
+
+- vbt = dev_priv->vbt.edp.pps;
+ /* On Toshiba Satellite P50-C-18C system the VBT T12 delay
+ * of 500ms appears to be too short. Occasionally the panel
+ * just fails to power back on. Increasing the delay to 800ms
+ * seems sufficient to avoid this problem.
+ */
+ if (dev_priv->quirks & QUIRK_INCREASE_T12_DELAY) {
+- vbt.t11_t12 = max_t(u16, vbt.t11_t12, 1300 * 10);
++ vbt->t11_t12 = max_t(u16, vbt->t11_t12, 1300 * 10);
+ drm_dbg_kms(&dev_priv->drm,
+ "Increasing T12 panel delay as per the quirk to %d\n",
+- vbt.t11_t12);
++ vbt->t11_t12);
+ }
++
+ /* T11_T12 delay is special and actually in units of 100ms, but zero
+ * based in the hw (so we need to add 100 ms). But the sw vbt
+ * table multiplies it with 1000 to make it in units of 100usec,
+ * too. */
+- vbt.t11_t12 += 100 * 10;
++ vbt->t11_t12 += 100 * 10;
++
++ intel_pps_dump_state(intel_dp, "vbt", vbt);
++}
++
++static void pps_init_delays_spec(struct intel_dp *intel_dp,
++ struct edp_power_seq *spec)
++{
++ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
++
++ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ /* Upper limits from eDP 1.3 spec. Note that we use the clunky units of
+ * our hw here, which are all in 100usec. */
+- spec.t1_t3 = 210 * 10;
+- spec.t8 = 50 * 10; /* no limit for t8, use t7 instead */
+- spec.t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */
+- spec.t10 = 500 * 10;
++ spec->t1_t3 = 210 * 10;
++ spec->t8 = 50 * 10; /* no limit for t8, use t7 instead */
++ spec->t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */
++ spec->t10 = 500 * 10;
+ /* This one is special and actually in units of 100ms, but zero
+ * based in the hw (so we need to add 100 ms). But the sw vbt
+ * table multiplies it with 1000 to make it in units of 100usec,
+ * too. */
+- spec.t11_t12 = (510 + 100) * 10;
++ spec->t11_t12 = (510 + 100) * 10;
++
++ intel_pps_dump_state(intel_dp, "spec", spec);
++}
++
++static void pps_init_delays(struct intel_dp *intel_dp)
++{
++ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
++ struct edp_power_seq cur, vbt, spec,
++ *final = &intel_dp->pps.pps_delays;
++
++ lockdep_assert_held(&dev_priv->pps_mutex);
++
++ /* already initialized? */
++ if (final->t11_t12 != 0)
++ return;
+
+- intel_pps_dump_state(intel_dp, "vbt", &vbt);
++ pps_init_delays_cur(intel_dp, &cur);
++ pps_init_delays_vbt(intel_dp, &vbt);
++ pps_init_delays_spec(intel_dp, &spec);
+
+ /* Use the max of the register settings and vbt. If both are
+ * unset, fall back to the spec limits. */
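+
The pps_init_delays() split above keeps the awkward T11_T12 unit convention in one place: the hardware counts in 100 us steps, but T11_T12 is zero-based in 100 ms steps, so 100 ms is added before comparison. A minimal standalone sketch of that arithmetic (values illustrative, not from any real VBT):

#include <stdio.h>

int main(void)
{
        /* VBT hands over T11_T12 already scaled to 100 us units; the
         * hw field is zero-based in 100 ms steps, so add 100 ms. */
        unsigned int vbt_t11_t12 = 500 * 10;    /* 500 ms from VBT */
        vbt_t11_t12 += 100 * 10;                /* zero-based correction */

        /* eDP 1.3 spec limit in the same convention: (510 + 100) ms */
        unsigned int spec_t11_t12 = (510 + 100) * 10;

        printf("vbt=%u spec=%u (in 100 us units)\n",
               vbt_t11_t12, spec_t11_t12);
        return 0;
}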
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 06db407e2749f..8f09203e0cf03 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -86,10 +86,13 @@
+
+ static bool psr_global_enabled(struct intel_dp *intel_dp)
+ {
++ struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+ switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
+ case I915_PSR_DEBUG_DEFAULT:
++ if (i915->params.enable_psr == -1)
++ return connector->panel.vbt.psr.enable;
+ return i915->params.enable_psr;
+ case I915_PSR_DEBUG_DISABLE:
+ return false;
+@@ -399,6 +402,7 @@ static void intel_psr_enable_sink(struct intel_dp *intel_dp)
+
+ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
+ {
++ struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ u32 val = 0;
+
+@@ -411,20 +415,20 @@ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
+ goto check_tp3_sel;
+ }
+
+- if (dev_priv->vbt.psr.tp1_wakeup_time_us == 0)
++ if (connector->panel.vbt.psr.tp1_wakeup_time_us == 0)
+ val |= EDP_PSR_TP1_TIME_0us;
+- else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 100)
++ else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 100)
+ val |= EDP_PSR_TP1_TIME_100us;
+- else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 500)
++ else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 500)
+ val |= EDP_PSR_TP1_TIME_500us;
+ else
+ val |= EDP_PSR_TP1_TIME_2500us;
+
+- if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us == 0)
++ if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us == 0)
+ val |= EDP_PSR_TP2_TP3_TIME_0us;
+- else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 100)
++ else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 100)
+ val |= EDP_PSR_TP2_TP3_TIME_100us;
+- else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 500)
++ else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 500)
+ val |= EDP_PSR_TP2_TP3_TIME_500us;
+ else
+ val |= EDP_PSR_TP2_TP3_TIME_2500us;
+@@ -441,13 +445,14 @@ check_tp3_sel:
+
+ static u8 psr_compute_idle_frames(struct intel_dp *intel_dp)
+ {
++ struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ int idle_frames;
+
+ /* Let's use 6 as the minimum to cover all known cases including the
+ * off-by-one issue that HW has in some cases.
+ */
+- idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
++ idle_frames = max(6, connector->panel.vbt.psr.idle_frames);
+ idle_frames = max(idle_frames, intel_dp->psr.sink_sync_latency + 1);
+
+ if (drm_WARN_ON(&dev_priv->drm, idle_frames > 0xf))
+@@ -483,18 +488,19 @@ static void hsw_activate_psr1(struct intel_dp *intel_dp)
+
+ static u32 intel_psr2_get_tp_time(struct intel_dp *intel_dp)
+ {
++ struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ u32 val = 0;
+
+ if (dev_priv->params.psr_safest_params)
+ return EDP_PSR2_TP2_TIME_2500us;
+
+- if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us >= 0 &&
+- dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 50)
++ if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us >= 0 &&
++ connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 50)
+ val |= EDP_PSR2_TP2_TIME_50us;
+- else if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 100)
++ else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 100)
+ val |= EDP_PSR2_TP2_TIME_100us;
+- else if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 500)
++ else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 500)
+ val |= EDP_PSR2_TP2_TIME_500us;
+ else
+ val |= EDP_PSR2_TP2_TIME_2500us;
+@@ -2344,6 +2350,7 @@ unlock:
+ */
+ void intel_psr_init(struct intel_dp *intel_dp)
+ {
++ struct intel_connector *connector = intel_dp->attached_connector;
+ struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+@@ -2367,14 +2374,10 @@ void intel_psr_init(struct intel_dp *intel_dp)
+
+ intel_dp->psr.source_support = true;
+
+- if (dev_priv->params.enable_psr == -1)
+- if (!dev_priv->vbt.psr.enable)
+- dev_priv->params.enable_psr = 0;
+-
+ /* Set link_standby x link_off defaults */
+ if (DISPLAY_VER(dev_priv) < 12)
+ /* For new platforms up to TGL let's respect VBT back again */
+- intel_dp->psr.link_standby = dev_priv->vbt.psr.full_link;
++ intel_dp->psr.link_standby = connector->panel.vbt.psr.full_link;
+
+ INIT_WORK(&intel_dp->psr.work, intel_psr_work);
+ INIT_DELAYED_WORK(&intel_dp->psr.dc3co_work, tgl_dc3co_disable_work);
+diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c b/drivers/gpu/drm/i915/display/intel_sdvo.c
+index d81855d57cdc9..14a64bd61176d 100644
+--- a/drivers/gpu/drm/i915/display/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
+@@ -2869,6 +2869,7 @@ static bool
+ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ {
+ struct drm_encoder *encoder = &intel_sdvo->base.base;
++ struct drm_i915_private *i915 = to_i915(encoder->dev);
+ struct drm_connector *connector;
+ struct intel_connector *intel_connector;
+ struct intel_sdvo_connector *intel_sdvo_connector;
+@@ -2900,6 +2901,8 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ if (!intel_sdvo_create_enhance_property(intel_sdvo, intel_sdvo_connector))
+ goto err;
+
++ intel_bios_init_panel(i915, &intel_connector->panel);
++
+ /*
+ * Fetch modes from VBT. For SDVO prefer the VBT mode since some
+ * SDVO->LVDS transcoders can't cope with the EDID mode.
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index 1954f07f0d3ec..02f75e95b2ec1 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -782,6 +782,7 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ {
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
++ struct intel_connector *connector = to_intel_connector(conn_state->connector);
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ enum pipe pipe = crtc->pipe;
+ enum port port;
+@@ -838,7 +839,7 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ * the delay in that case. If there is no deassert-seq, then an
+ * unconditional msleep is used to give the panel time to power-on.
+ */
+- if (dev_priv->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
++ if (connector->panel.vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
+ intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
+ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+ } else {
+@@ -1690,7 +1691,8 @@ static void vlv_dphy_param_init(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
++ struct intel_connector *connector = intel_dsi->attached_connector;
++ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+ u32 tlpx_ns, extra_byte_count, tlpx_ui;
+ u32 ui_num, ui_den;
+ u32 prepare_cnt, exit_zero_cnt, clk_zero_cnt, trail_cnt;
+@@ -1924,13 +1926,22 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
+
+ intel_dsi->panel_power_off_time = ktime_get_boottime();
+
+- if (dev_priv->vbt.dsi.config->dual_link)
++ intel_bios_init_panel(dev_priv, &intel_connector->panel);
++
++ if (intel_connector->panel.vbt.dsi.config->dual_link)
+ intel_dsi->ports = BIT(PORT_A) | BIT(PORT_C);
+ else
+ intel_dsi->ports = BIT(port);
+
+- intel_dsi->dcs_backlight_ports = dev_priv->vbt.dsi.bl_ports;
+- intel_dsi->dcs_cabc_ports = dev_priv->vbt.dsi.cabc_ports;
++ if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
++ intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
++
++ intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
++
++ if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
++ intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
++
++ intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
+
+ /* Create a DSI host (and a device) for each port. */
+ for_each_dsi_port(port, intel_dsi->ports) {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+index 321af109d484f..8da42af0256ab 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+@@ -1269,6 +1269,10 @@ static void i915_gem_context_release_work(struct work_struct *work)
+ trace_i915_context_free(ctx);
+ GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
+
++ spin_lock(&ctx->i915->gem.contexts.lock);
++ list_del(&ctx->link);
++ spin_unlock(&ctx->i915->gem.contexts.lock);
++
+ if (ctx->syncobj)
+ drm_syncobj_put(ctx->syncobj);
+
+@@ -1514,10 +1518,6 @@ static void context_close(struct i915_gem_context *ctx)
+
+ ctx->file_priv = ERR_PTR(-EBADF);
+
+- spin_lock(&ctx->i915->gem.contexts.lock);
+- list_del(&ctx->link);
+- spin_unlock(&ctx->i915->gem.contexts.lock);
+-
+ client = ctx->client;
+ if (client) {
+ spin_lock(&client->ctx_lock);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 5184d70d48382..554d79bc0312d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -194,12 +194,6 @@ struct drm_i915_display_funcs {
+
+ #define I915_COLOR_UNEVICTABLE (-1) /* a non-vma sharing the address space */
+
+-enum drrs_type {
+- DRRS_TYPE_NONE,
+- DRRS_TYPE_STATIC,
+- DRRS_TYPE_SEAMLESS,
+-};
+-
+ #define QUIRK_LVDS_SSC_DISABLE (1<<1)
+ #define QUIRK_INVERT_BRIGHTNESS (1<<2)
+ #define QUIRK_BACKLIGHT_PRESENT (1<<3)
+@@ -308,76 +302,19 @@ struct intel_vbt_data {
+ /* bdb version */
+ u16 version;
+
+- struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
+- struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
+-
+ /* Feature bits */
+ unsigned int int_tv_support:1;
+- unsigned int lvds_dither:1;
+ unsigned int int_crt_support:1;
+ unsigned int lvds_use_ssc:1;
+ unsigned int int_lvds_support:1;
+ unsigned int display_clock_mode:1;
+ unsigned int fdi_rx_polarity_inverted:1;
+- unsigned int panel_type:4;
+ int lvds_ssc_freq;
+- unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
+ enum drm_panel_orientation orientation;
+
+ bool override_afc_startup;
+ u8 override_afc_startup_val;
+
+- u8 seamless_drrs_min_refresh_rate;
+- enum drrs_type drrs_type;
+-
+- struct {
+- int rate;
+- int lanes;
+- int preemphasis;
+- int vswing;
+- int bpp;
+- struct edp_power_seq pps;
+- u8 drrs_msa_timing_delay;
+- bool low_vswing;
+- bool initialized;
+- bool hobl;
+- } edp;
+-
+- struct {
+- bool enable;
+- bool full_link;
+- bool require_aux_wakeup;
+- int idle_frames;
+- int tp1_wakeup_time_us;
+- int tp2_tp3_wakeup_time_us;
+- int psr2_tp2_tp3_wakeup_time_us;
+- } psr;
+-
+- struct {
+- u16 pwm_freq_hz;
+- u16 brightness_precision_bits;
+- bool present;
+- bool active_low_pwm;
+- u8 min_brightness; /* min_brightness/255 of max */
+- u8 controller; /* brightness controller number */
+- enum intel_backlight_type type;
+- } backlight;
+-
+- /* MIPI DSI */
+- struct {
+- u16 panel_id;
+- struct mipi_config *config;
+- struct mipi_pps_data *pps;
+- u16 bl_ports;
+- u16 cabc_ports;
+- u8 seq_version;
+- u32 size;
+- u8 *data;
+- const u8 *sequence[MIPI_SEQ_MAX];
+- u8 *deassert_seq; /* Used by fixup_mipi_sequences() */
+- enum drm_panel_orientation orientation;
+- } dsi;
+-
+ int crt_ddc_pin;
+
+ struct list_head display_devices;
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 702e5b89be226..b605d0ceaefad 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1191,7 +1191,8 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
+
+ intel_uc_cleanup_firmwares(&to_gt(dev_priv)->uc);
+
+- i915_gem_drain_freed_objects(dev_priv);
++ /* Flush any outstanding work, including i915_gem_context.release_work. */
++ i915_gem_drain_workqueue(dev_priv);
+
+ drm_WARN_ON(&dev_priv->drm, !list_empty(&dev_priv->gem.contexts.list));
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 5d7504a72b11c..e244aa408d9d4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -151,7 +151,7 @@ static void mtk_dither_config(struct device *dev, unsigned int w,
+ {
+ struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev);
+
+- mtk_ddp_write(cmdq_pkt, h << 16 | w, &priv->cmdq_reg, priv->regs, DISP_REG_DITHER_SIZE);
++ mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_REG_DITHER_SIZE);
+ mtk_ddp_write(cmdq_pkt, DITHER_RELAY_MODE, &priv->cmdq_reg, priv->regs,
+ DISP_REG_DITHER_CFG);
+ mtk_dither_set_common(priv->regs, &priv->cmdq_reg, bpc, DISP_REG_DITHER_CFG,
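+
The one-liner above fixes the operand order when packing the dither size register: width belongs in bits [31:16] and height in bits [15:0], not the other way around. A quick sketch of the packing and its inverse, with the register layout as implied by the fix:

#include <stdio.h>
#include <stdint.h>

/* Width in bits [31:16], height in bits [15:0], per the fix above. */
static uint32_t dither_size(uint32_t w, uint32_t h)
{
        return w << 16 | h;
}

int main(void)
{
        uint32_t v = dither_size(1920, 1080);
        printf("reg=0x%08x w=%u h=%u\n", (unsigned)v,
               (unsigned)(v >> 16), (unsigned)(v & 0xffff));
        return 0;
}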
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index af2f123e9a9a9..9a3b86c29b503 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -685,6 +685,16 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ if (--dsi->refcount != 0)
+ return;
+
++ /*
++ * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
++ * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
++ * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
++ * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
++ * after dsi is fully set.
++ */
++ mtk_dsi_stop(dsi);
++
++ mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+ mtk_dsi_reset_engine(dsi);
+ mtk_dsi_lane0_ulp_mode_enter(dsi);
+ mtk_dsi_clk_ulp_mode_enter(dsi);
+@@ -735,17 +745,6 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
+ if (!dsi->enabled)
+ return;
+
+- /*
+- * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
+- * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
+- * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
+- * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
+- * after dsi is fully set.
+- */
+- mtk_dsi_stop(dsi);
+-
+- mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
+-
+ dsi->enabled = false;
+ }
+
+@@ -808,10 +807,13 @@ static void mtk_dsi_bridge_atomic_post_disable(struct drm_bridge *bridge,
+
+ static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {
+ .attach = mtk_dsi_bridge_attach,
++ .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+ .atomic_disable = mtk_dsi_bridge_atomic_disable,
++ .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+ .atomic_enable = mtk_dsi_bridge_atomic_enable,
+ .atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,
+ .atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,
++ .atomic_reset = drm_atomic_helper_bridge_reset,
+ .mode_set = mtk_dsi_bridge_mode_set,
+ };
+
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 4a2e580a2f7b7..0e001ce8a40fd 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2136,7 +2136,7 @@ static const struct panel_desc innolux_g121i1_l01 = {
+ .enable = 200,
+ .disable = 20,
+ },
+- .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++ .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index c204e9b95c1f7..518ee13b1d6f4 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -283,8 +283,9 @@ static int cdn_dp_connector_get_modes(struct drm_connector *connector)
+ return ret;
+ }
+
+-static int cdn_dp_connector_mode_valid(struct drm_connector *connector,
+- struct drm_display_mode *mode)
++static enum drm_mode_status
++cdn_dp_connector_mode_valid(struct drm_connector *connector,
++ struct drm_display_mode *mode)
+ {
+ struct cdn_dp_device *dp = connector_to_dp(connector);
+ struct drm_display_info *display_info = &dp->connector.display_info;
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 547ae334e5cd8..027029efb0088 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2309,7 +2309,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
+ bool fb_overlap_ok)
+ {
+ struct resource *iter, *shadow;
+- resource_size_t range_min, range_max, start;
++ resource_size_t range_min, range_max, start, end;
+ const char *dev_n = dev_name(&device_obj->device);
+ int retval;
+
+@@ -2344,6 +2344,14 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
+ range_max = iter->end;
+ start = (range_min + align - 1) & ~(align - 1);
+ for (; start + size - 1 <= range_max; start += align) {
++ end = start + size - 1;
++
++ /* Skip the whole fb_mmio region if not fb_overlap_ok */
++ if (!fb_overlap_ok && fb_mmio &&
++ (((start >= fb_mmio->start) && (start <= fb_mmio->end)) ||
++ ((end >= fb_mmio->start) && (end <= fb_mmio->end))))
++ continue;
++
+ shadow = __request_region(iter, start, size, NULL,
+ IORESOURCE_BUSY);
+ if (!shadow)
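+
The new test above rejects any candidate range whose start or end lands inside the reserved framebuffer region. The predicate, extracted into a standalone sketch (names invented for illustration):

#include <stdbool.h>
#include <stdio.h>

/* Skip a candidate [start, end] if either endpoint falls inside the
 * framebuffer range [fb_start, fb_end], as in the hunk above. */
static bool touches_fb(unsigned long start, unsigned long end,
                       unsigned long fb_start, unsigned long fb_end)
{
        return (start >= fb_start && start <= fb_end) ||
               (end >= fb_start && end <= fb_end);
}

int main(void)
{
        /* overlaps the tail of a framebuffer at [0x1000, 0x1fff] */
        printf("%d\n", touches_fb(0x1800, 0x27ff, 0x1000, 0x1fff)); /* 1 */
        /* entirely above it */
        printf("%d\n", touches_fb(0x3000, 0x3fff, 0x1000, 0x1fff)); /* 0 */
        return 0;
}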
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index e47fa34656717..3082183bd66a4 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1583,7 +1583,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ if (i2c_imx->dma)
+ i2c_imx_dma_free(i2c_imx);
+
+- if (ret == 0) {
++ if (ret >= 0) {
+ /* setup chip registers to defaults */
+ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
+ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index 8716032f030a0..ad5efd7497d1c 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -6,6 +6,7 @@
+ */
+
+ #include <linux/acpi.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/interrupt.h>
+@@ -63,13 +64,14 @@
+ */
+ #define MLXBF_I2C_TYU_PLL_OUT_FREQ (400 * 1000 * 1000)
+ /* Reference clock for Bluefield - 156 MHz. */
+-#define MLXBF_I2C_PLL_IN_FREQ (156 * 1000 * 1000)
++#define MLXBF_I2C_PLL_IN_FREQ 156250000ULL
+
+ /* Constant used to determine the PLL frequency. */
+-#define MLNXBF_I2C_COREPLL_CONST 16384
++#define MLNXBF_I2C_COREPLL_CONST 16384ULL
++
++#define MLXBF_I2C_FREQUENCY_1GHZ 1000000000ULL
+
+ /* PLL registers. */
+-#define MLXBF_I2C_CORE_PLL_REG0 0x0
+ #define MLXBF_I2C_CORE_PLL_REG1 0x4
+ #define MLXBF_I2C_CORE_PLL_REG2 0x8
+
+@@ -181,22 +183,15 @@
+ #define MLXBF_I2C_COREPLL_FREQ MLXBF_I2C_TYU_PLL_OUT_FREQ
+
+ /* Core PLL TYU configuration. */
+-#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(12, 0)
+-#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(3, 0)
+-#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(5, 0)
+-
+-#define MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT 3
+-#define MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT 16
+-#define MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT 20
++#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(15, 3)
++#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(19, 16)
++#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(25, 20)
+
+ /* Core PLL YU configuration. */
+ #define MLXBF_I2C_COREPLL_CORE_F_YU_MASK GENMASK(25, 0)
+ #define MLXBF_I2C_COREPLL_CORE_OD_YU_MASK GENMASK(3, 0)
+-#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(5, 0)
++#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(31, 26)
+
+-#define MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT 0
+-#define MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT 1
+-#define MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT 26
+
+ /* Core PLL frequency. */
+ static u64 mlxbf_i2c_corepll_frequency;
+@@ -479,8 +474,6 @@ static struct mutex mlxbf_i2c_bus_lock;
+ #define MLXBF_I2C_MASK_8 GENMASK(7, 0)
+ #define MLXBF_I2C_MASK_16 GENMASK(15, 0)
+
+-#define MLXBF_I2C_FREQUENCY_1GHZ 1000000000
+-
+ /*
+ * Function to poll a set of bits at a specific address; it checks whether
+ * the bits are equal to zero when eq_zero is set to 'true', and not equal
+@@ -669,7 +662,7 @@ static int mlxbf_i2c_smbus_enable(struct mlxbf_i2c_priv *priv, u8 slave,
+ /* Clear status bits. */
+ writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS);
+ /* Set the cause data. */
+- writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR);
++ writel(~0x0, priv->mst_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR);
+ /* Zero PEC byte. */
+ writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC);
+ /* Zero byte count. */
+@@ -738,6 +731,9 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ if (flags & MLXBF_I2C_F_WRITE) {
+ write_en = 1;
+ write_len += operation->length;
++ if (data_idx + operation->length >
++ MLXBF_I2C_MASTER_DATA_DESC_SIZE)
++ return -ENOBUFS;
+ memcpy(data_desc + data_idx,
+ operation->buffer, operation->length);
+ data_idx += operation->length;
+@@ -1407,24 +1403,19 @@ static int mlxbf_i2c_init_master(struct platform_device *pdev,
+ return 0;
+ }
+
+-static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
++static u64 mlxbf_i2c_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
+ {
+- u64 core_frequency, pad_frequency;
++ u64 core_frequency;
+ u8 core_od, core_r;
+ u32 corepll_val;
+ u16 core_f;
+
+- pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
+-
+ corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
+
+ /* Get Core PLL configuration bits. */
+- core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_F_TYU_MASK;
+- core_od = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK;
+- core_r = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_R_TYU_MASK;
++ core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_TYU_MASK, corepll_val);
++ core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK, corepll_val);
++ core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_TYU_MASK, corepll_val);
+
+ /*
+ * Compute PLL output frequency as follows:
+@@ -1436,31 +1427,26 @@ static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
+ * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
+ * and PadFrequency, respectively.
+ */
+- core_frequency = pad_frequency * (++core_f);
++ core_frequency = MLXBF_I2C_PLL_IN_FREQ * (++core_f);
+ core_frequency /= (++core_r) * (++core_od);
+
+ return core_frequency;
+ }
+
+-static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
++static u64 mlxbf_i2c_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
+ {
+ u32 corepll_reg1_val, corepll_reg2_val;
+- u64 corepll_frequency, pad_frequency;
++ u64 corepll_frequency;
+ u8 core_od, core_r;
+ u32 core_f;
+
+- pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
+-
+ corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
+ corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2);
+
+ /* Get Core PLL configuration bits */
+- core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_F_YU_MASK;
+- core_r = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_R_YU_MASK;
+- core_od = rol32(corepll_reg2_val, MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT) &
+- MLXBF_I2C_COREPLL_CORE_OD_YU_MASK;
++ core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_YU_MASK, corepll_reg1_val);
++ core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_YU_MASK, corepll_reg1_val);
++ core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_YU_MASK, corepll_reg2_val);
+
+ /*
+ * Compute PLL output frequency as follows:
+@@ -1472,7 +1458,7 @@ static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
+ * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
+ * and PadFrequency, respectively.
+ */
+- corepll_frequency = (pad_frequency * core_f) / MLNXBF_I2C_COREPLL_CONST;
++ corepll_frequency = (MLXBF_I2C_PLL_IN_FREQ * core_f) / MLNXBF_I2C_COREPLL_CONST;
+ corepll_frequency /= (++core_r) * (++core_od);
+
+ return corepll_frequency;
+@@ -2180,14 +2166,14 @@ static struct mlxbf_i2c_chip_info mlxbf_i2c_chip[] = {
+ [1] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_1],
+ [2] = &mlxbf_i2c_gpio_res[MLXBF_I2C_CHIP_TYPE_1]
+ },
+- .calculate_freq = mlxbf_calculate_freq_from_tyu
++ .calculate_freq = mlxbf_i2c_calculate_freq_from_tyu
+ },
+ [MLXBF_I2C_CHIP_TYPE_2] = {
+ .type = MLXBF_I2C_CHIP_TYPE_2,
+ .shared_res = {
+ [0] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_2]
+ },
+- .calculate_freq = mlxbf_calculate_freq_from_yu
++ .calculate_freq = mlxbf_i2c_calculate_freq_from_yu
+ }
+ };
+
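+
The mlxbf hunks above replace open-coded rol32()-plus-shift extraction with FIELD_GET() over masks that now encode the bit positions directly. A self-contained sketch of the same extraction and the TYU frequency formula, using portable stand-ins for GENMASK()/FIELD_GET() and a register value picked to reproduce the 400 MHz TYU output:

#include <stdio.h>
#include <stdint.h>

/* Portable stand-ins for the kernel's GENMASK()/FIELD_GET(). */
#define GENMASK32(h, l)        ((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET32(mask, reg) (((reg) & (mask)) / ((mask) & -(mask)))

#define CORE_F_MASK  GENMASK32(15, 3)
#define CORE_OD_MASK GENMASK32(19, 16)
#define CORE_R_MASK  GENMASK32(25, 20)

int main(void)
{
        uint64_t pll_in = 156250000ULL;  /* 156.25 MHz pad clock */
        /* sample register value: F = 63, OD = 4, R = 4 */
        uint32_t reg = (63u << 3) | (4u << 16) | (4u << 20);

        uint64_t f  = FIELD_GET32(CORE_F_MASK, reg);
        uint64_t od = FIELD_GET32(CORE_OD_MASK, reg);
        uint64_t r  = FIELD_GET32(CORE_R_MASK, reg);

        /* TYU: PLL_OUT = PLL_IN * (F + 1) / ((R + 1) * (OD + 1)) */
        uint64_t out = pll_in * (f + 1) / ((r + 1) * (od + 1));
        printf("core frequency = %llu Hz\n", (unsigned long long)out);
        return 0;
}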
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 774507b54b57b..313904be5f3bd 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -243,9 +243,10 @@ struct i2c_mux_core *i2c_mux_alloc(struct i2c_adapter *parent,
+ int (*deselect)(struct i2c_mux_core *, u32))
+ {
+ struct i2c_mux_core *muxc;
++ size_t mux_size;
+
+- muxc = devm_kzalloc(dev, struct_size(muxc, adapter, max_adapters)
+- + sizeof_priv, GFP_KERNEL);
++ mux_size = struct_size(muxc, adapter, max_adapters);
++ muxc = devm_kzalloc(dev, size_add(mux_size, sizeof_priv), GFP_KERNEL);
+ if (!muxc)
+ return NULL;
+ if (sizeof_priv)
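+
struct_size() already saturates on overflow, but adding sizeof_priv with a bare + could still wrap; size_add() keeps the saturation through the second addition so devm_kzalloc() fails instead of allocating a too-small buffer. A sketch of the saturating semantics (a simplified stand-in, not the kernel implementation):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Saturating size addition: overflow yields SIZE_MAX, so a later
 * allocation of the result fails cleanly rather than wrapping. */
static size_t size_add_sat(size_t a, size_t b)
{
        size_t sum = a + b;
        return sum < a ? SIZE_MAX : sum;
}

int main(void)
{
        printf("%zu\n", size_add_sat(100, 28));                  /* 128 */
        printf("%d\n", size_add_sat(SIZE_MAX, 16) == SIZE_MAX);  /* 1 */
        return 0;
}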
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 861a239d905a4..3ed15e8ca6775 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -419,7 +419,7 @@ static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
+ {
+ unsigned long fl_sagaw, sl_sagaw;
+
+- fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
++ fl_sagaw = BIT(2) | (cap_5lp_support(iommu->cap) ? BIT(3) : 0);
+ sl_sagaw = cap_sagaw(iommu->cap);
+
+ /* Second level only. */
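+
The fix above keys 5-level first-stage support off cap_5lp_support() instead of the unrelated cap_fl1gp_support() bit: 4-level walks (BIT(2)) are always offered, 5-level (BIT(3)) only when the capability says so, and the result is intersected with the second-stage SAGAW. Schematically, with invented capability values:

#include <stdio.h>

#define BIT(n) (1UL << (n))

int main(void)
{
        int has_5lp = 1;                 /* pretend cap_5lp_support() */
        unsigned long fl = BIT(2) | (has_5lp ? BIT(3) : 0);
        unsigned long sl = BIT(2);       /* pretend cap_sagaw() */

        printf("fl=%#lx sl=%#lx common=%#lx\n", fl, sl, fl & sl);
        return 0;
}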
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 7835bb0f32fc3..e012b21c4fd7a 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -511,7 +511,7 @@ static int flexcop_usb_init(struct flexcop_usb *fc_usb)
+
+ if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)
+ return -ENODEV;
+- if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[1].desc))
++ if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc))
+ return -ENODEV;
+
+ switch (fc_usb->udev->speed) {
+diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
+index f8fdf88fb240c..ecbc46714e681 100644
+--- a/drivers/memstick/core/ms_block.c
++++ b/drivers/memstick/core/ms_block.c
+@@ -2188,7 +2188,6 @@ static void msb_remove(struct memstick_dev *card)
+
+ /* Remove the disk */
+ del_gendisk(msb->disk);
+- blk_cleanup_queue(msb->queue);
+ blk_mq_free_tag_set(&msb->tag_set);
+ msb->queue = NULL;
+
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index 725ba74ded308..72e91c06c618b 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -1294,7 +1294,6 @@ static void mspro_block_remove(struct memstick_dev *card)
+ del_gendisk(msb->disk);
+ dev_dbg(&card->dev, "mspro block remove\n");
+
+- blk_cleanup_queue(msb->queue);
+ blk_mq_free_tag_set(&msb->tag_set);
+ msb->queue = NULL;
+
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 912a398a9a764..2f89ae55c1773 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -2509,7 +2509,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
+ return md;
+
+ err_cleanup_queue:
+- blk_cleanup_queue(md->disk->queue);
+ blk_mq_free_tag_set(&md->queue.tag_set);
+ err_kfree:
+ kfree(md);
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index fa5324ceeebe4..f824cfdab75ac 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -494,7 +494,6 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
+ if (blk_queue_quiesced(q))
+ blk_mq_unquiesce_queue(q);
+
+- blk_cleanup_queue(q);
+ blk_mq_free_tag_set(&mq->tag_set);
+
+ /*
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index 1f0120cbe9e80..8ad095c19f271 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -87,8 +87,9 @@ static const u8 null_mac_addr[ETH_ALEN + 2] __long_aligned = {
+ static u16 ad_ticks_per_sec;
+ static const int ad_delta_in_ticks = (AD_TIMER_INTERVAL * HZ) / 1000;
+
+-static const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned =
+- MULTICAST_LACPDU_ADDR;
++const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned = {
++ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
++};
+
+ /* ================= main 802.3ad protocol functions ================== */
+ static int ad_lacpdu_send(struct port *port);
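+
Exporting lacpdu_mcast_addr lets bond_main.c drop its two local MULTICAST_LACPDU_ADDR copies and reference the single shared table. The value is the IEEE 802.3 Slow Protocols multicast address; a trivial sketch printing it for reference (pad bytes omitted):

#include <stdio.h>

/* IEEE 802.3 Slow Protocols (LACP) destination address, matching the
 * shared table above; the two trailing pad bytes are left out here. */
static const unsigned char lacpdu_mcast[6] = {
        0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
};

int main(void)
{
        for (int i = 0; i < 6; i++)
                printf("%02x%c", lacpdu_mcast[i], i < 5 ? ':' : '\n');
        return 0;
}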
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index bff0bfd10e235..ab7cb48f8dfdd 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -865,12 +865,8 @@ static void bond_hw_addr_flush(struct net_device *bond_dev,
+ dev_uc_unsync(slave_dev, bond_dev);
+ dev_mc_unsync(slave_dev, bond_dev);
+
+- if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+- /* del lacpdu mc addr from mc list */
+- u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+-
+- dev_mc_del(slave_dev, lacpdu_multicast);
+- }
++ if (BOND_MODE(bond) == BOND_MODE_8023AD)
++ dev_mc_del(slave_dev, lacpdu_mcast_addr);
+ }
+
+ /*--------------------------- Active slave change ---------------------------*/
+@@ -890,7 +886,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+ if (bond->dev->flags & IFF_ALLMULTI)
+ dev_set_allmulti(old_active->dev, -1);
+
+- bond_hw_addr_flush(bond->dev, old_active->dev);
++ if (bond->dev->flags & IFF_UP)
++ bond_hw_addr_flush(bond->dev, old_active->dev);
+ }
+
+ if (new_active) {
+@@ -901,10 +898,12 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+ if (bond->dev->flags & IFF_ALLMULTI)
+ dev_set_allmulti(new_active->dev, 1);
+
+- netif_addr_lock_bh(bond->dev);
+- dev_uc_sync(new_active->dev, bond->dev);
+- dev_mc_sync(new_active->dev, bond->dev);
+- netif_addr_unlock_bh(bond->dev);
++ if (bond->dev->flags & IFF_UP) {
++ netif_addr_lock_bh(bond->dev);
++ dev_uc_sync(new_active->dev, bond->dev);
++ dev_mc_sync(new_active->dev, bond->dev);
++ netif_addr_unlock_bh(bond->dev);
++ }
+ }
+ }
+
+@@ -2139,16 +2138,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ }
+ }
+
+- netif_addr_lock_bh(bond_dev);
+- dev_mc_sync_multiple(slave_dev, bond_dev);
+- dev_uc_sync_multiple(slave_dev, bond_dev);
+- netif_addr_unlock_bh(bond_dev);
+-
+- if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+- /* add lacpdu mc addr to mc list */
+- u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
++ if (bond_dev->flags & IFF_UP) {
++ netif_addr_lock_bh(bond_dev);
++ dev_mc_sync_multiple(slave_dev, bond_dev);
++ dev_uc_sync_multiple(slave_dev, bond_dev);
++ netif_addr_unlock_bh(bond_dev);
+
+- dev_mc_add(slave_dev, lacpdu_multicast);
++ if (BOND_MODE(bond) == BOND_MODE_8023AD)
++ dev_mc_add(slave_dev, lacpdu_mcast_addr);
+ }
+ }
+
+@@ -2420,7 +2417,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ if (old_flags & IFF_ALLMULTI)
+ dev_set_allmulti(slave_dev, -1);
+
+- bond_hw_addr_flush(bond_dev, slave_dev);
++ if (old_flags & IFF_UP)
++ bond_hw_addr_flush(bond_dev, slave_dev);
+ }
+
+ slave_disable_netpoll(slave);
+@@ -4157,6 +4155,12 @@ static int bond_open(struct net_device *bond_dev)
+ struct list_head *iter;
+ struct slave *slave;
+
++ if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
++ bond->rr_tx_counter = alloc_percpu(u32);
++ if (!bond->rr_tx_counter)
++ return -ENOMEM;
++ }
++
+ /* reset slave->backup and slave->inactive */
+ if (bond_has_slaves(bond)) {
+ bond_for_each_slave(bond, slave, iter) {
+@@ -4194,6 +4198,9 @@ static int bond_open(struct net_device *bond_dev)
+ /* register to receive LACPDUs */
+ bond->recv_probe = bond_3ad_lacpdu_recv;
+ bond_3ad_initiate_agg_selection(bond, 1);
++
++ bond_for_each_slave(bond, slave, iter)
++ dev_mc_add(slave->dev, lacpdu_mcast_addr);
+ }
+
+ if (bond_mode_can_use_xmit_hash(bond))
+@@ -4205,6 +4212,7 @@ static int bond_open(struct net_device *bond_dev)
+ static int bond_close(struct net_device *bond_dev)
+ {
+ struct bonding *bond = netdev_priv(bond_dev);
++ struct slave *slave;
+
+ bond_work_cancel_all(bond);
+ bond->send_peer_notif = 0;
+@@ -4212,6 +4220,19 @@ static int bond_close(struct net_device *bond_dev)
+ bond_alb_deinitialize(bond);
+ bond->recv_probe = NULL;
+
++ if (bond_uses_primary(bond)) {
++ rcu_read_lock();
++ slave = rcu_dereference(bond->curr_active_slave);
++ if (slave)
++ bond_hw_addr_flush(bond_dev, slave->dev);
++ rcu_read_unlock();
++ } else {
++ struct list_head *iter;
++
++ bond_for_each_slave(bond, slave, iter)
++ bond_hw_addr_flush(bond_dev, slave->dev);
++ }
++
+ return 0;
+ }
+
+@@ -6195,15 +6216,6 @@ static int bond_init(struct net_device *bond_dev)
+ if (!bond->wq)
+ return -ENOMEM;
+
+- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN) {
+- bond->rr_tx_counter = alloc_percpu(u32);
+- if (!bond->rr_tx_counter) {
+- destroy_workqueue(bond->wq);
+- bond->wq = NULL;
+- return -ENOMEM;
+- }
+- }
+-
+ spin_lock_init(&bond->stats_lock);
+ netdev_lockdep_set_classes(bond_dev);
+
+diff --git a/drivers/net/can/flexcan/flexcan-core.c b/drivers/net/can/flexcan/flexcan-core.c
+index d060088047f16..131467d37a45b 100644
+--- a/drivers/net/can/flexcan/flexcan-core.c
++++ b/drivers/net/can/flexcan/flexcan-core.c
+@@ -941,11 +941,6 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ u32 reg_ctrl, reg_id, reg_iflag1;
+ int i;
+
+- if (unlikely(drop)) {
+- skb = ERR_PTR(-ENOBUFS);
+- goto mark_as_read;
+- }
+-
+ mb = flexcan_get_mb(priv, n);
+
+ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
+@@ -974,6 +969,11 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ reg_ctrl = priv->read(&mb->can_ctrl);
+ }
+
++ if (unlikely(drop)) {
++ skb = ERR_PTR(-ENOBUFS);
++ goto mark_as_read;
++ }
++
+ if (reg_ctrl & FLEXCAN_MB_CNT_EDL)
+ skb = alloc_canfd_skb(offload->dev, &cfd);
+ else
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index d3a658b444b5f..092cd51b3926e 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -824,6 +824,7 @@ static int gs_can_open(struct net_device *netdev)
+ flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
+
+ /* finally start device */
++ dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ dm->mode = cpu_to_le32(GS_CAN_MODE_START);
+ dm->flags = cpu_to_le32(flags);
+ rc = usb_control_msg(interface_to_usbdev(dev->iface),
+@@ -835,13 +836,12 @@ static int gs_can_open(struct net_device *netdev)
+ if (rc < 0) {
+ netdev_err(netdev, "Couldn't start device (err=%d)\n", rc);
+ kfree(dm);
++ dev->can.state = CAN_STATE_STOPPED;
+ return rc;
+ }
+
+ kfree(dm);
+
+- dev->can.state = CAN_STATE_ERROR_ACTIVE;
+-
+ parent->active_channels++;
+ if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+ netif_start_queue(netdev);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 964354536f9ce..111a952f880ee 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -662,7 +662,6 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
+
+ for (i = 0; i < nr_pkts; i++) {
+ struct bnxt_sw_tx_bd *tx_buf;
+- bool compl_deferred = false;
+ struct sk_buff *skb;
+ int j, last;
+
+@@ -671,6 +670,8 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
+ skb = tx_buf->skb;
+ tx_buf->skb = NULL;
+
++ tx_bytes += skb->len;
++
+ if (tx_buf->is_push) {
+ tx_buf->is_push = 0;
+ goto next_tx_int;
+@@ -691,8 +692,9 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
+ }
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) {
+ if (bp->flags & BNXT_FLAG_CHIP_P5) {
++ /* PTP worker takes ownership of the skb */
+ if (!bnxt_get_tx_ts_p5(bp, skb))
+- compl_deferred = true;
++ skb = NULL;
+ else
+ atomic_inc(&bp->ptp_cfg->tx_avail);
+ }
+@@ -701,9 +703,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
+ next_tx_int:
+ cons = NEXT_TX(cons);
+
+- tx_bytes += skb->len;
+- if (!compl_deferred)
+- dev_kfree_skb_any(skb);
++ dev_kfree_skb_any(skb);
+ }
+
+ netdev_tx_completed_queue(txq, nr_pkts, tx_bytes);
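+
The bnxt fix replaces the compl_deferred flag with an ownership hand-off: once bnxt_get_tx_ts_p5() accepts the skb, the local pointer is NULLed and the shared dev_kfree_skb_any() call becomes a no-op (it tolerates NULL, which the hunk relies on). The same pattern in a tiny userspace sketch, where hand_off() is an invented stand-in for the PTP worker:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for handing the buffer to a worker that frees it later. */
static int hand_off(char *buf)
{
        free(buf);              /* models the worker's eventual free */
        return 0;               /* 0 = ownership transferred */
}

int main(void)
{
        char *skb = malloc(64);
        if (!skb)
                return 1;

        if (hand_off(skb) == 0)
                skb = NULL;     /* worker owns it now */

        free(skb);              /* shared free path; free(NULL) is a no-op */
        return 0;
}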
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 7f3c0875b6f58..8e316367f6ced 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -317,9 +317,9 @@ void bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp)
+
+ if (!(bp->fw_cap & BNXT_FW_CAP_RX_ALL_PKT_TS) && (ptp->tstamp_filters &
+ (PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_ENABLE |
+- PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE))) {
++ PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_DISABLE))) {
+ ptp->tstamp_filters &= ~(PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_ENABLE |
+- PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE);
++ PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_DISABLE);
+ netdev_warn(bp->dev, "Unsupported FW for all RX pkts timestamp filter\n");
+ }
+
+diff --git a/drivers/net/ethernet/freescale/enetc/Makefile b/drivers/net/ethernet/freescale/enetc/Makefile
+index a139f2e9d59f0..e0e8dfd137930 100644
+--- a/drivers/net/ethernet/freescale/enetc/Makefile
++++ b/drivers/net/ethernet/freescale/enetc/Makefile
+@@ -9,7 +9,6 @@ fsl-enetc-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o
+
+ obj-$(CONFIG_FSL_ENETC_VF) += fsl-enetc-vf.o
+ fsl-enetc-vf-y := enetc_vf.o $(common-objs)
+-fsl-enetc-vf-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o
+
+ obj-$(CONFIG_FSL_ENETC_IERB) += fsl-enetc-ierb.o
+ fsl-enetc-ierb-y := enetc_ierb.o
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 4470a4a3e4c3e..9f5b921039bd4 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -2432,7 +2432,7 @@ int enetc_close(struct net_device *ndev)
+ return 0;
+ }
+
+-static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
++int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+ {
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct tc_mqprio_qopt *mqprio = type_data;
+@@ -2486,25 +2486,6 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+ return 0;
+ }
+
+-int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+- void *type_data)
+-{
+- switch (type) {
+- case TC_SETUP_QDISC_MQPRIO:
+- return enetc_setup_tc_mqprio(ndev, type_data);
+- case TC_SETUP_QDISC_TAPRIO:
+- return enetc_setup_tc_taprio(ndev, type_data);
+- case TC_SETUP_QDISC_CBS:
+- return enetc_setup_tc_cbs(ndev, type_data);
+- case TC_SETUP_QDISC_ETF:
+- return enetc_setup_tc_txtime(ndev, type_data);
+- case TC_SETUP_BLOCK:
+- return enetc_setup_tc_psfp(ndev, type_data);
+- default:
+- return -EOPNOTSUPP;
+- }
+-}
+-
+ static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+ {
+@@ -2600,29 +2581,6 @@ static int enetc_set_rss(struct net_device *ndev, int en)
+ return 0;
+ }
+
+-static int enetc_set_psfp(struct net_device *ndev, int en)
+-{
+- struct enetc_ndev_priv *priv = netdev_priv(ndev);
+- int err;
+-
+- if (en) {
+- err = enetc_psfp_enable(priv);
+- if (err)
+- return err;
+-
+- priv->active_offloads |= ENETC_F_QCI;
+- return 0;
+- }
+-
+- err = enetc_psfp_disable(priv);
+- if (err)
+- return err;
+-
+- priv->active_offloads &= ~ENETC_F_QCI;
+-
+- return 0;
+-}
+-
+ static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
+ {
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+@@ -2641,11 +2599,9 @@ static void enetc_enable_txvlan(struct net_device *ndev, bool en)
+ enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
+ }
+
+-int enetc_set_features(struct net_device *ndev,
+- netdev_features_t features)
++void enetc_set_features(struct net_device *ndev, netdev_features_t features)
+ {
+ netdev_features_t changed = ndev->features ^ features;
+- int err = 0;
+
+ if (changed & NETIF_F_RXHASH)
+ enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH));
+@@ -2657,11 +2613,6 @@ int enetc_set_features(struct net_device *ndev,
+ if (changed & NETIF_F_HW_VLAN_CTAG_TX)
+ enetc_enable_txvlan(ndev,
+ !!(features & NETIF_F_HW_VLAN_CTAG_TX));
+-
+- if (changed & NETIF_F_HW_TC)
+- err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
+-
+- return err;
+ }
+
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index 29922c20531f0..2cfe6944ebd32 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -393,11 +393,9 @@ void enetc_start(struct net_device *ndev);
+ void enetc_stop(struct net_device *ndev);
+ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev);
+ struct net_device_stats *enetc_get_stats(struct net_device *ndev);
+-int enetc_set_features(struct net_device *ndev,
+- netdev_features_t features);
++void enetc_set_features(struct net_device *ndev, netdev_features_t features);
+ int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
+-int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+- void *type_data);
++int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data);
+ int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp);
+ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
+ struct xdp_frame **frames, u32 flags);
+@@ -465,6 +463,7 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data);
+ int enetc_psfp_init(struct enetc_ndev_priv *priv);
+ int enetc_psfp_clean(struct enetc_ndev_priv *priv);
++int enetc_set_psfp(struct net_device *ndev, bool en);
+
+ static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
+ {
+@@ -540,4 +539,9 @@ static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv)
+ {
+ return 0;
+ }
++
++static inline int enetc_set_psfp(struct net_device *ndev, bool en)
++{
++ return 0;
++}
+ #endif
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index c4a0e836d4f09..bb7750222691d 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -709,6 +709,13 @@ static int enetc_pf_set_features(struct net_device *ndev,
+ {
+ netdev_features_t changed = ndev->features ^ features;
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
++ int err;
++
++ if (changed & NETIF_F_HW_TC) {
++ err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
++ if (err)
++ return err;
++ }
+
+ if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
+ struct enetc_pf *pf = enetc_si_priv(priv->si);
+@@ -722,7 +729,28 @@ static int enetc_pf_set_features(struct net_device *ndev,
+ if (changed & NETIF_F_LOOPBACK)
+ enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK));
+
+- return enetc_set_features(ndev, features);
++ enetc_set_features(ndev, features);
++
++ return 0;
++}
++
++static int enetc_pf_setup_tc(struct net_device *ndev, enum tc_setup_type type,
++ void *type_data)
++{
++ switch (type) {
++ case TC_SETUP_QDISC_MQPRIO:
++ return enetc_setup_tc_mqprio(ndev, type_data);
++ case TC_SETUP_QDISC_TAPRIO:
++ return enetc_setup_tc_taprio(ndev, type_data);
++ case TC_SETUP_QDISC_CBS:
++ return enetc_setup_tc_cbs(ndev, type_data);
++ case TC_SETUP_QDISC_ETF:
++ return enetc_setup_tc_txtime(ndev, type_data);
++ case TC_SETUP_BLOCK:
++ return enetc_setup_tc_psfp(ndev, type_data);
++ default:
++ return -EOPNOTSUPP;
++ }
+ }
+
+ static const struct net_device_ops enetc_ndev_ops = {
+@@ -739,7 +767,7 @@ static const struct net_device_ops enetc_ndev_ops = {
+ .ndo_set_vf_spoofchk = enetc_pf_set_vf_spoofchk,
+ .ndo_set_features = enetc_pf_set_features,
+ .ndo_eth_ioctl = enetc_ioctl,
+- .ndo_setup_tc = enetc_setup_tc,
++ .ndo_setup_tc = enetc_pf_setup_tc,
+ .ndo_bpf = enetc_setup_bpf,
+ .ndo_xdp_xmit = enetc_xdp_xmit,
+ };
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 582a663ed0ba4..f8a2f02ce22de 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -1517,6 +1517,29 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ }
+ }
+
++int enetc_set_psfp(struct net_device *ndev, bool en)
++{
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
++ int err;
++
++ if (en) {
++ err = enetc_psfp_enable(priv);
++ if (err)
++ return err;
++
++ priv->active_offloads |= ENETC_F_QCI;
++ return 0;
++ }
++
++ err = enetc_psfp_disable(priv);
++ if (err)
++ return err;
++
++ priv->active_offloads &= ~ENETC_F_QCI;
++
++ return 0;
++}
++
+ int enetc_psfp_init(struct enetc_ndev_priv *priv)
+ {
+ if (epsfp.psfp_sfi_bitmap)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+index 17924305afa2f..dfcaac302e245 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+@@ -88,7 +88,20 @@ static int enetc_vf_set_mac_addr(struct net_device *ndev, void *addr)
+ static int enetc_vf_set_features(struct net_device *ndev,
+ netdev_features_t features)
+ {
+- return enetc_set_features(ndev, features);
++ enetc_set_features(ndev, features);
++
++ return 0;
++}
++
++static int enetc_vf_setup_tc(struct net_device *ndev, enum tc_setup_type type,
++ void *type_data)
++{
++ switch (type) {
++ case TC_SETUP_QDISC_MQPRIO:
++ return enetc_setup_tc_mqprio(ndev, type_data);
++ default:
++ return -EOPNOTSUPP;
++ }
+ }
+
+ /* Probing/ Init */
+@@ -100,7 +113,7 @@ static const struct net_device_ops enetc_ndev_ops = {
+ .ndo_set_mac_address = enetc_vf_set_mac_addr,
+ .ndo_set_features = enetc_vf_set_features,
+ .ndo_eth_ioctl = enetc_ioctl,
+- .ndo_setup_tc = enetc_setup_tc,
++ .ndo_setup_tc = enetc_vf_setup_tc,
+ };
+
+ static void enetc_vf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
+diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+index 8c939628e2d85..2e6461b0ea8bc 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+@@ -157,7 +157,7 @@ static int gve_alloc_page_dqo(struct gve_priv *priv,
+ int err;
+
+ err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,
+- &buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);
++ &buf_state->addr, DMA_FROM_DEVICE, GFP_ATOMIC);
+ if (err)
+ return err;
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 1aaf0c5ddf6cf..57e27f2024d38 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5785,6 +5785,26 @@ static int i40e_get_link_speed(struct i40e_vsi *vsi)
+ }
+ }
+
++/**
++ * i40e_bw_bytes_to_mbits - Convert max_tx_rate from bytes to mbits
++ * @vsi: Pointer to vsi structure
++ * @max_tx_rate: max TX rate in bytes to be converted into Mbits
++ *
++ * Helper function to convert units before send to set BW limit
++ **/
++static u64 i40e_bw_bytes_to_mbits(struct i40e_vsi *vsi, u64 max_tx_rate)
++{
++ if (max_tx_rate < I40E_BW_MBPS_DIVISOR) {
++ dev_warn(&vsi->back->pdev->dev,
++ "Setting max tx rate to minimum usable value of 50Mbps.\n");
++ max_tx_rate = I40E_BW_CREDIT_DIVISOR;
++ } else {
++ do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
++ }
++
++ return max_tx_rate;
++}
++
+ /**
+ * i40e_set_bw_limit - setup BW limit for Tx traffic based on max_tx_rate
+ * @vsi: VSI to be configured
+@@ -5807,10 +5827,10 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
+ max_tx_rate, seid);
+ return -EINVAL;
+ }
+- if (max_tx_rate && max_tx_rate < 50) {
++ if (max_tx_rate && max_tx_rate < I40E_BW_CREDIT_DIVISOR) {
+ dev_warn(&pf->pdev->dev,
+ "Setting max tx rate to minimum usable value of 50Mbps.\n");
+- max_tx_rate = 50;
++ max_tx_rate = I40E_BW_CREDIT_DIVISOR;
+ }
+
+ /* Tx rate credits are in values of 50Mbps, 0 is disabled */
+@@ -8101,9 +8121,9 @@ config_tc:
+
+ if (i40e_is_tc_mqprio_enabled(pf)) {
+ if (vsi->mqprio_qopt.max_rate[0]) {
+- u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
++ u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
++ vsi->mqprio_qopt.max_rate[0]);
+
+- do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
+ ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
+ if (!ret) {
+ u64 credits = max_tx_rate;
+@@ -10848,10 +10868,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ }
+
+ if (vsi->mqprio_qopt.max_rate[0]) {
+- u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
++ u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
++ vsi->mqprio_qopt.max_rate[0]);
+ u64 credits = 0;
+
+- do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
+ ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
+ if (ret)
+ goto end_unlock;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 86b0f21287dc8..67fbaaad39859 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2038,6 +2038,25 @@ static void i40e_del_qch(struct i40e_vf *vf)
+ }
+ }
+
++/**
++ * i40e_vc_get_max_frame_size
++ * @vf: pointer to the VF
++ *
++ * Max frame size is determined based on the current port's max frame size and
++ * whether a port VLAN is configured on this VF. The VF is not aware whether
++ * it's in a port VLAN so the PF needs to account for this in max frame size
++ * checks and sending the max frame size to the VF.
++ **/
++static u16 i40e_vc_get_max_frame_size(struct i40e_vf *vf)
++{
++ u16 max_frame_size = vf->pf->hw.phy.link_info.max_frame_size;
++
++ if (vf->port_vlan_id)
++ max_frame_size -= VLAN_HLEN;
++
++ return max_frame_size;
++}
++
+ /**
+ * i40e_vc_get_vf_resources_msg
+ * @vf: pointer to the VF info
+@@ -2139,6 +2158,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
+ vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
+ vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
++ vfres->max_mtu = i40e_vc_get_max_frame_size(vf);
+
+ if (vf->lan_vsi_idx) {
+ vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index 06d18797d25a2..18b6a702a1d6d 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -114,8 +114,11 @@ u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
+ {
+ u32 head, tail;
+
++ /* underlying hardware might not allow access and/or always return
++ * 0 for the head/tail registers so just use the cached values
++ */
+ head = ring->next_to_clean;
+- tail = readl(ring->tail);
++ tail = ring->next_to_use;
+
+ if (head != tail)
+ return (head < tail) ?
+@@ -1390,7 +1393,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
+ #endif
+ struct sk_buff *skb;
+
+- if (!rx_buffer)
++ if (!rx_buffer || !size)
+ return NULL;
+ /* prefetch first cache line of first page */
+ va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+@@ -1548,7 +1551,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
+ /* exit if we failed to retrieve a buffer */
+ if (!skb) {
+ rx_ring->rx_stats.alloc_buff_failed++;
+- if (rx_buffer)
++ if (rx_buffer && size)
+ rx_buffer->pagecnt_bias++;
+ break;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 1603e99bae4af..498797a0a0a95 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -273,11 +273,14 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter)
+ void iavf_configure_queues(struct iavf_adapter *adapter)
+ {
+ struct virtchnl_vsi_queue_config_info *vqci;
+- struct virtchnl_queue_pair_info *vqpi;
++ int i, max_frame = adapter->vf_res->max_mtu;
+ int pairs = adapter->num_active_queues;
+- int i, max_frame = IAVF_MAX_RXBUFFER;
++ struct virtchnl_queue_pair_info *vqpi;
+ size_t len;
+
++ if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)
++ max_frame = IAVF_MAX_RXBUFFER;
++
+ if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+ dev_err(&adapter->pdev->dev, "Cannot configure queues, command %d pending\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 6c4e1d45235ef..1169fd7811b09 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -911,7 +911,7 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
+ */
+ static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ {
+- u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;
++ u16 offset = 0, qmap = 0, tx_count = 0, rx_count = 0, pow = 0;
+ u16 num_txq_per_tc, num_rxq_per_tc;
+ u16 qcount_tx = vsi->alloc_txq;
+ u16 qcount_rx = vsi->alloc_rxq;
+@@ -978,23 +978,25 @@ static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ * at least 1)
+ */
+ if (offset)
+- vsi->num_rxq = offset;
++ rx_count = offset;
+ else
+- vsi->num_rxq = num_rxq_per_tc;
++ rx_count = num_rxq_per_tc;
+
+- if (vsi->num_rxq > vsi->alloc_rxq) {
++ if (rx_count > vsi->alloc_rxq) {
+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+- vsi->num_rxq, vsi->alloc_rxq);
++ rx_count, vsi->alloc_rxq);
+ return -EINVAL;
+ }
+
+- vsi->num_txq = tx_count;
+- if (vsi->num_txq > vsi->alloc_txq) {
++ if (tx_count > vsi->alloc_txq) {
+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+- vsi->num_txq, vsi->alloc_txq);
++ tx_count, vsi->alloc_txq);
+ return -EINVAL;
+ }
+
++ vsi->num_txq = tx_count;
++ vsi->num_rxq = rx_count;
++
+ if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {
+ dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n");
+ /* since there is a chance that num_rxq could have been changed
+@@ -3487,6 +3489,7 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
+ u16 pow, offset = 0, qcount_tx = 0, qcount_rx = 0, qmap;
+ u16 tc0_offset = vsi->mqprio_qopt.qopt.offset[0];
+ int tc0_qcount = vsi->mqprio_qopt.qopt.count[0];
++ u16 new_txq, new_rxq;
+ u8 netdev_tc = 0;
+ int i;
+
+@@ -3527,21 +3530,24 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
+ }
+ }
+
+- /* Set actual Tx/Rx queue pairs */
+- vsi->num_txq = offset + qcount_tx;
+- if (vsi->num_txq > vsi->alloc_txq) {
++ new_txq = offset + qcount_tx;
++ if (new_txq > vsi->alloc_txq) {
+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+- vsi->num_txq, vsi->alloc_txq);
++ new_txq, vsi->alloc_txq);
+ return -EINVAL;
+ }
+
+- vsi->num_rxq = offset + qcount_rx;
+- if (vsi->num_rxq > vsi->alloc_rxq) {
++ new_rxq = offset + qcount_rx;
++ if (new_rxq > vsi->alloc_rxq) {
+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+- vsi->num_rxq, vsi->alloc_rxq);
++ new_rxq, vsi->alloc_rxq);
+ return -EINVAL;
+ }
+
++ /* Set actual Tx/Rx queue pairs */
++ vsi->num_txq = new_txq;
++ vsi->num_rxq = new_rxq;
++
+ /* Setup queue TC[0].qmap for given VSI context */
+ ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);
+ ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);
+@@ -3573,6 +3579,7 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
+ {
+ u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ struct ice_pf *pf = vsi->back;
++ struct ice_tc_cfg old_tc_cfg;
+ struct ice_vsi_ctx *ctx;
+ struct device *dev;
+ int i, ret = 0;
+@@ -3597,6 +3604,7 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
+ max_txqs[i] = vsi->num_txq;
+ }
+
++ memcpy(&old_tc_cfg, &vsi->tc_cfg, sizeof(old_tc_cfg));
+ vsi->tc_cfg.ena_tc = ena_tc;
+ vsi->tc_cfg.numtc = num_tc;
+
+@@ -3613,8 +3621,10 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
+ else
+ ret = ice_vsi_setup_q_map(vsi, ctx);
+
+- if (ret)
++ if (ret) {
++ memcpy(&vsi->tc_cfg, &old_tc_cfg, sizeof(vsi->tc_cfg));
+ goto out;
++ }
+
+ /* must to indicate which section of VSI context are being modified */
+ ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 4c6bb7482b362..48befe1e2872c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2399,8 +2399,6 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset)
+ return -EBUSY;
+ }
+
+- ice_unplug_aux_dev(pf);
+-
+ switch (reset) {
+ case ICE_RESET_PFR:
+ set_bit(ICE_PFR_REQ, pf->state);
+@@ -6629,7 +6627,7 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
+ */
+ int ice_down(struct ice_vsi *vsi)
+ {
+- int i, tx_err, rx_err, link_err = 0, vlan_err = 0;
++ int i, tx_err, rx_err, vlan_err = 0;
+
+ WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state));
+
+@@ -6663,20 +6661,13 @@ int ice_down(struct ice_vsi *vsi)
+
+ ice_napi_disable_all(vsi);
+
+- if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {
+- link_err = ice_force_phys_link_state(vsi, false);
+- if (link_err)
+- netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",
+- vsi->vsi_num, link_err);
+- }
+-
+ ice_for_each_txq(vsi, i)
+ ice_clean_tx_ring(vsi->tx_rings[i]);
+
+ ice_for_each_rxq(vsi, i)
+ ice_clean_rx_ring(vsi->rx_rings[i]);
+
+- if (tx_err || rx_err || link_err || vlan_err) {
++ if (tx_err || rx_err || vlan_err) {
+ netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n",
+ vsi->vsi_num, vsi->vsw->sw_id);
+ return -EIO;
+@@ -6838,6 +6829,8 @@ int ice_vsi_open(struct ice_vsi *vsi)
+ if (err)
+ goto err_setup_rx;
+
++ ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
++
+ if (vsi->type == ICE_VSI_PF) {
+ /* Notify the stack of the actual queue counts. */
+ err = netif_set_real_num_tx_queues(vsi->netdev, vsi->num_txq);
+@@ -8876,6 +8869,16 @@ int ice_stop(struct net_device *netdev)
+ return -EBUSY;
+ }
+
++ if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {
++ int link_err = ice_force_phys_link_state(vsi, false);
++
++ if (link_err) {
++ netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",
++ vsi->vsi_num, link_err);
++ return -EIO;
++ }
++ }
++
+ ice_vsi_close(vsi);
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 836dce8407124..97453d1dfafed 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -610,7 +610,7 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ if (test_bit(ICE_VSI_DOWN, vsi->state))
+ return -ENETDOWN;
+
+- if (!ice_is_xdp_ena_vsi(vsi) || queue_index >= vsi->num_xdp_txq)
++ if (!ice_is_xdp_ena_vsi(vsi))
+ return -ENXIO;
+
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+@@ -621,6 +621,9 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ xdp_ring = vsi->xdp_rings[queue_index];
+ spin_lock(&xdp_ring->tx_lock);
+ } else {
++ /* Generally, should not happen */
++ if (unlikely(queue_index >= vsi->num_xdp_txq))
++ return -ENXIO;
+ xdp_ring = vsi->xdp_rings[queue_index];
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+index 85155cd9405c5..4aeb927c37153 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+@@ -179,6 +179,9 @@ static int mlxbf_gige_mdio_read(struct mii_bus *bus, int phy_add, int phy_reg)
+ /* Only return ad bits of the gw register */
+ ret &= MLXBF_GIGE_MDIO_GW_AD_MASK;
+
++ /* The MDIO lock is set on read. To release it, clear gw register */
++ writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
++
+ return ret;
+ }
+
+@@ -203,6 +206,9 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
+ temp, !(temp & MLXBF_GIGE_MDIO_GW_BUSY_MASK),
+ 5, 1000000);
+
++ /* The MDIO lock is set on read. To release it, clear gw register */
++ writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
++
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index 49b85ca578b01..9820efce72ffe 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -370,6 +370,11 @@ static void mana_gd_process_eq_events(void *arg)
+ break;
+ }
+
++ /* Per GDMA spec, rmb is necessary after checking owner_bits, before
++ * reading eqe.
++ */
++ rmb();
++
+ mana_gd_process_eqe(eq);
+
+ eq->head++;
+@@ -1107,6 +1112,11 @@ static int mana_gd_read_cqe(struct gdma_queue *cq, struct gdma_comp *comp)
+ if (WARN_ON_ONCE(owner_bits != new_bits))
+ return -1;
+
++ /* Per GDMA spec, rmb is necessary after checking owner_bits, before
++ * reading completion info
++ */
++ rmb();
++
+ comp->wq_num = cqe->cqe_info.wq_num;
+ comp->is_sq = cqe->cqe_info.is_sq;
+ memcpy(comp->cqe_data, cqe->cqe_data, GDMA_COMP_DATA_SIZE);
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index b357ac4c56c59..7e32b04eb0c75 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1449,6 +1449,8 @@ static int ravb_phy_init(struct net_device *ndev)
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ }
+
++ /* Indicate that the MAC is responsible for managing PHY PM */
++ phydev->mac_managed_pm = true;
+ phy_attached_info(phydev);
+
+ return 0;
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 67ade78fb7671..7fd8828d3a846 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -2029,6 +2029,8 @@ static int sh_eth_phy_init(struct net_device *ndev)
+ if (mdp->cd->register_type != SH_ETH_REG_GIGABIT)
+ phy_set_max_speed(phydev, SPEED_100);
+
++ /* Indicate that the MAC is responsible for managing PHY PM */
++ phydev->mac_managed_pm = true;
+ phy_attached_info(phydev);
+
+ return 0;
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index 032b8c0bd7889..5b4d661ab9867 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -319,7 +319,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
+ efx->n_rx_channels = 1;
+ efx->n_tx_channels = 1;
+- efx->tx_channel_offset = 1;
++ efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
+ efx->n_xdp_channels = 0;
+ efx->xdp_channel_offset = efx->n_channels;
+ efx->legacy_irq = efx->pci_dev->irq;
+diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/ethernet/sfc/siena/efx_channels.c
+index 017212a40df38..f54ebd0072868 100644
+--- a/drivers/net/ethernet/sfc/siena/efx_channels.c
++++ b/drivers/net/ethernet/sfc/siena/efx_channels.c
+@@ -320,7 +320,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
+ efx->n_channels = 1 + (efx_siena_separate_tx_channels ? 1 : 0);
+ efx->n_rx_channels = 1;
+ efx->n_tx_channels = 1;
+- efx->tx_channel_offset = 1;
++ efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0;
+ efx->n_xdp_channels = 0;
+ efx->xdp_channel_offset = efx->n_channels;
+ efx->legacy_irq = efx->pci_dev->irq;
+diff --git a/drivers/net/ethernet/sfc/siena/tx.c b/drivers/net/ethernet/sfc/siena/tx.c
+index e166dcb9b99ce..91e87594ed1ea 100644
+--- a/drivers/net/ethernet/sfc/siena/tx.c
++++ b/drivers/net/ethernet/sfc/siena/tx.c
+@@ -336,7 +336,7 @@ netdev_tx_t efx_siena_hard_start_xmit(struct sk_buff *skb,
+ * previous packets out.
+ */
+ if (!netdev_xmit_more())
+- efx_tx_send_pending(tx_queue->channel);
++ efx_tx_send_pending(efx_get_tx_channel(efx, index));
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
+index 138bca6113415..80ed7f760bd30 100644
+--- a/drivers/net/ethernet/sfc/tx.c
++++ b/drivers/net/ethernet/sfc/tx.c
+@@ -549,7 +549,7 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
+ * previous packets out.
+ */
+ if (!netdev_xmit_more())
+- efx_tx_send_pending(tx_queue->channel);
++ efx_tx_send_pending(efx_get_tx_channel(efx, index));
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
+index 8594ee839628b..88aa0d310aeef 100644
+--- a/drivers/net/ethernet/sun/sunhme.c
++++ b/drivers/net/ethernet/sun/sunhme.c
+@@ -2020,9 +2020,9 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
+
+ skb_reserve(copy_skb, 2);
+ skb_put(copy_skb, len);
+- dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
++ dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
+ skb_copy_from_linear_data(skb, copy_skb->data, len);
+- dma_sync_single_for_device(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
++ dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
+ /* Reuse original ring buffer. */
+ hme_write_rxd(hp, this,
+ (RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),
+diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
+index ec010cf2e816a..6f874f99b910c 100644
+--- a/drivers/net/ipa/ipa_qmi.c
++++ b/drivers/net/ipa/ipa_qmi.c
+@@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
+ req.v4_route_tbl_info_valid = 1;
+ req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
+- req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
++ req.v4_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+
+ mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
+ req.v6_route_tbl_info_valid = 1;
+ req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
+- req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
++ req.v6_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+
+ mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
+ req.v4_filter_tbl_start_valid = 1;
+@@ -352,7 +352,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ req.v4_hash_route_tbl_info_valid = 1;
+ req.v4_hash_route_tbl_info.start =
+ ipa->mem_offset + mem->offset;
+- req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
++ req.v4_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ }
+
+ mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
+@@ -360,7 +360,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+ req.v6_hash_route_tbl_info_valid = 1;
+ req.v6_hash_route_tbl_info.start =
+ ipa->mem_offset + mem->offset;
+- req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
++ req.v6_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
+ }
+
+ mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
+diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
+index 6838e8065072b..75d3fc0092e92 100644
+--- a/drivers/net/ipa/ipa_qmi_msg.c
++++ b/drivers/net/ipa/ipa_qmi_msg.c
+@@ -311,7 +311,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ .tlv_type = 0x12,
+ .offset = offsetof(struct ipa_init_modem_driver_req,
+ v4_route_tbl_info),
+- .ei_array = ipa_mem_array_ei,
++ .ei_array = ipa_mem_bounds_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+@@ -332,7 +332,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ .tlv_type = 0x13,
+ .offset = offsetof(struct ipa_init_modem_driver_req,
+ v6_route_tbl_info),
+- .ei_array = ipa_mem_array_ei,
++ .ei_array = ipa_mem_bounds_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+@@ -496,7 +496,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ .tlv_type = 0x1b,
+ .offset = offsetof(struct ipa_init_modem_driver_req,
+ v4_hash_route_tbl_info),
+- .ei_array = ipa_mem_array_ei,
++ .ei_array = ipa_mem_bounds_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+@@ -517,7 +517,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ .tlv_type = 0x1c,
+ .offset = offsetof(struct ipa_init_modem_driver_req,
+ v6_hash_route_tbl_info),
+- .ei_array = ipa_mem_array_ei,
++ .ei_array = ipa_mem_bounds_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h
+index 495e85abe50bd..9651aa59b5968 100644
+--- a/drivers/net/ipa/ipa_qmi_msg.h
++++ b/drivers/net/ipa/ipa_qmi_msg.h
+@@ -86,9 +86,11 @@ enum ipa_platform_type {
+ IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 0x5, /* QNX MSM */
+ };
+
+-/* This defines the start and end offset of a range of memory. Both
+- * fields are offsets relative to the start of IPA shared memory.
+- * The end value is the last addressable byte *within* the range.
++/* This defines the start and end offset of a range of memory. The start
++ * value is a byte offset relative to the start of IPA shared memory. The
++ * end value is the last addressable unit *within* the range. Typically
++ * the end value is in units of bytes, however it can also be a maximum
++ * array index value.
+ */
+ struct ipa_mem_bounds {
+ u32 start;
+@@ -129,18 +131,19 @@ struct ipa_init_modem_driver_req {
+ u8 hdr_tbl_info_valid;
+ struct ipa_mem_bounds hdr_tbl_info;
+
+- /* Routing table information. These define the location and size of
+- * non-hashable IPv4 and IPv6 filter tables. The start values are
+- * offsets relative to the start of IPA shared memory.
++ /* Routing table information. These define the location and maximum
++ * *index* (not byte) for the modem portion of non-hashable IPv4 and
++ * IPv6 routing tables. The start values are byte offsets relative
++ * to the start of IPA shared memory.
+ */
+ u8 v4_route_tbl_info_valid;
+- struct ipa_mem_array v4_route_tbl_info;
++ struct ipa_mem_bounds v4_route_tbl_info;
+ u8 v6_route_tbl_info_valid;
+- struct ipa_mem_array v6_route_tbl_info;
++ struct ipa_mem_bounds v6_route_tbl_info;
+
+ /* Filter table information. These define the location of the
+ * non-hashable IPv4 and IPv6 filter tables. The start values are
+- * offsets relative to the start of IPA shared memory.
++ * byte offsets relative to the start of IPA shared memory.
+ */
+ u8 v4_filter_tbl_start_valid;
+ u32 v4_filter_tbl_start;
+@@ -181,18 +184,20 @@ struct ipa_init_modem_driver_req {
+ u8 zip_tbl_info_valid;
+ struct ipa_mem_bounds zip_tbl_info;
+
+- /* Routing table information. These define the location and size
+- * of hashable IPv4 and IPv6 filter tables. The start values are
+- * offsets relative to the start of IPA shared memory.
++ /* Routing table information. These define the location and maximum
++ * *index* (not byte) for the modem portion of hashable IPv4 and IPv6
++ * routing tables (if supported by hardware). The start values are
++ * byte offsets relative to the start of IPA shared memory.
+ */
+ u8 v4_hash_route_tbl_info_valid;
+- struct ipa_mem_array v4_hash_route_tbl_info;
++ struct ipa_mem_bounds v4_hash_route_tbl_info;
+ u8 v6_hash_route_tbl_info_valid;
+- struct ipa_mem_array v6_hash_route_tbl_info;
++ struct ipa_mem_bounds v6_hash_route_tbl_info;
+
+ /* Filter table information. These define the location and size
+- * of hashable IPv4 and IPv6 filter tables. The start values are
+- * offsets relative to the start of IPA shared memory.
++ * of hashable IPv4 and IPv6 filter tables (if supported by hardware).
++ * The start values are byte offsets relative to the start of IPA
++ * shared memory.
+ */
+ u8 v4_hash_filter_tbl_start_valid;
+ u32 v4_hash_filter_tbl_start;
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index 2f5a58bfc529a..69efe672ca528 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -108,8 +108,6 @@
+
+ /* Assignment of route table entries to the modem and AP */
+ #define IPA_ROUTE_MODEM_MIN 0
+-#define IPA_ROUTE_MODEM_COUNT 8
+-
+ #define IPA_ROUTE_AP_MIN IPA_ROUTE_MODEM_COUNT
+ #define IPA_ROUTE_AP_COUNT \
+ (IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
+index b6a9a0d79d68e..1538e2e1732fe 100644
+--- a/drivers/net/ipa/ipa_table.h
++++ b/drivers/net/ipa/ipa_table.h
+@@ -13,6 +13,9 @@ struct ipa;
+ /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
+ #define IPA_FILTER_COUNT_MAX 14
+
++/* The number of route table entries allotted to the modem */
++#define IPA_ROUTE_MODEM_COUNT 8
++
+ /* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
+ #define IPA_ROUTE_COUNT_MAX 15
+
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index 6ffb27419e64b..c58123e136896 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -495,7 +495,6 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+
+ static int ipvlan_process_outbound(struct sk_buff *skb)
+ {
+- struct ethhdr *ethh = eth_hdr(skb);
+ int ret = NET_XMIT_DROP;
+
+ /* The ipvlan is a pseudo-L2 device, so the packets that we receive
+@@ -505,6 +504,8 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
+ if (skb_mac_header_was_set(skb)) {
+ /* In this mode we dont care about
+ * multicast and broadcast traffic */
++ struct ethhdr *ethh = eth_hdr(skb);
++
+ if (is_multicast_ether_addr(ethh->h_dest)) {
+ pr_debug_ratelimited(
+ "Dropped {multi|broad}cast of type=[%x]\n",
+@@ -589,7 +590,7 @@ out:
+ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ {
+ const struct ipvl_dev *ipvlan = netdev_priv(dev);
+- struct ethhdr *eth = eth_hdr(skb);
++ struct ethhdr *eth = skb_eth_hdr(skb);
+ struct ipvl_addr *addr;
+ void *lyr3h;
+ int addr_type;
+@@ -619,6 +620,7 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ return dev_forward_skb(ipvlan->phy_dev, skb);
+
+ } else if (is_multicast_ether_addr(eth->h_dest)) {
++ skb_reset_mac_header(skb);
+ ipvlan_skb_crossing_ns(skb, NULL);
+ ipvlan_multicast_enqueue(ipvlan->port, skb, true);
+ return NET_XMIT_SUCCESS;
+diff --git a/drivers/net/mdio/of_mdio.c b/drivers/net/mdio/of_mdio.c
+index 9e3c815a070f1..796e9c7857d09 100644
+--- a/drivers/net/mdio/of_mdio.c
++++ b/drivers/net/mdio/of_mdio.c
+@@ -231,6 +231,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+ return 0;
+
+ unregister:
++ of_node_put(child);
+ mdiobus_unregister(mdio);
+ return rc;
+ }
+diff --git a/drivers/net/netdevsim/hwstats.c b/drivers/net/netdevsim/hwstats.c
+index 605a38e16db05..0e58aa7f0374e 100644
+--- a/drivers/net/netdevsim/hwstats.c
++++ b/drivers/net/netdevsim/hwstats.c
+@@ -433,11 +433,11 @@ int nsim_dev_hwstats_init(struct nsim_dev *nsim_dev)
+ goto err_remove_hwstats_recursive;
+ }
+
+- debugfs_create_file("enable_ifindex", 0600, hwstats->l3_ddir, hwstats,
++ debugfs_create_file("enable_ifindex", 0200, hwstats->l3_ddir, hwstats,
+ &nsim_dev_hwstats_l3_enable_fops.fops);
+- debugfs_create_file("disable_ifindex", 0600, hwstats->l3_ddir, hwstats,
++ debugfs_create_file("disable_ifindex", 0200, hwstats->l3_ddir, hwstats,
+ &nsim_dev_hwstats_l3_disable_fops.fops);
+- debugfs_create_file("fail_next_enable", 0600, hwstats->l3_ddir, hwstats,
++ debugfs_create_file("fail_next_enable", 0200, hwstats->l3_ddir, hwstats,
+ &nsim_dev_hwstats_l3_fail_fops.fops);
+
+ INIT_DELAYED_WORK(&hwstats->traffic_dw,
+diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c
+index c7047f5d7a9b0..8bc0957a0f6d3 100644
+--- a/drivers/net/phy/aquantia_main.c
++++ b/drivers/net/phy/aquantia_main.c
+@@ -90,6 +90,9 @@
+ #define VEND1_GLOBAL_FW_ID_MAJOR GENMASK(15, 8)
+ #define VEND1_GLOBAL_FW_ID_MINOR GENMASK(7, 0)
+
++#define VEND1_GLOBAL_GEN_STAT2 0xc831
++#define VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG BIT(15)
++
+ #define VEND1_GLOBAL_RSVD_STAT1 0xc885
+ #define VEND1_GLOBAL_RSVD_STAT1_FW_BUILD_ID GENMASK(7, 4)
+ #define VEND1_GLOBAL_RSVD_STAT1_PROV_ID GENMASK(3, 0)
+@@ -124,6 +127,12 @@
+ #define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL2 BIT(1)
+ #define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL3 BIT(0)
+
++/* Sleep and timeout for checking if the Processor-Intensive
++ * MDIO operation is finished
++ */
++#define AQR107_OP_IN_PROG_SLEEP 1000
++#define AQR107_OP_IN_PROG_TIMEOUT 100000
++
+ struct aqr107_hw_stat {
+ const char *name;
+ int reg;
+@@ -596,16 +605,52 @@ static void aqr107_link_change_notify(struct phy_device *phydev)
+ phydev_info(phydev, "Aquantia 1000Base-T2 mode active\n");
+ }
+
++static int aqr107_wait_processor_intensive_op(struct phy_device *phydev)
++{
++ int val, err;
++
++ /* The datasheet notes to wait at least 1ms after issuing a
++ * processor intensive operation before checking.
++ * We cannot use the 'sleep_before_read' parameter of read_poll_timeout
++ * because that just determines the maximum time slept, not the minimum.
++ */
++ usleep_range(1000, 5000);
++
++ err = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
++ VEND1_GLOBAL_GEN_STAT2, val,
++ !(val & VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG),
++ AQR107_OP_IN_PROG_SLEEP,
++ AQR107_OP_IN_PROG_TIMEOUT, false);
++ if (err) {
++ phydev_err(phydev, "timeout: processor-intensive MDIO operation\n");
++ return err;
++ }
++
++ return 0;
++}
++
+ static int aqr107_suspend(struct phy_device *phydev)
+ {
+- return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+- MDIO_CTRL1_LPOWER);
++ int err;
++
++ err = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
++ MDIO_CTRL1_LPOWER);
++ if (err)
++ return err;
++
++ return aqr107_wait_processor_intensive_op(phydev);
+ }
+
+ static int aqr107_resume(struct phy_device *phydev)
+ {
+- return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+- MDIO_CTRL1_LPOWER);
++ int err;
++
++ err = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
++ MDIO_CTRL1_LPOWER);
++ if (err)
++ return err;
++
++ return aqr107_wait_processor_intensive_op(phydev);
+ }
+
+ static int aqr107_probe(struct phy_device *phydev)
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 34483a4bd688a..e8e1101911b2f 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -2662,16 +2662,19 @@ static int lan8804_config_init(struct phy_device *phydev)
+ static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
+ {
+ int irq_status, tsu_irq_status;
++ int ret = IRQ_NONE;
+
+ irq_status = phy_read(phydev, LAN8814_INTS);
+- if (irq_status > 0 && (irq_status & LAN8814_INT_LINK))
+- phy_trigger_machine(phydev);
+-
+ if (irq_status < 0) {
+ phy_error(phydev);
+ return IRQ_NONE;
+ }
+
++ if (irq_status & LAN8814_INT_LINK) {
++ phy_trigger_machine(phydev);
++ ret = IRQ_HANDLED;
++ }
++
+ while (1) {
+ tsu_irq_status = lanphy_read_page_reg(phydev, 4,
+ LAN8814_INTR_STS_REG);
+@@ -2680,12 +2683,15 @@ static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
+ (tsu_irq_status & (LAN8814_INTR_STS_REG_1588_TSU0_ |
+ LAN8814_INTR_STS_REG_1588_TSU1_ |
+ LAN8814_INTR_STS_REG_1588_TSU2_ |
+- LAN8814_INTR_STS_REG_1588_TSU3_)))
++ LAN8814_INTR_STS_REG_1588_TSU3_))) {
+ lan8814_handle_ptp_interrupt(phydev);
+- else
++ ret = IRQ_HANDLED;
++ } else {
+ break;
++ }
+ }
+- return IRQ_HANDLED;
++
++ return ret;
+ }
+
+ static int lan8814_ack_interrupt(struct phy_device *phydev)
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index b07dde6f0abf2..b9899913d2467 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1275,10 +1275,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ }
+ }
+
+- netif_addr_lock_bh(dev);
+- dev_uc_sync_multiple(port_dev, dev);
+- dev_mc_sync_multiple(port_dev, dev);
+- netif_addr_unlock_bh(dev);
++ if (dev->flags & IFF_UP) {
++ netif_addr_lock_bh(dev);
++ dev_uc_sync_multiple(port_dev, dev);
++ dev_mc_sync_multiple(port_dev, dev);
++ netif_addr_unlock_bh(dev);
++ }
+
+ port->index = -1;
+ list_add_tail_rcu(&port->list, &team->port_list);
+@@ -1349,8 +1351,10 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
+ netdev_rx_handler_unregister(port_dev);
+ team_port_disable_netpoll(port);
+ vlan_vids_del_by_dev(port_dev, dev);
+- dev_uc_unsync(port_dev, dev);
+- dev_mc_unsync(port_dev, dev);
++ if (dev->flags & IFF_UP) {
++ dev_uc_unsync(port_dev, dev);
++ dev_mc_unsync(port_dev, dev);
++ }
+ dev_close(port_dev);
+ team_port_leave(team, port);
+
+@@ -1700,6 +1704,14 @@ static int team_open(struct net_device *dev)
+
+ static int team_close(struct net_device *dev)
+ {
++ struct team *team = netdev_priv(dev);
++ struct team_port *port;
++
++ list_for_each_entry(port, &team->port_list, list) {
++ dev_uc_unsync(port->dev, dev);
++ dev_mc_unsync(port->dev, dev);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index d0f3b6d7f4089..5c804bcabfe6b 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -436,14 +436,13 @@ static int set_peer(struct wg_device *wg, struct nlattr **attrs)
+ if (attrs[WGPEER_A_ENDPOINT]) {
+ struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
+ size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
++ struct endpoint endpoint = { { { 0 } } };
+
+- if ((len == sizeof(struct sockaddr_in) &&
+- addr->sa_family == AF_INET) ||
+- (len == sizeof(struct sockaddr_in6) &&
+- addr->sa_family == AF_INET6)) {
+- struct endpoint endpoint = { { { 0 } } };
+-
+- memcpy(&endpoint.addr, addr, len);
++ if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET) {
++ endpoint.addr4 = *(struct sockaddr_in *)addr;
++ wg_socket_set_peer_endpoint(peer, &endpoint);
++ } else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6) {
++ endpoint.addr6 = *(struct sockaddr_in6 *)addr;
+ wg_socket_set_peer_endpoint(peer, &endpoint);
+ }
+ }
+diff --git a/drivers/net/wireguard/selftest/ratelimiter.c b/drivers/net/wireguard/selftest/ratelimiter.c
+index ba87d294604fe..d4bb40a695ab6 100644
+--- a/drivers/net/wireguard/selftest/ratelimiter.c
++++ b/drivers/net/wireguard/selftest/ratelimiter.c
+@@ -6,29 +6,28 @@
+ #ifdef DEBUG
+
+ #include <linux/jiffies.h>
+-#include <linux/hrtimer.h>
+
+ static const struct {
+ bool result;
+- u64 nsec_to_sleep_before;
++ unsigned int msec_to_sleep_before;
+ } expected_results[] __initconst = {
+ [0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
+ [PACKETS_BURSTABLE] = { false, 0 },
+- [PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
++ [PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
+ [PACKETS_BURSTABLE + 2] = { false, 0 },
+- [PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
++ [PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
+ [PACKETS_BURSTABLE + 4] = { true, 0 },
+ [PACKETS_BURSTABLE + 5] = { false, 0 }
+ };
+
+ static __init unsigned int maximum_jiffies_at_index(int index)
+ {
+- u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
++ unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
+ int i;
+
+ for (i = 0; i <= index; ++i)
+- total_nsecs += expected_results[i].nsec_to_sleep_before;
+- return nsecs_to_jiffies(total_nsecs);
++ total_msecs += expected_results[i].msec_to_sleep_before;
++ return msecs_to_jiffies(total_msecs);
+ }
+
+ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+@@ -43,12 +42,8 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
+ loop_start_time = jiffies;
+
+ for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
+- if (expected_results[i].nsec_to_sleep_before) {
+- ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
+- ns_to_ktime(expected_results[i].nsec_to_sleep_before));
+- set_current_state(TASK_UNINTERRUPTIBLE);
+- schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
+- }
++ if (expected_results[i].msec_to_sleep_before)
++ msleep(expected_results[i].msec_to_sleep_before);
+
+ if (time_is_before_jiffies(loop_start_time +
+ maximum_jiffies_at_index(i)))
+@@ -132,7 +127,7 @@ bool __init wg_ratelimiter_selftest(void)
+ if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
+ return true;
+
+- BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
++ BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
+
+ if (wg_ratelimiter_init())
+ goto out;
+@@ -172,7 +167,7 @@ bool __init wg_ratelimiter_selftest(void)
+ ++test;
+ #endif
+
+- for (trials = TRIALS_BEFORE_GIVING_UP;;) {
++ for (trials = TRIALS_BEFORE_GIVING_UP; IS_ENABLED(DEBUG_RATELIMITER_TIMINGS);) {
+ int test_count = 0, ret;
+
+ ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count);
+diff --git a/drivers/net/wireless/intel/iwlwifi/Kconfig b/drivers/net/wireless/intel/iwlwifi/Kconfig
+index a647a406b87be..b20409f8c13ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/Kconfig
++++ b/drivers/net/wireless/intel/iwlwifi/Kconfig
+@@ -140,6 +140,7 @@ config IWLMEI
+ depends on INTEL_MEI
+ depends on PM
+ depends on CFG80211
++ depends on BROKEN
+ help
+ Enables the iwlmei kernel module.
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 9e832b27170fe..a4eb025f504f3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1138,7 +1138,7 @@ u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid)
+ offset %= 32;
+
+ val = mt76_rr(dev, addr);
+- val >>= (tid % 32);
++ val >>= offset;
+
+ if (offset > 20) {
+ addr += 4;
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 629d10fcf53b2..b9f1a8e9f88cb 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -45,7 +45,7 @@ static struct nd_region *to_region(struct pmem_device *pmem)
+ return to_nd_region(to_dev(pmem)->parent);
+ }
+
+-static phys_addr_t to_phys(struct pmem_device *pmem, phys_addr_t offset)
++static phys_addr_t pmem_to_phys(struct pmem_device *pmem, phys_addr_t offset)
+ {
+ return pmem->phys_addr + offset;
+ }
+@@ -63,7 +63,7 @@ static phys_addr_t to_offset(struct pmem_device *pmem, sector_t sector)
+ static void pmem_mkpage_present(struct pmem_device *pmem, phys_addr_t offset,
+ unsigned int len)
+ {
+- phys_addr_t phys = to_phys(pmem, offset);
++ phys_addr_t phys = pmem_to_phys(pmem, offset);
+ unsigned long pfn_start, pfn_end, pfn;
+
+ /* only pmem in the linear map supports HWPoison */
+@@ -97,7 +97,7 @@ static void pmem_clear_bb(struct pmem_device *pmem, sector_t sector, long blks)
+ static long __pmem_clear_poison(struct pmem_device *pmem,
+ phys_addr_t offset, unsigned int len)
+ {
+- phys_addr_t phys = to_phys(pmem, offset);
++ phys_addr_t phys = pmem_to_phys(pmem, offset);
+ long cleared = nvdimm_clear_poison(to_dev(pmem), phys, len);
+
+ if (cleared > 0) {
+diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
+index d702d7d60235d..2d23b7d41f7e6 100644
+--- a/drivers/nvme/host/apple.c
++++ b/drivers/nvme/host/apple.c
+@@ -1502,7 +1502,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
+
+ if (!blk_get_queue(anv->ctrl.admin_q)) {
+ nvme_start_admin_queue(&anv->ctrl);
+- blk_cleanup_queue(anv->ctrl.admin_q);
++ blk_mq_destroy_queue(anv->ctrl.admin_q);
+ anv->ctrl.admin_q = NULL;
+ ret = -ENODEV;
+ goto put_dev;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 2f965356f3453..6d76fc608b741 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4105,7 +4105,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ if (!nvme_ns_head_multipath(ns->head))
+ nvme_cdev_del(&ns->cdev, &ns->cdev_device);
+ del_gendisk(ns->disk);
+- blk_cleanup_queue(ns->queue);
+
+ down_write(&ns->ctrl->namespaces_rwsem);
+ list_del_init(&ns->list);
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 4aff83b1b0c05..9a5ce70d7f215 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2392,7 +2392,7 @@ nvme_fc_ctrl_free(struct kref *ref)
+ unsigned long flags;
+
+ if (ctrl->ctrl.tagset) {
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ blk_mq_free_tag_set(&ctrl->tag_set);
+ }
+
+@@ -2402,8 +2402,8 @@ nvme_fc_ctrl_free(struct kref *ref)
+ spin_unlock_irqrestore(&ctrl->rport->lock, flags);
+
+ nvme_start_admin_queue(&ctrl->ctrl);
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ blk_mq_free_tag_set(&ctrl->admin_tag_set);
+
+ kfree(ctrl->queues);
+@@ -2953,7 +2953,7 @@ nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl)
+ out_delete_hw_queues:
+ nvme_fc_delete_hw_io_queues(ctrl);
+ out_cleanup_blk_queue:
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ out_free_tag_set:
+ blk_mq_free_tag_set(&ctrl->tag_set);
+ nvme_fc_free_io_queues(ctrl);
+@@ -3642,9 +3642,9 @@ fail_ctrl:
+ return ERR_PTR(-EIO);
+
+ out_cleanup_admin_q:
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
+ out_cleanup_fabrics_q:
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ out_free_admin_tag_set:
+ blk_mq_free_tag_set(&ctrl->admin_tag_set);
+ out_free_queues:
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 9f6614f7dbeb1..3516678d37541 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1760,7 +1760,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
+ * queue to flush these to completion.
+ */
+ nvme_start_admin_queue(&dev->ctrl);
+- blk_cleanup_queue(dev->ctrl.admin_q);
++ blk_mq_destroy_queue(dev->ctrl.admin_q);
+ blk_mq_free_tag_set(&dev->admin_tagset);
+ }
+ }
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 46c2dcf72f7ea..240024dd5d857 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -840,8 +840,8 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+ if (remove) {
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ blk_mq_free_tag_set(ctrl->ctrl.admin_tagset);
+ }
+ if (ctrl->async_event_sqe.data) {
+@@ -935,10 +935,10 @@ out_stop_queue:
+ nvme_cancel_admin_tagset(&ctrl->ctrl);
+ out_cleanup_queue:
+ if (new)
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
+ out_cleanup_fabrics_q:
+ if (new)
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ out_free_tagset:
+ if (new)
+ blk_mq_free_tag_set(ctrl->ctrl.admin_tagset);
+@@ -957,7 +957,7 @@ static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+ if (remove) {
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ blk_mq_free_tag_set(ctrl->ctrl.tagset);
+ }
+ nvme_rdma_free_io_queues(ctrl);
+@@ -1012,7 +1012,7 @@ out_wait_freeze_timed_out:
+ out_cleanup_connect_q:
+ nvme_cancel_tagset(&ctrl->ctrl);
+ if (new)
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ out_free_tag_set:
+ if (new)
+ blk_mq_free_tag_set(ctrl->ctrl.tagset);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index daa0e160e1212..d7e5bbdb9b75a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1881,7 +1881,7 @@ static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
+ {
+ nvme_tcp_stop_io_queues(ctrl);
+ if (remove) {
+- blk_cleanup_queue(ctrl->connect_q);
++ blk_mq_destroy_queue(ctrl->connect_q);
+ blk_mq_free_tag_set(ctrl->tagset);
+ }
+ nvme_tcp_free_io_queues(ctrl);
+@@ -1936,7 +1936,7 @@ out_wait_freeze_timed_out:
+ out_cleanup_connect_q:
+ nvme_cancel_tagset(ctrl);
+ if (new)
+- blk_cleanup_queue(ctrl->connect_q);
++ blk_mq_destroy_queue(ctrl->connect_q);
+ out_free_tag_set:
+ if (new)
+ blk_mq_free_tag_set(ctrl->tagset);
+@@ -1949,8 +1949,8 @@ static void nvme_tcp_destroy_admin_queue(struct nvme_ctrl *ctrl, bool remove)
+ {
+ nvme_tcp_stop_queue(ctrl, 0);
+ if (remove) {
+- blk_cleanup_queue(ctrl->admin_q);
+- blk_cleanup_queue(ctrl->fabrics_q);
++ blk_mq_destroy_queue(ctrl->admin_q);
++ blk_mq_destroy_queue(ctrl->fabrics_q);
+ blk_mq_free_tag_set(ctrl->admin_tagset);
+ }
+ nvme_tcp_free_admin_queue(ctrl);
+@@ -2008,10 +2008,10 @@ out_stop_queue:
+ nvme_cancel_admin_tagset(ctrl);
+ out_cleanup_queue:
+ if (new)
+- blk_cleanup_queue(ctrl->admin_q);
++ blk_mq_destroy_queue(ctrl->admin_q);
+ out_cleanup_fabrics_q:
+ if (new)
+- blk_cleanup_queue(ctrl->fabrics_q);
++ blk_mq_destroy_queue(ctrl->fabrics_q);
+ out_free_tagset:
+ if (new)
+ blk_mq_free_tag_set(ctrl->admin_tagset);
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index 59024af2da2e3..0f5c77e22a0a9 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -266,8 +266,8 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
+ if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
+ return;
+ nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ blk_mq_free_tag_set(&ctrl->admin_tag_set);
+ }
+
+@@ -283,7 +283,7 @@ static void nvme_loop_free_ctrl(struct nvme_ctrl *nctrl)
+ mutex_unlock(&nvme_loop_ctrl_mutex);
+
+ if (nctrl->tagset) {
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ blk_mq_free_tag_set(&ctrl->tag_set);
+ }
+ kfree(ctrl->queues);
+@@ -410,9 +410,9 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
+
+ out_cleanup_queue:
+ clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
+- blk_cleanup_queue(ctrl->ctrl.admin_q);
++ blk_mq_destroy_queue(ctrl->ctrl.admin_q);
+ out_cleanup_fabrics_q:
+- blk_cleanup_queue(ctrl->ctrl.fabrics_q);
++ blk_mq_destroy_queue(ctrl->ctrl.fabrics_q);
+ out_free_tagset:
+ blk_mq_free_tag_set(&ctrl->admin_tag_set);
+ out_free_sq:
+@@ -554,7 +554,7 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
+ return 0;
+
+ out_cleanup_connect_q:
+- blk_cleanup_queue(ctrl->ctrl.connect_q);
++ blk_mq_destroy_queue(ctrl->ctrl.connect_q);
+ out_free_tagset:
+ blk_mq_free_tag_set(&ctrl->tag_set);
+ out_destroy_queues:
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 80d8309652a4d..b80a9b74662b1 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -36,7 +36,7 @@
+ #define CMN_CI_CHILD_COUNT GENMASK_ULL(15, 0)
+ #define CMN_CI_CHILD_PTR_OFFSET GENMASK_ULL(31, 16)
+
+-#define CMN_CHILD_NODE_ADDR GENMASK(27, 0)
++#define CMN_CHILD_NODE_ADDR GENMASK(29, 0)
+ #define CMN_CHILD_NODE_EXTERNAL BIT(31)
+
+ #define CMN_MAX_DIMENSION 12
+diff --git a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
+index a4d7d9bd100d3..67712c77d806f 100644
+--- a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
++++ b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
+@@ -274,7 +274,6 @@ struct mvebu_a3700_comphy_lane {
+ int submode;
+ bool invert_tx;
+ bool invert_rx;
+- bool needs_reset;
+ };
+
+ struct gbe_phy_init_data_fix {
+@@ -1097,40 +1096,12 @@ mvebu_a3700_comphy_pcie_power_off(struct mvebu_a3700_comphy_lane *lane)
+ 0x0, PU_PLL_BIT | PU_RX_BIT | PU_TX_BIT);
+ }
+
+-static int mvebu_a3700_comphy_reset(struct phy *phy)
++static void mvebu_a3700_comphy_usb3_power_off(struct mvebu_a3700_comphy_lane *lane)
+ {
+- struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy);
+- u16 mask, data;
+-
+- dev_dbg(lane->dev, "resetting lane %d\n", lane->id);
+-
+- /* COMPHY reset for internal logic */
+- comphy_lane_reg_set(lane, COMPHY_SFT_RESET,
+- SFT_RST_NO_REG, SFT_RST_NO_REG);
+-
+- /* COMPHY register reset (cleared automatically) */
+- comphy_lane_reg_set(lane, COMPHY_SFT_RESET, SFT_RST, SFT_RST);
+-
+- /* PIPE soft and register reset */
+- data = PIPE_SOFT_RESET | PIPE_REG_RESET;
+- mask = data;
+- comphy_lane_reg_set(lane, COMPHY_PIPE_RST_CLK_CTRL, data, mask);
+-
+- /* Release PIPE register reset */
+- comphy_lane_reg_set(lane, COMPHY_PIPE_RST_CLK_CTRL,
+- 0x0, PIPE_REG_RESET);
+-
+- /* Reset SB configuration register (only for lanes 0 and 1) */
+- if (lane->id == 0 || lane->id == 1) {
+- u32 mask, data;
+-
+- data = PIN_RESET_CORE_BIT | PIN_RESET_COMPHY_BIT |
+- PIN_PU_PLL_BIT | PIN_PU_RX_BIT | PIN_PU_TX_BIT;
+- mask = data | PIN_PU_IVREF_BIT | PIN_TX_IDLE_BIT;
+- comphy_periph_reg_set(lane, COMPHY_PHY_CFG1, data, mask);
+- }
+-
+- return 0;
++ /*
++ * The USB3 MAC sets the USB3 PHY to low state, so we do not
++ * need to power off USB3 PHY again.
++ */
+ }
+
+ static bool mvebu_a3700_comphy_check_mode(int lane,
+@@ -1171,10 +1142,6 @@ static int mvebu_a3700_comphy_set_mode(struct phy *phy, enum phy_mode mode,
+ (lane->mode != mode || lane->submode != submode))
+ return -EBUSY;
+
+- /* If changing mode, ensure reset is called */
+- if (lane->mode != PHY_MODE_INVALID && lane->mode != mode)
+- lane->needs_reset = true;
+-
+ /* Just remember the mode, ->power_on() will do the real setup */
+ lane->mode = mode;
+ lane->submode = submode;
+@@ -1185,7 +1152,6 @@ static int mvebu_a3700_comphy_set_mode(struct phy *phy, enum phy_mode mode,
+ static int mvebu_a3700_comphy_power_on(struct phy *phy)
+ {
+ struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy);
+- int ret;
+
+ if (!mvebu_a3700_comphy_check_mode(lane->id, lane->mode,
+ lane->submode)) {
+@@ -1193,14 +1159,6 @@ static int mvebu_a3700_comphy_power_on(struct phy *phy)
+ return -EINVAL;
+ }
+
+- if (lane->needs_reset) {
+- ret = mvebu_a3700_comphy_reset(phy);
+- if (ret)
+- return ret;
+-
+- lane->needs_reset = false;
+- }
+-
+ switch (lane->mode) {
+ case PHY_MODE_USB_HOST_SS:
+ dev_dbg(lane->dev, "set lane %d to USB3 host mode\n", lane->id);
+@@ -1224,38 +1182,28 @@ static int mvebu_a3700_comphy_power_off(struct phy *phy)
+ {
+ struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy);
+
+- switch (lane->mode) {
+- case PHY_MODE_USB_HOST_SS:
+- /*
+- * The USB3 MAC sets the USB3 PHY to low state, so we do not
+- * need to power off USB3 PHY again.
+- */
+- break;
+-
+- case PHY_MODE_SATA:
+- mvebu_a3700_comphy_sata_power_off(lane);
+- break;
+-
+- case PHY_MODE_ETHERNET:
++ switch (lane->id) {
++ case 0:
++ mvebu_a3700_comphy_usb3_power_off(lane);
+ mvebu_a3700_comphy_ethernet_power_off(lane);
+- break;
+-
+- case PHY_MODE_PCIE:
++ return 0;
++ case 1:
+ mvebu_a3700_comphy_pcie_power_off(lane);
+- break;
+-
++ mvebu_a3700_comphy_ethernet_power_off(lane);
++ return 0;
++ case 2:
++ mvebu_a3700_comphy_usb3_power_off(lane);
++ mvebu_a3700_comphy_sata_power_off(lane);
++ return 0;
+ default:
+ dev_err(lane->dev, "invalid COMPHY mode\n");
+ return -EINVAL;
+ }
+-
+- return 0;
+ }
+
+ static const struct phy_ops mvebu_a3700_comphy_ops = {
+ .power_on = mvebu_a3700_comphy_power_on,
+ .power_off = mvebu_a3700_comphy_power_off,
+- .reset = mvebu_a3700_comphy_reset,
+ .set_mode = mvebu_a3700_comphy_set_mode,
+ .owner = THIS_MODULE,
+ };
+@@ -1393,8 +1341,7 @@ static int mvebu_a3700_comphy_probe(struct platform_device *pdev)
+ * To avoid relying on the bootloader/firmware configuration,
+ * power off all comphys.
+ */
+- mvebu_a3700_comphy_reset(phy);
+- lane->needs_reset = false;
++ mvebu_a3700_comphy_power_off(phy);
+ }
+
+ provider = devm_of_phy_provider_register(&pdev->dev,
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index ba6d787896606..e8489331f12b8 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3280,7 +3280,7 @@ static int dasd_alloc_queue(struct dasd_block *block)
+ static void dasd_free_queue(struct dasd_block *block)
+ {
+ if (block->request_queue) {
+- blk_cleanup_queue(block->request_queue);
++ blk_mq_destroy_queue(block->request_queue);
+ blk_mq_free_tag_set(&block->tag_set);
+ block->request_queue = NULL;
+ }
+diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
+index dc78a523a69f2..b6b938aa66158 100644
+--- a/drivers/s390/block/dasd_alias.c
++++ b/drivers/s390/block/dasd_alias.c
+@@ -675,12 +675,12 @@ int dasd_alias_remove_device(struct dasd_device *device)
+ struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
+ {
+ struct dasd_eckd_private *alias_priv, *private = base_device->private;
+- struct alias_pav_group *group = private->pavgroup;
+ struct alias_lcu *lcu = private->lcu;
+ struct dasd_device *alias_device;
++ struct alias_pav_group *group;
+ unsigned long flags;
+
+- if (!group || !lcu)
++ if (!lcu)
+ return NULL;
+ if (lcu->pav == NO_PAV ||
+ lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING))
+@@ -697,6 +697,11 @@ struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
+ }
+
+ spin_lock_irqsave(&lcu->lock, flags);
++ group = private->pavgroup;
++ if (!group) {
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ return NULL;
++ }
+ alias_device = group->next;
+ if (!alias_device) {
+ if (list_empty(&group->aliaslist)) {
+diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
+index a7a33ebf4bbe9..5a83f0a39901b 100644
+--- a/drivers/s390/block/dasd_genhd.c
++++ b/drivers/s390/block/dasd_genhd.c
+@@ -41,8 +41,8 @@ int dasd_gendisk_alloc(struct dasd_block *block)
+ if (base->devindex >= DASD_PER_MAJOR)
+ return -EBUSY;
+
+- gdp = __alloc_disk_node(block->request_queue, NUMA_NO_NODE,
+- &dasd_bio_compl_lkclass);
++ gdp = blk_mq_alloc_disk_for_queue(block->request_queue,
++ &dasd_bio_compl_lkclass);
+ if (!gdp)
+ return -ENOMEM;
+
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 8352f90d997df..ae9a107c520d0 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -182,6 +182,15 @@ void scsi_remove_host(struct Scsi_Host *shost)
+ mutex_unlock(&shost->scan_mutex);
+ scsi_proc_host_rm(shost);
+
++ /*
++ * New SCSI devices cannot be attached anymore because of the SCSI host
++ * state so drop the tag set refcnt. Wait until the tag set refcnt drops
++ * to zero because .exit_cmd_priv implementations may need the host
++ * pointer.
++ */
++ kref_put(&shost->tagset_refcnt, scsi_mq_free_tags);
++ wait_for_completion(&shost->tagset_freed);
++
+ spin_lock_irqsave(shost->host_lock, flags);
+ if (scsi_host_set_state(shost, SHOST_DEL))
+ BUG_ON(scsi_host_set_state(shost, SHOST_DEL_RECOVERY));
+@@ -240,6 +249,9 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ if (error)
+ goto fail;
+
++ kref_init(&shost->tagset_refcnt);
++ init_completion(&shost->tagset_freed);
++
+ /*
+ * Increase usage count temporarily here so that calling
+ * scsi_autopm_put_host() will trigger runtime idle if there is
+@@ -312,6 +324,7 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ pm_runtime_disable(&shost->shost_gendev);
+ pm_runtime_set_suspended(&shost->shost_gendev);
+ pm_runtime_put_noidle(&shost->shost_gendev);
++ kref_put(&shost->tagset_refcnt, scsi_mq_free_tags);
+ fail:
+ return error;
+ }
+@@ -345,9 +358,6 @@ static void scsi_host_dev_release(struct device *dev)
+ kfree(dev_name(&shost->shost_dev));
+ }
+
+- if (shost->tag_set.tags)
+- scsi_mq_destroy_tags(shost);
+-
+ kfree(shost->shost_data);
+
+ ida_simple_remove(&host_index_ida, shost->host_no);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 9a1ae52bb621d..a6d3471a61057 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2993,7 +2993,7 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
+
+ if (ioc->is_mcpu_endpoint ||
+ sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma ||
+- dma_get_required_mask(&pdev->dev) <= 32)
++ dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32))
+ ioc->dma_mask = 32;
+ /* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
+ else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
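
The mpt3sas fix works because dma_get_required_mask() returns a bit mask (e.g. 0xffffffff), not a bit count, so comparing it against the integer 32 was effectively always false. A small stand-alone demonstration of the two representations, reusing the kernel's DMA_BIT_MASK() definition:

    #include <stdint.h>
    #include <stdio.h>

    /* Same definition as the kernel's DMA_BIT_MASK(n). */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    int main(void)
    {
            uint64_t required = DMA_BIT_MASK(32); /* device needs 32 bits */

            /* Broken form: a mask like 0xffffffff is never <= 32. */
            printf("mask <= 32              : %d\n", required <= 32);

            /* Fixed form: compare mask against mask. */
            printf("mask <= DMA_BIT_MASK(32): %d\n",
                   required <= DMA_BIT_MASK(32));
            return 0;
    }
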
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 62666df1a59eb..4acff4e84b909 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -2151,8 +2151,10 @@ static int __qlt_24xx_handle_abts(struct scsi_qla_host *vha,
+
+ abort_cmd = ha->tgt.tgt_ops->find_cmd_by_tag(sess,
+ le32_to_cpu(abts->exchange_addr_to_abort));
+- if (!abort_cmd)
++ if (!abort_cmd) {
++ mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+ return -EIO;
++ }
+ mcmd->unpacked_lun = abort_cmd->se_cmd.orig_fe_lun;
+
+ if (abort_cmd->qpair) {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index f5c876d03c1ad..7e990f7a9f164 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -168,7 +168,7 @@ static void __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, bool unbusy)
+ * Requeue this command. It will go before all other commands
+ * that are already in the queue. Schedule requeue work under
+ * lock such that the kblockd_schedule_work() call happens
+- * before blk_cleanup_queue() finishes.
++ * before blk_mq_destroy_queue() finishes.
+ */
+ cmd->result = 0;
+
+@@ -429,9 +429,9 @@ static void scsi_starved_list_run(struct Scsi_Host *shost)
+ * it and the queue. Mitigate by taking a reference to the
+ * queue and never touching the sdev again after we drop the
+ * host lock. Note: if __scsi_remove_device() invokes
+- * blk_cleanup_queue() before the queue is run from this
++ * blk_mq_destroy_queue() before the queue is run from this
+ * function then blk_run_queue() will return immediately since
+- * blk_cleanup_queue() marks the queue with QUEUE_FLAG_DYING.
++ * blk_mq_destroy_queue() marks the queue with QUEUE_FLAG_DYING.
+ */
+ slq = sdev->request_queue;
+ if (!blk_get_queue(slq))
+@@ -1995,9 +1995,13 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
+ return blk_mq_alloc_tag_set(tag_set);
+ }
+
+-void scsi_mq_destroy_tags(struct Scsi_Host *shost)
++void scsi_mq_free_tags(struct kref *kref)
+ {
++ struct Scsi_Host *shost = container_of(kref, typeof(*shost),
++ tagset_refcnt);
++
+ blk_mq_free_tag_set(&shost->tag_set);
++ complete(&shost->tagset_freed);
+ }
+
+ /**
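
The SCSI hunks above tie the tag set's lifetime to a reference count shared by the host and each device queue, and scsi_remove_host() waits on a completion signalled from scsi_mq_free_tags(), so .exit_cmd_priv implementations can still reach the host. A minimal userspace analogue of the kref-plus-completion pattern, with GCC atomics and a POSIX semaphore standing in for struct kref and struct completion (names are illustrative):

    #include <semaphore.h>
    #include <stdio.h>

    struct host {
            int tagset_refcnt;      /* stands in for struct kref */
            sem_t tagset_freed;     /* stands in for struct completion */
    };

    static void tagset_release(struct host *h)
    {
            /* Free the real resource here, then wake any waiter. */
            printf("tag set freed\n");
            sem_post(&h->tagset_freed);
    }

    static void host_put(struct host *h)
    {
            if (__atomic_sub_fetch(&h->tagset_refcnt, 1,
                                   __ATOMIC_ACQ_REL) == 0)
                    tagset_release(h);
    }

    int main(void)
    {
            struct host h = { .tagset_refcnt = 2 }; /* host + one queue */

            sem_init(&h.tagset_freed, 0, 0);
            host_put(&h);           /* queue teardown drops its reference */
            host_put(&h);           /* remove_host drops the initial one... */
            sem_wait(&h.tagset_freed); /* ...and waits until release ran */
            printf("safe to tear down the host\n");
            return 0;
    }
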
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 5c4786310a31d..a0ee31d55f5f1 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -94,7 +94,7 @@ extern void scsi_run_host_queues(struct Scsi_Host *shost);
+ extern void scsi_requeue_run_queue(struct work_struct *work);
+ extern void scsi_start_queue(struct scsi_device *sdev);
+ extern int scsi_mq_setup_tags(struct Scsi_Host *shost);
+-extern void scsi_mq_destroy_tags(struct Scsi_Host *shost);
++extern void scsi_mq_free_tags(struct kref *kref);
+ extern void scsi_exit_queue(void);
+ extern void scsi_evt_thread(struct work_struct *work);
+
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 91ac901a66826..5d27f5196de6f 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -340,6 +340,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ kfree(sdev);
+ goto out;
+ }
++ kref_get(&sdev->host->tagset_refcnt);
+ sdev->request_queue = q;
+ q->queuedata = sdev;
+ __scsi_init_queue(sdev->host, q);
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 43949798a2e47..5d61f58399dca 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -1475,7 +1475,8 @@ void __scsi_remove_device(struct scsi_device *sdev)
+ scsi_device_set_state(sdev, SDEV_DEL);
+ mutex_unlock(&sdev->state_mutex);
+
+- blk_cleanup_queue(sdev->request_queue);
++ blk_mq_destroy_queue(sdev->request_queue);
++ kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);
+ cancel_work_sync(&sdev->requeue_work);
+
+ if (sdev->host->hostt->slave_destroy)
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index a1a2ac09066fd..cb587e488601c 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3440,8 +3440,8 @@ static int sd_probe(struct device *dev)
+ if (!sdkp)
+ goto out;
+
+- gd = __alloc_disk_node(sdp->request_queue, NUMA_NO_NODE,
+- &sd_bio_compl_lkclass);
++ gd = blk_mq_alloc_disk_for_queue(sdp->request_queue,
++ &sd_bio_compl_lkclass);
+ if (!gd)
+ goto out_free;
+
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 32d3b8274f148..a278b739d0c5f 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -624,8 +624,8 @@ static int sr_probe(struct device *dev)
+ if (!cd)
+ goto fail;
+
+- disk = __alloc_disk_node(sdev->request_queue, NUMA_NO_NODE,
+- &sr_bio_compl_lkclass);
++ disk = blk_mq_alloc_disk_for_queue(sdev->request_queue,
++ &sr_bio_compl_lkclass);
+ if (!disk)
+ goto fail_free;
+ mutex_init(&cd->lock);
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index fff0c740c8f33..6f088dd0ba4f3 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -2527,6 +2527,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
+ tb->cm_ops = &icm_icl_ops;
+ break;
+
++ case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI:
+ case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI:
+ icm->is_supported = icm_tgl_is_supported;
+ icm->get_mode = icm_ar_get_mode;
+diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
+index 69083aab2736c..5091677b3f4ba 100644
+--- a/drivers/thunderbolt/nhi.h
++++ b/drivers/thunderbolt/nhi.h
+@@ -55,6 +55,7 @@ extern const struct tb_nhi_ops icl_nhi_ops;
+ * need for the PCI quirk anymore as we will use ICM also on Apple
+ * hardware.
+ */
++#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI 0x1134
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI 0x1137
+ #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d
+ #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 2945c1b890880..cb83c66bd8a82 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2706,14 +2706,15 @@ static int lpuart_probe(struct platform_device *pdev)
+ lpuart_reg.cons = LPUART_CONSOLE;
+ handler = lpuart_int;
+ }
+- ret = uart_add_one_port(&lpuart_reg, &sport->port);
+- if (ret)
+- goto failed_attach_port;
+
+ ret = lpuart_global_reset(sport);
+ if (ret)
+ goto failed_reset;
+
++ ret = uart_add_one_port(&lpuart_reg, &sport->port);
++ if (ret)
++ goto failed_attach_port;
++
+ ret = uart_get_rs485_mode(&sport->port);
+ if (ret)
+ goto failed_get_rs485;
+@@ -2736,9 +2737,9 @@ static int lpuart_probe(struct platform_device *pdev)
+
+ failed_irq_request:
+ failed_get_rs485:
+-failed_reset:
+ uart_remove_one_port(&lpuart_reg, &sport->port);
+ failed_attach_port:
++failed_reset:
+ lpuart_disable_clks(sport);
+ return ret;
+ }
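
The lpuart_probe() reordering also reshuffles the error labels: an unwind label must undo only the steps that completed before the jump, in reverse order, which is why uart_remove_one_port() now runs only for failures that occur after the port was added. A compact runnable sketch of the convention (step names invented for illustration):

    #include <stdio.h>

    static int global_reset(void)     { puts("reset done");        return 0;  }
    static int add_port(void)         { puts("port added");        return 0;  }
    static int request_irq_step(void) { puts("irq request fails"); return -1; }

    static void remove_port(void)  { puts("undo: port removed");    }
    static void disable_clks(void) { puts("undo: clocks disabled"); }

    static int probe(void)
    {
            int ret;

            ret = global_reset();
            if (ret)
                    goto failed_reset;   /* only clocks to unwind */
            ret = add_port();
            if (ret)
                    goto failed_attach;  /* port not added, skip removal */
            ret = request_irq_step();
            if (ret)
                    goto failed_irq;     /* port was added, remove it */
            return 0;

    failed_irq:
            remove_port();
    failed_attach:
    failed_reset:
            disable_clks();
            return ret;
    }

    int main(void) { return probe() ? 1 : 0; }
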
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index d942ab152f5a4..24aa1dcc5ef7a 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -525,7 +525,7 @@ static void tegra_uart_tx_dma_complete(void *args)
+ count = tup->tx_bytes_requested - state.residue;
+ async_tx_ack(tup->tx_dma_desc);
+ spin_lock_irqsave(&tup->uport.lock, flags);
+- xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++ uart_xmit_advance(&tup->uport, count);
+ tup->tx_in_progress = 0;
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&tup->uport);
+@@ -613,7 +613,6 @@ static unsigned int tegra_uart_tx_empty(struct uart_port *u)
+ static void tegra_uart_stop_tx(struct uart_port *u)
+ {
+ struct tegra_uart_port *tup = to_tegra_uport(u);
+- struct circ_buf *xmit = &tup->uport.state->xmit;
+ struct dma_tx_state state;
+ unsigned int count;
+
+@@ -624,7 +623,7 @@ static void tegra_uart_stop_tx(struct uart_port *u)
+ dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
+ count = tup->tx_bytes_requested - state.residue;
+ async_tx_ack(tup->tx_dma_desc);
+- xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++ uart_xmit_advance(&tup->uport, count);
+ tup->tx_in_progress = 0;
+ }
+
+diff --git a/drivers/tty/serial/tegra-tcu.c b/drivers/tty/serial/tegra-tcu.c
+index 4877c54c613d1..889b701ba7c62 100644
+--- a/drivers/tty/serial/tegra-tcu.c
++++ b/drivers/tty/serial/tegra-tcu.c
+@@ -101,7 +101,7 @@ static void tegra_tcu_uart_start_tx(struct uart_port *port)
+ break;
+
+ tegra_tcu_write(tcu, &xmit->buf[xmit->tail], count);
+- xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
++ uart_xmit_advance(port, count);
+ }
+
+ uart_write_wakeup(port);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 829da9cb14a86..55bb0d0422d52 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -9519,7 +9519,7 @@ void ufshcd_remove(struct ufs_hba *hba)
+ ufs_bsg_remove(hba);
+ ufshpb_remove(hba);
+ ufs_sysfs_remove_nodes(hba->dev);
+- blk_cleanup_queue(hba->tmf_queue);
++ blk_mq_destroy_queue(hba->tmf_queue);
+ blk_mq_free_tag_set(&hba->tmf_tag_set);
+ scsi_remove_host(hba->host);
+ /* disable interrupts */
+@@ -9815,7 +9815,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ return 0;
+
+ free_tmf_queue:
+- blk_cleanup_queue(hba->tmf_queue);
++ blk_mq_destroy_queue(hba->tmf_queue);
+ free_tmf_tag_set:
+ blk_mq_free_tag_set(&hba->tmf_tag_set);
+ out_remove_scsi_host:
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index dfef85a18eb55..80b29f937c605 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6049,7 +6049,7 @@ re_enumerate:
+ *
+ * Return: The same as for usb_reset_and_verify_device().
+ * However, if a reset is already in progress (for instance, if a
+- * driver doesn't have pre_ or post_reset() callbacks, and while
++ * driver doesn't have pre_reset() or post_reset() callbacks, and while
+ * being unbound or re-bound during the ongoing reset its disconnect()
+ * or probe() routine tries to perform a second, nested reset), the
+ * routine returns -EINPROGRESS.
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 1db9f51f98aef..08ca65ffe57b7 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1718,12 +1718,6 @@ static int dwc3_probe(struct platform_device *pdev)
+
+ dwc3_get_properties(dwc);
+
+- if (!dwc->sysdev_is_parent) {
+- ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64));
+- if (ret)
+- return ret;
+- }
+-
+ dwc->reset = devm_reset_control_array_get_optional_shared(dev);
+ if (IS_ERR(dwc->reset))
+ return PTR_ERR(dwc->reset);
+@@ -1789,6 +1783,13 @@ static int dwc3_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, dwc);
+ dwc3_cache_hwparams(dwc);
+
++ if (!dwc->sysdev_is_parent &&
++ DWC3_GHWPARAMS0_AWIDTH(dwc->hwparams.hwparams0) == 64) {
++ ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64));
++ if (ret)
++ goto disable_clks;
++ }
++
+ spin_lock_init(&dwc->lock);
+ mutex_init(&dwc->mutex);
+
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index a5e8374a8d710..697683e3fbffa 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -256,6 +256,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EM060K 0x030b
+ #define QUECTEL_PRODUCT_EM12 0x0512
+ #define QUECTEL_PRODUCT_RM500Q 0x0800
++#define QUECTEL_PRODUCT_RM520N 0x0801
+ #define QUECTEL_PRODUCT_EC200S_CN 0x6002
+ #define QUECTEL_PRODUCT_EC200T 0x6026
+ #define QUECTEL_PRODUCT_RM500K 0x7001
+@@ -1138,6 +1139,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
+ .driver_info = NUMEP2 },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
++ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0203, 0xff), /* BG95-M3 */
++ .driver_info = ZLP },
+ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ .driver_info = RSVD(4) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+@@ -1159,6 +1162,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+ .driver_info = ZLP },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index d5f3f763717ea..d4b2519257962 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -382,9 +382,10 @@ int xenbus_setup_ring(struct xenbus_device *dev, gfp_t gfp, void **vaddr,
+ unsigned long ring_size = nr_pages * XEN_PAGE_SIZE;
+ grant_ref_t gref_head;
+ unsigned int i;
++ void *addr;
+ int ret;
+
+- *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO);
++ addr = *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO);
+ if (!*vaddr) {
+ ret = -ENOMEM;
+ goto err;
+@@ -401,13 +402,15 @@ int xenbus_setup_ring(struct xenbus_device *dev, gfp_t gfp, void **vaddr,
+ unsigned long gfn;
+
+ if (is_vmalloc_addr(*vaddr))
+- gfn = pfn_to_gfn(vmalloc_to_pfn(vaddr[i]));
++ gfn = pfn_to_gfn(vmalloc_to_pfn(addr));
+ else
+- gfn = virt_to_gfn(vaddr[i]);
++ gfn = virt_to_gfn(addr);
+
+ grefs[i] = gnttab_claim_grant_reference(&gref_head);
+ gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
+ gfn, 0);
++
++ addr += XEN_PAGE_SIZE;
+ }
+
+ return 0;
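
The xenbus bug treated the void **vaddr out-parameter as an array, so vaddr[i] stepped over neighbouring pointers instead of walking the ring pages; the fix keeps a local cursor and advances it by XEN_PAGE_SIZE per grant. A hedged userspace reduction of the corrected loop:

    #include <stdio.h>
    #include <stdlib.h>

    #define XEN_PAGE_SIZE 4096

    static int setup_ring(void **vaddr, unsigned int nr_pages)
    {
            void *addr;

            /* Keep a local cursor; *vaddr only records the base address. */
            addr = *vaddr = calloc(nr_pages, XEN_PAGE_SIZE);
            if (!*vaddr)
                    return -1;

            for (unsigned int i = 0; i < nr_pages; i++) {
                    /* The old code used vaddr[i] here, which indexes past
                     * the single out-pointer, not into the allocation. */
                    printf("grant page %u at %p\n", i, addr);
                    addr = (char *)addr + XEN_PAGE_SIZE;
            }
            return 0;
    }

    int main(void)
    {
            void *ring;

            if (setup_ring(&ring, 3))
                    return 1;
            free(ring);
            return 0;
    }
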
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 781952c5a5c23..20ad619a8a973 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4586,6 +4586,17 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+
+ set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
+
++ /*
++ * If we had UNFINISHED_DROPS we could still be processing them, so
++ * clear that bit and wake up relocation so it can stop.
++ * We must do this before stopping the block group reclaim task, because
++ * at btrfs_relocate_block_group() we wait for this bit, and after the
++ * wait we stop with -EINTR if btrfs_fs_closing() returns non-zero - we
++ * have just set BTRFS_FS_CLOSING_START, so btrfs_fs_closing() will
++ * return 1.
++ */
++ btrfs_wake_unfinished_drop(fs_info);
++
+ /*
+ * We may have the reclaim task running and relocating a data block group,
+ * in which case it may create delayed iputs. So stop it before we park
+@@ -4604,12 +4615,6 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ */
+ kthread_park(fs_info->cleaner_kthread);
+
+- /*
+- * If we had UNFINISHED_DROPS we could still be processing them, so
+- * clear that bit and wake up relocation so it can stop.
+- */
+- btrfs_wake_unfinished_drop(fs_info);
+-
+ /* wait for the qgroup rescan worker to stop */
+ btrfs_qgroup_wait_for_completion(fs_info, false);
+
+@@ -4632,6 +4637,31 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ /* clear out the rbtree of defraggable inodes */
+ btrfs_cleanup_defrag_inodes(fs_info);
+
++ /*
++ * After we parked the cleaner kthread, ordered extents may have
++ * completed and created new delayed iputs. If one of the async reclaim
++ * tasks is running and in the RUN_DELAYED_IPUTS flush state, then we
++ * can hang forever trying to stop it, because if a delayed iput is
++ * added after it ran btrfs_run_delayed_iputs() and before it called
++ * btrfs_wait_on_delayed_iputs(), it will hang forever since there is
++ * no one else to run iputs.
++ *
++ * So wait for all ongoing ordered extents to complete and then run
++ * delayed iputs. This works because once we reach this point no one
++ * can create new ordered extents or delayed iputs

++ * through some other means.
++ *
++ * Also note that btrfs_wait_ordered_roots() is not safe here, because
++ * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,
++ * but the delayed iput for the respective inode is made only when doing
++ * the final btrfs_put_ordered_extent() (which must happen at
++ * btrfs_finish_ordered_io() when we are unmounting).
++ */
++ btrfs_flush_workqueue(fs_info->endio_write_workers);
++ /* Ordered extents for free space inodes. */
++ btrfs_flush_workqueue(fs_info->endio_freespace_worker);
++ btrfs_run_delayed_iputs(fs_info);
++
+ cancel_work_sync(&fs_info->async_reclaim_work);
+ cancel_work_sync(&fs_info->async_data_reclaim_work);
+ cancel_work_sync(&fs_info->preempt_reclaim_work);
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 1386362fad3b8..4448b7b6ea221 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1918,10 +1918,44 @@ out_unlock:
+ return ret;
+ }
+
++static void wait_eb_writebacks(struct btrfs_block_group *block_group)
++{
++ struct btrfs_fs_info *fs_info = block_group->fs_info;
++ const u64 end = block_group->start + block_group->length;
++ struct radix_tree_iter iter;
++ struct extent_buffer *eb;
++ void __rcu **slot;
++
++ rcu_read_lock();
++ radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter,
++ block_group->start >> fs_info->sectorsize_bits) {
++ eb = radix_tree_deref_slot(slot);
++ if (!eb)
++ continue;
++ if (radix_tree_deref_retry(eb)) {
++ slot = radix_tree_iter_retry(&iter);
++ continue;
++ }
++
++ if (eb->start < block_group->start)
++ continue;
++ if (eb->start >= end)
++ break;
++
++ slot = radix_tree_iter_resume(slot, &iter);
++ rcu_read_unlock();
++ wait_on_extent_buffer_writeback(eb);
++ rcu_read_lock();
++ }
++ rcu_read_unlock();
++}
++
+ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written)
+ {
+ struct btrfs_fs_info *fs_info = block_group->fs_info;
+ struct map_lookup *map;
++ const bool is_metadata = (block_group->flags &
++ (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM));
+ int ret = 0;
+ int i;
+
+@@ -1932,8 +1966,7 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ }
+
+ /* Check if we have unwritten allocated space */
+- if ((block_group->flags &
+- (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)) &&
++ if (is_metadata &&
+ block_group->start + block_group->alloc_offset > block_group->meta_write_pointer) {
+ spin_unlock(&block_group->lock);
+ return -EAGAIN;
+@@ -1958,6 +1991,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ /* No need to wait for NOCOW writers. Zoned mode does not allow that */
+ btrfs_wait_ordered_roots(fs_info, U64_MAX, block_group->start,
+ block_group->length);
++ /* Wait for extent buffers to be written. */
++ if (is_metadata)
++ wait_eb_writebacks(block_group);
+
+ spin_lock(&block_group->lock);
+
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 8f2e003e05907..97278c43f8dc0 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1232,6 +1232,12 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
+ lock_two_nondirectories(target_inode, src_inode);
+
+ cifs_dbg(FYI, "about to flush pages\n");
++
++ rc = filemap_write_and_wait_range(src_inode->i_mapping, off,
++ off + len - 1);
++ if (rc)
++ goto out;
++
+ /* should we flush first and last page first */
+ truncate_inode_pages(&target_inode->i_data, 0);
+
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index e8a8daa82ed76..cc180d37b8ce1 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1886,17 +1886,8 @@ smb2_copychunk_range(const unsigned int xid,
+ int chunks_copied = 0;
+ bool chunk_sizes_updated = false;
+ ssize_t bytes_written, total_bytes_written = 0;
+- struct inode *inode;
+
+ pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL);
+-
+- /*
+- * We need to flush all unwritten data before we can send the
+- * copychunk ioctl to the server.
+- */
+- inode = d_inode(trgtfile->dentry);
+- filemap_write_and_wait(inode->i_mapping);
+-
+ if (pcchunk == NULL)
+ return -ENOMEM;
+
+@@ -3961,39 +3952,50 @@ static long smb3_collapse_range(struct file *file, struct cifs_tcon *tcon,
+ {
+ int rc;
+ unsigned int xid;
+- struct inode *inode;
++ struct inode *inode = file_inode(file);
+ struct cifsFileInfo *cfile = file->private_data;
+- struct cifsInodeInfo *cifsi;
++ struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ __le64 eof;
++ loff_t old_eof;
+
+ xid = get_xid();
+
+- inode = d_inode(cfile->dentry);
+- cifsi = CIFS_I(inode);
++ inode_lock(inode);
+
+- if (off >= i_size_read(inode) ||
+- off + len >= i_size_read(inode)) {
++ old_eof = i_size_read(inode);
++ if ((off >= old_eof) ||
++ off + len >= old_eof) {
+ rc = -EINVAL;
+ goto out;
+ }
+
++ filemap_invalidate_lock(inode->i_mapping);
++ rc = filemap_write_and_wait_range(inode->i_mapping, off, old_eof - 1);
++ if (rc < 0)
++ goto out_2;
++
++ truncate_pagecache_range(inode, off, old_eof);
++
+ rc = smb2_copychunk_range(xid, cfile, cfile, off + len,
+- i_size_read(inode) - off - len, off);
++ old_eof - off - len, off);
+ if (rc < 0)
+- goto out;
++ goto out_2;
+
+- eof = cpu_to_le64(i_size_read(inode) - len);
++ eof = cpu_to_le64(old_eof - len);
+ rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, cfile->pid, &eof);
+ if (rc < 0)
+- goto out;
++ goto out_2;
+
+ rc = 0;
+
+ cifsi->server_eof = i_size_read(inode) - len;
+ truncate_setsize(inode, cifsi->server_eof);
+ fscache_resize_cookie(cifs_inode_cookie(inode), cifsi->server_eof);
++out_2:
++ filemap_invalidate_unlock(inode->i_mapping);
+ out:
++ inode_unlock(inode);
+ free_xid(xid);
+ return rc;
+ }
+@@ -4004,34 +4006,47 @@ static long smb3_insert_range(struct file *file, struct cifs_tcon *tcon,
+ int rc;
+ unsigned int xid;
+ struct cifsFileInfo *cfile = file->private_data;
++ struct inode *inode = file_inode(file);
+ __le64 eof;
+- __u64 count;
++ __u64 count, old_eof;
+
+ xid = get_xid();
+
+- if (off >= i_size_read(file->f_inode)) {
++ inode_lock(inode);
++
++ old_eof = i_size_read(inode);
++ if (off >= old_eof) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+- count = i_size_read(file->f_inode) - off;
+- eof = cpu_to_le64(i_size_read(file->f_inode) + len);
++ count = old_eof - off;
++ eof = cpu_to_le64(old_eof + len);
++
++ filemap_invalidate_lock(inode->i_mapping);
++ rc = filemap_write_and_wait_range(inode->i_mapping, off, old_eof + len - 1);
++ if (rc < 0)
++ goto out_2;
++ truncate_pagecache_range(inode, off, old_eof);
+
+ rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, cfile->pid, &eof);
+ if (rc < 0)
+- goto out;
++ goto out_2;
+
+ rc = smb2_copychunk_range(xid, cfile, cfile, off, count, off + len);
+ if (rc < 0)
+- goto out;
++ goto out_2;
+
+- rc = smb3_zero_range(file, tcon, off, len, 1);
++ rc = smb3_zero_data(file, tcon, off, len, xid);
+ if (rc < 0)
+- goto out;
++ goto out_2;
+
+ rc = 0;
++out_2:
++ filemap_invalidate_unlock(inode->i_mapping);
+ out:
++ inode_unlock(inode);
+ free_xid(xid);
+ return rc;
+ }
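
Both smb3_collapse_range() and smb3_insert_range() now enforce one rule: flush and drop cached data before asking the server to move bytes underneath the cache. The same rule applies in userspace whenever buffered and out-of-band I/O mix; a small runnable illustration with stdio buffering standing in for the page cache (the analogy is mine, not from the patch):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            FILE *f = fopen("demo.txt", "w");

            if (!f)
                    return 1;
            fputs("cached data\n", f); /* sits in the stdio buffer */

            /* Without this flush, a reader going through the raw fd (our
             * stand-in for a server-side copy) would miss these bytes. */
            fflush(f);

            int fd = open("demo.txt", O_RDONLY);
            char buf[64] = "";
            ssize_t n = read(fd, buf, sizeof(buf) - 1);

            if (n > 0)
                    buf[n] = '\0';
            printf("out-of-band reader sees: %s", buf);

            close(fd);
            fclose(f);
            unlink("demo.txt");
            return 0;
    }
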
+diff --git a/fs/dax.c b/fs/dax.c
+index 4155a6107fa10..7ab248ed21aa3 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1241,6 +1241,9 @@ dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t done = 0;
+ int ret;
+
++ if (!iomi.len)
++ return 0;
++
+ if (iov_iter_rw(iter) == WRITE) {
+ lockdep_assert_held_write(&iomi.inode->i_rwsem);
+ iomi.flags |= IOMAP_WRITE;
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 9de6a6b844c9e..e541a004f8efa 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -270,8 +270,7 @@ int exfat_zeroed_cluster(struct inode *dir, unsigned int clu)
+ struct super_block *sb = dir->i_sb;
+ struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ struct buffer_head *bh;
+- sector_t blknr, last_blknr;
+- int i;
++ sector_t blknr, last_blknr, i;
+
+ blknr = exfat_cluster_to_sector(sbi, clu);
+ last_blknr = blknr + sbi->sect_per_clus;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index adfc30ee4b7be..0d86931269bfc 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -167,8 +167,6 @@ enum SHIFT_DIRECTION {
+ #define EXT4_MB_CR0_OPTIMIZED 0x8000
+ /* Avg fragment size rb tree lookup succeeded at least once for cr = 1 */
+ #define EXT4_MB_CR1_OPTIMIZED 0x00010000
+-/* Perform linear traversal for one group */
+-#define EXT4_MB_SEARCH_NEXT_LINEAR 0x00020000
+ struct ext4_allocation_request {
+ /* target inode for block we're allocating */
+ struct inode *inode;
+@@ -1589,8 +1587,8 @@ struct ext4_sb_info {
+ struct list_head s_discard_list;
+ struct work_struct s_discard_work;
+ atomic_t s_retry_alloc_pending;
+- struct rb_root s_mb_avg_fragment_size_root;
+- rwlock_t s_mb_rb_lock;
++ struct list_head *s_mb_avg_fragment_size;
++ rwlock_t *s_mb_avg_fragment_size_locks;
+ struct list_head *s_mb_largest_free_orders;
+ rwlock_t *s_mb_largest_free_orders_locks;
+
+@@ -3402,6 +3400,8 @@ struct ext4_group_info {
+ ext4_grpblk_t bb_first_free; /* first free block */
+ ext4_grpblk_t bb_free; /* total free blocks */
+ ext4_grpblk_t bb_fragments; /* nr of freespace fragments */
++ int bb_avg_fragment_size_order; /* order of average
++ fragment in BG */
+ ext4_grpblk_t bb_largest_free_order;/* order of largest frag in BG */
+ ext4_group_t bb_group; /* Group number */
+ struct list_head bb_prealloc_list;
+@@ -3409,7 +3409,7 @@ struct ext4_group_info {
+ void *bb_bitmap;
+ #endif
+ struct rw_semaphore alloc_sem;
+- struct rb_node bb_avg_fragment_size_rb;
++ struct list_head bb_avg_fragment_size_node;
+ struct list_head bb_largest_free_order_node;
+ ext4_grpblk_t bb_counters[]; /* Nr of free power-of-two-block
+ * regions, index is order.
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c148bb97b5273..5235974126bd3 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -460,6 +460,10 @@ static int __ext4_ext_check(const char *function, unsigned int line,
+ error_msg = "invalid eh_entries";
+ goto corrupted;
+ }
++ if (unlikely((eh->eh_entries == 0) && (depth > 0))) {
++ error_msg = "eh_entries is 0 but eh_depth is > 0";
++ goto corrupted;
++ }
+ if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) {
+ error_msg = "invalid extent entries";
+ goto corrupted;
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index f73e5eb43eae1..208b87ce88588 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -510,7 +510,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ goto fallback;
+ }
+
+- max_dirs = ndirs / ngroups + inodes_per_group / 16;
++ max_dirs = ndirs / ngroups + inodes_per_group*flex_size / 16;
+ min_inodes = avefreei - inodes_per_group*flex_size / 4;
+ if (min_inodes < 1)
+ min_inodes = 1;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 38e7dc2531b17..fd29e15d1c3b5 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -140,13 +140,15 @@
+ * number of buddy bitmap orders possible) number of lists. Group-infos are
+ * placed in appropriate lists.
+ *
+- * 2) Average fragment size rb tree (sbi->s_mb_avg_fragment_size_root)
++ * 2) Average fragment size lists (sbi->s_mb_avg_fragment_size)
+ *
+- * Locking: sbi->s_mb_rb_lock (rwlock)
++ * Locking: sbi->s_mb_avg_fragment_size_locks (array of rw locks)
+ *
+- * This is a red black tree consisting of group infos and the tree is sorted
+- * by average fragment sizes (which is calculated as ext4_group_info->bb_free
+- * / ext4_group_info->bb_fragments).
++ * This is an array of lists where in the i-th list there are groups with
++ * average fragment size >= 2^i and < 2^(i+1). The average fragment size
++ * is computed as ext4_group_info->bb_free / ext4_group_info->bb_fragments.
++ * Note that we don't bother with a special list for completely empty groups
++ * so we only have MB_NUM_ORDERS(sb) lists.
+ *
+ * When "mb_optimize_scan" mount option is set, mballoc consults the above data
+ * structures to decide the order in which groups are to be traversed for
+@@ -160,7 +162,8 @@
+ *
+ * At CR = 1, we only consider groups where average fragment size > request
+ * size. So, we lookup a group which has average fragment size just above or
+- * equal to request size using our rb tree (data structure 2) in O(log N) time.
++ * equal to request size using our average fragment size group lists (data
++ * structure 2) in O(1) time.
+ *
+ * If "mb_optimize_scan" mount option is not set, mballoc traverses groups in
+ * linear order which requires O(N) search time for each CR 0 and CR 1 phase.
+@@ -802,65 +805,51 @@ static void ext4_mb_mark_free_simple(struct super_block *sb,
+ }
+ }
+
+-static void ext4_mb_rb_insert(struct rb_root *root, struct rb_node *new,
+- int (*cmp)(struct rb_node *, struct rb_node *))
++static int mb_avg_fragment_size_order(struct super_block *sb, ext4_grpblk_t len)
+ {
+- struct rb_node **iter = &root->rb_node, *parent = NULL;
++ int order;
+
+- while (*iter) {
+- parent = *iter;
+- if (cmp(new, *iter) > 0)
+- iter = &((*iter)->rb_left);
+- else
+- iter = &((*iter)->rb_right);
+- }
+-
+- rb_link_node(new, parent, iter);
+- rb_insert_color(new, root);
+-}
+-
+-static int
+-ext4_mb_avg_fragment_size_cmp(struct rb_node *rb1, struct rb_node *rb2)
+-{
+- struct ext4_group_info *grp1 = rb_entry(rb1,
+- struct ext4_group_info,
+- bb_avg_fragment_size_rb);
+- struct ext4_group_info *grp2 = rb_entry(rb2,
+- struct ext4_group_info,
+- bb_avg_fragment_size_rb);
+- int num_frags_1, num_frags_2;
+-
+- num_frags_1 = grp1->bb_fragments ?
+- grp1->bb_free / grp1->bb_fragments : 0;
+- num_frags_2 = grp2->bb_fragments ?
+- grp2->bb_free / grp2->bb_fragments : 0;
+-
+- return (num_frags_2 - num_frags_1);
++ /*
++ * We don't bother with a special list for groups whose free extents
++ * are only 1 block long, nor for completely empty groups.
++ */
++ order = fls(len) - 2;
++ if (order < 0)
++ return 0;
++ if (order == MB_NUM_ORDERS(sb))
++ order--;
++ return order;
+ }
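
For reference, the bucket mapping introduced above can be checked by hand: fls(len) - 2 puts averages of 1-3 blocks in list 0, 4-7 in list 1, 8-15 in list 2, and so on, with the top bucket clamped to MB_NUM_ORDERS(sb) - 1. A stand-alone restatement of the helper for experimenting with the mapping (MB_NUM_ORDERS is hard-coded as an assumption; the real value depends on the block size):

    #include <stdio.h>

    #define MB_NUM_ORDERS 13        /* assumed; sb-dependent in the kernel */

    static int fls(unsigned int x)  /* kernel semantics: fls(1) == 1 */
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    static int mb_avg_fragment_size_order(int len)
    {
            int order = fls(len) - 2;

            if (order < 0)
                    return 0;       /* 1-block averages share list 0 */
            if (order == MB_NUM_ORDERS)
                    order--;        /* clamp the largest averages */
            return order;
    }

    int main(void)
    {
            const int samples[] = { 1, 2, 3, 4, 7, 8, 15, 16, 4096 };

            for (unsigned int i = 0; i < sizeof(samples) / sizeof(*samples); i++)
                    printf("avg %5d blocks -> list %d\n", samples[i],
                           mb_avg_fragment_size_order(samples[i]));
            return 0;
    }
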
+
+-/*
+- * Reinsert grpinfo into the avg_fragment_size tree with new average
+- * fragment size.
+- */
++/* Move group to appropriate avg_fragment_size list */
+ static void
+ mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ int new_order;
+
+ if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
+ return;
+
+- write_lock(&sbi->s_mb_rb_lock);
+- if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) {
+- rb_erase(&grp->bb_avg_fragment_size_rb,
+- &sbi->s_mb_avg_fragment_size_root);
+- RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb);
+- }
++ new_order = mb_avg_fragment_size_order(sb,
++ grp->bb_free / grp->bb_fragments);
++ if (new_order == grp->bb_avg_fragment_size_order)
++ return;
+
+- ext4_mb_rb_insert(&sbi->s_mb_avg_fragment_size_root,
+- &grp->bb_avg_fragment_size_rb,
+- ext4_mb_avg_fragment_size_cmp);
+- write_unlock(&sbi->s_mb_rb_lock);
++ if (grp->bb_avg_fragment_size_order != -1) {
++ write_lock(&sbi->s_mb_avg_fragment_size_locks[
++ grp->bb_avg_fragment_size_order]);
++ list_del(&grp->bb_avg_fragment_size_node);
++ write_unlock(&sbi->s_mb_avg_fragment_size_locks[
++ grp->bb_avg_fragment_size_order]);
++ }
++ grp->bb_avg_fragment_size_order = new_order;
++ write_lock(&sbi->s_mb_avg_fragment_size_locks[
++ grp->bb_avg_fragment_size_order]);
++ list_add_tail(&grp->bb_avg_fragment_size_node,
++ &sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
++ write_unlock(&sbi->s_mb_avg_fragment_size_locks[
++ grp->bb_avg_fragment_size_order]);
+ }
+
+ /*
+@@ -909,86 +898,55 @@ static void ext4_mb_choose_next_group_cr0(struct ext4_allocation_context *ac,
+ *new_cr = 1;
+ } else {
+ *group = grp->bb_group;
+- ac->ac_last_optimal_group = *group;
+ ac->ac_flags |= EXT4_MB_CR0_OPTIMIZED;
+ }
+ }
+
+ /*
+- * Choose next group by traversing average fragment size tree. Updates *new_cr
+- * if cr lvel needs an update. Sets EXT4_MB_SEARCH_NEXT_LINEAR to indicate that
+- * the linear search should continue for one iteration since there's lock
+- * contention on the rb tree lock.
++ * Choose next group by traversing average fragment size list of suitable
++ * order. Updates *new_cr if cr level needs an update.
+ */
+ static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
+ int *new_cr, ext4_group_t *group, ext4_group_t ngroups)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+- int avg_fragment_size, best_so_far;
+- struct rb_node *node, *found;
+- struct ext4_group_info *grp;
+-
+- /*
+- * If there is contention on the lock, instead of waiting for the lock
+- * to become available, just continue searching lineraly. We'll resume
+- * our rb tree search later starting at ac->ac_last_optimal_group.
+- */
+- if (!read_trylock(&sbi->s_mb_rb_lock)) {
+- ac->ac_flags |= EXT4_MB_SEARCH_NEXT_LINEAR;
+- return;
+- }
++ struct ext4_group_info *grp = NULL, *iter;
++ int i;
+
+ if (unlikely(ac->ac_flags & EXT4_MB_CR1_OPTIMIZED)) {
+ if (sbi->s_mb_stats)
+ atomic_inc(&sbi->s_bal_cr1_bad_suggestions);
+- /* We have found something at CR 1 in the past */
+- grp = ext4_get_group_info(ac->ac_sb, ac->ac_last_optimal_group);
+- for (found = rb_next(&grp->bb_avg_fragment_size_rb); found != NULL;
+- found = rb_next(found)) {
+- grp = rb_entry(found, struct ext4_group_info,
+- bb_avg_fragment_size_rb);
++ }
++
++ for (i = mb_avg_fragment_size_order(ac->ac_sb, ac->ac_g_ex.fe_len);
++ i < MB_NUM_ORDERS(ac->ac_sb); i++) {
++ if (list_empty(&sbi->s_mb_avg_fragment_size[i]))
++ continue;
++ read_lock(&sbi->s_mb_avg_fragment_size_locks[i]);
++ if (list_empty(&sbi->s_mb_avg_fragment_size[i])) {
++ read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
++ continue;
++ }
++ list_for_each_entry(iter, &sbi->s_mb_avg_fragment_size[i],
++ bb_avg_fragment_size_node) {
+ if (sbi->s_mb_stats)
+ atomic64_inc(&sbi->s_bal_cX_groups_considered[1]);
+- if (likely(ext4_mb_good_group(ac, grp->bb_group, 1)))
++ if (likely(ext4_mb_good_group(ac, iter->bb_group, 1))) {
++ grp = iter;
+ break;
+- }
+- goto done;
+- }
+-
+- node = sbi->s_mb_avg_fragment_size_root.rb_node;
+- best_so_far = 0;
+- found = NULL;
+-
+- while (node) {
+- grp = rb_entry(node, struct ext4_group_info,
+- bb_avg_fragment_size_rb);
+- avg_fragment_size = 0;
+- if (ext4_mb_good_group(ac, grp->bb_group, 1)) {
+- avg_fragment_size = grp->bb_fragments ?
+- grp->bb_free / grp->bb_fragments : 0;
+- if (!best_so_far || avg_fragment_size < best_so_far) {
+- best_so_far = avg_fragment_size;
+- found = node;
+ }
+ }
+- if (avg_fragment_size > ac->ac_g_ex.fe_len)
+- node = node->rb_right;
+- else
+- node = node->rb_left;
++ read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
++ if (grp)
++ break;
+ }
+
+-done:
+- if (found) {
+- grp = rb_entry(found, struct ext4_group_info,
+- bb_avg_fragment_size_rb);
++ if (grp) {
+ *group = grp->bb_group;
+ ac->ac_flags |= EXT4_MB_CR1_OPTIMIZED;
+ } else {
+ *new_cr = 2;
+ }
+-
+- read_unlock(&sbi->s_mb_rb_lock);
+- ac->ac_last_optimal_group = *group;
+ }
+
+ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
+@@ -1017,11 +975,6 @@ next_linear_group(struct ext4_allocation_context *ac, int group, int ngroups)
+ goto inc_and_return;
+ }
+
+- if (ac->ac_flags & EXT4_MB_SEARCH_NEXT_LINEAR) {
+- ac->ac_flags &= ~EXT4_MB_SEARCH_NEXT_LINEAR;
+- goto inc_and_return;
+- }
+-
+ return group;
+ inc_and_return:
+ /*
+@@ -1049,8 +1002,10 @@ static void ext4_mb_choose_next_group(struct ext4_allocation_context *ac,
+ {
+ *new_cr = ac->ac_criteria;
+
+- if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining)
++ if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining) {
++ *group = next_linear_group(ac, *group, ngroups);
+ return;
++ }
+
+ if (*new_cr == 0) {
+ ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups);
+@@ -1075,23 +1030,25 @@ mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ int i;
+
+- if (test_opt2(sb, MB_OPTIMIZE_SCAN) && grp->bb_largest_free_order >= 0) {
++ for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
++ if (grp->bb_counters[i] > 0)
++ break;
++ /* No need to move between order lists? */
++ if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
++ i == grp->bb_largest_free_order) {
++ grp->bb_largest_free_order = i;
++ return;
++ }
++
++ if (grp->bb_largest_free_order >= 0) {
+ write_lock(&sbi->s_mb_largest_free_orders_locks[
+ grp->bb_largest_free_order]);
+ list_del_init(&grp->bb_largest_free_order_node);
+ write_unlock(&sbi->s_mb_largest_free_orders_locks[
+ grp->bb_largest_free_order]);
+ }
+- grp->bb_largest_free_order = -1; /* uninit */
+-
+- for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) {
+- if (grp->bb_counters[i] > 0) {
+- grp->bb_largest_free_order = i;
+- break;
+- }
+- }
+- if (test_opt2(sb, MB_OPTIMIZE_SCAN) &&
+- grp->bb_largest_free_order >= 0 && grp->bb_free) {
++ grp->bb_largest_free_order = i;
++ if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
+ write_lock(&sbi->s_mb_largest_free_orders_locks[
+ grp->bb_largest_free_order]);
+ list_add_tail(&grp->bb_largest_free_order_node,
+@@ -1148,13 +1105,13 @@ void ext4_mb_generate_buddy(struct super_block *sb,
+ EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+ }
+ mb_set_largest_free_order(sb, grp);
++ mb_update_avg_fragment_size(sb, grp);
+
+ clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
+
+ period = get_cycles() - period;
+ atomic_inc(&sbi->s_mb_buddies_generated);
+ atomic64_add(period, &sbi->s_mb_generation_time);
+- mb_update_avg_fragment_size(sb, grp);
+ }
+
+ /* The buddy information is attached the buddy cache inode
+@@ -2630,7 +2587,7 @@ static noinline_for_stack int
+ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
+ {
+ ext4_group_t prefetch_grp = 0, ngroups, group, i;
+- int cr = -1;
++ int cr = -1, new_cr;
+ int err = 0, first_err = 0;
+ unsigned int nr = 0, prefetch_ios = 0;
+ struct ext4_sb_info *sbi;
+@@ -2701,17 +2658,14 @@ repeat:
+ * from the goal value specified
+ */
+ group = ac->ac_g_ex.fe_group;
+- ac->ac_last_optimal_group = group;
+ ac->ac_groups_linear_remaining = sbi->s_mb_max_linear_groups;
+ prefetch_grp = group;
+
+- for (i = 0; i < ngroups; group = next_linear_group(ac, group, ngroups),
+- i++) {
+- int ret = 0, new_cr;
++ for (i = 0, new_cr = cr; i < ngroups; i++,
++ ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups)) {
++ int ret = 0;
+
+ cond_resched();
+-
+- ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups);
+ if (new_cr != cr) {
+ cr = new_cr;
+ goto repeat;
+@@ -2985,9 +2939,7 @@ __acquires(&EXT4_SB(sb)->s_mb_rb_lock)
+ struct super_block *sb = pde_data(file_inode(seq->file));
+ unsigned long position;
+
+- read_lock(&EXT4_SB(sb)->s_mb_rb_lock);
+-
+- if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1)
++ if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb))
+ return NULL;
+ position = *pos + 1;
+ return (void *) ((unsigned long) position);
+@@ -2999,7 +2951,7 @@ static void *ext4_mb_seq_structs_summary_next(struct seq_file *seq, void *v, lof
+ unsigned long position;
+
+ ++*pos;
+- if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1)
++ if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb))
+ return NULL;
+ position = *pos + 1;
+ return (void *) ((unsigned long) position);
+@@ -3011,29 +2963,22 @@ static int ext4_mb_seq_structs_summary_show(struct seq_file *seq, void *v)
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ unsigned long position = ((unsigned long) v);
+ struct ext4_group_info *grp;
+- struct rb_node *n;
+- unsigned int count, min, max;
++ unsigned int count;
+
+ position--;
+ if (position >= MB_NUM_ORDERS(sb)) {
+- seq_puts(seq, "fragment_size_tree:\n");
+- n = rb_first(&sbi->s_mb_avg_fragment_size_root);
+- if (!n) {
+- seq_puts(seq, "\ttree_min: 0\n\ttree_max: 0\n\ttree_nodes: 0\n");
+- return 0;
+- }
+- grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb);
+- min = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0;
+- count = 1;
+- while (rb_next(n)) {
+- count++;
+- n = rb_next(n);
+- }
+- grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb);
+- max = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0;
++ position -= MB_NUM_ORDERS(sb);
++ if (position == 0)
++ seq_puts(seq, "avg_fragment_size_lists:\n");
+
+- seq_printf(seq, "\ttree_min: %u\n\ttree_max: %u\n\ttree_nodes: %u\n",
+- min, max, count);
++ count = 0;
++ read_lock(&sbi->s_mb_avg_fragment_size_locks[position]);
++ list_for_each_entry(grp, &sbi->s_mb_avg_fragment_size[position],
++ bb_avg_fragment_size_node)
++ count++;
++ read_unlock(&sbi->s_mb_avg_fragment_size_locks[position]);
++ seq_printf(seq, "\tlist_order_%u_groups: %u\n",
++ (unsigned int)position, count);
+ return 0;
+ }
+
+@@ -3043,9 +2988,11 @@ static int ext4_mb_seq_structs_summary_show(struct seq_file *seq, void *v)
+ seq_puts(seq, "max_free_order_lists:\n");
+ }
+ count = 0;
++ read_lock(&sbi->s_mb_largest_free_orders_locks[position]);
+ list_for_each_entry(grp, &sbi->s_mb_largest_free_orders[position],
+ bb_largest_free_order_node)
+ count++;
++ read_unlock(&sbi->s_mb_largest_free_orders_locks[position]);
+ seq_printf(seq, "\tlist_order_%u_groups: %u\n",
+ (unsigned int)position, count);
+
+@@ -3053,11 +3000,7 @@ static int ext4_mb_seq_structs_summary_show(struct seq_file *seq, void *v)
+ }
+
+ static void ext4_mb_seq_structs_summary_stop(struct seq_file *seq, void *v)
+-__releases(&EXT4_SB(sb)->s_mb_rb_lock)
+ {
+- struct super_block *sb = pde_data(file_inode(seq->file));
+-
+- read_unlock(&EXT4_SB(sb)->s_mb_rb_lock);
+ }
+
+ const struct seq_operations ext4_mb_seq_structs_summary_ops = {
+@@ -3170,8 +3113,9 @@ int ext4_mb_add_groupinfo(struct super_block *sb, ext4_group_t group,
+ init_rwsem(&meta_group_info[i]->alloc_sem);
+ meta_group_info[i]->bb_free_root = RB_ROOT;
+ INIT_LIST_HEAD(&meta_group_info[i]->bb_largest_free_order_node);
+- RB_CLEAR_NODE(&meta_group_info[i]->bb_avg_fragment_size_rb);
++ INIT_LIST_HEAD(&meta_group_info[i]->bb_avg_fragment_size_node);
+ meta_group_info[i]->bb_largest_free_order = -1; /* uninit */
++ meta_group_info[i]->bb_avg_fragment_size_order = -1; /* uninit */
+ meta_group_info[i]->bb_group = group;
+
+ mb_group_bb_bitmap_alloc(sb, meta_group_info[i], group);
+@@ -3420,7 +3364,24 @@ int ext4_mb_init(struct super_block *sb)
+ i++;
+ } while (i < MB_NUM_ORDERS(sb));
+
+- sbi->s_mb_avg_fragment_size_root = RB_ROOT;
++ sbi->s_mb_avg_fragment_size =
++ kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head),
++ GFP_KERNEL);
++ if (!sbi->s_mb_avg_fragment_size) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ sbi->s_mb_avg_fragment_size_locks =
++ kmalloc_array(MB_NUM_ORDERS(sb), sizeof(rwlock_t),
++ GFP_KERNEL);
++ if (!sbi->s_mb_avg_fragment_size_locks) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ for (i = 0; i < MB_NUM_ORDERS(sb); i++) {
++ INIT_LIST_HEAD(&sbi->s_mb_avg_fragment_size[i]);
++ rwlock_init(&sbi->s_mb_avg_fragment_size_locks[i]);
++ }
+ sbi->s_mb_largest_free_orders =
+ kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head),
+ GFP_KERNEL);
+@@ -3439,7 +3400,6 @@ int ext4_mb_init(struct super_block *sb)
+ INIT_LIST_HEAD(&sbi->s_mb_largest_free_orders[i]);
+ rwlock_init(&sbi->s_mb_largest_free_orders_locks[i]);
+ }
+- rwlock_init(&sbi->s_mb_rb_lock);
+
+ spin_lock_init(&sbi->s_md_lock);
+ sbi->s_mb_free_pending = 0;
+@@ -3510,6 +3470,8 @@ out_free_locality_groups:
+ free_percpu(sbi->s_locality_groups);
+ sbi->s_locality_groups = NULL;
+ out:
++ kfree(sbi->s_mb_avg_fragment_size);
++ kfree(sbi->s_mb_avg_fragment_size_locks);
+ kfree(sbi->s_mb_largest_free_orders);
+ kfree(sbi->s_mb_largest_free_orders_locks);
+ kfree(sbi->s_mb_offsets);
+@@ -3576,6 +3538,8 @@ int ext4_mb_release(struct super_block *sb)
+ kvfree(group_info);
+ rcu_read_unlock();
+ }
++ kfree(sbi->s_mb_avg_fragment_size);
++ kfree(sbi->s_mb_avg_fragment_size_locks);
+ kfree(sbi->s_mb_largest_free_orders);
+ kfree(sbi->s_mb_largest_free_orders_locks);
+ kfree(sbi->s_mb_offsets);
+@@ -5187,6 +5151,7 @@ static void ext4_mb_group_or_file(struct ext4_allocation_context *ac)
+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ int bsbits = ac->ac_sb->s_blocksize_bits;
+ loff_t size, isize;
++ bool inode_pa_eligible, group_pa_eligible;
+
+ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
+ return;
+@@ -5194,25 +5159,27 @@ static void ext4_mb_group_or_file(struct ext4_allocation_context *ac)
+ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
+ return;
+
++ group_pa_eligible = sbi->s_mb_group_prealloc > 0;
++ inode_pa_eligible = true;
+ size = ac->ac_o_ex.fe_logical + EXT4_C2B(sbi, ac->ac_o_ex.fe_len);
+ isize = (i_size_read(ac->ac_inode) + ac->ac_sb->s_blocksize - 1)
+ >> bsbits;
+
++ /* No point in using inode preallocation for closed files */
+ if ((size == isize) && !ext4_fs_is_busy(sbi) &&
+- !inode_is_open_for_write(ac->ac_inode)) {
+- ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC;
+- return;
+- }
++ !inode_is_open_for_write(ac->ac_inode))
++ inode_pa_eligible = false;
+
+- if (sbi->s_mb_group_prealloc <= 0) {
+- ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
+- return;
+- }
+-
+- /* don't use group allocation for large files */
+ size = max(size, isize);
+- if (size > sbi->s_mb_stream_request) {
+- ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
++ /* Don't use group allocation for large files */
++ if (size > sbi->s_mb_stream_request)
++ group_pa_eligible = false;
++
++ if (!group_pa_eligible) {
++ if (inode_pa_eligible)
++ ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
++ else
++ ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC;
+ return;
+ }
+
+@@ -5559,6 +5526,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
+ ext4_fsblk_t block = 0;
+ unsigned int inquota = 0;
+ unsigned int reserv_clstrs = 0;
++ int retries = 0;
+ u64 seq;
+
+ might_sleep();
+@@ -5661,7 +5629,8 @@ repeat:
+ ar->len = ac->ac_b_ex.fe_len;
+ }
+ } else {
+- if (ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
++ if (++retries < 3 &&
++ ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
+ goto repeat;
+ /*
+ * If block allocation fails then the pa allocated above
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index 39da92ceabf88..dcda2a943cee0 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -178,7 +178,6 @@ struct ext4_allocation_context {
+ /* copy of the best found extent taken before preallocation efforts */
+ struct ext4_free_extent ac_f_ex;
+
+- ext4_group_t ac_last_optimal_group;
+ __u32 ac_groups_considered;
+ __u32 ac_flags; /* allocation hints */
+ __u16 ac_groups_scanned;
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 7515a465ec03a..7c90b1ab3e00d 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -543,10 +543,9 @@
+ */
+ #ifdef CONFIG_CFI_CLANG
+ #define TEXT_CFI_JT \
+- . = ALIGN(PMD_SIZE); \
++ ALIGN_FUNCTION(); \
+ __cfi_jt_start = .; \
+ *(.text..L.cfi.jumptable .text..L.cfi.jumptable.*) \
+- . = ALIGN(PMD_SIZE); \
+ __cfi_jt_end = .;
+ #else
+ #define TEXT_CFI_JT
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index e2d9daf7e8dd0..0fd96e92c6c65 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -686,10 +686,13 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
+ \
+ __blk_mq_alloc_disk(set, queuedata, &__key); \
+ })
++struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
++ struct lock_class_key *lkclass);
+ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
+ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ struct request_queue *q);
+ void blk_mq_unregister_dev(struct device *, struct request_queue *);
++void blk_mq_destroy_queue(struct request_queue *);
+
+ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set);
+ int blk_mq_alloc_sq_tag_set(struct blk_mq_tag_set *set,
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 62e3ff52ab033..83eb8869a8c94 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -148,6 +148,7 @@ struct gendisk {
+ #define GD_NATIVE_CAPACITY 3
+ #define GD_ADDED 4
+ #define GD_SUPPRESS_PART_SCAN 5
++#define GD_OWNS_QUEUE 6
+
+ struct mutex open_mutex; /* open/close mutex */
+ unsigned open_partitions; /* number of open partitions */
+@@ -559,7 +560,6 @@ struct request_queue {
+ #define QUEUE_FLAG_NOXMERGES 9 /* No extended merges */
+ #define QUEUE_FLAG_ADD_RANDOM 10 /* Contributes to random pool */
+ #define QUEUE_FLAG_SAME_FORCE 12 /* force complete on same CPU */
+-#define QUEUE_FLAG_DEAD 13 /* queue tear-down finished */
+ #define QUEUE_FLAG_INIT_DONE 14 /* queue is initialized */
+ #define QUEUE_FLAG_STABLE_WRITES 15 /* don't modify blks until WB is done */
+ #define QUEUE_FLAG_POLL 16 /* IO polling enabled if set */
+@@ -587,7 +587,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
+ #define blk_queue_stopped(q) test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
+ #define blk_queue_dying(q) test_bit(QUEUE_FLAG_DYING, &(q)->queue_flags)
+ #define blk_queue_has_srcu(q) test_bit(QUEUE_FLAG_HAS_SRCU, &(q)->queue_flags)
+-#define blk_queue_dead(q) test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)
+ #define blk_queue_init_done(q) test_bit(QUEUE_FLAG_INIT_DONE, &(q)->queue_flags)
+ #define blk_queue_nomerges(q) test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
+ #define blk_queue_noxmerges(q) \
+@@ -812,8 +811,6 @@ static inline u64 sb_bdev_nr_blocks(struct super_block *sb)
+
+ int bdev_disk_changed(struct gendisk *disk, bool invalidate);
+
+-struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
+- struct lock_class_key *lkclass);
+ void put_disk(struct gendisk *disk);
+ struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass);
+
+@@ -955,7 +952,6 @@ static inline unsigned int blk_max_size_offset(struct request_queue *q,
+ /*
+ * Access functions for manipulating queue properties
+ */
+-extern void blk_cleanup_queue(struct request_queue *);
+ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
+ extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
+ extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
+diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
+index 4592d08459417..57aa459c6618a 100644
+--- a/include/linux/cpumask.h
++++ b/include/linux/cpumask.h
+@@ -1083,9 +1083,10 @@ cpumap_print_list_to_buf(char *buf, const struct cpumask *mask,
+ * cover a worst-case of every other cpu being on one of two nodes for a
+ * very large NR_CPUS.
+ *
+- * Use PAGE_SIZE as a minimum for smaller configurations.
++ * Use PAGE_SIZE as a minimum for smaller configurations while avoiding
++ * unsigned comparison to -1.
+ */
+-#define CPUMAP_FILE_MAX_BYTES ((((NR_CPUS * 9)/32 - 1) > PAGE_SIZE) \
++#define CPUMAP_FILE_MAX_BYTES (((NR_CPUS * 9)/32 > PAGE_SIZE) \
+ ? (NR_CPUS * 9)/32 - 1 : PAGE_SIZE)
+ #define CPULIST_FILE_MAX_BYTES (((NR_CPUS * 7)/2 > PAGE_SIZE) ? (NR_CPUS * 7)/2 : PAGE_SIZE)
+
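
The cpumask change sidesteps an unsigned-promotion trap: PAGE_SIZE is an unsigned long, so in "(NR_CPUS * 9)/32 - 1 > PAGE_SIZE" the left side is converted to unsigned, and for small NR_CPUS the "- 1" wraps to a huge value and picks the wrong branch. A runnable reduction (the PAGE_SIZE and NR_CPUS values are assumptions for the demo):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL        /* unsigned long, as in the kernel */
    #define NR_CPUS   3             /* small config: (3 * 9) / 32 == 0 */

    int main(void)
    {
            /* Old form: 0 - 1 is promoted to unsigned long for the
             * comparison, wraps to ULONG_MAX, and the "> PAGE_SIZE"
             * branch wins by mistake. */
            unsigned long broken = (((NR_CPUS * 9) / 32 - 1) > PAGE_SIZE)
                                    ? (NR_CPUS * 9) / 32 - 1 : PAGE_SIZE;

            /* Fixed form: compare first, subtract only in the big branch. */
            unsigned long fixed = (((NR_CPUS * 9) / 32) > PAGE_SIZE)
                                    ? (NR_CPUS * 9) / 32 - 1 : PAGE_SIZE;

            printf("broken: %lu\nfixed:  %lu\n", broken, fixed);
            return 0;
    }
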
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index fde258b3decd5..037a8d81a66cf 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -302,6 +302,23 @@ struct uart_state {
+ /* number of characters left in xmit buffer before we ask for more */
+ #define WAKEUP_CHARS 256
+
++/**
++ * uart_xmit_advance - Advance xmit buffer and account Tx'ed chars
++ * @up: uart_port structure describing the port
++ * @chars: number of characters sent
++ *
++ * This function advances the tail of the circular xmit buffer by the number of
++ * @chars transmitted and handles accounting of transmitted bytes (into
++ * @up's icount.tx).
++ */
++static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars)
++{
++ struct circ_buf *xmit = &up->state->xmit;
++
++ xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
++ up->icount.tx += chars;
++}
++
+ struct module;
+ struct tty_driver;
+
+diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h
+index 184105d682942..f2273bd5a4c58 100644
+--- a/include/net/bond_3ad.h
++++ b/include/net/bond_3ad.h
+@@ -15,8 +15,6 @@
+ #define PKT_TYPE_LACPDU cpu_to_be16(ETH_P_SLOW)
+ #define AD_TIMER_INTERVAL 100 /*msec*/
+
+-#define MULTICAST_LACPDU_ADDR {0x01, 0x80, 0xC2, 0x00, 0x00, 0x02}
+-
+ #define AD_LACP_SLOW 0
+ #define AD_LACP_FAST 1
+
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 3b816ae8b1f3b..7ac1773b99224 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -785,6 +785,9 @@ extern struct rtnl_link_ops bond_link_ops;
+ /* exported from bond_sysfs_slave.c */
+ extern const struct sysfs_ops slave_sysfs_ops;
+
++/* exported from bond_3ad.c */
++extern const u8 lacpdu_mcast_addr[];
++
+ static inline netdev_tx_t bond_tx_drop(struct net_device *dev, struct sk_buff *skb)
+ {
+ dev_core_stats_tx_dropped_inc(dev);
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 667d889b92b52..3e1cea155049b 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -557,6 +557,8 @@ struct Scsi_Host {
+ struct scsi_host_template *hostt;
+ struct scsi_transport_template *transportt;
+
++ struct kref tagset_refcnt;
++ struct completion tagset_freed;
+ /* Area to keep a shared tag map */
+ struct blk_mq_tag_set tag_set;
+
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index 65e13a099b1a0..a9f5d884560ac 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -296,7 +296,7 @@ enum xfrm_attr_type_t {
+ XFRMA_ETIMER_THRESH,
+ XFRMA_SRCADDR, /* xfrm_address_t */
+ XFRMA_COADDR, /* xfrm_address_t */
+- XFRMA_LASTUSED, /* unsigned long */
++ XFRMA_LASTUSED, /* __u64 */
+ XFRMA_POLICY_TYPE, /* struct xfrm_userpolicy_type */
+ XFRMA_MIGRATE,
+ XFRMA_ALG_AEAD, /* struct xfrm_algo_aead */
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 602da2cfd57c8..15a6f1e93e5af 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -10951,6 +10951,9 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ io_poll_remove_all(ctx, NULL, true);
+ /* if we failed setting up the ctx, we might not have any rings */
+ io_iopoll_try_reap_events(ctx);
++ /* drop cached put refs after potentially doing completions */
++ if (current->io_uring)
++ io_uring_drop_tctx_refs(current);
+ }
+
+ INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index e702ca368539a..80c23f48f3b4b 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -6026,6 +6026,9 @@ struct cgroup *cgroup_get_from_id(u64 id)
+ if (!kn)
+ goto out;
+
++ if (kernfs_type(kn) != KERNFS_DIR)
++ goto put;
++
+ rcu_read_lock();
+
+ cgrp = rcu_dereference(*(void __rcu __force **)&kn->priv);
+@@ -6033,7 +6036,7 @@ struct cgroup *cgroup_get_from_id(u64 id)
+ cgrp = NULL;
+
+ rcu_read_unlock();
+-
++put:
+ kernfs_put(kn);
+ out:
+ return cgrp;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index aa8a82bc67384..fc6e4f2523452 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3066,10 +3066,8 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
+ if (WARN_ON(!work->func))
+ return false;
+
+- if (!from_cancel) {
+- lock_map_acquire(&work->lockdep_map);
+- lock_map_release(&work->lockdep_map);
+- }
++ lock_map_acquire(&work->lockdep_map);
++ lock_map_release(&work->lockdep_map);
+
+ if (start_flush_work(work, &barr, from_cancel)) {
+ wait_for_completion(&barr.done);
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 2e24db4bff192..c399ab486557f 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -264,8 +264,10 @@ config DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
+ config DEBUG_INFO_DWARF4
+ bool "Generate DWARF Version 4 debuginfo"
+ select DEBUG_INFO
++ depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
+ help
+- Generate DWARF v4 debug info. This requires gcc 4.5+ and gdb 7.0+.
++ Generate DWARF v4 debug info. This requires gcc 4.5+, binutils 2.35.2
++ if using clang without clang's integrated assembler, and gdb 7.0+.
+
+ If you have consumers of DWARF debug info that are not ready for
+ newer revisions of DWARF, you may wish to choose this or have your
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index dbd4b6f9b0e79..29ae1358d5f07 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -503,6 +503,7 @@ void slab_kmem_cache_release(struct kmem_cache *s)
+ void kmem_cache_destroy(struct kmem_cache *s)
+ {
+ int refcnt;
++ bool rcu_set;
+
+ if (unlikely(!s) || !kasan_check_byte(s))
+ return;
+@@ -510,6 +511,8 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ cpus_read_lock();
+ mutex_lock(&slab_mutex);
+
++ rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
++
+ refcnt = --s->refcount;
+ if (refcnt)
+ goto out_unlock;
+@@ -520,7 +523,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ out_unlock:
+ mutex_unlock(&slab_mutex);
+ cpus_read_unlock();
+- if (!refcnt && !(s->flags & SLAB_TYPESAFE_BY_RCU))
++ if (!refcnt && !rcu_set)
+ kmem_cache_release(s);
+ }
+ EXPORT_SYMBOL(kmem_cache_destroy);
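+
[Aside, not part of the patch: the point of the new rcu_set local is that once the last reference is dropped and slab_mutex released, s may already have been freed by another path, so the old read of s->flags after the unlock was a potential use-after-free. Snapshotting the flag while the object is still guaranteed alive is the generic fix, sketched here as a plain userspace illustration:

    #include <stdbool.h>
    #include <stdlib.h>

    struct cache {
        int refcount;
        unsigned int flags;
    };

    #define FLAG_RCU 0x1u

    /* snapshot any field still needed for the post-release decision
     * while the object is guaranteed alive */
    static void cache_destroy(struct cache *s)
    {
        bool rcu_set = s->flags & FLAG_RCU;   /* read before the drop */
        int refcnt = --s->refcount;

        if (!refcnt && !rcu_set)
            free(s);                          /* no s-> access past here */
    }

    int main(void)
    {
        struct cache *s = calloc(1, sizeof(*s));
        s->refcount = 1;
        cache_destroy(s);
        return 0;
    }
]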
+diff --git a/mm/slub.c b/mm/slub.c
+index b1281b8654bd3..1eec942b8336c 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -310,6 +310,11 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
+ */
+ static nodemask_t slab_nodes;
+
++/*
++ * Workqueue used for flush_cpu_slab().
++ */
++static struct workqueue_struct *flushwq;
++
+ /********************************************************************
+ * Core slab cache functions
+ *******************************************************************/
+@@ -2730,7 +2735,7 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
+ INIT_WORK(&sfw->work, flush_cpu_slab);
+ sfw->skip = false;
+ sfw->s = s;
+- schedule_work_on(cpu, &sfw->work);
++ queue_work_on(cpu, flushwq, &sfw->work);
+ }
+
+ for_each_online_cpu(cpu) {
+@@ -4880,6 +4885,8 @@ void __init kmem_cache_init(void)
+
+ void __init kmem_cache_init_late(void)
+ {
++ flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
++ WARN_ON(!flushwq);
+ }
+
+ struct kmem_cache *
+@@ -4950,6 +4957,8 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
+ /* Honor the call site pointer we received. */
+ trace_kmalloc(caller, ret, size, s->size, gfpflags);
+
++ ret = kasan_kmalloc(s, ret, size, gfpflags);
++
+ return ret;
+ }
+ EXPORT_SYMBOL(__kmalloc_track_caller);
+@@ -4981,6 +4990,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
+ /* Honor the call site pointer we received. */
+ trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
+
++ ret = kasan_kmalloc(s, ret, size, gfpflags);
++
+ return ret;
+ }
+ EXPORT_SYMBOL(__kmalloc_node_track_caller);
+@@ -5914,7 +5925,8 @@ static char *create_unique_id(struct kmem_cache *s)
+ char *name = kmalloc(ID_STR_LENGTH, GFP_KERNEL);
+ char *p = name;
+
+- BUG_ON(!name);
++ if (!name)
++ return ERR_PTR(-ENOMEM);
+
+ *p++ = ':';
+ /*
+@@ -5972,6 +5984,8 @@ static int sysfs_slab_add(struct kmem_cache *s)
+ * for the symlinks.
+ */
+ name = create_unique_id(s);
++ if (IS_ERR(name))
++ return PTR_ERR(name);
+ }
+
+ s->kobj.kset = kset;
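+
[Aside, not part of the patch: this slub hunk set carries several independent fixes -- flush work moves off the system workqueue onto a dedicated queue created with WQ_MEM_RECLAIM (such queues keep a rescuer thread, so the flush can make progress even while the system is reclaiming memory and cannot spawn fresh kworkers), the track-caller kmalloc paths regain their kasan_kmalloc() poisoning hooks, and create_unique_id() reports allocation failure via ERR_PTR(-ENOMEM) instead of BUG_ON, with sysfs_slab_add() propagating it. A minimal kernel-flavoured sketch of the workqueue pattern, not the verbatim slub code:

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *flushwq;

    static int __init slub_flushwq_setup(void)
    {
        /* WQ_MEM_RECLAIM reserves a rescuer thread, so flush work
         * queued here can still run under memory pressure; the
         * system workqueue offers no such guarantee */
        flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
        return flushwq ? 0 : -ENOMEM;
    }
]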
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index b8f8da7ee3dea..41c1ad33d009f 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -10,6 +10,7 @@
+ #include <linux/atomic.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/container_of.h>
++#include <linux/errno.h>
+ #include <linux/gfp.h>
+ #include <linux/if.h>
+ #include <linux/if_arp.h>
+@@ -700,6 +701,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
+ int max_header_len = batadv_max_header_len();
+ int ret;
+
++ if (hard_iface->net_dev->mtu < ETH_MIN_MTU + max_header_len)
++ return -EINVAL;
++
+ if (hard_iface->if_status != BATADV_IF_NOT_IN_USE)
+ goto out;
+
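+
[Aside, not part of the patch: the added guard refuses to enslave an interface whose MTU cannot carry the mesh headers on top of a minimal Ethernet payload. A runnable illustration with a hypothetical wrapper; 68 is ETH_MIN_MTU as defined in the kernel's uapi if_ether.h:

    #include <stdio.h>

    #define ETH_MIN_MTU 68   /* minimum valid Ethernet MTU */

    static int hardif_mtu_ok(int mtu, int max_header_len)
    {
        return (mtu < ETH_MIN_MTU + max_header_len) ? -1 : 0;
    }

    int main(void)
    {
        printf("mtu 68:   %d\n", hardif_mtu_ok(68, 32));    /* rejected */
        printf("mtu 1500: %d\n", hardif_mtu_ok(1500, 32));  /* accepted */
        return 0;
    }
]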
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 9a0ae59cdc500..4f385d52a1c49 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -1040,8 +1040,10 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ goto free_iterate;
+ }
+
+- if (repl->valid_hooks != t->valid_hooks)
++ if (repl->valid_hooks != t->valid_hooks) {
++ ret = -EINVAL;
+ goto free_unlock;
++ }
+
+ if (repl->num_counters && repl->num_counters != t->private->nentries) {
+ ret = -EINVAL;
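+
[Aside, not part of the patch: this is the classic "goto cleanup without setting the error code" bug class -- the mismatch path jumped to the unlock label with ret still holding its earlier value, so a failed replace could report success. A runnable illustration of the pattern:

    #include <stdio.h>

    #define EINVAL 22

    static int do_replace(unsigned int valid_hooks, unsigned int expected)
    {
        int ret = 0;

        if (valid_hooks != expected) {
            ret = -EINVAL;   /* the one-line fix: set ret before the goto */
            goto free_unlock;
        }
        /* ... normal processing would go here ... */
    free_unlock:
        return ret;
    }

    int main(void)
    {
        /* before the fix this path returned the stale 0 ("success") */
        printf("%d\n", do_replace(0x1, 0x3));   /* -22 */
        return 0;
    }
]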
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 6aee04f75e3e4..bcba61ef5b378 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1572,9 +1572,8 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+
+ switch (keys->control.addr_type) {
+ case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
+- addr_diff = (__force u32)keys->addrs.v4addrs.dst -
+- (__force u32)keys->addrs.v4addrs.src;
+- if (addr_diff < 0)
++ if ((__force u32)keys->addrs.v4addrs.dst <
++ (__force u32)keys->addrs.v4addrs.src)
+ swap(keys->addrs.v4addrs.src, keys->addrs.v4addrs.dst);
+
+ if ((__force u16)keys->ports.dst <
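+
[Aside, not part of the patch: the old code stored the difference of two u32 addresses in a signed int and tested its sign, but an unsigned subtraction wraps modulo 2^32, so the sign of the stored value need not match the actual ordering. Comparing the unsigned values directly, as the fix does, is always correct. A runnable demonstration of the wrap:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t src = 0xffffffffu, dst = 0x00000000u;

        /* old pattern: the u32 difference wraps to 1, lands in a
         * signed int as positive, and the needed swap never happens */
        int addr_diff = (int)(dst - src);
        printf("diff-based decision:    swap=%d (wrong)\n", addr_diff < 0);

        /* fixed pattern: compare the unsigned values directly */
        printf("compare-based decision: swap=%d (right)\n", dst < src);
        return 0;
    }
]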
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 9f6f4a41245d4..1012012a061fe 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -1069,13 +1069,13 @@ static int __init inet6_init(void)
+ for (r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r)
+ INIT_LIST_HEAD(r);
+
++ raw_hashinfo_init(&raw_v6_hashinfo);
++
+ if (disable_ipv6_mod) {
+ pr_info("Loaded, but administratively disabled, reboot required to enable\n");
+ goto out;
+ }
+
+- raw_hashinfo_init(&raw_v6_hashinfo);
+-
+ err = proto_register(&tcpv6_prot, 1);
+ if (err)
+ goto out;
+diff --git a/net/netfilter/nf_conntrack_ftp.c b/net/netfilter/nf_conntrack_ftp.c
+index 0d9332e9cf71a..617f744a2e3a3 100644
+--- a/net/netfilter/nf_conntrack_ftp.c
++++ b/net/netfilter/nf_conntrack_ftp.c
+@@ -33,6 +33,7 @@ MODULE_AUTHOR("Rusty Russell <rusty@rustcorp.com.au>");
+ MODULE_DESCRIPTION("ftp connection tracking helper");
+ MODULE_ALIAS("ip_conntrack_ftp");
+ MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
++static DEFINE_SPINLOCK(nf_ftp_lock);
+
+ #define MAX_PORTS 8
+ static u_int16_t ports[MAX_PORTS];
+@@ -409,7 +410,8 @@ static int help(struct sk_buff *skb,
+ }
+ datalen = skb->len - dataoff;
+
+- spin_lock_bh(&ct->lock);
++ /* seqadj (nat) uses ct->lock internally, nf_nat_ftp would cause deadlock */
++ spin_lock_bh(&nf_ftp_lock);
+ fb_ptr = skb->data + dataoff;
+
+ ends_in_nl = (fb_ptr[datalen - 1] == '\n');
+@@ -538,7 +540,7 @@ out_update_nl:
+ if (ends_in_nl)
+ update_nl_seq(ct, seq, ct_ftp_info, dir, skb);
+ out:
+- spin_unlock_bh(&ct->lock);
++ spin_unlock_bh(&nf_ftp_lock);
+ return ret;
+ }
+
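+
[Aside, not part of the patch: as the in-hunk comment notes, the NAT/seqadj code called from inside the helper takes ct->lock itself, so serializing the helper on that same lock could self-deadlock; a dedicated helper lock breaks the nesting. A userspace sketch of the shape of the problem with pthread mutexes (build with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t nf_ftp_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ct_lock = PTHREAD_MUTEX_INITIALIZER;

    /* stand-in for the NAT/seqadj path, which takes ct->lock internally */
    static void seqadj(void)
    {
        pthread_mutex_lock(&ct_lock);
        pthread_mutex_unlock(&ct_lock);
    }

    static void ftp_help(void)
    {
        /* before the fix this took ct_lock, so the nested acquisition
         * in seqadj() self-deadlocked; the helper lock breaks the cycle */
        pthread_mutex_lock(&nf_ftp_lock);
        seqadj();
        pthread_mutex_unlock(&nf_ftp_lock);
    }

    int main(void)
    {
        ftp_help();
        puts("no deadlock");
        return 0;
    }
]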
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index 992decbcaa5c1..5703846bea3b6 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -157,15 +157,37 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ data = ib_ptr;
+ data_limit = ib_ptr + datalen;
+
+- /* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
+- * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
+- while (data < data_limit - (19 + MINMATCHLEN)) {
+- if (memcmp(data, "\1DCC ", 5)) {
++ /* Skip any whitespace */
++ while (data < data_limit - 10) {
++ if (*data == ' ' || *data == '\r' || *data == '\n')
++ data++;
++ else
++ break;
++ }
++
++ /* strlen("PRIVMSG x ")=10 */
++ if (data < data_limit - 10) {
++ if (strncasecmp("PRIVMSG ", data, 8))
++ goto out;
++ data += 8;
++ }
++
++ /* strlen(" :\1DCC SENT t AAAAAAAA P\1\n")=26
++ * 7+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=26
++ */
++ while (data < data_limit - (21 + MINMATCHLEN)) {
++ /* Find first " :", the start of message */
++ if (memcmp(data, " :", 2)) {
+ data++;
+ continue;
+ }
++ data += 2;
++
++ /* then check that place only for the DCC command */
++ if (memcmp(data, "\1DCC ", 5))
++ goto out;
+ data += 5;
+- /* we have at least (19+MINMATCHLEN)-5 bytes valid data left */
++ /* we have at least (21+MINMATCHLEN)-(2+5) bytes valid data left */
+
+ iph = ip_hdr(skb);
+ pr_debug("DCC found in master %pI4:%u %pI4:%u\n",
+@@ -181,7 +203,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ pr_debug("DCC %s detected\n", dccprotos[i]);
+
+ /* we have at least
+- * (19+MINMATCHLEN)-5-dccprotos[i].matchlen bytes valid
++ * (21+MINMATCHLEN)-7-dccprotos[i].matchlen bytes valid
+ * data left (== 14/13 bytes) */
+ if (parse_dcc(data, data_limit, &dcc_ip,
+ &dcc_port, &addr_beg_p, &addr_end_p)) {
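+
[Aside, not part of the patch: the rewritten parser tightens where a DCC payload may appear -- it must follow the " :" message start of a PRIVMSG, instead of matching "\1DCC " anywhere in the packet. The same ordering in a runnable userspace illustration:

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* only accept a DCC payload right after the " :" message start
     * of a PRIVMSG */
    static int dcc_in_privmsg(const char *data)
    {
        if (strncasecmp(data, "PRIVMSG ", 8))
            return 0;
        const char *msg = strstr(data, " :");
        if (!msg)
            return 0;
        return memcmp(msg + 2, "\1DCC ", 5) == 0;
    }

    int main(void)
    {
        printf("%d\n", dcc_in_privmsg("PRIVMSG bob :\1DCC SEND f 1 2 3\1"));  /* 1 */
        printf("%d\n", dcc_in_privmsg("NOTICE bob :\1DCC SEND f 1 2 3\1"));   /* 0 */
        return 0;
    }
]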
+diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
+index b83dc9bf0a5dd..78fd9122b70c7 100644
+--- a/net/netfilter/nf_conntrack_sip.c
++++ b/net/netfilter/nf_conntrack_sip.c
+@@ -477,7 +477,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
+ return ret;
+ if (ret == 0)
+ break;
+- dataoff += *matchoff;
++ dataoff = *matchoff;
+ }
+ *in_header = 0;
+ }
+@@ -489,7 +489,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
+ break;
+ if (ret == 0)
+ return ret;
+- dataoff += *matchoff;
++ dataoff = *matchoff;
+ }
+
+ if (in_header)
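+
[Aside, not part of the patch: the walker's search helper reports an absolute match offset, but the old code added it to the running offset as if it were relative, making dataoff grow roughly twice as fast and skip over headers. A small runnable illustration of absolute vs. relative offset handling:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *msg = "Start\r\nVia: A\r\nVia: B\r\n";
        size_t dataoff = 7;   /* resume scanning after "Start\r\n" */

        /* the helper reports an ABSOLUTE offset, like *matchoff above */
        size_t matchoff = strstr(msg + dataoff, "Via:") - msg;

        printf("fixed: dataoff = %zu\n", matchoff);            /* 7  */
        printf("old:   dataoff = %zu\n", dataoff + matchoff);  /* 14, double-counted */
        return 0;
    }
]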
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 848cc81d69926..2fde193c3d26a 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2197,7 +2197,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ struct netlink_ext_ack *extack)
+ {
+ const struct nlattr * const *nla = ctx->nla;
+- struct nft_stats __percpu *stats = NULL;
+ struct nft_table *table = ctx->table;
+ struct nft_base_chain *basechain;
+ struct net *net = ctx->net;
+@@ -2212,6 +2211,7 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ return -EOVERFLOW;
+
+ if (nla[NFTA_CHAIN_HOOK]) {
++ struct nft_stats __percpu *stats = NULL;
+ struct nft_chain_hook hook;
+
+ if (flags & NFT_CHAIN_BINDING)
+@@ -2243,8 +2243,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ if (err < 0) {
+ nft_chain_release_hook(&hook);
+ kfree(basechain);
++ free_percpu(stats);
+ return err;
+ }
++ if (stats)
++ static_branch_inc(&nft_counters_enabled);
+ } else {
+ if (flags & NFT_CHAIN_BASE)
+ return -EINVAL;
+@@ -2319,9 +2322,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ goto err_unregister_hook;
+ }
+
+- if (stats)
+- static_branch_inc(&nft_counters_enabled);
+-
+ table->use++;
+
+ return 0;
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 0fa2e20304272..ee6840bd59337 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -269,6 +269,7 @@ bool nf_osf_find(const struct sk_buff *skb,
+ struct nf_osf_hdr_ctx ctx;
+ const struct tcphdr *tcp;
+ struct tcphdr _tcph;
++ bool found = false;
+
+ memset(&ctx, 0, sizeof(ctx));
+
+@@ -283,10 +284,11 @@ bool nf_osf_find(const struct sk_buff *skb,
+
+ data->genre = f->genre;
+ data->version = f->version;
++ found = true;
+ break;
+ }
+
+- return true;
++ return found;
+ }
+ EXPORT_SYMBOL_GPL(nf_osf_find);
+
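+
[Aside, not part of the patch: the bug class here is a search loop that unconditionally returned true after the loop, whether or not any fingerprint matched; the fix tracks a found flag. A runnable illustration:

    #include <stdbool.h>
    #include <stdio.h>

    static bool osf_like_find(const int *sigs, int n, int key)
    {
        bool found = false;

        for (int i = 0; i < n; i++) {
            if (sigs[i] == key) {
                found = true;
                break;
            }
        }
        return found;   /* was: return true; */
    }

    int main(void)
    {
        int sigs[] = { 10, 20, 30 };
        printf("%d %d\n", osf_like_find(sigs, 3, 20), osf_like_find(sigs, 3, 99));
        return 0;
    }
]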
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index ac366c99086fd..7d7f7bac0216a 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2136,6 +2136,7 @@ replay:
+ }
+
+ if (chain->tmplt_ops && chain->tmplt_ops != tp->ops) {
++ tfilter_put(tp, fh);
+ NL_SET_ERR_MSG(extack, "Chain template is set to a different filter kind");
+ err = -EINVAL;
+ goto errout;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 0b941dd63d268..86675a79da1e4 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -67,6 +67,7 @@ struct taprio_sched {
+ u32 flags;
+ enum tk_offsets tk_offset;
+ int clockid;
++ bool offloaded;
+ atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+
+ * speeds it's sub-nanoseconds per byte
+ */
+@@ -1279,6 +1280,8 @@ static int taprio_enable_offload(struct net_device *dev,
+ goto done;
+ }
+
++ q->offloaded = true;
++
+ done:
+ taprio_offload_free(offload);
+
+@@ -1293,12 +1296,9 @@ static int taprio_disable_offload(struct net_device *dev,
+ struct tc_taprio_qopt_offload *offload;
+ int err;
+
+- if (!FULL_OFFLOAD_IS_ENABLED(q->flags))
++ if (!q->offloaded)
+ return 0;
+
+- if (!ops->ndo_setup_tc)
+- return -EOPNOTSUPP;
+-
+ offload = taprio_offload_alloc(0);
+ if (!offload) {
+ NL_SET_ERR_MSG(extack,
+@@ -1314,6 +1314,8 @@ static int taprio_disable_offload(struct net_device *dev,
+ goto out;
+ }
+
++ q->offloaded = false;
++
+ out:
+ taprio_offload_free(offload);
+
+@@ -1949,12 +1951,14 @@ start_error:
+
+ static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
+ {
+- struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
++ struct taprio_sched *q = qdisc_priv(sch);
++ struct net_device *dev = qdisc_dev(sch);
++ unsigned int ntx = cl - 1;
+
+- if (!dev_queue)
++ if (ntx >= dev->num_tx_queues)
+ return NULL;
+
+- return dev_queue->qdisc_sleeping;
++ return q->qdiscs[ntx];
+ }
+
+ static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 1f3bb1f6b1f7b..8095876b66eb6 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -2148,7 +2148,7 @@ static struct smc_buf_desc *smcr_new_buf_create(struct smc_link_group *lgr,
+ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
+ struct smc_buf_desc *buf_desc, bool is_rmb)
+ {
+- int i, rc = 0;
++ int i, rc = 0, cnt = 0;
+
+ /* protect against parallel link reconfiguration */
+ mutex_lock(&lgr->llc_conf_mutex);
+@@ -2161,9 +2161,12 @@ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
+ rc = -ENOMEM;
+ goto out;
+ }
++ cnt++;
+ }
+ out:
+ mutex_unlock(&lgr->llc_conf_mutex);
++ if (!rc && !cnt)
++ rc = -EINVAL;
+ return rc;
+ }
+
+diff --git a/scripts/Makefile.debug b/scripts/Makefile.debug
+index 9f39b0130551f..8cf1cb22dd934 100644
+--- a/scripts/Makefile.debug
++++ b/scripts/Makefile.debug
+@@ -1,20 +1,19 @@
+ DEBUG_CFLAGS :=
++debug-flags-y := -g
+
+ ifdef CONFIG_DEBUG_INFO_SPLIT
+ DEBUG_CFLAGS += -gsplit-dwarf
+-else
+-DEBUG_CFLAGS += -g
+ endif
+
+-ifndef CONFIG_AS_IS_LLVM
+-KBUILD_AFLAGS += -Wa,-gdwarf-2
+-endif
+-
+-ifndef CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
+-dwarf-version-$(CONFIG_DEBUG_INFO_DWARF4) := 4
+-dwarf-version-$(CONFIG_DEBUG_INFO_DWARF5) := 5
+-DEBUG_CFLAGS += -gdwarf-$(dwarf-version-y)
++debug-flags-$(CONFIG_DEBUG_INFO_DWARF4) += -gdwarf-4
++debug-flags-$(CONFIG_DEBUG_INFO_DWARF5) += -gdwarf-5
++ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_AS_IS_GNU),yy)
++# Clang does not pass -g or -gdwarf-* option down to GAS.
++# Add -Wa, prefix to explicitly specify the flags.
++KBUILD_AFLAGS += $(addprefix -Wa$(comma), $(debug-flags-y))
+ endif
++DEBUG_CFLAGS += $(debug-flags-y)
++KBUILD_AFLAGS += $(debug-flags-y)
+
+ ifdef CONFIG_DEBUG_INFO_REDUCED
+ DEBUG_CFLAGS += -fno-var-tracking
+@@ -29,5 +28,5 @@ KBUILD_AFLAGS += -gz=zlib
+ KBUILD_LDFLAGS += --compress-debug-sections=zlib
+ endif
+
+-KBUILD_CFLAGS += $(DEBUG_CFLAGS)
++KBUILD_CFLAGS += $(DEBUG_CFLAGS)
+ export DEBUG_CFLAGS
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 726a8353201f8..4eacfafa41730 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -178,10 +178,8 @@ int snd_card_new(struct device *parent, int idx, const char *xid,
+ return -ENOMEM;
+
+ err = snd_card_init(card, parent, idx, xid, module, extra_size);
+- if (err < 0) {
+- kfree(card);
+- return err;
+- }
++ if (err < 0)
++ return err; /* card is freed by error handler */
+
+ *card_ret = card;
+ return 0;
+@@ -231,7 +229,7 @@ int snd_devm_card_new(struct device *parent, int idx, const char *xid,
+ card->managed = true;
+ err = snd_card_init(card, parent, idx, xid, module, extra_size);
+ if (err < 0) {
+- devres_free(card);
++ devres_free(card); /* in managed mode, we need to free manually */
+ return err;
+ }
+
+@@ -293,6 +291,8 @@ static int snd_card_init(struct snd_card *card, struct device *parent,
+ mutex_unlock(&snd_card_mutex);
+ dev_err(parent, "cannot find the slot for index %d (range 0-%i), error: %d\n",
+ idx, snd_ecards_limit - 1, err);
++ if (!card->managed)
++ kfree(card); /* manually free here, as no destructor called */
+ return err;
+ }
+ set_bit(idx, snd_cards_lock); /* lock it */
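+
[Aside, not part of the patch: the init.c changes pin down who frees the card when snd_card_init() fails -- the helper itself for a plain card (no destructor will ever run), the caller's devres framework for a managed card -- so the object is released exactly once on every path. The ownership rule sketched in a runnable userspace illustration:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct card { bool managed; };

    /* on failure, exactly one side frees: the init helper for the
     * plain case, the caller's resource framework for the managed case */
    static int card_init(struct card *card, bool fail)
    {
        if (fail) {
            if (!card->managed)
                free(card);   /* plain card: no destructor will run */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct card *c = calloc(1, sizeof(*c));

        if (card_init(c, true) < 0) {
            /* c is already gone here; freeing again would double-free */
            puts("init failed, helper cleaned up");
        }
        return 0;
    }
]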
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index c572fb5886d5d..7af2515735957 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -157,10 +157,10 @@ static int hda_codec_driver_remove(struct device *dev)
+ return codec->bus->core.ext_ops->hdev_detach(&codec->core);
+ }
+
+- refcount_dec(&codec->pcm_ref);
+ snd_hda_codec_disconnect_pcms(codec);
+ snd_hda_jack_tbl_disconnect(codec);
+- wait_event(codec->remove_sleep, !refcount_read(&codec->pcm_ref));
++ if (!refcount_dec_and_test(&codec->pcm_ref))
++ wait_event(codec->remove_sleep, !refcount_read(&codec->pcm_ref));
+ snd_power_sync_ref(codec->bus->card);
+
+ if (codec->patch_ops.free)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b20694fd69dea..6f30c374f896e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2550,6 +2550,8 @@ static const struct pci_device_id azx_ids[] = {
+ /* 5 Series/3400 */
+ { PCI_DEVICE(0x8086, 0x3b56),
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
++ { PCI_DEVICE(0x8086, 0x3b57),
++ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
+ /* Poulsbo */
+ { PCI_DEVICE(0x8086, 0x811b),
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 6c209cd26c0ca..c9d9aa6351ecf 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -170,6 +170,8 @@ struct hdmi_spec {
+ bool dyn_pcm_no_legacy;
+ /* hdmi interrupt trigger control flag for Nvidia codec */
+ bool hdmi_intr_trig_ctrl;
++ bool nv_dp_workaround; /* workaround DP audio infoframe for Nvidia */
++
+ bool intel_hsw_fixup; /* apply Intel platform-specific fixups */
+ /*
+ * Non-generic VIA/NVIDIA specific
+@@ -679,15 +681,24 @@ static void hdmi_pin_setup_infoframe(struct hda_codec *codec,
+ int ca, int active_channels,
+ int conn_type)
+ {
++ struct hdmi_spec *spec = codec->spec;
+ union audio_infoframe ai;
+
+ memset(&ai, 0, sizeof(ai));
+- if (conn_type == 0) { /* HDMI */
++ if ((conn_type == 0) || /* HDMI */
++ /* Nvidia DisplayPort: Nvidia HW expects same layout as HDMI */
++ (conn_type == 1 && spec->nv_dp_workaround)) {
+ struct hdmi_audio_infoframe *hdmi_ai = &ai.hdmi;
+
+- hdmi_ai->type = 0x84;
+- hdmi_ai->ver = 0x01;
+- hdmi_ai->len = 0x0a;
++ if (conn_type == 0) { /* HDMI */
++ hdmi_ai->type = 0x84;
++ hdmi_ai->ver = 0x01;
++ hdmi_ai->len = 0x0a;
++ } else {/* Nvidia DP */
++ hdmi_ai->type = 0x84;
++ hdmi_ai->ver = 0x1b;
++ hdmi_ai->len = 0x11 << 2;
++ }
+ hdmi_ai->CC02_CT47 = active_channels - 1;
+ hdmi_ai->CA = ca;
+ hdmi_checksum_audio_infoframe(hdmi_ai);
+@@ -3617,6 +3628,7 @@ static int patch_nvhdmi_2ch(struct hda_codec *codec)
+ spec->pcm_playback.rates = SUPPORTED_RATES;
+ spec->pcm_playback.maxbps = SUPPORTED_MAXBPS;
+ spec->pcm_playback.formats = SUPPORTED_FORMATS;
++ spec->nv_dp_workaround = true;
+ return 0;
+ }
+
+@@ -3756,6 +3768,7 @@ static int patch_nvhdmi(struct hda_codec *codec)
+ spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ nvhdmi_chmap_cea_alloc_validate_get_type;
+ spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++ spec->nv_dp_workaround = true;
+
+ codec->link_down_at_suspend = 1;
+
+@@ -3779,6 +3792,7 @@ static int patch_nvhdmi_legacy(struct hda_codec *codec)
+ spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ nvhdmi_chmap_cea_alloc_validate_get_type;
+ spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++ spec->nv_dp_workaround = true;
+
+ codec->link_down_at_suspend = 1;
+
+@@ -3984,6 +3998,7 @@ static int tegra_hdmi_init(struct hda_codec *codec)
+
+ generic_hdmi_init_per_pins(codec);
+
++ codec->depop_delay = 10;
+ codec->patch_ops.build_pcms = tegra_hdmi_build_pcms;
+ spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ nvhdmi_chmap_cea_alloc_validate_get_type;
+@@ -3992,6 +4007,7 @@ static int tegra_hdmi_init(struct hda_codec *codec)
+ spec->chmap.ops.chmap_cea_alloc_validate_get_type =
+ nvhdmi_chmap_cea_alloc_validate_get_type;
+ spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
++ spec->nv_dp_workaround = true;
+
+ return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 799f6bf266dd0..9614b63415a8e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7037,6 +7037,8 @@ enum {
+ ALC294_FIXUP_ASUS_GU502_HP,
+ ALC294_FIXUP_ASUS_GU502_PINS,
+ ALC294_FIXUP_ASUS_GU502_VERBS,
++ ALC294_FIXUP_ASUS_G513_PINS,
++ ALC285_FIXUP_ASUS_G533Z_PINS,
+ ALC285_FIXUP_HP_GPIO_LED,
+ ALC285_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_GPIO_LED,
+@@ -8374,6 +8376,24 @@ static const struct hda_fixup alc269_fixups[] = {
+ [ALC294_FIXUP_ASUS_GU502_HP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc294_fixup_gu502_hp,
++ },
++ [ALC294_FIXUP_ASUS_G513_PINS] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11050 }, /* front HP mic */
++ { 0x1a, 0x03a11c30 }, /* rear external mic */
++ { 0x21, 0x03211420 }, /* front HP out */
++ { }
++ },
++ },
++ [ALC285_FIXUP_ASUS_G533Z_PINS] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x90170120 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_G513_PINS,
+ },
+ [ALC294_FIXUP_ASUS_COEF_1B] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -9114,6 +9134,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++ SND_PCI_QUIRK(0x1028, 0x087d, "Dell Precision 5530", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -9130,6 +9151,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
++ SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -9257,6 +9279,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x896e, "HP EliteBook x360 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8971, "HP EliteBook 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8972, "HP EliteBook 840 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -9304,10 +9327,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
++ SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
++ SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+- SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
+ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -9323,14 +9347,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
++ SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+- SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+- SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -9532,6 +9558,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
++ SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index ff2aa13b7b26f..5d105c44b46df 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -758,8 +758,7 @@ bool snd_usb_endpoint_compatible(struct snd_usb_audio *chip,
+ * The endpoint needs to be closed via snd_usb_endpoint_close() later.
+ *
+ * Note that this function doesn't configure the endpoint. The substream
+- * needs to set it up later via snd_usb_endpoint_set_params() and
+- * snd_usb_endpoint_prepare().
++ * needs to set it up later via snd_usb_endpoint_configure().
+ */
+ struct snd_usb_endpoint *
+ snd_usb_endpoint_open(struct snd_usb_audio *chip,
+@@ -1293,13 +1292,12 @@ out_of_memory:
+ /*
+ * snd_usb_endpoint_set_params: configure an snd_usb_endpoint
+ *
+- * It's called either from hw_params callback.
+ * Determine the number of URBs to be used on this endpoint.
+ * An endpoint must be configured before it can be started.
+ * An endpoint that is already running can not be reconfigured.
+ */
+-int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep)
++static int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep)
+ {
+ const struct audioformat *fmt = ep->cur_audiofmt;
+ int err;
+@@ -1382,18 +1380,18 @@ static int init_sample_rate(struct snd_usb_audio *chip,
+ }
+
+ /*
+- * snd_usb_endpoint_prepare: Prepare the endpoint
++ * snd_usb_endpoint_configure: Configure the endpoint
+ *
+ * This function sets up the EP to be fully usable state.
+- * It's called either from prepare callback.
++ * It's called either from hw_params or prepare callback.
+ * The function checks need_setup flag, and performs nothing unless needed,
+ * so it's safe to call this multiple times.
+ *
+ * This returns zero if unchanged, 1 if the configuration has changed,
+ * or a negative error code.
+ */
+-int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep)
+ {
+ bool iface_first;
+ int err = 0;
+@@ -1414,6 +1412,9 @@ int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
+ if (err < 0)
+ goto unlock;
+ }
++ err = snd_usb_endpoint_set_params(chip, ep);
++ if (err < 0)
++ goto unlock;
+ goto done;
+ }
+
+@@ -1441,6 +1442,10 @@ int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
+ if (err < 0)
+ goto unlock;
+
++ err = snd_usb_endpoint_set_params(chip, ep);
++ if (err < 0)
++ goto unlock;
++
+ err = snd_usb_select_mode_quirk(chip, ep->cur_audiofmt);
+ if (err < 0)
+ goto unlock;
+diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
+index e67ea28faa54f..6a9af04cf175a 100644
+--- a/sound/usb/endpoint.h
++++ b/sound/usb/endpoint.h
+@@ -17,10 +17,8 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip,
+ bool is_sync_ep);
+ void snd_usb_endpoint_close(struct snd_usb_audio *chip,
+ struct snd_usb_endpoint *ep);
+-int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep);
+-int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
+- struct snd_usb_endpoint *ep);
++int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
++ struct snd_usb_endpoint *ep);
+ int snd_usb_endpoint_get_clock_rate(struct snd_usb_audio *chip, int clock);
+
+ bool snd_usb_endpoint_compatible(struct snd_usb_audio *chip,
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 02035b545f9dd..e692ae04436a5 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -443,17 +443,17 @@ static int configure_endpoints(struct snd_usb_audio *chip,
+ if (stop_endpoints(subs, false))
+ sync_pending_stops(subs);
+ if (subs->sync_endpoint) {
+- err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
++ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
+ if (err < 0)
+ return err;
+ }
+- err = snd_usb_endpoint_prepare(chip, subs->data_endpoint);
++ err = snd_usb_endpoint_configure(chip, subs->data_endpoint);
+ if (err < 0)
+ return err;
+ snd_usb_set_format_quirk(subs, subs->cur_audiofmt);
+ } else {
+ if (subs->sync_endpoint) {
+- err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
++ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
+ if (err < 0)
+ return err;
+ }
+@@ -551,13 +551,7 @@ static int snd_usb_hw_params(struct snd_pcm_substream *substream,
+ subs->cur_audiofmt = fmt;
+ mutex_unlock(&chip->mutex);
+
+- if (subs->sync_endpoint) {
+- ret = snd_usb_endpoint_set_params(chip, subs->sync_endpoint);
+- if (ret < 0)
+- goto unlock;
+- }
+-
+- ret = snd_usb_endpoint_set_params(chip, subs->data_endpoint);
++ ret = configure_endpoints(chip, subs);
+
+ unlock:
+ if (ret < 0)
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index 6b1bafe267a42..8ec5b9f344e02 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -441,6 +441,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+
+ perf_evlist__for_each_entry(evlist, evsel) {
+ bool overwrite = evsel->attr.write_backward;
++ enum fdarray_flags flgs;
+ struct perf_mmap *map;
+ int *output, fd, cpu;
+
+@@ -504,8 +505,8 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
+
+ revent = !overwrite ? POLLIN : 0;
+
+- if (!evsel->system_wide &&
+- perf_evlist__add_pollfd(evlist, fd, map, revent, fdarray_flag__default) < 0) {
++ flgs = evsel->system_wide ? fdarray_flag__nonfilterable : fdarray_flag__default;
++ if (perf_evlist__add_pollfd(evlist, fd, map, revent, flgs) < 0) {
+ perf_mmap__put(map);
+ return -1;
+ }
+diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
+index 63b9db6574425..97c69a249c6e4 100644
+--- a/tools/perf/util/bpf_counter_cgroup.c
++++ b/tools/perf/util/bpf_counter_cgroup.c
+@@ -95,7 +95,7 @@ static int bperf_load_program(struct evlist *evlist)
+
+ perf_cpu_map__for_each_cpu(cpu, i, evlist->core.all_cpus) {
+ link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,
+- FD(cgrp_switch, cpu.cpu));
++ FD(cgrp_switch, i));
+ if (IS_ERR(link)) {
+ pr_err("Failed to attach cgroup program\n");
+ err = PTR_ERR(link);
+@@ -123,7 +123,7 @@ static int bperf_load_program(struct evlist *evlist)
+
+ map_fd = bpf_map__fd(skel->maps.events);
+ perf_cpu_map__for_each_cpu(cpu, j, evlist->core.all_cpus) {
+- int fd = FD(evsel, cpu.cpu);
++ int fd = FD(evsel, j);
+ __u32 idx = evsel->core.idx * total_cpus + cpu.cpu;
+
+ err = bpf_map_update_elem(map_fd, &idx, &fd,
+diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
+index 292c430768b52..c72f8ad96f751 100644
+--- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
++++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
+@@ -176,7 +176,7 @@ static int bperf_cgroup_count(void)
+ }
+
+ // This will be attached to cgroup-switches event for each cpu
+-SEC("perf_events")
++SEC("perf_event")
+ int BPF_PROG(on_cgrp_switch)
+ {
+ return bperf_cgroup_count();
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index 953338b9e887e..02cd9f75e3d2f 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -251,6 +251,7 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ Elf_Data *d;
+ Elf_Scn *scn;
+ Elf_Ehdr *ehdr;
++ Elf_Phdr *phdr;
+ Elf_Shdr *shdr;
+ uint64_t eh_frame_base_offset;
+ char *strsym = NULL;
+@@ -285,6 +286,19 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ ehdr->e_version = EV_CURRENT;
+ ehdr->e_shstrndx= unwinding ? 4 : 2; /* shdr index for section name */
+
++ /*
++ * setup program header
++ */
++ phdr = elf_newphdr(e, 1);
++ phdr[0].p_type = PT_LOAD;
++ phdr[0].p_offset = 0;
++ phdr[0].p_vaddr = 0;
++ phdr[0].p_paddr = 0;
++ phdr[0].p_filesz = csize;
++ phdr[0].p_memsz = csize;
++ phdr[0].p_flags = PF_X | PF_R;
++ phdr[0].p_align = 8;
++
+ /*
+ * setup text section
+ */
+diff --git a/tools/perf/util/genelf.h b/tools/perf/util/genelf.h
+index ae138afe6c563..b5c909546e3f2 100644
+--- a/tools/perf/util/genelf.h
++++ b/tools/perf/util/genelf.h
+@@ -53,8 +53,10 @@ int jit_add_debug_info(Elf *e, uint64_t code_addr, void *debug, int nr_debug_ent
+
+ #if GEN_ELF_CLASS == ELFCLASS64
+ #define elf_newehdr elf64_newehdr
++#define elf_newphdr elf64_newphdr
+ #define elf_getshdr elf64_getshdr
+ #define Elf_Ehdr Elf64_Ehdr
++#define Elf_Phdr Elf64_Phdr
+ #define Elf_Shdr Elf64_Shdr
+ #define Elf_Sym Elf64_Sym
+ #define ELF_ST_TYPE(a) ELF64_ST_TYPE(a)
+@@ -62,8 +64,10 @@ int jit_add_debug_info(Elf *e, uint64_t code_addr, void *debug, int nr_debug_ent
+ #define ELF_ST_VIS(a) ELF64_ST_VISIBILITY(a)
+ #else
+ #define elf_newehdr elf32_newehdr
++#define elf_newphdr elf32_newphdr
+ #define elf_getshdr elf32_getshdr
+ #define Elf_Ehdr Elf32_Ehdr
++#define Elf_Phdr Elf32_Phdr
+ #define Elf_Shdr Elf32_Shdr
+ #define Elf_Sym Elf32_Sym
+ #define ELF_ST_TYPE(a) ELF32_ST_TYPE(a)
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 75bec32d4f571..647b7dff8ef36 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -2102,8 +2102,8 @@ static int kcore_copy__compare_file(const char *from_dir, const char *to_dir,
+ * unusual. One significant peculiarity is that the mapping (start -> pgoff)
+ * is not the same for the kernel map and the modules map. That happens because
+ * the data is copied adjacently whereas the original kcore has gaps. Finally,
+- * kallsyms and modules files are compared with their copies to check that
+- * modules have not been loaded or unloaded while the copies were taking place.
++ * kallsyms file is compared with its copy to check that modules have not been
++ * loaded or unloaded while the copies were taking place.
+ *
+ * Return: %0 on success, %-1 on failure.
+ */
+@@ -2166,9 +2166,6 @@ int kcore_copy(const char *from_dir, const char *to_dir)
+ goto out_extract_close;
+ }
+
+- if (kcore_copy__compare_file(from_dir, to_dir, "modules"))
+- goto out_extract_close;
+-
+ if (kcore_copy__compare_file(from_dir, to_dir, "kallsyms"))
+ goto out_extract_close;
+
+diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
+index 84d17bd4efaed..64e273b2b1b21 100644
+--- a/tools/perf/util/synthetic-events.c
++++ b/tools/perf/util/synthetic-events.c
+@@ -367,13 +367,24 @@ static void perf_record_mmap2__read_build_id(struct perf_record_mmap2 *event,
+ bool is_kernel)
+ {
+ struct build_id bid;
++ struct nsinfo *nsi;
++ struct nscookie nc;
+ int rc;
+
+- if (is_kernel)
++ if (is_kernel) {
+ rc = sysfs__read_build_id("/sys/kernel/notes", &bid);
+- else
+- rc = filename__read_build_id(event->filename, &bid) > 0 ? 0 : -1;
++ goto out;
++ }
++
++ nsi = nsinfo__new(event->pid);
++ nsinfo__mountns_enter(nsi, &nc);
+
++ rc = filename__read_build_id(event->filename, &bid) > 0 ? 0 : -1;
++
++ nsinfo__mountns_exit(&nc);
++ nsinfo__put(nsi);
++
++out:
+ if (rc == 0) {
+ memcpy(event->build_id, bid.data, sizeof(bid.data));
+ event->build_id_size = (u8) bid.size;
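+
[Aside, not part of the patch: the fix reads the build-id from within the target process's mount namespace, since event->filename may only resolve there; that is what the nsinfo__mountns_enter()/exit() pair does. The underlying idea can be sketched in userspace with setns(2) -- this needs privileges, a single-threaded caller, and omits error handling:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* open a path as seen from another process's mount namespace by
     * temporarily joining it */
    static int open_in_mountns(long pid, const char *path)
    {
        char ns[64];
        int self, target, fd;

        snprintf(ns, sizeof(ns), "/proc/%ld/ns/mnt", pid);
        self = open("/proc/self/ns/mnt", O_RDONLY);
        target = open(ns, O_RDONLY);

        setns(target, CLONE_NEWNS);      /* enter the target namespace */
        fd = open(path, O_RDONLY);       /* path resolves there now */
        setns(self, CLONE_NEWNS);        /* and switch back */

        close(self);
        close(target);
        return fd;
    }

    int main(void)
    {
        int fd = open_in_mountns(1, "/etc/os-release");
        printf("fd=%d\n", fd);
        if (fd >= 0)
            close(fd);
        return 0;
    }
]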
+diff --git a/tools/testing/selftests/net/forwarding/sch_red.sh b/tools/testing/selftests/net/forwarding/sch_red.sh
+index e714bae473fb4..81f31179ac887 100755
+--- a/tools/testing/selftests/net/forwarding/sch_red.sh
++++ b/tools/testing/selftests/net/forwarding/sch_red.sh
+@@ -1,3 +1,4 @@
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+
+ # This test sends one stream of traffic from H1 through a TBF shaper, to a RED
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-10-04 14:51 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-10-04 14:51 UTC (permalink / raw
To: gentoo-commits
commit: d83c1a4e8cdb54ac8a3c68fd1a5b31759ad19313
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 4 14:50:52 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 4 14:50:52 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d83c1a4e
Linux patch 5.19.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-5.19.13.patch | 2127 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2131 insertions(+)
diff --git a/0000_README b/0000_README
index 05763bb8..56f7e0a3 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-5.19.12.patch
From: http://www.kernel.org
Desc: Linux 5.19.12
+Patch: 1012_linux-5.19.13.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-5.19.13.patch b/1012_linux-5.19.13.patch
new file mode 100644
index 00000000..57f0172d
--- /dev/null
+++ b/1012_linux-5.19.13.patch
@@ -0,0 +1,2127 @@
+diff --git a/Makefile b/Makefile
+index 7df4c195c8ab2..2ecedf786e273 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/drivers/gpu/drm/i915/display/g4x_dp.c b/drivers/gpu/drm/i915/display/g4x_dp.c
+index 82ad8fe7440c0..5a957acebfd62 100644
+--- a/drivers/gpu/drm/i915/display/g4x_dp.c
++++ b/drivers/gpu/drm/i915/display/g4x_dp.c
+@@ -395,8 +395,26 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
+ intel_dotclock_calculate(pipe_config->port_clock,
+ &pipe_config->dp_m_n);
+
+- if (intel_dp_is_edp(intel_dp))
+- intel_edp_fixup_vbt_bpp(encoder, pipe_config->pipe_bpp);
++ if (intel_dp_is_edp(intel_dp) && dev_priv->vbt.edp.bpp &&
++ pipe_config->pipe_bpp > dev_priv->vbt.edp.bpp) {
++ /*
++ * This is a big fat ugly hack.
++ *
++ * Some machines in UEFI boot mode provide us a VBT that has 18
++ * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
++ * unknown we fail to light up. Yet the same BIOS boots up with
++ * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
++ * max, not what it tells us to use.
++ *
++ * Note: This will still be broken if the eDP panel is not lit
++ * up by the BIOS, and thus we can't get the mode at module
++ * load.
++ */
++ drm_dbg_kms(&dev_priv->drm,
++ "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
++ pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
++ dev_priv->vbt.edp.bpp = pipe_config->pipe_bpp;
++ }
+ }
+
+ static void
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index f416499dad6f3..5508ebb9eb434 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1864,8 +1864,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+- struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
++ struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
+ u32 tlpx_ns;
+ u32 prepare_cnt, exit_zero_cnt, clk_zero_cnt, trail_cnt;
+ u32 ths_prepare_ns, tclk_trail_ns;
+@@ -2052,8 +2051,6 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
+ /* attach connector to encoder */
+ intel_connector_attach_encoder(intel_connector, encoder);
+
+- intel_bios_init_panel(dev_priv, &intel_connector->panel);
+-
+ mutex_lock(&dev->mode_config.mutex);
+ intel_panel_add_vbt_lfp_fixed_mode(intel_connector);
+ mutex_unlock(&dev->mode_config.mutex);
+@@ -2067,20 +2064,13 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
+
+ intel_backlight_setup(intel_connector, INVALID_PIPE);
+
+- if (intel_connector->panel.vbt.dsi.config->dual_link)
++ if (dev_priv->vbt.dsi.config->dual_link)
+ intel_dsi->ports = BIT(PORT_A) | BIT(PORT_B);
+ else
+ intel_dsi->ports = BIT(port);
+
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
+- intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
+-
+- intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
+-
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
+- intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
+-
+- intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
++ intel_dsi->dcs_backlight_ports = dev_priv->vbt.dsi.bl_ports;
++ intel_dsi->dcs_cabc_ports = dev_priv->vbt.dsi.cabc_ports;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ struct intel_dsi_host *host;
+diff --git a/drivers/gpu/drm/i915/display/intel_backlight.c b/drivers/gpu/drm/i915/display/intel_backlight.c
+index 5182bb66bd289..3e200a2e4ba29 100644
+--- a/drivers/gpu/drm/i915/display/intel_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_backlight.c
+@@ -1158,10 +1158,9 @@ static u32 vlv_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ return DIV_ROUND_CLOSEST(clock, pwm_freq_hz * mul);
+ }
+
+-static u16 get_vbt_pwm_freq(struct intel_connector *connector)
++static u16 get_vbt_pwm_freq(struct drm_i915_private *dev_priv)
+ {
+- struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+- u16 pwm_freq_hz = connector->panel.vbt.backlight.pwm_freq_hz;
++ u16 pwm_freq_hz = dev_priv->vbt.backlight.pwm_freq_hz;
+
+ if (pwm_freq_hz) {
+ drm_dbg_kms(&dev_priv->drm,
+@@ -1181,7 +1180,7 @@ static u32 get_backlight_max_vbt(struct intel_connector *connector)
+ {
+ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ struct intel_panel *panel = &connector->panel;
+- u16 pwm_freq_hz = get_vbt_pwm_freq(connector);
++ u16 pwm_freq_hz = get_vbt_pwm_freq(dev_priv);
+ u32 pwm;
+
+ if (!panel->backlight.pwm_funcs->hz_to_pwm) {
+@@ -1218,11 +1217,11 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector)
+ * against this by letting the minimum be at most (arbitrarily chosen)
+ * 25% of the max.
+ */
+- min = clamp_t(int, connector->panel.vbt.backlight.min_brightness, 0, 64);
+- if (min != connector->panel.vbt.backlight.min_brightness) {
++ min = clamp_t(int, dev_priv->vbt.backlight.min_brightness, 0, 64);
++ if (min != dev_priv->vbt.backlight.min_brightness) {
+ drm_dbg_kms(&dev_priv->drm,
+ "clamping VBT min backlight %d/255 to %d/255\n",
+- connector->panel.vbt.backlight.min_brightness, min);
++ dev_priv->vbt.backlight.min_brightness, min);
+ }
+
+ /* vbt value is a coefficient in range [0..255] */
+@@ -1411,7 +1410,7 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
+ struct intel_panel *panel = &connector->panel;
+ u32 pwm_ctl, val;
+
+- panel->backlight.controller = connector->panel.vbt.backlight.controller;
++ panel->backlight.controller = dev_priv->vbt.backlight.controller;
+
+ pwm_ctl = intel_de_read(dev_priv,
+ BXT_BLC_PWM_CTL(panel->backlight.controller));
+@@ -1484,7 +1483,7 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector,
+ u32 level;
+
+ /* Get the right PWM chip for DSI backlight according to VBT */
+- if (connector->panel.vbt.dsi.config->pwm_blc == PPS_BLC_PMIC) {
++ if (dev_priv->vbt.dsi.config->pwm_blc == PPS_BLC_PMIC) {
+ panel->backlight.pwm = pwm_get(dev->dev, "pwm_pmic_backlight");
+ desc = "PMIC";
+ } else {
+@@ -1513,11 +1512,11 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector,
+
+ drm_dbg_kms(&dev_priv->drm, "PWM already enabled at freq %ld, VBT freq %d, level %d\n",
+ NSEC_PER_SEC / (unsigned long)panel->backlight.pwm_state.period,
+- get_vbt_pwm_freq(connector), level);
++ get_vbt_pwm_freq(dev_priv), level);
+ } else {
+ /* Set period from VBT frequency, leave other settings at 0. */
+ panel->backlight.pwm_state.period =
+- NSEC_PER_SEC / get_vbt_pwm_freq(connector);
++ NSEC_PER_SEC / get_vbt_pwm_freq(dev_priv);
+ }
+
+ drm_info(&dev_priv->drm, "Using %s PWM for LCD backlight control\n",
+@@ -1602,7 +1601,7 @@ int intel_backlight_setup(struct intel_connector *connector, enum pipe pipe)
+ struct intel_panel *panel = &connector->panel;
+ int ret;
+
+- if (!connector->panel.vbt.backlight.present) {
++ if (!dev_priv->vbt.backlight.present) {
+ if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) {
+ drm_dbg_kms(&dev_priv->drm,
+ "no backlight present per VBT, but present per quirk\n");
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index b5de61fe9cc67..91caf4523b34d 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -682,8 +682,7 @@ static int get_panel_type(struct drm_i915_private *i915)
+
+ /* Parse general panel options */
+ static void
+-parse_panel_options(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_panel_options(struct drm_i915_private *i915)
+ {
+ const struct bdb_lvds_options *lvds_options;
+ int panel_type;
+@@ -693,11 +692,11 @@ parse_panel_options(struct drm_i915_private *i915,
+ if (!lvds_options)
+ return;
+
+- panel->vbt.lvds_dither = lvds_options->pixel_dither;
++ i915->vbt.lvds_dither = lvds_options->pixel_dither;
+
+ panel_type = get_panel_type(i915);
+
+- panel->vbt.panel_type = panel_type;
++ i915->vbt.panel_type = panel_type;
+
+ drrs_mode = (lvds_options->dps_panel_type_bits
+ >> (panel_type * 2)) & MODE_MASK;
+@@ -708,16 +707,16 @@ parse_panel_options(struct drm_i915_private *i915,
+ */
+ switch (drrs_mode) {
+ case 0:
+- panel->vbt.drrs_type = DRRS_TYPE_STATIC;
++ i915->vbt.drrs_type = DRRS_TYPE_STATIC;
+ drm_dbg_kms(&i915->drm, "DRRS supported mode is static\n");
+ break;
+ case 2:
+- panel->vbt.drrs_type = DRRS_TYPE_SEAMLESS;
++ i915->vbt.drrs_type = DRRS_TYPE_SEAMLESS;
+ drm_dbg_kms(&i915->drm,
+ "DRRS supported mode is seamless\n");
+ break;
+ default:
+- panel->vbt.drrs_type = DRRS_TYPE_NONE;
++ i915->vbt.drrs_type = DRRS_TYPE_NONE;
+ drm_dbg_kms(&i915->drm,
+ "DRRS not supported (VBT input)\n");
+ break;
+@@ -726,14 +725,13 @@ parse_panel_options(struct drm_i915_private *i915,
+
+ static void
+ parse_lfp_panel_dtd(struct drm_i915_private *i915,
+- struct intel_panel *panel,
+ const struct bdb_lvds_lfp_data *lvds_lfp_data,
+ const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs)
+ {
+ const struct lvds_dvo_timing *panel_dvo_timing;
+ const struct lvds_fp_timing *fp_timing;
+ struct drm_display_mode *panel_fixed_mode;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+
+ panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data,
+ lvds_lfp_data_ptrs,
+@@ -745,7 +743,7 @@ parse_lfp_panel_dtd(struct drm_i915_private *i915,
+
+ fill_detail_timing_data(panel_fixed_mode, panel_dvo_timing);
+
+- panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
++ i915->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
+
+ drm_dbg_kms(&i915->drm,
+ "Found panel mode in BIOS VBT legacy lfp table: " DRM_MODE_FMT "\n",
+@@ -758,21 +756,20 @@ parse_lfp_panel_dtd(struct drm_i915_private *i915,
+ /* check the resolution, just to be sure */
+ if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
+ fp_timing->y_res == panel_fixed_mode->vdisplay) {
+- panel->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
++ i915->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
+ drm_dbg_kms(&i915->drm,
+ "VBT initial LVDS value %x\n",
+- panel->vbt.bios_lvds_val);
++ i915->vbt.bios_lvds_val);
+ }
+ }
+
+ static void
+-parse_lfp_data(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_lfp_data(struct drm_i915_private *i915)
+ {
+ const struct bdb_lvds_lfp_data *data;
+ const struct bdb_lvds_lfp_data_tail *tail;
+ const struct bdb_lvds_lfp_data_ptrs *ptrs;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+
+ ptrs = find_section(i915, BDB_LVDS_LFP_DATA_PTRS);
+ if (!ptrs)
+@@ -782,25 +779,24 @@ parse_lfp_data(struct drm_i915_private *i915,
+ if (!data)
+ return;
+
+- if (!panel->vbt.lfp_lvds_vbt_mode)
+- parse_lfp_panel_dtd(i915, panel, data, ptrs);
++ if (!i915->vbt.lfp_lvds_vbt_mode)
++ parse_lfp_panel_dtd(i915, data, ptrs);
+
+ tail = get_lfp_data_tail(data, ptrs);
+ if (!tail)
+ return;
+
+ if (i915->vbt.version >= 188) {
+- panel->vbt.seamless_drrs_min_refresh_rate =
++ i915->vbt.seamless_drrs_min_refresh_rate =
+ tail->seamless_drrs_min_refresh_rate[panel_type];
+ drm_dbg_kms(&i915->drm,
+ "Seamless DRRS min refresh rate: %d Hz\n",
+- panel->vbt.seamless_drrs_min_refresh_rate);
++ i915->vbt.seamless_drrs_min_refresh_rate);
+ }
+ }
+
+ static void
+-parse_generic_dtd(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_generic_dtd(struct drm_i915_private *i915)
+ {
+ const struct bdb_generic_dtd *generic_dtd;
+ const struct generic_dtd_entry *dtd;
+@@ -835,14 +831,14 @@ parse_generic_dtd(struct drm_i915_private *i915,
+
+ num_dtd = (get_blocksize(generic_dtd) -
+ sizeof(struct bdb_generic_dtd)) / generic_dtd->gdtd_size;
+- if (panel->vbt.panel_type >= num_dtd) {
++ if (i915->vbt.panel_type >= num_dtd) {
+ drm_err(&i915->drm,
+ "Panel type %d not found in table of %d DTD's\n",
+- panel->vbt.panel_type, num_dtd);
++ i915->vbt.panel_type, num_dtd);
+ return;
+ }
+
+- dtd = &generic_dtd->dtd[panel->vbt.panel_type];
++ dtd = &generic_dtd->dtd[i915->vbt.panel_type];
+
+ panel_fixed_mode = kzalloc(sizeof(*panel_fixed_mode), GFP_KERNEL);
+ if (!panel_fixed_mode)
+@@ -885,16 +881,15 @@ parse_generic_dtd(struct drm_i915_private *i915,
+ "Found panel mode in BIOS VBT generic dtd table: " DRM_MODE_FMT "\n",
+ DRM_MODE_ARG(panel_fixed_mode));
+
+- panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
++ i915->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
+ }
+
+ static void
+-parse_lfp_backlight(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_lfp_backlight(struct drm_i915_private *i915)
+ {
+ const struct bdb_lfp_backlight_data *backlight_data;
+ const struct lfp_backlight_data_entry *entry;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+ u16 level;
+
+ backlight_data = find_section(i915, BDB_LVDS_BACKLIGHT);
+@@ -910,15 +905,15 @@ parse_lfp_backlight(struct drm_i915_private *i915,
+
+ entry = &backlight_data->data[panel_type];
+
+- panel->vbt.backlight.present = entry->type == BDB_BACKLIGHT_TYPE_PWM;
+- if (!panel->vbt.backlight.present) {
++ i915->vbt.backlight.present = entry->type == BDB_BACKLIGHT_TYPE_PWM;
++ if (!i915->vbt.backlight.present) {
+ drm_dbg_kms(&i915->drm,
+ "PWM backlight not present in VBT (type %u)\n",
+ entry->type);
+ return;
+ }
+
+- panel->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
++ i915->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
+ if (i915->vbt.version >= 191) {
+ size_t exp_size;
+
+@@ -933,13 +928,13 @@ parse_lfp_backlight(struct drm_i915_private *i915,
+ const struct lfp_backlight_control_method *method;
+
+ method = &backlight_data->backlight_control[panel_type];
+- panel->vbt.backlight.type = method->type;
+- panel->vbt.backlight.controller = method->controller;
++ i915->vbt.backlight.type = method->type;
++ i915->vbt.backlight.controller = method->controller;
+ }
+ }
+
+- panel->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
+- panel->vbt.backlight.active_low_pwm = entry->active_low_pwm;
++ i915->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
++ i915->vbt.backlight.active_low_pwm = entry->active_low_pwm;
+
+ if (i915->vbt.version >= 234) {
+ u16 min_level;
+@@ -960,29 +955,28 @@ parse_lfp_backlight(struct drm_i915_private *i915,
+ drm_warn(&i915->drm, "Brightness min level > 255\n");
+ level = 255;
+ }
+- panel->vbt.backlight.min_brightness = min_level;
++ i915->vbt.backlight.min_brightness = min_level;
+
+- panel->vbt.backlight.brightness_precision_bits =
++ i915->vbt.backlight.brightness_precision_bits =
+ backlight_data->brightness_precision_bits[panel_type];
+ } else {
+ level = backlight_data->level[panel_type];
+- panel->vbt.backlight.min_brightness = entry->min_brightness;
++ i915->vbt.backlight.min_brightness = entry->min_brightness;
+ }
+
+ drm_dbg_kms(&i915->drm,
+ "VBT backlight PWM modulation frequency %u Hz, "
+ "active %s, min brightness %u, level %u, controller %u\n",
+- panel->vbt.backlight.pwm_freq_hz,
+- panel->vbt.backlight.active_low_pwm ? "low" : "high",
+- panel->vbt.backlight.min_brightness,
++ i915->vbt.backlight.pwm_freq_hz,
++ i915->vbt.backlight.active_low_pwm ? "low" : "high",
++ i915->vbt.backlight.min_brightness,
+ level,
+- panel->vbt.backlight.controller);
++ i915->vbt.backlight.controller);
+ }
+
+ /* Try to find sdvo panel data */
+ static void
+-parse_sdvo_panel_data(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_sdvo_panel_data(struct drm_i915_private *i915)
+ {
+ const struct bdb_sdvo_panel_dtds *dtds;
+ struct drm_display_mode *panel_fixed_mode;
+@@ -1015,7 +1009,7 @@ parse_sdvo_panel_data(struct drm_i915_private *i915,
+
+ fill_detail_timing_data(panel_fixed_mode, &dtds->dtds[index]);
+
+- panel->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
++ i915->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
+
+ drm_dbg_kms(&i915->drm,
+ "Found SDVO panel mode in BIOS VBT tables: " DRM_MODE_FMT "\n",
+@@ -1194,17 +1188,6 @@ parse_driver_features(struct drm_i915_private *i915)
+ driver->lvds_config != BDB_DRIVER_FEATURE_INT_SDVO_LVDS)
+ i915->vbt.int_lvds_support = 0;
+ }
+-}
+-
+-static void
+-parse_panel_driver_features(struct drm_i915_private *i915,
+- struct intel_panel *panel)
+-{
+- const struct bdb_driver_features *driver;
+-
+- driver = find_section(i915, BDB_DRIVER_FEATURES);
+- if (!driver)
+- return;
+
+ if (i915->vbt.version < 228) {
+ drm_dbg_kms(&i915->drm, "DRRS State Enabled:%d\n",
+@@ -1216,18 +1199,17 @@ parse_panel_driver_features(struct drm_i915_private *i915,
+ * driver->drrs_enabled=false
+ */
+ if (!driver->drrs_enabled)
+- panel->vbt.drrs_type = DRRS_TYPE_NONE;
++ i915->vbt.drrs_type = DRRS_TYPE_NONE;
+
+- panel->vbt.psr.enable = driver->psr_enabled;
++ i915->vbt.psr.enable = driver->psr_enabled;
+ }
+ }
+
+ static void
+-parse_power_conservation_features(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_power_conservation_features(struct drm_i915_private *i915)
+ {
+ const struct bdb_lfp_power *power;
+- u8 panel_type = panel->vbt.panel_type;
++ u8 panel_type = i915->vbt.panel_type;
+
+ if (i915->vbt.version < 228)
+ return;
+@@ -1236,7 +1218,7 @@ parse_power_conservation_features(struct drm_i915_private *i915,
+ if (!power)
+ return;
+
+- panel->vbt.psr.enable = power->psr & BIT(panel_type);
++ i915->vbt.psr.enable = power->psr & BIT(panel_type);
+
+ /*
+ * If DRRS is not supported, drrs_type has to be set to 0.
+@@ -1245,20 +1227,19 @@ parse_power_conservation_features(struct drm_i915_private *i915,
+ * power->drrs & BIT(panel_type)=false
+ */
+ if (!(power->drrs & BIT(panel_type)))
+- panel->vbt.drrs_type = DRRS_TYPE_NONE;
++ i915->vbt.drrs_type = DRRS_TYPE_NONE;
+
+ if (i915->vbt.version >= 232)
+- panel->vbt.edp.hobl = power->hobl & BIT(panel_type);
++ i915->vbt.edp.hobl = power->hobl & BIT(panel_type);
+ }
+
+ static void
+-parse_edp(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_edp(struct drm_i915_private *i915)
+ {
+ const struct bdb_edp *edp;
+ const struct edp_power_seq *edp_pps;
+ const struct edp_fast_link_params *edp_link_params;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+
+ edp = find_section(i915, BDB_EDP);
+ if (!edp)
+@@ -1266,13 +1247,13 @@ parse_edp(struct drm_i915_private *i915,
+
+ switch ((edp->color_depth >> (panel_type * 2)) & 3) {
+ case EDP_18BPP:
+- panel->vbt.edp.bpp = 18;
++ i915->vbt.edp.bpp = 18;
+ break;
+ case EDP_24BPP:
+- panel->vbt.edp.bpp = 24;
++ i915->vbt.edp.bpp = 24;
+ break;
+ case EDP_30BPP:
+- panel->vbt.edp.bpp = 30;
++ i915->vbt.edp.bpp = 30;
+ break;
+ }
+
+@@ -1280,14 +1261,14 @@ parse_edp(struct drm_i915_private *i915,
+ edp_pps = &edp->power_seqs[panel_type];
+ edp_link_params = &edp->fast_link_params[panel_type];
+
+- panel->vbt.edp.pps = *edp_pps;
++ i915->vbt.edp.pps = *edp_pps;
+
+ switch (edp_link_params->rate) {
+ case EDP_RATE_1_62:
+- panel->vbt.edp.rate = DP_LINK_BW_1_62;
++ i915->vbt.edp.rate = DP_LINK_BW_1_62;
+ break;
+ case EDP_RATE_2_7:
+- panel->vbt.edp.rate = DP_LINK_BW_2_7;
++ i915->vbt.edp.rate = DP_LINK_BW_2_7;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1298,13 +1279,13 @@ parse_edp(struct drm_i915_private *i915,
+
+ switch (edp_link_params->lanes) {
+ case EDP_LANE_1:
+- panel->vbt.edp.lanes = 1;
++ i915->vbt.edp.lanes = 1;
+ break;
+ case EDP_LANE_2:
+- panel->vbt.edp.lanes = 2;
++ i915->vbt.edp.lanes = 2;
+ break;
+ case EDP_LANE_4:
+- panel->vbt.edp.lanes = 4;
++ i915->vbt.edp.lanes = 4;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1315,16 +1296,16 @@ parse_edp(struct drm_i915_private *i915,
+
+ switch (edp_link_params->preemphasis) {
+ case EDP_PREEMPHASIS_NONE:
+- panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
++ i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
+ break;
+ case EDP_PREEMPHASIS_3_5dB:
+- panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
++ i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
+ break;
+ case EDP_PREEMPHASIS_6dB:
+- panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
++ i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
+ break;
+ case EDP_PREEMPHASIS_9_5dB:
+- panel->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
++ i915->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1335,16 +1316,16 @@ parse_edp(struct drm_i915_private *i915,
+
+ switch (edp_link_params->vswing) {
+ case EDP_VSWING_0_4V:
+- panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
++ i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
+ break;
+ case EDP_VSWING_0_6V:
+- panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
++ i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
+ break;
+ case EDP_VSWING_0_8V:
+- panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
++ i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
+ break;
+ case EDP_VSWING_1_2V:
+- panel->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
++ i915->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1358,25 +1339,24 @@ parse_edp(struct drm_i915_private *i915,
+
+ /* Don't read from VBT if the module parameter has a valid value */
+ if (i915->params.edp_vswing) {
+- panel->vbt.edp.low_vswing =
++ i915->vbt.edp.low_vswing =
+ i915->params.edp_vswing == 1;
+ } else {
+ vswing = (edp->edp_vswing_preemph >> (panel_type * 4)) & 0xF;
+- panel->vbt.edp.low_vswing = vswing == 0;
++ i915->vbt.edp.low_vswing = vswing == 0;
+ }
+ }
+
+- panel->vbt.edp.drrs_msa_timing_delay =
++ i915->vbt.edp.drrs_msa_timing_delay =
+ (edp->sdrrs_msa_timing_delay >> (panel_type * 2)) & 3;
+ }
+
+ static void
+-parse_psr(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_psr(struct drm_i915_private *i915)
+ {
+ const struct bdb_psr *psr;
+ const struct psr_table *psr_table;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+
+ psr = find_section(i915, BDB_PSR);
+ if (!psr) {
+@@ -1386,11 +1366,11 @@ parse_psr(struct drm_i915_private *i915,
+
+ psr_table = &psr->psr_table[panel_type];
+
+- panel->vbt.psr.full_link = psr_table->full_link;
+- panel->vbt.psr.require_aux_wakeup = psr_table->require_aux_to_wakeup;
++ i915->vbt.psr.full_link = psr_table->full_link;
++ i915->vbt.psr.require_aux_wakeup = psr_table->require_aux_to_wakeup;
+
+ /* Allowed VBT values go from 0 to 15 */
+- panel->vbt.psr.idle_frames = psr_table->idle_frames < 0 ? 0 :
++ i915->vbt.psr.idle_frames = psr_table->idle_frames < 0 ? 0 :
+ psr_table->idle_frames > 15 ? 15 : psr_table->idle_frames;
+
+ /*
+@@ -1401,13 +1381,13 @@ parse_psr(struct drm_i915_private *i915,
+ (DISPLAY_VER(i915) >= 9 && !IS_BROXTON(i915))) {
+ switch (psr_table->tp1_wakeup_time) {
+ case 0:
+- panel->vbt.psr.tp1_wakeup_time_us = 500;
++ i915->vbt.psr.tp1_wakeup_time_us = 500;
+ break;
+ case 1:
+- panel->vbt.psr.tp1_wakeup_time_us = 100;
++ i915->vbt.psr.tp1_wakeup_time_us = 100;
+ break;
+ case 3:
+- panel->vbt.psr.tp1_wakeup_time_us = 0;
++ i915->vbt.psr.tp1_wakeup_time_us = 0;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1415,19 +1395,19 @@ parse_psr(struct drm_i915_private *i915,
+ psr_table->tp1_wakeup_time);
+ fallthrough;
+ case 2:
+- panel->vbt.psr.tp1_wakeup_time_us = 2500;
++ i915->vbt.psr.tp1_wakeup_time_us = 2500;
+ break;
+ }
+
+ switch (psr_table->tp2_tp3_wakeup_time) {
+ case 0:
+- panel->vbt.psr.tp2_tp3_wakeup_time_us = 500;
++ i915->vbt.psr.tp2_tp3_wakeup_time_us = 500;
+ break;
+ case 1:
+- panel->vbt.psr.tp2_tp3_wakeup_time_us = 100;
++ i915->vbt.psr.tp2_tp3_wakeup_time_us = 100;
+ break;
+ case 3:
+- panel->vbt.psr.tp2_tp3_wakeup_time_us = 0;
++ i915->vbt.psr.tp2_tp3_wakeup_time_us = 0;
+ break;
+ default:
+ drm_dbg_kms(&i915->drm,
+@@ -1435,12 +1415,12 @@ parse_psr(struct drm_i915_private *i915,
+ psr_table->tp2_tp3_wakeup_time);
+ fallthrough;
+ case 2:
+- panel->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
++ i915->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
+ break;
+ }
+ } else {
+- panel->vbt.psr.tp1_wakeup_time_us = psr_table->tp1_wakeup_time * 100;
+- panel->vbt.psr.tp2_tp3_wakeup_time_us = psr_table->tp2_tp3_wakeup_time * 100;
++ i915->vbt.psr.tp1_wakeup_time_us = psr_table->tp1_wakeup_time * 100;
++ i915->vbt.psr.tp2_tp3_wakeup_time_us = psr_table->tp2_tp3_wakeup_time * 100;
+ }
+
+ if (i915->vbt.version >= 226) {
+@@ -1462,66 +1442,62 @@ parse_psr(struct drm_i915_private *i915,
+ wakeup_time = 2500;
+ break;
+ }
+- panel->vbt.psr.psr2_tp2_tp3_wakeup_time_us = wakeup_time;
++ i915->vbt.psr.psr2_tp2_tp3_wakeup_time_us = wakeup_time;
+ } else {
+ /* Reusing PSR1 wakeup time for PSR2 in older VBTs */
+- panel->vbt.psr.psr2_tp2_tp3_wakeup_time_us = panel->vbt.psr.tp2_tp3_wakeup_time_us;
++ i915->vbt.psr.psr2_tp2_tp3_wakeup_time_us = i915->vbt.psr.tp2_tp3_wakeup_time_us;
+ }
+ }
+
+ static void parse_dsi_backlight_ports(struct drm_i915_private *i915,
+- struct intel_panel *panel,
+- enum port port)
++ u16 version, enum port port)
+ {
+- enum port port_bc = DISPLAY_VER(i915) >= 11 ? PORT_B : PORT_C;
+-
+- if (!panel->vbt.dsi.config->dual_link || i915->vbt.version < 197) {
+- panel->vbt.dsi.bl_ports = BIT(port);
+- if (panel->vbt.dsi.config->cabc_supported)
+- panel->vbt.dsi.cabc_ports = BIT(port);
++ if (!i915->vbt.dsi.config->dual_link || version < 197) {
++ i915->vbt.dsi.bl_ports = BIT(port);
++ if (i915->vbt.dsi.config->cabc_supported)
++ i915->vbt.dsi.cabc_ports = BIT(port);
+
+ return;
+ }
+
+- switch (panel->vbt.dsi.config->dl_dcs_backlight_ports) {
++ switch (i915->vbt.dsi.config->dl_dcs_backlight_ports) {
+ case DL_DCS_PORT_A:
+- panel->vbt.dsi.bl_ports = BIT(PORT_A);
++ i915->vbt.dsi.bl_ports = BIT(PORT_A);
+ break;
+ case DL_DCS_PORT_C:
+- panel->vbt.dsi.bl_ports = BIT(port_bc);
++ i915->vbt.dsi.bl_ports = BIT(PORT_C);
+ break;
+ default:
+ case DL_DCS_PORT_A_AND_C:
+- panel->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(port_bc);
++ i915->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(PORT_C);
+ break;
+ }
+
+- if (!panel->vbt.dsi.config->cabc_supported)
++ if (!i915->vbt.dsi.config->cabc_supported)
+ return;
+
+- switch (panel->vbt.dsi.config->dl_dcs_cabc_ports) {
++ switch (i915->vbt.dsi.config->dl_dcs_cabc_ports) {
+ case DL_DCS_PORT_A:
+- panel->vbt.dsi.cabc_ports = BIT(PORT_A);
++ i915->vbt.dsi.cabc_ports = BIT(PORT_A);
+ break;
+ case DL_DCS_PORT_C:
+- panel->vbt.dsi.cabc_ports = BIT(port_bc);
++ i915->vbt.dsi.cabc_ports = BIT(PORT_C);
+ break;
+ default:
+ case DL_DCS_PORT_A_AND_C:
+- panel->vbt.dsi.cabc_ports =
+- BIT(PORT_A) | BIT(port_bc);
++ i915->vbt.dsi.cabc_ports =
++ BIT(PORT_A) | BIT(PORT_C);
+ break;
+ }
+ }
+
+ static void
+-parse_mipi_config(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_mipi_config(struct drm_i915_private *i915)
+ {
+ const struct bdb_mipi_config *start;
+ const struct mipi_config *config;
+ const struct mipi_pps_data *pps;
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+ enum port port;
+
+ /* parse MIPI blocks only if LFP type is MIPI */
+@@ -1529,7 +1505,7 @@ parse_mipi_config(struct drm_i915_private *i915,
+ return;
+
+ /* Initialize this to undefined indicating no generic MIPI support */
+- panel->vbt.dsi.panel_id = MIPI_DSI_UNDEFINED_PANEL_ID;
++ i915->vbt.dsi.panel_id = MIPI_DSI_UNDEFINED_PANEL_ID;
+
+ /* Block #40 is already parsed and panel_fixed_mode is
+ * stored in i915->lfp_lvds_vbt_mode
+@@ -1556,17 +1532,17 @@ parse_mipi_config(struct drm_i915_private *i915,
+ pps = &start->pps[panel_type];
+
+ /* store as of now full data. Trim when we realise all is not needed */
+- panel->vbt.dsi.config = kmemdup(config, sizeof(struct mipi_config), GFP_KERNEL);
+- if (!panel->vbt.dsi.config)
++ i915->vbt.dsi.config = kmemdup(config, sizeof(struct mipi_config), GFP_KERNEL);
++ if (!i915->vbt.dsi.config)
+ return;
+
+- panel->vbt.dsi.pps = kmemdup(pps, sizeof(struct mipi_pps_data), GFP_KERNEL);
+- if (!panel->vbt.dsi.pps) {
+- kfree(panel->vbt.dsi.config);
++ i915->vbt.dsi.pps = kmemdup(pps, sizeof(struct mipi_pps_data), GFP_KERNEL);
++ if (!i915->vbt.dsi.pps) {
++ kfree(i915->vbt.dsi.config);
+ return;
+ }
+
+- parse_dsi_backlight_ports(i915, panel, port);
++ parse_dsi_backlight_ports(i915, i915->vbt.version, port);
+
+ /* FIXME is the 90 vs. 270 correct? */
+ switch (config->rotation) {
+@@ -1575,25 +1551,25 @@ parse_mipi_config(struct drm_i915_private *i915,
+ * Most (all?) VBTs claim 0 degrees despite having
+ * an upside down panel, thus we do not trust this.
+ */
+- panel->vbt.dsi.orientation =
++ i915->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_UNKNOWN;
+ break;
+ case ENABLE_ROTATION_90:
+- panel->vbt.dsi.orientation =
++ i915->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_RIGHT_UP;
+ break;
+ case ENABLE_ROTATION_180:
+- panel->vbt.dsi.orientation =
++ i915->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP;
+ break;
+ case ENABLE_ROTATION_270:
+- panel->vbt.dsi.orientation =
++ i915->vbt.dsi.orientation =
+ DRM_MODE_PANEL_ORIENTATION_LEFT_UP;
+ break;
+ }
+
+ /* We have mandatory mipi config blocks. Initialize as generic panel */
+- panel->vbt.dsi.panel_id = MIPI_DSI_GENERIC_PANEL_ID;
++ i915->vbt.dsi.panel_id = MIPI_DSI_GENERIC_PANEL_ID;
+ }
+
+ /* Find the sequence block and size for the given panel. */
+@@ -1756,14 +1732,13 @@ static int goto_next_sequence_v3(const u8 *data, int index, int total)
+ * Get len of pre-fixed deassert fragment from a v1 init OTP sequence,
+ * skip all delay + gpio operands and stop at the first DSI packet op.
+ */
+-static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915)
+ {
+- const u8 *data = panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
++ const u8 *data = i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
+ int index, len;
+
+ if (drm_WARN_ON(&i915->drm,
+- !data || panel->vbt.dsi.seq_version != 1))
++ !data || i915->vbt.dsi.seq_version != 1))
+ return 0;
+
+ /* index = 1 to skip sequence byte */
+@@ -1791,8 +1766,7 @@ static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915,
+ * these devices we split the init OTP sequence into a deassert sequence and
+ * the actual init OTP part.
+ */
+-static void fixup_mipi_sequences(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++static void fixup_mipi_sequences(struct drm_i915_private *i915)
+ {
+ u8 *init_otp;
+ int len;
+@@ -1802,18 +1776,18 @@ static void fixup_mipi_sequences(struct drm_i915_private *i915,
+ return;
+
+ /* Limit this to v1 vid-mode sequences */
+- if (panel->vbt.dsi.config->is_cmd_mode ||
+- panel->vbt.dsi.seq_version != 1)
++ if (i915->vbt.dsi.config->is_cmd_mode ||
++ i915->vbt.dsi.seq_version != 1)
+ return;
+
+ /* Only do this if there are otp and assert seqs and no deassert seq */
+- if (!panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] ||
+- !panel->vbt.dsi.sequence[MIPI_SEQ_ASSERT_RESET] ||
+- panel->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET])
++ if (!i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] ||
++ !i915->vbt.dsi.sequence[MIPI_SEQ_ASSERT_RESET] ||
++ i915->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET])
+ return;
+
+ /* The deassert-sequence ends at the first DSI packet */
+- len = get_init_otp_deassert_fragment_len(i915, panel);
++ len = get_init_otp_deassert_fragment_len(i915);
+ if (!len)
+ return;
+
+@@ -1821,26 +1795,25 @@ static void fixup_mipi_sequences(struct drm_i915_private *i915,
+ "Using init OTP fragment to deassert reset\n");
+
+ /* Copy the fragment, update seq byte and terminate it */
+- init_otp = (u8 *)panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
+- panel->vbt.dsi.deassert_seq = kmemdup(init_otp, len + 1, GFP_KERNEL);
+- if (!panel->vbt.dsi.deassert_seq)
++ init_otp = (u8 *)i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
++ i915->vbt.dsi.deassert_seq = kmemdup(init_otp, len + 1, GFP_KERNEL);
++ if (!i915->vbt.dsi.deassert_seq)
+ return;
+- panel->vbt.dsi.deassert_seq[0] = MIPI_SEQ_DEASSERT_RESET;
+- panel->vbt.dsi.deassert_seq[len] = MIPI_SEQ_ELEM_END;
++ i915->vbt.dsi.deassert_seq[0] = MIPI_SEQ_DEASSERT_RESET;
++ i915->vbt.dsi.deassert_seq[len] = MIPI_SEQ_ELEM_END;
+ /* Use the copy for deassert */
+- panel->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET] =
+- panel->vbt.dsi.deassert_seq;
++ i915->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET] =
++ i915->vbt.dsi.deassert_seq;
+ /* Replace the last byte of the fragment with init OTP seq byte */
+ init_otp[len - 1] = MIPI_SEQ_INIT_OTP;
+ /* And make MIPI_SEQ_INIT_OTP point to it */
+- panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1;
++ i915->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1;
+ }
+
+ static void
+-parse_mipi_sequence(struct drm_i915_private *i915,
+- struct intel_panel *panel)
++parse_mipi_sequence(struct drm_i915_private *i915)
+ {
+- int panel_type = panel->vbt.panel_type;
++ int panel_type = i915->vbt.panel_type;
+ const struct bdb_mipi_sequence *sequence;
+ const u8 *seq_data;
+ u32 seq_size;
+@@ -1848,7 +1821,7 @@ parse_mipi_sequence(struct drm_i915_private *i915,
+ int index = 0;
+
+ /* Only our generic panel driver uses the sequence block. */
+- if (panel->vbt.dsi.panel_id != MIPI_DSI_GENERIC_PANEL_ID)
++ if (i915->vbt.dsi.panel_id != MIPI_DSI_GENERIC_PANEL_ID)
+ return;
+
+ sequence = find_section(i915, BDB_MIPI_SEQUENCE);
+@@ -1894,7 +1867,7 @@ parse_mipi_sequence(struct drm_i915_private *i915,
+ drm_dbg_kms(&i915->drm,
+ "Unsupported sequence %u\n", seq_id);
+
+- panel->vbt.dsi.sequence[seq_id] = data + index;
++ i915->vbt.dsi.sequence[seq_id] = data + index;
+
+ if (sequence->version >= 3)
+ index = goto_next_sequence_v3(data, index, seq_size);
+@@ -1907,18 +1880,18 @@ parse_mipi_sequence(struct drm_i915_private *i915,
+ }
+ }
+
+- panel->vbt.dsi.data = data;
+- panel->vbt.dsi.size = seq_size;
+- panel->vbt.dsi.seq_version = sequence->version;
++ i915->vbt.dsi.data = data;
++ i915->vbt.dsi.size = seq_size;
++ i915->vbt.dsi.seq_version = sequence->version;
+
+- fixup_mipi_sequences(i915, panel);
++ fixup_mipi_sequences(i915);
+
+ drm_dbg(&i915->drm, "MIPI related VBT parsing complete\n");
+ return;
+
+ err:
+ kfree(data);
+- memset(panel->vbt.dsi.sequence, 0, sizeof(panel->vbt.dsi.sequence));
++ memset(i915->vbt.dsi.sequence, 0, sizeof(i915->vbt.dsi.sequence));
+ }
+
+ static void
+@@ -2672,6 +2645,15 @@ init_vbt_defaults(struct drm_i915_private *i915)
+ {
+ i915->vbt.crt_ddc_pin = GMBUS_PIN_VGADDC;
+
++ /* Default to having backlight */
++ i915->vbt.backlight.present = true;
++
++ /* LFP panel data */
++ i915->vbt.lvds_dither = 1;
++
++ /* SDVO panel data */
++ i915->vbt.sdvo_lvds_vbt_mode = NULL;
++
+ /* general features */
+ i915->vbt.int_tv_support = 1;
+ i915->vbt.int_crt_support = 1;
+@@ -2691,17 +2673,6 @@ init_vbt_defaults(struct drm_i915_private *i915)
+ i915->vbt.lvds_ssc_freq);
+ }
+
+-/* Common defaults which may be overridden by VBT. */
+-static void
+-init_vbt_panel_defaults(struct intel_panel *panel)
+-{
+- /* Default to having backlight */
+- panel->vbt.backlight.present = true;
+-
+- /* LFP panel data */
+- panel->vbt.lvds_dither = true;
+-}
+-
+ /* Defaults to initialize only if there is no VBT. */
+ static void
+ init_vbt_missing_defaults(struct drm_i915_private *i915)
+@@ -2988,7 +2959,17 @@ void intel_bios_init(struct drm_i915_private *i915)
+ /* Grab useful general definitions */
+ parse_general_features(i915);
+ parse_general_definitions(i915);
++ parse_panel_options(i915);
++ parse_generic_dtd(i915);
++ parse_lfp_data(i915);
++ parse_lfp_backlight(i915);
++ parse_sdvo_panel_data(i915);
+ parse_driver_features(i915);
++ parse_power_conservation_features(i915);
++ parse_edp(i915);
++ parse_psr(i915);
++ parse_mipi_config(i915);
++ parse_mipi_sequence(i915);
+
+ /* Depends on child device list */
+ parse_compression_parameters(i915);
+@@ -3007,24 +2988,6 @@ out:
+ kfree(oprom_vbt);
+ }
+
+-void intel_bios_init_panel(struct drm_i915_private *i915,
+- struct intel_panel *panel)
+-{
+- init_vbt_panel_defaults(panel);
+-
+- parse_panel_options(i915, panel);
+- parse_generic_dtd(i915, panel);
+- parse_lfp_data(i915, panel);
+- parse_lfp_backlight(i915, panel);
+- parse_sdvo_panel_data(i915, panel);
+- parse_panel_driver_features(i915, panel);
+- parse_power_conservation_features(i915, panel);
+- parse_edp(i915, panel);
+- parse_psr(i915, panel);
+- parse_mipi_config(i915, panel);
+- parse_mipi_sequence(i915, panel);
+-}
+-
+ /**
+ * intel_bios_driver_remove - Free any resources allocated by intel_bios_init()
+ * @i915: i915 device instance
+@@ -3044,22 +3007,19 @@ void intel_bios_driver_remove(struct drm_i915_private *i915)
+ list_del(&entry->node);
+ kfree(entry);
+ }
+-}
+
+-void intel_bios_fini_panel(struct intel_panel *panel)
+-{
+- kfree(panel->vbt.sdvo_lvds_vbt_mode);
+- panel->vbt.sdvo_lvds_vbt_mode = NULL;
+- kfree(panel->vbt.lfp_lvds_vbt_mode);
+- panel->vbt.lfp_lvds_vbt_mode = NULL;
+- kfree(panel->vbt.dsi.data);
+- panel->vbt.dsi.data = NULL;
+- kfree(panel->vbt.dsi.pps);
+- panel->vbt.dsi.pps = NULL;
+- kfree(panel->vbt.dsi.config);
+- panel->vbt.dsi.config = NULL;
+- kfree(panel->vbt.dsi.deassert_seq);
+- panel->vbt.dsi.deassert_seq = NULL;
++ kfree(i915->vbt.sdvo_lvds_vbt_mode);
++ i915->vbt.sdvo_lvds_vbt_mode = NULL;
++ kfree(i915->vbt.lfp_lvds_vbt_mode);
++ i915->vbt.lfp_lvds_vbt_mode = NULL;
++ kfree(i915->vbt.dsi.data);
++ i915->vbt.dsi.data = NULL;
++ kfree(i915->vbt.dsi.pps);
++ i915->vbt.dsi.pps = NULL;
++ kfree(i915->vbt.dsi.config);
++ i915->vbt.dsi.config = NULL;
++ kfree(i915->vbt.dsi.deassert_seq);
++ i915->vbt.dsi.deassert_seq = NULL;
+ }
+
+ /**
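The intel_bios.c changes above revert the panel VBT data from a per-connector copy back to a single device-wide copy. A minimal before/after sketch of the caller side (illustrative only; the names follow the diff):

    /* 5.19.13: each connector parsed and owned its own copy */
    intel_bios_init_panel(i915, &connector->panel);
    type = connector->panel.vbt.backlight.type;

    /* 5.19.14: parsed once in intel_bios_init(), read device-wide */
    type = i915->vbt.backlight.type;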
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.h b/drivers/gpu/drm/i915/display/intel_bios.h
+index 86129f015718d..4709c4d298059 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.h
++++ b/drivers/gpu/drm/i915/display/intel_bios.h
+@@ -36,7 +36,6 @@ struct drm_i915_private;
+ struct intel_bios_encoder_data;
+ struct intel_crtc_state;
+ struct intel_encoder;
+-struct intel_panel;
+ enum port;
+
+ enum intel_backlight_type {
+@@ -231,9 +230,6 @@ struct mipi_pps_data {
+ } __packed;
+
+ void intel_bios_init(struct drm_i915_private *dev_priv);
+-void intel_bios_init_panel(struct drm_i915_private *dev_priv,
+- struct intel_panel *panel);
+-void intel_bios_fini_panel(struct intel_panel *panel);
+ void intel_bios_driver_remove(struct drm_i915_private *dev_priv);
+ bool intel_bios_is_valid_vbt(const void *buf, size_t size);
+ bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 333871cf3a2c5..9e6fa59eabba7 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3433,8 +3433,26 @@ static void intel_ddi_get_config(struct intel_encoder *encoder,
+ pipe_config->has_audio =
+ intel_ddi_is_audio_enabled(dev_priv, cpu_transcoder);
+
+- if (encoder->type == INTEL_OUTPUT_EDP)
+- intel_edp_fixup_vbt_bpp(encoder, pipe_config->pipe_bpp);
++ if (encoder->type == INTEL_OUTPUT_EDP && dev_priv->vbt.edp.bpp &&
++ pipe_config->pipe_bpp > dev_priv->vbt.edp.bpp) {
++ /*
++ * This is a big fat ugly hack.
++ *
++ * Some machines in UEFI boot mode provide us a VBT that has 18
++ * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
++ * unknown we fail to light up. Yet the same BIOS boots up with
++ * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
++ * max, not what it tells us to use.
++ *
++ * Note: This will still be broken if the eDP panel is not lit
++ * up by the BIOS, and thus we can't get the mode at module
++ * load.
++ */
++ drm_dbg_kms(&dev_priv->drm,
++ "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
++ pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
++ dev_priv->vbt.edp.bpp = pipe_config->pipe_bpp;
++ }
+
+ ddi_dotclock_get(pipe_config);
+
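Using the numbers from the comment above, the override amounts to an upward clamp of the stored VBT maximum. A sketch with a hypothetical helper (not kernel code):

    /* Raise the VBT eDP bpp to what the BIOS actually used. */
    static int fixup_vbt_bpp(int vbt_bpp, int pipe_bpp)
    {
        if (vbt_bpp && pipe_bpp > vbt_bpp)
            vbt_bpp = pipe_bpp;  /* e.g. VBT says 18, BIOS lit 24 -> keep 24 */
        return vbt_bpp;
    }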
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+index b490acd0ab691..85f58dd3df722 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+@@ -1062,18 +1062,17 @@ bool is_hobl_buf_trans(const struct intel_ddi_buf_trans *table)
+
+ static bool use_edp_hobl(struct intel_encoder *encoder)
+ {
++ struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+ struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+- struct intel_connector *connector = intel_dp->attached_connector;
+
+- return connector->panel.vbt.edp.hobl && !intel_dp->hobl_failed;
++ return i915->vbt.edp.hobl && !intel_dp->hobl_failed;
+ }
+
+ static bool use_edp_low_vswing(struct intel_encoder *encoder)
+ {
+- struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+- struct intel_connector *connector = intel_dp->attached_connector;
++ struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+- return connector->panel.vbt.edp.low_vswing;
++ return i915->vbt.edp.low_vswing;
+ }
+
+ static const struct intel_ddi_buf_trans *
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index e2561c5d4953c..408152f9f46a4 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -279,73 +279,6 @@ struct intel_panel_bl_funcs {
+ u32 (*hz_to_pwm)(struct intel_connector *connector, u32 hz);
+ };
+
+-enum drrs_type {
+- DRRS_TYPE_NONE,
+- DRRS_TYPE_STATIC,
+- DRRS_TYPE_SEAMLESS,
+-};
+-
+-struct intel_vbt_panel_data {
+- struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
+- struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
+-
+- /* Feature bits */
+- unsigned int panel_type:4;
+- unsigned int lvds_dither:1;
+- unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
+-
+- u8 seamless_drrs_min_refresh_rate;
+- enum drrs_type drrs_type;
+-
+- struct {
+- int rate;
+- int lanes;
+- int preemphasis;
+- int vswing;
+- int bpp;
+- struct edp_power_seq pps;
+- u8 drrs_msa_timing_delay;
+- bool low_vswing;
+- bool initialized;
+- bool hobl;
+- } edp;
+-
+- struct {
+- bool enable;
+- bool full_link;
+- bool require_aux_wakeup;
+- int idle_frames;
+- int tp1_wakeup_time_us;
+- int tp2_tp3_wakeup_time_us;
+- int psr2_tp2_tp3_wakeup_time_us;
+- } psr;
+-
+- struct {
+- u16 pwm_freq_hz;
+- u16 brightness_precision_bits;
+- bool present;
+- bool active_low_pwm;
+- u8 min_brightness; /* min_brightness/255 of max */
+- u8 controller; /* brightness controller number */
+- enum intel_backlight_type type;
+- } backlight;
+-
+- /* MIPI DSI */
+- struct {
+- u16 panel_id;
+- struct mipi_config *config;
+- struct mipi_pps_data *pps;
+- u16 bl_ports;
+- u16 cabc_ports;
+- u8 seq_version;
+- u32 size;
+- u8 *data;
+- const u8 *sequence[MIPI_SEQ_MAX];
+- u8 *deassert_seq; /* Used by fixup_mipi_sequences() */
+- enum drm_panel_orientation orientation;
+- } dsi;
+-};
+-
+ struct intel_panel {
+ struct list_head fixed_modes;
+
+@@ -385,8 +318,6 @@ struct intel_panel {
+ const struct intel_panel_bl_funcs *pwm_funcs;
+ void (*power)(struct intel_connector *, bool enable);
+ } backlight;
+-
+- struct intel_vbt_panel_data vbt;
+ };
+
+ struct intel_digital_port;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 0efec6023fbe8..fe8b6b72970a2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1246,12 +1246,11 @@ static int intel_dp_max_bpp(struct intel_dp *intel_dp,
+ if (intel_dp_is_edp(intel_dp)) {
+ /* Get bpp from vbt only for panels that don't have bpp in edid */
+ if (intel_connector->base.display_info.bpc == 0 &&
+- intel_connector->panel.vbt.edp.bpp &&
+- intel_connector->panel.vbt.edp.bpp < bpp) {
++ dev_priv->vbt.edp.bpp && dev_priv->vbt.edp.bpp < bpp) {
+ drm_dbg_kms(&dev_priv->drm,
+ "clamping bpp for eDP panel to BIOS-provided %i\n",
+- intel_connector->panel.vbt.edp.bpp);
+- bpp = intel_connector->panel.vbt.edp.bpp;
++ dev_priv->vbt.edp.bpp);
++ bpp = dev_priv->vbt.edp.bpp;
+ }
+ }
+
+@@ -1908,7 +1907,7 @@ intel_dp_drrs_compute_config(struct intel_connector *connector,
+ }
+
+ if (IS_IRONLAKE(i915) || IS_SANDYBRIDGE(i915) || IS_IVYBRIDGE(i915))
+- pipe_config->msa_timing_delay = connector->panel.vbt.edp.drrs_msa_timing_delay;
++ pipe_config->msa_timing_delay = i915->vbt.edp.drrs_msa_timing_delay;
+
+ pipe_config->has_drrs = true;
+
+@@ -2738,33 +2737,6 @@ static void intel_edp_mso_mode_fixup(struct intel_connector *connector,
+ DRM_MODE_ARG(mode));
+ }
+
+-void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp)
+-{
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+- struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+- struct intel_connector *connector = intel_dp->attached_connector;
+-
+- if (connector->panel.vbt.edp.bpp && pipe_bpp > connector->panel.vbt.edp.bpp) {
+- /*
+- * This is a big fat ugly hack.
+- *
+- * Some machines in UEFI boot mode provide us a VBT that has 18
+- * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
+- * unknown we fail to light up. Yet the same BIOS boots up with
+- * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
+- * max, not what it tells us to use.
+- *
+- * Note: This will still be broken if the eDP panel is not lit
+- * up by the BIOS, and thus we can't get the mode at module
+- * load.
+- */
+- drm_dbg_kms(&dev_priv->drm,
+- "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+- pipe_bpp, connector->panel.vbt.edp.bpp);
+- connector->panel.vbt.edp.bpp = pipe_bpp;
+- }
+-}
+-
+ static void intel_edp_mso_init(struct intel_dp *intel_dp)
+ {
+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+@@ -5240,10 +5212,8 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
+ }
+ intel_connector->edid = edid;
+
+- intel_bios_init_panel(dev_priv, &intel_connector->panel);
+-
+ intel_panel_add_edid_fixed_modes(intel_connector,
+- intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
++ dev_priv->vbt.drrs_type != DRRS_TYPE_NONE);
+
+ /* MSO requires information from the EDID */
+ intel_edp_mso_init(intel_dp);
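Note that this clamp runs in the opposite direction from the intel_ddi.c override above: here a computed link bpp is lowered to the VBT value, but only when the EDID reports no bpc. A sketch (hypothetical helper, illustrative names):

    static int clamp_edp_bpp(int bpp, int edid_bpc, int vbt_bpp)
    {
        if (edid_bpc == 0 && vbt_bpp && vbt_bpp < bpp)
            bpp = vbt_bpp;    /* e.g. 24 -> 18 when the VBT says 18 */
        return bpp;
    }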
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index a54902c713a34..d457e17bdc57e 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -29,7 +29,6 @@ struct link_config_limits {
+ int min_bpp, max_bpp;
+ };
+
+-void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp);
+ void intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
+ struct intel_crtc_state *pipe_config,
+ struct link_config_limits *limits);
+@@ -64,7 +63,6 @@ enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *dig_port,
+ void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state,
+ const struct drm_connector_state *conn_state);
+ void intel_edp_backlight_off(const struct drm_connector_state *conn_state);
+-void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp);
+ void intel_dp_mst_suspend(struct drm_i915_private *dev_priv);
+ void intel_dp_mst_resume(struct drm_i915_private *dev_priv);
+ int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+index c92d5bb2326a3..fb6cf30ee6281 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+@@ -370,7 +370,7 @@ static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector,
+ int ret;
+
+ ret = drm_edp_backlight_init(&intel_dp->aux, &panel->backlight.edp.vesa.info,
+- panel->vbt.backlight.pwm_freq_hz, intel_dp->edp_dpcd,
++ i915->vbt.backlight.pwm_freq_hz, intel_dp->edp_dpcd,
+ &current_level, &current_mode);
+ if (ret < 0)
+ return ret;
+@@ -454,7 +454,7 @@ int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector)
+ case INTEL_DP_AUX_BACKLIGHT_OFF:
+ return -ENODEV;
+ case INTEL_DP_AUX_BACKLIGHT_AUTO:
+- switch (panel->vbt.backlight.type) {
++ switch (i915->vbt.backlight.type) {
+ case INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE:
+ try_vesa_interface = true;
+ break;
+@@ -466,7 +466,7 @@ int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector)
+ }
+ break;
+ case INTEL_DP_AUX_BACKLIGHT_ON:
+- if (panel->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE)
++ if (i915->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE)
+ try_intel_interface = true;
+
+ try_vesa_interface = true;
+diff --git a/drivers/gpu/drm/i915/display/intel_drrs.c b/drivers/gpu/drm/i915/display/intel_drrs.c
+index 7da4a9cbe4ba4..166caf293f7bc 100644
+--- a/drivers/gpu/drm/i915/display/intel_drrs.c
++++ b/drivers/gpu/drm/i915/display/intel_drrs.c
+@@ -217,6 +217,9 @@ static void intel_drrs_frontbuffer_update(struct drm_i915_private *dev_priv,
+ {
+ struct intel_crtc *crtc;
+
++ if (dev_priv->vbt.drrs_type != DRRS_TYPE_SEAMLESS)
++ return;
++
+ for_each_intel_crtc(&dev_priv->drm, crtc) {
+ unsigned int frontbuffer_bits;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi.c b/drivers/gpu/drm/i915/display/intel_dsi.c
+index 35e121cd226c5..389a8c24cdc1e 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi.c
+@@ -102,7 +102,7 @@ intel_dsi_get_panel_orientation(struct intel_connector *connector)
+ struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ enum drm_panel_orientation orientation;
+
+- orientation = connector->panel.vbt.dsi.orientation;
++ orientation = dev_priv->vbt.dsi.orientation;
+ if (orientation != DRM_MODE_PANEL_ORIENTATION_UNKNOWN)
+ return orientation;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
+index 1bc7118c56a2a..7d234429e71ef 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c
+@@ -160,10 +160,12 @@ static void dcs_enable_backlight(const struct intel_crtc_state *crtc_state,
+ static int dcs_setup_backlight(struct intel_connector *connector,
+ enum pipe unused)
+ {
++ struct drm_device *dev = connector->base.dev;
++ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct intel_panel *panel = &connector->panel;
+
+- if (panel->vbt.backlight.brightness_precision_bits > 8)
+- panel->backlight.max = (1 << panel->vbt.backlight.brightness_precision_bits) - 1;
++ if (dev_priv->vbt.backlight.brightness_precision_bits > 8)
++ panel->backlight.max = (1 << dev_priv->vbt.backlight.brightness_precision_bits) - 1;
+ else
+ panel->backlight.max = PANEL_PWM_MAX_VALUE;
+
+@@ -183,10 +185,11 @@ static const struct intel_panel_bl_funcs dcs_bl_funcs = {
+ int intel_dsi_dcs_init_backlight_funcs(struct intel_connector *intel_connector)
+ {
+ struct drm_device *dev = intel_connector->base.dev;
++ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct intel_encoder *encoder = intel_attached_encoder(intel_connector);
+ struct intel_panel *panel = &intel_connector->panel;
+
+- if (panel->vbt.backlight.type != INTEL_BACKLIGHT_DSI_DCS)
++ if (dev_priv->vbt.backlight.type != INTEL_BACKLIGHT_DSI_DCS)
+ return -ENODEV;
+
+ if (drm_WARN_ON(dev, encoder->type != INTEL_OUTPUT_DSI))
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index 75e8cc4337c93..dd24aef925f2e 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -240,10 +240,9 @@ static const u8 *mipi_exec_delay(struct intel_dsi *intel_dsi, const u8 *data)
+ return data;
+ }
+
+-static void vlv_exec_gpio(struct intel_connector *connector,
++static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
+- struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ struct gpio_map *map;
+ u16 pconf0, padval;
+ u32 tmp;
+@@ -257,7 +256,7 @@ static void vlv_exec_gpio(struct intel_connector *connector,
+
+ map = &vlv_gpio_table[gpio_index];
+
+- if (connector->panel.vbt.dsi.seq_version >= 3) {
++ if (dev_priv->vbt.dsi.seq_version >= 3) {
+ /* XXX: this assumes vlv_gpio_table only has NC GPIOs. */
+ port = IOSF_PORT_GPIO_NC;
+ } else {
+@@ -288,15 +287,14 @@ static void vlv_exec_gpio(struct intel_connector *connector,
+ vlv_iosf_sb_put(dev_priv, BIT(VLV_IOSF_SB_GPIO));
+ }
+
+-static void chv_exec_gpio(struct intel_connector *connector,
++static void chv_exec_gpio(struct drm_i915_private *dev_priv,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
+- struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ u16 cfg0, cfg1;
+ u16 family_num;
+ u8 port;
+
+- if (connector->panel.vbt.dsi.seq_version >= 3) {
++ if (dev_priv->vbt.dsi.seq_version >= 3) {
+ if (gpio_index >= CHV_GPIO_IDX_START_SE) {
+ /* XXX: it's unclear whether 255->57 is part of SE. */
+ gpio_index -= CHV_GPIO_IDX_START_SE;
+@@ -342,10 +340,9 @@ static void chv_exec_gpio(struct intel_connector *connector,
+ vlv_iosf_sb_put(dev_priv, BIT(VLV_IOSF_SB_GPIO));
+ }
+
+-static void bxt_exec_gpio(struct intel_connector *connector,
++static void bxt_exec_gpio(struct drm_i915_private *dev_priv,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
+- struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ /* XXX: this table is a quick ugly hack. */
+ static struct gpio_desc *bxt_gpio_table[U8_MAX + 1];
+ struct gpio_desc *gpio_desc = bxt_gpio_table[gpio_index];
+@@ -369,11 +366,9 @@ static void bxt_exec_gpio(struct intel_connector *connector,
+ gpiod_set_value(gpio_desc, value);
+ }
+
+-static void icl_exec_gpio(struct intel_connector *connector,
++static void icl_exec_gpio(struct drm_i915_private *dev_priv,
+ u8 gpio_source, u8 gpio_index, bool value)
+ {
+- struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+-
+ drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n");
+ }
+
+@@ -381,19 +376,18 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+ u8 gpio_source, gpio_index = 0, gpio_number;
+ bool value;
+
+ drm_dbg_kms(&dev_priv->drm, "\n");
+
+- if (connector->panel.vbt.dsi.seq_version >= 3)
++ if (dev_priv->vbt.dsi.seq_version >= 3)
+ gpio_index = *data++;
+
+ gpio_number = *data++;
+
+ /* gpio source in sequence v2 only */
+- if (connector->panel.vbt.dsi.seq_version == 2)
++ if (dev_priv->vbt.dsi.seq_version == 2)
+ gpio_source = (*data >> 1) & 3;
+ else
+ gpio_source = 0;
+@@ -402,13 +396,13 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
+ value = *data++ & 1;
+
+ if (DISPLAY_VER(dev_priv) >= 11)
+- icl_exec_gpio(connector, gpio_source, gpio_index, value);
++ icl_exec_gpio(dev_priv, gpio_source, gpio_index, value);
+ else if (IS_VALLEYVIEW(dev_priv))
+- vlv_exec_gpio(connector, gpio_source, gpio_number, value);
++ vlv_exec_gpio(dev_priv, gpio_source, gpio_number, value);
+ else if (IS_CHERRYVIEW(dev_priv))
+- chv_exec_gpio(connector, gpio_source, gpio_number, value);
++ chv_exec_gpio(dev_priv, gpio_source, gpio_number, value);
+ else
+- bxt_exec_gpio(connector, gpio_source, gpio_index, value);
++ bxt_exec_gpio(dev_priv, gpio_source, gpio_index, value);
+
+ return data;
+ }
+@@ -591,15 +585,14 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ enum mipi_seq seq_id)
+ {
+ struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+ const u8 *data;
+ fn_mipi_elem_exec mipi_elem_exec;
+
+ if (drm_WARN_ON(&dev_priv->drm,
+- seq_id >= ARRAY_SIZE(connector->panel.vbt.dsi.sequence)))
++ seq_id >= ARRAY_SIZE(dev_priv->vbt.dsi.sequence)))
+ return;
+
+- data = connector->panel.vbt.dsi.sequence[seq_id];
++ data = dev_priv->vbt.dsi.sequence[seq_id];
+ if (!data)
+ return;
+
+@@ -612,7 +605,7 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ data++;
+
+ /* Skip Size of Sequence. */
+- if (connector->panel.vbt.dsi.seq_version >= 3)
++ if (dev_priv->vbt.dsi.seq_version >= 3)
+ data += 4;
+
+ while (1) {
+@@ -628,7 +621,7 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
+ mipi_elem_exec = NULL;
+
+ /* Size of Operation. */
+- if (connector->panel.vbt.dsi.seq_version >= 3)
++ if (dev_priv->vbt.dsi.seq_version >= 3)
+ operation_size = *data++;
+
+ if (mipi_elem_exec) {
+@@ -676,10 +669,10 @@ void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi,
+
+ void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec)
+ {
+- struct intel_connector *connector = intel_dsi->attached_connector;
++ struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
+
+ /* For v3 VBTs in vid-mode the delays are part of the VBT sequences */
+- if (is_vid_mode(intel_dsi) && connector->panel.vbt.dsi.seq_version >= 3)
++ if (is_vid_mode(intel_dsi) && dev_priv->vbt.dsi.seq_version >= 3)
+ return;
+
+ msleep(msec);
+@@ -741,10 +734,9 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+- struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+- struct mipi_pps_data *pps = connector->panel.vbt.dsi.pps;
+- struct drm_display_mode *mode = connector->panel.vbt.lfp_lvds_vbt_mode;
++ struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
++ struct mipi_pps_data *pps = dev_priv->vbt.dsi.pps;
++ struct drm_display_mode *mode = dev_priv->vbt.lfp_lvds_vbt_mode;
+ u16 burst_mode_ratio;
+ enum port port;
+
+@@ -880,8 +872,7 @@ void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+- struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
++ struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
+ enum gpiod_flags flags = panel_is_on ? GPIOD_OUT_HIGH : GPIOD_OUT_LOW;
+ bool want_backlight_gpio = false;
+ bool want_panel_gpio = false;
+@@ -936,8 +927,7 @@ void intel_dsi_vbt_gpio_cleanup(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+- struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
++ struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
+
+ if (intel_dsi->gpio_panel) {
+ gpiod_put(intel_dsi->gpio_panel);
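The seq_version checks in mipi_exec_gpio() above decode slightly different element layouts. The following sketch is inferred from the parsing code only, not from a VBT specification:

    /* v1:  [gpio_number][flags]
     * v2:  [gpio_number][flags]             flags bits 2:1 = gpio_source
     * v3+: [gpio_index][gpio_number][flags]
     * Bit 0 of the flags byte is the level to drive; v1 and v3+ use source 0.
     */
    static void decode_gpio_elem(const u8 *data, u8 seq_version)
    {
        u8 index = 0, number, source = 0, value;

        if (seq_version >= 3)
            index = *data++;
        number = *data++;
        if (seq_version == 2)
            source = (*data >> 1) & 3;
        value = *data & 1;
    }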
+diff --git a/drivers/gpu/drm/i915/display/intel_lvds.c b/drivers/gpu/drm/i915/display/intel_lvds.c
+index 9f250a70519aa..e8478161f8b9b 100644
+--- a/drivers/gpu/drm/i915/display/intel_lvds.c
++++ b/drivers/gpu/drm/i915/display/intel_lvds.c
+@@ -809,7 +809,7 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
+ else
+ val &= ~(LVDS_DETECTED | LVDS_PIPE_SEL_MASK);
+ if (val == 0)
+- val = connector->panel.vbt.bios_lvds_val;
++ val = dev_priv->vbt.bios_lvds_val;
+
+ return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
+ }
+@@ -967,11 +967,9 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ }
+ intel_connector->edid = edid;
+
+- intel_bios_init_panel(dev_priv, &intel_connector->panel);
+-
+ /* Try EDID first */
+ intel_panel_add_edid_fixed_modes(intel_connector,
+- intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
++ dev_priv->vbt.drrs_type != DRRS_TYPE_NONE);
+
+ /* Failed to get EDID, what about VBT? */
+ if (!intel_panel_preferred_fixed_mode(intel_connector))
+diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c
+index d055e41185582..d1d1b59102d69 100644
+--- a/drivers/gpu/drm/i915/display/intel_panel.c
++++ b/drivers/gpu/drm/i915/display/intel_panel.c
+@@ -75,8 +75,9 @@ const struct drm_display_mode *
+ intel_panel_downclock_mode(struct intel_connector *connector,
+ const struct drm_display_mode *adjusted_mode)
+ {
++ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *fixed_mode, *best_mode = NULL;
+- int min_vrefresh = connector->panel.vbt.seamless_drrs_min_refresh_rate;
++ int min_vrefresh = i915->vbt.seamless_drrs_min_refresh_rate;
+ int max_vrefresh = drm_mode_vrefresh(adjusted_mode);
+
+ /* pick the fixed_mode with the lowest refresh rate */
+@@ -112,11 +113,13 @@ int intel_panel_get_modes(struct intel_connector *connector)
+
+ enum drrs_type intel_panel_drrs_type(struct intel_connector *connector)
+ {
++ struct drm_i915_private *i915 = to_i915(connector->base.dev);
++
+ if (list_empty(&connector->panel.fixed_modes) ||
+ list_is_singular(&connector->panel.fixed_modes))
+ return DRRS_TYPE_NONE;
+
+- return connector->panel.vbt.drrs_type;
++ return i915->vbt.drrs_type;
+ }
+
+ int intel_panel_compute_config(struct intel_connector *connector,
+@@ -257,7 +260,7 @@ void intel_panel_add_vbt_lfp_fixed_mode(struct intel_connector *connector)
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *mode;
+
+- mode = connector->panel.vbt.lfp_lvds_vbt_mode;
++ mode = i915->vbt.lfp_lvds_vbt_mode;
+ if (!mode)
+ return;
+
+@@ -271,7 +274,7 @@ void intel_panel_add_vbt_sdvo_fixed_mode(struct intel_connector *connector)
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+ const struct drm_display_mode *mode;
+
+- mode = connector->panel.vbt.sdvo_lvds_vbt_mode;
++ mode = i915->vbt.sdvo_lvds_vbt_mode;
+ if (!mode)
+ return;
+
+@@ -636,8 +639,6 @@ void intel_panel_fini(struct intel_connector *connector)
+
+ intel_backlight_destroy(panel);
+
+- intel_bios_fini_panel(panel);
+-
+ list_for_each_entry_safe(fixed_mode, next, &panel->fixed_modes, head) {
+ list_del(&fixed_mode->head);
+ drm_mode_destroy(connector->base.dev, fixed_mode);
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index a226e4e5c5698..5a598dd060391 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -209,8 +209,7 @@ static int
+ bxt_power_sequencer_idx(struct intel_dp *intel_dp)
+ {
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+- struct intel_connector *connector = intel_dp->attached_connector;
+- int backlight_controller = connector->panel.vbt.backlight.controller;
++ int backlight_controller = dev_priv->vbt.backlight.controller;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+@@ -1160,84 +1159,53 @@ intel_pps_verify_state(struct intel_dp *intel_dp)
+ }
+ }
+
+-static void pps_init_delays_cur(struct intel_dp *intel_dp,
+- struct edp_power_seq *cur)
++static void pps_init_delays(struct intel_dp *intel_dp)
+ {
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
++ struct edp_power_seq cur, vbt, spec,
++ *final = &intel_dp->pps.pps_delays;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+- intel_pps_readout_hw_state(intel_dp, cur);
+-
+- intel_pps_dump_state(intel_dp, "cur", cur);
+-}
++ /* already initialized? */
++ if (final->t11_t12 != 0)
++ return;
+
+-static void pps_init_delays_vbt(struct intel_dp *intel_dp,
+- struct edp_power_seq *vbt)
+-{
+- struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+- struct intel_connector *connector = intel_dp->attached_connector;
++ intel_pps_readout_hw_state(intel_dp, &cur);
+
+- *vbt = connector->panel.vbt.edp.pps;
++ intel_pps_dump_state(intel_dp, "cur", &cur);
+
++ vbt = dev_priv->vbt.edp.pps;
+ /* On the Toshiba Satellite P50-C-18C system the VBT T12 delay
+ * of 500ms appears to be too short. Occasionally the panel
+ * just fails to power back on. Increasing the delay to 800ms
+ * seems sufficient to avoid this problem.
+ */
+ if (dev_priv->quirks & QUIRK_INCREASE_T12_DELAY) {
+- vbt->t11_t12 = max_t(u16, vbt->t11_t12, 1300 * 10);
++ vbt.t11_t12 = max_t(u16, vbt.t11_t12, 1300 * 10);
+ drm_dbg_kms(&dev_priv->drm,
+ "Increasing T12 panel delay as per the quirk to %d\n",
+- vbt->t11_t12);
++ vbt.t11_t12);
+ }
+-
+ /* T11_T12 delay is special and actually in units of 100ms, but zero
+ * based in the hw (so we need to add 100 ms). But the sw vbt
+ * table multiplies it with 1000 to make it in units of 100usec,
+ * too. */
+- vbt->t11_t12 += 100 * 10;
+-
+- intel_pps_dump_state(intel_dp, "vbt", vbt);
+-}
+-
+-static void pps_init_delays_spec(struct intel_dp *intel_dp,
+- struct edp_power_seq *spec)
+-{
+- struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+-
+- lockdep_assert_held(&dev_priv->pps_mutex);
++ vbt.t11_t12 += 100 * 10;
+
+ /* Upper limits from eDP 1.3 spec. Note that we use the clunky units of
+ * our hw here, which are all in 100usec. */
+- spec->t1_t3 = 210 * 10;
+- spec->t8 = 50 * 10; /* no limit for t8, use t7 instead */
+- spec->t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */
+- spec->t10 = 500 * 10;
++ spec.t1_t3 = 210 * 10;
++ spec.t8 = 50 * 10; /* no limit for t8, use t7 instead */
++ spec.t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */
++ spec.t10 = 500 * 10;
+ /* This one is special and actually in units of 100ms, but zero
+ * based in the hw (so we need to add 100 ms). But the sw vbt
+ * table multiplies it with 1000 to make it in units of 100usec,
+ * too. */
+- spec->t11_t12 = (510 + 100) * 10;
+-
+- intel_pps_dump_state(intel_dp, "spec", spec);
+-}
+-
+-static void pps_init_delays(struct intel_dp *intel_dp)
+-{
+- struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+- struct edp_power_seq cur, vbt, spec,
+- *final = &intel_dp->pps.pps_delays;
+-
+- lockdep_assert_held(&dev_priv->pps_mutex);
+-
+- /* already initialized? */
+- if (final->t11_t12 != 0)
+- return;
++ spec.t11_t12 = (510 + 100) * 10;
+
+- pps_init_delays_cur(intel_dp, &cur);
+- pps_init_delays_vbt(intel_dp, &vbt);
+- pps_init_delays_spec(intel_dp, &spec);
++ intel_pps_dump_state(intel_dp, "vbt", &vbt);
+
+ /* Use the max of the register settings and vbt. If both are
+ * unset, fall back to the spec limits. */
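A worked example of the T11_T12 unit handling above, with the driver keeping all delays in units of 100 us:

    vbt.t11_t12 += 100 * 10;          /* hw field is zero-based 100 ms units,
                                         so add 100 ms = 1000 units */
    spec.t11_t12 = (510 + 100) * 10;  /* eDP 1.3 limit: 610 ms = 6100 units */
    /* QUIRK_INCREASE_T12_DELAY floor: 1300 * 10 units = 1.3 s */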
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 8f09203e0cf03..06db407e2749f 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -86,13 +86,10 @@
+
+ static bool psr_global_enabled(struct intel_dp *intel_dp)
+ {
+- struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+ switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
+ case I915_PSR_DEBUG_DEFAULT:
+- if (i915->params.enable_psr == -1)
+- return connector->panel.vbt.psr.enable;
+ return i915->params.enable_psr;
+ case I915_PSR_DEBUG_DISABLE:
+ return false;
+@@ -402,7 +399,6 @@ static void intel_psr_enable_sink(struct intel_dp *intel_dp)
+
+ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
+ {
+- struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ u32 val = 0;
+
+@@ -415,20 +411,20 @@ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
+ goto check_tp3_sel;
+ }
+
+- if (connector->panel.vbt.psr.tp1_wakeup_time_us == 0)
++ if (dev_priv->vbt.psr.tp1_wakeup_time_us == 0)
+ val |= EDP_PSR_TP1_TIME_0us;
+- else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 100)
++ else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 100)
+ val |= EDP_PSR_TP1_TIME_100us;
+- else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 500)
++ else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 500)
+ val |= EDP_PSR_TP1_TIME_500us;
+ else
+ val |= EDP_PSR_TP1_TIME_2500us;
+
+- if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us == 0)
++ if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us == 0)
+ val |= EDP_PSR_TP2_TP3_TIME_0us;
+- else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 100)
++ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 100)
+ val |= EDP_PSR_TP2_TP3_TIME_100us;
+- else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 500)
++ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 500)
+ val |= EDP_PSR_TP2_TP3_TIME_500us;
+ else
+ val |= EDP_PSR_TP2_TP3_TIME_2500us;
+@@ -445,14 +441,13 @@ check_tp3_sel:
+
+ static u8 psr_compute_idle_frames(struct intel_dp *intel_dp)
+ {
+- struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ int idle_frames;
+
+ /* Let's use 6 as the minimum to cover all known cases including the
+ * off-by-one issue that HW has in some cases.
+ */
+- idle_frames = max(6, connector->panel.vbt.psr.idle_frames);
++ idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
+ idle_frames = max(idle_frames, intel_dp->psr.sink_sync_latency + 1);
+
+ if (drm_WARN_ON(&dev_priv->drm, idle_frames > 0xf))
+@@ -488,19 +483,18 @@ static void hsw_activate_psr1(struct intel_dp *intel_dp)
+
+ static u32 intel_psr2_get_tp_time(struct intel_dp *intel_dp)
+ {
+- struct intel_connector *connector = intel_dp->attached_connector;
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ u32 val = 0;
+
+ if (dev_priv->params.psr_safest_params)
+ return EDP_PSR2_TP2_TIME_2500us;
+
+- if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us >= 0 &&
+- connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 50)
++ if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us >= 0 &&
++ dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 50)
+ val |= EDP_PSR2_TP2_TIME_50us;
+- else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 100)
++ else if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 100)
+ val |= EDP_PSR2_TP2_TIME_100us;
+- else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 500)
++ else if (dev_priv->vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 500)
+ val |= EDP_PSR2_TP2_TIME_500us;
+ else
+ val |= EDP_PSR2_TP2_TIME_2500us;
+@@ -2350,7 +2344,6 @@ unlock:
+ */
+ void intel_psr_init(struct intel_dp *intel_dp)
+ {
+- struct intel_connector *connector = intel_dp->attached_connector;
+ struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+@@ -2374,10 +2367,14 @@ void intel_psr_init(struct intel_dp *intel_dp)
+
+ intel_dp->psr.source_support = true;
+
++ if (dev_priv->params.enable_psr == -1)
++ if (!dev_priv->vbt.psr.enable)
++ dev_priv->params.enable_psr = 0;
++
+ /* Set link_standby x link_off defaults */
+ if (DISPLAY_VER(dev_priv) < 12)
+ /* For new platforms up to TGL let's respect VBT back again */
+- intel_dp->psr.link_standby = connector->panel.vbt.psr.full_link;
++ intel_dp->psr.link_standby = dev_priv->vbt.psr.full_link;
+
+ INIT_WORK(&intel_dp->psr.work, intel_psr_work);
+ INIT_DELAYED_WORK(&intel_dp->psr.dc3co_work, tgl_dc3co_disable_work);
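For reference, the idle-frame clamp in psr_compute_idle_frames() above resolves as in this kernel-style sketch (max() is the kernel macro):

    static u8 compute_idle_frames(int vbt_idle_frames, u8 sink_sync_latency)
    {
        int idle_frames = max(6, vbt_idle_frames);    /* VBT gives 0..15 */

        idle_frames = max(idle_frames, sink_sync_latency + 1);
        if (idle_frames > 0xf)    /* must fit the 4-bit hardware field */
            idle_frames = 0xf;

        return idle_frames;
    }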
+diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c b/drivers/gpu/drm/i915/display/intel_sdvo.c
+index 14a64bd61176d..d81855d57cdc9 100644
+--- a/drivers/gpu/drm/i915/display/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
+@@ -2869,7 +2869,6 @@ static bool
+ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ {
+ struct drm_encoder *encoder = &intel_sdvo->base.base;
+- struct drm_i915_private *i915 = to_i915(encoder->dev);
+ struct drm_connector *connector;
+ struct intel_connector *intel_connector;
+ struct intel_sdvo_connector *intel_sdvo_connector;
+@@ -2901,8 +2900,6 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ if (!intel_sdvo_create_enhance_property(intel_sdvo, intel_sdvo_connector))
+ goto err;
+
+- intel_bios_init_panel(i915, &intel_connector->panel);
+-
+ /*
+ * Fetch modes from VBT. For SDVO prefer the VBT mode since some
+ * SDVO->LVDS transcoders can't cope with the EDID mode.
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index 02f75e95b2ec1..1954f07f0d3ec 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -782,7 +782,6 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ {
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
+- struct intel_connector *connector = to_intel_connector(conn_state->connector);
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ enum pipe pipe = crtc->pipe;
+ enum port port;
+@@ -839,7 +838,7 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
+ * the delay in that case. If there is no deassert-seq, then an
+ * unconditional msleep is used to give the panel time to power-on.
+ */
+- if (connector->panel.vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
++ if (dev_priv->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) {
+ intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay);
+ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
+ } else {
+@@ -1691,8 +1690,7 @@ static void vlv_dphy_param_init(struct intel_dsi *intel_dsi)
+ {
+ struct drm_device *dev = intel_dsi->base.base.dev;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+- struct intel_connector *connector = intel_dsi->attached_connector;
+- struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
++ struct mipi_config *mipi_config = dev_priv->vbt.dsi.config;
+ u32 tlpx_ns, extra_byte_count, tlpx_ui;
+ u32 ui_num, ui_den;
+ u32 prepare_cnt, exit_zero_cnt, clk_zero_cnt, trail_cnt;
+@@ -1926,22 +1924,13 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
+
+ intel_dsi->panel_power_off_time = ktime_get_boottime();
+
+- intel_bios_init_panel(dev_priv, &intel_connector->panel);
+-
+- if (intel_connector->panel.vbt.dsi.config->dual_link)
++ if (dev_priv->vbt.dsi.config->dual_link)
+ intel_dsi->ports = BIT(PORT_A) | BIT(PORT_C);
+ else
+ intel_dsi->ports = BIT(port);
+
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
+- intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
+-
+- intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
+-
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
+- intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
+-
+- intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
++ intel_dsi->dcs_backlight_ports = dev_priv->vbt.dsi.bl_ports;
++ intel_dsi->dcs_cabc_ports = dev_priv->vbt.dsi.cabc_ports;
+
+ /* Create a DSI host (and a device) for each port. */
+ for_each_dsi_port(port, intel_dsi->ports) {
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 554d79bc0312d..5184d70d48382 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -194,6 +194,12 @@ struct drm_i915_display_funcs {
+
+ #define I915_COLOR_UNEVICTABLE (-1) /* a non-vma sharing the address space */
+
++enum drrs_type {
++ DRRS_TYPE_NONE,
++ DRRS_TYPE_STATIC,
++ DRRS_TYPE_SEAMLESS,
++};
++
+ #define QUIRK_LVDS_SSC_DISABLE (1<<1)
+ #define QUIRK_INVERT_BRIGHTNESS (1<<2)
+ #define QUIRK_BACKLIGHT_PRESENT (1<<3)
+@@ -302,19 +308,76 @@ struct intel_vbt_data {
+ /* bdb version */
+ u16 version;
+
++ struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
++ struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
++
+ /* Feature bits */
+ unsigned int int_tv_support:1;
++ unsigned int lvds_dither:1;
+ unsigned int int_crt_support:1;
+ unsigned int lvds_use_ssc:1;
+ unsigned int int_lvds_support:1;
+ unsigned int display_clock_mode:1;
+ unsigned int fdi_rx_polarity_inverted:1;
++ unsigned int panel_type:4;
+ int lvds_ssc_freq;
++ unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
+ enum drm_panel_orientation orientation;
+
+ bool override_afc_startup;
+ u8 override_afc_startup_val;
+
++ u8 seamless_drrs_min_refresh_rate;
++ enum drrs_type drrs_type;
++
++ struct {
++ int rate;
++ int lanes;
++ int preemphasis;
++ int vswing;
++ int bpp;
++ struct edp_power_seq pps;
++ u8 drrs_msa_timing_delay;
++ bool low_vswing;
++ bool initialized;
++ bool hobl;
++ } edp;
++
++ struct {
++ bool enable;
++ bool full_link;
++ bool require_aux_wakeup;
++ int idle_frames;
++ int tp1_wakeup_time_us;
++ int tp2_tp3_wakeup_time_us;
++ int psr2_tp2_tp3_wakeup_time_us;
++ } psr;
++
++ struct {
++ u16 pwm_freq_hz;
++ u16 brightness_precision_bits;
++ bool present;
++ bool active_low_pwm;
++ u8 min_brightness; /* min_brightness/255 of max */
++ u8 controller; /* brightness controller number */
++ enum intel_backlight_type type;
++ } backlight;
++
++ /* MIPI DSI */
++ struct {
++ u16 panel_id;
++ struct mipi_config *config;
++ struct mipi_pps_data *pps;
++ u16 bl_ports;
++ u16 cabc_ports;
++ u8 seq_version;
++ u32 size;
++ u8 *data;
++ const u8 *sequence[MIPI_SEQ_MAX];
++ u8 *deassert_seq; /* Used by fixup_mipi_sequences() */
++ enum drm_panel_orientation orientation;
++ } dsi;
++
+ int crt_ddc_pin;
+
+ struct list_head display_devices;
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-10-05 11:56 Mike Pagano
From: Mike Pagano @ 2022-10-05 11:56 UTC
To: gentoo-commits
commit: c7edfeebac5feee8d23cf87c01b12f13bff78be8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 5 11:56:38 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 5 11:56:38 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c7edfeeb
Linux patch 5.19.14
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1013_linux-5.19.14.patch | 5315 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5319 insertions(+)
diff --git a/0000_README b/0000_README
index 56f7e0a3..df106d7b 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-5.19.13.patch
From: http://www.kernel.org
Desc: Linux 5.19.13
+Patch: 1013_linux-5.19.14.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-5.19.14.patch b/1013_linux-5.19.14.patch
new file mode 100644
index 00000000..40991ab7
--- /dev/null
+++ b/1013_linux-5.19.14.patch
@@ -0,0 +1,5315 @@
+diff --git a/Makefile b/Makefile
+index 2ecedf786e273..ff4a158671455 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index 7da42a5b959cf..7e50fe633d8a1 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -1502,8 +1502,7 @@
+ mmc1: mmc@0 {
+ compatible = "ti,am335-sdhci";
+ ti,needs-special-reset;
+- dmas = <&edma_xbar 24 0 0
+- &edma_xbar 25 0 0>;
++ dmas = <&edma 24 0>, <&edma 25 0>;
+ dma-names = "tx", "rx";
+ interrupts = <64>;
+ reg = <0x0 0x1000>;
+diff --git a/arch/arm/boot/dts/am5748.dtsi b/arch/arm/boot/dts/am5748.dtsi
+index c260aa1a85bdb..a1f029e9d1f3d 100644
+--- a/arch/arm/boot/dts/am5748.dtsi
++++ b/arch/arm/boot/dts/am5748.dtsi
+@@ -25,6 +25,10 @@
+ status = "disabled";
+ };
+
++&usb4_tm {
++ status = "disabled";
++};
++
+ &atl_tm {
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/integratorap.dts b/arch/arm/boot/dts/integratorap.dts
+index 9b652cc27b141..c983435ed492e 100644
+--- a/arch/arm/boot/dts/integratorap.dts
++++ b/arch/arm/boot/dts/integratorap.dts
+@@ -160,6 +160,7 @@
+
+ pci: pciv3@62000000 {
+ compatible = "arm,integrator-ap-pci", "v3,v360epc-pci";
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ #size-cells = <2>;
+ #address-cells = <3>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 3293f76478df4..0e5a4fbb5eb19 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -2128,7 +2128,7 @@
+
+ ufs_mem_phy: phy@1d87000 {
+ compatible = "qcom,sm8350-qmp-ufs-phy";
+- reg = <0 0x01d87000 0 0xe10>;
++ reg = <0 0x01d87000 0 0x1c4>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index db2f3d1934481..33d50f38f2e06 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -937,15 +937,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
+ pmd = *pmdp;
+ pmd_clear(pmdp);
+
+- /*
+- * pmdp collapse_flush need to ensure that there are no parallel gup
+- * walk after this call. This is needed so that we can have stable
+- * page ref count when collapsing a page. We don't allow a collapse page
+- * if we have gup taken on the page. We can ensure that by sending IPI
+- * because gup walk happens with IRQ disabled.
+- */
+- serialize_against_pte_lookup(vma->vm_mm);
+-
+ radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
+
+ return pmd;
+diff --git a/arch/riscv/Kconfig.erratas b/arch/riscv/Kconfig.erratas
+index 457ac72c9b36d..e59a770b4432f 100644
+--- a/arch/riscv/Kconfig.erratas
++++ b/arch/riscv/Kconfig.erratas
+@@ -46,7 +46,7 @@ config ERRATA_THEAD
+
+ config ERRATA_THEAD_PBMT
+ bool "Apply T-Head memory type errata"
+- depends on ERRATA_THEAD && 64BIT
++ depends on ERRATA_THEAD && 64BIT && MMU
+ select RISCV_ALTERNATIVE_EARLY
+ default y
+ help
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index 81a0211a372d3..a73bced40e241 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -21,16 +21,6 @@ DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+ DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
+ DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
+
+-static inline struct cpumask *cpu_llc_shared_mask(int cpu)
+-{
+- return per_cpu(cpu_llc_shared_map, cpu);
+-}
+-
+-static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+-{
+- return per_cpu(cpu_l2c_shared_map, cpu);
+-}
+-
+ DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
+ DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
+ DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
+@@ -172,6 +162,16 @@ extern int safe_smp_processor_id(void);
+ # define safe_smp_processor_id() smp_processor_id()
+ #endif
+
++static inline struct cpumask *cpu_llc_shared_mask(int cpu)
++{
++ return per_cpu(cpu_llc_shared_map, cpu);
++}
++
++static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
++{
++ return per_cpu(cpu_l2c_shared_map, cpu);
++}
++
+ #else /* !CONFIG_SMP */
+ #define wbinvd_on_cpu(cpu) wbinvd()
+ static inline int wbinvd_on_all_cpus(void)
+@@ -179,6 +179,11 @@ static inline int wbinvd_on_all_cpus(void)
+ wbinvd();
+ return 0;
+ }
++
++static inline struct cpumask *cpu_llc_shared_mask(int cpu)
++{
++ return (struct cpumask *)cpumask_of(0);
++}
+ #endif /* CONFIG_SMP */
+
+ extern unsigned disabled_cpus;
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 62f6b8b7c4a52..4f3204364caa5 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -1319,22 +1319,23 @@ struct bp_patching_desc {
+ atomic_t refs;
+ };
+
+-static struct bp_patching_desc *bp_desc;
++static struct bp_patching_desc bp_desc;
+
+ static __always_inline
+-struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
++struct bp_patching_desc *try_get_desc(void)
+ {
+- /* rcu_dereference */
+- struct bp_patching_desc *desc = __READ_ONCE(*descp);
++ struct bp_patching_desc *desc = &bp_desc;
+
+- if (!desc || !arch_atomic_inc_not_zero(&desc->refs))
++ if (!arch_atomic_inc_not_zero(&desc->refs))
+ return NULL;
+
+ return desc;
+ }
+
+-static __always_inline void put_desc(struct bp_patching_desc *desc)
++static __always_inline void put_desc(void)
+ {
++ struct bp_patching_desc *desc = &bp_desc;
++
+ smp_mb__before_atomic();
+ arch_atomic_dec(&desc->refs);
+ }
+@@ -1367,15 +1368,15 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+
+ /*
+ * Having observed our INT3 instruction, we now must observe
+- * bp_desc:
++ * bp_desc with non-zero refcount:
+ *
+- * bp_desc = desc INT3
++ * bp_desc.refs = 1 INT3
+ * WMB RMB
+- * write INT3 if (desc)
++ * write INT3 if (bp_desc.refs != 0)
+ */
+ smp_rmb();
+
+- desc = try_get_desc(&bp_desc);
++ desc = try_get_desc();
+ if (!desc)
+ return 0;
+
+@@ -1429,7 +1430,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ ret = 1;
+
+ out_put:
+- put_desc(desc);
++ put_desc();
+ return ret;
+ }
+
+@@ -1460,18 +1461,20 @@ static int tp_vec_nr;
+ */
+ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries)
+ {
+- struct bp_patching_desc desc = {
+- .vec = tp,
+- .nr_entries = nr_entries,
+- .refs = ATOMIC_INIT(1),
+- };
+ unsigned char int3 = INT3_INSN_OPCODE;
+ unsigned int i;
+ int do_sync;
+
+ lockdep_assert_held(&text_mutex);
+
+- smp_store_release(&bp_desc, &desc); /* rcu_assign_pointer */
++ bp_desc.vec = tp;
++ bp_desc.nr_entries = nr_entries;
++
++ /*
++ * Corresponds to the implicit memory barrier in try_get_desc() to
++ * ensure reading a non-zero refcount provides up to date bp_desc data.
++ */
++ atomic_set_release(&bp_desc.refs, 1);
+
+ /*
+ * Corresponding read barrier in int3 notifier for making sure the
+@@ -1559,12 +1562,10 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ text_poke_sync();
+
+ /*
+- * Remove and synchronize_rcu(), except we have a very primitive
+- * refcount based completion.
++ * Remove and wait for refs to be zero.
+ */
+- WRITE_ONCE(bp_desc, NULL); /* RCU_INIT_POINTER */
+- if (!atomic_dec_and_test(&desc.refs))
+- atomic_cond_read_acquire(&desc.refs, !VAL);
++ if (!atomic_dec_and_test(&bp_desc.refs))
++ atomic_cond_read_acquire(&bp_desc.refs, !VAL);
+ }
+
+ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index a78652d43e61b..9fbc43b6b8b45 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -49,9 +49,13 @@ static LIST_HEAD(sgx_dirty_page_list);
+ * Reset post-kexec EPC pages to the uninitialized state. The pages are removed
+ * from the input list, and made available for the page allocator. SECS pages
+ * prepending their children in the input list are left intact.
++ *
++ * Return 0 when sanitization was successful or kthread was stopped, and the
++ * number of unsanitized pages otherwise.
+ */
+-static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
++static unsigned long __sgx_sanitize_pages(struct list_head *dirty_page_list)
+ {
++ unsigned long left_dirty = 0;
+ struct sgx_epc_page *page;
+ LIST_HEAD(dirty);
+ int ret;
+@@ -59,7 +63,7 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
+ /* dirty_page_list is thread-local, no need for a lock: */
+ while (!list_empty(dirty_page_list)) {
+ if (kthread_should_stop())
+- return;
++ return 0;
+
+ page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);
+
+@@ -92,12 +96,14 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
+ } else {
+ /* The page is not yet clean - move to the dirty list. */
+ list_move_tail(&page->list, &dirty);
++ left_dirty++;
+ }
+
+ cond_resched();
+ }
+
+ list_splice(&dirty, dirty_page_list);
++ return left_dirty;
+ }
+
+ static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
+@@ -440,10 +446,7 @@ static int ksgxd(void *p)
+ * required for SECS pages, whose child pages blocked EREMOVE.
+ */
+ __sgx_sanitize_pages(&sgx_dirty_page_list);
+- __sgx_sanitize_pages(&sgx_dirty_page_list);
+-
+- /* sanity check: */
+- WARN_ON(!list_empty(&sgx_dirty_page_list));
++ WARN_ON(__sgx_sanitize_pages(&sgx_dirty_page_list));
+
+ while (!kthread_should_stop()) {
+ if (try_to_freeze())
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 3ab498165639f..cb14441cee37d 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -870,8 +870,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ entry->edx = 0;
+ }
+ break;
+- case 9:
+- break;
+ case 0xa: { /* Architectural Performance Monitoring */
+ struct x86_pmu_capability cap;
+ union cpuid10_eax eax;
+diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
+index ad0139d254014..f1bb186171562 100644
+--- a/arch/x86/lib/usercopy.c
++++ b/arch/x86/lib/usercopy.c
+@@ -44,7 +44,7 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
+ * called from other contexts.
+ */
+ pagefault_disable();
+- ret = __copy_from_user_inatomic(to, from, n);
++ ret = raw_copy_from_user(to, from, n);
+ pagefault_enable();
+
+ return ret;
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 9601fa92950a0..6211d5bb76371 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3988,6 +3988,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ { "PIONEER DVD-RW DVR-212D", NULL, ATA_HORKAGE_NOSETXFER },
+ { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER },
+
++ /* These specific Pioneer models have LPM issues */
++ { "PIONEER BD-RW BDR-207M", NULL, ATA_HORKAGE_NOLPM },
++ { "PIONEER BD-RW BDR-205", NULL, ATA_HORKAGE_NOLPM },
++
+ /* Crucial BX100 SSD 500GB has broken LPM support */
+ { "CT500BX100SSD1", NULL, ATA_HORKAGE_NOLPM },
+
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 59d6d5faf7396..dcd639e58ff06 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -322,14 +322,14 @@ static blk_status_t virtblk_prep_rq(struct blk_mq_hw_ctx *hctx,
+ if (unlikely(status))
+ return status;
+
+- blk_mq_start_request(req);
+-
+ vbr->sg_table.nents = virtblk_map_data(hctx, req, vbr);
+ if (unlikely(vbr->sg_table.nents < 0)) {
+ virtblk_cleanup_cmd(req);
+ return BLK_STS_RESOURCE;
+ }
+
++ blk_mq_start_request(req);
++
+ return BLK_STS_OK;
+ }
+
+@@ -391,8 +391,7 @@ static bool virtblk_prep_rq_batch(struct request *req)
+ }
+
+ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+- struct request **rqlist,
+- struct request **requeue_list)
++ struct request **rqlist)
+ {
+ unsigned long flags;
+ int err;
+@@ -408,7 +407,7 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ if (err) {
+ virtblk_unmap_data(req, vbr);
+ virtblk_cleanup_cmd(req);
+- rq_list_add(requeue_list, req);
++ blk_mq_requeue_request(req, true);
+ }
+ }
+
+@@ -436,7 +435,7 @@ static void virtio_queue_rqs(struct request **rqlist)
+
+ if (!next || req->mq_hctx != next->mq_hctx) {
+ req->rq_next = NULL;
+- kick = virtblk_add_req_batch(vq, rqlist, &requeue_list);
++ kick = virtblk_add_req_batch(vq, rqlist);
+ if (kick)
+ virtqueue_notify(vq->vq);
+
+diff --git a/drivers/clk/bcm/clk-iproc-pll.c b/drivers/clk/bcm/clk-iproc-pll.c
+index 33da30f99c79b..d39c44b61c523 100644
+--- a/drivers/clk/bcm/clk-iproc-pll.c
++++ b/drivers/clk/bcm/clk-iproc-pll.c
+@@ -736,6 +736,7 @@ void iproc_pll_clk_setup(struct device_node *node,
+ const char *parent_name;
+ struct iproc_clk *iclk_array;
+ struct clk_hw_onecell_data *clk_data;
++ const char *clk_name;
+
+ if (WARN_ON(!pll_ctrl) || WARN_ON(!clk_ctrl))
+ return;
+@@ -783,7 +784,12 @@ void iproc_pll_clk_setup(struct device_node *node,
+ iclk = &iclk_array[0];
+ iclk->pll = pll;
+
+- init.name = node->name;
++ ret = of_property_read_string_index(node, "clock-output-names",
++ 0, &clk_name);
++ if (WARN_ON(ret))
++ goto err_pll_register;
++
++ init.name = clk_name;
+ init.ops = &iproc_pll_ops;
+ init.flags = 0;
+ parent_name = of_clk_get_parent_name(node, 0);
+@@ -803,13 +809,11 @@ void iproc_pll_clk_setup(struct device_node *node,
+ goto err_pll_register;
+
+ clk_data->hws[0] = &iclk->hw;
++ parent_name = clk_name;
+
+ /* now initialize and register all leaf clocks */
+ for (i = 1; i < num_clks; i++) {
+- const char *clk_name;
+-
+ memset(&init, 0, sizeof(init));
+- parent_name = node->name;
+
+ ret = of_property_read_string_index(node, "clock-output-names",
+ i, &clk_name);
+diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
+index fc1bd23d45834..598f3cf4eba49 100644
+--- a/drivers/clk/imx/clk-imx6sx.c
++++ b/drivers/clk/imx/clk-imx6sx.c
+@@ -280,13 +280,13 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ hws[IMX6SX_CLK_SSI3_SEL] = imx_clk_hw_mux("ssi3_sel", base + 0x1c, 14, 2, ssi_sels, ARRAY_SIZE(ssi_sels));
+ hws[IMX6SX_CLK_SSI2_SEL] = imx_clk_hw_mux("ssi2_sel", base + 0x1c, 12, 2, ssi_sels, ARRAY_SIZE(ssi_sels));
+ hws[IMX6SX_CLK_SSI1_SEL] = imx_clk_hw_mux("ssi1_sel", base + 0x1c, 10, 2, ssi_sels, ARRAY_SIZE(ssi_sels));
+- hws[IMX6SX_CLK_QSPI1_SEL] = imx_clk_hw_mux_flags("qspi1_sel", base + 0x1c, 7, 3, qspi1_sels, ARRAY_SIZE(qspi1_sels), CLK_SET_RATE_PARENT);
++ hws[IMX6SX_CLK_QSPI1_SEL] = imx_clk_hw_mux("qspi1_sel", base + 0x1c, 7, 3, qspi1_sels, ARRAY_SIZE(qspi1_sels));
+ hws[IMX6SX_CLK_PERCLK_SEL] = imx_clk_hw_mux("perclk_sel", base + 0x1c, 6, 1, perclk_sels, ARRAY_SIZE(perclk_sels));
+ hws[IMX6SX_CLK_VID_SEL] = imx_clk_hw_mux("vid_sel", base + 0x20, 21, 3, vid_sels, ARRAY_SIZE(vid_sels));
+ hws[IMX6SX_CLK_ESAI_SEL] = imx_clk_hw_mux("esai_sel", base + 0x20, 19, 2, audio_sels, ARRAY_SIZE(audio_sels));
+ hws[IMX6SX_CLK_CAN_SEL] = imx_clk_hw_mux("can_sel", base + 0x20, 8, 2, can_sels, ARRAY_SIZE(can_sels));
+ hws[IMX6SX_CLK_UART_SEL] = imx_clk_hw_mux("uart_sel", base + 0x24, 6, 1, uart_sels, ARRAY_SIZE(uart_sels));
+- hws[IMX6SX_CLK_QSPI2_SEL] = imx_clk_hw_mux_flags("qspi2_sel", base + 0x2c, 15, 3, qspi2_sels, ARRAY_SIZE(qspi2_sels), CLK_SET_RATE_PARENT);
++ hws[IMX6SX_CLK_QSPI2_SEL] = imx_clk_hw_mux("qspi2_sel", base + 0x2c, 15, 3, qspi2_sels, ARRAY_SIZE(qspi2_sels));
+ hws[IMX6SX_CLK_SPDIF_SEL] = imx_clk_hw_mux("spdif_sel", base + 0x30, 20, 2, audio_sels, ARRAY_SIZE(audio_sels));
+ hws[IMX6SX_CLK_AUDIO_SEL] = imx_clk_hw_mux("audio_sel", base + 0x30, 7, 2, audio_sels, ARRAY_SIZE(audio_sels));
+ hws[IMX6SX_CLK_ENET_PRE_SEL] = imx_clk_hw_mux("enet_pre_sel", base + 0x34, 15, 3, enet_pre_sels, ARRAY_SIZE(enet_pre_sels));
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index f5c9fa40491c5..dcc41d178238e 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -332,7 +332,7 @@ static struct platform_driver imx93_clk_driver = {
+ .driver = {
+ .name = "imx93-ccm",
+ .suppress_bind_attrs = true,
+- .of_match_table = of_match_ptr(imx93_clk_of_match),
++ .of_match_table = imx93_clk_of_match,
+ },
+ };
+ module_platform_driver(imx93_clk_driver);
+diff --git a/drivers/clk/ingenic/tcu.c b/drivers/clk/ingenic/tcu.c
+index 201bf6e6b6e0f..d5544cbc5c484 100644
+--- a/drivers/clk/ingenic/tcu.c
++++ b/drivers/clk/ingenic/tcu.c
+@@ -101,15 +101,11 @@ static bool ingenic_tcu_enable_regs(struct clk_hw *hw)
+ bool enabled = false;
+
+ /*
+- * If the SoC has no global TCU clock, we must ungate the channel's
+- * clock to be able to access its registers.
+- * If we have a TCU clock, it will be enabled automatically as it has
+- * been attached to the regmap.
++ * According to the programming manual, a timer channel's registers can
++ * only be accessed when the channel's stop bit is clear.
+ */
+- if (!tcu->clk) {
+- enabled = !!ingenic_tcu_is_enabled(hw);
+- regmap_write(tcu->map, TCU_REG_TSCR, BIT(info->gate_bit));
+- }
++ enabled = !!ingenic_tcu_is_enabled(hw);
++ regmap_write(tcu->map, TCU_REG_TSCR, BIT(info->gate_bit));
+
+ return enabled;
+ }
+@@ -120,8 +116,7 @@ static void ingenic_tcu_disable_regs(struct clk_hw *hw)
+ const struct ingenic_tcu_clk_info *info = tcu_clk->info;
+ struct ingenic_tcu *tcu = tcu_clk->tcu;
+
+- if (!tcu->clk)
+- regmap_write(tcu->map, TCU_REG_TSSR, BIT(info->gate_bit));
++ regmap_write(tcu->map, TCU_REG_TSSR, BIT(info->gate_bit));
+ }
+
+ static u8 ingenic_tcu_get_parent(struct clk_hw *hw)
+diff --git a/drivers/clk/microchip/clk-mpfs.c b/drivers/clk/microchip/clk-mpfs.c
+index 070c3b8965590..b6b89413e0904 100644
+--- a/drivers/clk/microchip/clk-mpfs.c
++++ b/drivers/clk/microchip/clk-mpfs.c
+@@ -239,6 +239,11 @@ static const struct clk_ops mpfs_clk_cfg_ops = {
+ .hw.init = CLK_HW_INIT(_name, _parent, &mpfs_clk_cfg_ops, 0), \
+ }
+
++#define CLK_CPU_OFFSET 0u
++#define CLK_AXI_OFFSET 1u
++#define CLK_AHB_OFFSET 2u
++#define CLK_RTCREF_OFFSET 3u
++
+ static struct mpfs_cfg_hw_clock mpfs_cfg_clks[] = {
+ CLK_CFG(CLK_CPU, "clk_cpu", "clk_msspll", 0, 2, mpfs_div_cpu_axi_table, 0,
+ REG_CLOCK_CONFIG_CR),
+@@ -362,7 +367,7 @@ static const struct clk_ops mpfs_periph_clk_ops = {
+ _flags), \
+ }
+
+-#define PARENT_CLK(PARENT) (&mpfs_cfg_clks[CLK_##PARENT].hw)
++#define PARENT_CLK(PARENT) (&mpfs_cfg_clks[CLK_##PARENT##_OFFSET].hw)
+
+ /*
+ * Critical clocks:
+@@ -370,6 +375,8 @@ static const struct clk_ops mpfs_periph_clk_ops = {
+ * trap handler
+ * - CLK_MMUART0: reserved by the hss
+ * - CLK_DDRC: provides clock to the ddr subsystem
++ * - CLK_RTC: the onboard RTC's AHB bus clock must be kept running as the rtc will stop
++ * if the AHB interface clock is disabled
+ * - CLK_FICx: these provide the processor side clocks to the "FIC" (Fabric InterConnect)
+ * clock domain crossers which provide the interface to the FPGA fabric. Disabling them
+ * causes the FPGA fabric to go into reset.
+@@ -394,7 +401,7 @@ static struct mpfs_periph_hw_clock mpfs_periph_clks[] = {
+ CLK_PERIPH(CLK_CAN0, "clk_periph_can0", PARENT_CLK(AHB), 14, 0),
+ CLK_PERIPH(CLK_CAN1, "clk_periph_can1", PARENT_CLK(AHB), 15, 0),
+ CLK_PERIPH(CLK_USB, "clk_periph_usb", PARENT_CLK(AHB), 16, 0),
+- CLK_PERIPH(CLK_RTC, "clk_periph_rtc", PARENT_CLK(AHB), 18, 0),
++ CLK_PERIPH(CLK_RTC, "clk_periph_rtc", PARENT_CLK(AHB), 18, CLK_IS_CRITICAL),
+ CLK_PERIPH(CLK_QSPI, "clk_periph_qspi", PARENT_CLK(AHB), 19, 0),
+ CLK_PERIPH(CLK_GPIO0, "clk_periph_gpio0", PARENT_CLK(AHB), 20, 0),
+ CLK_PERIPH(CLK_GPIO1, "clk_periph_gpio1", PARENT_CLK(AHB), 21, 0),
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index a17e51d65aca8..4407203e0c9b3 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -33,6 +33,36 @@ MODULE_PARM_DESC(irq, "ACCES 104-QUAD-8 interrupt line numbers");
+
+ #define QUAD8_NUM_COUNTERS 8
+
++/**
++ * struct channel_reg - channel register structure
++ * @data: Count data
++ * @control: Channel flags and control
++ */
++struct channel_reg {
++ u8 data;
++ u8 control;
++};
++
++/**
++ * struct quad8_reg - device register structure
++ * @channel: quadrature counter data and control
++ * @interrupt_status: channel interrupt status
++ * @channel_oper: enable/reset counters and interrupt functions
++ * @index_interrupt: enable channel interrupts
++ * @reserved: reserved for Factory Use
++ * @index_input_levels: index signal logical input level
++ * @cable_status: differential encoder cable status
++ */
++struct quad8_reg {
++ struct channel_reg channel[QUAD8_NUM_COUNTERS];
++ u8 interrupt_status;
++ u8 channel_oper;
++ u8 index_interrupt;
++ u8 reserved[3];
++ u8 index_input_levels;
++ u8 cable_status;
++};
++
+ /**
+ * struct quad8 - device private data structure
+ * @lock: lock to prevent clobbering device states during R/W ops
+@@ -48,7 +78,7 @@ MODULE_PARM_DESC(irq, "ACCES 104-QUAD-8 interrupt line numbers");
+ * @synchronous_mode: array of index function synchronous mode configurations
+ * @index_polarity: array of index function polarity configurations
+ * @cable_fault_enable: differential encoder cable status enable configurations
+- * @base: base port address of the device
++ * @reg: I/O address offset for the device registers
+ */
+ struct quad8 {
+ spinlock_t lock;
+@@ -63,14 +93,9 @@ struct quad8 {
+ unsigned int synchronous_mode[QUAD8_NUM_COUNTERS];
+ unsigned int index_polarity[QUAD8_NUM_COUNTERS];
+ unsigned int cable_fault_enable;
+- unsigned int base;
++ struct quad8_reg __iomem *reg;
+ };
+
+-#define QUAD8_REG_INTERRUPT_STATUS 0x10
+-#define QUAD8_REG_CHAN_OP 0x11
+-#define QUAD8_REG_INDEX_INTERRUPT 0x12
+-#define QUAD8_REG_INDEX_INPUT_LEVELS 0x16
+-#define QUAD8_DIFF_ENCODER_CABLE_STATUS 0x17
+ /* Borrow Toggle flip-flop */
+ #define QUAD8_FLAG_BT BIT(0)
+ /* Carry Toggle flip-flop */
+@@ -118,8 +143,7 @@ static int quad8_signal_read(struct counter_device *counter,
+ if (signal->id < 16)
+ return -EINVAL;
+
+- state = inb(priv->base + QUAD8_REG_INDEX_INPUT_LEVELS)
+- & BIT(signal->id - 16);
++ state = ioread8(&priv->reg->index_input_levels) & BIT(signal->id - 16);
+
+ *level = (state) ? COUNTER_SIGNAL_LEVEL_HIGH : COUNTER_SIGNAL_LEVEL_LOW;
+
+@@ -130,14 +154,14 @@ static int quad8_count_read(struct counter_device *counter,
+ struct counter_count *count, u64 *val)
+ {
+ struct quad8 *const priv = counter_priv(counter);
+- const int base_offset = priv->base + 2 * count->id;
++ struct channel_reg __iomem *const chan = priv->reg->channel + count->id;
+ unsigned int flags;
+ unsigned int borrow;
+ unsigned int carry;
+ unsigned long irqflags;
+ int i;
+
+- flags = inb(base_offset + 1);
++ flags = ioread8(&chan->control);
+ borrow = flags & QUAD8_FLAG_BT;
+ carry = !!(flags & QUAD8_FLAG_CT);
+
+@@ -147,11 +171,11 @@ static int quad8_count_read(struct counter_device *counter,
+ spin_lock_irqsave(&priv->lock, irqflags);
+
+ /* Reset Byte Pointer; transfer Counter to Output Latch */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
+- base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
++ &chan->control);
+
+ for (i = 0; i < 3; i++)
+- *val |= (unsigned long)inb(base_offset) << (8 * i);
++ *val |= (unsigned long)ioread8(&chan->data) << (8 * i);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -162,7 +186,7 @@ static int quad8_count_write(struct counter_device *counter,
+ struct counter_count *count, u64 val)
+ {
+ struct quad8 *const priv = counter_priv(counter);
+- const int base_offset = priv->base + 2 * count->id;
++ struct channel_reg __iomem *const chan = priv->reg->channel + count->id;
+ unsigned long irqflags;
+ int i;
+
+@@ -173,27 +197,27 @@ static int quad8_count_write(struct counter_device *counter,
+ spin_lock_irqsave(&priv->lock, irqflags);
+
+ /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
+
+ /* Counter can only be set via Preset Register */
+ for (i = 0; i < 3; i++)
+- outb(val >> (8 * i), base_offset);
++ iowrite8(val >> (8 * i), &chan->data);
+
+ /* Transfer Preset Register to Counter */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_PRESET_CNTR, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_PRESET_CNTR, &chan->control);
+
+ /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
+
+ /* Set Preset Register back to original value */
+ val = priv->preset[count->id];
+ for (i = 0; i < 3; i++)
+- outb(val >> (8 * i), base_offset);
++ iowrite8(val >> (8 * i), &chan->data);
+
+ /* Reset Borrow, Carry, Compare, and Sign flags */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_FLAGS, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_FLAGS, &chan->control);
+ /* Reset Error flag */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, &chan->control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -246,7 +270,7 @@ static int quad8_function_write(struct counter_device *counter,
+ unsigned int *const quadrature_mode = priv->quadrature_mode + id;
+ unsigned int *const scale = priv->quadrature_scale + id;
+ unsigned int *const synchronous_mode = priv->synchronous_mode + id;
+- const int base_offset = priv->base + 2 * id + 1;
++ u8 __iomem *const control = &priv->reg->channel[id].control;
+ unsigned long irqflags;
+ unsigned int mode_cfg;
+ unsigned int idr_cfg;
+@@ -266,7 +290,7 @@ static int quad8_function_write(struct counter_device *counter,
+ if (*synchronous_mode) {
+ *synchronous_mode = 0;
+ /* Disable synchronous function mode */
+- outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
++ iowrite8(QUAD8_CTR_IDR | idr_cfg, control);
+ }
+ } else {
+ *quadrature_mode = 1;
+@@ -292,7 +316,7 @@ static int quad8_function_write(struct counter_device *counter,
+ }
+
+ /* Load mode configuration to Counter Mode Register */
+- outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
++ iowrite8(QUAD8_CTR_CMR | mode_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -305,10 +329,10 @@ static int quad8_direction_read(struct counter_device *counter,
+ {
+ const struct quad8 *const priv = counter_priv(counter);
+ unsigned int ud_flag;
+- const unsigned int flag_addr = priv->base + 2 * count->id + 1;
++ u8 __iomem *const flag_addr = &priv->reg->channel[count->id].control;
+
+ /* U/D flag: nonzero = up, zero = down */
+- ud_flag = inb(flag_addr) & QUAD8_FLAG_UD;
++ ud_flag = ioread8(flag_addr) & QUAD8_FLAG_UD;
+
+ *direction = (ud_flag) ? COUNTER_COUNT_DIRECTION_FORWARD :
+ COUNTER_COUNT_DIRECTION_BACKWARD;
+@@ -402,7 +426,6 @@ static int quad8_events_configure(struct counter_device *counter)
+ struct counter_event_node *event_node;
+ unsigned int next_irq_trigger;
+ unsigned long ior_cfg;
+- unsigned long base_offset;
+
+ spin_lock_irqsave(&priv->lock, irqflags);
+
+@@ -426,6 +449,9 @@ static int quad8_events_configure(struct counter_device *counter)
+ return -EINVAL;
+ }
+
++ /* Enable IRQ line */
++ irq_enabled |= BIT(event_node->channel);
++
+ /* Skip configuration if it is the same as previously set */
+ if (priv->irq_trigger[event_node->channel] == next_irq_trigger)
+ continue;
+@@ -437,14 +463,11 @@ static int quad8_events_configure(struct counter_device *counter)
+ ior_cfg = priv->ab_enable[event_node->channel] |
+ priv->preset_enable[event_node->channel] << 1 |
+ priv->irq_trigger[event_node->channel] << 3;
+- base_offset = priv->base + 2 * event_node->channel + 1;
+- outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+-
+- /* Enable IRQ line */
+- irq_enabled |= BIT(event_node->channel);
++ iowrite8(QUAD8_CTR_IOR | ior_cfg,
++ &priv->reg->channel[event_node->channel].control);
+ }
+
+- outb(irq_enabled, priv->base + QUAD8_REG_INDEX_INTERRUPT);
++ iowrite8(irq_enabled, &priv->reg->index_interrupt);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -508,7 +531,7 @@ static int quad8_index_polarity_set(struct counter_device *counter,
+ {
+ struct quad8 *const priv = counter_priv(counter);
+ const size_t channel_id = signal->id - 16;
+- const int base_offset = priv->base + 2 * channel_id + 1;
++ u8 __iomem *const control = &priv->reg->channel[channel_id].control;
+ unsigned long irqflags;
+ unsigned int idr_cfg = index_polarity << 1;
+
+@@ -519,7 +542,7 @@ static int quad8_index_polarity_set(struct counter_device *counter,
+ priv->index_polarity[channel_id] = index_polarity;
+
+ /* Load Index Control configuration to Index Control Register */
+- outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
++ iowrite8(QUAD8_CTR_IDR | idr_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -549,7 +572,7 @@ static int quad8_synchronous_mode_set(struct counter_device *counter,
+ {
+ struct quad8 *const priv = counter_priv(counter);
+ const size_t channel_id = signal->id - 16;
+- const int base_offset = priv->base + 2 * channel_id + 1;
++ u8 __iomem *const control = &priv->reg->channel[channel_id].control;
+ unsigned long irqflags;
+ unsigned int idr_cfg = synchronous_mode;
+
+@@ -566,7 +589,7 @@ static int quad8_synchronous_mode_set(struct counter_device *counter,
+ priv->synchronous_mode[channel_id] = synchronous_mode;
+
+ /* Load Index Control configuration to Index Control Register */
+- outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
++ iowrite8(QUAD8_CTR_IDR | idr_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -614,7 +637,7 @@ static int quad8_count_mode_write(struct counter_device *counter,
+ struct quad8 *const priv = counter_priv(counter);
+ unsigned int count_mode;
+ unsigned int mode_cfg;
+- const int base_offset = priv->base + 2 * count->id + 1;
++ u8 __iomem *const control = &priv->reg->channel[count->id].control;
+ unsigned long irqflags;
+
+ /* Map Generic Counter count mode to 104-QUAD-8 count mode */
+@@ -648,7 +671,7 @@ static int quad8_count_mode_write(struct counter_device *counter,
+ mode_cfg |= (priv->quadrature_scale[count->id] + 1) << 3;
+
+ /* Load mode configuration to Counter Mode Register */
+- outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
++ iowrite8(QUAD8_CTR_CMR | mode_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -669,7 +692,7 @@ static int quad8_count_enable_write(struct counter_device *counter,
+ struct counter_count *count, u8 enable)
+ {
+ struct quad8 *const priv = counter_priv(counter);
+- const int base_offset = priv->base + 2 * count->id;
++ u8 __iomem *const control = &priv->reg->channel[count->id].control;
+ unsigned long irqflags;
+ unsigned int ior_cfg;
+
+@@ -681,7 +704,7 @@ static int quad8_count_enable_write(struct counter_device *counter,
+ priv->irq_trigger[count->id] << 3;
+
+ /* Load I/O control configuration */
+- outb(QUAD8_CTR_IOR | ior_cfg, base_offset + 1);
++ iowrite8(QUAD8_CTR_IOR | ior_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -697,9 +720,9 @@ static int quad8_error_noise_get(struct counter_device *counter,
+ struct counter_count *count, u32 *noise_error)
+ {
+ const struct quad8 *const priv = counter_priv(counter);
+- const int base_offset = priv->base + 2 * count->id + 1;
++ u8 __iomem *const flag_addr = &priv->reg->channel[count->id].control;
+
+- *noise_error = !!(inb(base_offset) & QUAD8_FLAG_E);
++ *noise_error = !!(ioread8(flag_addr) & QUAD8_FLAG_E);
+
+ return 0;
+ }
+@@ -717,17 +740,17 @@ static int quad8_count_preset_read(struct counter_device *counter,
+ static void quad8_preset_register_set(struct quad8 *const priv, const int id,
+ const unsigned int preset)
+ {
+- const unsigned int base_offset = priv->base + 2 * id;
++ struct channel_reg __iomem *const chan = priv->reg->channel + id;
+ int i;
+
+ priv->preset[id] = preset;
+
+ /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
+
+ /* Set Preset Register */
+ for (i = 0; i < 3; i++)
+- outb(preset >> (8 * i), base_offset);
++ iowrite8(preset >> (8 * i), &chan->data);
+ }
+
+ static int quad8_count_preset_write(struct counter_device *counter,
+@@ -816,7 +839,7 @@ static int quad8_count_preset_enable_write(struct counter_device *counter,
+ u8 preset_enable)
+ {
+ struct quad8 *const priv = counter_priv(counter);
+- const int base_offset = priv->base + 2 * count->id + 1;
++ u8 __iomem *const control = &priv->reg->channel[count->id].control;
+ unsigned long irqflags;
+ unsigned int ior_cfg;
+
+@@ -831,7 +854,7 @@ static int quad8_count_preset_enable_write(struct counter_device *counter,
+ priv->irq_trigger[count->id] << 3;
+
+ /* Load I/O control configuration to Input / Output Control Register */
+- outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
++ iowrite8(QUAD8_CTR_IOR | ior_cfg, control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -858,7 +881,7 @@ static int quad8_signal_cable_fault_read(struct counter_device *counter,
+ }
+
+ /* Logic 0 = cable fault */
+- status = inb(priv->base + QUAD8_DIFF_ENCODER_CABLE_STATUS);
++ status = ioread8(&priv->reg->cable_status);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -899,7 +922,7 @@ static int quad8_signal_cable_fault_enable_write(struct counter_device *counter,
+ /* Enable is active low in Differential Encoder Cable Status register */
+ cable_fault_enable = ~priv->cable_fault_enable;
+
+- outb(cable_fault_enable, priv->base + QUAD8_DIFF_ENCODER_CABLE_STATUS);
++ iowrite8(cable_fault_enable, &priv->reg->cable_status);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -923,7 +946,7 @@ static int quad8_signal_fck_prescaler_write(struct counter_device *counter,
+ {
+ struct quad8 *const priv = counter_priv(counter);
+ const size_t channel_id = signal->id / 2;
+- const int base_offset = priv->base + 2 * channel_id;
++ struct channel_reg __iomem *const chan = priv->reg->channel + channel_id;
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&priv->lock, irqflags);
+@@ -931,12 +954,12 @@ static int quad8_signal_fck_prescaler_write(struct counter_device *counter,
+ priv->fck_prescaler[channel_id] = prescaler;
+
+ /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
+
+ /* Set filter clock factor */
+- outb(prescaler, base_offset);
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_PRESET_PSC,
+- base_offset + 1);
++ iowrite8(prescaler, &chan->data);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_PRESET_PSC,
++ &chan->control);
+
+ spin_unlock_irqrestore(&priv->lock, irqflags);
+
+@@ -1084,12 +1107,11 @@ static irqreturn_t quad8_irq_handler(int irq, void *private)
+ {
+ struct counter_device *counter = private;
+ struct quad8 *const priv = counter_priv(counter);
+- const unsigned long base = priv->base;
+ unsigned long irq_status;
+ unsigned long channel;
+ u8 event;
+
+- irq_status = inb(base + QUAD8_REG_INTERRUPT_STATUS);
++ irq_status = ioread8(&priv->reg->interrupt_status);
+ if (!irq_status)
+ return IRQ_NONE;
+
+@@ -1118,17 +1140,43 @@ static irqreturn_t quad8_irq_handler(int irq, void *private)
+ }
+
+ /* Clear pending interrupts on device */
+- outb(QUAD8_CHAN_OP_ENABLE_INTERRUPT_FUNC, base + QUAD8_REG_CHAN_OP);
++ iowrite8(QUAD8_CHAN_OP_ENABLE_INTERRUPT_FUNC, &priv->reg->channel_oper);
+
+ return IRQ_HANDLED;
+ }
+
++static void quad8_init_counter(struct channel_reg __iomem *const chan)
++{
++ unsigned long i;
++
++ /* Reset Byte Pointer */
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
++ /* Reset filter clock factor */
++ iowrite8(0, &chan->data);
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_PRESET_PSC,
++ &chan->control);
++ /* Reset Byte Pointer */
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, &chan->control);
++ /* Reset Preset Register */
++ for (i = 0; i < 3; i++)
++ iowrite8(0x00, &chan->data);
++ /* Reset Borrow, Carry, Compare, and Sign flags */
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_FLAGS, &chan->control);
++ /* Reset Error flag */
++ iowrite8(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, &chan->control);
++ /* Binary encoding; Normal count; non-quadrature mode */
++ iowrite8(QUAD8_CTR_CMR, &chan->control);
++ /* Disable A and B inputs; preset on index; FLG1 as Carry */
++ iowrite8(QUAD8_CTR_IOR, &chan->control);
++ /* Disable index function; negative index polarity */
++ iowrite8(QUAD8_CTR_IDR, &chan->control);
++}
++
+ static int quad8_probe(struct device *dev, unsigned int id)
+ {
+ struct counter_device *counter;
+ struct quad8 *priv;
+- int i, j;
+- unsigned int base_offset;
++ unsigned long i;
+ int err;
+
+ if (!devm_request_region(dev, base[id], QUAD8_EXTENT, dev_name(dev))) {
+@@ -1142,6 +1190,10 @@ static int quad8_probe(struct device *dev, unsigned int id)
+ return -ENOMEM;
+ priv = counter_priv(counter);
+
++ priv->reg = devm_ioport_map(dev, base[id], QUAD8_EXTENT);
++ if (!priv->reg)
++ return -ENOMEM;
++
+ /* Initialize Counter device and driver data */
+ counter->name = dev_name(dev);
+ counter->parent = dev;
+@@ -1150,43 +1202,20 @@ static int quad8_probe(struct device *dev, unsigned int id)
+ counter->num_counts = ARRAY_SIZE(quad8_counts);
+ counter->signals = quad8_signals;
+ counter->num_signals = ARRAY_SIZE(quad8_signals);
+- priv->base = base[id];
+
+ spin_lock_init(&priv->lock);
+
+ /* Reset Index/Interrupt Register */
+- outb(0x00, base[id] + QUAD8_REG_INDEX_INTERRUPT);
++ iowrite8(0x00, &priv->reg->index_interrupt);
+ /* Reset all counters and disable interrupt function */
+- outb(QUAD8_CHAN_OP_RESET_COUNTERS, base[id] + QUAD8_REG_CHAN_OP);
++ iowrite8(QUAD8_CHAN_OP_RESET_COUNTERS, &priv->reg->channel_oper);
+ /* Set initial configuration for all counters */
+- for (i = 0; i < QUAD8_NUM_COUNTERS; i++) {
+- base_offset = base[id] + 2 * i;
+- /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+- /* Reset filter clock factor */
+- outb(0, base_offset);
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_PRESET_PSC,
+- base_offset + 1);
+- /* Reset Byte Pointer */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+- /* Reset Preset Register */
+- for (j = 0; j < 3; j++)
+- outb(0x00, base_offset);
+- /* Reset Borrow, Carry, Compare, and Sign flags */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_FLAGS, base_offset + 1);
+- /* Reset Error flag */
+- outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
+- /* Binary encoding; Normal count; non-quadrature mode */
+- outb(QUAD8_CTR_CMR, base_offset + 1);
+- /* Disable A and B inputs; preset on index; FLG1 as Carry */
+- outb(QUAD8_CTR_IOR, base_offset + 1);
+- /* Disable index function; negative index polarity */
+- outb(QUAD8_CTR_IDR, base_offset + 1);
+- }
++ for (i = 0; i < QUAD8_NUM_COUNTERS; i++)
++ quad8_init_counter(priv->reg->channel + i);
+ /* Disable Differential Encoder Cable Status for all channels */
+- outb(0xFF, base[id] + QUAD8_DIFF_ENCODER_CABLE_STATUS);
++ iowrite8(0xFF, &priv->reg->cable_status);
+ /* Enable all counters and enable interrupt function */
+- outb(QUAD8_CHAN_OP_ENABLE_INTERRUPT_FUNC, base[id] + QUAD8_REG_CHAN_OP);
++ iowrite8(QUAD8_CHAN_OP_ENABLE_INTERRUPT_FUNC, &priv->reg->channel_oper);
+
+ err = devm_request_irq(&counter->dev, irq[id], quad8_irq_handler,
+ IRQF_SHARED, counter->name, counter);
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index 581d34c957695..d5dee625de780 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -8,7 +8,6 @@
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
+-#include <linux/pm_clock.h>
+ #include <linux/pm_domain.h>
+ #include <linux/scmi_protocol.h>
+
+@@ -53,27 +52,6 @@ static int scmi_pd_power_off(struct generic_pm_domain *domain)
+ return scmi_pd_power(domain, false);
+ }
+
+-static int scmi_pd_attach_dev(struct generic_pm_domain *pd, struct device *dev)
+-{
+- int ret;
+-
+- ret = pm_clk_create(dev);
+- if (ret)
+- return ret;
+-
+- ret = of_pm_clk_add_clks(dev);
+- if (ret >= 0)
+- return 0;
+-
+- pm_clk_destroy(dev);
+- return ret;
+-}
+-
+-static void scmi_pd_detach_dev(struct generic_pm_domain *pd, struct device *dev)
+-{
+- pm_clk_destroy(dev);
+-}
+-
+ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ {
+ int num_domains, i;
+@@ -124,10 +102,6 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ scmi_pd->genpd.name = scmi_pd->name;
+ scmi_pd->genpd.power_off = scmi_pd_power_off;
+ scmi_pd->genpd.power_on = scmi_pd_power_on;
+- scmi_pd->genpd.attach_dev = scmi_pd_attach_dev;
+- scmi_pd->genpd.detach_dev = scmi_pd_detach_dev;
+- scmi_pd->genpd.flags = GENPD_FLAG_PM_CLK |
+- GENPD_FLAG_ACTIVE_WAKEUP;
+
+ pm_genpd_init(&scmi_pd->genpd, NULL,
+ state == SCMI_POWER_STATE_GENERIC_OFF);
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index 2db19cd640a43..de1e7a1a76f2e 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -793,8 +793,12 @@ static int mvebu_pwm_probe(struct platform_device *pdev,
+ u32 offset;
+ u32 set;
+
+- if (of_device_is_compatible(mvchip->chip.of_node,
+- "marvell,armada-370-gpio")) {
++ if (mvchip->soc_variant == MVEBU_GPIO_SOC_VARIANT_A8K) {
++ int ret = of_property_read_u32(dev->of_node,
++ "marvell,pwm-offset", &offset);
++ if (ret < 0)
++ return 0;
++ } else {
+ /*
+ * There are only two sets of PWM configuration registers for
+ * all the GPIO lines on those SoCs which this driver reserves
+@@ -804,13 +808,6 @@ static int mvebu_pwm_probe(struct platform_device *pdev,
+ if (!platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwm"))
+ return 0;
+ offset = 0;
+- } else if (mvchip->soc_variant == MVEBU_GPIO_SOC_VARIANT_A8K) {
+- int ret = of_property_read_u32(dev->of_node,
+- "marvell,pwm-offset", &offset);
+- if (ret < 0)
+- return 0;
+- } else {
+- return 0;
+ }
+
+ if (IS_ERR(mvchip->clk))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 98ac53ee6bb55..6cded09d5878a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1056,6 +1056,10 @@ bool amdgpu_acpi_should_gpu_reset(struct amdgpu_device *adev)
+ {
+ if (adev->flags & AMD_IS_APU)
+ return false;
++
++ if (amdgpu_sriov_vf(adev))
++ return false;
++
+ return pm_suspend_target_state != PM_SUSPEND_TO_IDLE;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 929f8b75bfaee..53b07b091e823 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3178,7 +3178,8 @@ static int amdgpu_device_ip_resume_phase1(struct amdgpu_device *adev)
+ continue;
+ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC ||
+- adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_IH) {
++ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_IH ||
++ (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP && amdgpu_sriov_vf(adev))) {
+
+ r = adev->ip_blocks[i].version->funcs->resume(adev);
+ if (r) {
+@@ -4124,12 +4125,20 @@ static void amdgpu_device_evict_resources(struct amdgpu_device *adev)
+ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+ {
+ struct amdgpu_device *adev = drm_to_adev(dev);
++ int r = 0;
+
+ if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ return 0;
+
+ adev->in_suspend = true;
+
++ if (amdgpu_sriov_vf(adev)) {
++ amdgpu_virt_fini_data_exchange(adev);
++ r = amdgpu_virt_request_full_gpu(adev, false);
++ if (r)
++ return r;
++ }
++
+ if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D3))
+ DRM_WARN("smart shift update failed\n");
+
+@@ -4153,6 +4162,9 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+
+ amdgpu_device_ip_suspend_phase2(adev);
+
++ if (amdgpu_sriov_vf(adev))
++ amdgpu_virt_release_full_gpu(adev, false);
++
+ return 0;
+ }
+
+@@ -4171,6 +4183,12 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
+ struct amdgpu_device *adev = drm_to_adev(dev);
+ int r = 0;
+
++ if (amdgpu_sriov_vf(adev)) {
++ r = amdgpu_virt_request_full_gpu(adev, true);
++ if (r)
++ return r;
++ }
++
+ if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ return 0;
+
+@@ -4185,6 +4203,13 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
+ }
+
+ r = amdgpu_device_ip_resume(adev);
++
++ /* no matter what r is, always need to properly release full GPU */
++ if (amdgpu_sriov_vf(adev)) {
++ amdgpu_virt_init_data_exchange(adev);
++ amdgpu_virt_release_full_gpu(adev, true);
++ }
++
+ if (r) {
+ dev_err(adev->dev, "amdgpu_device_ip_resume failed (%d).\n", r);
+ return r;
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 01c8b80e34ec4..41431b9d55bd9 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1863,12 +1863,6 @@ EXPORT_SYMBOL_GPL(analogix_dp_remove);
+ int analogix_dp_suspend(struct analogix_dp_device *dp)
+ {
+ clk_disable_unprepare(dp->clock);
+-
+- if (dp->plat_data->panel) {
+- if (drm_panel_unprepare(dp->plat_data->panel))
+- DRM_ERROR("failed to turnoff the panel\n");
+- }
+-
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_suspend);
+@@ -1883,13 +1877,6 @@ int analogix_dp_resume(struct analogix_dp_device *dp)
+ return ret;
+ }
+
+- if (dp->plat_data->panel) {
+- if (drm_panel_prepare(dp->plat_data->panel)) {
+- DRM_ERROR("failed to setup the panel\n");
+- return -EBUSY;
+- }
+- }
+-
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_resume);
+diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+index c642d1e02b2f8..167cd7d85dbbb 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
++++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+@@ -186,7 +186,7 @@ static int lt8912_write_lvds_config(struct lt8912 *lt)
+ {0x03, 0xff},
+ };
+
+- return regmap_multi_reg_write(lt->regmap[I2C_CEC_DSI], seq, ARRAY_SIZE(seq));
++ return regmap_multi_reg_write(lt->regmap[I2C_MAIN], seq, ARRAY_SIZE(seq));
+ };
+
+ static inline struct lt8912 *bridge_to_lt8912(struct drm_bridge *b)
+@@ -266,7 +266,7 @@ static int lt8912_video_setup(struct lt8912 *lt)
+ u32 hactive, h_total, hpw, hfp, hbp;
+ u32 vactive, v_total, vpw, vfp, vbp;
+ u8 settle = 0x08;
+- int ret;
++ int ret, hsync_activehigh, vsync_activehigh;
+
+ if (!lt)
+ return -EINVAL;
+@@ -276,12 +276,14 @@ static int lt8912_video_setup(struct lt8912 *lt)
+ hpw = lt->mode.hsync_len;
+ hbp = lt->mode.hback_porch;
+ h_total = hactive + hfp + hpw + hbp;
++ hsync_activehigh = lt->mode.flags & DISPLAY_FLAGS_HSYNC_HIGH;
+
+ vactive = lt->mode.vactive;
+ vfp = lt->mode.vfront_porch;
+ vpw = lt->mode.vsync_len;
+ vbp = lt->mode.vback_porch;
+ v_total = vactive + vfp + vpw + vbp;
++ vsync_activehigh = lt->mode.flags & DISPLAY_FLAGS_VSYNC_HIGH;
+
+ if (vactive <= 600)
+ settle = 0x04;
+@@ -315,6 +317,13 @@ static int lt8912_video_setup(struct lt8912 *lt)
+ ret |= regmap_write(lt->regmap[I2C_CEC_DSI], 0x3e, hfp & 0xff);
+ ret |= regmap_write(lt->regmap[I2C_CEC_DSI], 0x3f, hfp >> 8);
+
++ ret |= regmap_update_bits(lt->regmap[I2C_MAIN], 0xab, BIT(0),
++ vsync_activehigh ? BIT(0) : 0);
++ ret |= regmap_update_bits(lt->regmap[I2C_MAIN], 0xab, BIT(1),
++ hsync_activehigh ? BIT(1) : 0);
++ ret |= regmap_update_bits(lt->regmap[I2C_MAIN], 0xb2, BIT(0),
++ lt->connector.display_info.is_hdmi ? BIT(0) : 0);
++
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+index 298f2cc7a879f..3ca0ae5ed1fb4 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+@@ -155,6 +155,21 @@ struct intel_engine_execlists {
+ */
+ struct timer_list preempt;
+
++ /**
++ * @preempt_target: active request at the time of the preemption request
++ *
++ * We force a preemption to occur if the pending contexts have not
++ * been promoted to active upon receipt of the CS ack event within
++ * the timeout. This timeout may be chosen based on the target,
++ * using a very short timeout if the context is no longer schedulable.
++ * That short timeout may not be applicable to other contexts, so
++ * if a context switch should happen before the preemption
++ * timeout, we may shoot early at an innocent context. To prevent this,
++ * we record which context was active at the time of the preemption
++ * request and only reset that context upon the timeout.
++ */
++ const struct i915_request *preempt_target;
++
+ /**
+ * @ccid: identifier for contexts submitted to this engine
+ */
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index 0627fa10d2dcb..277f9d6551f44 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -1241,6 +1241,9 @@ static unsigned long active_preempt_timeout(struct intel_engine_cs *engine,
+ if (!rq)
+ return 0;
+
++ /* Only allow ourselves to force reset the currently active context */
++ engine->execlists.preempt_target = rq;
++
+ /* Force a fast reset for terminated contexts (ignoring sysfs!) */
+ if (unlikely(intel_context_is_banned(rq->context) || bad_request(rq)))
+ return 1;
+@@ -2427,8 +2430,24 @@ static void execlists_submission_tasklet(struct tasklet_struct *t)
+ GEM_BUG_ON(inactive - post > ARRAY_SIZE(post));
+
+ if (unlikely(preempt_timeout(engine))) {
++ const struct i915_request *rq = *engine->execlists.active;
++
++ /*
++ * If after the preempt-timeout expired, we are still on the
++ * same active request/context as before we initiated the
++ * preemption, reset the engine.
++ *
++ * However, if we have processed a CS event to switch contexts,
++ * but not yet processed the CS event for the pending
++ * preemption, reset the timer allowing the new context to
++ * gracefully exit.
++ */
+ cancel_timer(&engine->execlists.preempt);
+- engine->execlists.error_interrupt |= ERROR_PREEMPT;
++ if (rq == engine->execlists.preempt_target)
++ engine->execlists.error_interrupt |= ERROR_PREEMPT;
++ else
++ set_timer_ms(&engine->execlists.preempt,
++ active_preempt_timeout(engine, rq));
+ }
+
+ if (unlikely(READ_ONCE(engine->execlists.error_interrupt))) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+index f76b6cf8040ec..b8cb58e2819a5 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+@@ -544,8 +544,7 @@ static INTEL_GT_RPS_BOOL_ATTR_RO(throttle_reason_ratl, RATL_MASK);
+ static INTEL_GT_RPS_BOOL_ATTR_RO(throttle_reason_vr_thermalert, VR_THERMALERT_MASK);
+ static INTEL_GT_RPS_BOOL_ATTR_RO(throttle_reason_vr_tdc, VR_TDC_MASK);
+
+-static const struct attribute *freq_attrs[] = {
+- &dev_attr_punit_req_freq_mhz.attr,
++static const struct attribute *throttle_reason_attrs[] = {
+ &attr_throttle_reason_status.attr,
+ &attr_throttle_reason_pl1.attr,
+ &attr_throttle_reason_pl2.attr,
+@@ -594,9 +593,17 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
+ if (!is_object_gt(kobj))
+ return;
+
+- ret = sysfs_create_files(kobj, freq_attrs);
++ ret = sysfs_create_file(kobj, &dev_attr_punit_req_freq_mhz.attr);
+ if (ret)
+ drm_warn(>->i915->drm,
+- "failed to create gt%u throttle sysfs files (%pe)",
++ "failed to create gt%u punit_req_freq_mhz sysfs (%pe)",
+ gt->info.id, ERR_PTR(ret));
++
++ if (GRAPHICS_VER(gt->i915) >= 11) {
++ ret = sysfs_create_files(kobj, throttle_reason_attrs);
++ if (ret)
++ drm_warn(>->i915->drm,
++ "failed to create gt%u throttle sysfs files (%pe)",
++ gt->info.id, ERR_PTR(ret));
++ }
+ }
+diff --git a/drivers/input/keyboard/snvs_pwrkey.c b/drivers/input/keyboard/snvs_pwrkey.c
+index 65286762b02ab..ad8660be0127c 100644
+--- a/drivers/input/keyboard/snvs_pwrkey.c
++++ b/drivers/input/keyboard/snvs_pwrkey.c
+@@ -20,7 +20,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+
+-#define SNVS_HPVIDR1_REG 0xF8
++#define SNVS_HPVIDR1_REG 0xBF8
+ #define SNVS_LPSR_REG 0x4C /* LP Status Register */
+ #define SNVS_LPCR_REG 0x38 /* LP Control Register */
+ #define SNVS_HPSR_REG 0x14
+diff --git a/drivers/input/touchscreen/melfas_mip4.c b/drivers/input/touchscreen/melfas_mip4.c
+index 2745bf1aee381..83f4be05e27b6 100644
+--- a/drivers/input/touchscreen/melfas_mip4.c
++++ b/drivers/input/touchscreen/melfas_mip4.c
+@@ -1453,7 +1453,7 @@ static int mip4_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ "ce", GPIOD_OUT_LOW);
+ if (IS_ERR(ts->gpio_ce)) {
+ error = PTR_ERR(ts->gpio_ce);
+- if (error != EPROBE_DEFER)
++ if (error != -EPROBE_DEFER)
+ dev_err(&client->dev,
+ "Failed to get gpio: %d\n", error);
+ return error;
+diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
+index a1bd6d9c9223c..909df82fed332 100644
+--- a/drivers/media/dvb-core/dvb_vb2.c
++++ b/drivers/media/dvb-core/dvb_vb2.c
+@@ -354,6 +354,12 @@ int dvb_vb2_reqbufs(struct dvb_vb2_ctx *ctx, struct dmx_requestbuffers *req)
+
+ int dvb_vb2_querybuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
+ {
++ struct vb2_queue *q = &ctx->vb_q;
++
++ if (b->index >= q->num_buffers) {
++ dprintk(1, "[%s] buffer index out of range\n", ctx->name);
++ return -EINVAL;
++ }
+ vb2_core_querybuf(&ctx->vb_q, b->index, b);
+ dprintk(3, "[%s] index=%d\n", ctx->name, b->index);
+ return 0;
+@@ -378,8 +384,13 @@ int dvb_vb2_expbuf(struct dvb_vb2_ctx *ctx, struct dmx_exportbuffer *exp)
+
+ int dvb_vb2_qbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
+ {
++ struct vb2_queue *q = &ctx->vb_q;
+ int ret;
+
++ if (b->index >= q->num_buffers) {
++ dprintk(1, "[%s] buffer index out of range\n", ctx->name);
++ return -EINVAL;
++ }
+ ret = vb2_core_qbuf(&ctx->vb_q, b->index, b, NULL);
+ if (ret) {
+ dprintk(1, "[%s] index=%d errno=%d\n", ctx->name,
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_enc_drv.c
+index 95e8c29ccc651..d2f5f30582a9c 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_enc_drv.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_enc_drv.c
+@@ -228,7 +228,6 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ {
+ struct mtk_vcodec_dev *dev;
+ struct video_device *vfd_enc;
+- struct resource *res;
+ phandle rproc_phandle;
+ enum mtk_vcodec_fw_type fw_type;
+ int ret;
+@@ -272,14 +271,12 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ goto err_res;
+ }
+
+- res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+- if (res == NULL) {
+- dev_err(&pdev->dev, "failed to get irq resource");
+- ret = -ENOENT;
++ dev->enc_irq = platform_get_irq(pdev, 0);
++ if (dev->enc_irq < 0) {
++ ret = dev->enc_irq;
+ goto err_res;
+ }
+
+- dev->enc_irq = platform_get_irq(pdev, 0);
+ irq_set_status_flags(dev->enc_irq, IRQ_NOAUTOEN);
+ ret = devm_request_irq(&pdev->dev, dev->enc_irq,
+ mtk_vcodec_enc_irq_handler,
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index 0f3d6b5667b07..55c26e7d370e9 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -1040,6 +1040,8 @@ int v4l2_compat_get_array_args(struct file *file, void *mbuf,
+ {
+ int err = 0;
+
++ memset(mbuf, 0, array_size);
++
+ switch (cmd) {
+ case VIDIOC_G_FMT32:
+ case VIDIOC_S_FMT32:
+diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c
+index a5e05ed0fda3e..9d35453e7371b 100644
+--- a/drivers/mmc/host/mmc_hsq.c
++++ b/drivers/mmc/host/mmc_hsq.c
+@@ -34,7 +34,7 @@ static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
+ spin_lock_irqsave(&hsq->lock, flags);
+
+ /* Make sure we are not already running a request now */
+- if (hsq->mrq) {
++ if (hsq->mrq || hsq->recovery_halt) {
+ spin_unlock_irqrestore(&hsq->lock, flags);
+ return;
+ }
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index b6eb75f4bbfc6..dfc3ffd5b1f8c 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -111,8 +111,8 @@
+ #define CLK_DIV_MASK 0x7f
+
+ /* REG_BUS_WIDTH */
+-#define BUS_WIDTH_8 BIT(2)
+-#define BUS_WIDTH_4 BIT(1)
++#define BUS_WIDTH_4_SUPPORT BIT(3)
++#define BUS_WIDTH_4 BIT(2)
+ #define BUS_WIDTH_1 BIT(0)
+
+ #define MMC_VDD_360 23
+@@ -524,9 +524,6 @@ static void moxart_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ case MMC_BUS_WIDTH_4:
+ writel(BUS_WIDTH_4, host->base + REG_BUS_WIDTH);
+ break;
+- case MMC_BUS_WIDTH_8:
+- writel(BUS_WIDTH_8, host->base + REG_BUS_WIDTH);
+- break;
+ default:
+ writel(BUS_WIDTH_1, host->base + REG_BUS_WIDTH);
+ break;
+@@ -651,16 +648,8 @@ static int moxart_probe(struct platform_device *pdev)
+ dmaengine_slave_config(host->dma_chan_rx, &cfg);
+ }
+
+- switch ((readl(host->base + REG_BUS_WIDTH) >> 3) & 3) {
+- case 1:
++ if (readl(host->base + REG_BUS_WIDTH) & BUS_WIDTH_4_SUPPORT)
+ mmc->caps |= MMC_CAP_4_BIT_DATA;
+- break;
+- case 2:
+- mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA;
+- break;
+- default:
+- break;
+- }
+
+ writel(0, host->base + REG_INTERRUPT_MASK);
+
+diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
+index bd2f6dc011941..e33d5a9676944 100644
+--- a/drivers/net/can/c_can/c_can.h
++++ b/drivers/net/can/c_can/c_can.h
+@@ -235,9 +235,22 @@ static inline u8 c_can_get_tx_tail(const struct c_can_tx_ring *ring)
+ return ring->tail & (ring->obj_num - 1);
+ }
+
+-static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
++static inline u8 c_can_get_tx_free(const struct c_can_priv *priv,
++ const struct c_can_tx_ring *ring)
+ {
+- return ring->obj_num - (ring->head - ring->tail);
++ u8 head = c_can_get_tx_head(ring);
++ u8 tail = c_can_get_tx_tail(ring);
++
++ if (priv->type == BOSCH_D_CAN)
++ return ring->obj_num - (ring->head - ring->tail);
++
++ /* This is not a FIFO. C/D_CAN sends out the buffers
++ * prioritized. The lowest buffer number wins.
++ */
++ if (head < tail)
++ return 0;
++
++ return ring->obj_num - head;
+ }
+
+ #endif /* C_CAN_H */
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index a7362af0babb6..b42264dd7addd 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -429,7 +429,7 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
+ static bool c_can_tx_busy(const struct c_can_priv *priv,
+ const struct c_can_tx_ring *tx_ring)
+ {
+- if (c_can_get_tx_free(tx_ring) > 0)
++ if (c_can_get_tx_free(priv, tx_ring) > 0)
+ return false;
+
+ netif_stop_queue(priv->dev);
+@@ -437,7 +437,7 @@ static bool c_can_tx_busy(const struct c_can_priv *priv,
+ /* Memory barrier before checking tx_free (head and tail) */
+ smp_mb();
+
+- if (c_can_get_tx_free(tx_ring) == 0) {
++ if (c_can_get_tx_free(priv, tx_ring) == 0) {
+ netdev_dbg(priv->dev,
+ "Stopping tx-queue (tx_head=0x%08x, tx_tail=0x%08x, len=%d).\n",
+ tx_ring->head, tx_ring->tail,
+@@ -465,7 +465,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
+
+ idx = c_can_get_tx_head(tx_ring);
+ tx_ring->head++;
+- if (c_can_get_tx_free(tx_ring) == 0)
++ if (c_can_get_tx_free(priv, tx_ring) == 0)
+ netif_stop_queue(dev);
+
+ if (idx < c_can_get_tx_tail(tx_ring))
+@@ -748,7 +748,7 @@ static void c_can_do_tx(struct net_device *dev)
+ return;
+
+ tx_ring->tail += pkts;
+- if (c_can_get_tx_free(tx_ring)) {
++ if (c_can_get_tx_free(priv, tx_ring)) {
+ /* Make sure that anybody stopping the queue after
+ * this sees the new tx_ring->tail.
+ */
+@@ -760,8 +760,7 @@ static void c_can_do_tx(struct net_device *dev)
+ stats->tx_packets += pkts;
+
+ tail = c_can_get_tx_tail(tx_ring);
+-
+- if (tail == 0) {
++ if (priv->type == BOSCH_D_CAN && tail == 0) {
+ u8 head = c_can_get_tx_head(tx_ring);
+
+ /* Start transmission for all cached messages */
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 2b02d823d4977..c2d452a75355c 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -506,14 +506,19 @@ static bool mt7531_dual_sgmii_supported(struct mt7530_priv *priv)
+ static int
+ mt7531_pad_setup(struct dsa_switch *ds, phy_interface_t interface)
+ {
+- struct mt7530_priv *priv = ds->priv;
++ return 0;
++}
++
++static void
++mt7531_pll_setup(struct mt7530_priv *priv)
++{
+ u32 top_sig;
+ u32 hwstrap;
+ u32 xtal;
+ u32 val;
+
+ if (mt7531_dual_sgmii_supported(priv))
+- return 0;
++ return;
+
+ val = mt7530_read(priv, MT7531_CREV);
+ top_sig = mt7530_read(priv, MT7531_TOP_SIG_SR);
+@@ -592,8 +597,6 @@ mt7531_pad_setup(struct dsa_switch *ds, phy_interface_t interface)
+ val |= EN_COREPLL;
+ mt7530_write(priv, MT7531_PLLGP_EN, val);
+ usleep_range(25, 35);
+-
+- return 0;
+ }
+
+ static void
+@@ -2310,6 +2313,8 @@ mt7531_setup(struct dsa_switch *ds)
+ SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST |
+ SYS_CTRL_REG_RST);
+
++ mt7531_pll_setup(priv);
++
+ if (mt7531_dual_sgmii_supported(priv)) {
+ priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII;
+
+@@ -2863,8 +2868,6 @@ mt7531_cpu_port_config(struct dsa_switch *ds, int port)
+ case 6:
+ interface = PHY_INTERFACE_MODE_2500BASEX;
+
+- mt7531_pad_setup(ds, interface);
+-
+ priv->p6_interface = interface;
+ break;
+ default:
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index d89098f4ede80..e9aa41949a4b7 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -5092,6 +5092,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
+ if (!(bp->wol & MACB_WOL_ENABLED)) {
+ rtnl_lock();
+ phylink_stop(bp->phylink);
++ phy_exit(bp->sgmii_phy);
+ rtnl_unlock();
+ spin_lock_irqsave(&bp->lock, flags);
+ macb_reset_hw(bp);
+@@ -5181,6 +5182,9 @@ static int __maybe_unused macb_resume(struct device *dev)
+ macb_set_rx_mode(netdev);
+ macb_restore_features(bp);
+ rtnl_lock();
++ if (!device_may_wakeup(&bp->dev->dev))
++ phy_init(bp->sgmii_phy);
++
+ phylink_start(bp->phylink);
+ rtnl_unlock();
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index a7f291c897021..557c591a6ce3a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -14,6 +14,7 @@
+ #include "cudbg_entity.h"
+ #include "cudbg_lib.h"
+ #include "cudbg_zlib.h"
++#include "cxgb4_tc_mqprio.h"
+
+ static const u32 t6_tp_pio_array[][IREG_NUM_ELEM] = {
+ {0x7e40, 0x7e44, 0x020, 28}, /* t6_tp_pio_regs_20_to_3b */
+@@ -3458,7 +3459,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ for (i = 0; i < utxq->ntxq; i++)
+ QDESC_GET_TXQ(&utxq->uldtxq[i].q,
+ cudbg_uld_txq_to_qtype(j),
+- out_unlock);
++ out_unlock_uld);
+ }
+ }
+
+@@ -3475,7 +3476,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ for (i = 0; i < urxq->nrxq; i++)
+ QDESC_GET_RXQ(&urxq->uldrxq[i].rspq,
+ cudbg_uld_rxq_to_qtype(j),
+- out_unlock);
++ out_unlock_uld);
+ }
+
+ /* ULD FLQ */
+@@ -3487,7 +3488,7 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ for (i = 0; i < urxq->nrxq; i++)
+ QDESC_GET_FLQ(&urxq->uldrxq[i].fl,
+ cudbg_uld_flq_to_qtype(j),
+- out_unlock);
++ out_unlock_uld);
+ }
+
+ /* ULD CIQ */
+@@ -3500,29 +3501,34 @@ int cudbg_collect_qdesc(struct cudbg_init *pdbg_init,
+ for (i = 0; i < urxq->nciq; i++)
+ QDESC_GET_RXQ(&urxq->uldrxq[base + i].rspq,
+ cudbg_uld_ciq_to_qtype(j),
+- out_unlock);
++ out_unlock_uld);
+ }
+ }
++ mutex_unlock(&uld_mutex);
++
++ if (!padap->tc_mqprio)
++ goto out;
+
++ mutex_lock(&padap->tc_mqprio->mqprio_mutex);
+ /* ETHOFLD TXQ */
+ if (s->eohw_txq)
+ for (i = 0; i < s->eoqsets; i++)
+ QDESC_GET_TXQ(&s->eohw_txq[i].q,
+- CUDBG_QTYPE_ETHOFLD_TXQ, out);
++ CUDBG_QTYPE_ETHOFLD_TXQ, out_unlock_mqprio);
+
+ /* ETHOFLD RXQ and FLQ */
+ if (s->eohw_rxq) {
+ for (i = 0; i < s->eoqsets; i++)
+ QDESC_GET_RXQ(&s->eohw_rxq[i].rspq,
+- CUDBG_QTYPE_ETHOFLD_RXQ, out);
++ CUDBG_QTYPE_ETHOFLD_RXQ, out_unlock_mqprio);
+
+ for (i = 0; i < s->eoqsets; i++)
+ QDESC_GET_FLQ(&s->eohw_rxq[i].fl,
+- CUDBG_QTYPE_ETHOFLD_FLQ, out);
++ CUDBG_QTYPE_ETHOFLD_FLQ, out_unlock_mqprio);
+ }
+
+-out_unlock:
+- mutex_unlock(&uld_mutex);
++out_unlock_mqprio:
++ mutex_unlock(&padap->tc_mqprio->mqprio_mutex);
+
+ out:
+ qdesc_info->qdesc_entry_size = sizeof(*qdesc_entry);
+@@ -3559,6 +3565,10 @@ out_free:
+ #undef QDESC_GET
+
+ return rc;
++
++out_unlock_uld:
++ mutex_unlock(&uld_mutex);
++ goto out;
+ }
+
+ int cudbg_collect_flash(struct cudbg_init *pdbg_init,
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 97453d1dfafed..dd2285d4bef47 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1467,7 +1467,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
+ bool wd;
+
+ if (tx_ring->xsk_pool)
+- wd = ice_xmit_zc(tx_ring, ICE_DESC_UNUSED(tx_ring), budget);
++ wd = ice_xmit_zc(tx_ring);
+ else if (ice_ring_is_xdp(tx_ring))
+ wd = true;
+ else
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 03ce85f6e6df8..056c904b83ccb 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -392,13 +392,6 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ goto failure;
+ }
+
+- if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
+- !is_power_of_2(vsi->tx_rings[qid]->count)) {
+- netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n");
+- pool_failure = -EINVAL;
+- goto failure;
+- }
+-
+ if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
+
+ if (if_running) {
+@@ -534,11 +527,10 @@ exit:
+ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+ {
+ u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
+- u16 batched, leftover, i, tail_bumps;
++ u16 leftover, i, tail_bumps;
+
+- batched = ALIGN_DOWN(count, rx_thresh);
+- tail_bumps = batched / rx_thresh;
+- leftover = count & (rx_thresh - 1);
++ tail_bumps = count / rx_thresh;
++ leftover = count - (tail_bumps * rx_thresh);
+
+ for (i = 0; i < tail_bumps; i++)
+ if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
+@@ -788,69 +780,57 @@ ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
+ }
+
+ /**
+- * ice_clean_xdp_irq_zc - Reclaim resources after transmit completes on XDP ring
+- * @xdp_ring: XDP ring to clean
+- * @napi_budget: amount of descriptors that NAPI allows us to clean
+- *
+- * Returns count of cleaned descriptors
++ * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
++ * @xdp_ring: XDP Tx ring
+ */
+-static u16 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring, int napi_budget)
++static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+ {
+- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
+- int budget = napi_budget / tx_thresh;
+- u16 next_dd = xdp_ring->next_dd;
+- u16 ntc, cleared_dds = 0;
+-
+- do {
+- struct ice_tx_desc *next_dd_desc;
+- u16 desc_cnt = xdp_ring->count;
+- struct ice_tx_buf *tx_buf;
+- u32 xsk_frames;
+- u16 i;
+-
+- next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd);
+- if (!(next_dd_desc->cmd_type_offset_bsz &
+- cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)))
+- break;
++ u16 ntc = xdp_ring->next_to_clean;
++ struct ice_tx_desc *tx_desc;
++ u16 cnt = xdp_ring->count;
++ struct ice_tx_buf *tx_buf;
++ u16 xsk_frames = 0;
++ u16 last_rs;
++ int i;
+
+- cleared_dds++;
+- xsk_frames = 0;
+- if (likely(!xdp_ring->xdp_tx_active)) {
+- xsk_frames = tx_thresh;
+- goto skip;
+- }
++ last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
++ tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
++ if ((tx_desc->cmd_type_offset_bsz &
++ cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
++ if (last_rs >= ntc)
++ xsk_frames = last_rs - ntc + 1;
++ else
++ xsk_frames = last_rs + cnt - ntc + 1;
++ }
+
+- ntc = xdp_ring->next_to_clean;
++ if (!xsk_frames)
++ return;
+
+- for (i = 0; i < tx_thresh; i++) {
+- tx_buf = &xdp_ring->tx_buf[ntc];
++ if (likely(!xdp_ring->xdp_tx_active))
++ goto skip;
+
+- if (tx_buf->raw_buf) {
+- ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
+- tx_buf->raw_buf = NULL;
+- } else {
+- xsk_frames++;
+- }
++ ntc = xdp_ring->next_to_clean;
++ for (i = 0; i < xsk_frames; i++) {
++ tx_buf = &xdp_ring->tx_buf[ntc];
+
+- ntc++;
+- if (ntc >= xdp_ring->count)
+- ntc = 0;
++ if (tx_buf->raw_buf) {
++ ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
++ tx_buf->raw_buf = NULL;
++ } else {
++ xsk_frames++;
+ }
++
++ ntc++;
++ if (ntc >= xdp_ring->count)
++ ntc = 0;
++ }
+ skip:
+- xdp_ring->next_to_clean += tx_thresh;
+- if (xdp_ring->next_to_clean >= desc_cnt)
+- xdp_ring->next_to_clean -= desc_cnt;
+- if (xsk_frames)
+- xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+- next_dd_desc->cmd_type_offset_bsz = 0;
+- next_dd = next_dd + tx_thresh;
+- if (next_dd >= desc_cnt)
+- next_dd = tx_thresh - 1;
+- } while (--budget);
+-
+- xdp_ring->next_dd = next_dd;
+-
+- return cleared_dds * tx_thresh;
++ tx_desc->cmd_type_offset_bsz = 0;
++ xdp_ring->next_to_clean += xsk_frames;
++ if (xdp_ring->next_to_clean >= cnt)
++ xdp_ring->next_to_clean -= cnt;
++ if (xsk_frames)
++ xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+ }
+
+ /**
+@@ -885,7 +865,6 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
+ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+ unsigned int *total_bytes)
+ {
+- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
+ u16 ntu = xdp_ring->next_to_use;
+ struct ice_tx_desc *tx_desc;
+ u32 i;
+@@ -905,13 +884,6 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
+ }
+
+ xdp_ring->next_to_use = ntu;
+-
+- if (xdp_ring->next_to_use > xdp_ring->next_rs) {
+- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
+- tx_desc->cmd_type_offset_bsz |=
+- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
+- xdp_ring->next_rs += tx_thresh;
+- }
+ }
+
+ /**
+@@ -924,7 +896,6 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
+ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+ u32 nb_pkts, unsigned int *total_bytes)
+ {
+- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
+ u32 batched, leftover, i;
+
+ batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
+@@ -933,54 +904,54 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d
+ ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
+ for (; i < batched + leftover; i++)
+ ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
++}
+
+- if (xdp_ring->next_to_use > xdp_ring->next_rs) {
+- struct ice_tx_desc *tx_desc;
++/**
++ * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU)
++ * @xdp_ring: XDP ring to produce the HW Tx descriptors on
++ */
++static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring)
++{
++ u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
++ struct ice_tx_desc *tx_desc;
+
+- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
+- tx_desc->cmd_type_offset_bsz |=
+- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
+- xdp_ring->next_rs += tx_thresh;
+- }
++ tx_desc = ICE_TX_DESC(xdp_ring, ntu);
++ tx_desc->cmd_type_offset_bsz |=
++ cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
+ }
+
+ /**
+ * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
+ * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+- * @budget: number of free descriptors on HW Tx ring that can be used
+- * @napi_budget: amount of descriptors that NAPI allows us to clean
+ *
+ * Returns true if there is no more work that needs to be done, false otherwise
+ */
+-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget)
++bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
+ {
+ struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
+ u32 nb_pkts, nb_processed = 0;
+ unsigned int total_bytes = 0;
++ int budget;
++
++ ice_clean_xdp_irq_zc(xdp_ring);
+
+- if (budget < tx_thresh)
+- budget += ice_clean_xdp_irq_zc(xdp_ring, napi_budget);
++ budget = ICE_DESC_UNUSED(xdp_ring);
++ budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
+
+ nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+ if (!nb_pkts)
+ return true;
+
+ if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
+- struct ice_tx_desc *tx_desc;
+-
+ nb_processed = xdp_ring->count - xdp_ring->next_to_use;
+ ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
+- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
+- tx_desc->cmd_type_offset_bsz |=
+- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
+- xdp_ring->next_rs = tx_thresh - 1;
+ xdp_ring->next_to_use = 0;
+ }
+
+ ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
+ &total_bytes);
+
++ ice_set_rs_bit(xdp_ring);
+ ice_xdp_ring_update_tail(xdp_ring);
+ ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);
+
+@@ -1058,14 +1029,16 @@ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi)
+ */
+ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
+ {
+- u16 count_mask = rx_ring->count - 1;
+ u16 ntc = rx_ring->next_to_clean;
+ u16 ntu = rx_ring->next_to_use;
+
+- for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
++ while (ntc != ntu) {
+ struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);
+
+ xsk_buff_free(xdp);
++ ntc++;
++ if (ntc >= rx_ring->count)
++ ntc = 0;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
+index 4edbe81eb6460..6fa181f080ef1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
+@@ -26,13 +26,10 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
+ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
+ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
+ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
+-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget);
++bool ice_xmit_zc(struct ice_tx_ring *xdp_ring);
+ int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc);
+ #else
+-static inline bool
+-ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
+- u32 __always_unused budget,
+- int __always_unused napi_budget)
++static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring)
+ {
+ return false;
+ }
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 98d6a6d047e32..c1fe1a2cb7460 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -312,8 +312,8 @@
+ #define MTK_RXD5_PPE_CPU_REASON GENMASK(22, 18)
+ #define MTK_RXD5_SRC_PORT GENMASK(29, 26)
+
+-#define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0xf)
+-#define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0x7)
++#define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0x7)
++#define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0xf)
+
+ /* PDMA V2 descriptor rxd3 */
+ #define RX_DMA_VTAG_V2 BIT(0)
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+index 4aeb927c37153..aa780b1614a3d 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+@@ -246,8 +246,8 @@ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
+ }
+
+ priv->clk_io = devm_ioremap(dev, res->start, resource_size(res));
+- if (IS_ERR(priv->clk_io))
+- return PTR_ERR(priv->clk_io);
++ if (!priv->clk_io)
++ return -ENOMEM;
+
+ mlxbf_gige_mdio_cfg(priv);
+
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 68991b021c560..c250ad6dc956d 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -290,6 +290,13 @@ static int ocelot_port_num_untagged_vlans(struct ocelot *ocelot, int port)
+ if (!(vlan->portmask & BIT(port)))
+ continue;
+
++ /* Ignore the VLAN added by ocelot_add_vlan_unaware_pvid(),
++ * because this is never active in hardware at the same time as
++ * the bridge VLANs, which only matter in VLAN-aware mode.
++ */
++ if (vlan->vid >= OCELOT_RSV_VLAN_RANGE_START)
++ continue;
++
+ if (vlan->untagged & BIT(port))
+ num_untagged++;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 78f11dabca056..8d9272f01e312 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3704,6 +3704,15 @@ static int stmmac_open(struct net_device *dev)
+ goto init_error;
+ }
+
++ if (priv->plat->serdes_powerup) {
++ ret = priv->plat->serdes_powerup(dev, priv->plat->bsp_priv);
++ if (ret < 0) {
++ netdev_err(priv->dev, "%s: Serdes powerup failed\n",
++ __func__);
++ goto init_error;
++ }
++ }
++
+ ret = stmmac_hw_setup(dev, true);
+ if (ret < 0) {
+ netdev_err(priv->dev, "%s: Hw setup failed\n", __func__);
+@@ -3793,6 +3802,10 @@ static int stmmac_release(struct net_device *dev)
+ /* Disable the MAC Rx/Tx */
+ stmmac_mac_set(priv, priv->ioaddr, false);
+
++	/* Powerdown Serdes if there is one */
++ if (priv->plat->serdes_powerdown)
++ priv->plat->serdes_powerdown(dev, priv->plat->bsp_priv);
++
+ netif_carrier_off(dev);
+
+ stmmac_release_ptp(priv);
+@@ -7158,14 +7171,6 @@ int stmmac_dvr_probe(struct device *device,
+ goto error_netdev_register;
+ }
+
+- if (priv->plat->serdes_powerup) {
+- ret = priv->plat->serdes_powerup(ndev,
+- priv->plat->bsp_priv);
+-
+- if (ret < 0)
+- goto error_serdes_powerup;
+- }
+-
+ #ifdef CONFIG_DEBUG_FS
+ stmmac_init_fs(ndev);
+ #endif
+@@ -7180,8 +7185,6 @@ int stmmac_dvr_probe(struct device *device,
+
+ return ret;
+
+-error_serdes_powerup:
+- unregister_netdev(ndev);
+ error_netdev_register:
+ phylink_destroy(priv->phylink);
+ error_xpcs_setup:
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index f90a21781d8d6..adc9d97cbb88c 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -316,11 +316,13 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+
+ phydev->suspended_by_mdio_bus = 0;
+
+- /* If we manged to get here with the PHY state machine in a state neither
+- * PHY_HALTED nor PHY_READY this is an indication that something went wrong
+- * and we should most likely be using MAC managed PM and we are not.
++ /* If we managed to get here with the PHY state machine in a state
++ * neither PHY_HALTED, PHY_READY nor PHY_UP, this is an indication
++ * that something went wrong and we should most likely be using
++ * MAC managed PM, but we are not.
+ */
+- WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY);
++ WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY &&
++ phydev->state != PHY_UP);
+
+ ret = phy_init_hw(phydev);
+ if (ret < 0)
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 571a399c195dd..c1d4fb62f6dd0 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1399,6 +1399,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */
++ {QMI_FIXED_INTF(0x413c, 0x81c2, 8)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81cc, 8)}, /* Dell Wireless 5816e */
+ {QMI_FIXED_INTF(0x413c, 0x81d7, 0)}, /* Dell Wireless 5821e */
+ {QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e preproduction config */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 0ed09bb91c442..bccf63aac6cd6 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1601,6 +1601,7 @@ void usbnet_disconnect (struct usb_interface *intf)
+ struct usbnet *dev;
+ struct usb_device *xdev;
+ struct net_device *net;
++ struct urb *urb;
+
+ dev = usb_get_intfdata(intf);
+ usb_set_intfdata(intf, NULL);
+@@ -1617,7 +1618,11 @@ void usbnet_disconnect (struct usb_interface *intf)
+ net = dev->net;
+ unregister_netdev (net);
+
+- usb_scuttle_anchored_urbs(&dev->deferred);
++ while ((urb = usb_get_from_anchor(&dev->deferred))) {
++ dev_kfree_skb(urb->context);
++ kfree(urb->sg);
++ usb_free_urb(urb);
++ }
+
+ if (dev->driver_info->unbind)
+ dev->driver_info->unbind(dev, intf);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6d76fc608b741..326ad33537ede 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2069,14 +2069,14 @@ static int nvme_pr_preempt(struct block_device *bdev, u64 old, u64 new,
+
+ static int nvme_pr_clear(struct block_device *bdev, u64 key)
+ {
+- u32 cdw10 = 1 | (key ? 1 << 3 : 0);
++ u32 cdw10 = 1 | (key ? 0 : 1 << 3);
+
+- return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_register);
++ return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
+ }
+
+ static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type)
+ {
+- u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 1 << 3 : 0);
++ u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 0 : 1 << 3);
+
+ return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
+ }
+diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c
+index 185a333df66c5..d2408725eb2c3 100644
+--- a/drivers/reset/reset-imx7.c
++++ b/drivers/reset/reset-imx7.c
+@@ -329,6 +329,7 @@ static int imx8mp_reset_set(struct reset_controller_dev *rcdev,
+ break;
+
+ case IMX8MP_RESET_PCIE_CTRL_APPS_EN:
++ case IMX8MP_RESET_PCIEPHY_PERST:
+ value = assert ? 0 : bit;
+ break;
+ }
+diff --git a/drivers/soc/sunxi/sunxi_sram.c b/drivers/soc/sunxi/sunxi_sram.c
+index a8f3876963a08..09754cd1d57dc 100644
+--- a/drivers/soc/sunxi/sunxi_sram.c
++++ b/drivers/soc/sunxi/sunxi_sram.c
+@@ -78,8 +78,8 @@ static struct sunxi_sram_desc sun4i_a10_sram_d = {
+
+ static struct sunxi_sram_desc sun50i_a64_sram_c = {
+ .data = SUNXI_SRAM_DATA("C", 0x4, 24, 1,
+- SUNXI_SRAM_MAP(0, 1, "cpu"),
+- SUNXI_SRAM_MAP(1, 0, "de2")),
++ SUNXI_SRAM_MAP(1, 0, "cpu"),
++ SUNXI_SRAM_MAP(0, 1, "de2")),
+ };
+
+ static const struct of_device_id sunxi_sram_dt_ids[] = {
+@@ -254,6 +254,7 @@ int sunxi_sram_claim(struct device *dev)
+ writel(val | ((device << sram_data->offset) & mask),
+ base + sram_data->reg);
+
++ sram_desc->claimed = true;
+ spin_unlock(&sram_lock);
+
+ return 0;
+@@ -329,11 +330,11 @@ static struct regmap_config sunxi_sram_emac_clock_regmap = {
+ .writeable_reg = sunxi_sram_regmap_accessible_reg,
+ };
+
+-static int sunxi_sram_probe(struct platform_device *pdev)
++static int __init sunxi_sram_probe(struct platform_device *pdev)
+ {
+- struct dentry *d;
+ struct regmap *emac_clock;
+ const struct sunxi_sramc_variant *variant;
++ struct device *dev = &pdev->dev;
+
+ sram_dev = &pdev->dev;
+
+@@ -345,13 +346,6 @@ static int sunxi_sram_probe(struct platform_device *pdev)
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+- of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+-
+- d = debugfs_create_file("sram", S_IRUGO, NULL, NULL,
+- &sunxi_sram_fops);
+- if (!d)
+- return -ENOMEM;
+-
+ if (variant->num_emac_clocks > 0) {
+ emac_clock = devm_regmap_init_mmio(&pdev->dev, base,
+ &sunxi_sram_emac_clock_regmap);
+@@ -360,6 +354,10 @@ static int sunxi_sram_probe(struct platform_device *pdev)
+ return PTR_ERR(emac_clock);
+ }
+
++ of_platform_populate(dev->of_node, NULL, NULL, dev);
++
++ debugfs_create_file("sram", 0444, NULL, NULL, &sunxi_sram_fops);
++
+ return 0;
+ }
+
+@@ -409,9 +407,8 @@ static struct platform_driver sunxi_sram_driver = {
+ .name = "sunxi-sram",
+ .of_match_table = sunxi_sram_dt_match,
+ },
+- .probe = sunxi_sram_probe,
+ };
+-module_platform_driver(sunxi_sram_driver);
++builtin_platform_driver_probe(sunxi_sram_driver, sunxi_sram_probe);
+
+ MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>");
+ MODULE_DESCRIPTION("Allwinner sunXi SRAM Controller Driver");
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 2992fb87cf723..55596ce6bb6e4 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -1175,8 +1175,8 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx)
+
+ schedule_delayed_work(&rkvdec->watchdog_work, msecs_to_jiffies(2000));
+
+- writel(0xffffffff, rkvdec->regs + RKVDEC_REG_STRMD_ERR_EN);
+- writel(0xffffffff, rkvdec->regs + RKVDEC_REG_H264_ERR_E);
++ writel(0, rkvdec->regs + RKVDEC_REG_STRMD_ERR_EN);
++ writel(0, rkvdec->regs + RKVDEC_REG_H264_ERR_E);
+ writel(1, rkvdec->regs + RKVDEC_REG_PREF_LUMA_CACHE_COMMAND);
+ writel(1, rkvdec->regs + RKVDEC_REG_PREF_CHR_CACHE_COMMAND);
+
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 64f0aec7e70ae..0508da6f63d9e 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2413,6 +2413,7 @@ int tb_switch_configure(struct tb_switch *sw)
+ * additional capabilities.
+ */
+ sw->config.cmuv = USB4_VERSION_1_0;
++ sw->config.plug_events_delay = 0xa;
+
+ /* Enumerate the switch */
+ ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 23ab3b048d9be..251778d14e2dd 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -52,6 +52,13 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x090c, 0x2000, 0x0000, 0x9999,
++ "Hiksemi",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_UAS),
++
+ /*
+ * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+ * commands in UAS mode. Observed with the 1.28 firmware; are there others?
+@@ -76,6 +83,13 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_LUNS),
+
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x0bda, 0x9210, 0x0000, 0x9999,
++ "Hiksemi",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_UAS),
++
+ /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
+ UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ "Initio Corporation",
+@@ -118,6 +132,13 @@ UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
++/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
++UNUSUAL_DEV(0x17ef, 0x3899, 0x0000, 0x9999,
++ "Thinkplus",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_UAS),
++
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ "VIA",
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 7f2624f427241..6364f0d467ea3 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -588,8 +588,6 @@ static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner,
+ num_pdos * sizeof(u32));
+ if (ret < 0 && ret != -ETIMEDOUT)
+ dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
+- if (ret == 0 && offset == 0)
+- dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
+
+ return ret;
+ }
+diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
+index 48c4dadb0c7c7..a4c1b985f79a7 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_base.c
++++ b/drivers/vdpa/ifcvf/ifcvf_base.c
+@@ -315,7 +315,7 @@ u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid)
+ u32 q_pair_id;
+
+ ifcvf_lm = (struct ifcvf_lm_cfg __iomem *)hw->lm_cfg;
+- q_pair_id = qid / hw->nr_vring;
++ q_pair_id = qid / 2;
+ avail_idx_addr = &ifcvf_lm->vring_lm_cfg[q_pair_id].idx_addr[qid % 2];
+ last_avail_idx = vp_ioread16(avail_idx_addr);
+
+@@ -329,7 +329,7 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num)
+ u32 q_pair_id;
+
+ ifcvf_lm = (struct ifcvf_lm_cfg __iomem *)hw->lm_cfg;
+- q_pair_id = qid / hw->nr_vring;
++ q_pair_id = qid / 2;
+ avail_idx_addr = &ifcvf_lm->vring_lm_cfg[q_pair_id].idx_addr[qid % 2];
+ hw->vring[qid].last_avail_idx = num;
+ vp_iowrite16(num, avail_idx_addr);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index e85c1d71f4ed2..f527cbeb11699 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1297,6 +1297,8 @@ static void teardown_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *
+
+ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ {
++ int rqt_table_size = roundup_pow_of_two(ndev->rqt_size);
++ int act_sz = roundup_pow_of_two(ndev->cur_num_vqs / 2);
+ __be32 *list;
+ void *rqtc;
+ int inlen;
+@@ -1304,7 +1306,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ int i, j;
+ int err;
+
+- inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + ndev->rqt_size * MLX5_ST_SZ_BYTES(rq_num);
++ inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + rqt_table_size * MLX5_ST_SZ_BYTES(rq_num);
+ in = kzalloc(inlen, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+@@ -1313,12 +1315,12 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ rqtc = MLX5_ADDR_OF(create_rqt_in, in, rqt_context);
+
+ MLX5_SET(rqtc, rqtc, list_q_type, MLX5_RQTC_LIST_Q_TYPE_VIRTIO_NET_Q);
+- MLX5_SET(rqtc, rqtc, rqt_max_size, ndev->rqt_size);
++ MLX5_SET(rqtc, rqtc, rqt_max_size, rqt_table_size);
+ list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
+- for (i = 0, j = 0; i < ndev->rqt_size; i++, j += 2)
++ for (i = 0, j = 0; i < act_sz; i++, j += 2)
+ list[i] = cpu_to_be32(ndev->vqs[j % ndev->cur_num_vqs].virtq_id);
+
+- MLX5_SET(rqtc, rqtc, rqt_actual_size, ndev->rqt_size);
++ MLX5_SET(rqtc, rqtc, rqt_actual_size, act_sz);
+ err = mlx5_vdpa_create_rqt(&ndev->mvdev, in, inlen, &ndev->res.rqtn);
+ kfree(in);
+ if (err)
+@@ -1331,6 +1333,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
+
+ static int modify_rqt(struct mlx5_vdpa_net *ndev, int num)
+ {
++ int act_sz = roundup_pow_of_two(num / 2);
+ __be32 *list;
+ void *rqtc;
+ int inlen;
+@@ -1338,7 +1341,7 @@ static int modify_rqt(struct mlx5_vdpa_net *ndev, int num)
+ int i, j;
+ int err;
+
+- inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + ndev->rqt_size * MLX5_ST_SZ_BYTES(rq_num);
++ inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + act_sz * MLX5_ST_SZ_BYTES(rq_num);
+ in = kzalloc(inlen, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+@@ -1349,10 +1352,10 @@ static int modify_rqt(struct mlx5_vdpa_net *ndev, int num)
+ MLX5_SET(rqtc, rqtc, list_q_type, MLX5_RQTC_LIST_Q_TYPE_VIRTIO_NET_Q);
+
+ list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
+- for (i = 0, j = 0; i < ndev->rqt_size; i++, j += 2)
++ for (i = 0, j = 0; i < act_sz; i++, j = j + 2)
+ list[i] = cpu_to_be32(ndev->vqs[j % num].virtq_id);
+
+- MLX5_SET(rqtc, rqtc, rqt_actual_size, ndev->rqt_size);
++ MLX5_SET(rqtc, rqtc, rqt_actual_size, act_sz);
+ err = mlx5_vdpa_modify_rqt(&ndev->mvdev, in, inlen, ndev->res.rqtn);
+ kfree(in);
+ if (err)
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 3bc27de58f46b..8e0efae6cc8ad 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -662,10 +662,15 @@ static void vduse_vdpa_get_config(struct vdpa_device *vdpa, unsigned int offset,
+ {
+ struct vduse_dev *dev = vdpa_to_vduse(vdpa);
+
+- if (offset > dev->config_size ||
+- len > dev->config_size - offset)
++ /* Initialize the buffer in case of partial copy. */
++ memset(buf, 0, len);
++
++ if (offset > dev->config_size)
+ return;
+
++ if (len > dev->config_size - offset)
++ len = dev->config_size - offset;
++
+ memcpy(buf, dev->config + offset, len);
+ }
+
+diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
+index 5ae8de09b271b..001f4e053c85a 100644
+--- a/fs/ntfs/super.c
++++ b/fs/ntfs/super.c
+@@ -2092,7 +2092,8 @@ get_ctx_vol_failed:
+ // TODO: Initialize security.
+ /* Get the extended system files' directory inode. */
+ vol->extend_ino = ntfs_iget(sb, FILE_Extend);
+- if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino)) {
++ if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino) ||
++ !S_ISDIR(vol->extend_ino->i_mode)) {
+ if (!IS_ERR(vol->extend_ino))
+ iput(vol->extend_ino);
+ ntfs_error(sb, "Failed to load $Extend.");
+diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
+index 53ba8b1e619ca..89075fa4e8a9a 100644
+--- a/mm/damon/dbgfs.c
++++ b/mm/damon/dbgfs.c
+@@ -853,6 +853,7 @@ static int dbgfs_rm_context(char *name)
+ struct dentry *root, *dir, **new_dirs;
+ struct damon_ctx **new_ctxs;
+ int i, j;
++ int ret = 0;
+
+ if (damon_nr_running_ctxs())
+ return -EBUSY;
+@@ -867,14 +868,16 @@ static int dbgfs_rm_context(char *name)
+
+ new_dirs = kmalloc_array(dbgfs_nr_ctxs - 1, sizeof(*dbgfs_dirs),
+ GFP_KERNEL);
+- if (!new_dirs)
+- return -ENOMEM;
++ if (!new_dirs) {
++ ret = -ENOMEM;
++ goto out_dput;
++ }
+
+ new_ctxs = kmalloc_array(dbgfs_nr_ctxs - 1, sizeof(*dbgfs_ctxs),
+ GFP_KERNEL);
+ if (!new_ctxs) {
+- kfree(new_dirs);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto out_new_dirs;
+ }
+
+ for (i = 0, j = 0; i < dbgfs_nr_ctxs; i++) {
+@@ -894,7 +897,13 @@ static int dbgfs_rm_context(char *name)
+ dbgfs_ctxs = new_ctxs;
+ dbgfs_nr_ctxs--;
+
+- return 0;
++ goto out_dput;
++
++out_new_dirs:
++ kfree(new_dirs);
++out_dput:
++ dput(dir);
++ return ret;
+ }
+
+ static ssize_t dbgfs_rm_context_write(struct file *file,
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 09f9e8ca3d1fa..5b5ee3308d71b 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -2181,13 +2181,13 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
+
+ if (!t)
+ return -ENOMEM;
++ damon_add_target(ctx, t);
+ if (ctx->ops.id == DAMON_OPS_VADDR ||
+ ctx->ops.id == DAMON_OPS_FVADDR) {
+ t->pid = find_get_pid(sys_target->pid);
+ if (!t->pid)
+ goto destroy_targets_out;
+ }
+- damon_add_target(ctx, t);
+ err = damon_sysfs_set_regions(t, sys_target->regions);
+ if (err)
+ goto destroy_targets_out;
+diff --git a/mm/frontswap.c b/mm/frontswap.c
+index 6f69b044a8cc7..42262cb6a8646 100644
+--- a/mm/frontswap.c
++++ b/mm/frontswap.c
+@@ -125,6 +125,9 @@ void frontswap_init(unsigned type, unsigned long *map)
+ * p->frontswap set to something valid to work properly.
+ */
+ frontswap_map_set(sis, map);
++
++ if (!frontswap_enabled())
++ return;
+ frontswap_ops->init(type);
+ }
+
+diff --git a/mm/gup.c b/mm/gup.c
+index 38effce68b48d..0d500cdfa6e0e 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2278,8 +2278,28 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
+ }
+
+ #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
+-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+- unsigned int flags, struct page **pages, int *nr)
++/*
++ * Fast-gup relies on pte change detection to avoid concurrent pgtable
++ * operations.
++ *
++ * To pin the page, fast-gup needs to do below in order:
++ * (1) pin the page (by prefetching pte), then (2) check pte not changed.
++ *
++ * For the rest of pgtable operations where pgtable updates can be racy
++ * with fast-gup, we need to do (1) clear pte, then (2) check whether page
++ * is pinned.
++ *
++ * Above will work for all pte-level operations, including THP split.
++ *
++ * For THP collapse, it's a bit more complicated because fast-gup may be
++ * walking a pgtable page that is being freed (pte is still valid but pmd
++ * can be cleared already). To avoid a race in such a condition, we need
++ * to also check the pmd here to make sure it doesn't change (corresponds to
++ * pmdp_collapse_flush() in the THP collapse code path).
++ */
++static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
++ unsigned long end, unsigned int flags,
++ struct page **pages, int *nr)
+ {
+ struct dev_pagemap *pgmap = NULL;
+ int nr_start = *nr, ret = 0;
+@@ -2325,7 +2345,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+ goto pte_unmap;
+ }
+
+- if (unlikely(pte_val(pte) != pte_val(*ptep))) {
++ if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
++ unlikely(pte_val(pte) != pte_val(*ptep))) {
+ gup_put_folio(folio, 1, flags);
+ goto pte_unmap;
+ }
+@@ -2372,8 +2393,9 @@ pte_unmap:
+ * get_user_pages_fast_only implementation that can pin pages. Thus it's still
+ * useful to have gup_huge_pmd even if we can't operate on ptes.
+ */
+-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+- unsigned int flags, struct page **pages, int *nr)
++static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
++ unsigned long end, unsigned int flags,
++ struct page **pages, int *nr)
+ {
+ return 0;
+ }
+@@ -2697,7 +2719,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
+ if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
+ PMD_SHIFT, next, flags, pages, nr))
+ return 0;
+- } else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
++ } else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
+ return 0;
+ } while (pmdp++, addr = next, addr != end);
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 299dcfaa35b25..b508efbdcdbed 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3418,6 +3418,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
+ {
+ int i, nid = page_to_nid(page);
+ struct hstate *target_hstate;
++ struct page *subpage;
+ int rc = 0;
+
+ target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
+@@ -3451,15 +3452,16 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
+ mutex_lock(&target_hstate->resize_lock);
+ for (i = 0; i < pages_per_huge_page(h);
+ i += pages_per_huge_page(target_hstate)) {
++ subpage = nth_page(page, i);
+ if (hstate_is_gigantic(target_hstate))
+- prep_compound_gigantic_page_for_demote(page + i,
++ prep_compound_gigantic_page_for_demote(subpage,
+ target_hstate->order);
+ else
+- prep_compound_page(page + i, target_hstate->order);
+- set_page_private(page + i, 0);
+- set_page_refcounted(page + i);
+- prep_new_huge_page(target_hstate, page + i, nid);
+- put_page(page + i);
++ prep_compound_page(subpage, target_hstate->order);
++ set_page_private(subpage, 0);
++ set_page_refcounted(subpage);
++ prep_new_huge_page(target_hstate, subpage, nid);
++ put_page(subpage);
+ }
+ mutex_unlock(&target_hstate->resize_lock);
+
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 16be62d493cd9..6c16db25ff8e3 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1121,10 +1121,12 @@ static void collapse_huge_page(struct mm_struct *mm,
+
+ pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
+ /*
+- * After this gup_fast can't run anymore. This also removes
+- * any huge TLB entry from the CPU so we won't allow
+- * huge and small TLB entries for the same virtual address
+- * to avoid the risk of CPU bugs in that area.
++ * This removes any huge TLB entry from the CPU so we won't allow
++ * huge and small TLB entries for the same virtual address to
++ * avoid the risk of CPU bugs in that area.
++ *
++ * Parallel fast GUP is fine since fast GUP will back off when
++ * it detects PMD is changed.
+ */
+ _pmd = pmdp_collapse_flush(vma, address, pmd);
+ spin_unlock(pmd_ptl);
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 0316bbc6441b2..bb4a714fea5e7 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -451,8 +451,11 @@ regular_page:
+ continue;
+ }
+
+- /* Do not interfere with other mappings of this page */
+- if (page_mapcount(page) != 1)
++ /*
++		 * Do not interfere with other mappings of this page or
++		 * with a non-LRU page.
++ */
++ if (!PageLRU(page) || page_mapcount(page) != 1)
+ continue;
+
+ VM_BUG_ON_PAGE(PageTransCompound(page), page);
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 845369f839e19..828801eab6aca 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -697,6 +697,9 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
+ };
+ priv.tk.tsk = p;
+
++ if (!p->mm)
++ return -EFAULT;
++
+ mmap_read_lock(p->mm);
+ ret = walk_page_range(p->mm, 0, TASK_SIZE, &hwp_walk_ops,
+ (void *)&priv);
+diff --git a/mm/memory.c b/mm/memory.c
+index 1c6027adc5426..e644f6fad3892 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4378,14 +4378,20 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
+
+ vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+ vmf->address, &vmf->ptl);
+- ret = 0;
++
+ /* Re-check under ptl */
+- if (likely(!vmf_pte_changed(vmf)))
++ if (likely(!vmf_pte_changed(vmf))) {
+ do_set_pte(vmf, page, vmf->address);
+- else
++
++ /* no need to invalidate: a not-present page won't be cached */
++ update_mmu_cache(vma, vmf->address, vmf->pte);
++
++ ret = 0;
++ } else {
++ update_mmu_tlb(vma, vmf->address, vmf->pte);
+ ret = VM_FAULT_NOPAGE;
++ }
+
+- update_mmu_tlb(vma, vmf->address, vmf->pte);
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ return ret;
+ }
+diff --git a/mm/migrate_device.c b/mm/migrate_device.c
+index 5052093d0262d..0370f23c3b01f 100644
+--- a/mm/migrate_device.c
++++ b/mm/migrate_device.c
+@@ -7,6 +7,7 @@
+ #include <linux/export.h>
+ #include <linux/memremap.h>
+ #include <linux/migrate.h>
++#include <linux/mm.h>
+ #include <linux/mm_inline.h>
+ #include <linux/mmu_notifier.h>
+ #include <linux/oom.h>
+@@ -187,10 +188,10 @@ again:
+ bool anon_exclusive;
+ pte_t swp_pte;
+
++ flush_cache_page(vma, addr, pte_pfn(*ptep));
+ anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+ if (anon_exclusive) {
+- flush_cache_page(vma, addr, pte_pfn(*ptep));
+- ptep_clear_flush(vma, addr, ptep);
++ pte = ptep_clear_flush(vma, addr, ptep);
+
+ if (page_try_share_anon_rmap(page)) {
+ set_pte_at(mm, addr, ptep, pte);
+@@ -200,11 +201,15 @@ again:
+ goto next;
+ }
+ } else {
+- ptep_get_and_clear(mm, addr, ptep);
++ pte = ptep_get_and_clear(mm, addr, ptep);
+ }
+
+ migrate->cpages++;
+
++ /* Set the dirty flag on the folio now the pte is gone. */
++ if (pte_dirty(pte))
++ folio_mark_dirty(page_folio(page));
++
+ /* Setup special migration page table entry */
+ if (mpfn & MIGRATE_PFN_WRITE)
+ entry = make_writable_migration_entry(
+@@ -248,13 +253,14 @@ next:
+ migrate->dst[migrate->npages] = 0;
+ migrate->src[migrate->npages++] = mpfn;
+ }
+- arch_leave_lazy_mmu_mode();
+- pte_unmap_unlock(ptep - 1, ptl);
+
+ /* Only flush the TLB if we actually modified any entries */
+ if (unmapped)
+ flush_tlb_range(walk->vma, start, end);
+
++ arch_leave_lazy_mmu_mode();
++ pte_unmap_unlock(ptep - 1, ptl);
++
+ return 0;
+ }
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index cdf0e7d707c37..a88d06dac743e 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4623,6 +4623,30 @@ void fs_reclaim_release(gfp_t gfp_mask)
+ EXPORT_SYMBOL_GPL(fs_reclaim_release);
+ #endif
+
++/*
++ * Zonelists may change due to hotplug during allocation. Detect when
++ * zonelists have been rebuilt so that the allocation can be retried. The
++ * reader side does not lock and retries the allocation if the zonelist
++ * changes. The writer side is protected by the embedded spin_lock.
++ */
++static DEFINE_SEQLOCK(zonelist_update_seq);
++
++static unsigned int zonelist_iter_begin(void)
++{
++ if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE))
++ return read_seqbegin(&zonelist_update_seq);
++
++ return 0;
++}
++
++static unsigned int check_retry_zonelist(unsigned int seq)
++{
++ if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE))
++ return read_seqretry(&zonelist_update_seq, seq);
++
++ return seq;
++}
++
+ /* Perform direct synchronous page reclaim */
+ static unsigned long
+ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
+@@ -4916,6 +4940,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ int compaction_retries;
+ int no_progress_loops;
+ unsigned int cpuset_mems_cookie;
++ unsigned int zonelist_iter_cookie;
+ int reserve_flags;
+
+ /*
+@@ -4926,11 +4951,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
+ gfp_mask &= ~__GFP_ATOMIC;
+
+-retry_cpuset:
++restart:
+ compaction_retries = 0;
+ no_progress_loops = 0;
+ compact_priority = DEF_COMPACT_PRIORITY;
+ cpuset_mems_cookie = read_mems_allowed_begin();
++ zonelist_iter_cookie = zonelist_iter_begin();
+
+ /*
+ * The fast path uses conservative alloc_flags to succeed only until
+@@ -5102,9 +5128,13 @@ retry:
+ goto retry;
+
+
+- /* Deal with possible cpuset update races before we start OOM killing */
+- if (check_retry_cpuset(cpuset_mems_cookie, ac))
+- goto retry_cpuset;
++ /*
++ * Deal with possible cpuset update races or zonelist updates to avoid
++	 * an unnecessary OOM kill.
++ */
++ if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
++ check_retry_zonelist(zonelist_iter_cookie))
++ goto restart;
+
+ /* Reclaim has failed us, start killing things */
+ page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
+@@ -5124,9 +5154,13 @@ retry:
+ }
+
+ nopage:
+- /* Deal with possible cpuset update races before we fail */
+- if (check_retry_cpuset(cpuset_mems_cookie, ac))
+- goto retry_cpuset;
++ /*
++ * Deal with possible cpuset update races or zonelist updates to avoid
++	 * an unnecessary OOM kill.
++ */
++ if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
++ check_retry_zonelist(zonelist_iter_cookie))
++ goto restart;
+
+ /*
+ * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
+@@ -5617,6 +5651,18 @@ refill:
+ /* reset page count bias and offset to start of new frag */
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ offset = size - fragsz;
++ if (unlikely(offset < 0)) {
++ /*
++ * The caller is trying to allocate a fragment
++ * with fragsz > PAGE_SIZE but the cache isn't big
++			 * enough to satisfy the request; this may
++ * happen in low memory conditions.
++ * We don't release the cache page because
++			 * it could make memory pressure worse,
++ * so we simply return NULL here.
++ */
++ return NULL;
++ }
+ }
+
+ nc->pagecnt_bias--;
+@@ -6421,9 +6467,8 @@ static void __build_all_zonelists(void *data)
+ int nid;
+ int __maybe_unused cpu;
+ pg_data_t *self = data;
+- static DEFINE_SPINLOCK(lock);
+
+- spin_lock(&lock);
++ write_seqlock(&zonelist_update_seq);
+
+ #ifdef CONFIG_NUMA
+ memset(node_load, 0, sizeof(node_load));
+@@ -6460,7 +6505,7 @@ static void __build_all_zonelists(void *data)
+ #endif
+ }
+
+- spin_unlock(&lock);
++ write_sequnlock(&zonelist_update_seq);
+ }
+
+ static noinline void __init
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index 9d73dc38e3d75..eb3a68ca92ad9 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -288,6 +288,7 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ * @isolate_before: isolate the pageblock before the boundary_pfn
+ * @skip_isolation: the flag to skip the pageblock isolation in second
+ * isolate_single_pageblock()
++ * @migratetype: migrate type to set in error recovery.
+ *
+ * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one
+ * pageblock. When not all pageblocks within a page are isolated at the same
+@@ -302,9 +303,9 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ * the in-use page then splitting the free page.
+ */
+ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
+- gfp_t gfp_flags, bool isolate_before, bool skip_isolation)
++ gfp_t gfp_flags, bool isolate_before, bool skip_isolation,
++ int migratetype)
+ {
+- unsigned char saved_mt;
+ unsigned long start_pfn;
+ unsigned long isolate_pageblock;
+ unsigned long pfn;
+@@ -328,13 +329,13 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
+ start_pfn = max(ALIGN_DOWN(isolate_pageblock, MAX_ORDER_NR_PAGES),
+ zone->zone_start_pfn);
+
+- saved_mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
++ if (skip_isolation) {
++ int mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
+
+- if (skip_isolation)
+- VM_BUG_ON(!is_migrate_isolate(saved_mt));
+- else {
+- ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt, flags,
+- isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
++ VM_BUG_ON(!is_migrate_isolate(mt));
++ } else {
++ ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype,
++ flags, isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
+
+ if (ret)
+ return ret;
+@@ -475,7 +476,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
+ failed:
+ /* restore the original migratetype */
+ if (!skip_isolation)
+- unset_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt);
++ unset_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype);
+ return -EBUSY;
+ }
+
+@@ -537,7 +538,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ bool skip_isolation = false;
+
+ /* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
+- ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false, skip_isolation);
++ ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false,
++ skip_isolation, migratetype);
+ if (ret)
+ return ret;
+
+@@ -545,7 +547,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ skip_isolation = true;
+
+ /* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
+- ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true, skip_isolation);
++ ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true,
++ skip_isolation, migratetype);
+ if (ret) {
+ unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
+ return ret;
+diff --git a/mm/secretmem.c b/mm/secretmem.c
+index f06279d6190a5..53f3badce7e4a 100644
+--- a/mm/secretmem.c
++++ b/mm/secretmem.c
+@@ -283,7 +283,7 @@ static int secretmem_init(void)
+
+ secretmem_mnt = kern_mount(&secretmem_fs);
+ if (IS_ERR(secretmem_mnt))
+- ret = PTR_ERR(secretmem_mnt);
++ return PTR_ERR(secretmem_mnt);
+
+ /* prevent secretmem mappings from ever getting PROT_EXEC */
+ secretmem_mnt->mnt_flags |= MNT_NOEXEC;
+diff --git a/mm/util.c b/mm/util.c
+index 0837570c92251..95d8472747f99 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -619,6 +619,10 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
+ if (ret || size <= PAGE_SIZE)
+ return ret;
+
++ /* non-sleeping allocations are not supported by vmalloc */
++ if (!gfpflags_allow_blocking(flags))
++ return NULL;
++
+ /* Don't even allow crazy sizes */
+ if (unlikely(size > INT_MAX)) {
+ WARN_ON_ONCE(!(flags & __GFP_NOWARN));
+diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
+index 5f27e6746762a..788a82f9c74d5 100644
+--- a/net/mac80211/rc80211_minstrel_ht.c
++++ b/net/mac80211/rc80211_minstrel_ht.c
+@@ -10,6 +10,7 @@
+ #include <linux/random.h>
+ #include <linux/moduleparam.h>
+ #include <linux/ieee80211.h>
++#include <linux/minmax.h>
+ #include <net/mac80211.h>
+ #include "rate.h"
+ #include "sta_info.h"
+@@ -1550,6 +1551,7 @@ minstrel_ht_update_rates(struct minstrel_priv *mp, struct minstrel_ht_sta *mi)
+ {
+ struct ieee80211_sta_rates *rates;
+ int i = 0;
++ int max_rates = min_t(int, mp->hw->max_rates, IEEE80211_TX_RATE_TABLE_SIZE);
+
+ rates = kzalloc(sizeof(*rates), GFP_ATOMIC);
+ if (!rates)
+@@ -1559,10 +1561,10 @@ minstrel_ht_update_rates(struct minstrel_priv *mp, struct minstrel_ht_sta *mi)
+ minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate[0]);
+
+ /* Fill up remaining, keep one entry for max_probe_rate */
+- for (; i < (mp->hw->max_rates - 1); i++)
++ for (; i < (max_rates - 1); i++)
+ minstrel_ht_set_rate(mp, mi, rates, i, mi->max_tp_rate[i]);
+
+- if (i < mp->hw->max_rates)
++ if (i < max_rates)
+ minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_prob_rate);
+
+ if (i < IEEE80211_TX_RATE_TABLE_SIZE)
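
The minstrel_ht change clamps the driver-advertised mp->hw->max_rates to the fixed size of the rate table before the fill loops run, so a driver reporting a large value can no longer index past the array. The clamp in isolation:

    #include <stdio.h>

    #define TABLE_SIZE 4 /* plays the role of IEEE80211_TX_RATE_TABLE_SIZE */

    static void fill(int table[TABLE_SIZE], int hw_max)
    {
        int max = hw_max < TABLE_SIZE ? hw_max : TABLE_SIZE; /* min_t() */

        for (int i = 0; i < max; i++)
            table[i] = i; /* never writes past table[TABLE_SIZE - 1] */
    }

    int main(void)
    {
        int t[TABLE_SIZE] = { 0 };

        fill(t, 100); /* an absurd driver value is now harmless */
        printf("%d\n", t[TABLE_SIZE - 1]);
        return 0;
    }
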
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 3cd24d8170d32..f6f09a3506aae 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -5761,6 +5761,9 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
+ skb_reset_network_header(skb);
+ skb_reset_mac_header(skb);
+
++ if (local->hw.queues < IEEE80211_NUM_ACS)
++ goto start_xmit;
++
+ /* update QoS header to prioritize control port frames if possible,
+ * priorization also happens for control port frames send over
+ * AF_PACKET
+@@ -5776,6 +5779,7 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
+
+ rcu_read_unlock();
+
++start_xmit:
+ /* mutex lock is only needed for incrementing the cookie counter */
+ mutex_lock(&local->mtx);
+
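
The control-port change short-circuits the QoS block when the hardware exposes fewer queues than the four 802.11 access categories, since per-AC prioritization is meaningless there. The shape of the idiom, reduced to a sketch:

    static int tx(int hw_queues)
    {
        if (hw_queues < 4)   /* IEEE80211_NUM_ACS */
            goto start_xmit; /* skip the optional prioritization block */

        /* ... QoS work that assumes one queue per AC ... */

    start_xmit:
        return 0; /* common transmit path */
    }
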
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index b58df3e63a86a..3f698e508dd71 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -301,14 +301,14 @@ static void __ieee80211_wake_txqs(struct ieee80211_sub_if_data *sdata, int ac)
+ local_bh_disable();
+ spin_lock(&fq->lock);
+
++ sdata->vif.txqs_stopped[ac] = false;
++
+ if (!test_bit(SDATA_STATE_RUNNING, &sdata->state))
+ goto out;
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+ ps = &sdata->bss->ps;
+
+- sdata->vif.txqs_stopped[ac] = false;
+-
+ list_for_each_entry_rcu(sta, &local->sta_list, list) {
+ if (sdata != sta->sdata)
+ continue;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 513f571a082ba..e44b5ea1a448b 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2692,7 +2692,7 @@ static void __mptcp_clear_xmit(struct sock *sk)
+ dfrag_clear(sk, dfrag);
+ }
+
+-static void mptcp_cancel_work(struct sock *sk)
++void mptcp_cancel_work(struct sock *sk)
+ {
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+@@ -2832,13 +2832,12 @@ static void __mptcp_destroy_sock(struct sock *sk)
+ sock_put(sk);
+ }
+
+-static void mptcp_close(struct sock *sk, long timeout)
++bool __mptcp_close(struct sock *sk, long timeout)
+ {
+ struct mptcp_subflow_context *subflow;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ bool do_cancel_work = false;
+
+- lock_sock(sk);
+ sk->sk_shutdown = SHUTDOWN_MASK;
+
+ if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {
+@@ -2880,6 +2879,17 @@ cleanup:
+ } else {
+ mptcp_reset_timeout(msk, 0);
+ }
++
++ return do_cancel_work;
++}
++
++static void mptcp_close(struct sock *sk, long timeout)
++{
++ bool do_cancel_work;
++
++ lock_sock(sk);
++
++ do_cancel_work = __mptcp_close(sk, timeout);
+ release_sock(sk);
+ if (do_cancel_work)
+ mptcp_cancel_work(sk);
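
The mptcp_close() refactor follows the common __foo()/foo() split: __mptcp_close() does the teardown and assumes the socket lock is already held, while mptcp_close() stays a thin wrapper that takes the lock, so other callers already holding it (see the subflow.c hunk below) can reuse the core. The same shape with pthreads, as a sketch:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* core teardown; caller must already hold `lock` */
    static bool __do_close(void)
    {
        /* ... tear down state ... */
        return true; /* ask the caller to cancel deferred work */
    }

    static void do_close(void)
    {
        bool cancel_work;

        pthread_mutex_lock(&lock);
        cancel_work = __do_close();
        pthread_mutex_unlock(&lock);

        if (cancel_work) {
            /* cancel/flush work only after dropping the lock: the
             * worker may itself take `lock`, and waiting for it while
             * holding the lock would deadlock */
        }
    }
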
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 092154d5bc752..d6bbc484420dc 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -613,6 +613,8 @@ void mptcp_subflow_reset(struct sock *ssk);
+ void mptcp_subflow_queue_clean(struct sock *ssk);
+ void mptcp_sock_graft(struct sock *sk, struct socket *parent);
+ struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);
++bool __mptcp_close(struct sock *sk, long timeout);
++void mptcp_cancel_work(struct sock *sk);
+
+ bool mptcp_addresses_equal(const struct mptcp_addr_info *a,
+ const struct mptcp_addr_info *b, bool use_port);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index ac41b55b0a81a..6f603dbcf75c8 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -602,30 +602,6 @@ static bool subflow_hmac_valid(const struct request_sock *req,
+ return !crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN);
+ }
+
+-static void mptcp_sock_destruct(struct sock *sk)
+-{
+- /* if new mptcp socket isn't accepted, it is free'd
+- * from the tcp listener sockets request queue, linked
+- * from req->sk. The tcp socket is released.
+- * This calls the ULP release function which will
+- * also remove the mptcp socket, via
+- * sock_put(ctx->conn).
+- *
+- * Problem is that the mptcp socket will be in
+- * ESTABLISHED state and will not have the SOCK_DEAD flag.
+- * Both result in warnings from inet_sock_destruct.
+- */
+- if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
+- sk->sk_state = TCP_CLOSE;
+- WARN_ON_ONCE(sk->sk_socket);
+- sock_orphan(sk);
+- }
+-
+- /* We don't need to clear msk->subflow, as it's still NULL at this point */
+- mptcp_destroy_common(mptcp_sk(sk), 0);
+- inet_sock_destruct(sk);
+-}
+-
+ static void mptcp_force_close(struct sock *sk)
+ {
+ /* the msk is not yet exposed to user-space */
+@@ -768,7 +744,6 @@ create_child:
+ /* new mpc subflow takes ownership of the newly
+ * created mptcp socket
+ */
+- new_msk->sk_destruct = mptcp_sock_destruct;
+ mptcp_sk(new_msk)->setsockopt_seq = ctx->setsockopt_seq;
+ mptcp_pm_new_connection(mptcp_sk(new_msk), child, 1);
+ mptcp_token_accept(subflow_req, mptcp_sk(new_msk));
+@@ -1763,13 +1738,19 @@ void mptcp_subflow_queue_clean(struct sock *listener_ssk)
+
+ for (msk = head; msk; msk = next) {
+ struct sock *sk = (struct sock *)msk;
+- bool slow;
++ bool slow, do_cancel_work;
+
++ sock_hold(sk);
+ slow = lock_sock_fast_nested(sk);
+ next = msk->dl_next;
+ msk->first = NULL;
+ msk->dl_next = NULL;
++
++ do_cancel_work = __mptcp_close(sk, 0);
+ unlock_sock_fast(sk, slow);
++ if (do_cancel_work)
++ mptcp_cancel_work(sk);
++ sock_put(sk);
+ }
+
+ /* we are still under the listener msk socket lock */
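
In mptcp_subflow_queue_clean() the new sock_hold()/sock_put() pair pins each msk for the duration of the close, because __mptcp_close() can drop what would otherwise be the last reference while the loop still needs the socket. The general refcount discipline, sketched with C11 atomics:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct obj { atomic_int ref; };

    static void obj_hold(struct obj *o)
    {
        atomic_fetch_add(&o->ref, 1);
    }

    static void obj_put(struct obj *o)
    {
        if (atomic_fetch_sub(&o->ref, 1) == 1)
            free(o); /* that was the last reference */
    }

    /* pattern: obj_hold() before an operation that may release a
     * reference, obj_put() only after the last local access */
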
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index e013253b10d18..4d44a1bf4a042 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -1393,7 +1393,7 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+
+ err = tcf_ct_flow_table_get(params);
+ if (err)
+- goto cleanup;
++ goto cleanup_params;
+
+ spin_lock_bh(&c->tcf_lock);
+ goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
+@@ -1408,6 +1408,9 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+
+ return res;
+
++cleanup_params:
++ if (params->tmpl)
++ nf_ct_put(params->tmpl);
+ cleanup:
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
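
The act_ct fix adds a cleanup_params label so that a failure in tcf_ct_flow_table_get() also releases the conntrack template acquired earlier; each label in a goto ladder undoes exactly what was set up since the previous label. The ladder in miniature:

    #include <stdlib.h>

    static int setup(void)
    {
        void *a, *b;

        a = malloc(16);
        if (!a)
            goto err;
        b = malloc(16);
        if (!b)
            goto err_free_a; /* release only what exists so far */

        /* a failure past this point would goto a label freeing b too */
        free(b);
        free(a);
        return 0;

    err_free_a:
        free(a);
    err:
        return -1;
    }
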
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index b7257862e0fe6..28b7f120501ae 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1361,7 +1361,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 25599, /* 4.166666... */
+ 17067, /* 2.777777... */
+ 12801, /* 2.083333... */
+- 11769, /* 1.851851... */
++ 11377, /* 1.851725... */
+ 10239, /* 1.666666... */
+ 8532, /* 1.388888... */
+ 7680, /* 1.250000... */
+@@ -1444,7 +1444,7 @@ static u32 cfg80211_calculate_bitrate_eht(struct rate_info *rate)
+ 25599, /* 4.166666... */
+ 17067, /* 2.777777... */
+ 12801, /* 2.083333... */
+- 11769, /* 1.851851... */
++ 11377, /* 1.851725... */
+ 10239, /* 1.666666... */
+ 8532, /* 1.388888... */
+ 7680, /* 1.250000... */
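
A quick consistency check on the corrected entry, assuming the table's scale factor is 6144 as the neighboring values suggest (7680/6144 = 1.25 matches its "1.250000..." comment, and 25599/6144 ≈ 4.1665 matches "4.166666..."): the old 11769/6144 ≈ 1.9155 never matched the documented ratio 1.851851..., while the new 11377/6144 ≈ 1.851725 does, which is why the comment changes along with the value.
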
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 9ea2aca65e899..e02ad765351b2 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -495,6 +495,8 @@ static struct snd_soc_dai_driver tas2770_dai_driver[] = {
+ },
+ };
+
++static const struct regmap_config tas2770_i2c_regmap;
++
+ static int tas2770_codec_probe(struct snd_soc_component *component)
+ {
+ struct tas2770_priv *tas2770 =
+@@ -508,6 +510,7 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ }
+
+ tas2770_reset(tas2770);
++ regmap_reinit_cache(tas2770->regmap, &tas2770_i2c_regmap);
+
+ return 0;
+ }
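
The tas2770 fix addresses a cache/hardware mismatch: tas2770_reset() returns the chip's registers to their defaults, but the regmap cache still holds the pre-reset values, so regmap_reinit_cache() is called to throw the stale cache away. The underlying idea, reduced to a toy model rather than driver code:

    struct cached_dev {
        int hw_regs[8]; /* what the chip actually holds */
        int cache[8];   /* the driver's shadow copy */
        int cache_valid;
    };

    static void dev_reset(struct cached_dev *d)
    {
        for (int i = 0; i < 8; i++)
            d->hw_regs[i] = 0; /* chip reverts to defaults */
        d->cache_valid = 0;    /* regmap_reinit_cache() plays this role:
                                  drop the shadow copy that no longer
                                  matches the hardware */
    }
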
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 4a8609b0d700d..5153af3281d23 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -698,6 +698,10 @@ static int imx_card_parse_of(struct imx_card_data *data)
+ of_node_put(cpu);
+ of_node_put(codec);
+ of_node_put(platform);
++
++ cpu = NULL;
++ codec = NULL;
++ platform = NULL;
+ }
+
+ return 0;
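
In imx_card_parse_of() the loop already puts its OF node references at the end of each iteration; setting the pointers to NULL afterwards keeps the shared error path, which also calls of_node_put(), from dropping the same references twice. of_node_put(NULL) is a no-op, much like free(NULL):

    #include <stdlib.h>

    int main(void)
    {
        void *cpu = malloc(8);

        free(cpu);
        cpu = NULL; /* the fix: poison the pointer after releasing it */
        free(cpu);  /* now a harmless no-op instead of a double free */
        return 0;
    }
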
+diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
+index 468958154ed90..744dd35205847 100644
+--- a/tools/perf/builtin-list.c
++++ b/tools/perf/builtin-list.c
+@@ -10,7 +10,7 @@
+ */
+ #include "builtin.h"
+
+-#include "util/parse-events.h"
++#include "util/print-events.h"
+ #include "util/pmu.h"
+ #include "util/pmu-hybrid.h"
+ #include "util/debug.h"
+diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
+index 23a33ac15e685..dcc079a805851 100644
+--- a/tools/perf/builtin-lock.c
++++ b/tools/perf/builtin-lock.c
+@@ -13,6 +13,7 @@
+ #include <subcmd/pager.h>
+ #include <subcmd/parse-options.h>
+ #include "util/trace-event.h"
++#include "util/tracepoint.h"
+
+ #include "util/debug.h"
+ #include "util/session.h"
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 68c878b4e5e4c..7fbc85c1da81e 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -3335,16 +3335,24 @@ static struct option __record_options[] = {
+
+ struct option *record_options = __record_options;
+
+-static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
++static int record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
+ {
+ struct perf_cpu cpu;
+ int idx;
+
+ if (cpu_map__is_dummy(cpus))
+- return;
++ return 0;
+
+- perf_cpu_map__for_each_cpu(cpu, idx, cpus)
++ perf_cpu_map__for_each_cpu(cpu, idx, cpus) {
++ if (cpu.cpu == -1)
++ continue;
++		/* Return -ENODEV if input cpu is greater than max cpu */
++ if ((unsigned long)cpu.cpu > mask->nbits)
++ return -ENODEV;
+ set_bit(cpu.cpu, mask->bits);
++ }
++
++ return 0;
+ }
+
+ static int record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, const char *mask_spec)
+@@ -3356,7 +3364,9 @@ static int record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, const cha
+ return -ENOMEM;
+
+ bitmap_zero(mask->bits, mask->nbits);
+- record__mmap_cpu_mask_init(mask, cpus);
++ if (record__mmap_cpu_mask_init(mask, cpus))
++ return -ENODEV;
++
+ perf_cpu_map__put(cpus);
+
+ return 0;
+@@ -3438,7 +3448,12 @@ static int record__init_thread_masks_spec(struct record *rec, struct perf_cpu_ma
+ pr_err("Failed to allocate CPUs mask\n");
+ return ret;
+ }
+- record__mmap_cpu_mask_init(&cpus_mask, cpus);
++
++ ret = record__mmap_cpu_mask_init(&cpus_mask, cpus);
++ if (ret) {
++ pr_err("Failed to init cpu mask\n");
++ goto out_free_cpu_mask;
++ }
+
+ ret = record__thread_mask_alloc(&full_mask, cpu__max_cpu().cpu);
+ if (ret) {
+@@ -3679,7 +3694,8 @@ static int record__init_thread_default_masks(struct record *rec, struct perf_cpu
+ if (ret)
+ return ret;
+
+- record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus);
++ if (record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus))
++ return -ENODEV;
+
+ rec->nr_threads = 1;
+
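
record__mmap_cpu_mask_init() now returns an error instead of letting set_bit() scribble past the bitmap when a CPU index exceeds the mask size, and all three callers propagate -ENODEV. A bounds-checked sketch of the same guard (assumes a 64-bit unsigned long):

    #include <errno.h>

    #define NBITS 64UL

    static int mask_set(unsigned long *bits, long cpu)
    {
        if (cpu < 0)
            return 0; /* "any CPU" placeholder entries are skipped */
        if ((unsigned long)cpu >= NBITS)
            return -ENODEV; /* refuse to set an out-of-range bit */
        bits[cpu / 64] |= 1UL << (cpu % 64);
        return 0;
    }
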
+diff --git a/tools/perf/builtin-timechart.c b/tools/perf/builtin-timechart.c
+index afce731cec16d..e2e9ad929bafa 100644
+--- a/tools/perf/builtin-timechart.c
++++ b/tools/perf/builtin-timechart.c
+@@ -36,6 +36,7 @@
+ #include "util/data.h"
+ #include "util/debug.h"
+ #include "util/string2.h"
++#include "util/tracepoint.h"
+ #include <linux/err.h>
+
+ #ifdef LACKS_OPEN_MEMSTREAM_PROTOTYPE
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index f075cf37a65ef..1e1f10a1971de 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -53,6 +53,7 @@
+ #include "trace-event.h"
+ #include "util/parse-events.h"
+ #include "util/bpf-loader.h"
++#include "util/tracepoint.h"
+ #include "callchain.h"
+ #include "print_binary.h"
+ #include "string2.h"
+diff --git a/tools/perf/tests/perf-record.c b/tools/perf/tests/perf-record.c
+index 6a001fcfed68e..4952abe716f31 100644
+--- a/tools/perf/tests/perf-record.c
++++ b/tools/perf/tests/perf-record.c
+@@ -332,7 +332,7 @@ out_delete_evlist:
+ out:
+ if (err == -EACCES)
+ return TEST_SKIP;
+- if (err < 0)
++ if (err < 0 || errs != 0)
+ return TEST_FAIL;
+ return TEST_OK;
+ }
+diff --git a/tools/perf/tests/shell/record.sh b/tools/perf/tests/shell/record.sh
+index 00c7285ce1ac6..301f95427159d 100755
+--- a/tools/perf/tests/shell/record.sh
++++ b/tools/perf/tests/shell/record.sh
+@@ -61,7 +61,7 @@ test_register_capture() {
+ echo "Register capture test [Skipped missing registers]"
+ return
+ fi
+- if ! perf record -o - --intr-regs=di,r8,dx,cx -e cpu/br_inst_retired.near_call/p \
++ if ! perf record -o - --intr-regs=di,r8,dx,cx -e br_inst_retired.near_call:p \
+ -c 1000 --per-thread true 2> /dev/null \
+ | perf script -F ip,sym,iregs -i - 2> /dev/null \
+ | egrep -q "DI:"
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index a51267d88ca90..038e4cf8f4885 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -26,6 +26,8 @@ perf-y += mmap.o
+ perf-y += memswap.o
+ perf-y += parse-events.o
+ perf-y += parse-events-hybrid.o
++perf-y += print-events.o
++perf-y += tracepoint.o
+ perf-y += perf_regs.o
+ perf-y += path.o
+ perf-y += print_binary.o
+diff --git a/tools/perf/util/parse-events-hybrid.c b/tools/perf/util/parse-events-hybrid.c
+index 284f8eabd3b9a..7c9f9150bad50 100644
+--- a/tools/perf/util/parse-events-hybrid.c
++++ b/tools/perf/util/parse-events-hybrid.c
+@@ -33,7 +33,8 @@ static void config_hybrid_attr(struct perf_event_attr *attr,
+ * If the PMU type ID is 0, the PERF_TYPE_RAW will be applied.
+ */
+ attr->type = type;
+- attr->config = attr->config | ((__u64)pmu_type << PERF_PMU_TYPE_SHIFT);
++ attr->config = (attr->config & PERF_HW_EVENT_MASK) |
++ ((__u64)pmu_type << PERF_PMU_TYPE_SHIFT);
+ }
+
+ static int create_event_hybrid(__u32 config_type, int *idx,
+@@ -48,13 +49,25 @@ static int create_event_hybrid(__u32 config_type, int *idx,
+ __u64 config = attr->config;
+
+ config_hybrid_attr(attr, config_type, pmu->type);
++
++ /*
++ * Some hybrid hardware cache events are only available on one CPU
++ * PMU. For example, the 'L1-dcache-load-misses' is only available
++ * on cpu_core, while the 'L1-icache-loads' is only available on
++ * cpu_atom. We need to remove "not supported" hybrid cache events.
++ */
++ if (attr->type == PERF_TYPE_HW_CACHE
++ && !is_event_supported(attr->type, attr->config))
++ return 0;
++
+ evsel = parse_events__add_event_hybrid(list, idx, attr, name, metric_id,
+ pmu, config_terms);
+- if (evsel)
++ if (evsel) {
+ evsel->pmu_name = strdup(pmu->name);
+- else
++ if (!evsel->pmu_name)
++ return -ENOMEM;
++ } else
+ return -ENOMEM;
+-
+ attr->type = type;
+ attr->config = config;
+ return 0;
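
The parse-events-hybrid fix makes the config packing idempotent: the old code ORed the PMU type into attr->config without clearing the upper bits, so a config that already carried high bits came out corrupted. Standalone illustration (the constants mirror the perf ABI's 32-bit split but should be read as assumptions here):

    #include <stdint.h>
    #include <stdio.h>

    #define HW_EVENT_MASK  0xffffffffULL /* low 32 bits: event id */
    #define PMU_TYPE_SHIFT 32            /* high bits: PMU type */

    static uint64_t pack(uint64_t config, uint32_t pmu_type)
    {
        /* clear any stale high bits, then insert the PMU type */
        return (config & HW_EVENT_MASK) |
               ((uint64_t)pmu_type << PMU_TYPE_SHIFT);
    }

    int main(void)
    {
        uint64_t once = pack(0x1234, 3);

        /* packing twice now yields the same value */
        printf("%d\n", pack(once, 3) == once);
        return 0;
    }
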
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 700c95eafd62a..b51c646c212e5 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -5,18 +5,12 @@
+ #include <dirent.h>
+ #include <errno.h>
+ #include <sys/ioctl.h>
+-#include <sys/types.h>
+-#include <sys/stat.h>
+-#include <fcntl.h>
+ #include <sys/param.h>
+ #include "term.h"
+-#include "build-id.h"
+ #include "evlist.h"
+ #include "evsel.h"
+-#include <subcmd/pager.h>
+ #include <subcmd/parse-options.h>
+ #include "parse-events.h"
+-#include <subcmd/exec-cmd.h>
+ #include "string2.h"
+ #include "strlist.h"
+ #include "bpf-loader.h"
+@@ -27,20 +21,23 @@
+ #define YY_EXTRA_TYPE void*
+ #include "parse-events-flex.h"
+ #include "pmu.h"
+-#include "thread_map.h"
+-#include "probe-file.h"
+ #include "asm/bug.h"
+ #include "util/parse-branch-options.h"
+-#include "metricgroup.h"
+ #include "util/evsel_config.h"
+ #include "util/event.h"
+-#include "util/pfm.h"
++#include "perf.h"
+ #include "util/parse-events-hybrid.h"
+ #include "util/pmu-hybrid.h"
+-#include "perf.h"
++#include "tracepoint.h"
++#include "thread_map.h"
+
+ #define MAX_NAME_LEN 100
+
++struct perf_pmu_event_symbol {
++ char *symbol;
++ enum perf_pmu_event_symbol_type type;
++};
++
+ #ifdef PARSER_DEBUG
+ extern int parse_events_debug;
+ #endif
+@@ -154,21 +151,6 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
+ },
+ };
+
+-struct event_symbol event_symbols_tool[PERF_TOOL_MAX] = {
+- [PERF_TOOL_DURATION_TIME] = {
+- .symbol = "duration_time",
+- .alias = "",
+- },
+- [PERF_TOOL_USER_TIME] = {
+- .symbol = "user_time",
+- .alias = "",
+- },
+- [PERF_TOOL_SYSTEM_TIME] = {
+- .symbol = "system_time",
+- .alias = "",
+- },
+-};
+-
+ #define __PERF_EVENT_FIELD(config, name) \
+ ((config & PERF_EVENT_##name##_MASK) >> PERF_EVENT_##name##_SHIFT)
+
+@@ -177,119 +159,42 @@ struct event_symbol event_symbols_tool[PERF_TOOL_MAX] = {
+ #define PERF_EVENT_TYPE(config) __PERF_EVENT_FIELD(config, TYPE)
+ #define PERF_EVENT_ID(config) __PERF_EVENT_FIELD(config, EVENT)
+
+-#define for_each_subsystem(sys_dir, sys_dirent) \
+- while ((sys_dirent = readdir(sys_dir)) != NULL) \
+- if (sys_dirent->d_type == DT_DIR && \
+- (strcmp(sys_dirent->d_name, ".")) && \
+- (strcmp(sys_dirent->d_name, "..")))
+-
+-static int tp_event_has_id(const char *dir_path, struct dirent *evt_dir)
+-{
+- char evt_path[MAXPATHLEN];
+- int fd;
+-
+- snprintf(evt_path, MAXPATHLEN, "%s/%s/id", dir_path, evt_dir->d_name);
+- fd = open(evt_path, O_RDONLY);
+- if (fd < 0)
+- return -EINVAL;
+- close(fd);
+-
+- return 0;
+-}
+-
+-#define for_each_event(dir_path, evt_dir, evt_dirent) \
+- while ((evt_dirent = readdir(evt_dir)) != NULL) \
+- if (evt_dirent->d_type == DT_DIR && \
+- (strcmp(evt_dirent->d_name, ".")) && \
+- (strcmp(evt_dirent->d_name, "..")) && \
+- (!tp_event_has_id(dir_path, evt_dirent)))
+-
+-#define MAX_EVENT_LENGTH 512
+-
+-struct tracepoint_path *tracepoint_id_to_path(u64 config)
++bool is_event_supported(u8 type, u64 config)
+ {
+- struct tracepoint_path *path = NULL;
+- DIR *sys_dir, *evt_dir;
+- struct dirent *sys_dirent, *evt_dirent;
+- char id_buf[24];
+- int fd;
+- u64 id;
+- char evt_path[MAXPATHLEN];
+- char *dir_path;
+-
+- sys_dir = tracing_events__opendir();
+- if (!sys_dir)
+- return NULL;
++ bool ret = true;
++ int open_return;
++ struct evsel *evsel;
++ struct perf_event_attr attr = {
++ .type = type,
++ .config = config,
++ .disabled = 1,
++ };
++ struct perf_thread_map *tmap = thread_map__new_by_tid(0);
+
+- for_each_subsystem(sys_dir, sys_dirent) {
+- dir_path = get_events_file(sys_dirent->d_name);
+- if (!dir_path)
+- continue;
+- evt_dir = opendir(dir_path);
+- if (!evt_dir)
+- goto next;
++ if (tmap == NULL)
++ return false;
+
+- for_each_event(dir_path, evt_dir, evt_dirent) {
++ evsel = evsel__new(&attr);
++ if (evsel) {
++ open_return = evsel__open(evsel, NULL, tmap);
++ ret = open_return >= 0;
+
+- scnprintf(evt_path, MAXPATHLEN, "%s/%s/id", dir_path,
+- evt_dirent->d_name);
+- fd = open(evt_path, O_RDONLY);
+- if (fd < 0)
+- continue;
+- if (read(fd, id_buf, sizeof(id_buf)) < 0) {
+- close(fd);
+- continue;
+- }
+- close(fd);
+- id = atoll(id_buf);
+- if (id == config) {
+- put_events_file(dir_path);
+- closedir(evt_dir);
+- closedir(sys_dir);
+- path = zalloc(sizeof(*path));
+- if (!path)
+- return NULL;
+- if (asprintf(&path->system, "%.*s", MAX_EVENT_LENGTH, sys_dirent->d_name) < 0) {
+- free(path);
+- return NULL;
+- }
+- if (asprintf(&path->name, "%.*s", MAX_EVENT_LENGTH, evt_dirent->d_name) < 0) {
+- zfree(&path->system);
+- free(path);
+- return NULL;
+- }
+- return path;
+- }
++ if (open_return == -EACCES) {
++ /*
++ * This happens if the paranoid value
++ * /proc/sys/kernel/perf_event_paranoid is set to 2
++ * Re-run with exclude_kernel set; we don't do that
++ * by default as some ARM machines do not support it.
++ *
++ */
++ evsel->core.attr.exclude_kernel = 1;
++ ret = evsel__open(evsel, NULL, tmap) >= 0;
+ }
+- closedir(evt_dir);
+-next:
+- put_events_file(dir_path);
+- }
+-
+- closedir(sys_dir);
+- return NULL;
+-}
+-
+-struct tracepoint_path *tracepoint_name_to_path(const char *name)
+-{
+- struct tracepoint_path *path = zalloc(sizeof(*path));
+- char *str = strchr(name, ':');
+-
+- if (path == NULL || str == NULL) {
+- free(path);
+- return NULL;
+- }
+-
+- path->system = strndup(name, str - name);
+- path->name = strdup(str+1);
+-
+- if (path->system == NULL || path->name == NULL) {
+- zfree(&path->system);
+- zfree(&path->name);
+- zfree(&path);
++ evsel__delete(evsel);
+ }
+
+- return path;
++ perf_thread_map__put(tmap);
++ return ret;
+ }
+
+ const char *event_type(int type)
+@@ -2674,571 +2579,6 @@ int exclude_perf(const struct option *opt,
+ NULL);
+ }
+
+-static const char * const event_type_descriptors[] = {
+- "Hardware event",
+- "Software event",
+- "Tracepoint event",
+- "Hardware cache event",
+- "Raw hardware event descriptor",
+- "Hardware breakpoint",
+-};
+-
+-static int cmp_string(const void *a, const void *b)
+-{
+- const char * const *as = a;
+- const char * const *bs = b;
+-
+- return strcmp(*as, *bs);
+-}
+-
+-/*
+- * Print the events from <debugfs_mount_point>/tracing/events
+- */
+-
+-void print_tracepoint_events(const char *subsys_glob, const char *event_glob,
+- bool name_only)
+-{
+- DIR *sys_dir, *evt_dir;
+- struct dirent *sys_dirent, *evt_dirent;
+- char evt_path[MAXPATHLEN];
+- char *dir_path;
+- char **evt_list = NULL;
+- unsigned int evt_i = 0, evt_num = 0;
+- bool evt_num_known = false;
+-
+-restart:
+- sys_dir = tracing_events__opendir();
+- if (!sys_dir)
+- return;
+-
+- if (evt_num_known) {
+- evt_list = zalloc(sizeof(char *) * evt_num);
+- if (!evt_list)
+- goto out_close_sys_dir;
+- }
+-
+- for_each_subsystem(sys_dir, sys_dirent) {
+- if (subsys_glob != NULL &&
+- !strglobmatch(sys_dirent->d_name, subsys_glob))
+- continue;
+-
+- dir_path = get_events_file(sys_dirent->d_name);
+- if (!dir_path)
+- continue;
+- evt_dir = opendir(dir_path);
+- if (!evt_dir)
+- goto next;
+-
+- for_each_event(dir_path, evt_dir, evt_dirent) {
+- if (event_glob != NULL &&
+- !strglobmatch(evt_dirent->d_name, event_glob))
+- continue;
+-
+- if (!evt_num_known) {
+- evt_num++;
+- continue;
+- }
+-
+- snprintf(evt_path, MAXPATHLEN, "%s:%s",
+- sys_dirent->d_name, evt_dirent->d_name);
+-
+- evt_list[evt_i] = strdup(evt_path);
+- if (evt_list[evt_i] == NULL) {
+- put_events_file(dir_path);
+- goto out_close_evt_dir;
+- }
+- evt_i++;
+- }
+- closedir(evt_dir);
+-next:
+- put_events_file(dir_path);
+- }
+- closedir(sys_dir);
+-
+- if (!evt_num_known) {
+- evt_num_known = true;
+- goto restart;
+- }
+- qsort(evt_list, evt_num, sizeof(char *), cmp_string);
+- evt_i = 0;
+- while (evt_i < evt_num) {
+- if (name_only) {
+- printf("%s ", evt_list[evt_i++]);
+- continue;
+- }
+- printf(" %-50s [%s]\n", evt_list[evt_i++],
+- event_type_descriptors[PERF_TYPE_TRACEPOINT]);
+- }
+- if (evt_num && pager_in_use())
+- printf("\n");
+-
+-out_free:
+- evt_num = evt_i;
+- for (evt_i = 0; evt_i < evt_num; evt_i++)
+- zfree(&evt_list[evt_i]);
+- zfree(&evt_list);
+- return;
+-
+-out_close_evt_dir:
+- closedir(evt_dir);
+-out_close_sys_dir:
+- closedir(sys_dir);
+-
+- printf("FATAL: not enough memory to print %s\n",
+- event_type_descriptors[PERF_TYPE_TRACEPOINT]);
+- if (evt_list)
+- goto out_free;
+-}
+-
+-/*
+- * Check whether event is in <debugfs_mount_point>/tracing/events
+- */
+-
+-int is_valid_tracepoint(const char *event_string)
+-{
+- DIR *sys_dir, *evt_dir;
+- struct dirent *sys_dirent, *evt_dirent;
+- char evt_path[MAXPATHLEN];
+- char *dir_path;
+-
+- sys_dir = tracing_events__opendir();
+- if (!sys_dir)
+- return 0;
+-
+- for_each_subsystem(sys_dir, sys_dirent) {
+- dir_path = get_events_file(sys_dirent->d_name);
+- if (!dir_path)
+- continue;
+- evt_dir = opendir(dir_path);
+- if (!evt_dir)
+- goto next;
+-
+- for_each_event(dir_path, evt_dir, evt_dirent) {
+- snprintf(evt_path, MAXPATHLEN, "%s:%s",
+- sys_dirent->d_name, evt_dirent->d_name);
+- if (!strcmp(evt_path, event_string)) {
+- closedir(evt_dir);
+- closedir(sys_dir);
+- return 1;
+- }
+- }
+- closedir(evt_dir);
+-next:
+- put_events_file(dir_path);
+- }
+- closedir(sys_dir);
+- return 0;
+-}
+-
+-static bool is_event_supported(u8 type, u64 config)
+-{
+- bool ret = true;
+- int open_return;
+- struct evsel *evsel;
+- struct perf_event_attr attr = {
+- .type = type,
+- .config = config,
+- .disabled = 1,
+- };
+- struct perf_thread_map *tmap = thread_map__new_by_tid(0);
+-
+- if (tmap == NULL)
+- return false;
+-
+- evsel = evsel__new(&attr);
+- if (evsel) {
+- open_return = evsel__open(evsel, NULL, tmap);
+- ret = open_return >= 0;
+-
+- if (open_return == -EACCES) {
+- /*
+- * This happens if the paranoid value
+- * /proc/sys/kernel/perf_event_paranoid is set to 2
+- * Re-run with exclude_kernel set; we don't do that
+- * by default as some ARM machines do not support it.
+- *
+- */
+- evsel->core.attr.exclude_kernel = 1;
+- ret = evsel__open(evsel, NULL, tmap) >= 0;
+- }
+- evsel__delete(evsel);
+- }
+-
+- perf_thread_map__put(tmap);
+- return ret;
+-}
+-
+-void print_sdt_events(const char *subsys_glob, const char *event_glob,
+- bool name_only)
+-{
+- struct probe_cache *pcache;
+- struct probe_cache_entry *ent;
+- struct strlist *bidlist, *sdtlist;
+- struct strlist_config cfg = {.dont_dupstr = true};
+- struct str_node *nd, *nd2;
+- char *buf, *path, *ptr = NULL;
+- bool show_detail = false;
+- int ret;
+-
+- sdtlist = strlist__new(NULL, &cfg);
+- if (!sdtlist) {
+- pr_debug("Failed to allocate new strlist for SDT\n");
+- return;
+- }
+- bidlist = build_id_cache__list_all(true);
+- if (!bidlist) {
+- pr_debug("Failed to get buildids: %d\n", errno);
+- return;
+- }
+- strlist__for_each_entry(nd, bidlist) {
+- pcache = probe_cache__new(nd->s, NULL);
+- if (!pcache)
+- continue;
+- list_for_each_entry(ent, &pcache->entries, node) {
+- if (!ent->sdt)
+- continue;
+- if (subsys_glob &&
+- !strglobmatch(ent->pev.group, subsys_glob))
+- continue;
+- if (event_glob &&
+- !strglobmatch(ent->pev.event, event_glob))
+- continue;
+- ret = asprintf(&buf, "%s:%s@%s", ent->pev.group,
+- ent->pev.event, nd->s);
+- if (ret > 0)
+- strlist__add(sdtlist, buf);
+- }
+- probe_cache__delete(pcache);
+- }
+- strlist__delete(bidlist);
+-
+- strlist__for_each_entry(nd, sdtlist) {
+- buf = strchr(nd->s, '@');
+- if (buf)
+- *(buf++) = '\0';
+- if (name_only) {
+- printf("%s ", nd->s);
+- continue;
+- }
+- nd2 = strlist__next(nd);
+- if (nd2) {
+- ptr = strchr(nd2->s, '@');
+- if (ptr)
+- *ptr = '\0';
+- if (strcmp(nd->s, nd2->s) == 0)
+- show_detail = true;
+- }
+- if (show_detail) {
+- path = build_id_cache__origname(buf);
+- ret = asprintf(&buf, "%s@%s(%.12s)", nd->s, path, buf);
+- if (ret > 0) {
+- printf(" %-50s [%s]\n", buf, "SDT event");
+- free(buf);
+- }
+- free(path);
+- } else
+- printf(" %-50s [%s]\n", nd->s, "SDT event");
+- if (nd2) {
+- if (strcmp(nd->s, nd2->s) != 0)
+- show_detail = false;
+- if (ptr)
+- *ptr = '@';
+- }
+- }
+- strlist__delete(sdtlist);
+-}
+-
+-int print_hwcache_events(const char *event_glob, bool name_only)
+-{
+- unsigned int type, op, i, evt_i = 0, evt_num = 0, npmus = 0;
+- char name[64], new_name[128];
+- char **evt_list = NULL, **evt_pmus = NULL;
+- bool evt_num_known = false;
+- struct perf_pmu *pmu = NULL;
+-
+- if (perf_pmu__has_hybrid()) {
+- npmus = perf_pmu__hybrid_pmu_num();
+- evt_pmus = zalloc(sizeof(char *) * npmus);
+- if (!evt_pmus)
+- goto out_enomem;
+- }
+-
+-restart:
+- if (evt_num_known) {
+- evt_list = zalloc(sizeof(char *) * evt_num);
+- if (!evt_list)
+- goto out_enomem;
+- }
+-
+- for (type = 0; type < PERF_COUNT_HW_CACHE_MAX; type++) {
+- for (op = 0; op < PERF_COUNT_HW_CACHE_OP_MAX; op++) {
+- /* skip invalid cache type */
+- if (!evsel__is_cache_op_valid(type, op))
+- continue;
+-
+- for (i = 0; i < PERF_COUNT_HW_CACHE_RESULT_MAX; i++) {
+- unsigned int hybrid_supported = 0, j;
+- bool supported;
+-
+- __evsel__hw_cache_type_op_res_name(type, op, i, name, sizeof(name));
+- if (event_glob != NULL && !strglobmatch(name, event_glob))
+- continue;
+-
+- if (!perf_pmu__has_hybrid()) {
+- if (!is_event_supported(PERF_TYPE_HW_CACHE,
+- type | (op << 8) | (i << 16))) {
+- continue;
+- }
+- } else {
+- perf_pmu__for_each_hybrid_pmu(pmu) {
+- if (!evt_num_known) {
+- evt_num++;
+- continue;
+- }
+-
+- supported = is_event_supported(
+- PERF_TYPE_HW_CACHE,
+- type | (op << 8) | (i << 16) |
+- ((__u64)pmu->type << PERF_PMU_TYPE_SHIFT));
+- if (supported) {
+- snprintf(new_name, sizeof(new_name), "%s/%s/",
+- pmu->name, name);
+- evt_pmus[hybrid_supported] = strdup(new_name);
+- hybrid_supported++;
+- }
+- }
+-
+- if (hybrid_supported == 0)
+- continue;
+- }
+-
+- if (!evt_num_known) {
+- evt_num++;
+- continue;
+- }
+-
+- if ((hybrid_supported == 0) ||
+- (hybrid_supported == npmus)) {
+- evt_list[evt_i] = strdup(name);
+- if (npmus > 0) {
+- for (j = 0; j < npmus; j++)
+- zfree(&evt_pmus[j]);
+- }
+- } else {
+- for (j = 0; j < hybrid_supported; j++) {
+- evt_list[evt_i++] = evt_pmus[j];
+- evt_pmus[j] = NULL;
+- }
+- continue;
+- }
+-
+- if (evt_list[evt_i] == NULL)
+- goto out_enomem;
+- evt_i++;
+- }
+- }
+- }
+-
+- if (!evt_num_known) {
+- evt_num_known = true;
+- goto restart;
+- }
+-
+- for (evt_i = 0; evt_i < evt_num; evt_i++) {
+- if (!evt_list[evt_i])
+- break;
+- }
+-
+- evt_num = evt_i;
+- qsort(evt_list, evt_num, sizeof(char *), cmp_string);
+- evt_i = 0;
+- while (evt_i < evt_num) {
+- if (name_only) {
+- printf("%s ", evt_list[evt_i++]);
+- continue;
+- }
+- printf(" %-50s [%s]\n", evt_list[evt_i++],
+- event_type_descriptors[PERF_TYPE_HW_CACHE]);
+- }
+- if (evt_num && pager_in_use())
+- printf("\n");
+-
+-out_free:
+- evt_num = evt_i;
+- for (evt_i = 0; evt_i < evt_num; evt_i++)
+- zfree(&evt_list[evt_i]);
+- zfree(&evt_list);
+-
+- for (evt_i = 0; evt_i < npmus; evt_i++)
+- zfree(&evt_pmus[evt_i]);
+- zfree(&evt_pmus);
+- return evt_num;
+-
+-out_enomem:
+- printf("FATAL: not enough memory to print %s\n", event_type_descriptors[PERF_TYPE_HW_CACHE]);
+- if (evt_list)
+- goto out_free;
+- return evt_num;
+-}
+-
+-static void print_tool_event(const struct event_symbol *syms, const char *event_glob,
+- bool name_only)
+-{
+- if (syms->symbol == NULL)
+- return;
+-
+- if (event_glob && !(strglobmatch(syms->symbol, event_glob) ||
+- (syms->alias && strglobmatch(syms->alias, event_glob))))
+- return;
+-
+- if (name_only)
+- printf("%s ", syms->symbol);
+- else {
+- char name[MAX_NAME_LEN];
+- if (syms->alias && strlen(syms->alias))
+- snprintf(name, MAX_NAME_LEN, "%s OR %s", syms->symbol, syms->alias);
+- else
+- strlcpy(name, syms->symbol, MAX_NAME_LEN);
+- printf(" %-50s [%s]\n", name, "Tool event");
+- }
+-}
+-
+-void print_tool_events(const char *event_glob, bool name_only)
+-{
+- // Start at 1 because the first enum entry symbols no tool event
+- for (int i = 1; i < PERF_TOOL_MAX; ++i) {
+- print_tool_event(event_symbols_tool + i, event_glob, name_only);
+- }
+- if (pager_in_use())
+- printf("\n");
+-}
+-
+-void print_symbol_events(const char *event_glob, unsigned type,
+- struct event_symbol *syms, unsigned max,
+- bool name_only)
+-{
+- unsigned int i, evt_i = 0, evt_num = 0;
+- char name[MAX_NAME_LEN];
+- char **evt_list = NULL;
+- bool evt_num_known = false;
+-
+-restart:
+- if (evt_num_known) {
+- evt_list = zalloc(sizeof(char *) * evt_num);
+- if (!evt_list)
+- goto out_enomem;
+- syms -= max;
+- }
+-
+- for (i = 0; i < max; i++, syms++) {
+- /*
+- * New attr.config still not supported here, the latest
+- * example was PERF_COUNT_SW_CGROUP_SWITCHES
+- */
+- if (syms->symbol == NULL)
+- continue;
+-
+- if (event_glob != NULL && !(strglobmatch(syms->symbol, event_glob) ||
+- (syms->alias && strglobmatch(syms->alias, event_glob))))
+- continue;
+-
+- if (!is_event_supported(type, i))
+- continue;
+-
+- if (!evt_num_known) {
+- evt_num++;
+- continue;
+- }
+-
+- if (!name_only && strlen(syms->alias))
+- snprintf(name, MAX_NAME_LEN, "%s OR %s", syms->symbol, syms->alias);
+- else
+- strlcpy(name, syms->symbol, MAX_NAME_LEN);
+-
+- evt_list[evt_i] = strdup(name);
+- if (evt_list[evt_i] == NULL)
+- goto out_enomem;
+- evt_i++;
+- }
+-
+- if (!evt_num_known) {
+- evt_num_known = true;
+- goto restart;
+- }
+- qsort(evt_list, evt_num, sizeof(char *), cmp_string);
+- evt_i = 0;
+- while (evt_i < evt_num) {
+- if (name_only) {
+- printf("%s ", evt_list[evt_i++]);
+- continue;
+- }
+- printf(" %-50s [%s]\n", evt_list[evt_i++], event_type_descriptors[type]);
+- }
+- if (evt_num && pager_in_use())
+- printf("\n");
+-
+-out_free:
+- evt_num = evt_i;
+- for (evt_i = 0; evt_i < evt_num; evt_i++)
+- zfree(&evt_list[evt_i]);
+- zfree(&evt_list);
+- return;
+-
+-out_enomem:
+- printf("FATAL: not enough memory to print %s\n", event_type_descriptors[type]);
+- if (evt_list)
+- goto out_free;
+-}
+-
+-/*
+- * Print the help text for the event symbols:
+- */
+-void print_events(const char *event_glob, bool name_only, bool quiet_flag,
+- bool long_desc, bool details_flag, bool deprecated,
+- const char *pmu_name)
+-{
+- print_symbol_events(event_glob, PERF_TYPE_HARDWARE,
+- event_symbols_hw, PERF_COUNT_HW_MAX, name_only);
+-
+- print_symbol_events(event_glob, PERF_TYPE_SOFTWARE,
+- event_symbols_sw, PERF_COUNT_SW_MAX, name_only);
+- print_tool_events(event_glob, name_only);
+-
+- print_hwcache_events(event_glob, name_only);
+-
+- print_pmu_events(event_glob, name_only, quiet_flag, long_desc,
+- details_flag, deprecated, pmu_name);
+-
+- if (event_glob != NULL)
+- return;
+-
+- if (!name_only) {
+- printf(" %-50s [%s]\n",
+- "rNNN",
+- event_type_descriptors[PERF_TYPE_RAW]);
+- printf(" %-50s [%s]\n",
+- "cpu/t1=v1[,t2=v2,t3 ...]/modifier",
+- event_type_descriptors[PERF_TYPE_RAW]);
+- if (pager_in_use())
+- printf(" (see 'man perf-list' on how to encode it)\n\n");
+-
+- printf(" %-50s [%s]\n",
+- "mem:<addr>[/len][:access]",
+- event_type_descriptors[PERF_TYPE_BREAKPOINT]);
+- if (pager_in_use())
+- printf("\n");
+- }
+-
+- print_tracepoint_events(NULL, NULL, name_only);
+-
+- print_sdt_events(NULL, NULL, name_only);
+-
+- metricgroup__print(true, true, NULL, name_only, details_flag,
+- pmu_name);
+-
+- print_libpfm_events(name_only, long_desc);
+-}
+-
+ int parse_events__is_hardcoded_term(struct parse_events_term *term)
+ {
+ return term->type_term != PARSE_EVENTS__TERM_TYPE_USER;
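
The is_event_supported() helper kept out of the deleted block above is now shared (see the parse-events.h hunk below) because both the hybrid parser and the listing code need to ask the kernel whether it will actually accept an event: open it disabled on the current thread and, on EACCES from a perf_event_paranoid >= 2 system, retry with exclude_kernel set. The same probe via the raw syscall, as a sketch:

    #include <errno.h>
    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int event_supported(unsigned int type, unsigned long long config)
    {
        struct perf_event_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.disabled = 1;

        fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0 && errno == EACCES) {
            attr.exclude_kernel = 1; /* retry for unprivileged users */
            fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        }
        if (fd < 0)
            return 0;
        close(fd);
        return 1;
    }
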
+diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
+index a38b8b160e80b..fd97bb74559e6 100644
+--- a/tools/perf/util/parse-events.h
++++ b/tools/perf/util/parse-events.h
+@@ -11,7 +11,6 @@
+ #include <linux/perf_event.h>
+ #include <string.h>
+
+-struct list_head;
+ struct evsel;
+ struct evlist;
+ struct parse_events_error;
+@@ -19,15 +18,8 @@ struct parse_events_error;
+ struct option;
+ struct perf_pmu;
+
+-struct tracepoint_path {
+- char *system;
+- char *name;
+- struct tracepoint_path *next;
+-};
+-
+-struct tracepoint_path *tracepoint_id_to_path(u64 config);
+-struct tracepoint_path *tracepoint_name_to_path(const char *name);
+ bool have_tracepoints(struct list_head *evlist);
++bool is_event_supported(u8 type, u64 config);
+
+ const char *event_type(int type);
+
+@@ -46,8 +38,6 @@ int parse_events_terms(struct list_head *terms, const char *str);
+ int parse_filter(const struct option *opt, const char *str, int unset);
+ int exclude_perf(const struct option *opt, const char *arg, int unset);
+
+-#define EVENTS_HELP_MAX (128*1024)
+-
+ enum perf_pmu_event_symbol_type {
+ PMU_EVENT_SYMBOL_ERR, /* not a PMU EVENT */
+ PMU_EVENT_SYMBOL, /* normal style PMU event */
+@@ -56,11 +46,6 @@ enum perf_pmu_event_symbol_type {
+ PMU_EVENT_SYMBOL_SUFFIX2, /* suffix of pre-suf2 style event */
+ };
+
+-struct perf_pmu_event_symbol {
+- char *symbol;
+- enum perf_pmu_event_symbol_type type;
+-};
+-
+ enum {
+ PARSE_EVENTS__TERM_TYPE_NUM,
+ PARSE_EVENTS__TERM_TYPE_STR,
+@@ -219,28 +204,13 @@ void parse_events_update_lists(struct list_head *list_event,
+ void parse_events_evlist_error(struct parse_events_state *parse_state,
+ int idx, const char *str);
+
+-void print_events(const char *event_glob, bool name_only, bool quiet,
+- bool long_desc, bool details_flag, bool deprecated,
+- const char *pmu_name);
+-
+ struct event_symbol {
+ const char *symbol;
+ const char *alias;
+ };
+ extern struct event_symbol event_symbols_hw[];
+ extern struct event_symbol event_symbols_sw[];
+-void print_symbol_events(const char *event_glob, unsigned type,
+- struct event_symbol *syms, unsigned max,
+- bool name_only);
+-void print_tool_events(const char *event_glob, bool name_only);
+-void print_tracepoint_events(const char *subsys_glob, const char *event_glob,
+- bool name_only);
+-int print_hwcache_events(const char *event_glob, bool name_only);
+-void print_sdt_events(const char *subsys_glob, const char *event_glob,
+- bool name_only);
+-int is_valid_tracepoint(const char *event_string);
+
+-int valid_event_mount(const char *eventfs);
+ char *parse_events_formats_error_string(char *additional_terms);
+
+ void parse_events_error__init(struct parse_events_error *err);
+diff --git a/tools/perf/util/print-events.c b/tools/perf/util/print-events.c
+new file mode 100644
+index 0000000000000..c4d5d87fae2f6
+--- /dev/null
++++ b/tools/perf/util/print-events.c
+@@ -0,0 +1,533 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <dirent.h>
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <sys/param.h>
++
++#include <api/fs/tracing_path.h>
++#include <linux/stddef.h>
++#include <linux/perf_event.h>
++#include <linux/zalloc.h>
++#include <subcmd/pager.h>
++
++#include "build-id.h"
++#include "debug.h"
++#include "evsel.h"
++#include "metricgroup.h"
++#include "parse-events.h"
++#include "pmu.h"
++#include "print-events.h"
++#include "probe-file.h"
++#include "string2.h"
++#include "strlist.h"
++#include "tracepoint.h"
++#include "pfm.h"
++#include "pmu-hybrid.h"
++
++#define MAX_NAME_LEN 100
++
++static const char * const event_type_descriptors[] = {
++ "Hardware event",
++ "Software event",
++ "Tracepoint event",
++ "Hardware cache event",
++ "Raw hardware event descriptor",
++ "Hardware breakpoint",
++};
++
++static const struct event_symbol event_symbols_tool[PERF_TOOL_MAX] = {
++ [PERF_TOOL_DURATION_TIME] = {
++ .symbol = "duration_time",
++ .alias = "",
++ },
++ [PERF_TOOL_USER_TIME] = {
++ .symbol = "user_time",
++ .alias = "",
++ },
++ [PERF_TOOL_SYSTEM_TIME] = {
++ .symbol = "system_time",
++ .alias = "",
++ },
++};
++
++static int cmp_string(const void *a, const void *b)
++{
++ const char * const *as = a;
++ const char * const *bs = b;
++
++ return strcmp(*as, *bs);
++}
++
++/*
++ * Print the events from <debugfs_mount_point>/tracing/events
++ */
++void print_tracepoint_events(const char *subsys_glob,
++ const char *event_glob, bool name_only)
++{
++ DIR *sys_dir, *evt_dir;
++ struct dirent *sys_dirent, *evt_dirent;
++ char evt_path[MAXPATHLEN];
++ char *dir_path;
++ char **evt_list = NULL;
++ unsigned int evt_i = 0, evt_num = 0;
++ bool evt_num_known = false;
++
++restart:
++ sys_dir = tracing_events__opendir();
++ if (!sys_dir)
++ return;
++
++ if (evt_num_known) {
++ evt_list = zalloc(sizeof(char *) * evt_num);
++ if (!evt_list)
++ goto out_close_sys_dir;
++ }
++
++ for_each_subsystem(sys_dir, sys_dirent) {
++ if (subsys_glob != NULL &&
++ !strglobmatch(sys_dirent->d_name, subsys_glob))
++ continue;
++
++ dir_path = get_events_file(sys_dirent->d_name);
++ if (!dir_path)
++ continue;
++ evt_dir = opendir(dir_path);
++ if (!evt_dir)
++ goto next;
++
++ for_each_event(dir_path, evt_dir, evt_dirent) {
++ if (event_glob != NULL &&
++ !strglobmatch(evt_dirent->d_name, event_glob))
++ continue;
++
++ if (!evt_num_known) {
++ evt_num++;
++ continue;
++ }
++
++ snprintf(evt_path, MAXPATHLEN, "%s:%s",
++ sys_dirent->d_name, evt_dirent->d_name);
++
++ evt_list[evt_i] = strdup(evt_path);
++ if (evt_list[evt_i] == NULL) {
++ put_events_file(dir_path);
++ goto out_close_evt_dir;
++ }
++ evt_i++;
++ }
++ closedir(evt_dir);
++next:
++ put_events_file(dir_path);
++ }
++ closedir(sys_dir);
++
++ if (!evt_num_known) {
++ evt_num_known = true;
++ goto restart;
++ }
++ qsort(evt_list, evt_num, sizeof(char *), cmp_string);
++ evt_i = 0;
++ while (evt_i < evt_num) {
++ if (name_only) {
++ printf("%s ", evt_list[evt_i++]);
++ continue;
++ }
++ printf(" %-50s [%s]\n", evt_list[evt_i++],
++ event_type_descriptors[PERF_TYPE_TRACEPOINT]);
++ }
++ if (evt_num && pager_in_use())
++ printf("\n");
++
++out_free:
++ evt_num = evt_i;
++ for (evt_i = 0; evt_i < evt_num; evt_i++)
++ zfree(&evt_list[evt_i]);
++ zfree(&evt_list);
++ return;
++
++out_close_evt_dir:
++ closedir(evt_dir);
++out_close_sys_dir:
++ closedir(sys_dir);
++
++ printf("FATAL: not enough memory to print %s\n",
++ event_type_descriptors[PERF_TYPE_TRACEPOINT]);
++ if (evt_list)
++ goto out_free;
++}
++
++void print_sdt_events(const char *subsys_glob, const char *event_glob,
++ bool name_only)
++{
++ struct probe_cache *pcache;
++ struct probe_cache_entry *ent;
++ struct strlist *bidlist, *sdtlist;
++ struct strlist_config cfg = {.dont_dupstr = true};
++ struct str_node *nd, *nd2;
++ char *buf, *path, *ptr = NULL;
++ bool show_detail = false;
++ int ret;
++
++ sdtlist = strlist__new(NULL, &cfg);
++ if (!sdtlist) {
++ pr_debug("Failed to allocate new strlist for SDT\n");
++ return;
++ }
++ bidlist = build_id_cache__list_all(true);
++ if (!bidlist) {
++ pr_debug("Failed to get buildids: %d\n", errno);
++ return;
++ }
++ strlist__for_each_entry(nd, bidlist) {
++ pcache = probe_cache__new(nd->s, NULL);
++ if (!pcache)
++ continue;
++ list_for_each_entry(ent, &pcache->entries, node) {
++ if (!ent->sdt)
++ continue;
++ if (subsys_glob &&
++ !strglobmatch(ent->pev.group, subsys_glob))
++ continue;
++ if (event_glob &&
++ !strglobmatch(ent->pev.event, event_glob))
++ continue;
++ ret = asprintf(&buf, "%s:%s@%s", ent->pev.group,
++ ent->pev.event, nd->s);
++ if (ret > 0)
++ strlist__add(sdtlist, buf);
++ }
++ probe_cache__delete(pcache);
++ }
++ strlist__delete(bidlist);
++
++ strlist__for_each_entry(nd, sdtlist) {
++ buf = strchr(nd->s, '@');
++ if (buf)
++ *(buf++) = '\0';
++ if (name_only) {
++ printf("%s ", nd->s);
++ continue;
++ }
++ nd2 = strlist__next(nd);
++ if (nd2) {
++ ptr = strchr(nd2->s, '@');
++ if (ptr)
++ *ptr = '\0';
++ if (strcmp(nd->s, nd2->s) == 0)
++ show_detail = true;
++ }
++ if (show_detail) {
++ path = build_id_cache__origname(buf);
++ ret = asprintf(&buf, "%s@%s(%.12s)", nd->s, path, buf);
++ if (ret > 0) {
++ printf(" %-50s [%s]\n", buf, "SDT event");
++ free(buf);
++ }
++ free(path);
++ } else
++ printf(" %-50s [%s]\n", nd->s, "SDT event");
++ if (nd2) {
++ if (strcmp(nd->s, nd2->s) != 0)
++ show_detail = false;
++ if (ptr)
++ *ptr = '@';
++ }
++ }
++ strlist__delete(sdtlist);
++}
++
++int print_hwcache_events(const char *event_glob, bool name_only)
++{
++ unsigned int type, op, i, evt_i = 0, evt_num = 0, npmus = 0;
++ char name[64], new_name[128];
++ char **evt_list = NULL, **evt_pmus = NULL;
++ bool evt_num_known = false;
++ struct perf_pmu *pmu = NULL;
++
++ if (perf_pmu__has_hybrid()) {
++ npmus = perf_pmu__hybrid_pmu_num();
++ evt_pmus = zalloc(sizeof(char *) * npmus);
++ if (!evt_pmus)
++ goto out_enomem;
++ }
++
++restart:
++ if (evt_num_known) {
++ evt_list = zalloc(sizeof(char *) * evt_num);
++ if (!evt_list)
++ goto out_enomem;
++ }
++
++ for (type = 0; type < PERF_COUNT_HW_CACHE_MAX; type++) {
++ for (op = 0; op < PERF_COUNT_HW_CACHE_OP_MAX; op++) {
++ /* skip invalid cache type */
++ if (!evsel__is_cache_op_valid(type, op))
++ continue;
++
++ for (i = 0; i < PERF_COUNT_HW_CACHE_RESULT_MAX; i++) {
++ unsigned int hybrid_supported = 0, j;
++ bool supported;
++
++ __evsel__hw_cache_type_op_res_name(type, op, i, name, sizeof(name));
++ if (event_glob != NULL && !strglobmatch(name, event_glob))
++ continue;
++
++ if (!perf_pmu__has_hybrid()) {
++ if (!is_event_supported(PERF_TYPE_HW_CACHE,
++ type | (op << 8) | (i << 16))) {
++ continue;
++ }
++ } else {
++ perf_pmu__for_each_hybrid_pmu(pmu) {
++ if (!evt_num_known) {
++ evt_num++;
++ continue;
++ }
++
++ supported = is_event_supported(
++ PERF_TYPE_HW_CACHE,
++ type | (op << 8) | (i << 16) |
++ ((__u64)pmu->type << PERF_PMU_TYPE_SHIFT));
++ if (supported) {
++ snprintf(new_name, sizeof(new_name),
++ "%s/%s/", pmu->name, name);
++ evt_pmus[hybrid_supported] =
++ strdup(new_name);
++ hybrid_supported++;
++ }
++ }
++
++ if (hybrid_supported == 0)
++ continue;
++ }
++
++ if (!evt_num_known) {
++ evt_num++;
++ continue;
++ }
++
++ if ((hybrid_supported == 0) ||
++ (hybrid_supported == npmus)) {
++ evt_list[evt_i] = strdup(name);
++ if (npmus > 0) {
++ for (j = 0; j < npmus; j++)
++ zfree(&evt_pmus[j]);
++ }
++ } else {
++ for (j = 0; j < hybrid_supported; j++) {
++ evt_list[evt_i++] = evt_pmus[j];
++ evt_pmus[j] = NULL;
++ }
++ continue;
++ }
++
++ if (evt_list[evt_i] == NULL)
++ goto out_enomem;
++ evt_i++;
++ }
++ }
++ }
++
++ if (!evt_num_known) {
++ evt_num_known = true;
++ goto restart;
++ }
++
++ for (evt_i = 0; evt_i < evt_num; evt_i++) {
++ if (!evt_list[evt_i])
++ break;
++ }
++
++ evt_num = evt_i;
++ qsort(evt_list, evt_num, sizeof(char *), cmp_string);
++ evt_i = 0;
++ while (evt_i < evt_num) {
++ if (name_only) {
++ printf("%s ", evt_list[evt_i++]);
++ continue;
++ }
++ printf(" %-50s [%s]\n", evt_list[evt_i++],
++ event_type_descriptors[PERF_TYPE_HW_CACHE]);
++ }
++ if (evt_num && pager_in_use())
++ printf("\n");
++
++out_free:
++ evt_num = evt_i;
++ for (evt_i = 0; evt_i < evt_num; evt_i++)
++ zfree(&evt_list[evt_i]);
++ zfree(&evt_list);
++
++ for (evt_i = 0; evt_i < npmus; evt_i++)
++ zfree(&evt_pmus[evt_i]);
++ zfree(&evt_pmus);
++ return evt_num;
++
++out_enomem:
++ printf("FATAL: not enough memory to print %s\n",
++ event_type_descriptors[PERF_TYPE_HW_CACHE]);
++ if (evt_list)
++ goto out_free;
++ return evt_num;
++}
++
++static void print_tool_event(const struct event_symbol *syms, const char *event_glob,
++ bool name_only)
++{
++ if (syms->symbol == NULL)
++ return;
++
++ if (event_glob && !(strglobmatch(syms->symbol, event_glob) ||
++ (syms->alias && strglobmatch(syms->alias, event_glob))))
++ return;
++
++ if (name_only)
++ printf("%s ", syms->symbol);
++ else {
++ char name[MAX_NAME_LEN];
++
++ if (syms->alias && strlen(syms->alias))
++ snprintf(name, MAX_NAME_LEN, "%s OR %s", syms->symbol, syms->alias);
++ else
++ strlcpy(name, syms->symbol, MAX_NAME_LEN);
++ printf(" %-50s [%s]\n", name, "Tool event");
++ }
++}
++
++void print_tool_events(const char *event_glob, bool name_only)
++{
++ // Start at 1 because the first enum entry means no tool event.
++ for (int i = 1; i < PERF_TOOL_MAX; ++i)
++ print_tool_event(event_symbols_tool + i, event_glob, name_only);
++
++ if (pager_in_use())
++ printf("\n");
++}
++
++void print_symbol_events(const char *event_glob, unsigned int type,
++ struct event_symbol *syms, unsigned int max,
++ bool name_only)
++{
++ unsigned int i, evt_i = 0, evt_num = 0;
++ char name[MAX_NAME_LEN];
++ char **evt_list = NULL;
++ bool evt_num_known = false;
++
++restart:
++ if (evt_num_known) {
++ evt_list = zalloc(sizeof(char *) * evt_num);
++ if (!evt_list)
++ goto out_enomem;
++ syms -= max;
++ }
++
++ for (i = 0; i < max; i++, syms++) {
++ /*
++ * New attr.config still not supported here, the latest
++ * example was PERF_COUNT_SW_CGROUP_SWITCHES
++ */
++ if (syms->symbol == NULL)
++ continue;
++
++ if (event_glob != NULL && !(strglobmatch(syms->symbol, event_glob) ||
++ (syms->alias && strglobmatch(syms->alias, event_glob))))
++ continue;
++
++ if (!is_event_supported(type, i))
++ continue;
++
++ if (!evt_num_known) {
++ evt_num++;
++ continue;
++ }
++
++ if (!name_only && strlen(syms->alias))
++ snprintf(name, MAX_NAME_LEN, "%s OR %s", syms->symbol, syms->alias);
++ else
++ strlcpy(name, syms->symbol, MAX_NAME_LEN);
++
++ evt_list[evt_i] = strdup(name);
++ if (evt_list[evt_i] == NULL)
++ goto out_enomem;
++ evt_i++;
++ }
++
++ if (!evt_num_known) {
++ evt_num_known = true;
++ goto restart;
++ }
++ qsort(evt_list, evt_num, sizeof(char *), cmp_string);
++ evt_i = 0;
++ while (evt_i < evt_num) {
++ if (name_only) {
++ printf("%s ", evt_list[evt_i++]);
++ continue;
++ }
++ printf(" %-50s [%s]\n", evt_list[evt_i++], event_type_descriptors[type]);
++ }
++ if (evt_num && pager_in_use())
++ printf("\n");
++
++out_free:
++ evt_num = evt_i;
++ for (evt_i = 0; evt_i < evt_num; evt_i++)
++ zfree(&evt_list[evt_i]);
++ zfree(&evt_list);
++ return;
++
++out_enomem:
++ printf("FATAL: not enough memory to print %s\n", event_type_descriptors[type]);
++ if (evt_list)
++ goto out_free;
++}
++
++/*
++ * Print the help text for the event symbols:
++ */
++void print_events(const char *event_glob, bool name_only, bool quiet_flag,
++ bool long_desc, bool details_flag, bool deprecated,
++ const char *pmu_name)
++{
++ print_symbol_events(event_glob, PERF_TYPE_HARDWARE,
++ event_symbols_hw, PERF_COUNT_HW_MAX, name_only);
++
++ print_symbol_events(event_glob, PERF_TYPE_SOFTWARE,
++ event_symbols_sw, PERF_COUNT_SW_MAX, name_only);
++ print_tool_events(event_glob, name_only);
++
++ print_hwcache_events(event_glob, name_only);
++
++ print_pmu_events(event_glob, name_only, quiet_flag, long_desc,
++ details_flag, deprecated, pmu_name);
++
++ if (event_glob != NULL)
++ return;
++
++ if (!name_only) {
++ printf(" %-50s [%s]\n",
++ "rNNN",
++ event_type_descriptors[PERF_TYPE_RAW]);
++ printf(" %-50s [%s]\n",
++ "cpu/t1=v1[,t2=v2,t3 ...]/modifier",
++ event_type_descriptors[PERF_TYPE_RAW]);
++ if (pager_in_use())
++ printf(" (see 'man perf-list' on how to encode it)\n\n");
++
++ printf(" %-50s [%s]\n",
++ "mem:<addr>[/len][:access]",
++ event_type_descriptors[PERF_TYPE_BREAKPOINT]);
++ if (pager_in_use())
++ printf("\n");
++ }
++
++ print_tracepoint_events(NULL, NULL, name_only);
++
++ print_sdt_events(NULL, NULL, name_only);
++
++ metricgroup__print(true, true, NULL, name_only, details_flag,
++ pmu_name);
++
++ print_libpfm_events(name_only, long_desc);
++}
+diff --git a/tools/perf/util/print-events.h b/tools/perf/util/print-events.h
+new file mode 100644
+index 0000000000000..1da9910d83a60
+--- /dev/null
++++ b/tools/perf/util/print-events.h
+@@ -0,0 +1,22 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __PERF_PRINT_EVENTS_H
++#define __PERF_PRINT_EVENTS_H
++
++#include <stdbool.h>
++
++struct event_symbol;
++
++void print_events(const char *event_glob, bool name_only, bool quiet_flag,
++ bool long_desc, bool details_flag, bool deprecated,
++ const char *pmu_name);
++int print_hwcache_events(const char *event_glob, bool name_only);
++void print_sdt_events(const char *subsys_glob, const char *event_glob,
++ bool name_only);
++void print_symbol_events(const char *event_glob, unsigned int type,
++ struct event_symbol *syms, unsigned int max,
++ bool name_only);
++void print_tool_events(const char *event_glob, bool name_only);
++void print_tracepoint_events(const char *subsys_glob, const char *event_glob,
++ bool name_only);
++
++#endif /* __PERF_PRINT_EVENTS_H */
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index a65f65d0857e6..892c323b4ac9f 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -19,16 +19,24 @@
+ #include <linux/kernel.h>
+ #include <linux/zalloc.h>
+ #include <internal/lib.h> // page_size
++#include <sys/param.h>
+
+ #include "trace-event.h"
++#include "tracepoint.h"
+ #include <api/fs/tracing_path.h>
+ #include "evsel.h"
+ #include "debug.h"
+
+ #define VERSION "0.6"
++#define MAX_EVENT_LENGTH 512
+
+ static int output_fd;
+
++struct tracepoint_path {
++ char *system;
++ char *name;
++ struct tracepoint_path *next;
++};
+
+ int bigendian(void)
+ {
+@@ -400,6 +408,94 @@ put_tracepoints_path(struct tracepoint_path *tps)
+ }
+ }
+
++static struct tracepoint_path *tracepoint_id_to_path(u64 config)
++{
++ struct tracepoint_path *path = NULL;
++ DIR *sys_dir, *evt_dir;
++ struct dirent *sys_dirent, *evt_dirent;
++ char id_buf[24];
++ int fd;
++ u64 id;
++ char evt_path[MAXPATHLEN];
++ char *dir_path;
++
++ sys_dir = tracing_events__opendir();
++ if (!sys_dir)
++ return NULL;
++
++ for_each_subsystem(sys_dir, sys_dirent) {
++ dir_path = get_events_file(sys_dirent->d_name);
++ if (!dir_path)
++ continue;
++ evt_dir = opendir(dir_path);
++ if (!evt_dir)
++ goto next;
++
++ for_each_event(dir_path, evt_dir, evt_dirent) {
++
++ scnprintf(evt_path, MAXPATHLEN, "%s/%s/id", dir_path,
++ evt_dirent->d_name);
++ fd = open(evt_path, O_RDONLY);
++ if (fd < 0)
++ continue;
++ if (read(fd, id_buf, sizeof(id_buf)) < 0) {
++ close(fd);
++ continue;
++ }
++ close(fd);
++ id = atoll(id_buf);
++ if (id == config) {
++ put_events_file(dir_path);
++ closedir(evt_dir);
++ closedir(sys_dir);
++ path = zalloc(sizeof(*path));
++ if (!path)
++ return NULL;
++ if (asprintf(&path->system, "%.*s",
++ MAX_EVENT_LENGTH, sys_dirent->d_name) < 0) {
++ free(path);
++ return NULL;
++ }
++ if (asprintf(&path->name, "%.*s",
++ MAX_EVENT_LENGTH, evt_dirent->d_name) < 0) {
++ zfree(&path->system);
++ free(path);
++ return NULL;
++ }
++ return path;
++ }
++ }
++ closedir(evt_dir);
++next:
++ put_events_file(dir_path);
++ }
++
++ closedir(sys_dir);
++ return NULL;
++}
++
++static struct tracepoint_path *tracepoint_name_to_path(const char *name)
++{
++ struct tracepoint_path *path = zalloc(sizeof(*path));
++ char *str = strchr(name, ':');
++
++ if (path == NULL || str == NULL) {
++ free(path);
++ return NULL;
++ }
++
++ path->system = strndup(name, str - name);
++ path->name = strdup(str+1);
++
++ if (path->system == NULL || path->name == NULL) {
++ zfree(&path->system);
++ zfree(&path->name);
++ zfree(&path);
++ }
++
++ return path;
++}
++
+ static struct tracepoint_path *
+ get_tracepoints_path(struct list_head *pattrs)
+ {
+diff --git a/tools/perf/util/tracepoint.c b/tools/perf/util/tracepoint.c
+new file mode 100644
+index 0000000000000..89ef56c433110
+--- /dev/null
++++ b/tools/perf/util/tracepoint.c
+@@ -0,0 +1,63 @@
++// SPDX-License-Identifier: GPL-2.0
++#include "tracepoint.h"
++
++#include <errno.h>
++#include <fcntl.h>
++#include <stdio.h>
++#include <sys/param.h>
++#include <unistd.h>
++
++#include <api/fs/tracing_path.h>
++
++int tp_event_has_id(const char *dir_path, struct dirent *evt_dir)
++{
++ char evt_path[MAXPATHLEN];
++ int fd;
++
++ snprintf(evt_path, MAXPATHLEN, "%s/%s/id", dir_path, evt_dir->d_name);
++ fd = open(evt_path, O_RDONLY);
++ if (fd < 0)
++ return -EINVAL;
++ close(fd);
++
++ return 0;
++}
++
++/*
++ * Check whether event is in <debugfs_mount_point>/tracing/events
++ */
++int is_valid_tracepoint(const char *event_string)
++{
++ DIR *sys_dir, *evt_dir;
++ struct dirent *sys_dirent, *evt_dirent;
++ char evt_path[MAXPATHLEN];
++ char *dir_path;
++
++ sys_dir = tracing_events__opendir();
++ if (!sys_dir)
++ return 0;
++
++ for_each_subsystem(sys_dir, sys_dirent) {
++ dir_path = get_events_file(sys_dirent->d_name);
++ if (!dir_path)
++ continue;
++ evt_dir = opendir(dir_path);
++ if (!evt_dir)
++ goto next;
++
++ for_each_event(dir_path, evt_dir, evt_dirent) {
++ snprintf(evt_path, MAXPATHLEN, "%s:%s",
++ sys_dirent->d_name, evt_dirent->d_name);
++ if (!strcmp(evt_path, event_string)) {
++ closedir(evt_dir);
++ closedir(sys_dir);
++ return 1;
++ }
++ }
++ closedir(evt_dir);
++next:
++ put_events_file(dir_path);
++ }
++ closedir(sys_dir);
++ return 0;
++}
+diff --git a/tools/perf/util/tracepoint.h b/tools/perf/util/tracepoint.h
+new file mode 100644
+index 0000000000000..c4a110fe87d7b
+--- /dev/null
++++ b/tools/perf/util/tracepoint.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __PERF_TRACEPOINT_H
++#define __PERF_TRACEPOINT_H
++
++#include <dirent.h>
++#include <string.h>
++
++int tp_event_has_id(const char *dir_path, struct dirent *evt_dir);
++
++#define for_each_event(dir_path, evt_dir, evt_dirent) \
++ while ((evt_dirent = readdir(evt_dir)) != NULL) \
++ if (evt_dirent->d_type == DT_DIR && \
++ (strcmp(evt_dirent->d_name, ".")) && \
++ (strcmp(evt_dirent->d_name, "..")) && \
++ (!tp_event_has_id(dir_path, evt_dirent)))
++
++#define for_each_subsystem(sys_dir, sys_dirent) \
++ while ((sys_dirent = readdir(sys_dir)) != NULL) \
++ if (sys_dirent->d_type == DT_DIR && \
++ (strcmp(sys_dirent->d_name, ".")) && \
++ (strcmp(sys_dirent->d_name, "..")))
++
++int is_valid_tracepoint(const char *event_string);
++
++#endif /* __PERF_TRACEPOINT_H */
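
Both iterator macros above end in a brace-less if, so the statement that follows the macro invocation becomes the loop body and runs only for entries that survive the filters (real directories, not "." or "..", and for for_each_event only directories with an id file). Inside perf a typical call site looks like this sketch:

    DIR *sys_dir = tracing_events__opendir();
    struct dirent *sys_dirent;

    if (sys_dir) {
        for_each_subsystem(sys_dir, sys_dirent)
            printf("%s\n", sys_dirent->d_name); /* subsystems only */
        closedir(sys_dir);
    }

One caveat of the trailing-if construction: using the macro as the then-branch of a caller's if/else can mis-pair the else with the macro's hidden if, so wrapping the invocation in braces is the safe habit.
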
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index 072d709c96b48..65aea27d761ca 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -328,7 +328,7 @@ static void test_extra_filter(const struct test_params p)
+ if (bind(fd1, addr, sockaddr_size()))
+ error(1, errno, "failed to bind recv socket 1");
+
+- if (!bind(fd2, addr, sockaddr_size()) && errno != EADDRINUSE)
++ if (!bind(fd2, addr, sockaddr_size()) || errno != EADDRINUSE)
+ error(1, errno, "bind socket 2 should fail with EADDRINUSE");
+
+ free(addr);
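
The selftest fix above is worth spelling out: the test expects the second bind() to fail, and to fail specifically with EADDRINUSE. The old condition, !bind(...) && errno != EADDRINUSE, only raised an error when bind() succeeded and errno happened to differ, so a bind() failing with the wrong errno slipped through silently. Negating the expected outcome ("fails AND errno == EADDRINUSE") by De Morgan gives the corrected check: error out when bind() succeeds or errno is anything other than EADDRINUSE. A standalone sketch of the corrected assertion; the loopback address and fixed port are illustrative, the real test builds its addresses through helpers:

#include <arpa/inet.h>
#include <errno.h>
#include <error.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in addr;
	int fd1 = socket(AF_INET, SOCK_STREAM, 0);
	int fd2 = socket(AF_INET, SOCK_STREAM, 0);

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(12345);	/* assumed free for this sketch */

	if (bind(fd1, (struct sockaddr *)&addr, sizeof(addr)))
		error(1, errno, "failed to bind socket 1");

	/* Old form (!bind(...) && errno != EADDRINUSE) missed the case of
	 * bind() failing with some other errno. Corrected form: */
	if (!bind(fd2, (struct sockaddr *)&addr, sizeof(addr)) ||
	    errno != EADDRINUSE)
		error(1, errno, "bind socket 2 should fail with EADDRINUSE");
	return 0;
}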
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-10-12 11:17 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-10-12 11:17 UTC (permalink / raw
To: gentoo-commits
commit: 3dbd32d6f07aec079dccc14cdf4aad43a43c8bf9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 12 11:17:17 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 12 11:17:17 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3dbd32d6
Linux patch 5.19.15
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1014_linux-5.19.15.patch | 2578 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2582 insertions(+)
diff --git a/0000_README b/0000_README
index df106d7b..5d6628ec 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-5.19.14.patch
From: http://www.kernel.org
Desc: Linux 5.19.14
+Patch: 1014_linux-5.19.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-5.19.15.patch b/1014_linux-5.19.15.patch
new file mode 100644
index 00000000..d5600ea4
--- /dev/null
+++ b/1014_linux-5.19.15.patch
@@ -0,0 +1,2578 @@
+diff --git a/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt b/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
+index 8a9f3559335b5..7e14e26676ec9 100644
+--- a/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
++++ b/Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
+@@ -34,8 +34,8 @@ Example:
+ Use specific request line passing from dma
+ For example, MMC request line is 5
+
+- sdhci: sdhci@98e00000 {
+- compatible = "moxa,moxart-sdhci";
++ mmc: mmc@98e00000 {
++ compatible = "moxa,moxart-mmc";
+ reg = <0x98e00000 0x5C>;
+ interrupts = <5 0>;
+ clocks = <&clk_apb>;
+diff --git a/Documentation/process/code-of-conduct-interpretation.rst b/Documentation/process/code-of-conduct-interpretation.rst
+index e899f14a4ba24..4f8a06b00f608 100644
+--- a/Documentation/process/code-of-conduct-interpretation.rst
++++ b/Documentation/process/code-of-conduct-interpretation.rst
+@@ -51,7 +51,7 @@ the Technical Advisory Board (TAB) or other maintainers if you're
+ uncertain how to handle situations that come up. It will not be
+ considered a violation report unless you want it to be. If you are
+ uncertain about approaching the TAB or any other maintainers, please
+-reach out to our conflict mediator, Mishi Choudhary <mishi@linux.com>.
++reach out to our conflict mediator, Joanna Lee <joanna.lee@gesmer.com>.
+
+ In the end, "be kind to each other" is really what the end goal is for
+ everybody. We know everyone is human and we all fail at times, but the
+diff --git a/Makefile b/Makefile
+index ff4a158671455..af05237987ef3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+@@ -830,8 +830,8 @@ endif
+ # Initialize all stack variables with a zero value.
+ ifdef CONFIG_INIT_STACK_ALL_ZERO
+ KBUILD_CFLAGS += -ftrivial-auto-var-init=zero
+-ifdef CONFIG_CC_IS_CLANG
+-# https://bugs.llvm.org/show_bug.cgi?id=45497
++ifdef CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
++# https://github.com/llvm/llvm-project/issues/44842
+ KBUILD_CFLAGS += -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
+ endif
+ endif
+diff --git a/arch/arm/boot/dts/moxart-uc7112lx.dts b/arch/arm/boot/dts/moxart-uc7112lx.dts
+index eb5291b0ee3aa..e07b807b4cec5 100644
+--- a/arch/arm/boot/dts/moxart-uc7112lx.dts
++++ b/arch/arm/boot/dts/moxart-uc7112lx.dts
+@@ -79,7 +79,7 @@
+ clocks = <&ref12>;
+ };
+
+-&sdhci {
++&mmc {
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/moxart.dtsi b/arch/arm/boot/dts/moxart.dtsi
+index f5f070a874823..764832ddfa78a 100644
+--- a/arch/arm/boot/dts/moxart.dtsi
++++ b/arch/arm/boot/dts/moxart.dtsi
+@@ -93,8 +93,8 @@
+ clock-names = "PCLK";
+ };
+
+- sdhci: sdhci@98e00000 {
+- compatible = "moxa,moxart-sdhci";
++ mmc: mmc@98e00000 {
++ compatible = "moxa,moxart-mmc";
+ reg = <0x98e00000 0x5C>;
+ interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_apb>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts b/arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts
+index 40cf2236c0b61..ca48d9a54939c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts
+@@ -558,7 +558,7 @@
+ };
+
+ &usb_host0_xhci {
+- extcon = <&usb2phy0>;
++ dr_mode = "host";
+ status = "okay";
+ };
+
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index 227ed00093543..0e82bf85e59b3 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -489,6 +489,8 @@ enum prot_type {
+ PROT_TYPE_ALC = 2,
+ PROT_TYPE_DAT = 3,
+ PROT_TYPE_IEP = 4,
++ /* Dummy value for passing an initialized value when code != PGM_PROTECTION */
++ PROT_NONE,
+ };
+
+ static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva, u8 ar,
+@@ -504,6 +506,10 @@ static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva,
+ switch (code) {
+ case PGM_PROTECTION:
+ switch (prot) {
++ case PROT_NONE:
++ /* We should never get here, acts like termination */
++ WARN_ON_ONCE(1);
++ break;
+ case PROT_TYPE_IEP:
+ tec->b61 = 1;
+ fallthrough;
+@@ -968,8 +974,10 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+ return rc;
+ } else {
+ gpa = kvm_s390_real_to_abs(vcpu, ga);
+- if (kvm_is_error_gpa(vcpu->kvm, gpa))
++ if (kvm_is_error_gpa(vcpu->kvm, gpa)) {
+ rc = PGM_ADDRESSING;
++ prot = PROT_NONE;
++ }
+ }
+ if (rc)
+ return trans_exc(vcpu, rc, ga, ar, mode, prot);
+@@ -1112,8 +1120,6 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+ if (rc == PGM_PROTECTION && try_storage_prot_override)
+ rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
+ data, fragment_len, PAGE_SPO_ACC);
+- if (rc == PGM_PROTECTION)
+- prot = PROT_TYPE_KEYC;
+ if (rc)
+ break;
+ len -= fragment_len;
+@@ -1123,6 +1129,10 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+ if (rc > 0) {
+ bool terminate = (mode == GACC_STORE) && (idx > 0);
+
++ if (rc == PGM_PROTECTION)
++ prot = PROT_TYPE_KEYC;
++ else
++ prot = PROT_NONE;
+ rc = trans_exc_ending(vcpu, rc, ga, ar, mode, prot, terminate);
+ }
+ out_unlock:
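
The gaccess.c hunks above follow one idiom: rather than letting prot reach trans_exc_ending() uninitialized when the exception code is not PGM_PROTECTION, callers now pass an explicit PROT_NONE sentinel, and the consumer treats that sentinel as a can't-happen branch. A generic sketch of the pattern with illustrative names (this PROT_NONE is the kernel's internal enum value, unrelated to mmap's PROT_NONE):

#include <assert.h>

enum prot_type {
	PROT_TYPE_DAT,
	PROT_TYPE_IEP,
	PROT_NONE,	/* sentinel: "no protection info", never valid here */
};

static void report_protection(enum prot_type prot)
{
	switch (prot) {
	case PROT_NONE:
		/* Callers pass the sentinel only when the fault is not a
		 * protection exception; reaching this case is a bug. */
		assert(0 && "prot was the sentinel");
		break;
	case PROT_TYPE_DAT:
	case PROT_TYPE_IEP:
		/* ... encode the translation-exception details ... */
		break;
	}
}

int main(void)
{
	report_protection(PROT_TYPE_DAT);	/* fine */
	/* report_protection(PROT_NONE) would trip the assertion */
	return 0;
}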
+diff --git a/arch/sparc/include/asm/smp_32.h b/arch/sparc/include/asm/smp_32.h
+index 856081761b0fc..2cf7971d7f6c9 100644
+--- a/arch/sparc/include/asm/smp_32.h
++++ b/arch/sparc/include/asm/smp_32.h
+@@ -33,9 +33,6 @@ extern volatile unsigned long cpu_callin_map[NR_CPUS];
+ extern cpumask_t smp_commenced_mask;
+ extern struct linux_prom_registers smp_penguin_ctable;
+
+-typedef void (*smpfunc_t)(unsigned long, unsigned long, unsigned long,
+- unsigned long, unsigned long);
+-
+ void cpu_panic(void);
+
+ /*
+@@ -57,7 +54,7 @@ void smp_bogo(struct seq_file *);
+ void smp_info(struct seq_file *);
+
+ struct sparc32_ipi_ops {
+- void (*cross_call)(smpfunc_t func, cpumask_t mask, unsigned long arg1,
++ void (*cross_call)(void *func, cpumask_t mask, unsigned long arg1,
+ unsigned long arg2, unsigned long arg3,
+ unsigned long arg4);
+ void (*resched)(int cpu);
+@@ -66,28 +63,28 @@ struct sparc32_ipi_ops {
+ };
+ extern const struct sparc32_ipi_ops *sparc32_ipi_ops;
+
+-static inline void xc0(smpfunc_t func)
++static inline void xc0(void *func)
+ {
+ sparc32_ipi_ops->cross_call(func, *cpu_online_mask, 0, 0, 0, 0);
+ }
+
+-static inline void xc1(smpfunc_t func, unsigned long arg1)
++static inline void xc1(void *func, unsigned long arg1)
+ {
+ sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, 0, 0, 0);
+ }
+-static inline void xc2(smpfunc_t func, unsigned long arg1, unsigned long arg2)
++static inline void xc2(void *func, unsigned long arg1, unsigned long arg2)
+ {
+ sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, arg2, 0, 0);
+ }
+
+-static inline void xc3(smpfunc_t func, unsigned long arg1, unsigned long arg2,
++static inline void xc3(void *func, unsigned long arg1, unsigned long arg2,
+ unsigned long arg3)
+ {
+ sparc32_ipi_ops->cross_call(func, *cpu_online_mask,
+ arg1, arg2, arg3, 0);
+ }
+
+-static inline void xc4(smpfunc_t func, unsigned long arg1, unsigned long arg2,
++static inline void xc4(void *func, unsigned long arg1, unsigned long arg2,
+ unsigned long arg3, unsigned long arg4)
+ {
+ sparc32_ipi_ops->cross_call(func, *cpu_online_mask,
+diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
+index 1eed26d423fb2..991e9ad3d3e8f 100644
+--- a/arch/sparc/kernel/leon_smp.c
++++ b/arch/sparc/kernel/leon_smp.c
+@@ -359,7 +359,7 @@ void leonsmp_ipi_interrupt(void)
+ }
+
+ static struct smp_funcall {
+- smpfunc_t func;
++ void *func;
+ unsigned long arg1;
+ unsigned long arg2;
+ unsigned long arg3;
+@@ -372,7 +372,7 @@ static struct smp_funcall {
+ static DEFINE_SPINLOCK(cross_call_lock);
+
+ /* Cross calls must be serialized, at least currently. */
+-static void leon_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
++static void leon_cross_call(void *func, cpumask_t mask, unsigned long arg1,
+ unsigned long arg2, unsigned long arg3,
+ unsigned long arg4)
+ {
+@@ -384,7 +384,7 @@ static void leon_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
+
+ {
+ /* If you make changes here, make sure gcc generates proper code... */
+- register smpfunc_t f asm("i0") = func;
++ register void *f asm("i0") = func;
+ register unsigned long a1 asm("i1") = arg1;
+ register unsigned long a2 asm("i2") = arg2;
+ register unsigned long a3 asm("i3") = arg3;
+@@ -444,11 +444,13 @@ static void leon_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
+ /* Running cross calls. */
+ void leon_cross_call_irq(void)
+ {
++ void (*func)(unsigned long, unsigned long, unsigned long, unsigned long,
++ unsigned long) = ccall_info.func;
+ int i = smp_processor_id();
+
+ ccall_info.processors_in[i] = 1;
+- ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3,
+- ccall_info.arg4, ccall_info.arg5);
++ func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4,
++ ccall_info.arg5);
+ ccall_info.processors_out[i] = 1;
+ }
+
+diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
+index ff30f03beb7c7..9a62a5cf33370 100644
+--- a/arch/sparc/kernel/sun4d_smp.c
++++ b/arch/sparc/kernel/sun4d_smp.c
+@@ -268,7 +268,7 @@ static void sun4d_ipi_resched(int cpu)
+ }
+
+ static struct smp_funcall {
+- smpfunc_t func;
++ void *func;
+ unsigned long arg1;
+ unsigned long arg2;
+ unsigned long arg3;
+@@ -281,7 +281,7 @@ static struct smp_funcall {
+ static DEFINE_SPINLOCK(cross_call_lock);
+
+ /* Cross calls must be serialized, at least currently. */
+-static void sun4d_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
++static void sun4d_cross_call(void *func, cpumask_t mask, unsigned long arg1,
+ unsigned long arg2, unsigned long arg3,
+ unsigned long arg4)
+ {
+@@ -296,7 +296,7 @@ static void sun4d_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
+ * If you make changes here, make sure
+ * gcc generates proper code...
+ */
+- register smpfunc_t f asm("i0") = func;
++ register void *f asm("i0") = func;
+ register unsigned long a1 asm("i1") = arg1;
+ register unsigned long a2 asm("i2") = arg2;
+ register unsigned long a3 asm("i3") = arg3;
+@@ -353,11 +353,13 @@ static void sun4d_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
+ /* Running cross calls. */
+ void smp4d_cross_call_irq(void)
+ {
++ void (*func)(unsigned long, unsigned long, unsigned long, unsigned long,
++ unsigned long) = ccall_info.func;
+ int i = hard_smp_processor_id();
+
+ ccall_info.processors_in[i] = 1;
+- ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3,
+- ccall_info.arg4, ccall_info.arg5);
++ func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4,
++ ccall_info.arg5);
+ ccall_info.processors_out[i] = 1;
+ }
+
+diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
+index 228a6527082dc..056df034e79ee 100644
+--- a/arch/sparc/kernel/sun4m_smp.c
++++ b/arch/sparc/kernel/sun4m_smp.c
+@@ -157,7 +157,7 @@ static void sun4m_ipi_mask_one(int cpu)
+ }
+
+ static struct smp_funcall {
+- smpfunc_t func;
++ void *func;
+ unsigned long arg1;
+ unsigned long arg2;
+ unsigned long arg3;
+@@ -170,7 +170,7 @@ static struct smp_funcall {
+ static DEFINE_SPINLOCK(cross_call_lock);
+
+ /* Cross calls must be serialized, at least currently. */
+-static void sun4m_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
++static void sun4m_cross_call(void *func, cpumask_t mask, unsigned long arg1,
+ unsigned long arg2, unsigned long arg3,
+ unsigned long arg4)
+ {
+@@ -230,11 +230,13 @@ static void sun4m_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1,
+ /* Running cross calls. */
+ void smp4m_cross_call_irq(void)
+ {
++ void (*func)(unsigned long, unsigned long, unsigned long, unsigned long,
++ unsigned long) = ccall_info.func;
+ int i = smp_processor_id();
+
+ ccall_info.processors_in[i] = 1;
+- ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3,
+- ccall_info.arg4, ccall_info.arg5);
++ func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4,
++ ccall_info.arg5);
+ ccall_info.processors_out[i] = 1;
+ }
+
+diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
+index a9aa6a92c7fee..13f027afc875c 100644
+--- a/arch/sparc/mm/srmmu.c
++++ b/arch/sparc/mm/srmmu.c
+@@ -1636,19 +1636,19 @@ static void __init get_srmmu_type(void)
+ /* Local cross-calls. */
+ static void smp_flush_page_for_dma(unsigned long page)
+ {
+- xc1((smpfunc_t) local_ops->page_for_dma, page);
++ xc1(local_ops->page_for_dma, page);
+ local_ops->page_for_dma(page);
+ }
+
+ static void smp_flush_cache_all(void)
+ {
+- xc0((smpfunc_t) local_ops->cache_all);
++ xc0(local_ops->cache_all);
+ local_ops->cache_all();
+ }
+
+ static void smp_flush_tlb_all(void)
+ {
+- xc0((smpfunc_t) local_ops->tlb_all);
++ xc0(local_ops->tlb_all);
+ local_ops->tlb_all();
+ }
+
+@@ -1659,7 +1659,7 @@ static void smp_flush_cache_mm(struct mm_struct *mm)
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc1((smpfunc_t) local_ops->cache_mm, (unsigned long) mm);
++ xc1(local_ops->cache_mm, (unsigned long)mm);
+ local_ops->cache_mm(mm);
+ }
+ }
+@@ -1671,7 +1671,7 @@ static void smp_flush_tlb_mm(struct mm_struct *mm)
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask)) {
+- xc1((smpfunc_t) local_ops->tlb_mm, (unsigned long) mm);
++ xc1(local_ops->tlb_mm, (unsigned long)mm);
+ if (atomic_read(&mm->mm_users) == 1 && current->active_mm == mm)
+ cpumask_copy(mm_cpumask(mm),
+ cpumask_of(smp_processor_id()));
+@@ -1691,8 +1691,8 @@ static void smp_flush_cache_range(struct vm_area_struct *vma,
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc3((smpfunc_t) local_ops->cache_range,
+- (unsigned long) vma, start, end);
++ xc3(local_ops->cache_range, (unsigned long)vma, start,
++ end);
+ local_ops->cache_range(vma, start, end);
+ }
+ }
+@@ -1708,8 +1708,8 @@ static void smp_flush_tlb_range(struct vm_area_struct *vma,
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc3((smpfunc_t) local_ops->tlb_range,
+- (unsigned long) vma, start, end);
++ xc3(local_ops->tlb_range, (unsigned long)vma, start,
++ end);
+ local_ops->tlb_range(vma, start, end);
+ }
+ }
+@@ -1723,8 +1723,7 @@ static void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc2((smpfunc_t) local_ops->cache_page,
+- (unsigned long) vma, page);
++ xc2(local_ops->cache_page, (unsigned long)vma, page);
+ local_ops->cache_page(vma, page);
+ }
+ }
+@@ -1738,8 +1737,7 @@ static void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc2((smpfunc_t) local_ops->tlb_page,
+- (unsigned long) vma, page);
++ xc2(local_ops->tlb_page, (unsigned long)vma, page);
+ local_ops->tlb_page(vma, page);
+ }
+ }
+@@ -1753,7 +1751,7 @@ static void smp_flush_page_to_ram(unsigned long page)
+ * XXX This experiment failed, research further... -DaveM
+ */
+ #if 1
+- xc1((smpfunc_t) local_ops->page_to_ram, page);
++ xc1(local_ops->page_to_ram, page);
+ #endif
+ local_ops->page_to_ram(page);
+ }
+@@ -1764,8 +1762,7 @@ static void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr)
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
+- xc2((smpfunc_t) local_ops->sig_insns,
+- (unsigned long) mm, insn_addr);
++ xc2(local_ops->sig_insns, (unsigned long)mm, insn_addr);
+ local_ops->sig_insns(mm, insn_addr);
+ }
+
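The sparc32 hunks above all apply one pattern: the cross-call machinery stops casting handlers of differing prototypes into smpfunc_t at every call site and instead carries the handler as a plain void *, recovering a single agreed-upon five-argument prototype only at the point of the indirect call. That consolidates the function-type casts that warning-clean builds object to; the kernel additionally relies on the sparc calling convention tolerating callees that ignore trailing register arguments. A minimal sketch of the call-site-cast idiom with illustrative names; it assumes, as Linux's supported toolchains do, that void * and function pointers can be converted back and forth:

#include <stdio.h>

static void flush_one(unsigned long page, unsigned long a2, unsigned long a3,
		      unsigned long a4, unsigned long a5)
{
	(void)a2; (void)a3; (void)a4; (void)a5;
	printf("flush page %#lx\n", page);
}

struct cross_call {
	void *func;		/* untyped while queued */
	unsigned long arg1;
};

static void run_cross_call(struct cross_call *cc)
{
	/* Recover the full prototype immediately before the call, so the
	 * indirect call goes through one well-defined type. */
	void (*func)(unsigned long, unsigned long, unsigned long,
		     unsigned long, unsigned long) = cc->func;

	func(cc->arg1, 0, 0, 0, 0);
}

int main(void)
{
	struct cross_call cc = { .func = (void *)flush_one, .arg1 = 0x1000 };

	run_cross_call(&cc);
	return 0;
}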
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index f2fe63bfd819f..f1d4d67157be0 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -132,10 +132,18 @@ export LDS_ELF_FORMAT := $(ELF_FORMAT)
+ # The wrappers will select whether using "malloc" or the kernel allocator.
+ LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc
+
++# Avoid binutils 2.39+ warnings by marking the stack non-executable and
++# ignoring warnings for the kallsyms sections.
++LDFLAGS_EXECSTACK = -z noexecstack
++ifeq ($(CONFIG_LD_IS_BFD),y)
++LDFLAGS_EXECSTACK += $(call ld-option,--no-warn-rwx-segments)
++endif
++
+ LD_FLAGS_CMDLINE = $(foreach opt,$(KBUILD_LDFLAGS),-Wl,$(opt))
+
+ # Used by link-vmlinux.sh which has special support for um link
+ export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
++export LDFLAGS_vmlinux := $(LDFLAGS_EXECSTACK)
+
+ # When cleaning we don't include .config, so we don't include
+ # TT or skas makefiles and don't clean skas_ptregs.h.
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index bd8b988576097..8d6befb24b8ed 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2101,6 +2101,15 @@ static struct extra_reg intel_tnt_extra_regs[] __read_mostly = {
+ EVENT_EXTRA_END
+ };
+
++EVENT_ATTR_STR(mem-loads, mem_ld_grt, "event=0xd0,umask=0x5,ldlat=3");
++EVENT_ATTR_STR(mem-stores, mem_st_grt, "event=0xd0,umask=0x6");
++
++static struct attribute *grt_mem_attrs[] = {
++ EVENT_PTR(mem_ld_grt),
++ EVENT_PTR(mem_st_grt),
++ NULL
++};
++
+ static struct extra_reg intel_grt_extra_regs[] __read_mostly = {
+ /* must define OFFCORE_RSP_X first, see intel_fixup_er() */
+ INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
+@@ -5874,6 +5883,36 @@ __init int intel_pmu_init(void)
+ name = "Tremont";
+ break;
+
++ case INTEL_FAM6_ALDERLAKE_N:
++ x86_pmu.mid_ack = true;
++ memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
++ sizeof(hw_cache_event_ids));
++ memcpy(hw_cache_extra_regs, tnt_hw_cache_extra_regs,
++ sizeof(hw_cache_extra_regs));
++ hw_cache_event_ids[C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)] = -1;
++
++ x86_pmu.event_constraints = intel_slm_event_constraints;
++ x86_pmu.pebs_constraints = intel_grt_pebs_event_constraints;
++ x86_pmu.extra_regs = intel_grt_extra_regs;
++
++ x86_pmu.pebs_aliases = NULL;
++ x86_pmu.pebs_prec_dist = true;
++ x86_pmu.pebs_block = true;
++ x86_pmu.lbr_pt_coexist = true;
++ x86_pmu.flags |= PMU_FL_HAS_RSP_1;
++ x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
++
++ intel_pmu_pebs_data_source_grt();
++ x86_pmu.pebs_latency_data = adl_latency_data_small;
++ x86_pmu.get_event_constraints = tnt_get_event_constraints;
++ x86_pmu.limit_period = spr_limit_period;
++ td_attr = tnt_events_attrs;
++ mem_attr = grt_mem_attrs;
++ extra_attr = nhm_format_attr;
++ pr_cont("Gracemont events, ");
++ name = "gracemont";
++ break;
++
+ case INTEL_FAM6_WESTMERE:
+ case INTEL_FAM6_WESTMERE_EP:
+ case INTEL_FAM6_WESTMERE_EX:
+@@ -6216,7 +6255,6 @@ __init int intel_pmu_init(void)
+
+ case INTEL_FAM6_ALDERLAKE:
+ case INTEL_FAM6_ALDERLAKE_L:
+- case INTEL_FAM6_ALDERLAKE_N:
+ case INTEL_FAM6_RAPTORLAKE:
+ case INTEL_FAM6_RAPTORLAKE_P:
+ /*
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 9b48d957d2b3f..139204aea94e3 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -110,13 +110,18 @@ void __init intel_pmu_pebs_data_source_skl(bool pmem)
+ __intel_pmu_pebs_data_source_skl(pmem, pebs_data_source);
+ }
+
+-static void __init intel_pmu_pebs_data_source_grt(u64 *data_source)
++static void __init __intel_pmu_pebs_data_source_grt(u64 *data_source)
+ {
+ data_source[0x05] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOP, HIT);
+ data_source[0x06] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOP, HITM);
+ data_source[0x08] = OP_LH | P(LVL, L3) | LEVEL(L3) | P(SNOOPX, FWD);
+ }
+
++void __init intel_pmu_pebs_data_source_grt(void)
++{
++ __intel_pmu_pebs_data_source_grt(pebs_data_source);
++}
++
+ void __init intel_pmu_pebs_data_source_adl(void)
+ {
+ u64 *data_source;
+@@ -127,7 +132,7 @@ void __init intel_pmu_pebs_data_source_adl(void)
+
+ data_source = x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX].pebs_data_source;
+ memcpy(data_source, pebs_data_source, sizeof(pebs_data_source));
+- intel_pmu_pebs_data_source_grt(data_source);
++ __intel_pmu_pebs_data_source_grt(data_source);
+ }
+
+ static u64 precise_store_data(u64 status)
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 821098aebf78c..84f6f947ddef5 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -1513,6 +1513,8 @@ void intel_pmu_pebs_data_source_skl(bool pmem);
+
+ void intel_pmu_pebs_data_source_adl(void);
+
++void intel_pmu_pebs_data_source_grt(void);
++
+ int intel_pmu_setup_lbr_filter(struct perf_event *event);
+
+ void intel_pt_interrupt(void);
+diff --git a/arch/x86/um/shared/sysdep/syscalls_32.h b/arch/x86/um/shared/sysdep/syscalls_32.h
+index 68fd2cf526fd7..f6e9f84397e79 100644
+--- a/arch/x86/um/shared/sysdep/syscalls_32.h
++++ b/arch/x86/um/shared/sysdep/syscalls_32.h
+@@ -6,10 +6,9 @@
+ #include <asm/unistd.h>
+ #include <sysdep/ptrace.h>
+
+-typedef long syscall_handler_t(struct pt_regs);
++typedef long syscall_handler_t(struct syscall_args);
+
+ extern syscall_handler_t *sys_call_table[];
+
+ #define EXECUTE_SYSCALL(syscall, regs) \
+- ((long (*)(struct syscall_args)) \
+- (*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))
++ ((*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))
+diff --git a/arch/x86/um/tls_32.c b/arch/x86/um/tls_32.c
+index ac8eee093f9cd..66162eafd8e8f 100644
+--- a/arch/x86/um/tls_32.c
++++ b/arch/x86/um/tls_32.c
+@@ -65,9 +65,6 @@ static int get_free_idx(struct task_struct* task)
+ struct thread_struct *t = &task->thread;
+ int idx;
+
+- if (!t->arch.tls_array)
+- return GDT_ENTRY_TLS_MIN;
+-
+ for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
+ if (!t->arch.tls_array[idx].present)
+ return idx + GDT_ENTRY_TLS_MIN;
+@@ -240,9 +237,6 @@ static int get_tls_entry(struct task_struct *task, struct user_desc *info,
+ {
+ struct thread_struct *t = &task->thread;
+
+- if (!t->arch.tls_array)
+- goto clear;
+-
+ if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+ return -EINVAL;
+
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 5943387e3f357..5ca366e15c767 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -62,7 +62,7 @@ quiet_cmd_vdso = VDSO $@
+ -Wl,-T,$(filter %.lds,$^) $(filter %.o,$^) && \
+ sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
+
+-VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv
++VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv -z noexecstack
+ GCOV_PROFILE := n
+
+ #
+diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
+index 868bc7af21b0b..d078e5d73ed94 100644
+--- a/drivers/clk/ti/clk-44xx.c
++++ b/drivers/clk/ti/clk-44xx.c
+@@ -56,7 +56,7 @@ static const struct omap_clkctrl_bit_data omap4_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_dmic_abe_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0018:26",
++ "abe_cm:clk:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -76,7 +76,7 @@ static const struct omap_clkctrl_bit_data omap4_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcasp_abe_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0020:26",
++ "abe_cm:clk:0020:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -89,7 +89,7 @@ static const struct omap_clkctrl_bit_data omap4_mcasp_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcbsp1_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0028:26",
++ "abe_cm:clk:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -102,7 +102,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp2_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0030:26",
++ "abe_cm:clk:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -115,7 +115,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp3_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0038:26",
++ "abe_cm:clk:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -183,18 +183,18 @@ static const struct omap_clkctrl_bit_data omap4_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap4_abe_clkctrl_regs[] __initconst = {
+ { OMAP4_L4_ABE_CLKCTRL, NULL, 0, "ocp_abe_iclk" },
+- { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
++ { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
+ { OMAP4_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
+- { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe-clkctrl:0020:24" },
+- { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
+- { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
+- { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
+- { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0040:8" },
+- { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
+- { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
+- { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
+- { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
++ { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
++ { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe_cm:clk:0020:24" },
++ { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
++ { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
++ { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
++ { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0040:8" },
++ { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
++ { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
++ { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
++ { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
+ { OMAP4_WD_TIMER3_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+ };
+@@ -287,7 +287,7 @@ static const struct omap_clkctrl_bit_data omap4_fdif_bit_data[] __initconst = {
+
+ static const struct omap_clkctrl_reg_data omap4_iss_clkctrl_regs[] __initconst = {
+ { OMAP4_ISS_CLKCTRL, omap4_iss_bit_data, CLKF_SW_SUP, "ducati_clk_mux_ck" },
+- { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss-clkctrl:0008:24" },
++ { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss_cm:clk:0008:24" },
+ { 0 },
+ };
+
+@@ -320,7 +320,7 @@ static const struct omap_clkctrl_bit_data omap4_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_dss_clkctrl_regs[] __initconst = {
+- { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3-dss-clkctrl:0000:8" },
++ { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3_dss_cm:clk:0000:8" },
+ { 0 },
+ };
+
+@@ -336,7 +336,7 @@ static const struct omap_clkctrl_bit_data omap4_gpu_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_gfx_clkctrl_regs[] __initconst = {
+- { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3-gfx-clkctrl:0000:24" },
++ { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3_gfx_cm:clk:0000:24" },
+ { 0 },
+ };
+
+@@ -372,12 +372,12 @@ static const struct omap_clkctrl_bit_data omap4_hsi_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3-init-clkctrl:0038:24",
++ "l3_init_cm:clk:0038:24",
+ NULL,
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3-init-clkctrl:0038:25",
++ "l3_init_cm:clk:0038:25",
+ NULL,
+ };
+
+@@ -418,7 +418,7 @@ static const struct omap_clkctrl_bit_data omap4_usb_host_hs_bit_data[] __initcon
+ };
+
+ static const char * const omap4_usb_otg_hs_xclk_parents[] __initconst = {
+- "l3-init-clkctrl:0040:24",
++ "l3_init_cm:clk:0040:24",
+ NULL,
+ };
+
+@@ -452,14 +452,14 @@ static const struct omap_clkctrl_bit_data omap4_ocp2scp_usb_phy_bit_data[] __ini
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_init_clkctrl_regs[] __initconst = {
+- { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0008:24" },
+- { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0010:24" },
+- { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:0018:24" },
++ { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0008:24" },
++ { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0010:24" },
++ { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:0018:24" },
+ { OMAP4_USB_HOST_HS_CLKCTRL, omap4_usb_host_hs_bit_data, CLKF_SW_SUP, "init_60m_fclk" },
+ { OMAP4_USB_OTG_HS_CLKCTRL, omap4_usb_otg_hs_bit_data, CLKF_HW_SUP, "l3_div_ck" },
+ { OMAP4_USB_TLL_HS_CLKCTRL, omap4_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_USB_HOST_FS_CLKCTRL, NULL, CLKF_SW_SUP, "func_48mc_fclk" },
+- { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:00c0:8" },
++ { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:00c0:8" },
+ { 0 },
+ };
+
+@@ -530,7 +530,7 @@ static const struct omap_clkctrl_bit_data omap4_gpio6_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_per_mcbsp4_gfclk_parents[] __initconst = {
+- "l4-per-clkctrl:00c0:26",
++ "l4_per_cm:clk:00c0:26",
+ "pad_clks_ck",
+ NULL,
+ };
+@@ -570,12 +570,12 @@ static const struct omap_clkctrl_bit_data omap4_slimbus2_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initconst = {
+- { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0008:24" },
+- { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0010:24" },
+- { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0018:24" },
+- { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0020:24" },
+- { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0028:24" },
+- { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0030:24" },
++ { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0008:24" },
++ { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0010:24" },
++ { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0018:24" },
++ { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0020:24" },
++ { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0028:24" },
++ { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0030:24" },
+ { OMAP4_ELM_CLKCTRL, NULL, 0, "l4_div_ck" },
+ { OMAP4_GPIO2_CLKCTRL, omap4_gpio2_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_GPIO3_CLKCTRL, omap4_gpio3_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+@@ -588,14 +588,14 @@ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initcons
+ { OMAP4_I2C3_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_I2C4_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_L4_PER_CLKCTRL, NULL, 0, "l4_div_ck" },
+- { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:00c0:24" },
++ { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:00c0:24" },
+ { OMAP4_MCSPI1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+- { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0118:8" },
++ { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0118:8" },
+ { OMAP4_UART1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -630,7 +630,7 @@ static const struct omap_clkctrl_reg_data omap4_l4_wkup_clkctrl_regs[] __initcon
+ { OMAP4_L4_WKUP_CLKCTRL, NULL, 0, "l4_wkup_clk_mux_ck" },
+ { OMAP4_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP4_GPIO1_CLKCTRL, omap4_gpio1_bit_data, CLKF_HW_SUP, "l4_wkup_clk_mux_ck" },
+- { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4-wkup-clkctrl:0020:24" },
++ { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4_wkup_cm:clk:0020:24" },
+ { OMAP4_COUNTER_32K_CLKCTRL, NULL, 0, "sys_32k_ck" },
+ { OMAP4_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -644,7 +644,7 @@ static const char * const omap4_pmd_stm_clock_mux_ck_parents[] __initconst = {
+ };
+
+ static const char * const omap4_trace_clk_div_div_ck_parents[] __initconst = {
+- "emu-sys-clkctrl:0000:22",
++ "emu_sys_cm:clk:0000:22",
+ NULL,
+ };
+
+@@ -662,7 +662,7 @@ static const struct omap_clkctrl_div_data omap4_trace_clk_div_div_ck_data __init
+ };
+
+ static const char * const omap4_stm_clk_div_ck_parents[] __initconst = {
+- "emu-sys-clkctrl:0000:20",
++ "emu_sys_cm:clk:0000:20",
+ NULL,
+ };
+
+@@ -716,73 +716,73 @@ static struct ti_dt_clk omap44xx_clks[] = {
+ * hwmod support. Once hwmod is removed, these can be removed
+ * also.
+ */
+- DT_CLK(NULL, "aess_fclk", "abe-clkctrl:0008:24"),
+- DT_CLK(NULL, "cm2_dm10_mux", "l4-per-clkctrl:0008:24"),
+- DT_CLK(NULL, "cm2_dm11_mux", "l4-per-clkctrl:0010:24"),
+- DT_CLK(NULL, "cm2_dm2_mux", "l4-per-clkctrl:0018:24"),
+- DT_CLK(NULL, "cm2_dm3_mux", "l4-per-clkctrl:0020:24"),
+- DT_CLK(NULL, "cm2_dm4_mux", "l4-per-clkctrl:0028:24"),
+- DT_CLK(NULL, "cm2_dm9_mux", "l4-per-clkctrl:0030:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
+- DT_CLK(NULL, "dmt1_clk_mux", "l4-wkup-clkctrl:0020:24"),
+- DT_CLK(NULL, "dss_48mhz_clk", "l3-dss-clkctrl:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "l3-dss-clkctrl:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "l3-dss-clkctrl:0000:10"),
+- DT_CLK(NULL, "dss_tv_clk", "l3-dss-clkctrl:0000:11"),
+- DT_CLK(NULL, "fdif_fck", "iss-clkctrl:0008:24"),
+- DT_CLK(NULL, "func_dmic_abe_gfclk", "abe-clkctrl:0018:24"),
+- DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe-clkctrl:0020:24"),
+- DT_CLK(NULL, "func_mcbsp1_gfclk", "abe-clkctrl:0028:24"),
+- DT_CLK(NULL, "func_mcbsp2_gfclk", "abe-clkctrl:0030:24"),
+- DT_CLK(NULL, "func_mcbsp3_gfclk", "abe-clkctrl:0038:24"),
+- DT_CLK(NULL, "gpio1_dbclk", "l4-wkup-clkctrl:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4-per-clkctrl:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4-per-clkctrl:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4-per-clkctrl:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4-per-clkctrl:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4-per-clkctrl:0060:8"),
+- DT_CLK(NULL, "hsi_fck", "l3-init-clkctrl:0018:24"),
+- DT_CLK(NULL, "hsmmc1_fclk", "l3-init-clkctrl:0008:24"),
+- DT_CLK(NULL, "hsmmc2_fclk", "l3-init-clkctrl:0010:24"),
+- DT_CLK(NULL, "iss_ctrlclk", "iss-clkctrl:0000:8"),
+- DT_CLK(NULL, "mcasp_sync_mux_ck", "abe-clkctrl:0020:26"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
+- DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4-per-clkctrl:00c0:26"),
+- DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3-init-clkctrl:00c0:8"),
+- DT_CLK(NULL, "otg_60m_gfclk", "l3-init-clkctrl:0040:24"),
+- DT_CLK(NULL, "per_mcbsp4_gfclk", "l4-per-clkctrl:00c0:24"),
+- DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu-sys-clkctrl:0000:20"),
+- DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu-sys-clkctrl:0000:22"),
+- DT_CLK(NULL, "sgx_clk_mux", "l3-gfx-clkctrl:0000:24"),
+- DT_CLK(NULL, "slimbus1_fclk_0", "abe-clkctrl:0040:8"),
+- DT_CLK(NULL, "slimbus1_fclk_1", "abe-clkctrl:0040:9"),
+- DT_CLK(NULL, "slimbus1_fclk_2", "abe-clkctrl:0040:10"),
+- DT_CLK(NULL, "slimbus1_slimbus_clk", "abe-clkctrl:0040:11"),
+- DT_CLK(NULL, "slimbus2_fclk_0", "l4-per-clkctrl:0118:8"),
+- DT_CLK(NULL, "slimbus2_fclk_1", "l4-per-clkctrl:0118:9"),
+- DT_CLK(NULL, "slimbus2_slimbus_clk", "l4-per-clkctrl:0118:10"),
+- DT_CLK(NULL, "stm_clk_div_ck", "emu-sys-clkctrl:0000:27"),
+- DT_CLK(NULL, "timer5_sync_mux", "abe-clkctrl:0048:24"),
+- DT_CLK(NULL, "timer6_sync_mux", "abe-clkctrl:0050:24"),
+- DT_CLK(NULL, "timer7_sync_mux", "abe-clkctrl:0058:24"),
+- DT_CLK(NULL, "timer8_sync_mux", "abe-clkctrl:0060:24"),
+- DT_CLK(NULL, "trace_clk_div_div_ck", "emu-sys-clkctrl:0000:24"),
+- DT_CLK(NULL, "usb_host_hs_func48mclk", "l3-init-clkctrl:0038:15"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3-init-clkctrl:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3-init-clkctrl:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3-init-clkctrl:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3-init-clkctrl:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3-init-clkctrl:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3-init-clkctrl:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init-clkctrl:0038:10"),
+- DT_CLK(NULL, "usb_otg_hs_xclk", "l3-init-clkctrl:0040:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3-init-clkctrl:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3-init-clkctrl:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3-init-clkctrl:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3-init-clkctrl:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3-init-clkctrl:0038:25"),
++ DT_CLK(NULL, "aess_fclk", "abe_cm:0008:24"),
++ DT_CLK(NULL, "cm2_dm10_mux", "l4_per_cm:0008:24"),
++ DT_CLK(NULL, "cm2_dm11_mux", "l4_per_cm:0010:24"),
++ DT_CLK(NULL, "cm2_dm2_mux", "l4_per_cm:0018:24"),
++ DT_CLK(NULL, "cm2_dm3_mux", "l4_per_cm:0020:24"),
++ DT_CLK(NULL, "cm2_dm4_mux", "l4_per_cm:0028:24"),
++ DT_CLK(NULL, "cm2_dm9_mux", "l4_per_cm:0030:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
++ DT_CLK(NULL, "dmt1_clk_mux", "l4_wkup_cm:0020:24"),
++ DT_CLK(NULL, "dss_48mhz_clk", "l3_dss_cm:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "l3_dss_cm:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "l3_dss_cm:0000:10"),
++ DT_CLK(NULL, "dss_tv_clk", "l3_dss_cm:0000:11"),
++ DT_CLK(NULL, "fdif_fck", "iss_cm:0008:24"),
++ DT_CLK(NULL, "func_dmic_abe_gfclk", "abe_cm:0018:24"),
++ DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe_cm:0020:24"),
++ DT_CLK(NULL, "func_mcbsp1_gfclk", "abe_cm:0028:24"),
++ DT_CLK(NULL, "func_mcbsp2_gfclk", "abe_cm:0030:24"),
++ DT_CLK(NULL, "func_mcbsp3_gfclk", "abe_cm:0038:24"),
++ DT_CLK(NULL, "gpio1_dbclk", "l4_wkup_cm:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4_per_cm:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4_per_cm:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4_per_cm:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4_per_cm:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4_per_cm:0060:8"),
++ DT_CLK(NULL, "hsi_fck", "l3_init_cm:0018:24"),
++ DT_CLK(NULL, "hsmmc1_fclk", "l3_init_cm:0008:24"),
++ DT_CLK(NULL, "hsmmc2_fclk", "l3_init_cm:0010:24"),
++ DT_CLK(NULL, "iss_ctrlclk", "iss_cm:0000:8"),
++ DT_CLK(NULL, "mcasp_sync_mux_ck", "abe_cm:0020:26"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
++ DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4_per_cm:00c0:26"),
++ DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3_init_cm:00c0:8"),
++ DT_CLK(NULL, "otg_60m_gfclk", "l3_init_cm:0040:24"),
++ DT_CLK(NULL, "per_mcbsp4_gfclk", "l4_per_cm:00c0:24"),
++ DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu_sys_cm:0000:20"),
++ DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu_sys_cm:0000:22"),
++ DT_CLK(NULL, "sgx_clk_mux", "l3_gfx_cm:0000:24"),
++ DT_CLK(NULL, "slimbus1_fclk_0", "abe_cm:0040:8"),
++ DT_CLK(NULL, "slimbus1_fclk_1", "abe_cm:0040:9"),
++ DT_CLK(NULL, "slimbus1_fclk_2", "abe_cm:0040:10"),
++ DT_CLK(NULL, "slimbus1_slimbus_clk", "abe_cm:0040:11"),
++ DT_CLK(NULL, "slimbus2_fclk_0", "l4_per_cm:0118:8"),
++ DT_CLK(NULL, "slimbus2_fclk_1", "l4_per_cm:0118:9"),
++ DT_CLK(NULL, "slimbus2_slimbus_clk", "l4_per_cm:0118:10"),
++ DT_CLK(NULL, "stm_clk_div_ck", "emu_sys_cm:0000:27"),
++ DT_CLK(NULL, "timer5_sync_mux", "abe_cm:0048:24"),
++ DT_CLK(NULL, "timer6_sync_mux", "abe_cm:0050:24"),
++ DT_CLK(NULL, "timer7_sync_mux", "abe_cm:0058:24"),
++ DT_CLK(NULL, "timer8_sync_mux", "abe_cm:0060:24"),
++ DT_CLK(NULL, "trace_clk_div_div_ck", "emu_sys_cm:0000:24"),
++ DT_CLK(NULL, "usb_host_hs_func48mclk", "l3_init_cm:0038:15"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3_init_cm:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3_init_cm:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3_init_cm:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3_init_cm:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3_init_cm:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3_init_cm:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init_cm:0038:10"),
++ DT_CLK(NULL, "usb_otg_hs_xclk", "l3_init_cm:0040:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3_init_cm:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3_init_cm:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3_init_cm:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3_init_cm:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3_init_cm:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
+index b4aff76eb3735..90e0a9ea63515 100644
+--- a/drivers/clk/ti/clk-54xx.c
++++ b/drivers/clk/ti/clk-54xx.c
+@@ -50,7 +50,7 @@ static const struct omap_clkctrl_bit_data omap5_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_dmic_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0018:26",
++ "abe_cm:clk:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -70,7 +70,7 @@ static const struct omap_clkctrl_bit_data omap5_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mcbsp1_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0028:26",
++ "abe_cm:clk:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -83,7 +83,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp2_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0030:26",
++ "abe_cm:clk:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -96,7 +96,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp3_gfclk_parents[] __initconst = {
+- "abe-clkctrl:0038:26",
++ "abe_cm:clk:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -136,16 +136,16 @@ static const struct omap_clkctrl_bit_data omap5_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap5_abe_clkctrl_regs[] __initconst = {
+ { OMAP5_L4_ABE_CLKCTRL, NULL, 0, "abe_iclk" },
+- { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
++ { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
+ { OMAP5_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
+- { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
+- { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
+- { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
+- { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
+- { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
+- { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
+- { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
++ { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
++ { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
++ { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
++ { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
++ { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
++ { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
++ { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
++ { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
+ { 0 },
+ };
+
+@@ -268,12 +268,12 @@ static const struct omap_clkctrl_bit_data omap5_gpio8_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l4per_clkctrl_regs[] __initconst = {
+- { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0008:24" },
+- { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0010:24" },
+- { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0018:24" },
+- { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0020:24" },
+- { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0028:24" },
+- { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0030:24" },
++ { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0008:24" },
++ { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0010:24" },
++ { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0018:24" },
++ { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0020:24" },
++ { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0028:24" },
++ { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0030:24" },
+ { OMAP5_GPIO2_CLKCTRL, omap5_gpio2_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO3_CLKCTRL, omap5_gpio3_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO4_CLKCTRL, omap5_gpio4_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+@@ -345,7 +345,7 @@ static const struct omap_clkctrl_bit_data omap5_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_dss_clkctrl_regs[] __initconst = {
+- { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss-clkctrl:0000:8" },
++ { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss_cm:clk:0000:8" },
+ { 0 },
+ };
+
+@@ -378,7 +378,7 @@ static const struct omap_clkctrl_bit_data omap5_gpu_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_gpu_clkctrl_regs[] __initconst = {
+- { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu-clkctrl:0000:24" },
++ { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu_cm:clk:0000:24" },
+ { 0 },
+ };
+
+@@ -389,7 +389,7 @@ static const char * const omap5_mmc1_fclk_mux_parents[] __initconst = {
+ };
+
+ static const char * const omap5_mmc1_fclk_parents[] __initconst = {
+- "l3init-clkctrl:0008:24",
++ "l3init_cm:clk:0008:24",
+ NULL,
+ };
+
+@@ -405,7 +405,7 @@ static const struct omap_clkctrl_bit_data omap5_mmc1_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mmc2_fclk_parents[] __initconst = {
+- "l3init-clkctrl:0010:24",
++ "l3init_cm:clk:0010:24",
+ NULL,
+ };
+
+@@ -430,12 +430,12 @@ static const char * const omap5_usb_host_hs_hsic480m_p3_clk_parents[] __initcons
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3init-clkctrl:0038:24",
++ "l3init_cm:clk:0038:24",
+ NULL,
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3init-clkctrl:0038:25",
++ "l3init_cm:clk:0038:25",
+ NULL,
+ };
+
+@@ -494,8 +494,8 @@ static const struct omap_clkctrl_bit_data omap5_usb_otg_ss_bit_data[] __initcons
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l3init_clkctrl_regs[] __initconst = {
+- { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0008:25" },
+- { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0010:25" },
++ { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0008:25" },
++ { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0010:25" },
+ { OMAP5_USB_HOST_HS_CLKCTRL, omap5_usb_host_hs_bit_data, CLKF_SW_SUP, "l3init_60m_fclk" },
+ { OMAP5_USB_TLL_HS_CLKCTRL, omap5_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_SATA_CLKCTRL, omap5_sata_bit_data, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -519,7 +519,7 @@ static const struct omap_clkctrl_reg_data omap5_wkupaon_clkctrl_regs[] __initcon
+ { OMAP5_L4_WKUP_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP5_GPIO1_CLKCTRL, omap5_gpio1_bit_data, CLKF_HW_SUP, "wkupaon_iclk_mux" },
+- { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon-clkctrl:0020:24" },
++ { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon_cm:clk:0020:24" },
+ { OMAP5_COUNTER_32K_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -549,58 +549,58 @@ const struct omap_clkctrl_data omap5_clkctrl_data[] __initconst = {
+ static struct ti_dt_clk omap54xx_clks[] = {
+ DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
+- DT_CLK(NULL, "dmic_gfclk", "abe-clkctrl:0018:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
+- DT_CLK(NULL, "dss_32khz_clk", "dss-clkctrl:0000:11"),
+- DT_CLK(NULL, "dss_48mhz_clk", "dss-clkctrl:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "dss-clkctrl:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "dss-clkctrl:0000:10"),
+- DT_CLK(NULL, "gpio1_dbclk", "wkupaon-clkctrl:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4per-clkctrl:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4per-clkctrl:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4per-clkctrl:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4per-clkctrl:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4per-clkctrl:0060:8"),
+- DT_CLK(NULL, "gpio7_dbclk", "l4per-clkctrl:00f0:8"),
+- DT_CLK(NULL, "gpio8_dbclk", "l4per-clkctrl:00f8:8"),
+- DT_CLK(NULL, "mcbsp1_gfclk", "abe-clkctrl:0028:24"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
+- DT_CLK(NULL, "mcbsp2_gfclk", "abe-clkctrl:0030:24"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
+- DT_CLK(NULL, "mcbsp3_gfclk", "abe-clkctrl:0038:24"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
+- DT_CLK(NULL, "mmc1_32khz_clk", "l3init-clkctrl:0008:8"),
+- DT_CLK(NULL, "mmc1_fclk", "l3init-clkctrl:0008:25"),
+- DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
+- DT_CLK(NULL, "mmc2_fclk", "l3init-clkctrl:0010:25"),
+- DT_CLK(NULL, "mmc2_fclk_mux", "l3init-clkctrl:0010:24"),
+- DT_CLK(NULL, "sata_ref_clk", "l3init-clkctrl:0068:8"),
+- DT_CLK(NULL, "timer10_gfclk_mux", "l4per-clkctrl:0008:24"),
+- DT_CLK(NULL, "timer11_gfclk_mux", "l4per-clkctrl:0010:24"),
+- DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon-clkctrl:0020:24"),
+- DT_CLK(NULL, "timer2_gfclk_mux", "l4per-clkctrl:0018:24"),
+- DT_CLK(NULL, "timer3_gfclk_mux", "l4per-clkctrl:0020:24"),
+- DT_CLK(NULL, "timer4_gfclk_mux", "l4per-clkctrl:0028:24"),
+- DT_CLK(NULL, "timer5_gfclk_mux", "abe-clkctrl:0048:24"),
+- DT_CLK(NULL, "timer6_gfclk_mux", "abe-clkctrl:0050:24"),
+- DT_CLK(NULL, "timer7_gfclk_mux", "abe-clkctrl:0058:24"),
+- DT_CLK(NULL, "timer8_gfclk_mux", "abe-clkctrl:0060:24"),
+- DT_CLK(NULL, "timer9_gfclk_mux", "l4per-clkctrl:0030:24"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init-clkctrl:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init-clkctrl:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init-clkctrl:0038:7"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init-clkctrl:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init-clkctrl:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init-clkctrl:0038:6"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init-clkctrl:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init-clkctrl:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init-clkctrl:0038:10"),
+- DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init-clkctrl:00d0:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init-clkctrl:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init-clkctrl:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init-clkctrl:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3init-clkctrl:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3init-clkctrl:0038:25"),
++ DT_CLK(NULL, "dmic_gfclk", "abe_cm:0018:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
++ DT_CLK(NULL, "dss_32khz_clk", "dss_cm:0000:11"),
++ DT_CLK(NULL, "dss_48mhz_clk", "dss_cm:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "dss_cm:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "dss_cm:0000:10"),
++ DT_CLK(NULL, "gpio1_dbclk", "wkupaon_cm:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4per_cm:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4per_cm:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4per_cm:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4per_cm:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4per_cm:0060:8"),
++ DT_CLK(NULL, "gpio7_dbclk", "l4per_cm:00f0:8"),
++ DT_CLK(NULL, "gpio8_dbclk", "l4per_cm:00f8:8"),
++ DT_CLK(NULL, "mcbsp1_gfclk", "abe_cm:0028:24"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
++ DT_CLK(NULL, "mcbsp2_gfclk", "abe_cm:0030:24"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
++ DT_CLK(NULL, "mcbsp3_gfclk", "abe_cm:0038:24"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
++ DT_CLK(NULL, "mmc1_32khz_clk", "l3init_cm:0008:8"),
++ DT_CLK(NULL, "mmc1_fclk", "l3init_cm:0008:25"),
++ DT_CLK(NULL, "mmc1_fclk_mux", "l3init_cm:0008:24"),
++ DT_CLK(NULL, "mmc2_fclk", "l3init_cm:0010:25"),
++ DT_CLK(NULL, "mmc2_fclk_mux", "l3init_cm:0010:24"),
++ DT_CLK(NULL, "sata_ref_clk", "l3init_cm:0068:8"),
++ DT_CLK(NULL, "timer10_gfclk_mux", "l4per_cm:0008:24"),
++ DT_CLK(NULL, "timer11_gfclk_mux", "l4per_cm:0010:24"),
++ DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon_cm:0020:24"),
++ DT_CLK(NULL, "timer2_gfclk_mux", "l4per_cm:0018:24"),
++ DT_CLK(NULL, "timer3_gfclk_mux", "l4per_cm:0020:24"),
++ DT_CLK(NULL, "timer4_gfclk_mux", "l4per_cm:0028:24"),
++ DT_CLK(NULL, "timer5_gfclk_mux", "abe_cm:0048:24"),
++ DT_CLK(NULL, "timer6_gfclk_mux", "abe_cm:0050:24"),
++ DT_CLK(NULL, "timer7_gfclk_mux", "abe_cm:0058:24"),
++ DT_CLK(NULL, "timer8_gfclk_mux", "abe_cm:0060:24"),
++ DT_CLK(NULL, "timer9_gfclk_mux", "l4per_cm:0030:24"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init_cm:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init_cm:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init_cm:0038:7"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init_cm:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init_cm:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init_cm:0038:6"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init_cm:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init_cm:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init_cm:0038:10"),
++ DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init_cm:00d0:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init_cm:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init_cm:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init_cm:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3init_cm:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3init_cm:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index e23bf04586320..617360e20d86f 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -528,6 +528,10 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ char *c;
+ u16 soc_mask = 0;
+
++ if (!(ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) &&
++ of_node_name_eq(node, "clk"))
++ ti_clk_features.flags |= TI_CLK_CLKCTRL_COMPAT;
++
+ addrp = of_get_address(node, 0, NULL, NULL);
+ addr = (u32)of_translate_address(node, addrp);
+
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index cd62bbb50e8b4..7ce8bb160a592 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -3160,9 +3160,10 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+
+ /* Request and map I/O memory */
+ xdev->regs = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(xdev->regs))
+- return PTR_ERR(xdev->regs);
+-
++ if (IS_ERR(xdev->regs)) {
++ err = PTR_ERR(xdev->regs);
++ goto disable_clks;
++ }
+ /* Retrieve the DMA engine properties from the device tree */
+ xdev->max_buffer_len = GENMASK(XILINX_DMA_MAX_TRANS_LEN_MAX - 1, 0);
+ xdev->s2mm_chan_id = xdev->dma_config->max_channels / 2;
+@@ -3190,7 +3191,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ if (err < 0) {
+ dev_err(xdev->dev,
+ "missing xlnx,num-fstores property\n");
+- return err;
++ goto disable_clks;
+ }
+
+ err = of_property_read_u32(node, "xlnx,flush-fsync",
+@@ -3210,7 +3211,11 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ xdev->ext_addr = false;
+
+ /* Set the dma mask bits */
+- dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
++ err = dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
++ if (err < 0) {
++ dev_err(xdev->dev, "DMA mask error %d\n", err);
++ goto disable_clks;
++ }
+
+ /* Initialize the DMA engine */
+ xdev->common.dev = &pdev->dev;
+@@ -3259,7 +3264,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ for_each_child_of_node(node, child) {
+ err = xilinx_dma_child_probe(xdev, child);
+ if (err < 0)
+- goto disable_clks;
++ goto error;
+ }
+
+ if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+@@ -3294,12 +3299,12 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+
+ return 0;
+
+-disable_clks:
+- xdma_disable_allclks(xdev);
+ error:
+ for (i = 0; i < xdev->dma_config->max_channels; i++)
+ if (xdev->chan[i])
+ xilinx_dma_chan_remove(xdev->chan[i]);
++disable_clks:
++ xdma_disable_allclks(xdev);
+
+ return err;
+ }
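
The xilinx_dma hunks above reorder the probe error labels so that channel teardown runs before the clocks are disabled, and early failures now jump to the clock-disable label instead of returning with the clocks still on. A minimal userspace sketch of that unwind ordering, with malloc()/free() standing in for the driver's clock and channel management (all names here are illustrative, not the driver's):

    #include <errno.h>
    #include <stdlib.h>

    static int probe_like(int fail_early, int fail_late)
    {
        void *clks, *chan = NULL;
        int err = 0;

        clks = malloc(16);          /* like enabling the clocks */
        if (!clks)
            return -ENOMEM;

        if (fail_early) {           /* like devm_platform_ioremap_resource() failing */
            err = -EIO;
            goto disable_clks;      /* no channels yet: skip the "error" label */
        }

        chan = malloc(16);          /* like xilinx_dma_child_probe() */
        if (!chan || fail_late) {
            err = -EIO;
            goto error;             /* channels may exist: remove them first */
        }

        return 0;                   /* success keeps the resources, as probe() does */

    error:
        free(chan);                 /* like xilinx_dma_chan_remove() */
    disable_clks:
        free(clks);                 /* like xdma_disable_allclks(), always last */
        return err;
    }

    int main(void)
    {
        return probe_like(0, 1) == -EIO ? 0 : 1;
    }
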
+diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
+index 3ed7ae0d6781e..96060bf90a24a 100644
+--- a/drivers/firmware/arm_scmi/clock.c
++++ b/drivers/firmware/arm_scmi/clock.c
+@@ -450,9 +450,13 @@ static int scmi_clock_count_get(const struct scmi_protocol_handle *ph)
+ static const struct scmi_clock_info *
+ scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id)
+ {
++ struct scmi_clock_info *clk;
+ struct clock_info *ci = ph->get_priv(ph);
+- struct scmi_clock_info *clk = ci->clk + clk_id;
+
++ if (clk_id >= ci->num_clocks)
++ return NULL;
++
++ clk = ci->clk + clk_id;
+ if (!clk->name[0])
+ return NULL;
+
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index d5dee625de780..0e05a79de82d8 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -112,9 +112,28 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ scmi_pd_data->domains = domains;
+ scmi_pd_data->num_domains = num_domains;
+
++ dev_set_drvdata(dev, scmi_pd_data);
++
+ return of_genpd_add_provider_onecell(np, scmi_pd_data);
+ }
+
++static void scmi_pm_domain_remove(struct scmi_device *sdev)
++{
++ int i;
++ struct genpd_onecell_data *scmi_pd_data;
++ struct device *dev = &sdev->dev;
++ struct device_node *np = dev->of_node;
++
++ of_genpd_del_provider(np);
++
++ scmi_pd_data = dev_get_drvdata(dev);
++ for (i = 0; i < scmi_pd_data->num_domains; i++) {
++ if (!scmi_pd_data->domains[i])
++ continue;
++ pm_genpd_remove(scmi_pd_data->domains[i]);
++ }
++}
++
+ static const struct scmi_device_id scmi_id_table[] = {
+ { SCMI_PROTOCOL_POWER, "genpd" },
+ { },
+@@ -124,6 +143,7 @@ MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+ static struct scmi_driver scmi_power_domain_driver = {
+ .name = "scmi-power-domain",
+ .probe = scmi_pm_domain_probe,
++ .remove = scmi_pm_domain_remove,
+ .id_table = scmi_id_table,
+ };
+ module_scmi_driver(scmi_power_domain_driver);
+diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
+index 7288c61178380..0b5853fa9d874 100644
+--- a/drivers/firmware/arm_scmi/sensors.c
++++ b/drivers/firmware/arm_scmi/sensors.c
+@@ -762,6 +762,10 @@ static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
+ {
+ int ret;
+ struct scmi_xfer *t;
++ struct sensors_info *si = ph->get_priv(ph);
++
++ if (sensor_id >= si->num_sensors)
++ return -EINVAL;
+
+ ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_GET,
+ sizeof(__le32), sizeof(__le32), &t);
+@@ -771,7 +775,6 @@ static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
+ put_unaligned_le32(sensor_id, t->tx.buf);
+ ret = ph->xops->do_xfer(ph, t);
+ if (!ret) {
+- struct sensors_info *si = ph->get_priv(ph);
+ struct scmi_sensor_info *s = si->sensors + sensor_id;
+
+ *sensor_config = get_unaligned_le64(t->rx.buf);
+@@ -788,6 +791,10 @@ static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
+ int ret;
+ struct scmi_xfer *t;
+ struct scmi_msg_sensor_config_set *msg;
++ struct sensors_info *si = ph->get_priv(ph);
++
++ if (sensor_id >= si->num_sensors)
++ return -EINVAL;
+
+ ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_SET,
+ sizeof(*msg), 0, &t);
+@@ -800,7 +807,6 @@ static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
+
+ ret = ph->xops->do_xfer(ph, t);
+ if (!ret) {
+- struct sensors_info *si = ph->get_priv(ph);
+ struct scmi_sensor_info *s = si->sensors + sensor_id;
+
+ s->sensor_config = sensor_config;
+@@ -831,8 +837,11 @@ static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
+ int ret;
+ struct scmi_xfer *t;
+ struct scmi_msg_sensor_reading_get *sensor;
++ struct scmi_sensor_info *s;
+ struct sensors_info *si = ph->get_priv(ph);
+- struct scmi_sensor_info *s = si->sensors + sensor_id;
++
++ if (sensor_id >= si->num_sensors)
++ return -EINVAL;
+
+ ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
+ sizeof(*sensor), 0, &t);
+@@ -841,6 +850,7 @@ static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
+
+ sensor = t->tx.buf;
+ sensor->id = cpu_to_le32(sensor_id);
++ s = si->sensors + sensor_id;
+ if (s->async) {
+ sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
+ ret = ph->xops->do_xfer_with_response(ph, t);
+@@ -895,9 +905,13 @@ scmi_sensor_reading_get_timestamped(const struct scmi_protocol_handle *ph,
+ int ret;
+ struct scmi_xfer *t;
+ struct scmi_msg_sensor_reading_get *sensor;
++ struct scmi_sensor_info *s;
+ struct sensors_info *si = ph->get_priv(ph);
+- struct scmi_sensor_info *s = si->sensors + sensor_id;
+
++ if (sensor_id >= si->num_sensors)
++ return -EINVAL;
++
++ s = si->sensors + sensor_id;
+ if (!count || !readings ||
+ (!s->num_axis && count > 1) || (s->num_axis && count > s->num_axis))
+ return -EINVAL;
+@@ -948,6 +962,9 @@ scmi_sensor_info_get(const struct scmi_protocol_handle *ph, u32 sensor_id)
+ {
+ struct sensors_info *si = ph->get_priv(ph);
+
++ if (sensor_id >= si->num_sensors)
++ return NULL;
++
+ return si->sensors + sensor_id;
+ }
+
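
Every sensors.c hunk above adds the same guard before the pointer arithmetic. A standalone sketch of the pattern, using stand-in types rather than the real SCMI structures:

    #include <stddef.h>

    struct sensor_like { int num_axis; };

    struct sensors_info_like {
        size_t num_sensors;
        struct sensor_like *sensors;
    };

    /* Validate the caller-supplied id against the discovered count
     * before computing "base + id", so a buggy or hostile id can never
     * index past the array; mirrors the NULL / -EINVAL returns above. */
    static struct sensor_like *info_get(struct sensors_info_like *si,
                                        unsigned int id)
    {
        if (id >= si->num_sensors)
            return NULL;

        return si->sensors + id;
    }

    int main(void)
    {
        struct sensor_like s[2] = { { 1 }, { 3 } };
        struct sensors_info_like si = { 2, s };

        return (info_get(&si, 5) == NULL && info_get(&si, 1) == &s[1]) ? 0 : 1;
    }
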
+diff --git a/drivers/gpio/gpio-ftgpio010.c b/drivers/gpio/gpio-ftgpio010.c
+index f422c3e129a0c..f77a965f5780d 100644
+--- a/drivers/gpio/gpio-ftgpio010.c
++++ b/drivers/gpio/gpio-ftgpio010.c
+@@ -41,14 +41,12 @@
+ * struct ftgpio_gpio - Gemini GPIO state container
+ * @dev: containing device for this instance
+ * @gc: gpiochip for this instance
+- * @irq: irqchip for this instance
+ * @base: remapped I/O-memory base
+ * @clk: silicon clock
+ */
+ struct ftgpio_gpio {
+ struct device *dev;
+ struct gpio_chip gc;
+- struct irq_chip irq;
+ void __iomem *base;
+ struct clk *clk;
+ };
+@@ -70,6 +68,7 @@ static void ftgpio_gpio_mask_irq(struct irq_data *d)
+ val = readl(g->base + GPIO_INT_EN);
+ val &= ~BIT(irqd_to_hwirq(d));
+ writel(val, g->base + GPIO_INT_EN);
++ gpiochip_disable_irq(gc, irqd_to_hwirq(d));
+ }
+
+ static void ftgpio_gpio_unmask_irq(struct irq_data *d)
+@@ -78,6 +77,7 @@ static void ftgpio_gpio_unmask_irq(struct irq_data *d)
+ struct ftgpio_gpio *g = gpiochip_get_data(gc);
+ u32 val;
+
++ gpiochip_enable_irq(gc, irqd_to_hwirq(d));
+ val = readl(g->base + GPIO_INT_EN);
+ val |= BIT(irqd_to_hwirq(d));
+ writel(val, g->base + GPIO_INT_EN);
+@@ -221,6 +221,16 @@ static int ftgpio_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
+ return 0;
+ }
+
++static const struct irq_chip ftgpio_irq_chip = {
++ .name = "FTGPIO010",
++ .irq_ack = ftgpio_gpio_ack_irq,
++ .irq_mask = ftgpio_gpio_mask_irq,
++ .irq_unmask = ftgpio_gpio_unmask_irq,
++ .irq_set_type = ftgpio_gpio_set_irq_type,
++ .flags = IRQCHIP_IMMUTABLE,
++ GPIOCHIP_IRQ_RESOURCE_HELPERS,
++};
++
+ static int ftgpio_gpio_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -277,14 +287,8 @@ static int ftgpio_gpio_probe(struct platform_device *pdev)
+ if (!IS_ERR(g->clk))
+ g->gc.set_config = ftgpio_gpio_set_config;
+
+- g->irq.name = "FTGPIO010";
+- g->irq.irq_ack = ftgpio_gpio_ack_irq;
+- g->irq.irq_mask = ftgpio_gpio_mask_irq;
+- g->irq.irq_unmask = ftgpio_gpio_unmask_irq;
+- g->irq.irq_set_type = ftgpio_gpio_set_irq_type;
+-
+ girq = &g->gc.irq;
+- girq->chip = &g->irq;
++ gpio_irq_chip_set_chip(girq, &ftgpio_irq_chip);
+ girq->parent_handler = ftgpio_gpio_irq_handler;
+ girq->num_parents = 1;
+ girq->parents = devm_kcalloc(dev, 1, sizeof(*girq->parents),
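
The ftgpio010 change follows the immutable-irqchip conversion: the per-instance, writable struct irq_chip is replaced by a single const table shared by every instance, with mask/unmask additionally calling the gpiochip enable/disable helpers. A rough standalone illustration of the const-shared-ops half of the idea (the kernel-specific IRQ plumbing is left out, and the names are made up):

    #include <stdio.h>

    struct ops {
        const char *name;
        void (*mask)(int hwirq);
        void (*unmask)(int hwirq);
    };

    static void demo_mask(int hwirq)   { printf("mask %d\n", hwirq); }
    static void demo_unmask(int hwirq) { printf("unmask %d\n", hwirq); }

    /* One shared, immutable table: it can live in read-only memory,
     * so nothing can patch the callbacks at runtime. */
    static const struct ops demo_ops = {
        .name   = "demo",
        .mask   = demo_mask,
        .unmask = demo_unmask,
    };

    struct instance {
        const struct ops *ops;   /* pointer to the shared table ... */
        /* ... instead of a writable per-instance copy */
    };

    int main(void)
    {
        struct instance a = { .ops = &demo_ops }, b = { .ops = &demo_ops };

        a.ops->mask(3);
        b.ops->unmask(7);
        return 0;
    }
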
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index c2523ac26facd..9c8ab1dc60879 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -32,9 +32,16 @@ MODULE_PARM_DESC(ignore_wake,
+ "controller@pin combos on which to ignore the ACPI wake flag "
+ "ignore_wake=controller@pin[,controller@pin[,...]]");
+
++static char *ignore_interrupt;
++module_param(ignore_interrupt, charp, 0444);
++MODULE_PARM_DESC(ignore_interrupt,
++ "controller@pin combos on which to ignore interrupt "
++ "ignore_interrupt=controller@pin[,controller@pin[,...]]");
++
+ struct acpi_gpiolib_dmi_quirk {
+ bool no_edge_events_on_boot;
+ char *ignore_wake;
++ char *ignore_interrupt;
+ };
+
+ /**
+@@ -317,14 +324,15 @@ static struct gpio_desc *acpi_request_own_gpiod(struct gpio_chip *chip,
+ return desc;
+ }
+
+-static bool acpi_gpio_in_ignore_list(const char *controller_in, unsigned int pin_in)
++static bool acpi_gpio_in_ignore_list(const char *ignore_list, const char *controller_in,
++ unsigned int pin_in)
+ {
+ const char *controller, *pin_str;
+ unsigned int pin;
+ char *endp;
+ int len;
+
+- controller = ignore_wake;
++ controller = ignore_list;
+ while (controller) {
+ pin_str = strchr(controller, '@');
+ if (!pin_str)
+@@ -348,7 +356,7 @@ static bool acpi_gpio_in_ignore_list(const char *controller_in, unsigned int pin
+
+ return false;
+ err:
+- pr_err_once("Error: Invalid value for gpiolib_acpi.ignore_wake: %s\n", ignore_wake);
++ pr_err_once("Error: Invalid value for gpiolib_acpi.ignore_...: %s\n", ignore_list);
+ return false;
+ }
+
+@@ -360,7 +368,7 @@ static bool acpi_gpio_irq_is_wake(struct device *parent,
+ if (agpio->wake_capable != ACPI_WAKE_CAPABLE)
+ return false;
+
+- if (acpi_gpio_in_ignore_list(dev_name(parent), pin)) {
++ if (acpi_gpio_in_ignore_list(ignore_wake, dev_name(parent), pin)) {
+ dev_info(parent, "Ignoring wakeup on pin %u\n", pin);
+ return false;
+ }
+@@ -427,6 +435,11 @@ static acpi_status acpi_gpiochip_alloc_event(struct acpi_resource *ares,
+ goto fail_unlock_irq;
+ }
+
++ if (acpi_gpio_in_ignore_list(ignore_interrupt, dev_name(chip->parent), pin)) {
++ dev_info(chip->parent, "Ignoring interrupt on pin %u\n", pin);
++ return AE_OK;
++ }
++
+ event = kzalloc(sizeof(*event), GFP_KERNEL);
+ if (!event)
+ goto fail_unlock_irq;
+@@ -1560,6 +1573,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ .ignore_wake = "INT33FF:01@0",
+ },
+ },
++ {
++ /*
++ * Interrupt storm caused from edge triggered floating pin
++ * Found in BIOS UX325UAZ.300
++ * https://bugzilla.kernel.org/show_bug.cgi?id=216208
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UAZ_UM325UAZ"),
++ },
++ .driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++ .ignore_interrupt = "AMDI0030:00@18",
++ },
++ },
+ {} /* Terminating entry */
+ };
+
+@@ -1582,6 +1609,9 @@ static int __init acpi_gpio_setup_params(void)
+ if (ignore_wake == NULL && quirk && quirk->ignore_wake)
+ ignore_wake = quirk->ignore_wake;
+
++ if (ignore_interrupt == NULL && quirk && quirk->ignore_interrupt)
++ ignore_interrupt = quirk->ignore_interrupt;
++
+ return 0;
+ }
+
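
Both module parameters are now parsed by one helper that walks a comma-separated list of controller@pin tokens. A simplified userspace version of that matching, assuming well-formed input and skipping the error reporting the kernel helper does:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static bool in_ignore_list(const char *list, const char *ctrl,
                               unsigned int pin)
    {
        const char *p = list;

        while (p && *p) {
            const char *at = strchr(p, '@');
            const char *comma = strchr(p, ',');
            char *end;

            if (!at)
                return false;
            /* a token matches when controller name and pin both match */
            if ((size_t)(at - p) == strlen(ctrl) &&
                !strncmp(p, ctrl, at - p) &&
                strtoul(at + 1, &end, 10) == pin)
                return true;
            p = comma ? comma + 1 : NULL;
        }
        return false;
    }

    int main(void)
    {
        printf("%d\n", in_ignore_list("AMDI0030:00@18", "AMDI0030:00", 18));
        return 0;
    }
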
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 69a70a0aaed93..6ab062c63da17 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -169,6 +169,9 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
+ for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) {
+ if (adev->ip_versions[SDMA0_HWIP][0] < IP_VERSION(6, 0, 0))
+ adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc;
++ /* zero sdma_hqd_mask for non-existent engine */
++ else if (adev->sdma.num_instances == 1)
++ adev->mes.sdma_hqd_mask[i] = i ? 0 : 0xfc;
+ else
+ adev->mes.sdma_hqd_mask[i] = 0xfc;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0424570c736fa..c781f92db9590 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -5629,7 +5629,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ plane_info->visible = true;
+ plane_info->stereo_format = PLANE_STEREO_FORMAT_NONE;
+
+- plane_info->layer_index = 0;
++ plane_info->layer_index = plane_state->normalized_zpos;
+
+ ret = fill_plane_color_attributes(plane_state, plane_info->format,
+ &plane_info->color_space);
+@@ -5697,7 +5697,7 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ dc_plane_state->global_alpha = plane_info.global_alpha;
+ dc_plane_state->global_alpha_value = plane_info.global_alpha_value;
+ dc_plane_state->dcc = plane_info.dcc;
+- dc_plane_state->layer_index = plane_info.layer_index; // Always returns 0
++ dc_plane_state->layer_index = plane_info.layer_index;
+ dc_plane_state->flip_int_enabled = true;
+
+ /*
+@@ -11147,6 +11147,14 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ }
+ }
+
++ /*
++ * DC consults the zpos (layer_index in DC terminology) to determine the
++ * hw plane on which to enable the hw cursor (see
++ * `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in
++ * atomic state, so call drm helper to normalize zpos.
++ */
++ drm_atomic_normalize_zpos(dev, state);
++
+ /* Remove existing planes if they are modified */
+ for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
+ ret = dm_update_plane_state(dc, state, plane,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+index f4381725b2107..c3d7712e9fd05 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+@@ -46,6 +46,9 @@
+ #define TO_CLK_MGR_DCN315(clk_mgr)\
+ container_of(clk_mgr, struct clk_mgr_dcn315, base)
+
++#define UNSUPPORTED_DCFCLK 10000000
++#define MIN_DPP_DISP_CLK 100000
++
+ static int dcn315_get_active_display_cnt_wa(
+ struct dc *dc,
+ struct dc_state *context)
+@@ -146,6 +149,9 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+ }
+ }
+
++ /* Lock pstate by requesting unsupported dcfclk if change is unsupported */
++ if (!new_clocks->p_state_change_support)
++ new_clocks->dcfclk_khz = UNSUPPORTED_DCFCLK;
+ if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
+ clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
+ dcn315_smu_set_hard_min_dcfclk(clk_mgr, clk_mgr_base->clks.dcfclk_khz);
+@@ -159,10 +165,10 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+
+ // workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow.
+ if (!IS_DIAG_DC(dc->ctx->dce_environment)) {
+- if (new_clocks->dppclk_khz < 100000)
+- new_clocks->dppclk_khz = 100000;
+- if (new_clocks->dispclk_khz < 100000)
+- new_clocks->dispclk_khz = 100000;
++ if (new_clocks->dppclk_khz < MIN_DPP_DISP_CLK)
++ new_clocks->dppclk_khz = MIN_DPP_DISP_CLK;
++ if (new_clocks->dispclk_khz < MIN_DPP_DISP_CLK)
++ new_clocks->dispclk_khz = MIN_DPP_DISP_CLK;
+ }
+
+ if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) {
+@@ -272,7 +278,7 @@ static struct wm_table ddr5_wm_table = {
+ {
+ .wm_inst = WM_A,
+ .wm_type = WM_TYPE_PSTATE_CHG,
+- .pstate_latency_us = 64.0,
++ .pstate_latency_us = 129.0,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
+ .valid = true,
+@@ -280,7 +286,7 @@ static struct wm_table ddr5_wm_table = {
+ {
+ .wm_inst = WM_B,
+ .wm_type = WM_TYPE_PSTATE_CHG,
+- .pstate_latency_us = 64.0,
++ .pstate_latency_us = 129.0,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
+ .valid = true,
+@@ -288,7 +294,7 @@ static struct wm_table ddr5_wm_table = {
+ {
+ .wm_inst = WM_C,
+ .wm_type = WM_TYPE_PSTATE_CHG,
+- .pstate_latency_us = 64.0,
++ .pstate_latency_us = 129.0,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
+ .valid = true,
+@@ -296,7 +302,7 @@ static struct wm_table ddr5_wm_table = {
+ {
+ .wm_inst = WM_D,
+ .wm_type = WM_TYPE_PSTATE_CHG,
+- .pstate_latency_us = 64.0,
++ .pstate_latency_us = 129.0,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
+ .valid = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index a4fc9a6c850ed..b4203a812c4be 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2857,8 +2857,14 @@ bool perform_link_training_with_retries(
+ skip_video_pattern);
+
+ /* Transmit idle pattern once training successful. */
+- if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low)
++ if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low) {
+ dp_set_hw_test_pattern(link, &pipe_ctx->link_res, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
++ /* Update verified link settings to current one
++ * Because DPIA LT might fallback to lower link setting.
++ */
++ link->verified_link_cap.link_rate = link->cur_link_settings.link_rate;
++ link->verified_link_cap.lane_count = link->cur_link_settings.lane_count;
++ }
+ } else {
+ status = dc_link_dp_perform_link_training(link,
+ &pipe_ctx->link_res,
+@@ -5211,6 +5217,14 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
+ lttpr_dpcd_data[DP_PHY_REPEATER_128B132B_RATES -
+ DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
+
++ /* If this chip cap is set, at least one retimer must exist in the chain
++ * Override count to 1 if we receive a known bad count (0 or an invalid value) */
++ if (link->chip_caps & EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN &&
++ (dp_convert_to_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
++ ASSERT(0);
++ link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
++ }
++
+ /* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
+ is_lttpr_present = (link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
+ link->dpcd_caps.lttpr_caps.max_lane_count <= 4 &&
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index aee31c785aa9f..4f0ea50eaa839 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -2165,7 +2165,8 @@ static void dce110_setup_audio_dto(
+ continue;
+ if (pipe_ctx->stream->signal != SIGNAL_TYPE_HDMI_TYPE_A)
+ continue;
+- if (pipe_ctx->stream_res.audio != NULL) {
++ if (pipe_ctx->stream_res.audio != NULL &&
++ pipe_ctx->stream_res.audio->enabled == false) {
+ struct audio_output audio_output;
+
+ build_audio_output(context, pipe_ctx, &audio_output);
+@@ -2207,7 +2208,8 @@ static void dce110_setup_audio_dto(
+ if (!dc_is_dp_signal(pipe_ctx->stream->signal))
+ continue;
+
+- if (pipe_ctx->stream_res.audio != NULL) {
++ if (pipe_ctx->stream_res.audio != NULL &&
++ pipe_ctx->stream_res.audio->enabled == false) {
+ struct audio_output audio_output;
+
+ build_audio_output(context, pipe_ctx, &audio_output);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index ec6aa8d8b251a..213a02a769d45 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1520,6 +1520,7 @@ static void dcn20_update_dchubp_dpp(
+ /* Any updates are handled in dc interface, just need
+ * to apply existing for plane enable / opp change */
+ if (pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed
++ || pipe_ctx->update_flags.bits.plane_changed
+ || pipe_ctx->stream->update_flags.bits.gamut_remap
+ || pipe_ctx->stream->update_flags.bits.out_csc) {
+ /* dpp/cm gamut remap*/
+diff --git a/drivers/i2c/busses/i2c-davinci.c b/drivers/i2c/busses/i2c-davinci.c
+index 9e09db31a937e..5343c82c85944 100644
+--- a/drivers/i2c/busses/i2c-davinci.c
++++ b/drivers/i2c/busses/i2c-davinci.c
+@@ -823,7 +823,7 @@ static int davinci_i2c_probe(struct platform_device *pdev)
+ r = pm_runtime_resume_and_get(dev->dev);
+ if (r < 0) {
+ dev_err(dev->dev, "failed to runtime_get device: %d\n", r);
+- return r;
++ goto err_pm;
+ }
+
+ i2c_davinci_init(dev);
+@@ -882,6 +882,7 @@ static int davinci_i2c_probe(struct platform_device *pdev)
+ err_unuse_clocks:
+ pm_runtime_dont_use_autosuspend(dev->dev);
+ pm_runtime_put_sync(dev->dev);
++err_pm:
+ pm_runtime_disable(dev->dev);
+
+ return r;
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 5e4e2d2182d91..21bfe3448dec8 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -870,7 +870,8 @@ try_again:
+ * the CCS bit is set as well. We deliberately deviate from the spec in
+ * regards to this, which allows UHS-I to be supported for SDSC cards.
+ */
+- if (!mmc_host_is_spi(host) && rocr && (*rocr & 0x01000000)) {
++ if (!mmc_host_is_spi(host) && (ocr & SD_OCR_S18R) &&
++ rocr && (*rocr & SD_ROCR_S18A)) {
+ err = mmc_set_uhs_voltage(host, pocr);
+ if (err == -EAGAIN) {
+ retries--;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index 88595863d8bc6..8a0af371e7dc7 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -94,11 +94,8 @@ static int aq_ndev_close(struct net_device *ndev)
+ int err = 0;
+
+ err = aq_nic_stop(aq_nic);
+- if (err < 0)
+- goto err_exit;
+ aq_nic_deinit(aq_nic, true);
+
+-err_exit:
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
+index f538a749ebd4d..59470d99f5228 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_pci.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
+@@ -872,6 +872,7 @@ static void prestera_pci_remove(struct pci_dev *pdev)
+ static const struct pci_device_id prestera_pci_devices[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xC804) },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xC80C) },
++ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xCC1E) },
+ { }
+ };
+ MODULE_DEVICE_TABLE(pci, prestera_pci_devices);
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
+index cfe804bc8d205..148ea636ef979 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
+@@ -412,7 +412,7 @@ __mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
+ if (entry->hash != 0xffff) {
+ ppe->foe_table[entry->hash].ib1 &= ~MTK_FOE_IB1_STATE;
+ ppe->foe_table[entry->hash].ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE,
+- MTK_FOE_STATE_UNBIND);
++ MTK_FOE_STATE_INVALID);
+ dma_wmb();
+ }
+ entry->hash = 0xffff;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index c5626ff838058..640e3786c2444 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1833,8 +1833,8 @@ static void iwl_mvm_parse_ppe(struct iwl_mvm *mvm,
+ * If nss < MAX: we can set zeros in other streams
+ */
+ if (nss > MAX_HE_SUPP_NSS) {
+- IWL_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,
+- MAX_HE_SUPP_NSS);
++ IWL_DEBUG_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,
++ MAX_HE_SUPP_NSS);
+ nss = MAX_HE_SUPP_NSS;
+ }
+
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 07586514991f0..5bc5a0a6a8a72 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1546,7 +1546,7 @@ static void qcom_glink_rx_close(struct qcom_glink *glink, unsigned int rcid)
+ cancel_work_sync(&channel->intent_work);
+
+ if (channel->rpdev) {
+- strncpy(chinfo.name, channel->name, sizeof(chinfo.name));
++ strscpy_pad(chinfo.name, channel->name, sizeof(chinfo.name));
+ chinfo.src = RPMSG_ADDR_ANY;
+ chinfo.dst = RPMSG_ADDR_ANY;
+
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index f7af53891ef92..4ed7c54c85353 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1089,7 +1089,7 @@ static int qcom_smd_create_device(struct qcom_smd_channel *channel)
+
+ /* Assign public information to the rpmsg_device */
+ rpdev = &qsdev->rpdev;
+- strncpy(rpdev->id.name, channel->name, RPMSG_NAME_SIZE);
++ strscpy_pad(rpdev->id.name, channel->name, RPMSG_NAME_SIZE);
+ rpdev->src = RPMSG_ADDR_ANY;
+ rpdev->dst = RPMSG_ADDR_ANY;
+
+@@ -1323,7 +1323,7 @@ static void qcom_channel_state_worker(struct work_struct *work)
+
+ spin_unlock_irqrestore(&edge->channels_lock, flags);
+
+- strncpy(chinfo.name, channel->name, sizeof(chinfo.name));
++ strscpy_pad(chinfo.name, channel->name, sizeof(chinfo.name));
+ chinfo.src = RPMSG_ADDR_ANY;
+ chinfo.dst = RPMSG_ADDR_ANY;
+ rpmsg_unregister_device(&edge->dev, &chinfo);
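
The three rpmsg hunks swap strncpy() for strscpy_pad(); the difference matters because strncpy() leaves the destination unterminated whenever the source fills the buffer. A userspace model of the strscpy_pad() semantics (the kernel function additionally returns the copied length or -E2BIG, which this sketch drops):

    #include <stdio.h>
    #include <string.h>

    /* Always NUL-terminate and zero-fill the remainder, unlike
     * strncpy(), which pads but does not guarantee termination. */
    static void scpy_pad(char *dst, const char *src, size_t size)
    {
        size_t len = strnlen(src, size - 1);   /* leave room for the NUL */

        memcpy(dst, src, len);
        memset(dst + len, 0, size - len);      /* NUL plus zero padding */
    }

    int main(void)
    {
        char buf[8];

        scpy_pad(buf, "0123456789", sizeof(buf));
        printf("%s\n", buf);                   /* "0123456", terminated */
        return 0;
    }
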
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 3d6b137314f3f..bbc4d5890ae6a 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3686,11 +3686,6 @@ err2:
+ err1:
+ scsi_host_put(lport->host);
+ err0:
+- if (qedf) {
+- QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
+-
+- clear_bit(QEDF_PROBING, &qedf->flags);
+- }
+ return rc;
+ }
+
+diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
+index f48a23adbc35d..094e812e9e692 100644
+--- a/drivers/usb/mon/mon_bin.c
++++ b/drivers/usb/mon/mon_bin.c
+@@ -1268,6 +1268,11 @@ static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ /* don't do anything here: "fault" will set up page table entries */
+ vma->vm_ops = &mon_bin_vm_ops;
++
++ if (vma->vm_flags & VM_WRITE)
++ return -EPERM;
++
++ vma->vm_flags &= ~VM_MAYWRITE;
+ vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_private_data = filp->private_data;
+ mon_bin_vma_open(vma);
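
The usbmon change rejects writable mappings with -EPERM and clears VM_MAYWRITE so the mapping cannot be made writable later via mprotect(). A userspace analogue of that now-closed upgrade path, using a shared mapping of a read-only file, which the kernel likewise refuses to upgrade:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd;
        void *p;

        (void)argc;
        fd = open(argv[0], O_RDONLY);   /* the program binary: a handy read-only file */
        if (fd < 0)
            return 1;
        p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Fails with EACCES: the mapping was created without the
         * possibility of ever becoming writable, which is what clearing
         * VM_MAYWRITE enforces for the usbmon buffer. */
        if (mprotect(p, 4096, PROT_READ | PROT_WRITE) != 0)
            perror("mprotect");

        munmap(p, 4096);
        close(fd);
        return 0;
    }
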
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 52d59be920342..787e63fd7f99b 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1319,8 +1319,7 @@ static u32 get_ftdi_divisor(struct tty_struct *tty,
+ case 38400: div_value = ftdi_sio_b38400; break;
+ case 57600: div_value = ftdi_sio_b57600; break;
+ case 115200: div_value = ftdi_sio_b115200; break;
+- } /* baud */
+- if (div_value == 0) {
++ default:
+ dev_dbg(dev, "%s - Baudrate (%d) requested is not supported\n",
+ __func__, baud);
+ div_value = ftdi_sio_b9600;
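
The ftdi_sio hunk replaces an after-the-switch sentinel test with a default: label. The old "div_value == 0" check misfired because 300 bps legitimately maps to divisor code 0, the same value used as the unsupported sentinel, so 300 bps requests were silently bumped to 9600. A standalone version of the fixed structure, with made-up divisor codes:

    #include <stdio.h>

    enum { div_b300 = 0, div_b9600 = 4 };   /* illustrative codes only */

    static int divisor_for(int baud)
    {
        int div;

        switch (baud) {
        case 300:  div = div_b300;  break;  /* a real rate whose code is 0 */
        case 9600: div = div_b9600; break;
        default:                            /* only genuinely unsupported rates */
            printf("baud %d not supported, falling back to 9600\n", baud);
            div = div_b9600;
            break;
        }
        return div;
    }

    int main(void)
    {
        return divisor_for(300) == div_b300 ? 0 : 1;   /* no bogus fallback */
    }
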
+diff --git a/fs/coredump.c b/fs/coredump.c
+index ebc43f960b645..f1355e52614a6 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -832,6 +832,38 @@ static int __dump_skip(struct coredump_params *cprm, size_t nr)
+ }
+ }
+
++static int dump_emit_page(struct coredump_params *cprm, struct page *page)
++{
++ struct bio_vec bvec = {
++ .bv_page = page,
++ .bv_offset = 0,
++ .bv_len = PAGE_SIZE,
++ };
++ struct iov_iter iter;
++ struct file *file = cprm->file;
++ loff_t pos = file->f_pos;
++ ssize_t n;
++
++ if (cprm->to_skip) {
++ if (!__dump_skip(cprm, cprm->to_skip))
++ return 0;
++ cprm->to_skip = 0;
++ }
++ if (cprm->written + PAGE_SIZE > cprm->limit)
++ return 0;
++ if (dump_interrupted())
++ return 0;
++ iov_iter_bvec(&iter, WRITE, &bvec, 1, PAGE_SIZE);
++ n = __kernel_write_iter(cprm->file, &iter, &pos);
++ if (n != PAGE_SIZE)
++ return 0;
++ file->f_pos = pos;
++ cprm->written += PAGE_SIZE;
++ cprm->pos += PAGE_SIZE;
++
++ return 1;
++}
++
+ int dump_emit(struct coredump_params *cprm, const void *addr, int nr)
+ {
+ if (cprm->to_skip) {
+@@ -863,7 +895,6 @@ int dump_user_range(struct coredump_params *cprm, unsigned long start,
+
+ for (addr = start; addr < start + len; addr += PAGE_SIZE) {
+ struct page *page;
+- int stop;
+
+ /*
+ * To avoid having to allocate page tables for virtual address
+@@ -874,10 +905,7 @@ int dump_user_range(struct coredump_params *cprm, unsigned long start,
+ */
+ page = get_dump_page(addr);
+ if (page) {
+- void *kaddr = kmap_local_page(page);
+-
+- stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
+- kunmap_local(kaddr);
++ int stop = !dump_emit_page(cprm, page);
+ put_page(page);
+ if (stop)
+ return 0;
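
dump_emit_page() hands the page to the write path as a bio_vec-backed iov_iter instead of copying it through a temporary kernel mapping first. A loose userspace analogue, with writev(2) standing in for the iov_iter_bvec() plus __kernel_write_iter() pair:

    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        char page[16];                      /* stand-in for a dumped page */
        struct iovec iov = {
            .iov_base = page,
            .iov_len  = sizeof(page),
        };
        ssize_t n;

        memset(page, 'A', sizeof(page));
        /* describe the buffer to the write path rather than copying it */
        n = writev(STDOUT_FILENO, &iov, 1);

        return n == (ssize_t)sizeof(page) ? 0 : 1;   /* short write is a failure */
    }
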
+diff --git a/fs/inode.c b/fs/inode.c
+index bd4da9c5207ea..08e0857b429e3 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -192,8 +192,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
+ inode->i_wb_frn_history = 0;
+ #endif
+
+- if (security_inode_alloc(inode))
+- goto out;
+ spin_lock_init(&inode->i_lock);
+ lockdep_set_class(&inode->i_lock, &sb->s_type->i_lock_key);
+
+@@ -228,11 +226,12 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
+ inode->i_fsnotify_mask = 0;
+ #endif
+ inode->i_flctx = NULL;
++
++ if (unlikely(security_inode_alloc(inode)))
++ return -ENOMEM;
+ this_cpu_inc(nr_inodes);
+
+ return 0;
+-out:
+- return -ENOMEM;
+ }
+ EXPORT_SYMBOL(inode_init_always);
+
+diff --git a/fs/internal.h b/fs/internal.h
+index 87e96b9024ce1..3e206d3e317c4 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -16,6 +16,7 @@ struct shrink_control;
+ struct fs_context;
+ struct user_namespace;
+ struct pipe_inode_info;
++struct iov_iter;
+
+ /*
+ * block/bdev.c
+@@ -221,3 +222,5 @@ ssize_t do_getxattr(struct user_namespace *mnt_userns,
+ int setxattr_copy(const char __user *name, struct xattr_ctx *ctx);
+ int do_setxattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ struct xattr_ctx *ctx);
++
++ssize_t __kernel_write_iter(struct file *file, struct iov_iter *from, loff_t *pos);
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 397da0236607e..a0a3d35e2c0fd 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -509,14 +509,9 @@ static ssize_t new_sync_write(struct file *filp, const char __user *buf, size_t
+ }
+
+ /* caller is responsible for file_start_write/file_end_write */
+-ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t *pos)
++ssize_t __kernel_write_iter(struct file *file, struct iov_iter *from, loff_t *pos)
+ {
+- struct kvec iov = {
+- .iov_base = (void *)buf,
+- .iov_len = min_t(size_t, count, MAX_RW_COUNT),
+- };
+ struct kiocb kiocb;
+- struct iov_iter iter;
+ ssize_t ret;
+
+ if (WARN_ON_ONCE(!(file->f_mode & FMODE_WRITE)))
+@@ -532,8 +527,7 @@ ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t
+
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = pos ? *pos : 0;
+- iov_iter_kvec(&iter, WRITE, &iov, 1, iov.iov_len);
+- ret = file->f_op->write_iter(&kiocb, &iter);
++ ret = file->f_op->write_iter(&kiocb, from);
+ if (ret > 0) {
+ if (pos)
+ *pos = kiocb.ki_pos;
+@@ -543,6 +537,18 @@ ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t
+ inc_syscw(current);
+ return ret;
+ }
++
++/* caller is responsible for file_start_write/file_end_write */
++ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t *pos)
++{
++ struct kvec iov = {
++ .iov_base = (void *)buf,
++ .iov_len = min_t(size_t, count, MAX_RW_COUNT),
++ };
++ struct iov_iter iter;
++ iov_iter_kvec(&iter, WRITE, &iov, 1, iov.iov_len);
++ return __kernel_write_iter(file, &iter, pos);
++}
+ /*
+ * This "EXPORT_SYMBOL_GPL()" is more of a "EXPORT_SYMBOL_DONTUSE()",
+ * but autofs is one of the few internal kernel users that actually
+diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
+index 704111f639937..6dd50ac82d108 100644
+--- a/include/linux/scmi_protocol.h
++++ b/include/linux/scmi_protocol.h
+@@ -78,7 +78,7 @@ struct scmi_protocol_handle;
+ struct scmi_clk_proto_ops {
+ int (*count_get)(const struct scmi_protocol_handle *ph);
+
+- const struct scmi_clock_info *(*info_get)
++ const struct scmi_clock_info __must_check *(*info_get)
+ (const struct scmi_protocol_handle *ph, u32 clk_id);
+ int (*rate_get)(const struct scmi_protocol_handle *ph, u32 clk_id,
+ u64 *rate);
+@@ -460,7 +460,7 @@ enum scmi_sensor_class {
+ */
+ struct scmi_sensor_proto_ops {
+ int (*count_get)(const struct scmi_protocol_handle *ph);
+- const struct scmi_sensor_info *(*info_get)
++ const struct scmi_sensor_info __must_check *(*info_get)
+ (const struct scmi_protocol_handle *ph, u32 sensor_id);
+ int (*trip_point_config)(const struct scmi_protocol_handle *ph,
+ u32 sensor_id, u8 trip_id, u64 trip_value);
+diff --git a/include/net/ieee802154_netdev.h b/include/net/ieee802154_netdev.h
+index d0d188c3294bd..a8994f307fc38 100644
+--- a/include/net/ieee802154_netdev.h
++++ b/include/net/ieee802154_netdev.h
+@@ -15,6 +15,22 @@
+ #ifndef IEEE802154_NETDEVICE_H
+ #define IEEE802154_NETDEVICE_H
+
++#define IEEE802154_REQUIRED_SIZE(struct_type, member) \
++ (offsetof(typeof(struct_type), member) + \
++ sizeof(((typeof(struct_type) *)(NULL))->member))
++
++#define IEEE802154_ADDR_OFFSET \
++ offsetof(typeof(struct sockaddr_ieee802154), addr)
++
++#define IEEE802154_MIN_NAMELEN (IEEE802154_ADDR_OFFSET + \
++ IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, addr_type))
++
++#define IEEE802154_NAMELEN_SHORT (IEEE802154_ADDR_OFFSET + \
++ IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, short_addr))
++
++#define IEEE802154_NAMELEN_LONG (IEEE802154_ADDR_OFFSET + \
++ IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, hwaddr))
++
+ #include <net/af_ieee802154.h>
+ #include <linux/netdevice.h>
+ #include <linux/skbuff.h>
+@@ -165,6 +181,27 @@ static inline void ieee802154_devaddr_to_raw(void *raw, __le64 addr)
+ memcpy(raw, &temp, IEEE802154_ADDR_LEN);
+ }
+
++static inline int
++ieee802154_sockaddr_check_size(struct sockaddr_ieee802154 *daddr, int len)
++{
++ struct ieee802154_addr_sa *sa;
++
++ sa = &daddr->addr;
++ if (len < IEEE802154_MIN_NAMELEN)
++ return -EINVAL;
++ switch (sa->addr_type) {
++ case IEEE802154_ADDR_SHORT:
++ if (len < IEEE802154_NAMELEN_SHORT)
++ return -EINVAL;
++ break;
++ case IEEE802154_ADDR_LONG:
++ if (len < IEEE802154_NAMELEN_LONG)
++ return -EINVAL;
++ break;
++ }
++ return 0;
++}
++
+ static inline void ieee802154_addr_from_sa(struct ieee802154_addr *a,
+ const struct ieee802154_addr_sa *sa)
+ {
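
ieee802154_sockaddr_check_size() makes the minimum accepted sockaddr length depend on the address variant, and each minimum is computed with offsetof() so struct padding is included. A standalone model with stand-in types and constants (the real uapi definitions differ):

    #include <errno.h>
    #include <stddef.h>

    enum { ADDR_SHORT = 0, ADDR_LONG = 1 };   /* stand-ins, not the uapi values */

    struct addr_sa {
        int addr_type;
        unsigned short pan_id;
        union {
            unsigned char hwaddr[8];
            unsigned short short_addr;
        };
    };

    #define REQUIRED_SIZE(type, member) \
        (offsetof(type, member) + sizeof(((type *)0)->member))

    static int check_size(const struct addr_sa *sa, size_t len)
    {
        if (len < REQUIRED_SIZE(struct addr_sa, addr_type))
            return -EINVAL;   /* too short to even read addr_type */
        if (sa->addr_type == ADDR_SHORT &&
            len < REQUIRED_SIZE(struct addr_sa, short_addr))
            return -EINVAL;
        if (sa->addr_type == ADDR_LONG &&
            len < REQUIRED_SIZE(struct addr_sa, hwaddr))
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        struct addr_sa sa = { .addr_type = ADDR_LONG };

        return check_size(&sa, sizeof(int)) == -EINVAL ? 0 : 1;
    }
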
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index 647722e847b41..f787c3f524b03 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -95,7 +95,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ struct xdp_umem *umem);
+ int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
+ u16 queue_id, u16 flags);
+-int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
++int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
+ struct net_device *dev, u16 queue_id);
+ int xp_alloc_tx_descs(struct xsk_buff_pool *pool, struct xdp_sock *xs);
+ void xp_destroy(struct xsk_buff_pool *pool);
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index bb1254f076672..9853db0ce487b 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1627,26 +1627,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
+ return &bpf_ringbuf_discard_proto;
+ case BPF_FUNC_ringbuf_query:
+ return &bpf_ringbuf_query_proto;
+- case BPF_FUNC_ringbuf_reserve_dynptr:
+- return &bpf_ringbuf_reserve_dynptr_proto;
+- case BPF_FUNC_ringbuf_submit_dynptr:
+- return &bpf_ringbuf_submit_dynptr_proto;
+- case BPF_FUNC_ringbuf_discard_dynptr:
+- return &bpf_ringbuf_discard_dynptr_proto;
+ case BPF_FUNC_for_each_map_elem:
+ return &bpf_for_each_map_elem_proto;
+ case BPF_FUNC_loop:
+ return &bpf_loop_proto;
+ case BPF_FUNC_strncmp:
+ return &bpf_strncmp_proto;
+- case BPF_FUNC_dynptr_from_mem:
+- return &bpf_dynptr_from_mem_proto;
+- case BPF_FUNC_dynptr_read:
+- return &bpf_dynptr_read_proto;
+- case BPF_FUNC_dynptr_write:
+- return &bpf_dynptr_write_proto;
+- case BPF_FUNC_dynptr_data:
+- return &bpf_dynptr_data_proto;
+ default:
+ break;
+ }
+@@ -1675,6 +1661,20 @@ bpf_base_func_proto(enum bpf_func_id func_id)
+ return &bpf_timer_cancel_proto;
+ case BPF_FUNC_kptr_xchg:
+ return &bpf_kptr_xchg_proto;
++ case BPF_FUNC_ringbuf_reserve_dynptr:
++ return &bpf_ringbuf_reserve_dynptr_proto;
++ case BPF_FUNC_ringbuf_submit_dynptr:
++ return &bpf_ringbuf_submit_dynptr_proto;
++ case BPF_FUNC_ringbuf_discard_dynptr:
++ return &bpf_ringbuf_discard_dynptr_proto;
++ case BPF_FUNC_dynptr_from_mem:
++ return &bpf_dynptr_from_mem_proto;
++ case BPF_FUNC_dynptr_read:
++ return &bpf_dynptr_read_proto;
++ case BPF_FUNC_dynptr_write:
++ return &bpf_dynptr_write_proto;
++ case BPF_FUNC_dynptr_data:
++ return &bpf_dynptr_data_proto;
+ default:
+ break;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index dd0fc2a86ce17..d334aeb234076 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -578,7 +578,7 @@ void bpf_map_free_kptrs(struct bpf_map *map, void *map_value)
+ if (off_desc->type == BPF_KPTR_UNREF) {
+ u64 *p = (u64 *)btf_id_ptr;
+
+- WRITE_ONCE(p, 0);
++ WRITE_ONCE(*p, 0);
+ continue;
+ }
+ old_ptr = xchg(btf_id_ptr, 0);
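
The bpf/syscall.c change is a one-character-class fix worth spelling out: WRITE_ONCE(p, 0) stored zero into the local pointer variable and left the map's kptr slot untouched, while WRITE_ONCE(*p, 0) clears the slot itself. A plain-C demonstration, with ordinary stores standing in for WRITE_ONCE(), which only adds volatile access semantics:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long slot = 0xdeadbeefULL;   /* stand-in for the kptr field */
        unsigned long long *p = &slot;

        /* buggy form: p = 0;  only the local pointer would change */
        *p = 0;   /* fixed form: the pointed-to slot is cleared */

        printf("slot = %llx\n", slot);   /* prints 0 */
        return 0;
    }
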
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 6a53bcc5cfbb1..48029a390c65a 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -596,6 +596,15 @@ static int hci_dev_do_reset(struct hci_dev *hdev)
+
+ /* Cancel these to avoid queueing non-chained pending work */
+ hci_dev_set_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
++ /* Wait for
++ *
++ * if (!hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
++ * queue_delayed_work(&hdev->{cmd,ncmd}_timer)
++ *
++ * inside RCU section to see the flag or complete scheduling.
++ */
++ synchronize_rcu();
++ /* Explicitly cancel works in case scheduled after setting the flag. */
+ cancel_delayed_work(&hdev->cmd_timer);
+ cancel_delayed_work(&hdev->ncmd_timer);
+
+@@ -3871,12 +3880,14 @@ static void hci_cmd_work(struct work_struct *work)
+ if (res < 0)
+ __hci_cmd_sync_cancel(hdev, -res);
+
++ rcu_read_lock();
+ if (test_bit(HCI_RESET, &hdev->flags) ||
+ hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
+ cancel_delayed_work(&hdev->cmd_timer);
+ else
+- schedule_delayed_work(&hdev->cmd_timer,
+- HCI_CMD_TIMEOUT);
++ queue_delayed_work(hdev->workqueue, &hdev->cmd_timer,
++ HCI_CMD_TIMEOUT);
++ rcu_read_unlock();
+ } else {
+ skb_queue_head(&hdev->cmd_q, skb);
+ queue_work(hdev->workqueue, &hdev->cmd_work);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2c320a8fe70d7..81e5bcdbbe944 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3763,16 +3763,18 @@ static inline void handle_cmd_cnt_and_timer(struct hci_dev *hdev, u8 ncmd)
+ {
+ cancel_delayed_work(&hdev->cmd_timer);
+
++ rcu_read_lock();
+ if (!test_bit(HCI_RESET, &hdev->flags)) {
+ if (ncmd) {
+ cancel_delayed_work(&hdev->ncmd_timer);
+ atomic_set(&hdev->cmd_cnt, 1);
+ } else {
+ if (!hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
+- schedule_delayed_work(&hdev->ncmd_timer,
+- HCI_NCMD_TIMEOUT);
++ queue_delayed_work(hdev->workqueue, &hdev->ncmd_timer,
++ HCI_NCMD_TIMEOUT);
+ }
+ }
++ rcu_read_unlock();
+ }
+
+ #define HCI_CC_VL(_op, _func, _min, _max) \
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index 718fb77bb372c..7889e1ef7fad6 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -200,8 +200,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *_uaddr, int len)
+ int err = 0;
+ struct net_device *dev = NULL;
+
+- if (len < sizeof(*uaddr))
+- return -EINVAL;
++ err = ieee802154_sockaddr_check_size(uaddr, len);
++ if (err < 0)
++ return err;
+
+ uaddr = (struct sockaddr_ieee802154 *)_uaddr;
+ if (uaddr->family != AF_IEEE802154)
+@@ -493,7 +494,8 @@ static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)
+
+ ro->bound = 0;
+
+- if (len < sizeof(*addr))
++ err = ieee802154_sockaddr_check_size(addr, len);
++ if (err < 0)
+ goto out;
+
+ if (addr->family != AF_IEEE802154)
+@@ -564,8 +566,9 @@ static int dgram_connect(struct sock *sk, struct sockaddr *uaddr,
+ struct dgram_sock *ro = dgram_sk(sk);
+ int err = 0;
+
+- if (len < sizeof(*addr))
+- return -EINVAL;
++ err = ieee802154_sockaddr_check_size(addr, len);
++ if (err < 0)
++ return err;
+
+ if (addr->family != AF_IEEE802154)
+ return -EINVAL;
+@@ -604,6 +607,7 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ struct ieee802154_mac_cb *cb;
+ struct dgram_sock *ro = dgram_sk(sk);
+ struct ieee802154_addr dst_addr;
++ DECLARE_SOCKADDR(struct sockaddr_ieee802154*, daddr, msg->msg_name);
+ int hlen, tlen;
+ int err;
+
+@@ -612,10 +616,20 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ return -EOPNOTSUPP;
+ }
+
+- if (!ro->connected && !msg->msg_name)
+- return -EDESTADDRREQ;
+- else if (ro->connected && msg->msg_name)
+- return -EISCONN;
++ if (msg->msg_name) {
++ if (ro->connected)
++ return -EISCONN;
++ if (msg->msg_namelen < IEEE802154_MIN_NAMELEN)
++ return -EINVAL;
++ err = ieee802154_sockaddr_check_size(daddr, msg->msg_namelen);
++ if (err < 0)
++ return err;
++ ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
++ } else {
++ if (!ro->connected)
++ return -EDESTADDRREQ;
++ dst_addr = ro->dst_addr;
++ }
+
+ if (!ro->bound)
+ dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
+@@ -651,16 +665,6 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ cb = mac_cb_init(skb);
+ cb->type = IEEE802154_FC_TYPE_DATA;
+ cb->ackreq = ro->want_ack;
+-
+- if (msg->msg_name) {
+- DECLARE_SOCKADDR(struct sockaddr_ieee802154*,
+- daddr, msg->msg_name);
+-
+- ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
+- } else {
+- dst_addr = ro->dst_addr;
+- }
+-
+ cb->secen = ro->secen;
+ cb->secen_override = ro->secen_override;
+ cb->seclevel = ro->seclevel;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 09002387987ea..7e311420aab9f 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -951,8 +951,8 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ goto out_unlock;
+ }
+
+- err = xp_assign_dev_shared(xs->pool, umem_xs->umem,
+- dev, qid);
++ err = xp_assign_dev_shared(xs->pool, umem_xs, dev,
++ qid);
+ if (err) {
+ xp_destroy(xs->pool);
+ xs->pool = NULL;
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index a71a8c6edf553..ed6c71826d31f 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -212,17 +212,18 @@ err_unreg_pool:
+ return err;
+ }
+
+-int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
++int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
+ struct net_device *dev, u16 queue_id)
+ {
+ u16 flags;
++ struct xdp_umem *umem = umem_xs->umem;
+
+ /* One fill and completion ring required for each queue id. */
+ if (!pool->fq || !pool->cq)
+ return -EINVAL;
+
+ flags = umem->zc ? XDP_ZEROCOPY : XDP_COPY;
+- if (pool->uses_need_wakeup)
++ if (umem_xs->pool->uses_need_wakeup)
+ flags |= XDP_USE_NEED_WAKEUP;
+
+ return xp_assign_dev(pool, dev, queue_id, flags);
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index f5f0d6f09053f..830b9f15fa3f3 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -53,6 +53,7 @@ KBUILD_CFLAGS += -Wno-format-zero-length
+ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
++KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict)
+ endif
+
+ endif
+diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
+index bd2aabb2c60f9..995bc42003e6c 100644
+--- a/security/Kconfig.hardening
++++ b/security/Kconfig.hardening
+@@ -22,11 +22,17 @@ menu "Memory initialization"
+ config CC_HAS_AUTO_VAR_INIT_PATTERN
+ def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
+
+-config CC_HAS_AUTO_VAR_INIT_ZERO
+- # GCC ignores the -enable flag, so we can test for the feature with
+- # a single invocation using the flag, but drop it as appropriate in
+- # the Makefile, depending on the presence of Clang.
++config CC_HAS_AUTO_VAR_INIT_ZERO_BARE
++ def_bool $(cc-option,-ftrivial-auto-var-init=zero)
++
++config CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
++ # Clang 16 and later warn about using the -enable flag, but it
++ # is required before then.
+ def_bool $(cc-option,-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang)
++ depends on !CC_HAS_AUTO_VAR_INIT_ZERO_BARE
++
++config CC_HAS_AUTO_VAR_INIT_ZERO
++ def_bool CC_HAS_AUTO_VAR_INIT_ZERO_BARE || CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
+
+ choice
+ prompt "Initialize kernel stack variables at function entry"
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c9d9aa6351ecf..c239d9dbbaefe 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1278,6 +1278,7 @@ static int hdmi_pcm_open(struct hda_pcm_stream *hinfo,
+ set_bit(pcm_idx, &spec->pcm_in_use);
+ per_pin = get_pin(spec, pin_idx);
+ per_pin->cvt_nid = per_cvt->cvt_nid;
++ per_pin->silent_stream = false;
+ hinfo->nid = per_cvt->cvt_nid;
+
+ /* flip stripe flag for the assigned stream if supported */
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-10-15 10:04 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-10-15 10:04 UTC (permalink / raw
To: gentoo-commits
commit: 2ed2a5f7a1e7bc5fea73aff0bb1a4c1bed86f26e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 15 10:04:01 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 15 10:04:01 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2ed2a5f7
Linux patch 5.19.16
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1015_linux-5.19.16.patch | 1346 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1350 insertions(+)
diff --git a/0000_README b/0000_README
index 5d6628ec..256c6e52 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-5.19.15.patch
From: http://www.kernel.org
Desc: Linux 5.19.15
+Patch: 1015_linux-5.19.16.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-5.19.16.patch b/1015_linux-5.19.16.patch
new file mode 100644
index 00000000..b091bea2
--- /dev/null
+++ b/1015_linux-5.19.16.patch
@@ -0,0 +1,1346 @@
+diff --git a/Makefile b/Makefile
+index af05237987ef3..a1d1978bbd039 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
+index 4d7aaab827023..3537b0500f4d0 100644
+--- a/arch/powerpc/include/asm/paca.h
++++ b/arch/powerpc/include/asm/paca.h
+@@ -263,7 +263,6 @@ struct paca_struct {
+ u64 l1d_flush_size;
+ #endif
+ #ifdef CONFIG_PPC_PSERIES
+- struct rtas_args *rtas_args_reentrant;
+ u8 *mce_data_buf; /* buffer to hold per cpu rtas errlog */
+ #endif /* CONFIG_PPC_PSERIES */
+
+diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
+index 00531af17ce05..56319aea646e6 100644
+--- a/arch/powerpc/include/asm/rtas.h
++++ b/arch/powerpc/include/asm/rtas.h
+@@ -240,7 +240,6 @@ extern struct rtas_t rtas;
+ extern int rtas_token(const char *service);
+ extern int rtas_service_present(const char *service);
+ extern int rtas_call(int token, int, int, int *, ...);
+-int rtas_call_reentrant(int token, int nargs, int nret, int *outputs, ...);
+ void rtas_call_unlocked(struct rtas_args *args, int token, int nargs,
+ int nret, ...);
+ extern void __noreturn rtas_restart(char *cmd);
+diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
+index ba593fd601245..dfd097b79160a 100644
+--- a/arch/powerpc/kernel/paca.c
++++ b/arch/powerpc/kernel/paca.c
+@@ -16,7 +16,6 @@
+ #include <asm/kexec.h>
+ #include <asm/svm.h>
+ #include <asm/ultravisor.h>
+-#include <asm/rtas.h>
+
+ #include "setup.h"
+
+@@ -170,30 +169,6 @@ static struct slb_shadow * __init new_slb_shadow(int cpu, unsigned long limit)
+ }
+ #endif /* CONFIG_PPC_64S_HASH_MMU */
+
+-#ifdef CONFIG_PPC_PSERIES
+-/**
+- * new_rtas_args() - Allocates rtas args
+- * @cpu: CPU number
+- * @limit: Memory limit for this allocation
+- *
+- * Allocates a struct rtas_args and return it's pointer,
+- * if not in Hypervisor mode
+- *
+- * Return: Pointer to allocated rtas_args
+- * NULL if CPU in Hypervisor Mode
+- */
+-static struct rtas_args * __init new_rtas_args(int cpu, unsigned long limit)
+-{
+- limit = min_t(unsigned long, limit, RTAS_INSTANTIATE_MAX);
+-
+- if (early_cpu_has_feature(CPU_FTR_HVMODE))
+- return NULL;
+-
+- return alloc_paca_data(sizeof(struct rtas_args), L1_CACHE_BYTES,
+- limit, cpu);
+-}
+-#endif /* CONFIG_PPC_PSERIES */
+-
+ /* The Paca is an array with one entry per processor. Each contains an
+ * lppaca, which contains the information shared between the
+ * hypervisor and Linux.
+@@ -232,10 +207,6 @@ void __init initialise_paca(struct paca_struct *new_paca, int cpu)
+ /* For now -- if we have threads this will be adjusted later */
+ new_paca->tcd_ptr = &new_paca->tcd;
+ #endif
+-
+-#ifdef CONFIG_PPC_PSERIES
+- new_paca->rtas_args_reentrant = NULL;
+-#endif
+ }
+
+ /* Put the paca pointer into r13 and SPRG_PACA */
+@@ -307,9 +278,6 @@ void __init allocate_paca(int cpu)
+ #endif
+ #ifdef CONFIG_PPC_64S_HASH_MMU
+ paca->slb_shadow_ptr = new_slb_shadow(cpu, limit);
+-#endif
+-#ifdef CONFIG_PPC_PSERIES
+- paca->rtas_args_reentrant = new_rtas_args(cpu, limit);
+ #endif
+ paca_struct_size += sizeof(struct paca_struct);
+ }
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 6931339722948..0b8a858aa8479 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -43,7 +43,6 @@
+ #include <asm/time.h>
+ #include <asm/mmu.h>
+ #include <asm/topology.h>
+-#include <asm/paca.h>
+
+ /* This is here deliberately so it's only used in this file */
+ void enter_rtas(unsigned long);
+@@ -932,59 +931,6 @@ void rtas_activate_firmware(void)
+ pr_err("ibm,activate-firmware failed (%i)\n", fwrc);
+ }
+
+-#ifdef CONFIG_PPC_PSERIES
+-/**
+- * rtas_call_reentrant() - Used for reentrant rtas calls
+- * @token: Token for desired reentrant RTAS call
+- * @nargs: Number of Input Parameters
+- * @nret: Number of Output Parameters
+- * @outputs: Array of outputs
+- * @...: Inputs for desired RTAS call
+- *
+- * According to LoPAR documentation, only "ibm,int-on", "ibm,int-off",
+- * "ibm,get-xive" and "ibm,set-xive" are currently reentrant.
+- * Reentrant calls need their own rtas_args buffer, so not using rtas.args, but
+- * PACA one instead.
+- *
+- * Return: -1 on error,
+- * First output value of RTAS call if (nret > 0),
+- * 0 otherwise,
+- */
+-int rtas_call_reentrant(int token, int nargs, int nret, int *outputs, ...)
+-{
+- va_list list;
+- struct rtas_args *args;
+- unsigned long flags;
+- int i, ret = 0;
+-
+- if (!rtas.entry || token == RTAS_UNKNOWN_SERVICE)
+- return -1;
+-
+- local_irq_save(flags);
+- preempt_disable();
+-
+- /* We use the per-cpu (PACA) rtas args buffer */
+- args = local_paca->rtas_args_reentrant;
+-
+- va_start(list, outputs);
+- va_rtas_call_unlocked(args, token, nargs, nret, list);
+- va_end(list);
+-
+- if (nret > 1 && outputs)
+- for (i = 0; i < nret - 1; ++i)
+- outputs[i] = be32_to_cpu(args->rets[i + 1]);
+-
+- if (nret > 0)
+- ret = be32_to_cpu(args->rets[0]);
+-
+- local_irq_restore(flags);
+- preempt_enable();
+-
+- return ret;
+-}
+-
+-#endif /* CONFIG_PPC_PSERIES */
+-
+ /**
+ * get_pseries_errorlog() - Find a specific pseries error log in an RTAS
+ * extended event log.
+diff --git a/arch/powerpc/sysdev/xics/ics-rtas.c b/arch/powerpc/sysdev/xics/ics-rtas.c
+index 9e7007f9aca5c..f8320f8e5bc79 100644
+--- a/arch/powerpc/sysdev/xics/ics-rtas.c
++++ b/arch/powerpc/sysdev/xics/ics-rtas.c
+@@ -36,8 +36,8 @@ static void ics_rtas_unmask_irq(struct irq_data *d)
+
+ server = xics_get_irq_server(d->irq, irq_data_get_affinity_mask(d), 0);
+
+- call_status = rtas_call_reentrant(ibm_set_xive, 3, 1, NULL, hw_irq,
+- server, DEFAULT_PRIORITY);
++ call_status = rtas_call(ibm_set_xive, 3, 1, NULL, hw_irq, server,
++ DEFAULT_PRIORITY);
+ if (call_status != 0) {
+ printk(KERN_ERR
+ "%s: ibm_set_xive irq %u server %x returned %d\n",
+@@ -46,7 +46,7 @@ static void ics_rtas_unmask_irq(struct irq_data *d)
+ }
+
+ /* Now unmask the interrupt (often a no-op) */
+- call_status = rtas_call_reentrant(ibm_int_on, 1, 1, NULL, hw_irq);
++ call_status = rtas_call(ibm_int_on, 1, 1, NULL, hw_irq);
+ if (call_status != 0) {
+ printk(KERN_ERR "%s: ibm_int_on irq=%u returned %d\n",
+ __func__, hw_irq, call_status);
+@@ -68,7 +68,7 @@ static void ics_rtas_mask_real_irq(unsigned int hw_irq)
+ if (hw_irq == XICS_IPI)
+ return;
+
+- call_status = rtas_call_reentrant(ibm_int_off, 1, 1, NULL, hw_irq);
++ call_status = rtas_call(ibm_int_off, 1, 1, NULL, hw_irq);
+ if (call_status != 0) {
+ printk(KERN_ERR "%s: ibm_int_off irq=%u returned %d\n",
+ __func__, hw_irq, call_status);
+@@ -76,8 +76,8 @@ static void ics_rtas_mask_real_irq(unsigned int hw_irq)
+ }
+
+ /* Have to set XIVE to 0xff to be able to remove a slot */
+- call_status = rtas_call_reentrant(ibm_set_xive, 3, 1, NULL, hw_irq,
+- xics_default_server, 0xff);
++ call_status = rtas_call(ibm_set_xive, 3, 1, NULL, hw_irq,
++ xics_default_server, 0xff);
+ if (call_status != 0) {
+ printk(KERN_ERR "%s: ibm_set_xive(0xff) irq=%u returned %d\n",
+ __func__, hw_irq, call_status);
+@@ -108,7 +108,7 @@ static int ics_rtas_set_affinity(struct irq_data *d,
+ if (hw_irq == XICS_IPI || hw_irq == XICS_IRQ_SPURIOUS)
+ return -1;
+
+- status = rtas_call_reentrant(ibm_get_xive, 1, 3, xics_status, hw_irq);
++ status = rtas_call(ibm_get_xive, 1, 3, xics_status, hw_irq);
+
+ if (status) {
+ printk(KERN_ERR "%s: ibm,get-xive irq=%u returns %d\n",
+@@ -126,8 +126,8 @@ static int ics_rtas_set_affinity(struct irq_data *d,
+ pr_debug("%s: irq %d [hw 0x%x] server: 0x%x\n", __func__, d->irq,
+ hw_irq, irq_server);
+
+- status = rtas_call_reentrant(ibm_set_xive, 3, 1, NULL,
+- hw_irq, irq_server, xics_status[1]);
++ status = rtas_call(ibm_set_xive, 3, 1, NULL,
++ hw_irq, irq_server, xics_status[1]);
+
+ if (status) {
+ printk(KERN_ERR "%s: ibm,set-xive irq=%u returns %d\n",
+@@ -158,7 +158,7 @@ static int ics_rtas_check(struct ics *ics, unsigned int hw_irq)
+ return -EINVAL;
+
+ /* Check if RTAS knows about this interrupt */
+- rc = rtas_call_reentrant(ibm_get_xive, 1, 3, status, hw_irq);
++ rc = rtas_call(ibm_get_xive, 1, 3, status, hw_irq);
+ if (rc)
+ return -ENXIO;
+
+@@ -174,7 +174,7 @@ static long ics_rtas_get_server(struct ics *ics, unsigned long vec)
+ {
+ int rc, status[2];
+
+- rc = rtas_call_reentrant(ibm_get_xive, 1, 3, status, vec);
++ rc = rtas_call(ibm_get_xive, 1, 3, status, vec);
+ if (rc)
+ return -1;
+ return status[0];
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 84ca98ed1dada..c2b37009b11ec 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -706,8 +706,8 @@ static const struct memdev {
+ #endif
+ [5] = { "zero", 0666, &zero_fops, FMODE_NOWAIT },
+ [7] = { "full", 0666, &full_fops, 0 },
+- [8] = { "random", 0666, &random_fops, 0 },
+- [9] = { "urandom", 0666, &urandom_fops, 0 },
++ [8] = { "random", 0666, &random_fops, FMODE_NOWAIT },
++ [9] = { "urandom", 0666, &urandom_fops, FMODE_NOWAIT },
+ #ifdef CONFIG_PRINTK
+ [11] = { "kmsg", 0644, &kmsg_fops, 0 },
+ #endif
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index a1af90bacc9f8..8dfb28d5ae3fa 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -903,20 +903,23 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
+ #endif
+
+ struct fast_pool {
+- struct work_struct mix;
+ unsigned long pool[4];
+ unsigned long last;
+ unsigned int count;
++ struct timer_list mix;
+ };
+
++static void mix_interrupt_randomness(struct timer_list *work);
++
+ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
+ #ifdef CONFIG_64BIT
+ #define FASTMIX_PERM SIPHASH_PERMUTATION
+- .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
++ .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
+ #else
+ #define FASTMIX_PERM HSIPHASH_PERMUTATION
+- .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
++ .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
+ #endif
++ .mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
+ };
+
+ /*
+@@ -958,7 +961,7 @@ int __cold random_online_cpu(unsigned int cpu)
+ }
+ #endif
+
+-static void mix_interrupt_randomness(struct work_struct *work)
++static void mix_interrupt_randomness(struct timer_list *work)
+ {
+ struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
+ /*
+@@ -989,7 +992,7 @@ static void mix_interrupt_randomness(struct work_struct *work)
+ local_irq_enable();
+
+ mix_pool_bytes(pool, sizeof(pool));
+- credit_init_bits(max(1u, (count & U16_MAX) / 64));
++ credit_init_bits(clamp_t(unsigned int, (count & U16_MAX) / 64, 1, sizeof(pool) * 8));
+
+ memzero_explicit(pool, sizeof(pool));
+ }
+@@ -1012,10 +1015,11 @@ void add_interrupt_randomness(int irq)
+ if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
+ return;
+
+- if (unlikely(!fast_pool->mix.func))
+- INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+ fast_pool->count |= MIX_INFLIGHT;
+- queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
++ if (!timer_pending(&fast_pool->mix)) {
++ fast_pool->mix.expires = jiffies;
++ add_timer_on(&fast_pool->mix, raw_smp_processor_id());
++ }
+ }
+ EXPORT_SYMBOL_GPL(add_interrupt_randomness);
+
+@@ -1330,6 +1334,11 @@ static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
+ {
+ int ret;
+
++ if (!crng_ready() &&
++ ((kiocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO)) ||
++ (kiocb->ki_filp->f_flags & O_NONBLOCK)))
++ return -EAGAIN;
++
+ ret = wait_for_random_bytes();
+ if (ret != 0)
+ return ret;
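
The random.c hunks above replace the fast_pool's work_struct with an
embedded per-CPU timer_list; the callback then recovers its enclosing
fast_pool via container_of(). A minimal userspace sketch of that recovery,
with simplified stand-in types rather than the kernel's:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct timer { long expires; };         /* stand-in for timer_list */

struct fast_pool {
        unsigned long pool[4];
        unsigned int count;
        struct timer mix;               /* embedded member, as in the patch */
};

/* the callback receives only a pointer to the embedded member */
static void mix_cb(struct timer *t)
{
        struct fast_pool *fp = container_of(t, struct fast_pool, mix);

        printf("count=%u\n", fp->count);
}

int main(void)
{
        struct fast_pool fp = { .count = 7 };

        mix_cb(&fp.mix);                /* prints count=7 */
        return 0;
}
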
+diff --git a/drivers/crypto/qat/qat_common/qat_asym_algs.c b/drivers/crypto/qat/qat_common/qat_asym_algs.c
+index 16d97db9ea15f..11c7f2b6e5975 100644
+--- a/drivers/crypto/qat/qat_common/qat_asym_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_asym_algs.c
+@@ -333,13 +333,13 @@ static int qat_dh_compute_value(struct kpp_request *req)
+ qat_req->out.dh.out_tab[1] = 0;
+ /* Mapping in.in.b or in.in_g2.xa is the same */
+ qat_req->phy_in = dma_map_single(dev, &qat_req->in.dh.in.b,
+- sizeof(qat_req->in.dh.in.b),
++ sizeof(struct qat_dh_input_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
+ goto unmap_dst;
+
+ qat_req->phy_out = dma_map_single(dev, &qat_req->out.dh.r,
+- sizeof(qat_req->out.dh.r),
++ sizeof(struct qat_dh_output_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
+ goto unmap_in_params;
+@@ -730,13 +730,13 @@ static int qat_rsa_enc(struct akcipher_request *req)
+ qat_req->in.rsa.in_tab[3] = 0;
+ qat_req->out.rsa.out_tab[1] = 0;
+ qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa.enc.m,
+- sizeof(qat_req->in.rsa.enc.m),
++ sizeof(struct qat_rsa_input_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
+ goto unmap_dst;
+
+ qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa.enc.c,
+- sizeof(qat_req->out.rsa.enc.c),
++ sizeof(struct qat_rsa_output_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
+ goto unmap_in_params;
+@@ -876,13 +876,13 @@ static int qat_rsa_dec(struct akcipher_request *req)
+ qat_req->in.rsa.in_tab[3] = 0;
+ qat_req->out.rsa.out_tab[1] = 0;
+ qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa.dec.c,
+- sizeof(qat_req->in.rsa.dec.c),
++ sizeof(struct qat_rsa_input_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
+ goto unmap_dst;
+
+ qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa.dec.m,
+- sizeof(qat_req->out.rsa.dec.m),
++ sizeof(struct qat_rsa_output_params),
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
+ goto unmap_in_params;
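
Each qat hunk above fixes a DMA mapping that was sized by a single member
of the parameter block (e.g. sizeof(qat_req->in.dh.in.b)) instead of the
containing struct, leaving the rest of the block outside the mapped length.
A tiny runnable sketch of the size difference, using a hypothetical struct:

#include <stdio.h>

/* hypothetical stand-in for struct qat_dh_input_params */
struct dh_input_params {
        unsigned long long b;           /* first member only */
        unsigned long long xa;
        unsigned long long p;
};

int main(void)
{
        struct dh_input_params in;

        printf("sizeof(in.b) = %zu\n", sizeof(in.b));   /* 8 bytes  */
        printf("sizeof(in)   = %zu\n", sizeof(in));     /* 24 bytes */
        return 0;
}
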
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 18190b529bca3..3da5fd5b5aaf4 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -113,6 +113,8 @@ static const struct xpad_device {
+ u8 xtype;
+ } xpad_device[] = {
+ { 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 },
++ { 0x03eb, 0xff01, "Wooting One (Legacy)", 0, XTYPE_XBOX360 },
++ { 0x03eb, 0xff02, "Wooting Two (Legacy)", 0, XTYPE_XBOX360 },
+ { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+@@ -244,6 +246,7 @@ static const struct xpad_device {
+ { 0x0f0d, 0x0063, "Hori Real Arcade Pro Hayabusa (USA) Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x0f0d, 0x0067, "HORIPAD ONE", 0, XTYPE_XBOXONE },
+ { 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
++ { 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX },
+ { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX },
+ { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX },
+@@ -260,6 +263,7 @@ static const struct xpad_device {
+ { 0x1430, 0x8888, "TX6500+ Dance Pad (first generation)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
+ { 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 },
+ { 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
++ { 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
+@@ -325,6 +329,7 @@ static const struct xpad_device {
+ { 0x24c6, 0x5502, "Hori Fighting Stick VX Alt", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x5503, "Hori Fighting Edge", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x5506, "Hori SOULCALIBUR V Stick", 0, XTYPE_XBOX360 },
++ { 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x550d, "Hori GEM Xbox controller", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x550e, "Hori Real Arcade Pro V Kai 360", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE },
+@@ -334,6 +339,14 @@ static const struct xpad_device {
+ { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
++ { 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller for Xbox", 0, XTYPE_XBOXONE },
++ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1220, "Wooting Two HE", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
+ { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+@@ -419,6 +432,7 @@ static const signed short xpad_abs_triggers[] = {
+ static const struct usb_device_id xpad_table[] = {
+ { USB_INTERFACE_INFO('X', 'B', 0) }, /* X-Box USB-IF not approved class */
+ XPAD_XBOX360_VENDOR(0x0079), /* GPD Win 2 Controller */
++ XPAD_XBOX360_VENDOR(0x03eb), /* Wooting Keyboards (Legacy) */
+ XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x045e), /* Microsoft X-Box 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft X-Box One controllers */
+@@ -429,6 +443,7 @@ static const struct usb_device_id xpad_table[] = {
+ { USB_DEVICE(0x0738, 0x4540) }, /* Mad Catz Beat Pad */
+ XPAD_XBOXONE_VENDOR(0x0738), /* Mad Catz FightStick TE 2 */
+ XPAD_XBOX360_VENDOR(0x07ff), /* Mad Catz GamePad */
++ XPAD_XBOX360_VENDOR(0x0c12), /* Zeroplus X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x0e6f), /* 0x0e6f X-Box 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x0e6f), /* 0x0e6f X-Box One controllers */
+ XPAD_XBOX360_VENDOR(0x0f0d), /* Hori Controllers */
+@@ -450,8 +465,12 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA Controllers */
+ XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */
+ XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
++ XPAD_XBOX360_VENDOR(0x2563), /* OneXPlayer Gamepad */
++ XPAD_XBOX360_VENDOR(0x260d), /* Dareu H101 */
++ XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller for Xbox */
+ XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One pad */
+ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */
++ XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */
+ XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */
+ { }
+ };
+@@ -1972,7 +1991,6 @@ static struct usb_driver xpad_driver = {
+ .disconnect = xpad_disconnect,
+ .suspend = xpad_suspend,
+ .resume = xpad_resume,
+- .reset_resume = xpad_resume,
+ .id_table = xpad_table,
+ };
+
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 8f786a225dcf8..11530b4ec3892 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -332,6 +332,22 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
+ return false;
+ }
+
++static int pci_endpoint_test_validate_xfer_params(struct device *dev,
++ struct pci_endpoint_test_xfer_param *param, size_t alignment)
++{
++ if (!param->size) {
++ dev_dbg(dev, "Data size is zero\n");
++ return -EINVAL;
++ }
++
++ if (param->size > SIZE_MAX - alignment) {
++ dev_dbg(dev, "Maximum transfer data size exceeded\n");
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
+ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+ unsigned long arg)
+ {
+@@ -363,9 +379,11 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+ return false;
+ }
+
++ err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++ if (err)
++ return false;
++
+ size = param.size;
+- if (size > SIZE_MAX - alignment)
+- goto err;
+
+ use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ if (use_dma)
+@@ -497,9 +515,11 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
+ return false;
+ }
+
++ err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++ if (err)
++ return false;
++
+ size = param.size;
+- if (size > SIZE_MAX - alignment)
+- goto err;
+
+ use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ if (use_dma)
+@@ -595,9 +615,11 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
+ return false;
+ }
+
++ err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
++ if (err)
++ return false;
++
+ size = param.size;
+- if (size > SIZE_MAX - alignment)
+- goto err;
+
+ use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
+ if (use_dma)
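
The new pci_endpoint_test_validate_xfer_params() helper above centralizes
two checks for every transfer ioctl: a zero-length request, and a size for
which size + alignment would wrap around SIZE_MAX. The overflow guard in
isolation, as a runnable sketch:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static int validate_xfer_size(size_t size, size_t alignment)
{
        if (!size)
                return -1;              /* zero-length transfer */
        if (size > SIZE_MAX - alignment)
                return -1;              /* size + alignment would wrap */
        return 0;
}

int main(void)
{
        printf("%d\n", validate_xfer_size(4096, 8));     /* 0: fine      */
        printf("%d\n", validate_xfer_size(0, 8));        /* -1: zero     */
        printf("%d\n", validate_xfer_size(SIZE_MAX, 8)); /* -1: overflow */
        return 0;
}
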
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index b511e705a46e4..6c81422fd226d 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -4251,6 +4251,8 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+
+ rx_status.band = channel->band;
+ rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]);
++ if (rx_status.rate_idx >= data2->hw->wiphy->bands[rx_status.band]->n_bitrates)
++ goto out;
+ rx_status.signal = nla_get_u32(info->attrs[HWSIM_ATTR_SIGNAL]);
+
+ hdr = (void *)skb->data;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3516678d37541..7f328c8786c42 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2825,6 +2825,8 @@ static void nvme_reset_work(struct work_struct *work)
+ goto out;
+ }
+
++ dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
++
+ /*
+ * If we're called to reset a live controller first shut it down before
+ * moving on.
+@@ -2858,7 +2860,6 @@ static void nvme_reset_work(struct work_struct *work)
+ * Don't limit the IOMMU merged segment size.
+ */
+ dma_set_max_seg_size(dev->dev, 0xffffffff);
+- dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
+
+ mutex_unlock(&dev->shutdown_lock);
+
+diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
+index e6420f2127ce1..8def242675ef3 100644
+--- a/drivers/scsi/stex.c
++++ b/drivers/scsi/stex.c
+@@ -665,16 +665,17 @@ static int stex_queuecommand_lck(struct scsi_cmnd *cmd)
+ return 0;
+ case PASSTHRU_CMD:
+ if (cmd->cmnd[1] == PASSTHRU_GET_DRVVER) {
+- struct st_drvver ver;
++ const struct st_drvver ver = {
++ .major = ST_VER_MAJOR,
++ .minor = ST_VER_MINOR,
++ .oem = ST_OEM,
++ .build = ST_BUILD_VER,
++ .signature[0] = PASSTHRU_SIGNATURE,
++ .console_id = host->max_id - 1,
++ .host_no = hba->host->host_no,
++ };
+ size_t cp_len = sizeof(ver);
+
+- ver.major = ST_VER_MAJOR;
+- ver.minor = ST_VER_MINOR;
+- ver.oem = ST_OEM;
+- ver.build = ST_BUILD_VER;
+- ver.signature[0] = PASSTHRU_SIGNATURE;
+- ver.console_id = host->max_id - 1;
+- ver.host_no = hba->host->host_no;
+ cp_len = scsi_sg_copy_from_buffer(cmd, &ver, cp_len);
+ if (sizeof(ver) == cp_len)
+ cmd->result = DID_OK << 16;
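
The stex hunk above builds st_drvver as a const, designated-initialized
object instead of assigning a few fields into an uninitialized stack
struct, so every member not named in the initializer is zeroed rather than
left holding stack garbage that scsi_sg_copy_from_buffer() would then copy
out. A standalone sketch of that property, with a simplified struct:

#include <stdio.h>

struct drvver {
        unsigned char major, minor, oem, build;
        unsigned char signature[8];
        unsigned char console_id, host_no;
};

int main(void)
{
        const struct drvver ver = {
                .major = 6,
                .minor = 2,
                .signature[0] = 0x4c,
                /* all unnamed members are implicitly zeroed */
        };

        printf("signature[1] = %u\n", ver.signature[1]);  /* 0, not junk */
        return 0;
}
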
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 08ca65ffe57b7..ebf3afad378ba 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -23,7 +23,6 @@
+ #include <linux/delay.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/of.h>
+-#include <linux/of_graph.h>
+ #include <linux/acpi.h>
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/reset.h>
+@@ -86,7 +85,7 @@ static int dwc3_get_dr_mode(struct dwc3 *dwc)
+ * mode. If the controller supports DRD but the dr_mode is not
+ * specified or set to OTG, then set the mode to peripheral.
+ */
+- if (mode == USB_DR_MODE_OTG && !dwc->edev &&
++ if (mode == USB_DR_MODE_OTG &&
+ (!IS_ENABLED(CONFIG_USB_ROLE_SWITCH) ||
+ !device_property_read_bool(dwc->dev, "usb-role-switch")) &&
+ !DWC3_VER_IS_PRIOR(DWC3, 330A))
+@@ -1634,46 +1633,6 @@ static void dwc3_check_params(struct dwc3 *dwc)
+ }
+ }
+
+-static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
+-{
+- struct device *dev = dwc->dev;
+- struct device_node *np_phy;
+- struct extcon_dev *edev = NULL;
+- const char *name;
+-
+- if (device_property_read_bool(dev, "extcon"))
+- return extcon_get_edev_by_phandle(dev, 0);
+-
+- /*
+- * Device tree platforms should get extcon via phandle.
+- * On ACPI platforms, we get the name from a device property.
+- * This device property is for kernel internal use only and
+- * is expected to be set by the glue code.
+- */
+- if (device_property_read_string(dev, "linux,extcon-name", &name) == 0)
+- return extcon_get_extcon_dev(name);
+-
+- /*
+- * Try to get an extcon device from the USB PHY controller's "port"
+- * node. Check if it has the "port" node first, to avoid printing the
+- * error message from underlying code, as it's a valid case: extcon
+- * device (and "port" node) may be missing in case of "usb-role-switch"
+- * or OTG mode.
+- */
+- np_phy = of_parse_phandle(dev->of_node, "phys", 0);
+- if (of_graph_is_present(np_phy)) {
+- struct device_node *np_conn;
+-
+- np_conn = of_graph_get_remote_node(np_phy, -1, -1);
+- if (np_conn)
+- edev = extcon_find_edev_by_node(np_conn);
+- of_node_put(np_conn);
+- }
+- of_node_put(np_phy);
+-
+- return edev;
+-}
+-
+ static int dwc3_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -1810,13 +1769,6 @@ static int dwc3_probe(struct platform_device *pdev)
+ goto err2;
+ }
+
+- dwc->edev = dwc3_get_extcon(dwc);
+- if (IS_ERR(dwc->edev)) {
+- ret = PTR_ERR(dwc->edev);
+- dev_err_probe(dwc->dev, ret, "failed to get extcon\n");
+- goto err3;
+- }
+-
+ ret = dwc3_get_dr_mode(dwc);
+ if (ret)
+ goto err3;
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index 039bf241769af..8cad9e7d33687 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -8,6 +8,7 @@
+ */
+
+ #include <linux/extcon.h>
++#include <linux/of_graph.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
+@@ -438,6 +439,51 @@ static int dwc3_drd_notifier(struct notifier_block *nb,
+ return NOTIFY_DONE;
+ }
+
++static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
++{
++ struct device *dev = dwc->dev;
++ struct device_node *np_phy;
++ struct extcon_dev *edev = NULL;
++ const char *name;
++
++ if (device_property_read_bool(dev, "extcon"))
++ return extcon_get_edev_by_phandle(dev, 0);
++
++ /*
++ * Device tree platforms should get extcon via phandle.
++ * On ACPI platforms, we get the name from a device property.
++ * This device property is for kernel internal use only and
++ * is expected to be set by the glue code.
++ */
++ if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
++ edev = extcon_get_extcon_dev(name);
++ if (!edev)
++ return ERR_PTR(-EPROBE_DEFER);
++
++ return edev;
++ }
++
++ /*
++ * Try to get an extcon device from the USB PHY controller's "port"
++ * node. Check if it has the "port" node first, to avoid printing the
++ * error message from underlying code, as it's a valid case: extcon
++ * device (and "port" node) may be missing in case of "usb-role-switch"
++ * or OTG mode.
++ */
++ np_phy = of_parse_phandle(dev->of_node, "phys", 0);
++ if (of_graph_is_present(np_phy)) {
++ struct device_node *np_conn;
++
++ np_conn = of_graph_get_remote_node(np_phy, -1, -1);
++ if (np_conn)
++ edev = extcon_find_edev_by_node(np_conn);
++ of_node_put(np_conn);
++ }
++ of_node_put(np_phy);
++
++ return edev;
++}
++
+ #if IS_ENABLED(CONFIG_USB_ROLE_SWITCH)
+ #define ROLE_SWITCH 1
+ static int dwc3_usb_role_switch_set(struct usb_role_switch *sw,
+@@ -542,6 +588,10 @@ int dwc3_drd_init(struct dwc3 *dwc)
+ device_property_read_bool(dwc->dev, "usb-role-switch"))
+ return dwc3_setup_role_switch(dwc);
+
++ dwc->edev = dwc3_get_extcon(dwc);
++ if (IS_ERR(dwc->edev))
++ return PTR_ERR(dwc->edev);
++
+ if (dwc->edev) {
+ dwc->edev_nb.notifier_call = dwc3_drd_notifier;
+ ret = extcon_register_notifier(dwc->edev, EXTCON_USB_HOST,
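
dwc3_get_extcon(), now called from dwc3_drd_init() above, distinguishes
three outcomes: a usable extcon device, NULL (none described, proceed
without one), and an ERR_PTR such as -EPROBE_DEFER (described but not yet
available, so the probe should be retried). A userspace sketch of the
ERR_PTR encoding, simplified from the kernel's err.h and assuming long is
pointer-sized:

#include <stdio.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
        /* errors live in the top MAX_ERRNO values of the address space */
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
        void *edev = ERR_PTR(-517);     /* -EPROBE_DEFER's value on Linux */

        if (IS_ERR(edev))
                printf("probe deferred: %ld\n", PTR_ERR(edev));
        else if (!edev)
                printf("no extcon described; carry on without one\n");
        return 0;
}
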
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 586ef5551e76e..b1e844bf31f81 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -177,6 +177,7 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ {DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */
+ {DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */
++ {DEVICE_SWI(0x413c, 0x81c2)}, /* Dell Wireless 5811e */
+ {DEVICE_SWI(0x413c, 0x81cb)}, /* Dell Wireless 5816e QDL */
+ {DEVICE_SWI(0x413c, 0x81cc)}, /* Dell Wireless 5816e */
+ {DEVICE_SWI(0x413c, 0x81cf)}, /* Dell Wireless 5819 */
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index da59e836a06eb..a43e40138a3b0 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -740,6 +740,12 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ if (dentry->d_name.len > NAME_MAX)
+ return -ENAMETOOLONG;
+
++ /*
++ * Do not truncate the file, since atomic_open is called before the
++ * permission check. The caller will do the truncation afterward.
++ */
++ flags &= ~O_TRUNC;
++
+ if (flags & O_CREAT) {
+ if (ceph_quota_is_max_files_exceeded(dir))
+ return -EDQUOT;
+@@ -807,9 +813,7 @@ retry:
+ }
+
+ set_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags);
+- err = ceph_mdsc_do_request(mdsc,
+- (flags & (O_CREAT|O_TRUNC)) ? dir : NULL,
+- req);
++ err = ceph_mdsc_do_request(mdsc, (flags & O_CREAT) ? dir : NULL, req);
+ if (err == -ENOENT) {
+ dentry = ceph_handle_snapdir(req, dentry);
+ if (IS_ERR(dentry)) {
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 67f63cfeade5c..232dd7b6cca14 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -328,6 +328,7 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ struct inode *inode;
+ struct nilfs_inode_info *ii;
+ struct nilfs_root *root;
++ struct buffer_head *bh;
+ int err = -ENOMEM;
+ ino_t ino;
+
+@@ -343,11 +344,25 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ ii->i_state = BIT(NILFS_I_NEW);
+ ii->i_root = root;
+
+- err = nilfs_ifile_create_inode(root->ifile, &ino, &ii->i_bh);
++ err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
+ if (unlikely(err))
+ goto failed_ifile_create_inode;
+ /* reference count of i_bh inherits from nilfs_mdt_read_block() */
+
++ if (unlikely(ino < NILFS_USER_INO)) {
++ nilfs_warn(sb,
++ "inode bitmap is inconsistent for reserved inodes");
++ do {
++ brelse(bh);
++ err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
++ if (unlikely(err))
++ goto failed_ifile_create_inode;
++ } while (ino < NILFS_USER_INO);
++
++ nilfs_info(sb, "repaired inode bitmap for reserved inodes");
++ }
++ ii->i_bh = bh;
++
+ atomic64_inc(&root->inodes_count);
+ inode_init_owner(&init_user_ns, inode, dir, mode);
+ inode->i_ino = ino;
+@@ -440,6 +455,8 @@ int nilfs_read_inode_common(struct inode *inode,
+ inode->i_atime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
+ inode->i_ctime.tv_nsec = le32_to_cpu(raw_inode->i_ctime_nsec);
+ inode->i_mtime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
++ if (nilfs_is_metadata_file_inode(inode) && !S_ISREG(inode->i_mode))
++ return -EIO; /* this inode is for metadata and corrupted */
+ if (inode->i_nlink == 0)
+ return -ESTALE; /* this inode is deleted */
+
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 0afe0832c7547..56d2c6fc61753 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -875,9 +875,11 @@ static int nilfs_segctor_create_checkpoint(struct nilfs_sc_info *sci)
+ nilfs_mdt_mark_dirty(nilfs->ns_cpfile);
+ nilfs_cpfile_put_checkpoint(
+ nilfs->ns_cpfile, nilfs->ns_cno, bh_cp);
+- } else
+- WARN_ON(err == -EINVAL || err == -ENOENT);
+-
++ } else if (err == -EINVAL || err == -ENOENT) {
++ nilfs_error(sci->sc_super,
++ "checkpoint creation failed due to metadata corruption.");
++ err = -EIO;
++ }
+ return err;
+ }
+
+@@ -891,7 +893,11 @@ static int nilfs_segctor_fill_in_checkpoint(struct nilfs_sc_info *sci)
+ err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, nilfs->ns_cno, 0,
+ &raw_cp, &bh_cp);
+ if (unlikely(err)) {
+- WARN_ON(err == -EINVAL || err == -ENOENT);
++ if (err == -EINVAL || err == -ENOENT) {
++ nilfs_error(sci->sc_super,
++ "checkpoint finalization failed due to metadata corruption.");
++ err = -EIO;
++ }
+ goto failed_ibh;
+ }
+ raw_cp->cp_snapshot_list.ssl_next = 0;
+@@ -2786,10 +2792,9 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ inode_attach_wb(nilfs->ns_bdev->bd_inode, NULL);
+
+ err = nilfs_segctor_start_thread(nilfs->ns_writer);
+- if (err) {
+- kfree(nilfs->ns_writer);
+- nilfs->ns_writer = NULL;
+- }
++ if (unlikely(err))
++ nilfs_detach_log_writer(sb);
++
+ return err;
+ }
+
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 1e80e70dfa927..5ce1aac64edd1 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -201,7 +201,7 @@ static inline unsigned int scsi_get_resid(struct scsi_cmnd *cmd)
+ for_each_sg(scsi_sglist(cmd), sg, nseg, __i)
+
+ static inline int scsi_sg_copy_from_buffer(struct scsi_cmnd *cmd,
+- void *buf, int buflen)
++ const void *buf, int buflen)
+ {
+ return sg_copy_from_buffer(scsi_sglist(cmd), scsi_sg_count(cmd),
+ buf, buflen);
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 48fbccbf2a545..44c8701af95c0 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1640,6 +1640,14 @@ struct ieee802_11_elems {
+
+ /* whether a parse error occurred while retrieving these elements */
+ bool parse_error;
++
++ /*
++ * scratch buffer that can be used for various element parsing related
++ * tasks, e.g., element de-fragmentation etc.
++ */
++ size_t scratch_len;
++ u8 *scratch_pos;
++ u8 scratch[];
+ };
+
+ static inline struct ieee80211_local *hw_to_local(
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index b938806a5184a..2d584a86dbf39 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1988,10 +1988,11 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+
+ if (mmie_keyidx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS ||
+ mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS +
+- NUM_DEFAULT_BEACON_KEYS) {
+- cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+- skb->data,
+- skb->len);
++ NUM_DEFAULT_BEACON_KEYS) {
++ if (rx->sdata->dev)
++ cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
++ skb->data,
++ skb->len);
+ return RX_DROP_MONITOR; /* unexpected BIP keyidx */
+ }
+
+@@ -2139,7 +2140,8 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ /* either the frame has been decrypted or will be dropped */
+ status->flag |= RX_FLAG_DECRYPTED;
+
+- if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE))
++ if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE &&
++ rx->sdata->dev))
+ cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+ skb->data, skb->len);
+
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 3f698e508dd71..8f36ab8fcfb24 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1439,6 +1439,8 @@ static size_t ieee802_11_find_bssid_profile(const u8 *start, size_t len,
+ for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, start, len) {
+ if (elem->datalen < 2)
+ continue;
++ if (elem->data[0] < 1 || elem->data[0] > 8)
++ continue;
+
+ for_each_element(sub, elem->data + 1, elem->datalen - 1) {
+ u8 new_bssid[ETH_ALEN];
+@@ -1501,25 +1503,27 @@ struct ieee802_11_elems *ieee802_11_parse_elems_crc(const u8 *start, size_t len,
+ const struct element *non_inherit = NULL;
+ u8 *nontransmitted_profile;
+ int nontransmitted_profile_len = 0;
++ size_t scratch_len = len;
+
+- elems = kzalloc(sizeof(*elems), GFP_ATOMIC);
++ elems = kzalloc(sizeof(*elems) + scratch_len, GFP_ATOMIC);
+ if (!elems)
+ return NULL;
+ elems->ie_start = start;
+ elems->total_len = len;
+-
+- nontransmitted_profile = kmalloc(len, GFP_ATOMIC);
+- if (nontransmitted_profile) {
+- nontransmitted_profile_len =
+- ieee802_11_find_bssid_profile(start, len, elems,
+- transmitter_bssid,
+- bss_bssid,
+- nontransmitted_profile);
+- non_inherit =
+- cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+- nontransmitted_profile,
+- nontransmitted_profile_len);
+- }
++ elems->scratch_len = scratch_len;
++ elems->scratch_pos = elems->scratch;
++
++ nontransmitted_profile = elems->scratch_pos;
++ nontransmitted_profile_len =
++ ieee802_11_find_bssid_profile(start, len, elems,
++ transmitter_bssid,
++ bss_bssid,
++ nontransmitted_profile);
++ elems->scratch_pos += nontransmitted_profile_len;
++ elems->scratch_len -= nontransmitted_profile_len;
++ non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++ nontransmitted_profile,
++ nontransmitted_profile_len);
+
+ crc = _ieee802_11_parse_elems_crc(start, len, action, elems, filter,
+ crc, non_inherit);
+@@ -1548,8 +1552,6 @@ struct ieee802_11_elems *ieee802_11_parse_elems_crc(const u8 *start, size_t len,
+ offsetofend(struct ieee80211_bssid_index, dtim_count))
+ elems->dtim_count = elems->bssid_index->dtim_count;
+
+- kfree(nontransmitted_profile);
+-
+ elems->crc = crc;
+
+ return elems;
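
The util.c change above folds the nontransmitted-profile buffer into the
ieee802_11_elems allocation as a trailing flexible array member, removing
the second kmalloc() and the silent path where parsing proceeded without a
profile when that allocation failed. A minimal sketch of carving
sub-buffers out of one flexible-array allocation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct elems {
        size_t scratch_len;
        unsigned char *scratch_pos;
        unsigned char scratch[];        /* flexible array member */
};

int main(void)
{
        size_t scratch_len = 64;
        struct elems *e = calloc(1, sizeof(*e) + scratch_len);

        if (!e)
                return 1;
        e->scratch_len = scratch_len;
        e->scratch_pos = e->scratch;

        memset(e->scratch_pos, 0xab, 16);       /* first sub-buffer */
        e->scratch_pos += 16;
        e->scratch_len -= 16;
        printf("scratch remaining: %zu\n", e->scratch_len);

        free(e);                        /* one allocation, one free */
        return 0;
}
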
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index c2fc2a7b25285..b6b5e496fa403 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -295,11 +295,12 @@ __must_hold(&net->mctp.keys_lock)
+ mctp_dev_release_key(key->dev, key);
+ spin_unlock_irqrestore(&key->lock, flags);
+
+- hlist_del(&key->hlist);
+- hlist_del(&key->sklist);
+-
+- /* unref for the lists */
+- mctp_key_unref(key);
++ if (!hlist_unhashed(&key->hlist)) {
++ hlist_del_init(&key->hlist);
++ hlist_del_init(&key->sklist);
++ /* unref for the lists */
++ mctp_key_unref(key);
++ }
+
+ kfree_skb(skb);
+ }
+@@ -373,9 +374,17 @@ static int mctp_ioctl_alloctag(struct mctp_sock *msk, unsigned long arg)
+
+ ctl.tag = tag | MCTP_TAG_OWNER | MCTP_TAG_PREALLOC;
+ if (copy_to_user((void __user *)arg, &ctl, sizeof(ctl))) {
+- spin_lock_irqsave(&key->lock, flags);
+- __mctp_key_remove(key, net, flags, MCTP_TRACE_KEY_DROPPED);
++ unsigned long fl2;
++ /* Unwind our key allocation: the keys list lock needs to be
++ * taken before the individual key locks, and we need a valid
++ * flags value (fl2) to pass to __mctp_key_remove, hence the
++ * second spin_lock_irqsave() rather than a plain spin_lock().
++ */
++ spin_lock_irqsave(&net->mctp.keys_lock, flags);
++ spin_lock_irqsave(&key->lock, fl2);
++ __mctp_key_remove(key, net, fl2, MCTP_TRACE_KEY_DROPPED);
+ mctp_key_unref(key);
++ spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
+ return -EFAULT;
+ }
+
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 3b24b8d18b5b5..2155f15a074cd 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -228,12 +228,12 @@ __releases(&key->lock)
+
+ if (!key->manual_alloc) {
+ spin_lock_irqsave(&net->mctp.keys_lock, flags);
+- hlist_del(&key->hlist);
+- hlist_del(&key->sklist);
++ if (!hlist_unhashed(&key->hlist)) {
++ hlist_del_init(&key->hlist);
++ hlist_del_init(&key->sklist);
++ mctp_key_unref(key);
++ }
+ spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
+-
+- /* unref for the lists */
+- mctp_key_unref(key);
+ }
+
+ /* and one for the local reference */
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 0134e5d5c81a4..39fb9cc25cdca 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -143,18 +143,12 @@ static inline void bss_ref_get(struct cfg80211_registered_device *rdev,
+ lockdep_assert_held(&rdev->bss_lock);
+
+ bss->refcount++;
+- if (bss->pub.hidden_beacon_bss) {
+- bss = container_of(bss->pub.hidden_beacon_bss,
+- struct cfg80211_internal_bss,
+- pub);
+- bss->refcount++;
+- }
+- if (bss->pub.transmitted_bss) {
+- bss = container_of(bss->pub.transmitted_bss,
+- struct cfg80211_internal_bss,
+- pub);
+- bss->refcount++;
+- }
++
++ if (bss->pub.hidden_beacon_bss)
++ bss_from_pub(bss->pub.hidden_beacon_bss)->refcount++;
++
++ if (bss->pub.transmitted_bss)
++ bss_from_pub(bss->pub.transmitted_bss)->refcount++;
+ }
+
+ static inline void bss_ref_put(struct cfg80211_registered_device *rdev,
+@@ -304,7 +298,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+ tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+ tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
+
+- while (tmp_old + tmp_old[1] + 2 - ie <= ielen) {
++ while (tmp_old + 2 - ie <= ielen &&
++ tmp_old + tmp_old[1] + 2 - ie <= ielen) {
+ if (tmp_old[0] == 0) {
+ tmp_old++;
+ continue;
+@@ -364,7 +359,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+ * copied to new ie, skip ssid, capability, bssid-index ie
+ */
+ tmp_new = sub_copy;
+- while (tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
++ while (tmp_new + 2 - sub_copy <= subie_len &&
++ tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
+ if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
+ tmp_new[0] == WLAN_EID_SSID)) {
+ memcpy(pos, tmp_new, tmp_new[1] + 2);
+@@ -427,6 +423,15 @@ cfg80211_add_nontrans_list(struct cfg80211_bss *trans_bss,
+
+ rcu_read_unlock();
+
++ /*
++ * This is a bit weird - it's not on the list, but already on another
++ * one! The only way that could happen is if there's some BSSID/SSID
++ * shared by multiple APs in their multi-BSSID profiles, potentially
++ * with hidden SSID mixed in ... ignore it.
++ */
++ if (!list_empty(&nontrans_bss->nontrans_list))
++ return -EINVAL;
++
+ /* add to the list */
+ list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list);
+ return 0;
+@@ -1602,6 +1607,23 @@ struct cfg80211_non_tx_bss {
+ u8 bssid_index;
+ };
+
++static void cfg80211_update_hidden_bsses(struct cfg80211_internal_bss *known,
++ const struct cfg80211_bss_ies *new_ies,
++ const struct cfg80211_bss_ies *old_ies)
++{
++ struct cfg80211_internal_bss *bss;
++
++ /* Assign beacon IEs to all sub entries */
++ list_for_each_entry(bss, &known->hidden_list, hidden_list) {
++ const struct cfg80211_bss_ies *ies;
++
++ ies = rcu_access_pointer(bss->pub.beacon_ies);
++ WARN_ON(ies != old_ies);
++
++ rcu_assign_pointer(bss->pub.beacon_ies, new_ies);
++ }
++}
++
+ static bool
+ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ struct cfg80211_internal_bss *known,
+@@ -1625,7 +1647,6 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+ } else if (rcu_access_pointer(new->pub.beacon_ies)) {
+ const struct cfg80211_bss_ies *old;
+- struct cfg80211_internal_bss *bss;
+
+ if (known->pub.hidden_beacon_bss &&
+ !list_empty(&known->hidden_list)) {
+@@ -1653,16 +1674,7 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ if (old == rcu_access_pointer(known->pub.ies))
+ rcu_assign_pointer(known->pub.ies, new->pub.beacon_ies);
+
+- /* Assign beacon IEs to all sub entries */
+- list_for_each_entry(bss, &known->hidden_list, hidden_list) {
+- const struct cfg80211_bss_ies *ies;
+-
+- ies = rcu_access_pointer(bss->pub.beacon_ies);
+- WARN_ON(ies != old);
+-
+- rcu_assign_pointer(bss->pub.beacon_ies,
+- new->pub.beacon_ies);
+- }
++ cfg80211_update_hidden_bsses(known, new->pub.beacon_ies, old);
+
+ if (old)
+ kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+@@ -1739,6 +1751,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ new->refcount = 1;
+ INIT_LIST_HEAD(&new->hidden_list);
+ INIT_LIST_HEAD(&new->pub.nontrans_list);
++ /* we'll set this later if it was non-NULL */
++ new->pub.transmitted_bss = NULL;
+
+ if (rcu_access_pointer(tmp->pub.proberesp_ies)) {
+ hidden = rb_find_bss(rdev, tmp, BSS_CMP_HIDE_ZLEN);
+@@ -2021,10 +2035,15 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy,
+ spin_lock_bh(&rdev->bss_lock);
+ if (cfg80211_add_nontrans_list(non_tx_data->tx_bss,
+ &res->pub)) {
+- if (__cfg80211_unlink_bss(rdev, res))
++ if (__cfg80211_unlink_bss(rdev, res)) {
+ rdev->bss_generation++;
++ res = NULL;
++ }
+ }
+ spin_unlock_bh(&rdev->bss_lock);
++
++ if (!res)
++ return NULL;
+ }
+
+ trace_cfg80211_return_bss(&res->pub);
+@@ -2143,6 +2162,8 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
+ for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, ie, ielen) {
+ if (elem->datalen < 4)
+ continue;
++ if (elem->data[0] < 1 || (int)elem->data[0] > 8)
++ continue;
+ for_each_element(sub, elem->data + 1, elem->datalen - 1) {
+ u8 profile_len;
+
+@@ -2279,7 +2300,7 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+ size_t new_ie_len;
+ struct cfg80211_bss_ies *new_ies;
+ const struct cfg80211_bss_ies *old;
+- u8 cpy_len;
++ size_t cpy_len;
+
+ lockdep_assert_held(&wiphy_to_rdev(wiphy)->bss_lock);
+
+@@ -2346,6 +2367,8 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+ } else {
+ old = rcu_access_pointer(nontrans_bss->beacon_ies);
+ rcu_assign_pointer(nontrans_bss->beacon_ies, new_ies);
++ cfg80211_update_hidden_bsses(bss_from_pub(nontrans_bss),
++ new_ies, old);
+ rcu_assign_pointer(nontrans_bss->ies, new_ies);
+ if (old)
+ kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
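
Several scan.c hunks above harden information-element walking: the loop
must confirm the two-byte id/length header is in bounds before it reads
the length byte, and only then check that header plus payload fits. A
runnable sketch of that order of checks:

#include <stddef.h>
#include <stdio.h>

static void walk_ies(const unsigned char *ie, size_t ielen)
{
        size_t off = 0;

        /* header in bounds first, then header + payload */
        while (off + 2 <= ielen && off + 2 + ie[off + 1] <= ielen) {
                printf("id=%u len=%u\n", ie[off], ie[off + 1]);
                off += 2 + ie[off + 1];
        }
}

int main(void)
{
        const unsigned char buf[] = {
                0x00, 0x03, 'f', 'o', 'o',     /* SSID-style element */
                0xdd, 0x01, 0x42,              /* vendor element     */
        };

        walk_ies(buf, sizeof(buf));
        return 0;
}
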
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index 093894a640dca..b78753d27d8ea 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -31,7 +31,7 @@ static const struct dmi_system_id uefi_skip_cert[] = {
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") },
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") },
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") },
+- { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "Macmini8,1") },
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
+ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 6f30c374f896e..1631e1de84046 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2554,7 +2554,8 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
+ /* Poulsbo */
+ { PCI_DEVICE(0x8086, 0x811b),
+- .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
++ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE |
++ AZX_DCAPS_POSFIX_LPIB },
+ /* Oaktrail */
+ { PCI_DEVICE(0x8086, 0x080a),
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_BASE },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9614b63415a8e..2f335f0d8b4b5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6717,6 +6717,11 @@ static void cs35l41_fixup_spi_two(struct hda_codec *codec, const struct hda_fixu
+ cs35l41_generic_fixup(codec, action, "spi0", "CSC3551", 2);
+ }
+
++static void cs35l41_fixup_spi1_two(struct hda_codec *codec, const struct hda_fixup *fix, int action)
++{
++ cs35l41_generic_fixup(codec, action, "spi1", "CSC3551", 2);
++}
++
+ static void cs35l41_fixup_spi_four(struct hda_codec *codec, const struct hda_fixup *fix, int action)
+ {
+ cs35l41_generic_fixup(codec, action, "spi0", "CSC3551", 4);
+@@ -7102,6 +7107,8 @@ enum {
+ ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED,
+ ALC245_FIXUP_CS35L41_SPI_2,
+ ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED,
++ ALC245_FIXUP_CS35L41_SPI1_2,
++ ALC245_FIXUP_CS35L41_SPI1_2_HP_GPIO_LED,
+ ALC245_FIXUP_CS35L41_SPI_4,
+ ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED,
+ ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED,
+@@ -8948,6 +8955,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC285_FIXUP_HP_GPIO_LED,
+ },
++ [ALC245_FIXUP_CS35L41_SPI1_2] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cs35l41_fixup_spi1_two,
++ },
++ [ALC245_FIXUP_CS35L41_SPI1_2_HP_GPIO_LED] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cs35l41_fixup_spi1_two,
++ .chained = true,
++ .chain_id = ALC285_FIXUP_HP_GPIO_LED,
++ },
+ [ALC245_FIXUP_CS35L41_SPI_4] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cs35l41_fixup_spi_four,
+@@ -9306,6 +9323,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI1_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2022-10-24 11:04 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2022-10-24 11:04 UTC (permalink / raw
To: gentoo-commits
commit: 890a9a856fcbbd48ed12143fd8c4fbac094f8fa9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 24 11:04:43 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 24 11:04:43 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=890a9a85
Linux patch 5.19.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1016_linux-5.19.17.patch | 27445 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 27449 insertions(+)
diff --git a/0000_README b/0000_README
index 256c6e52..7b85fc87 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.19.16.patch
From: http://www.kernel.org
Desc: Linux 5.19.16
+Patch: 1016_linux-5.19.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-5.19.17.patch b/1016_linux-5.19.17.patch
new file mode 100644
index 00000000..af478d28
--- /dev/null
+++ b/1016_linux-5.19.17.patch
@@ -0,0 +1,27445 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index d4ccc68fdcf05..b19ff517e5d65 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -188,7 +188,7 @@ Description:
+ Raw capacitance measurement from channel Y. Units after
+ application of scale and offset are nanofarads.
+
+-What: /sys/.../iio:deviceX/in_capacitanceY-in_capacitanceZ_raw
++What: /sys/.../iio:deviceX/in_capacitanceY-capacitanceZ_raw
+ KernelVersion: 3.2
+ Contact: linux-iio@vger.kernel.org
+ Description:
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 1b38d0f70677e..5ef5d727ca343 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3765,6 +3765,10 @@
+
+ nox2apic [X86-64,APIC] Do not enable x2APIC mode.
+
++ NOTE: this parameter will be ignored on systems with the
++ LEGACY_XAPIC_DISABLED bit set in the
++ IA32_XAPIC_DISABLE_STATUS MSR.
++
+ nps_mtm_hs_ctr= [KNL,ARC]
+ This parameter sets the maximum duration, in
+ cycles, each HW thread of the CTOP can run
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index fda97b3fcf018..b8ae278a4c873 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -76,6 +76,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A55 | #1530923 | ARM64_ERRATUM_1530923 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A55 | #2441007 | ARM64_ERRATUM_2441007 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A57 | #852523 | N/A |
+diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
+index 08069ecd49a6e..743f6a63f1ca9 100644
+--- a/Documentation/filesystems/vfs.rst
++++ b/Documentation/filesystems/vfs.rst
+@@ -274,6 +274,9 @@ or bottom half).
+ This is specifically for the inode itself being marked dirty,
+ not its data. If the update needs to be persisted by fdatasync(),
+ then I_DIRTY_DATASYNC will be set in the flags argument.
++ I_DIRTY_TIME will be set in the flags in case lazytime is enabled
++ and struct inode has times updated since the last ->dirty_inode
++ call.
+
+ ``write_inode``
+ this method is called when the VFS needs to write an inode to
+diff --git a/Makefile b/Makefile
+index a1d1978bbd039..2113ad46488a7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 7630ba9cb6ccc..ccc4706484d32 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -1653,7 +1653,6 @@ config CMDLINE
+ choice
+ prompt "Kernel command line type" if CMDLINE != ""
+ default CMDLINE_FROM_BOOTLOADER
+- depends on ATAGS
+
+ config CMDLINE_FROM_BOOTLOADER
+ bool "Use bootloader kernel arguments if available"
+diff --git a/arch/arm/boot/compressed/misc.c b/arch/arm/boot/compressed/misc.c
+index cb2e069dc73fd..abfed1aa2baa8 100644
+--- a/arch/arm/boot/compressed/misc.c
++++ b/arch/arm/boot/compressed/misc.c
+@@ -23,7 +23,9 @@ unsigned int __machine_arch_type;
+ #include <linux/types.h>
+ #include <linux/linkage.h>
+ #include "misc.h"
++#ifdef CONFIG_ARCH_EP93XX
+ #include "misc-ep93xx.h"
++#endif
+
+ static void putstr(const char *ptr);
+
+diff --git a/arch/arm/boot/compressed/vmlinux.lds.S b/arch/arm/boot/compressed/vmlinux.lds.S
+index 1bcb68ac4b011..3fcb3e62dc569 100644
+--- a/arch/arm/boot/compressed/vmlinux.lds.S
++++ b/arch/arm/boot/compressed/vmlinux.lds.S
+@@ -23,6 +23,7 @@ SECTIONS
+ *(.ARM.extab*)
+ *(.note.*)
+ *(.rel.*)
++ *(.printk_index)
+ /*
+ * Discard any r/w data - this produces a link error if we have any,
+ * which is required for PIC decompression. Local data generates
+@@ -57,6 +58,7 @@ SECTIONS
+ *(.rodata)
+ *(.rodata.*)
+ *(.data.rel.ro)
++ *(.data.rel.ro.*)
+ }
+ .piggydata : {
+ *(.piggydata)
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index f4878df39753e..487dece2033cd 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -478,7 +478,7 @@
+ marvell,function = "spi0";
+ };
+
+- spi0cs1_pins: spi0cs1-pins {
++ spi0cs2_pins: spi0cs2-pins {
+ marvell,pins = "mpp26";
+ marvell,function = "spi0";
+ };
+@@ -513,7 +513,7 @@
+ };
+ };
+
+- /* MISO, MOSI, SCLK and CS1 are routed to pin header CN11 */
++ /* MISO, MOSI, SCLK and CS2 are routed to pin header CN11 */
+ };
+
+ &uart0 {
+diff --git a/arch/arm/boot/dts/exynos4412-midas.dtsi b/arch/arm/boot/dts/exynos4412-midas.dtsi
+index 23f50c9be5273..6ca9108b76333 100644
+--- a/arch/arm/boot/dts/exynos4412-midas.dtsi
++++ b/arch/arm/boot/dts/exynos4412-midas.dtsi
+@@ -585,7 +585,7 @@
+ clocks = <&camera 1>;
+ clock-names = "extclk";
+ samsung,camclk-out = <1>;
+- gpios = <&gpm1 6 GPIO_ACTIVE_HIGH>;
++ gpios = <&gpm1 6 GPIO_ACTIVE_LOW>;
+
+ port {
+ is_s5k6a3_ep: endpoint {
+diff --git a/arch/arm/boot/dts/exynos4412-origen.dts b/arch/arm/boot/dts/exynos4412-origen.dts
+index 6db09dba07ffd..a3905e27b9cd9 100644
+--- a/arch/arm/boot/dts/exynos4412-origen.dts
++++ b/arch/arm/boot/dts/exynos4412-origen.dts
+@@ -95,7 +95,7 @@
+ };
+
+ &ehci {
+- samsung,vbus-gpio = <&gpx3 5 1>;
++ samsung,vbus-gpio = <&gpx3 5 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ phys = <&exynos_usbphy 2>, <&exynos_usbphy 3>;
+ phy-names = "hsic0", "hsic1";
+diff --git a/arch/arm/boot/dts/imx6dl-riotboard.dts b/arch/arm/boot/dts/imx6dl-riotboard.dts
+index e7d9bfbfd0e4d..e7be05f205d32 100644
+--- a/arch/arm/boot/dts/imx6dl-riotboard.dts
++++ b/arch/arm/boot/dts/imx6dl-riotboard.dts
+@@ -90,6 +90,7 @@
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii-id";
+ phy-handle = <&rgmii_phy>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
+index fdd81fdc3f357..cd3183c36488a 100644
+--- a/arch/arm/boot/dts/imx6dl.dtsi
++++ b/arch/arm/boot/dts/imx6dl.dtsi
+@@ -84,6 +84,9 @@
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x20000>;
++ ranges = <0 0x00900000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ };
+
+diff --git a/arch/arm/boot/dts/imx6q-arm2.dts b/arch/arm/boot/dts/imx6q-arm2.dts
+index 0b40f52268b3c..75586299d9cab 100644
+--- a/arch/arm/boot/dts/imx6q-arm2.dts
++++ b/arch/arm/boot/dts/imx6q-arm2.dts
+@@ -178,6 +178,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6q-evi.dts b/arch/arm/boot/dts/imx6q-evi.dts
+index c63f371ede8b9..78d941fef5dfb 100644
+--- a/arch/arm/boot/dts/imx6q-evi.dts
++++ b/arch/arm/boot/dts/imx6q-evi.dts
+@@ -146,6 +146,7 @@
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
+ phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6q-mccmon6.dts b/arch/arm/boot/dts/imx6q-mccmon6.dts
+index 55692c73943d6..64ab01018b71e 100644
+--- a/arch/arm/boot/dts/imx6q-mccmon6.dts
++++ b/arch/arm/boot/dts/imx6q-mccmon6.dts
+@@ -100,6 +100,7 @@
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
+ phy-reset-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
+index 9caba4529c718..a8069e0a8fe82 100644
+--- a/arch/arm/boot/dts/imx6q.dtsi
++++ b/arch/arm/boot/dts/imx6q.dtsi
+@@ -163,6 +163,9 @@
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x40000>;
++ ranges = <0 0x00900000 0x40000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ };
+
+diff --git a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+index 6b791d515e294..683f6e58ab230 100644
+--- a/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
+@@ -263,6 +263,10 @@
+ phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ };
+
++&hdmi {
++ ddc-i2c-bus = <&i2c2>;
++};
++
+ &i2c_intern {
+ pmic@8 {
+ compatible = "fsl,pfuze100";
+@@ -387,7 +391,7 @@
+
+ /* HDMI_CTRL */
+ &i2c2 {
+- clock-frequency = <375000>;
++ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
+index 0ad4cb4f1e828..a53a5d0766a51 100644
+--- a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
+@@ -192,6 +192,7 @@
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy>;
+ phy-reset-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
+index beaa2dcd436ce..57c21a01f126d 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
+@@ -334,6 +334,7 @@
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy>;
+ phy-reset-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
+index ee7e2371f94bd..000e9dc97b1ac 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
+@@ -263,6 +263,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
+index 904d5d051d63c..731759bdd7f57 100644
+--- a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
+@@ -267,6 +267,7 @@
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy>;
+ phy-reset-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+index 1368a47620372..3dbb460ef102e 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+@@ -295,6 +295,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii-id";
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-tqma6a.dtsi b/arch/arm/boot/dts/imx6qdl-tqma6a.dtsi
+index 7dc3f0005b0f0..0a36e1bce375d 100644
+--- a/arch/arm/boot/dts/imx6qdl-tqma6a.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-tqma6a.dtsi
+@@ -7,6 +7,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+
+ &fec {
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
+index d6ba4b2a60f6f..c096d25a6f5b5 100644
+--- a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi
+@@ -192,6 +192,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
++ /delete-property/ interrupts;
+ interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qp.dtsi b/arch/arm/boot/dts/imx6qp.dtsi
+index 0503655138363..fc164991d2ae8 100644
+--- a/arch/arm/boot/dts/imx6qp.dtsi
++++ b/arch/arm/boot/dts/imx6qp.dtsi
+@@ -9,12 +9,18 @@
+ ocram2: sram@940000 {
+ compatible = "mmio-sram";
+ reg = <0x00940000 0x20000>;
++ ranges = <0 0x00940000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ };
+
+ ocram3: sram@960000 {
+ compatible = "mmio-sram";
+ reg = <0x00960000 0x20000>;
++ ranges = <0 0x00960000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6QDL_CLK_OCRAM>;
+ };
+
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 06a515121dfc5..01122ddfdc0d3 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -61,10 +61,10 @@
+ <792000 1175000>,
+ <396000 975000>;
+ fsl,soc-operating-points =
+- /* ARM kHz SOC-PU uV */
+- <996000 1225000>,
+- <792000 1175000>,
+- <396000 1175000>;
++ /* ARM kHz SOC-PU uV */
++ <996000 1225000>,
++ <792000 1175000>,
++ <396000 1175000>;
+ clock-latency = <61036>; /* two CLK32 periods */
+ #cooling-cells = <2>;
+ clocks = <&clks IMX6SL_CLK_ARM>, <&clks IMX6SL_CLK_PLL2_PFD2>,
+@@ -115,6 +115,9 @@
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x20000>;
++ ranges = <0 0x00900000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6SL_CLK_OCRAM>;
+ };
+
+@@ -222,7 +225,7 @@
+
+ uart5: serial@2018000 {
+ compatible = "fsl,imx6sl-uart",
+- "fsl,imx6q-uart", "fsl,imx21-uart";
++ "fsl,imx6q-uart", "fsl,imx21-uart";
+ reg = <0x02018000 0x4000>;
+ interrupts = <0 30 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_UART>,
+@@ -235,7 +238,7 @@
+
+ uart1: serial@2020000 {
+ compatible = "fsl,imx6sl-uart",
+- "fsl,imx6q-uart", "fsl,imx21-uart";
++ "fsl,imx6q-uart", "fsl,imx21-uart";
+ reg = <0x02020000 0x4000>;
+ interrupts = <0 26 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_UART>,
+@@ -248,7 +251,7 @@
+
+ uart2: serial@2024000 {
+ compatible = "fsl,imx6sl-uart",
+- "fsl,imx6q-uart", "fsl,imx21-uart";
++ "fsl,imx6q-uart", "fsl,imx21-uart";
+ reg = <0x02024000 0x4000>;
+ interrupts = <0 27 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_UART>,
+@@ -309,7 +312,7 @@
+
+ uart3: serial@2034000 {
+ compatible = "fsl,imx6sl-uart",
+- "fsl,imx6q-uart", "fsl,imx21-uart";
++ "fsl,imx6q-uart", "fsl,imx21-uart";
+ reg = <0x02034000 0x4000>;
+ interrupts = <0 28 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_UART>,
+@@ -322,7 +325,7 @@
+
+ uart4: serial@2038000 {
+ compatible = "fsl,imx6sl-uart",
+- "fsl,imx6q-uart", "fsl,imx21-uart";
++ "fsl,imx6q-uart", "fsl,imx21-uart";
+ reg = <0x02038000 0x4000>;
+ interrupts = <0 29 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_UART>,
+@@ -711,7 +714,7 @@
+ #power-domain-cells = <0>;
+ power-supply = <&reg_pu>;
+ clocks = <&clks IMX6SL_CLK_GPU2D_OVG>,
+- <&clks IMX6SL_CLK_GPU2D_PODF>;
++ <&clks IMX6SL_CLK_GPU2D_PODF>;
+ };
+
+ pd_disp: power-domain@2 {
+diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi
+index d4a000c3dde70..2873369a57c02 100644
+--- a/arch/arm/boot/dts/imx6sll.dtsi
++++ b/arch/arm/boot/dts/imx6sll.dtsi
+@@ -115,6 +115,9 @@
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x20000>;
++ ranges = <0 0x00900000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ };
+
+ intc: interrupt-controller@a01000 {
+diff --git a/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi b/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
+index 35861bbea94e6..c84ea1fac5e98 100644
+--- a/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
++++ b/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
+@@ -226,7 +226,7 @@
+ &iomuxc {
+ pinctrl_bt_reg: btreggrp {
+ fsl,pins =
+- <MX6SX_PAD_KEY_ROW2__GPIO2_IO_17 0x15059>;
++ <MX6SX_PAD_KEY_ROW2__GPIO2_IO_17 0x15059>;
+ };
+
+ pinctrl_enet1: enet1grp {
+@@ -306,7 +306,6 @@
+ >;
+ };
+
+-
+ pinctrl_uart1: uart1grp {
+ fsl,pins =
+ <MX6SX_PAD_GPIO1_IO04__UART1_DCE_TX 0x1b0b1>,
+@@ -347,24 +346,23 @@
+
+ pinctrl_otg1_reg: otg1grp {
+ fsl,pins =
+- <MX6SX_PAD_GPIO1_IO09__GPIO1_IO_9 0x10b0>;
++ <MX6SX_PAD_GPIO1_IO09__GPIO1_IO_9 0x10b0>;
+ };
+
+-
+ pinctrl_otg2_reg: otg2grp {
+ fsl,pins =
+- <MX6SX_PAD_NAND_RE_B__GPIO4_IO_12 0x10b0>;
++ <MX6SX_PAD_NAND_RE_B__GPIO4_IO_12 0x10b0>;
+ };
+
+ pinctrl_usb_otg1: usbotg1grp {
+ fsl,pins =
+- <MX6SX_PAD_GPIO1_IO10__ANATOP_OTG1_ID 0x17059>,
+- <MX6SX_PAD_GPIO1_IO08__USB_OTG1_OC 0x10b0>;
++ <MX6SX_PAD_GPIO1_IO10__ANATOP_OTG1_ID 0x17059>,
++ <MX6SX_PAD_GPIO1_IO08__USB_OTG1_OC 0x10b0>;
+ };
+
+ pinctrl_usb_otg2: usbot2ggrp {
+ fsl,pins =
+- <MX6SX_PAD_QSPI1A_DATA0__USB_OTG2_OC 0x10b0>;
++ <MX6SX_PAD_QSPI1A_DATA0__USB_OTG2_OC 0x10b0>;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index fc6334336b3d0..719c61f7e9140 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -164,12 +164,18 @@
+ ocram_s: sram@8f8000 {
+ compatible = "mmio-sram";
+ reg = <0x008f8000 0x4000>;
++ ranges = <0 0x008f8000 0x4000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6SX_CLK_OCRAM_S>;
+ };
+
+ ocram: sram@900000 {
+ compatible = "mmio-sram";
+ reg = <0x00900000 0x20000>;
++ ranges = <0 0x00900000 0x20000>;
++ #address-cells = <1>;
++ #size-cells = <1>;
+ clocks = <&clks IMX6SX_CLK_OCRAM>;
+ };
+
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index f053f51227417..0fe0a2f5e433f 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -206,12 +206,7 @@
+ interrupt-parent = <&gpio2>;
+ interrupts = <29 0>;
+ pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>;
+- ti,x-min = /bits/ 16 <0>;
+- ti,x-max = /bits/ 16 <0>;
+- ti,y-min = /bits/ 16 <0>;
+- ti,y-max = /bits/ 16 <0>;
+- ti,pressure-max = /bits/ 16 <0>;
+- ti,x-plate-ohms = /bits/ 16 <400>;
++ touchscreen-max-pressure = <255>;
+ wakeup-source;
+ };
+ };
+diff --git a/arch/arm/boot/dts/kirkwood-lsxl.dtsi b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
+index 7b151acb99846..88b70ba1c8fee 100644
+--- a/arch/arm/boot/dts/kirkwood-lsxl.dtsi
++++ b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
+@@ -10,6 +10,11 @@
+
+ ocp@f1000000 {
+ pinctrl: pin-controller@10000 {
++ /* Non-default UART pins */
++ pmx_uart0: pmx-uart0 {
++ marvell,pins = "mpp4", "mpp5";
++ };
++
+ pmx_power_hdd: pmx-power-hdd {
+ marvell,pins = "mpp10";
+ marvell,function = "gpo";
+@@ -213,22 +218,11 @@
+ &mdio {
+ status = "okay";
+
+- ethphy0: ethernet-phy@0 {
+- reg = <0>;
+- };
+-
+ ethphy1: ethernet-phy@8 {
+ reg = <8>;
+ };
+ };
+
+-&eth0 {
+- status = "okay";
+- ethernet0-port@0 {
+- phy-handle = <&ethphy0>;
+- };
+-};
+-
+ &eth1 {
+ status = "okay";
+ ethernet1-port@0 {
+diff --git a/arch/arm/include/asm/stacktrace.h b/arch/arm/include/asm/stacktrace.h
+index 3e78f921b8b2d..39be2d1aa27b8 100644
+--- a/arch/arm/include/asm/stacktrace.h
++++ b/arch/arm/include/asm/stacktrace.h
+@@ -21,6 +21,9 @@ struct stackframe {
+ struct llist_node *kr_cur;
+ struct task_struct *tsk;
+ #endif
++#ifdef CONFIG_UNWINDER_FRAME_POINTER
++ bool ex_frame;
++#endif
+ };
+
+ static __always_inline
+@@ -34,6 +37,9 @@ void arm_get_current_stackframe(struct pt_regs *regs, struct stackframe *frame)
+ frame->kr_cur = NULL;
+ frame->tsk = current;
+ #endif
++#ifdef CONFIG_UNWINDER_FRAME_POINTER
++ frame->ex_frame = in_entry_text(frame->pc);
++#endif
+ }
+
+ extern int unwind_frame(struct stackframe *frame);
+diff --git a/arch/arm/kernel/return_address.c b/arch/arm/kernel/return_address.c
+index 8aac1e10b117a..38f1ea9c724d5 100644
+--- a/arch/arm/kernel/return_address.c
++++ b/arch/arm/kernel/return_address.c
+@@ -47,6 +47,7 @@ here:
+ frame.kr_cur = NULL;
+ frame.tsk = current;
+ #endif
++ frame.ex_frame = false;
+
+ walk_stackframe(&frame, save_return_addr, &data);
+
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index d0fa2037460ac..85443b5d19221 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -9,6 +9,8 @@
+ #include <asm/stacktrace.h>
+ #include <asm/traps.h>
+
++#include "reboot.h"
++
+ #if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND)
+ /*
+ * Unwind the current stack frame and store the new register values in the
+@@ -39,29 +41,74 @@
+ * Note that with framepointer enabled, even the leaf functions have the same
+ * prologue and epilogue, therefore we can ignore the LR value in this case.
+ */
+-int notrace unwind_frame(struct stackframe *frame)
++
++extern unsigned long call_with_stack_end;
++
++static int frame_pointer_check(struct stackframe *frame)
+ {
+ unsigned long high, low;
+ unsigned long fp = frame->fp;
++ unsigned long pc = frame->pc;
++
++ /*
++ * call_with_stack() is the only place we allow SP to jump from one
++ * stack to another, with FP and SP pointing to different stacks,
++ * skipping the FP boundary check at this point.
++ */
++ if (pc >= (unsigned long)&call_with_stack &&
++ pc < (unsigned long)&call_with_stack_end)
++ return 0;
+
+ /* only go to a higher address on the stack */
+ low = frame->sp;
+ high = ALIGN(low, THREAD_SIZE);
+
+-#ifdef CONFIG_CC_IS_CLANG
+ /* check current frame pointer is within bounds */
++#ifdef CONFIG_CC_IS_CLANG
+ if (fp < low + 4 || fp > high - 4)
+ return -EINVAL;
+-
+- frame->sp = frame->fp;
+- frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
+- frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4));
+ #else
+- /* check current frame pointer is within bounds */
+ if (fp < low + 12 || fp > high - 4)
+ return -EINVAL;
++#endif
++
++ return 0;
++}
++
++int notrace unwind_frame(struct stackframe *frame)
++{
++ unsigned long fp = frame->fp;
++
++ if (frame_pointer_check(frame))
++ return -EINVAL;
++
++ /*
++ * When we unwind through an exception stack, include the saved PC
++ * value into the stack trace.
++ */
++ if (frame->ex_frame) {
++ struct pt_regs *regs = (struct pt_regs *)frame->sp;
++
++ /*
++ * We check that 'regs + sizeof(struct pt_regs)' (that is,
++ * &regs[1]) does not exceed the bottom of the stack to avoid
++ * accessing data outside the task's stack. This may happen
++ * when frame->ex_frame is a false positive.
++ */
++ if ((unsigned long)&regs[1] > ALIGN(frame->sp, THREAD_SIZE))
++ return -EINVAL;
++
++ frame->pc = regs->ARM_pc;
++ frame->ex_frame = false;
++ return 0;
++ }
+
+ /* restore the registers from the stack frame */
++#ifdef CONFIG_CC_IS_CLANG
++ frame->sp = frame->fp;
++ frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
++ frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4));
++#else
+ frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 12));
+ frame->sp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 8));
+ frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 4));
+@@ -72,6 +119,9 @@ int notrace unwind_frame(struct stackframe *frame)
+ (void *)frame->fp, &frame->kr_cur);
+ #endif
+
++ if (in_entry_text(frame->pc))
++ frame->ex_frame = true;
++
+ return 0;
+ }
+ #endif
+@@ -102,7 +152,6 @@ static int save_trace(struct stackframe *frame, void *d)
+ {
+ struct stack_trace_data *data = d;
+ struct stack_trace *trace = data->trace;
+- struct pt_regs *regs;
+ unsigned long addr = frame->pc;
+
+ if (data->no_sched_functions && in_sched_functions(addr))
+@@ -113,19 +162,6 @@ static int save_trace(struct stackframe *frame, void *d)
+ }
+
+ trace->entries[trace->nr_entries++] = addr;
+-
+- if (trace->nr_entries >= trace->max_entries)
+- return 1;
+-
+- if (!in_entry_text(frame->pc))
+- return 0;
+-
+- regs = (struct pt_regs *)frame->sp;
+- if ((unsigned long)&regs[1] > ALIGN(frame->sp, THREAD_SIZE))
+- return 0;
+-
+- trace->entries[trace->nr_entries++] = regs->ARM_pc;
+-
+ return trace->nr_entries >= trace->max_entries;
+ }
+
+@@ -167,6 +203,9 @@ here:
+ frame.kr_cur = NULL;
+ frame.tsk = tsk;
+ #endif
++#ifdef CONFIG_UNWINDER_FRAME_POINTER
++ frame.ex_frame = false;
++#endif
+
+ walk_stackframe(&frame, save_trace, &data);
+ }
+@@ -188,6 +227,9 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
+ frame.kr_cur = NULL;
+ frame.tsk = current;
+ #endif
++#ifdef CONFIG_UNWINDER_FRAME_POINTER
++ frame.ex_frame = in_entry_text(frame.pc);
++#endif
+
+ walk_stackframe(&frame, save_trace, &data);
+ }
+diff --git a/arch/arm/lib/call_with_stack.S b/arch/arm/lib/call_with_stack.S
+index 0a268a6c513c8..5030d4e8d1267 100644
+--- a/arch/arm/lib/call_with_stack.S
++++ b/arch/arm/lib/call_with_stack.S
+@@ -46,4 +46,6 @@ UNWIND( .setfp fpreg, sp )
+ pop {fpreg, pc}
+ UNWIND( .fnend )
+ #endif
++ .globl call_with_stack_end
++call_with_stack_end:
+ ENDPROC(call_with_stack)
+diff --git a/arch/arm/mm/dump.c b/arch/arm/mm/dump.c
+index fb688003d156e..712da6a81b23f 100644
+--- a/arch/arm/mm/dump.c
++++ b/arch/arm/mm/dump.c
+@@ -346,7 +346,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
+ addr = start + i * PMD_SIZE;
+ domain = get_domain_name(pmd);
+ if (pmd_none(*pmd) || pmd_large(*pmd) || !pmd_present(*pmd))
+- note_page(st, addr, 3, pmd_val(*pmd), domain);
++ note_page(st, addr, 4, pmd_val(*pmd), domain);
+ else
+ walk_pte(st, pmd, addr, domain);
+
+diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
+index 5ad0d6c56d56e..29d7233e5ad2e 100644
+--- a/arch/arm/mm/kasan_init.c
++++ b/arch/arm/mm/kasan_init.c
+@@ -264,12 +264,17 @@ void __init kasan_init(void)
+
+ /*
+ * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
+- * so we need to map this area.
++ * so we need to map this area if CONFIG_KASAN_VMALLOC=n. With
++ * VMALLOC support KASAN will manage this region dynamically,
++ * refer to kasan_populate_vmalloc() and ARM's implementation of
++ * module_alloc().
+ * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
+ * ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
+ * use kasan_populate_zero_shadow.
+ */
+- create_mapping((void *)MODULES_VADDR, (void *)(PKMAP_BASE + PMD_SIZE));
++ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) && IS_ENABLED(CONFIG_MODULES))
++ create_mapping((void *)MODULES_VADDR, (void *)(MODULES_END));
++ create_mapping((void *)PKMAP_BASE, (void *)(PKMAP_BASE + PMD_SIZE));
+
+ /*
+ * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index cd17e324aa51e..83a91e0ab8480 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -300,7 +300,11 @@ static struct mem_type mem_types[] __ro_after_init = {
+ .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
+ L_PTE_XN | L_PTE_RDONLY,
+ .prot_l1 = PMD_TYPE_TABLE,
++#ifdef CONFIG_ARM_LPAE
++ .prot_sect = PMD_TYPE_SECT | L_PMD_SECT_RDONLY | PMD_SECT_AP2,
++#else
+ .prot_sect = PMD_TYPE_SECT,
++#endif
+ .domain = DOMAIN_KERNEL,
+ },
+ [MT_ROM] = {
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index cc1e7bb49d38b..dfd9228c2adce 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -629,6 +629,23 @@ config ARM64_ERRATUM_1530923
+ config ARM64_WORKAROUND_REPEAT_TLBI
+ bool
+
++config ARM64_ERRATUM_2441007
++ bool "Cortex-A55: Completion of affected memory accesses might not be guaranteed by completion of a TLBI"
++ default y
++ select ARM64_WORKAROUND_REPEAT_TLBI
++ help
++ This option adds a workaround for ARM Cortex-A55 erratum #2441007.
++
++ Under very rare circumstances, affected Cortex-A55 CPUs
++ may not handle a race between a break-before-make sequence on one
++ CPU, and another CPU accessing the same page. This could allow a
++ store to a page that has been unmapped.
++
++ Work around this by adding the affected CPUs to the list that needs
++ TLB sequences to be done twice.
++
++ If unsure, say Y.
++
+ config ARM64_ERRATUM_1286807
+ bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
+ default y
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
+index 23be1ec538ba6..c54536c0a2ba1 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
+@@ -321,6 +321,7 @@
+ MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2 0x1d0
+ MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3 0x1d0
+ MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12 0x019
++ MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x1d0
+ >;
+ };
+
+@@ -333,6 +334,7 @@
+ MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2 0x1d4
+ MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3 0x1d4
+ MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12 0x019
++ MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x1d0
+ >;
+ };
+
+@@ -345,6 +347,7 @@
+ MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2 0x1d6
+ MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3 0x1d6
+ MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12 0x019
++ MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x1d0
+ >;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+index 8f90eb02550d8..6307af803429e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+@@ -86,7 +86,6 @@
+ pinctrl-0 = <&pinctrl_pmic>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
+- sd-vsel-gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>;
+
+ regulators {
+ reg_vdd_soc: BUCK1 {
+@@ -229,7 +228,6 @@
+ pinctrl_pmic: pmicgrp {
+ fsl,pins = <
+ MX8MM_IOMUXC_GPIO1_IO00_GPIO1_IO0 0x141
+- MX8MM_IOMUXC_GPIO1_IO04_GPIO1_IO4 0x141
+ >;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 410d0d5e6f1e5..7faf2d71ba4f8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -1168,7 +1168,7 @@
+ interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ phys = <&usb3_phy0>, <&usb3_phy0>;
+ phy-names = "usb2-phy", "usb3-phy";
+- snps,dis-u2-freeclk-exists-quirk;
++ snps,gfladj-refclk-lpm-sel-quirk;
+ };
+
+ };
+@@ -1210,7 +1210,7 @@
+ interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ phys = <&usb3_phy1>, <&usb3_phy1>;
+ phy-names = "usb2-phy", "usb3-phy";
+- snps,dis-u2-freeclk-exists-quirk;
++ snps,gfladj-refclk-lpm-sel-quirk;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+index 587e55aaa57bb..11f56138c5331 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
+@@ -1077,6 +1077,7 @@
+ interrupts = <20 IRQ_TYPE_LEVEL_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_gauge>;
++ power-supplies = <&bq25895>;
+ maxim,over-heat-temp = <700>;
+ maxim,over-volt = <4500>;
+ maxim,rsns-microohm = <5000>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
+index 7713e8060c5b6..de2d10e0315af 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
+@@ -536,42 +536,42 @@
+ reg = <ADC5_XO_THERM_100K_PU>;
+ label = "xo_therm";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+
+ adc-chan@4d {
+ reg = <ADC5_AMUX_THM1_100K_PU>;
+ label = "msm_therm";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+
+ adc-chan@4f {
+ reg = <ADC5_AMUX_THM3_100K_PU>;
+ label = "pa_therm1";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+
+ adc-chan@51 {
+ reg = <ADC5_AMUX_THM5_100K_PU>;
+ label = "quiet_therm";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+
+ adc-chan@83 {
+ reg = <ADC5_VPH_PWR>;
+ label = "vph_pwr";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+
+ adc-chan@85 {
+ reg = <ADC5_VCOIN>;
+ label = "vcoin";
+ qcom,ratiometric;
+- qcom,hw-settle-time-us = <200>;
++ qcom,hw-settle-time = <200>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
+index b31fb713ae4d7..434ae73664a23 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043.dtsi
+@@ -334,8 +334,8 @@
+ compatible = "renesas,r9a07g043-sci", "renesas,sci";
+ reg = <0 0x1004d000 0 0x400>;
+ interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 406 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 407 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G043_SCI0_CLKP>;
+@@ -349,8 +349,8 @@
+ compatible = "renesas,r9a07g043-sci", "renesas,sci";
+ reg = <0 0x1004d400 0 0x400>;
+ interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 410 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 411 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G043_SCI1_CLKP>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index 3652e511160fb..265140b20dadd 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -394,8 +394,8 @@
+ compatible = "renesas,r9a07g044-sci", "renesas,sci";
+ reg = <0 0x1004d000 0 0x400>;
+ interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 406 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 407 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G044_SCI0_CLKP>;
+@@ -409,8 +409,8 @@
+ compatible = "renesas,r9a07g044-sci", "renesas,sci";
+ reg = <0 0x1004d400 0 0x400>;
+ interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 410 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 411 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G044_SCI1_CLKP>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index 4d6b9d7684c94..d0eeca4f6aa1b 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -399,8 +399,8 @@
+ compatible = "renesas,r9a07g054-sci", "renesas,sci";
+ reg = <0 0x1004d000 0 0x400>;
+ interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 406 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 407 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G054_SCI0_CLKP>;
+@@ -414,8 +414,8 @@
+ compatible = "renesas,r9a07g054-sci", "renesas,sci";
+ reg = <0 0x1004d400 0 0x400>;
+ interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 410 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 411 IRQ_TYPE_EDGE_RISING>,
+ <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "eri", "rxi", "txi", "tei";
+ clocks = <&cpg CPG_MOD R9A07G054_SCI1_CLKP>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 121975dc82397..7e8552fd2b6ae 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -134,15 +134,17 @@
+ >;
+ };
+
+- main_usbss0_pins_default: main-usbss0-pins-default {
++ vdd_sd_dv_pins_default: vdd-sd-dv-pins-default {
+ pinctrl-single,pins = <
+- J721E_IOPAD(0x120, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
++ J721E_IOPAD(0xd0, PIN_OUTPUT, 7) /* (T5) SPI0_D1.GPIO0_55 */
+ >;
+ };
++};
+
+- vdd_sd_dv_pins_default: vdd-sd-dv-pins-default {
++&main_pmx1 {
++ main_usbss0_pins_default: main-usbss0-pins-default {
+ pinctrl-single,pins = <
+- J721E_IOPAD(0xd0, PIN_OUTPUT, 7) /* (T5) SPI0_D1.GPIO0_55 */
++ J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
+ >;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 16684a2f054d9..e12a53f1857f8 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -295,7 +295,16 @@
+ main_pmx0: pinctrl@11c000 {
+ compatible = "pinctrl-single";
+ /* Proxy 0 addressing */
+- reg = <0x00 0x11c000 0x00 0x2b4>;
++ reg = <0x00 0x11c000 0x00 0x10c>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx1: pinctrl@11c11c {
++ compatible = "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c11c 0x00 0xc>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
+index aa523591a44e5..760c62f8e22f8 100644
+--- a/arch/arm64/include/asm/mte.h
++++ b/arch/arm64/include/asm/mte.h
+@@ -42,7 +42,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte);
+ void mte_copy_page_tags(void *kto, const void *kfrom);
+ void mte_thread_init_user(void);
+ void mte_thread_switch(struct task_struct *next);
++void mte_cpu_setup(void);
+ void mte_suspend_enter(void);
++void mte_suspend_exit(void);
+ long set_mte_ctrl(struct task_struct *task, unsigned long arg);
+ long get_mte_ctrl(struct task_struct *task);
+ int mte_ptrace_copy_tags(struct task_struct *child, long request,
+@@ -72,6 +74,9 @@ static inline void mte_thread_switch(struct task_struct *next)
+ static inline void mte_suspend_enter(void)
+ {
+ }
++static inline void mte_suspend_exit(void)
++{
++}
+ static inline long set_mte_ctrl(struct task_struct *task, unsigned long arg)
+ {
+ return 0;
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index af137f91607da..9b7440d97a32e 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -214,6 +214,11 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ },
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_2441007
++ {
++ ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
++ },
++#endif
+ #ifdef CONFIG_ARM64_ERRATUM_2441009
+ {
+ /* Cortex-A510 r0p0 -> r1p1. Fixed in r1p2 */
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index f34c9f8b9ee0a..6f29e12fcfd5f 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1962,7 +1962,8 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
+ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
+ {
+ sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ATA | SCTLR_EL1_ATA0);
+- isb();
++
++ mte_cpu_setup();
+
+ /*
+ * Clear the tags in the zero page. This needs to be done via the
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index ea5dc7c90f465..b49ba9a24bcc8 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -217,11 +217,26 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ unsigned long pc = rec->ip;
+ u32 old = 0, new;
+
++ new = aarch64_insn_gen_nop();
++
++ /*
++ * When using mcount, callsites in modules may have been initialized to
++ * call an arbitrary module PLT (which redirects to the _mcount stub)
++ * rather than the ftrace PLT we'll use at runtime (which redirects to
++ * the ftrace trampoline). We can ignore the old PLT when initializing
++ * the callsite.
++ *
++ * Note: 'mod' is only set at module load time.
++ */
++ if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
++ IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && mod) {
++ return aarch64_insn_patch_text_nosync((void *)pc, new);
++ }
++
+ if (!ftrace_find_callable_addr(rec, mod, &addr))
+ return -EINVAL;
+
+ old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+- new = aarch64_insn_gen_nop();
+
+ return ftrace_modify_code(pc, old, new, true);
+ }
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index f6b00743c3994..54fed9a7f3cca 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -294,6 +294,49 @@ void mte_thread_switch(struct task_struct *next)
+ mte_check_tfsr_el1();
+ }
+
++void mte_cpu_setup(void)
++{
++ u64 rgsr;
++
++ /*
++ * CnP must be enabled only after the MAIR_EL1 register has been set
++ * up. Inconsistent MAIR_EL1 between CPUs sharing the same TLB may
++ * lead to the wrong memory type being used for a brief window during
++ * CPU power-up.
++ *
++ * CnP is not a boot feature so MTE gets enabled before CnP, but let's
++ * make sure that is the case.
++ */
++ BUG_ON(read_sysreg(ttbr0_el1) & TTBR_CNP_BIT);
++ BUG_ON(read_sysreg(ttbr1_el1) & TTBR_CNP_BIT);
++
++ /* Normal Tagged memory type at the corresponding MAIR index */
++ sysreg_clear_set(mair_el1,
++ MAIR_ATTRIDX(MAIR_ATTR_MASK, MT_NORMAL_TAGGED),
++ MAIR_ATTRIDX(MAIR_ATTR_NORMAL_TAGGED,
++ MT_NORMAL_TAGGED));
++
++ write_sysreg_s(KERNEL_GCR_EL1, SYS_GCR_EL1);
++
++ /*
++ * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then
++ * RGSR_EL1.SEED must be non-zero for IRG to produce
++ * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we
++ * must initialize it.
++ */
++ rgsr = (read_sysreg(CNTVCT_EL0) & SYS_RGSR_EL1_SEED_MASK) <<
++ SYS_RGSR_EL1_SEED_SHIFT;
++ if (rgsr == 0)
++ rgsr = 1 << SYS_RGSR_EL1_SEED_SHIFT;
++ write_sysreg_s(rgsr, SYS_RGSR_EL1);
++
++ /* clear any pending tag check faults in TFSR*_EL1 */
++ write_sysreg_s(0, SYS_TFSR_EL1);
++ write_sysreg_s(0, SYS_TFSRE0_EL1);
++
++ local_flush_tlb_all();
++}
++
+ void mte_suspend_enter(void)
+ {
+ if (!system_supports_mte())
+@@ -310,6 +353,14 @@ void mte_suspend_enter(void)
+ mte_check_tfsr_el1();
+ }
+
++void mte_suspend_exit(void)
++{
++ if (!system_supports_mte())
++ return;
++
++ mte_cpu_setup();
++}
++
+ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
+ {
+ u64 mte_ctrl = (~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index 2b0887e58a7c4..033cd080af680 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -43,6 +43,8 @@ void notrace __cpu_suspend_exit(void)
+ {
+ unsigned int cpu = smp_processor_id();
+
++ mte_suspend_exit();
++
+ /*
+ * We are resuming from reset with the idmap active in TTBR0_EL1.
+ * We must uninstall the idmap and restore the expected MMU
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index d4abb948eb14e..ea650f2b0e2e0 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -22,46 +22,6 @@
+ #include <asm/cputype.h>
+ #include <asm/topology.h>
+
+-void store_cpu_topology(unsigned int cpuid)
+-{
+- struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+- u64 mpidr;
+-
+- if (cpuid_topo->package_id != -1)
+- goto topology_populated;
+-
+- mpidr = read_cpuid_mpidr();
+-
+- /* Uniprocessor systems can rely on default topology values */
+- if (mpidr & MPIDR_UP_BITMASK)
+- return;
+-
+- /*
+- * This would be the place to create cpu topology based on MPIDR.
+- *
+- * However, it cannot be trusted to depict the actual topology; some
+- * pieces of the architecture enforce an artificial cap on Aff0 values
+- * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
+- * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
+- * having absolutely no relationship to the actual underlying system
+- * topology, and cannot be reasonably used as core / package ID.
+- *
+- * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
+- * we still wouldn't be able to obtain a sane core ID. This means we
+- * need to entirely ignore MPIDR for any topology deduction.
+- */
+- cpuid_topo->thread_id = -1;
+- cpuid_topo->core_id = cpuid;
+- cpuid_topo->package_id = cpu_to_node(cpuid);
+-
+- pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
+- cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
+- cpuid_topo->thread_id, mpidr);
+-
+-topology_populated:
+- update_siblings_masks(cpuid);
+-}
+-
+ #ifdef CONFIG_ACPI
+ static bool __init acpi_cpu_is_threaded(int cpu)
+ {
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 50bbed947bec7..1a9684b114745 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -47,17 +47,19 @@
+
+ #ifdef CONFIG_KASAN_HW_TAGS
+ #define TCR_MTE_FLAGS TCR_TCMA1 | TCR_TBI1 | TCR_TBID1
+-#else
++#elif defined(CONFIG_ARM64_MTE)
+ /*
+ * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on
+ * TBI being enabled at EL1.
+ */
+ #define TCR_MTE_FLAGS TCR_TBI1 | TCR_TBID1
++#else
++#define TCR_MTE_FLAGS 0
+ #endif
+
+ /*
+ * Default MAIR_EL1. MT_NORMAL_TAGGED is initially mapped as Normal memory and
+- * changed during __cpu_setup to Normal Tagged if the system supports MTE.
++ * changed during mte_cpu_setup to Normal Tagged if the system supports MTE.
+ */
+ #define MAIR_EL1_SET \
+ (MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRnE, MT_DEVICE_nGnRnE) | \
+@@ -421,46 +423,8 @@ SYM_FUNC_START(__cpu_setup)
+ mov_q mair, MAIR_EL1_SET
+ mov_q tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
+ TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
+- TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS
+-
+-#ifdef CONFIG_ARM64_MTE
+- /*
+- * Update MAIR_EL1, GCR_EL1 and TFSR*_EL1 if MTE is supported
+- * (ID_AA64PFR1_EL1[11:8] > 1).
+- */
+- mrs x10, ID_AA64PFR1_EL1
+- ubfx x10, x10, #ID_AA64PFR1_MTE_SHIFT, #4
+- cmp x10, #ID_AA64PFR1_MTE
+- b.lt 1f
+-
+- /* Normal Tagged memory type at the corresponding MAIR index */
+- mov x10, #MAIR_ATTR_NORMAL_TAGGED
+- bfi mair, x10, #(8 * MT_NORMAL_TAGGED), #8
++ TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS
+
+- mov x10, #KERNEL_GCR_EL1
+- msr_s SYS_GCR_EL1, x10
+-
+- /*
+- * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then
+- * RGSR_EL1.SEED must be non-zero for IRG to produce
+- * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we
+- * must initialize it.
+- */
+- mrs x10, CNTVCT_EL0
+- ands x10, x10, #SYS_RGSR_EL1_SEED_MASK
+- csinc x10, x10, xzr, ne
+- lsl x10, x10, #SYS_RGSR_EL1_SEED_SHIFT
+- msr_s SYS_RGSR_EL1, x10
+-
+- /* clear any pending tag check faults in TFSR*_EL1 */
+- msr_s SYS_TFSR_EL1, xzr
+- msr_s SYS_TFSRE0_EL1, xzr
+-
+- /* set the TCR_EL1 bits */
+- mov_q x10, TCR_MTE_FLAGS
+- orr tcr, tcr, x10
+-1:
+-#endif
+ tcr_clear_errata_bits tcr, x9, x5
+
+ #ifdef CONFIG_ARM64_VA_BITS_52
+diff --git a/arch/ia64/mm/numa.c b/arch/ia64/mm/numa.c
+index d6579ec3ea324..4c7b1f50e3b7d 100644
+--- a/arch/ia64/mm/numa.c
++++ b/arch/ia64/mm/numa.c
+@@ -75,5 +75,6 @@ int memory_add_physaddr_to_nid(u64 addr)
+ return 0;
+ return nid;
+ }
++EXPORT_SYMBOL(memory_add_physaddr_to_nid);
+ #endif
+ #endif
+diff --git a/arch/mips/bcm47xx/prom.c b/arch/mips/bcm47xx/prom.c
+index 0a63721d0fbf3..5a33d6b48d779 100644
+--- a/arch/mips/bcm47xx/prom.c
++++ b/arch/mips/bcm47xx/prom.c
+@@ -86,7 +86,7 @@ static __init void prom_init_mem(void)
+ pr_debug("Assume 128MB RAM\n");
+ break;
+ }
+- if (!memcmp(prom_init, prom_init + mem, 32))
++ if (!memcmp((void *)prom_init, (void *)prom_init + mem, 32))
+ break;
+ }
+ lowmem = mem;
+@@ -159,7 +159,7 @@ void __init bcm47xx_prom_highmem_init(void)
+
+ off = EXTVBASE + __pa(off);
+ for (extmem = 128 << 20; extmem < 512 << 20; extmem <<= 1) {
+- if (!memcmp(prom_init, (void *)(off + extmem), 16))
++ if (!memcmp((void *)prom_init, (void *)(off + extmem), 16))
+ break;
+ }
+ extmem -= lowmem;
+diff --git a/arch/mips/boot/dts/ralink/mt7621-gnubee-gb-pc2.dts b/arch/mips/boot/dts/ralink/mt7621-gnubee-gb-pc2.dts
+index a6201a119a1f2..5bdc63187e77f 100644
+--- a/arch/mips/boot/dts/ralink/mt7621-gnubee-gb-pc2.dts
++++ b/arch/mips/boot/dts/ralink/mt7621-gnubee-gb-pc2.dts
+@@ -83,12 +83,12 @@
+
+ &gmac1 {
+ status = "okay";
+- phy-handle = <ðphy7>;
+- phy-handle = <&ethphy7>;
++ phy-handle = <&ethphy5>;
+
+ &mdio {
+- ethphy7: ethernet-phy@7 {
+- reg = <7>;
++ ethphy5: ethernet-phy@5 {
++ reg = <5>;
+ phy-mode = "rgmii-rxid";
+ };
+ };
+diff --git a/arch/mips/sgi-ip27/ip27-xtalk.c b/arch/mips/sgi-ip27/ip27-xtalk.c
+index e762886d1dda9..5143d1cf8984c 100644
+--- a/arch/mips/sgi-ip27/ip27-xtalk.c
++++ b/arch/mips/sgi-ip27/ip27-xtalk.c
+@@ -27,15 +27,18 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ {
+ struct xtalk_bridge_platform_data *bd;
+ struct sgi_w1_platform_data *wd;
+- struct platform_device *pdev;
++ struct platform_device *pdev_wd;
++ struct platform_device *pdev_bd;
+ struct resource w1_res;
+ unsigned long offset;
+
+ offset = NODE_OFFSET(nasid);
+
+ wd = kzalloc(sizeof(*wd), GFP_KERNEL);
+- if (!wd)
+- goto no_mem;
++ if (!wd) {
++ pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++ return;
++ }
+
+ snprintf(wd->dev_id, sizeof(wd->dev_id), "bridge-%012lx",
+ offset + (widget << SWIN_SIZE_BITS));
+@@ -46,24 +49,35 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ w1_res.end = w1_res.start + 3;
+ w1_res.flags = IORESOURCE_MEM;
+
+- pdev = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
+- if (!pdev) {
+- kfree(wd);
+- goto no_mem;
++ pdev_wd = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
++ if (!pdev_wd) {
++ pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++ goto err_kfree_wd;
++ }
++ if (platform_device_add_resources(pdev_wd, &w1_res, 1)) {
++ pr_warn("xtalk:n%d/%x bridge failed to add platform resources.\n", nasid, widget);
++ goto err_put_pdev_wd;
++ }
++ if (platform_device_add_data(pdev_wd, wd, sizeof(*wd))) {
++ pr_warn("xtalk:n%d/%x bridge failed to add platform data.\n", nasid, widget);
++ goto err_put_pdev_wd;
++ }
++ if (platform_device_add(pdev_wd)) {
++ pr_warn("xtalk:n%d/%x bridge failed to add platform device.\n", nasid, widget);
++ goto err_put_pdev_wd;
+ }
+- platform_device_add_resources(pdev, &w1_res, 1);
+- platform_device_add_data(pdev, wd, sizeof(*wd));
+ /* platform_device_add_data() duplicates the data */
+ kfree(wd);
+- platform_device_add(pdev);
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+- if (!bd)
+- goto no_mem;
+- pdev = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
+- if (!pdev) {
+- kfree(bd);
+- goto no_mem;
++ if (!bd) {
++ pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++ goto err_unregister_pdev_wd;
++ }
++ pdev_bd = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
++ if (!pdev_bd) {
++ pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++ goto err_kfree_bd;
+ }
+
+
+@@ -84,15 +98,31 @@ static void bridge_platform_create(nasid_t nasid, int widget, int masterwid)
+ bd->io.flags = IORESOURCE_IO;
+ bd->io_offset = offset;
+
+- platform_device_add_data(pdev, bd, sizeof(*bd));
++ if (platform_device_add_data(pdev_bd, bd, sizeof(*bd))) {
++ pr_warn("xtalk:n%d/%x bridge failed to add platform data.\n", nasid, widget);
++ goto err_put_pdev_bd;
++ }
++ if (platform_device_add(pdev_bd)) {
++ pr_warn("xtalk:n%d/%x bridge failed to add platform device.\n", nasid, widget);
++ goto err_put_pdev_bd;
++ }
+ /* platform_device_add_data() duplicates the data */
+ kfree(bd);
+- platform_device_add(pdev);
+ pr_info("xtalk:n%d/%x bridge widget\n", nasid, widget);
+ return;
+
+-no_mem:
+- pr_warn("xtalk:n%d/%x bridge create out of memory\n", nasid, widget);
++err_put_pdev_bd:
++ platform_device_put(pdev_bd);
++err_kfree_bd:
++ kfree(bd);
++err_unregister_pdev_wd:
++ platform_device_unregister(pdev_wd);
++ return;
++err_put_pdev_wd:
++ platform_device_put(pdev_wd);
++err_kfree_wd:
++ kfree(wd);
++ return;
+ }
+
+ static int probe_one_port(nasid_t nasid, int widget, int masterwid)
+diff --git a/arch/mips/sgi-ip30/ip30-xtalk.c b/arch/mips/sgi-ip30/ip30-xtalk.c
+index 8129524421cb0..7ceb2b23ea1cf 100644
+--- a/arch/mips/sgi-ip30/ip30-xtalk.c
++++ b/arch/mips/sgi-ip30/ip30-xtalk.c
+@@ -40,12 +40,15 @@ static void bridge_platform_create(int widget, int masterwid)
+ {
+ struct xtalk_bridge_platform_data *bd;
+ struct sgi_w1_platform_data *wd;
+- struct platform_device *pdev;
++ struct platform_device *pdev_wd;
++ struct platform_device *pdev_bd;
+ struct resource w1_res;
+
+ wd = kzalloc(sizeof(*wd), GFP_KERNEL);
+- if (!wd)
+- goto no_mem;
++ if (!wd) {
++ pr_warn("xtalk:%x bridge create out of memory\n", widget);
++ return;
++ }
+
+ snprintf(wd->dev_id, sizeof(wd->dev_id), "bridge-%012lx",
+ IP30_SWIN_BASE(widget));
+@@ -56,24 +59,35 @@ static void bridge_platform_create(int widget, int masterwid)
+ w1_res.end = w1_res.start + 3;
+ w1_res.flags = IORESOURCE_MEM;
+
+- pdev = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
+- if (!pdev) {
+- kfree(wd);
+- goto no_mem;
++ pdev_wd = platform_device_alloc("sgi_w1", PLATFORM_DEVID_AUTO);
++ if (!pdev_wd) {
++ pr_warn("xtalk:%x bridge create out of memory\n", widget);
++ goto err_kfree_wd;
++ }
++ if (platform_device_add_resources(pdev_wd, &w1_res, 1)) {
++ pr_warn("xtalk:%x bridge failed to add platform resources.\n", widget);
++ goto err_put_pdev_wd;
++ }
++ if (platform_device_add_data(pdev_wd, wd, sizeof(*wd))) {
++ pr_warn("xtalk:%x bridge failed to add platform data.\n", widget);
++ goto err_put_pdev_wd;
++ }
++ if (platform_device_add(pdev_wd)) {
++ pr_warn("xtalk:%x bridge failed to add platform device.\n", widget);
++ goto err_put_pdev_wd;
+ }
+- platform_device_add_resources(pdev, &w1_res, 1);
+- platform_device_add_data(pdev, wd, sizeof(*wd));
+ /* platform_device_add_data() duplicates the data */
+ kfree(wd);
+- platform_device_add(pdev);
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+- if (!bd)
+- goto no_mem;
+- pdev = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
+- if (!pdev) {
+- kfree(bd);
+- goto no_mem;
++ if (!bd) {
++ pr_warn("xtalk:%x bridge create out of memory\n", widget);
++ goto err_unregister_pdev_wd;
++ }
++ pdev_bd = platform_device_alloc("xtalk-bridge", PLATFORM_DEVID_AUTO);
++ if (!pdev_bd) {
++ pr_warn("xtalk:%x bridge create out of memory\n", widget);
++ goto err_kfree_bd;
+ }
+
+ bd->bridge_addr = IP30_RAW_SWIN_BASE(widget);
+@@ -93,15 +107,31 @@ static void bridge_platform_create(int widget, int masterwid)
+ bd->io.flags = IORESOURCE_IO;
+ bd->io_offset = IP30_SWIN_BASE(widget);
+
+- platform_device_add_data(pdev, bd, sizeof(*bd));
++ if (platform_device_add_data(pdev_bd, bd, sizeof(*bd))) {
++ pr_warn("xtalk:%x bridge failed to add platform data.\n", widget);
++ goto err_put_pdev_bd;
++ }
++ if (platform_device_add(pdev_bd)) {
++ pr_warn("xtalk:%x bridge failed to add platform device.\n", widget);
++ goto err_put_pdev_bd;
++ }
+ /* platform_device_add_data() duplicates the data */
+ kfree(bd);
+- platform_device_add(pdev);
+ pr_info("xtalk:%x bridge widget\n", widget);
+ return;
+
+-no_mem:
+- pr_warn("xtalk:%x bridge create out of memory\n", widget);
++err_put_pdev_bd:
++ platform_device_put(pdev_bd);
++err_kfree_bd:
++ kfree(bd);
++err_unregister_pdev_wd:
++ platform_device_unregister(pdev_wd);
++ return;
++err_put_pdev_wd:
++ platform_device_put(pdev_wd);
++err_kfree_wd:
++ kfree(wd);
++ return;
+ }
+
+ static unsigned int __init xbow_widget_active(s8 wid)
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 69765a6dbe89d..4229ae96eb385 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -192,6 +192,11 @@ extern void __update_cache(pte_t pte);
+ #define _PAGE_PRESENT_BIT 22 /* (0x200) Software: translation valid */
+ #define _PAGE_HPAGE_BIT 21 /* (0x400) Software: Huge Page */
+ #define _PAGE_USER_BIT 20 /* (0x800) Software: User accessible page */
++#ifdef CONFIG_HUGETLB_PAGE
++#define _PAGE_SPECIAL_BIT _PAGE_DMB_BIT /* DMB feature is currently unused */
++#else
++#define _PAGE_SPECIAL_BIT _PAGE_HPAGE_BIT /* use unused HUGE PAGE bit */
++#endif
+
+ /* N.B. The bits are defined in terms of a 32 bit word above, so the */
+ /* following macro is ok for both 32 and 64 bit. */
+@@ -219,7 +224,7 @@ extern void __update_cache(pte_t pte);
+ #define _PAGE_PRESENT (1 << xlate_pabit(_PAGE_PRESENT_BIT))
+ #define _PAGE_HUGE (1 << xlate_pabit(_PAGE_HPAGE_BIT))
+ #define _PAGE_USER (1 << xlate_pabit(_PAGE_USER_BIT))
+-#define _PAGE_SPECIAL (_PAGE_DMB)
++#define _PAGE_SPECIAL (1 << xlate_pabit(_PAGE_SPECIAL_BIT))
+
+ #define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED)
+ #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index df8102fb435fc..0e5ebfe8d9d29 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -499,6 +499,10 @@
+ * Finally, _PAGE_READ goes in the top bit of PL1 (so we
+ * trigger an access rights trap in user space if the user
+ * tries to read an unreadable page */
++#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
++ /* need to drop DMB bit, as it's used as SPECIAL flag */
++ depi 0,_PAGE_SPECIAL_BIT,1,\pte
++#endif
+ depd \pte,8,7,\prot
+
+ /* PAGE_USER indicates the page can be read with user privileges,
+@@ -529,6 +533,10 @@
+ * makes the tlb entry for the differently formatted pa11
+ * insertion instructions */
+ .macro make_insert_tlb_11 spc,pte,prot
++#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
++ /* need to drop DMB bit, as it's used as SPECIAL flag */
++ depi 0,_PAGE_SPECIAL_BIT,1,\pte
++#endif
+ zdep \spc,30,15,\prot
+ dep \pte,8,7,\prot
+ extru,= \pte,_PAGE_NO_CACHE_BIT,1,%r0
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 4d8f26c1399be..d9d9f84c480b8 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -817,7 +817,7 @@ config DATA_SHIFT
+ default 24 if STRICT_KERNEL_RWX && PPC64
+ range 17 28 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_BOOK3S_32
+ range 19 23 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_8xx
+- range 20 24 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_FSL_BOOKE
++ range 20 24 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && FSL_BOOKE
+ default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
+ default 18 if (DEBUG_PAGEALLOC || KFENCE) && PPC_BOOK3S_32
+ default 23 if STRICT_KERNEL_RWX && PPC_8xx
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index d54e1fe035517..dbeba3e209c0a 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -152,7 +152,7 @@ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power8
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power9,-mtune=power8)
+ else
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+-CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
++CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power4
+ endif
+ else ifdef CONFIG_PPC_BOOK3E_64
+ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index a9cd2ea4a8617..d32d95aea5d6f 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -34,6 +34,7 @@ endif
+
+ BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
++ $(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \
+ -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
+ $(LINUXINCLUDE)
+
+diff --git a/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi b/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi
+new file mode 100644
+index 0000000000000..7e2a90cde72e5
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/e500v1_power_isa.dtsi
+@@ -0,0 +1,51 @@
++/*
++ * e500v1 Power ISA Device Tree Source (include)
++ *
++ * Copyright 2012 Freescale Semiconductor Inc.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are met:
++ * * Redistributions of source code must retain the above copyright
++ * notice, this list of conditions and the following disclaimer.
++ * * Redistributions in binary form must reproduce the above copyright
++ * notice, this list of conditions and the following disclaimer in the
++ * documentation and/or other materials provided with the distribution.
++ * * Neither the name of Freescale Semiconductor nor the
++ * names of its contributors may be used to endorse or promote products
++ * derived from this software without specific prior written permission.
++ *
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") as published by the Free Software
++ * Foundation, either version 2 of that License or (at your option) any
++ * later version.
++ *
++ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
++ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
++ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++/ {
++ cpus {
++ power-isa-version = "2.03";
++ power-isa-b; // Base
++ power-isa-e; // Embedded
++ power-isa-atb; // Alternate Time Base
++ power-isa-cs; // Cache Specification
++ power-isa-e.le; // Embedded.Little-Endian
++ power-isa-e.pm; // Embedded.Performance Monitor
++ power-isa-ecl; // Embedded Cache Locking
++ power-isa-mmc; // Memory Coherence
++ power-isa-sp; // Signal Processing Engine
++ power-isa-sp.fs; // SPE.Embedded Float Scalar Single
++ power-isa-sp.fv; // SPE.Embedded Float Vector
++ mmu-type = "power-embedded";
++ };
++};
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8540ads.dts b/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
+index 18a885130538a..e03ae130162ba 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8540ads.dts
+@@ -7,7 +7,7 @@
+
+ /dts-v1/;
+
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+
+ / {
+ model = "MPC8540ADS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8541cds.dts b/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
+index ac381e7b1c60e..a2a6c5cf852e9 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8541cds.dts
+@@ -7,7 +7,7 @@
+
+ /dts-v1/;
+
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+
+ / {
+ model = "MPC8541CDS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8555cds.dts b/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
+index 9f58db2a7e661..901b6ff06dfbb 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8555cds.dts
+@@ -7,7 +7,7 @@
+
+ /dts-v1/;
+
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+
+ / {
+ model = "MPC8555CDS";
+diff --git a/arch/powerpc/boot/dts/fsl/mpc8560ads.dts b/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
+index a24722ccaebf1..c2f9aea78b29f 100644
+--- a/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
++++ b/arch/powerpc/boot/dts/fsl/mpc8560ads.dts
+@@ -7,7 +7,7 @@
+
+ /dts-v1/;
+
+-/include/ "e500v2_power_isa.dtsi"
++/include/ "e500v1_power_isa.dtsi"
+
+ / {
+ model = "MPC8560ADS";
+diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
+index b571d084c148b..c05e37af9f1e8 100644
+--- a/arch/powerpc/configs/pseries_defconfig
++++ b/arch/powerpc/configs/pseries_defconfig
+@@ -40,6 +40,7 @@ CONFIG_PPC_SPLPAR=y
+ CONFIG_DTL=y
+ CONFIG_PPC_SMLPAR=y
+ CONFIG_IBMEBUS=y
++CONFIG_LIBNVDIMM=m
+ CONFIG_PAPR_SCM=m
+ CONFIG_PPC_SVM=y
+ # CONFIG_PPC_PMAC is not set
+diff --git a/arch/powerpc/include/asm/syscalls.h b/arch/powerpc/include/asm/syscalls.h
+index a2b13e55254fb..da40219b303a6 100644
+--- a/arch/powerpc/include/asm/syscalls.h
++++ b/arch/powerpc/include/asm/syscalls.h
+@@ -8,6 +8,18 @@
+ #include <linux/types.h>
+ #include <linux/compat.h>
+
++/*
++ * long long munging:
++ * The 32 bit ABI passes long longs in an odd even register pair.
++ * High and low parts are swapped depending on endian mode,
++ * so define a macro (similar to mips linux32) to handle that.
++ */
++#ifdef __LITTLE_ENDIAN__
++#define merge_64(low, high) (((u64)high << 32) | low)
++#else
++#define merge_64(high, low) (((u64)high << 32) | low)
++#endif
++
+ struct rtas_args;
+
+ asmlinkage long sys_mmap(unsigned long addr, size_t len,
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index 784ea3289c840..0b656b897f997 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -592,16 +592,6 @@ again:
+
+ if (unlikely(stack_store))
+ __hard_EE_RI_disable();
+- /*
+- * Returning to a kernel context with local irqs disabled.
+- * Here, if EE was enabled in the interrupted context, enable
+- * it on return as well. A problem exists here where a soft
+- * masked interrupt may have cleared MSR[EE] and set HARD_DIS
+- * here, and it will still exist on return to the caller. This
+- * will be resolved by the masked interrupt firing again.
+- */
+- if (regs->msr & MSR_EE)
+- local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
+ #endif /* CONFIG_PPC64 */
+ }
+
+diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
+index ce25b28cf418e..2ca1c037ea258 100644
+--- a/arch/powerpc/kernel/interrupt_64.S
++++ b/arch/powerpc/kernel/interrupt_64.S
+@@ -559,15 +559,54 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel)
+ ld r11,SOFTE(r1)
+ cmpwi r11,IRQS_ENABLED
+ stb r11,PACAIRQSOFTMASK(r13)
+- bne 1f
++ beq .Linterrupt_return_\srr\()_soft_enabled
++
++ /*
++ * Returning to soft-disabled context.
++ * Check if a MUST_HARD_MASK interrupt has become pending, in which
++ * case we need to disable MSR[EE] in the return context.
++ */
++ ld r12,_MSR(r1)
++ andi. r10,r12,MSR_EE
++ beq .Lfast_kernel_interrupt_return_\srr\() // EE already disabled
++ lbz r11,PACAIRQHAPPENED(r13)
++ andi. r10,r11,PACA_IRQ_MUST_HARD_MASK
++ beq .Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending
++
++ /* Must clear MSR_EE from _MSR */
++#ifdef CONFIG_PPC_BOOK3S
++ li r10,0
++ /* Clear valid before changing _MSR */
++ .ifc \srr,srr
++ stb r10,PACASRR_VALID(r13)
++ .else
++ stb r10,PACAHSRR_VALID(r13)
++ .endif
++#endif
++ xori r12,r12,MSR_EE
++ std r12,_MSR(r1)
++ b .Lfast_kernel_interrupt_return_\srr\()
++
++.Linterrupt_return_\srr\()_soft_enabled:
++ /*
++ * In the soft-enabled case, need to double-check that we have no
++ * pending interrupts that might have come in before we reached the
++ * restart section of code, and restart the exit so those can be
++ * handled.
++ *
++ * If there are none, it is possible that the interrupt still
++ * has PACA_IRQ_HARD_DIS set, which needs to be cleared for the
++ * interrupted context. This clear will not clobber a new pending
++ * interrupt coming in, because we're in the restart section, so
++ * such would return to the restart location.
++ */
+ #ifdef CONFIG_PPC_BOOK3S
+ lbz r11,PACAIRQHAPPENED(r13)
+ andi. r11,r11,(~PACA_IRQ_HARD_DIS)@l
+ bne- interrupt_return_\srr\()_kernel_restart
+ #endif
+ li r11,0
+- stb r11,PACAIRQHAPPENED(r13) # clear out possible HARD_DIS
+-1:
++ stb r11,PACAIRQHAPPENED(r13) // clear the possible HARD_DIS
+
+ .Lfast_kernel_interrupt_return_\srr\():
+ cmpdi cr1,r3,0
+diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
+index 1c97c0f177aed..ed4f6b992f974 100644
+--- a/arch/powerpc/kernel/kprobes.c
++++ b/arch/powerpc/kernel/kprobes.c
+@@ -161,7 +161,13 @@ int arch_prepare_kprobe(struct kprobe *p)
+ preempt_disable();
+ prev = get_kprobe(p->addr - 1);
+ preempt_enable_no_resched();
+- if (prev && ppc_inst_prefixed(ppc_inst_read(prev->ainsn.insn))) {
++
++ /*
++ * When prev is a ftrace-based kprobe, we don't have an insn, and it
++ * doesn't probe for prefixed instruction.
++ */
++ if (prev && !kprobe_ftrace(prev) &&
++ ppc_inst_prefixed(ppc_inst_read(prev->ainsn.insn))) {
+ printk("Cannot register a kprobe on the second word of prefixed instruction\n");
+ ret = -EINVAL;
+ }
+diff --git a/arch/powerpc/kernel/pci_dn.c b/arch/powerpc/kernel/pci_dn.c
+index 938ab8838ab54..aa221958007ef 100644
+--- a/arch/powerpc/kernel/pci_dn.c
++++ b/arch/powerpc/kernel/pci_dn.c
+@@ -330,6 +330,7 @@ struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+ INIT_LIST_HEAD(&pdn->list);
+ parent = of_get_parent(dn);
+ pdn->parent = parent ? PCI_DN(parent) : NULL;
++ of_node_put(parent);
+ if (pdn->parent)
+ list_add_tail(&pdn->list, &pdn->parent->child_list);
+
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 5761f08dae958..6562517bcb3b7 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -183,8 +183,10 @@ static void __init fixup_boot_paca(void)
+ get_paca()->cpu_start = 1;
+ /* Allow percpu accesses to work until we setup percpu data */
+ get_paca()->data_offset = 0;
+- /* Mark interrupts disabled in PACA */
++ /* Mark interrupts soft and hard disabled in PACA */
+ irq_soft_mask_set(IRQS_DISABLED);
++ get_paca()->irq_happened = PACA_IRQ_HARD_DIS;
++ WARN_ON(mfmsr() & MSR_EE);
+ }
+
+ static void __init configure_exceptions(void)
+diff --git a/arch/powerpc/kernel/sys_ppc32.c b/arch/powerpc/kernel/sys_ppc32.c
+index 16ff0399a2574..719bfc6d1e3f5 100644
+--- a/arch/powerpc/kernel/sys_ppc32.c
++++ b/arch/powerpc/kernel/sys_ppc32.c
+@@ -56,18 +56,6 @@ unsigned long compat_sys_mmap2(unsigned long addr, size_t len,
+ return sys_mmap(addr, len, prot, flags, fd, pgoff << 12);
+ }
+
+-/*
+- * long long munging:
+- * The 32 bit ABI passes long longs in an odd even register pair.
+- * High and low parts are swapped depending on endian mode,
+- * so define a macro (similar to mips linux32) to handle that.
+- */
+-#ifdef __LITTLE_ENDIAN__
+-#define merge_64(low, high) ((u64)high << 32) | low
+-#else
+-#define merge_64(high, low) ((u64)high << 32) | low
+-#endif
+-
+ compat_ssize_t compat_sys_pread64(unsigned int fd, char __user *ubuf, compat_size_t count,
+ u32 reg6, u32 pos1, u32 pos2)
+ {
+@@ -94,7 +82,7 @@ asmlinkage int compat_sys_truncate64(const char __user * path, u32 reg4,
+ asmlinkage long compat_sys_fallocate(int fd, int mode, u32 offset1, u32 offset2,
+ u32 len1, u32 len2)
+ {
+- return ksys_fallocate(fd, mode, ((loff_t)offset1 << 32) | offset2,
++ return ksys_fallocate(fd, mode, merge_64(offset1, offset2),
+ merge_64(len1, len2));
+ }
+
+diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
+index fc999140bc27e..abc3fbb3c4902 100644
+--- a/arch/powerpc/kernel/syscalls.c
++++ b/arch/powerpc/kernel/syscalls.c
+@@ -98,8 +98,8 @@ long ppc64_personality(unsigned long personality)
+ long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
+ u32 len_high, u32 len_low)
+ {
+- return ksys_fadvise64_64(fd, (u64)offset_high << 32 | offset_low,
+- (u64)len_high << 32 | len_low, advice);
++ return ksys_fadvise64_64(fd, merge_64(offset_high, offset_low),
++ merge_64(len_high, len_low), advice);
+ }
+
+ SYSCALL_DEFINE0(switch_endian)
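For context on the sys_ppc32.c and syscalls.c hunks above: the merge_64() helper they now share reassembles a 64-bit syscall argument passed in a pair of 32-bit registers, and the parameter order depends on endianness. A minimal userspace sketch of the big-endian form, mirroring the macro shown in the removed comment block:

    #include <stdint.h>

    /* Big-endian form: the high word arrives in the first register of the
     * odd/even pair; on little-endian the parameter names are swapped.
     * Keeping one shared helper instead of open-coded shifts appears to be
     * the point of the fallocate hunk, whose open-coded expression always
     * treated the first register as the high word. */
    static inline uint64_t merge_64(uint32_t high, uint32_t low)
    {
    	return ((uint64_t)high << 32) | low;
    }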
+diff --git a/arch/powerpc/math-emu/math_efp.c b/arch/powerpc/math-emu/math_efp.c
+index 39b84e7452e1b..aa3bb8da1cb9b 100644
+--- a/arch/powerpc/math-emu/math_efp.c
++++ b/arch/powerpc/math-emu/math_efp.c
+@@ -17,6 +17,7 @@
+
+ #include <linux/types.h>
+ #include <linux/prctl.h>
++#include <linux/module.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/reg.h>
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 55a8fbfdb5b28..3510b55b36f8c 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -892,6 +892,7 @@ static void opal_export_attrs(void)
+ kobj = kobject_create_and_add("exports", opal_kobj);
+ if (!kobj) {
+ pr_warn("kobject_create_and_add() of exports failed\n");
++ of_node_put(np);
+ return;
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
+index 500a1fc4a1d7d..b2a32f8a837ac 100644
+--- a/arch/powerpc/platforms/pseries/vas.c
++++ b/arch/powerpc/platforms/pseries/vas.c
+@@ -332,7 +332,7 @@ static struct vas_window *vas_allocate_window(int vas_id, u64 flags,
+ * So no unpacking needs to be done.
+ */
+ rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, domain,
+- VPHN_FLAG_VCPU, smp_processor_id());
++ VPHN_FLAG_VCPU, hard_smp_processor_id());
+ if (rc != H_SUCCESS) {
+ pr_err("H_HOME_NODE_ASSOCIATIVITY error: %d\n", rc);
+ goto out;
+diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
+index ef9a5999fa93d..73c2d70706c0a 100644
+--- a/arch/powerpc/sysdev/fsl_msi.c
++++ b/arch/powerpc/sysdev/fsl_msi.c
+@@ -209,8 +209,10 @@ static int fsl_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ dev_err(&pdev->dev,
+ "node %pOF has an invalid fsl,msi phandle %u\n",
+ hose->dn, np->phandle);
++ of_node_put(np);
+ return -EINVAL;
+ }
++ of_node_put(np);
+ }
+
+ msi_for_each_desc(entry, &pdev->dev, MSI_DESC_NOTASSOCIATED) {
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 1f02f15569749..696279ce03c92 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -52,7 +52,7 @@ config RISCV
+ select COMMON_CLK
+ select CPU_PM if CPU_IDLE
+ select EDAC_SUPPORT
+- select GENERIC_ARCH_TOPOLOGY if SMP
++ select GENERIC_ARCH_TOPOLOGY
+ select GENERIC_ATOMIC64 if !64BIT
+ select GENERIC_CLOCKEVENTS_BROADCAST if SMP
+ select GENERIC_EARLY_IOREMAP
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 81029d40a6727..ccd0e000bbefd 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -37,6 +37,7 @@ else
+ endif
+
+ ifeq ($(CONFIG_LD_IS_LLD),y)
++ifeq ($(shell test $(CONFIG_LLD_VERSION) -lt 150000; echo $$?),0)
+ KBUILD_CFLAGS += -mno-relax
+ KBUILD_AFLAGS += -mno-relax
+ ifndef CONFIG_AS_IS_LLVM
+@@ -44,6 +45,7 @@ ifndef CONFIG_AS_IS_LLVM
+ KBUILD_AFLAGS += -Wa,-mno-relax
+ endif
+ endif
++endif
+
+ # ISA string setting
+ riscv-march-$(CONFIG_ARCH_RV32I) := rv32ima
+diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
+index 69605a4742706..92080a2279372 100644
+--- a/arch/riscv/include/asm/io.h
++++ b/arch/riscv/include/asm/io.h
+@@ -101,9 +101,9 @@ __io_reads_ins(reads, u32, l, __io_br(), __io_ar(addr))
+ __io_reads_ins(ins, u8, b, __io_pbr(), __io_par(addr))
+ __io_reads_ins(ins, u16, w, __io_pbr(), __io_par(addr))
+ __io_reads_ins(ins, u32, l, __io_pbr(), __io_par(addr))
+-#define insb(addr, buffer, count) __insb((void __iomem *)(long)addr, buffer, count)
+-#define insw(addr, buffer, count) __insw((void __iomem *)(long)addr, buffer, count)
+-#define insl(addr, buffer, count) __insl((void __iomem *)(long)addr, buffer, count)
++#define insb(addr, buffer, count) __insb(PCI_IOBASE + (addr), buffer, count)
++#define insw(addr, buffer, count) __insw(PCI_IOBASE + (addr), buffer, count)
++#define insl(addr, buffer, count) __insl(PCI_IOBASE + (addr), buffer, count)
+
+ __io_writes_outs(writes, u8, b, __io_bw(), __io_aw())
+ __io_writes_outs(writes, u16, w, __io_bw(), __io_aw())
+@@ -115,22 +115,22 @@ __io_writes_outs(writes, u32, l, __io_bw(), __io_aw())
+ __io_writes_outs(outs, u8, b, __io_pbw(), __io_paw())
+ __io_writes_outs(outs, u16, w, __io_pbw(), __io_paw())
+ __io_writes_outs(outs, u32, l, __io_pbw(), __io_paw())
+-#define outsb(addr, buffer, count) __outsb((void __iomem *)(long)addr, buffer, count)
+-#define outsw(addr, buffer, count) __outsw((void __iomem *)(long)addr, buffer, count)
+-#define outsl(addr, buffer, count) __outsl((void __iomem *)(long)addr, buffer, count)
++#define outsb(addr, buffer, count) __outsb(PCI_IOBASE + (addr), buffer, count)
++#define outsw(addr, buffer, count) __outsw(PCI_IOBASE + (addr), buffer, count)
++#define outsl(addr, buffer, count) __outsl(PCI_IOBASE + (addr), buffer, count)
+
+ #ifdef CONFIG_64BIT
+ __io_reads_ins(reads, u64, q, __io_br(), __io_ar(addr))
+ #define readsq(addr, buffer, count) __readsq(addr, buffer, count)
+
+ __io_reads_ins(ins, u64, q, __io_pbr(), __io_par(addr))
+-#define insq(addr, buffer, count) __insq((void __iomem *)addr, buffer, count)
++#define insq(addr, buffer, count) __insq(PCI_IOBASE + (addr), buffer, count)
+
+ __io_writes_outs(writes, u64, q, __io_bw(), __io_aw())
+ #define writesq(addr, buffer, count) __writesq(addr, buffer, count)
+
+ __io_writes_outs(outs, u64, q, __io_pbr(), __io_paw())
+-#define outsq(addr, buffer, count) __outsq((void __iomem *)addr, buffer, count)
++#define outsq(addr, buffer, count) __outsq(PCI_IOBASE + (addr), buffer, count)
+ #endif
+
+ #include <asm-generic/io.h>
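A short sketch of why the ins/outs hunks above matter: a port number is an offset into the PCI I/O window, not a CPU address, so the old casts produced pointers into low memory. Assuming PCI_IOBASE is the base of the mapped I/O window, the corrected address formation is:

    /* Sketch only: how a port I/O accessor forms its MMIO address on
     * RISC-V, where port space is memory-mapped behind PCI_IOBASE. */
    static inline void __iomem *port_addr(unsigned long port)
    {
    	return PCI_IOBASE + port;   /* offset into the I/O window */
    	/* old macros did: (void __iomem *)(long)port -- a bogus address */
    }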
+diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
+index cedcf8ea3c766..0099dc1161683 100644
+--- a/arch/riscv/include/asm/mmu.h
++++ b/arch/riscv/include/asm/mmu.h
+@@ -16,7 +16,6 @@ typedef struct {
+ atomic_long_t id;
+ #endif
+ void *vdso;
+- void *vdso_info;
+ #ifdef CONFIG_SMP
+ /* A local icache flush is needed before user execution can resume. */
+ cpumask_t icache_stale_mask;
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index f0f36a4a0e9b8..061cf8db3156a 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -251,10 +251,10 @@ static void __init parse_dtb(void)
+ pr_info("Machine model: %s\n", name);
+ dump_stack_set_arch_desc("%s (DT)", name);
+ }
+- return;
++ } else {
++ pr_err("No DTB passed to the kernel\n");
+ }
+
+- pr_err("No DTB passed to the kernel\n");
+ #ifdef CONFIG_CMDLINE_FORCE
+ strscpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
+ pr_info("Forcing kernel command line to: %s\n", boot_command_line);
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index f1e4948a4b525..b4d5524b10773 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -49,6 +49,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ unsigned int curr_cpuid;
+
+ curr_cpuid = smp_processor_id();
++ store_cpu_topology(curr_cpuid);
+ numa_store_cpu_info(curr_cpuid);
+ numa_add_cpu(curr_cpuid);
+
+@@ -161,9 +162,9 @@ asmlinkage __visible void smp_callin(void)
+ mmgrab(mm);
+ current->active_mm = mm;
+
++ store_cpu_topology(curr_cpuid);
+ notify_cpu_starting(curr_cpuid);
+ numa_add_cpu(curr_cpuid);
+- update_siblings_masks(curr_cpuid);
+ set_cpu_online(curr_cpuid, 1);
+
+ /*
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 571556bb9261a..5d3f2fbeb33c7 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,6 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ return -EINVAL;
+
+- if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
+- return -EINVAL;
+-
+ return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ offset >> (PAGE_SHIFT - page_shift_offset));
+ }
+diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
+index 69b05b6c181b6..4abc9aebdfae2 100644
+--- a/arch/riscv/kernel/vdso.c
++++ b/arch/riscv/kernel/vdso.c
+@@ -60,6 +60,11 @@ struct __vdso_info {
+ struct vm_special_mapping *cm;
+ };
+
++static struct __vdso_info vdso_info;
++#ifdef CONFIG_COMPAT
++static struct __vdso_info compat_vdso_info;
++#endif
++
+ static int vdso_mremap(const struct vm_special_mapping *sm,
+ struct vm_area_struct *new_vma)
+ {
+@@ -114,15 +119,18 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
+ {
+ struct mm_struct *mm = task->mm;
+ struct vm_area_struct *vma;
+- struct __vdso_info *vdso_info = mm->context.vdso_info;
+
+ mmap_read_lock(mm);
+
+ for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ unsigned long size = vma->vm_end - vma->vm_start;
+
+- if (vma_is_special_mapping(vma, vdso_info->dm))
++ if (vma_is_special_mapping(vma, vdso_info.dm))
+ zap_page_range(vma, vma->vm_start, size);
++#ifdef CONFIG_COMPAT
++ if (vma_is_special_mapping(vma, compat_vdso_info.dm))
++ zap_page_range(vma, vma->vm_start, size);
++#endif
+ }
+
+ mmap_read_unlock(mm);
+@@ -264,7 +272,6 @@ static int __setup_additional_pages(struct mm_struct *mm,
+
+ vdso_base += VVAR_SIZE;
+ mm->context.vdso = (void *)vdso_base;
+- mm->context.vdso_info = (void *)vdso_info;
+
+ ret =
+ _install_special_mapping(mm, vdso_base, vdso_text_len,
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 40694f0cab9e5..849674086561c 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -184,7 +184,8 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
+ }
+ break;
+ case EXC_LOAD_PAGE_FAULT:
+- if (!(vma->vm_flags & VM_READ)) {
++ /* Write implies read */
++ if (!(vma->vm_flags & (VM_READ | VM_WRITE))) {
+ return true;
+ }
+ break;
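The pairing of the sys_riscv.c and fault.c hunks above is deliberate: once mmap() stops rejecting PROT_WRITE without PROT_READ, the fault handler must treat writable as implicitly readable, since the hardware has no write-only pages. A compilable sketch of the check, using the kernel's actual VM_READ/VM_WRITE bit values:

    #include <stdbool.h>

    #define VM_READ  0x00000001UL
    #define VM_WRITE 0x00000002UL

    /* A load faults as an access error only when the VMA is neither
     * readable nor writable -- "write implies read". */
    static bool load_access_error(unsigned long vm_flags)
    {
    	return !(vm_flags & (VM_READ | VM_WRITE));
    }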
+diff --git a/arch/sh/include/asm/sections.h b/arch/sh/include/asm/sections.h
+index 8edb824049b9e..0cb0ca149ac34 100644
+--- a/arch/sh/include/asm/sections.h
++++ b/arch/sh/include/asm/sections.h
+@@ -4,7 +4,7 @@
+
+ #include <asm-generic/sections.h>
+
+-extern long __machvec_start, __machvec_end;
++extern char __machvec_start[], __machvec_end[];
+ extern char __uncached_start, __uncached_end;
+ extern char __start_eh_frame[], __stop_eh_frame[];
+
+diff --git a/arch/sh/kernel/machvec.c b/arch/sh/kernel/machvec.c
+index d606679a211e1..57efaf5b82ae0 100644
+--- a/arch/sh/kernel/machvec.c
++++ b/arch/sh/kernel/machvec.c
+@@ -20,8 +20,8 @@
+ #define MV_NAME_SIZE 32
+
+ #define for_each_mv(mv) \
+- for ((mv) = (struct sh_machine_vector *)&__machvec_start; \
+- (mv) && (unsigned long)(mv) < (unsigned long)&__machvec_end; \
++ for ((mv) = (struct sh_machine_vector *)__machvec_start; \
++ (mv) && (unsigned long)(mv) < (unsigned long)__machvec_end; \
+ (mv)++)
+
+ static struct sh_machine_vector * __init get_mv_byname(const char *name)
+@@ -87,8 +87,8 @@ void __init sh_mv_setup(void)
+ if (!machvec_selected) {
+ unsigned long machvec_size;
+
+- machvec_size = ((unsigned long)&__machvec_end -
+- (unsigned long)&__machvec_start);
++ machvec_size = ((unsigned long)__machvec_end -
++ (unsigned long)__machvec_start);
+
+ /*
+ * Sanity check for machvec section alignment. Ensure
+@@ -102,7 +102,7 @@ void __init sh_mv_setup(void)
+ * vector (usually the only one) from .machvec.init.
+ */
+ if (machvec_size >= sizeof(struct sh_machine_vector))
+- sh_mv = *(struct sh_machine_vector *)&__machvec_start;
++ sh_mv = *(struct sh_machine_vector *)__machvec_start;
+ }
+
+ pr_notice("Booting machvec: %s\n", get_system_type());
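The sections.h/machvec.c pair above switches the section boundary symbols from extern long to incomplete char arrays. A sketch of the idiom and why it is the safer declaration:

    /* Sketch only: linker-provided boundary symbols have no storage of
     * their own. Declaring them as incomplete arrays lets the symbol
     * itself decay to the boundary address, instead of taking &sym of a
     * fake `long` object -- which modern compilers may flag or miscompile
     * based on the assumed size of a long. */
    extern char __machvec_start[], __machvec_end[];

    static unsigned long machvec_size(void)
    {
    	return (unsigned long)(__machvec_end - __machvec_start);
    }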
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index d9e023c78f568..f6f126ac34804 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -96,7 +96,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+
+ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+- return *pos < NR_CPUS ? cpu_data + *pos : NULL;
++ return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
+ }
+
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 25e2b8b75e40c..1cccedfc2a486 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -450,6 +450,11 @@ config X86_X2APIC
+ This allows 32-bit apic IDs (so it can support very large systems),
+ and accesses the local apic via MSRs not via mmio.
+
++ Some Intel systems circa 2022 and later are locked into x2APIC mode
++	  and cannot fall back to the legacy APIC modes if SGX or TDX are
++ enabled in the BIOS. They will be unable to boot without enabling
++ this option.
++
+ If you don't know what to do here, say N.
+
+ config X86_MPPARSE
+@@ -1930,7 +1935,7 @@ endchoice
+
+ config X86_SGX
+ bool "Software Guard eXtensions (SGX)"
+- depends on X86_64 && CPU_SUP_INTEL
++ depends on X86_64 && CPU_SUP_INTEL && X86_X2APIC
+ depends on CRYPTO=y
+ depends on CRYPTO_SHA256=y
+ select SRCU
+diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
+index 8cbf623f0ecfb..b472ef76826ad 100644
+--- a/arch/x86/include/asm/cpu.h
++++ b/arch/x86/include/asm/cpu.h
+@@ -94,4 +94,6 @@ static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
+ return p1 & p2;
+ }
+
++extern u64 x86_read_arch_cap_msr(void);
++
+ #endif /* _ASM_X86_CPU_H */
+diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
+index 0a9407dc08598..6f0acc45e67a7 100644
+--- a/arch/x86/include/asm/hyperv-tlfs.h
++++ b/arch/x86/include/asm/hyperv-tlfs.h
+@@ -546,7 +546,7 @@ struct hv_enlightened_vmcs {
+ u64 guest_rip;
+
+ u32 hv_clean_fields;
+- u32 hv_padding_32;
++ u32 padding32_1;
+ u32 hv_synthetic_controls;
+ struct {
+ u32 nested_flush_hypercall:1;
+@@ -554,7 +554,7 @@ struct hv_enlightened_vmcs {
+ u32 reserved:30;
+ } __packed hv_enlightenments_control;
+ u32 hv_vp_id;
+-
++ u32 padding32_2;
+ u64 hv_vm_id;
+ u64 partition_assist_page;
+ u64 padding64_4[4];
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 0c3d3440fe278..aa675783412f8 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -9,6 +9,7 @@
+ struct ucode_patch {
+ struct list_head plist;
+ void *data; /* Intel uses only this one */
++ unsigned int size;
+ u32 patch_id;
+ u16 equiv_cpu;
+ };
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index e057e039173cb..9267bfe3c33f1 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -155,6 +155,11 @@
+ * Return Stack Buffer Predictions.
+ */
+
++#define ARCH_CAP_XAPIC_DISABLE BIT(21) /*
++ * IA32_XAPIC_DISABLE_STATUS MSR
++ * supported
++ */
++
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+ * Writeback and invalidate the
+@@ -1046,4 +1051,12 @@
+ #define MSR_IA32_HW_FEEDBACK_PTR 0x17d0
+ #define MSR_IA32_HW_FEEDBACK_CONFIG 0x17d1
+
++/* x2APIC locked status */
++#define MSR_IA32_XAPIC_DISABLE_STATUS 0xBD
++#define LEGACY_XAPIC_DISABLED BIT(0) /*
++ * x2APIC mode is locked and
++ * disabling x2APIC will cause
++ * a #GP
++ */
++
+ #endif /* _ASM_X86_MSR_INDEX_H */
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 89df6c6617f50..bc2e1b67319d3 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -414,8 +414,17 @@ int paravirt_disable_iospace(void);
+ "=c" (__ecx)
+ #define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS, "=a" (__eax)
+
+-/* void functions are still allowed [re]ax for scratch */
++/*
++ * void functions are still allowed [re]ax for scratch.
++ *
++ * The ZERO_CALL_USED_REGS feature may end up zeroing out callee-saved
++ * registers. Make sure we model this with the appropriate clobbers.
++ */
++#ifdef CONFIG_ZERO_CALL_USED_REGS
++#define PVOP_VCALLEE_CLOBBERS "=a" (__eax), PVOP_VCALL_CLOBBERS
++#else
+ #define PVOP_VCALLEE_CLOBBERS "=a" (__eax)
++#endif
+ #define PVOP_CALLEE_CLOBBERS PVOP_VCALLEE_CLOBBERS
+
+ #define EXTRA_CLOBBERS , "r8", "r9", "r10", "r11"
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 189d3a5e471ad..665993b2e80d2 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -61,6 +61,7 @@
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ #include <asm/irq_regs.h>
++#include <asm/cpu.h>
+
+ unsigned int num_processors;
+
+@@ -1756,11 +1757,26 @@ EXPORT_SYMBOL_GPL(x2apic_mode);
+
+ enum {
+ X2APIC_OFF,
+- X2APIC_ON,
+ X2APIC_DISABLED,
++ /* All states below here have X2APIC enabled */
++ X2APIC_ON,
++ X2APIC_ON_LOCKED
+ };
+ static int x2apic_state;
+
++static bool x2apic_hw_locked(void)
++{
++ u64 ia32_cap;
++ u64 msr;
++
++ ia32_cap = x86_read_arch_cap_msr();
++ if (ia32_cap & ARCH_CAP_XAPIC_DISABLE) {
++ rdmsrl(MSR_IA32_XAPIC_DISABLE_STATUS, msr);
++ return (msr & LEGACY_XAPIC_DISABLED);
++ }
++ return false;
++}
++
+ static void __x2apic_disable(void)
+ {
+ u64 msr;
+@@ -1798,6 +1814,10 @@ static int __init setup_nox2apic(char *str)
+ apicid);
+ return 0;
+ }
++ if (x2apic_hw_locked()) {
++ pr_warn("APIC locked in x2apic mode, can't disable\n");
++ return 0;
++ }
+ pr_warn("x2apic already enabled.\n");
+ __x2apic_disable();
+ }
+@@ -1812,10 +1832,18 @@ early_param("nox2apic", setup_nox2apic);
+ void x2apic_setup(void)
+ {
+ /*
+- * If x2apic is not in ON state, disable it if already enabled
++ * Try to make the AP's APIC state match that of the BSP, but if the
++ * BSP is unlocked and the AP is locked then there is a state mismatch.
++ * Warn about the mismatch in case a GP fault occurs due to a locked AP
++ * trying to be turned off.
++ */
++ if (x2apic_state != X2APIC_ON_LOCKED && x2apic_hw_locked())
++ pr_warn("x2apic lock mismatch between BSP and AP.\n");
++ /*
++ * If x2apic is not in ON or LOCKED state, disable it if already enabled
+ * from BIOS.
+ */
+- if (x2apic_state != X2APIC_ON) {
++ if (x2apic_state < X2APIC_ON) {
+ __x2apic_disable();
+ return;
+ }
+@@ -1836,6 +1864,11 @@ static __init void x2apic_disable(void)
+ if (x2apic_id >= 255)
+ panic("Cannot disable x2apic, id: %08x\n", x2apic_id);
+
++ if (x2apic_hw_locked()) {
++ pr_warn("Cannot disable locked x2apic, id: %08x\n", x2apic_id);
++ return;
++ }
++
+ __x2apic_disable();
+ register_lapic_address(mp_lapic_addr);
+ }
+@@ -1894,7 +1927,10 @@ void __init check_x2apic(void)
+ if (x2apic_enabled()) {
+ pr_info("x2apic: enabled by BIOS, switching to x2apic ops\n");
+ x2apic_mode = 1;
+- x2apic_state = X2APIC_ON;
++ if (x2apic_hw_locked())
++ x2apic_state = X2APIC_ON_LOCKED;
++ else
++ x2apic_state = X2APIC_ON;
+ } else if (!boot_cpu_has(X86_FEATURE_X2APIC)) {
+ x2apic_state = X2APIC_DISABLED;
+ }
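The reordering of the state enum in the apic.c hunk above is load-bearing: every state at or above X2APIC_ON now means "enabled", so a single comparison covers both the OFF and DISABLED cases while leaving the new LOCKED state alone. A compilable sketch:

    /* Order matters: all states >= X2APIC_ON have x2APIC enabled, so
     * "state < X2APIC_ON" replaces "state != X2APIC_ON" once a LOCKED
     * state exists that must also be left untouched. */
    enum x2apic_state { X2APIC_OFF, X2APIC_DISABLED, X2APIC_ON, X2APIC_ON_LOCKED };

    static int should_disable(enum x2apic_state state)
    {
    	return state < X2APIC_ON;   /* OFF or DISABLED: safe to turn off */
    }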
+diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c
+index da696eb4821a0..e77032c5f85cc 100644
+--- a/arch/x86/kernel/cpu/feat_ctl.c
++++ b/arch/x86/kernel/cpu/feat_ctl.c
+@@ -1,11 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/tboot.h>
+
++#include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+ #include <asm/msr-index.h>
+ #include <asm/processor.h>
+ #include <asm/vmx.h>
+-#include "cpu.h"
+
+ #undef pr_fmt
+ #define pr_fmt(fmt) "x86/cpu: " fmt
+diff --git a/arch/x86/kernel/cpu/mce/apei.c b/arch/x86/kernel/cpu/mce/apei.c
+index 717192915f28a..8ed341714686a 100644
+--- a/arch/x86/kernel/cpu/mce/apei.c
++++ b/arch/x86/kernel/cpu/mce/apei.c
+@@ -29,15 +29,26 @@
+ void apei_mce_report_mem_error(int severity, struct cper_sec_mem_err *mem_err)
+ {
+ struct mce m;
++ int lsb;
+
+ if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
+ return;
+
++ /*
++	 * Even if the ->validation_bits are set for the address mask,
++	 * to be extra safe, check for and reject an error radius of '0',
++ * and fall back to the default page size.
++ */
++ if (mem_err->validation_bits & CPER_MEM_VALID_PA_MASK)
++ lsb = find_first_bit((void *)&mem_err->physical_addr_mask, PAGE_SHIFT);
++ else
++ lsb = PAGE_SHIFT;
++
+ mce_setup(&m);
+ m.bank = -1;
+ /* Fake a memory read error with unknown channel */
+ m.status = MCI_STATUS_VAL | MCI_STATUS_EN | MCI_STATUS_ADDRV | MCI_STATUS_MISCV | 0x9f;
+- m.misc = (MCI_MISC_ADDR_PHYS << 6) | PAGE_SHIFT;
++ m.misc = (MCI_MISC_ADDR_PHYS << 6) | lsb;
+
+ if (severity >= GHES_SEV_RECOVERABLE)
+ m.status |= MCI_STATUS_UC;
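The granularity logic in the apei.c hunk above reduces to plain bit arithmetic: the error radius is encoded in the trailing zero bits of the reported physical address mask, and a mask with no set bits in the low PAGE_SHIFT positions would claim an impossibly small radius. A rough userspace sketch (the kernel uses find_first_bit() limited to PAGE_SHIFT bits, which has the same effect):

    #include <stdint.h>

    #define PAGE_SHIFT 12

    /* Approximate sketch of the lsb computation: position of the lowest
     * set bit in the address mask, falling back to 4 KiB page
     * granularity when no bit is set below PAGE_SHIFT. */
    static int mask_to_lsb(uint64_t physical_addr_mask)
    {
    	if ((physical_addr_mask & ((1ULL << PAGE_SHIFT) - 1)) == 0)
    		return PAGE_SHIFT;                  /* reject radius '0' */
    	return __builtin_ctzll(physical_addr_mask); /* lowest set bit */
    }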
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 8b2fcdfa6d316..615bc6efa1dd4 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -788,6 +788,7 @@ static int verify_and_add_patch(u8 family, u8 *fw, unsigned int leftover,
+ kfree(patch);
+ return -EINVAL;
+ }
++ patch->size = *patch_size;
+
+ mc_hdr = (struct microcode_header_amd *)(fw + SECTION_HDR_SIZE);
+ proc_id = mc_hdr->processor_rev_id;
+@@ -869,7 +870,7 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+ return ret;
+
+ memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+- memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
++ memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
+
+ return ret;
+ }
+diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+index db813f819ad6c..4d8398986f784 100644
+--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
++++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+@@ -420,6 +420,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ struct pseudo_lock_region *plr = rdtgrp->plr;
+ u32 rmid_p, closid_p;
+ unsigned long i;
++ u64 saved_msr;
+ #ifdef CONFIG_KASAN
+ /*
+ * The registers used for local register variables are also used
+@@ -463,6 +464,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ * the buffer and evict pseudo-locked memory read earlier from the
+ * cache.
+ */
++ saved_msr = __rdmsr(MSR_MISC_FEATURE_CONTROL);
+ __wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+ closid_p = this_cpu_read(pqr_state.cur_closid);
+ rmid_p = this_cpu_read(pqr_state.cur_rmid);
+@@ -514,7 +516,7 @@ static int pseudo_lock_fn(void *_rdtgrp)
+ __wrmsr(IA32_PQR_ASSOC, rmid_p, closid_p);
+
+ /* Re-enable the hardware prefetcher(s) */
+- wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++ wrmsrl(MSR_MISC_FEATURE_CONTROL, saved_msr);
+ local_irq_enable();
+
+ plr->thread_done = 1;
+@@ -871,6 +873,7 @@ bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d)
+ static int measure_cycles_lat_fn(void *_plr)
+ {
+ struct pseudo_lock_region *plr = _plr;
++ u32 saved_low, saved_high;
+ unsigned long i;
+ u64 start, end;
+ void *mem_r;
+@@ -879,6 +882,7 @@ static int measure_cycles_lat_fn(void *_plr)
+ /*
+ * Disable hardware prefetchers.
+ */
++ rdmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+ mem_r = READ_ONCE(plr->kmem);
+ /*
+@@ -895,7 +899,7 @@ static int measure_cycles_lat_fn(void *_plr)
+ end = rdtsc_ordered();
+ trace_pseudo_lock_mem_latency((u32)(end - start));
+ }
+- wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++ wrmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ local_irq_enable();
+ plr->thread_done = 1;
+ wake_up_interruptible(&plr->lock_thread_wq);
+@@ -940,6 +944,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ u64 hits_before = 0, hits_after = 0, miss_before = 0, miss_after = 0;
+ struct perf_event *miss_event, *hit_event;
+ int hit_pmcnum, miss_pmcnum;
++ u32 saved_low, saved_high;
+ unsigned int line_size;
+ unsigned int size;
+ unsigned long i;
+@@ -973,6 +978,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ /*
+ * Disable hardware prefetchers.
+ */
++ rdmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
+
+ /* Initialize rest of local variables */
+@@ -1031,7 +1037,7 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
+ */
+ rmb();
+ /* Re-enable hardware prefetchers */
+- wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
++ wrmsr(MSR_MISC_FEATURE_CONTROL, saved_low, saved_high);
+ local_irq_enable();
+ out_hit:
+ perf_event_release_kernel(hit_event);
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 0c4a866813b31..695a5d159de87 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1955,7 +1955,7 @@ static int em_pop_sreg(struct x86_emulate_ctxt *ctxt)
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+
+- if (ctxt->modrm_reg == VCPU_SREG_SS)
++ if (seg == VCPU_SREG_SS)
+ ctxt->interruptibility = KVM_X86_SHADOW_INT_MOV_SS;
+ if (ctxt->op_bytes > 2)
+ rsp_increment(ctxt, ctxt->op_bytes - 2);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 67215fd6bd4a5..b98b8cede2642 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2322,9 +2322,14 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
+ * are emulated by vmx_set_efer() in prepare_vmcs02(), but speculate
+ * on the related bits (if supported by the CPU) in the hope that
+ * we can avoid VMWrites during vmx_set_efer().
++ *
++ * Similarly, take vmcs01's PERF_GLOBAL_CTRL in the hope that if KVM is
++ * loading PERF_GLOBAL_CTRL via the VMCS for L1, then KVM will want to
++ * do the same for L2.
+ */
+ exec_control = __vm_entry_controls_get(vmcs01);
+- exec_control |= vmcs12->vm_entry_controls;
++ exec_control |= (vmcs12->vm_entry_controls &
++ ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
+ exec_control &= ~(VM_ENTRY_IA32E_MODE | VM_ENTRY_LOAD_IA32_EFER);
+ if (cpu_has_load_ia32_efer()) {
+ if (guest_efer & EFER_LMA)
+@@ -3834,7 +3839,16 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
+ u32 intr_info = nr | INTR_INFO_VALID_MASK;
+
+ if (vcpu->arch.exception.has_error_code) {
+- vmcs12->vm_exit_intr_error_code = vcpu->arch.exception.error_code;
++ /*
++ * Intel CPUs do not generate error codes with bits 31:16 set,
++ * and more importantly VMX disallows setting bits 31:16 in the
++ * injected error code for VM-Entry. Drop the bits to mimic
++ * hardware and avoid inducing failure on nested VM-Entry if L1
++ * chooses to inject the exception back to L2. AMD CPUs _do_
++ * generate "full" 32-bit error codes, so KVM allows userspace
++ * to inject exception error codes with bits 31:16 set.
++ */
++ vmcs12->vm_exit_intr_error_code = (u16)vcpu->arch.exception.error_code;
+ intr_info |= INTR_INFO_DELIVER_CODE_MASK;
+ }
+
+@@ -4264,14 +4278,6 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ nested_vmx_abort(vcpu,
+ VMX_ABORT_SAVE_GUEST_MSR_FAIL);
+ }
+-
+- /*
+- * Drop what we picked up for L2 via vmx_complete_interrupts. It is
+- * preserved above and would only end up incorrectly in L1.
+- */
+- vcpu->arch.nmi_injected = false;
+- kvm_clear_exception_queue(vcpu);
+- kvm_clear_interrupt_queue(vcpu);
+ }
+
+ /*
+@@ -4611,6 +4617,17 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ WARN_ON_ONCE(nested_early_check);
+ }
+
++ /*
++ * Drop events/exceptions that were queued for re-injection to L2
++ * (picked up via vmx_complete_interrupts()), as well as exceptions
++ * that were pending for L2. Note, this must NOT be hoisted above
++ * prepare_vmcs12(), events/exceptions queued for re-injection need to
++ * be captured in vmcs12 (see vmcs12_save_pending_event()).
++ */
++ vcpu->arch.nmi_injected = false;
++ kvm_clear_exception_queue(vcpu);
++ kvm_clear_interrupt_queue(vcpu);
++
+ vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+
+ /* Update any VMCS fields that might have changed while L2 ran */
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index b09a50e0af29d..98526e708f327 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1687,7 +1687,17 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
+ kvm_deliver_exception_payload(vcpu);
+
+ if (has_error_code) {
+- vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code);
++ /*
++ * Despite the error code being architecturally defined as 32
++ * bits, and the VMCS field being 32 bits, Intel CPUs and thus
++	 * VMX don't actually support setting bits 31:16. Hardware
++	 * will (should) never provide a bogus error code, but AMD CPUs
++	 * do generate error codes with bits 31:16 set, and so KVM's
++	 * ABI lets userspace shove in arbitrary 32-bit values. Drop
++	 * the upper bits to avoid VM-Fail; losing information that
++	 * doesn't really exist is preferable to killing the VM.
++ */
++ vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, (u16)error_code);
+ intr_info |= INTR_INFO_DELIVER_CODE_MASK;
+ }
+
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 41d170653e8d9..fc4d899f10f65 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -2216,7 +2216,7 @@ cleanup:
+ return ret;
+ }
+
+-static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
++static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs, u8 *image, u8 *buf)
+ {
+ u8 *jg_reloc, *prog = *pprog;
+ int pivot, err, jg_bytes = 1;
+@@ -2232,12 +2232,12 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
+ EMIT2_off32(0x81, add_1reg(0xF8, BPF_REG_3),
+ progs[a]);
+ err = emit_cond_near_jump(&prog, /* je func */
+- (void *)progs[a], prog,
++ (void *)progs[a], image + (prog - buf),
+ X86_JE);
+ if (err)
+ return err;
+
+- emit_indirect_jump(&prog, 2 /* rdx */, prog);
++ emit_indirect_jump(&prog, 2 /* rdx */, image + (prog - buf));
+
+ *pprog = prog;
+ return 0;
+@@ -2262,7 +2262,7 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
+ jg_reloc = prog;
+
+ err = emit_bpf_dispatcher(&prog, a, a + pivot, /* emit lower_part */
+- progs);
++ progs, image, buf);
+ if (err)
+ return err;
+
+@@ -2276,7 +2276,7 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs)
+ emit_code(jg_reloc - jg_bytes, jg_offset, jg_bytes);
+
+ err = emit_bpf_dispatcher(&prog, a + pivot + 1, /* emit upper_part */
+- b, progs);
++ b, progs, image, buf);
+ if (err)
+ return err;
+
+@@ -2296,12 +2296,12 @@ static int cmp_ips(const void *a, const void *b)
+ return 0;
+ }
+
+-int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs)
++int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs)
+ {
+- u8 *prog = image;
++ u8 *prog = buf;
+
+ sort(funcs, num_funcs, sizeof(funcs[0]), cmp_ips, NULL);
+- return emit_bpf_dispatcher(&prog, 0, num_funcs - 1, funcs);
++ return emit_bpf_dispatcher(&prog, 0, num_funcs - 1, funcs, image, buf);
+ }
+
+ struct x64_jit_data {
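The thread running through the bpf_jit_comp.c hunk above: the dispatcher is emitted into a writable buffer (buf) but executes from its final mapping (image), so every relative jump must be encoded against the instruction's eventual address, not its position in the scratch buffer. A small sketch of the address translation, with hypothetical names:

    #include <stdint.h>

    /* Sketch only: translate a cursor inside the scratch buffer to its
     * runtime address, then compute the rel32 displacement a near jump
     * needs (target minus the address of the *next* instruction). */
    static int32_t rel32_for_jump(uint8_t *image, uint8_t *buf, uint8_t *prog,
    			      uint8_t *target, int insn_len)
    {
    	uint8_t *ip = image + (prog - buf);   /* where this insn will run */

    	return (int32_t)(target - (ip + insn_len));
    }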
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 0ed2e487a693f..9b1a58dda935b 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -765,6 +765,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ {
+ static DEFINE_SPINLOCK(lock);
+ static struct trap_info traps[257];
++ static const struct trap_info zero = { };
+ unsigned out;
+
+ trace_xen_cpu_load_idt(desc);
+@@ -774,7 +775,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
+
+ out = xen_convert_trap_info(desc, traps, false);
+- memset(&traps[out], 0, sizeof(traps[0]));
++ traps[out] = zero;
+
+ xen_mc_flush();
+ if (HYPERVISOR_set_trap_table(traps))
+diff --git a/block/bio.c b/block/bio.c
+index eb7cc591ee931..225e2edcb5049 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -760,8 +760,6 @@ EXPORT_SYMBOL(bio_put);
+ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp)
+ {
+ bio_set_flag(bio, BIO_CLONED);
+- if (bio_flagged(bio_src, BIO_THROTTLED))
+- bio_set_flag(bio, BIO_THROTTLED);
+ bio->bi_ioprio = bio_src->bi_ioprio;
+ bio->bi_iter = bio_src->bi_iter;
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 69d0a58f9e2f1..302b8d92deef1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4481,14 +4481,14 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ list_add(&qe->node, head);
+
+ /*
+- * After elevator_switch_mq, the previous elevator_queue will be
++ * After elevator_switch, the previous elevator_queue will be
+	 * released by elevator_release. The reference to the io scheduler
+	 * module taken by elevator_get will also be put. So we need to take
+	 * a reference to the io scheduler module here to prevent it from
+	 * being removed.
+ */
+ __module_get(qe->type->elevator_owner);
+- elevator_switch_mq(q, NULL);
++ elevator_switch(q, NULL);
+ mutex_unlock(&q->sysfs_lock);
+
+ return true;
+@@ -4520,7 +4520,7 @@ static void blk_mq_elv_switch_back(struct list_head *head,
+ kfree(qe);
+
+ mutex_lock(&q->sysfs_lock);
+- elevator_switch_mq(q, t);
++ elevator_switch(q, t);
+ mutex_unlock(&q->sysfs_lock);
+ }
+
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 139b2d7a99e2f..acdd85a07f923 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -806,12 +806,12 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ u64 bps_limit, unsigned long *wait)
+ {
+ bool rw = bio_data_dir(bio);
+- u64 bytes_allowed, extra_bytes, tmp;
++ u64 bytes_allowed, extra_bytes;
+ unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
+ unsigned int bio_size = throtl_bio_data_size(bio);
+
+ /* no need to throttle if this bio's bytes have been accounted */
+- if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
++ if (bps_limit == U64_MAX || bio_flagged(bio, BIO_BPS_THROTTLED)) {
+ if (wait)
+ *wait = 0;
+ return true;
+@@ -824,10 +824,8 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ jiffy_elapsed_rnd = tg->td->throtl_slice;
+
+ jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice);
+-
+- tmp = bps_limit * jiffy_elapsed_rnd;
+- do_div(tmp, HZ);
+- bytes_allowed = tmp;
++ bytes_allowed = mul_u64_u64_div_u64(bps_limit, (u64)jiffy_elapsed_rnd,
++ (u64)HZ);
+
+ if (tg->bytes_disp[rw] + bio_size <= bytes_allowed) {
+ if (wait)
+@@ -921,22 +919,13 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
+ unsigned int bio_size = throtl_bio_data_size(bio);
+
+ /* Charge the bio to the group */
+- if (!bio_flagged(bio, BIO_THROTTLED)) {
++ if (!bio_flagged(bio, BIO_BPS_THROTTLED)) {
+ tg->bytes_disp[rw] += bio_size;
+ tg->last_bytes_disp[rw] += bio_size;
+ }
+
+ tg->io_disp[rw]++;
+ tg->last_io_disp[rw]++;
+-
+- /*
+- * BIO_THROTTLED is used to prevent the same bio to be throttled
+- * more than once as a throttled bio will go through blk-throtl the
+- * second time when it eventually gets issued. Set it when a bio
+- * is being charged to a tg.
+- */
+- if (!bio_flagged(bio, BIO_THROTTLED))
+- bio_set_flag(bio, BIO_THROTTLED);
+ }
+
+ /**
+@@ -1026,6 +1015,7 @@ static void tg_dispatch_one_bio(struct throtl_grp *tg, bool rw)
+ sq->nr_queued[rw]--;
+
+ throtl_charge_bio(tg, bio);
++ bio_set_flag(bio, BIO_BPS_THROTTLED);
+
+ /*
+ * If our parent is another tg, we just need to transfer @bio to
+@@ -2159,8 +2149,10 @@ again:
+ qn = &tg->qnode_on_parent[rw];
+ sq = sq->parent_sq;
+ tg = sq_to_tg(sq);
+- if (!tg)
++ if (!tg) {
++ bio_set_flag(bio, BIO_BPS_THROTTLED);
+ goto out_unlock;
++ }
+ }
+
+ /* out-of-limit, queue to @tg */
+@@ -2189,8 +2181,6 @@ again:
+ }
+
+ out_unlock:
+- bio_set_flag(bio, BIO_THROTTLED);
+-
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+ if (throttled || !td->track_bio_latency)
+ bio->bi_issue.value |= BIO_ISSUE_THROTL_SKIP_LATENCY;
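Two things happen in the blk-throttle hunks above: BIO_THROTTLED becomes BIO_BPS_THROTTLED and is set only once dispatch is decided, and the bytes_allowed computation moves to mul_u64_u64_div_u64() because bps_limit * jiffies can overflow 64 bits. A compilable sketch of the overflow-safe form, using a 128-bit intermediate roughly as the kernel helper does on 64-bit architectures:

    #include <stdint.h>

    /* A bps_limit near U64_MAX multiplied by even a few jiffies wraps a
     * 64-bit product; widening to 128 bits before the divide keeps the
     * quotient exact. */
    static uint64_t bytes_allowed(uint64_t bps_limit, uint64_t jiffies_elapsed,
    			      uint64_t hz)
    {
    	return (uint64_t)(((unsigned __int128)bps_limit * jiffies_elapsed) / hz);
    }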
+diff --git a/block/blk-throttle.h b/block/blk-throttle.h
+index c1b6029961272..ee7299e6dea91 100644
+--- a/block/blk-throttle.h
++++ b/block/blk-throttle.h
+@@ -175,7 +175,7 @@ static inline bool blk_throtl_bio(struct bio *bio)
+ struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
+
+ /* no need to throttle bps any more if the bio has been throttled */
+- if (bio_flagged(bio, BIO_THROTTLED) &&
++ if (bio_flagged(bio, BIO_BPS_THROTTLED) &&
+ !(tg->flags & THROTL_TG_HAS_IOPS_LIMIT))
+ return false;
+
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index ae6ea0b545799..e91d334b2788c 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -841,8 +841,11 @@ int wbt_init(struct request_queue *q)
+ rwb->last_comp = rwb->last_issue = jiffies;
+ rwb->win_nsec = RWB_WINDOW_NSEC;
+ rwb->enable_state = WBT_STATE_ON_DEFAULT;
+- rwb->wc = 1;
++ rwb->wc = test_bit(QUEUE_FLAG_WC, &q->queue_flags);
+ rwb->rq_depth.default_depth = RWB_DEF_DEPTH;
++ rwb->min_lat_nsec = wbt_default_latency_nsec(q);
++
++ wbt_queue_depth_changed(&rwb->rqos);
+
+ /*
+ * Assign rwb and add the stats callback.
+@@ -853,11 +856,6 @@ int wbt_init(struct request_queue *q)
+
+ blk_stat_add_callback(q, rwb->cb);
+
+- rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+-
+- wbt_queue_depth_changed(&rwb->rqos);
+- wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
+-
+ return 0;
+
+ err_free:
+diff --git a/block/blk.h b/block/blk.h
+index 0d6668663ab5d..af2aaea239665 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -260,8 +260,7 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
+
+ void blk_insert_flush(struct request *rq);
+
+-int elevator_switch_mq(struct request_queue *q,
+- struct elevator_type *new_e);
++int elevator_switch(struct request_queue *q, struct elevator_type *new_e);
+ void elevator_exit(struct request_queue *q);
+ int elv_register_queue(struct request_queue *q, bool uevent);
+ void elv_unregister_queue(struct request_queue *q);
+diff --git a/block/elevator.c b/block/elevator.c
+index c319765892bb9..bd71f0fc4e4b6 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -588,7 +588,7 @@ void elv_unregister(struct elevator_type *e)
+ }
+ EXPORT_SYMBOL_GPL(elv_unregister);
+
+-int elevator_switch_mq(struct request_queue *q,
++static int elevator_switch_mq(struct request_queue *q,
+ struct elevator_type *new_e)
+ {
+ int ret;
+@@ -723,7 +723,7 @@ void elevator_init_mq(struct request_queue *q)
+ * need for the new one. this way we have a chance of going back to the old
+ * one, if the new one fails init for some reason.
+ */
+-static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
++int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
+ {
+ int err;
+
+diff --git a/crypto/akcipher.c b/crypto/akcipher.c
+index f866085c8a4a3..ab975a420e1e9 100644
+--- a/crypto/akcipher.c
++++ b/crypto/akcipher.c
+@@ -120,6 +120,12 @@ static int akcipher_default_op(struct akcipher_request *req)
+ return -ENOSYS;
+ }
+
++static int akcipher_default_set_key(struct crypto_akcipher *tfm,
++ const void *key, unsigned int keylen)
++{
++ return -ENOSYS;
++}
++
+ int crypto_register_akcipher(struct akcipher_alg *alg)
+ {
+ struct crypto_alg *base = &alg->base;
+@@ -132,6 +138,8 @@ int crypto_register_akcipher(struct akcipher_alg *alg)
+ alg->encrypt = akcipher_default_op;
+ if (!alg->decrypt)
+ alg->decrypt = akcipher_default_op;
++ if (!alg->set_priv_key)
++ alg->set_priv_key = akcipher_default_set_key;
+
+ akcipher_prepare_alg(alg);
+ return crypto_register_alg(base);
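The akcipher hunk above follows the existing akcipher_default_op pattern: optional callbacks are pointed at a stub at registration time so that callers never have to NULL-check before invoking. A self-contained sketch of the pattern, with hypothetical names:

    #include <errno.h>
    #include <stddef.h>

    struct alg_ops {
    	int (*set_priv_key)(const void *key, unsigned int keylen);
    };

    static int default_set_key(const void *key, unsigned int keylen)
    {
    	(void)key;
    	(void)keylen;
    	return -ENOSYS;   /* operation not implemented by this algorithm */
    }

    /* Fill in any missing callback once, at registration, so every later
     * call site can invoke it unconditionally. */
    static void prepare_alg(struct alg_ops *ops)
    {
    	if (!ops->set_priv_key)
    		ops->set_priv_key = default_set_key;
    }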
+diff --git a/drivers/acpi/acpi_fpdt.c b/drivers/acpi/acpi_fpdt.c
+index 6922a44b3ce70..a2056c4c8cb70 100644
+--- a/drivers/acpi/acpi_fpdt.c
++++ b/drivers/acpi/acpi_fpdt.c
+@@ -143,6 +143,23 @@ static const struct attribute_group boot_attr_group = {
+
+ static struct kobject *fpdt_kobj;
+
++#if defined CONFIG_X86 && defined CONFIG_PHYS_ADDR_T_64BIT
++#include <linux/processor.h>
++static bool fpdt_address_valid(u64 address)
++{
++ /*
++ * On some systems the table contains invalid addresses
++ * with unsuppored high address bits set, check for this.
++	 * with unsupported high address bits set, check for this.
++ return !(address >> boot_cpu_data.x86_phys_bits);
++}
++#else
++static bool fpdt_address_valid(u64 address)
++{
++ return true;
++}
++#endif
++
+ static int fpdt_process_subtable(u64 address, u32 subtable_type)
+ {
+ struct fpdt_subtable_header *subtable_header;
+@@ -151,6 +168,11 @@ static int fpdt_process_subtable(u64 address, u32 subtable_type)
+ u32 length, offset;
+ int result;
+
++ if (!fpdt_address_valid(address)) {
++ pr_info(FW_BUG "invalid physical address: 0x%llx!\n", address);
++ return -EINVAL;
++ }
++
+ subtable_header = acpi_os_map_memory(address, sizeof(*subtable_header));
+ if (!subtable_header)
+ return -ENOMEM;
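The validity test added in acpi_fpdt.c above reduces to a single shift: an address is representable only if no bits above the CPU's physical address width are set. A minimal sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* True when `address` fits in `phys_bits` bits of physical address
     * space; firmware table entries failing this are rejected as FW_BUG. */
    static bool fpdt_address_valid(uint64_t address, unsigned int phys_bits)
    {
    	return (address >> phys_bits) == 0;
    }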
+diff --git a/drivers/acpi/acpi_pcc.c b/drivers/acpi/acpi_pcc.c
+index a12b55d812096..ee4ce5ba1fb24 100644
+--- a/drivers/acpi/acpi_pcc.c
++++ b/drivers/acpi/acpi_pcc.c
+@@ -23,6 +23,12 @@
+
+ #include <acpi/pcc.h>
+
++/*
++ * Arbitrary retries in case the remote processor is slow to respond
++ * to PCC commands
++ */
++#define PCC_CMD_WAIT_RETRIES_NUM 500
++
+ struct pcc_data {
+ struct pcc_mbox_chan *pcc_chan;
+ void __iomem *pcc_comm_addr;
+@@ -63,6 +69,7 @@ acpi_pcc_address_space_setup(acpi_handle region_handle, u32 function,
+ if (IS_ERR(data->pcc_chan)) {
+ pr_err("Failed to find PCC channel for subspace %d\n",
+ ctx->subspace_id);
++ kfree(data);
+ return AE_NOT_FOUND;
+ }
+
+@@ -72,6 +79,8 @@ acpi_pcc_address_space_setup(acpi_handle region_handle, u32 function,
+ if (!data->pcc_comm_addr) {
+ pr_err("Failed to ioremap PCC comm region mem for %d\n",
+ ctx->subspace_id);
++ pcc_mbox_free_channel(data->pcc_chan);
++ kfree(data);
+ return AE_NO_MEMORY;
+ }
+
+@@ -86,6 +95,7 @@ acpi_pcc_address_space_handler(u32 function, acpi_physical_address addr,
+ {
+ int ret;
+ struct pcc_data *data = region_context;
++ u64 usecs_lat;
+
+ reinit_completion(&data->done);
+
+@@ -96,10 +106,22 @@ acpi_pcc_address_space_handler(u32 function, acpi_physical_address addr,
+ if (ret < 0)
+ return AE_ERROR;
+
+- if (data->pcc_chan->mchan->mbox->txdone_irq)
+- wait_for_completion(&data->done);
++ if (data->pcc_chan->mchan->mbox->txdone_irq) {
++ /*
++ * pcc_chan->latency is just a Nominal value. In reality the remote
++ * processor could be much slower to reply. So add an arbitrary
++ * amount of wait on top of Nominal.
++ */
++ usecs_lat = PCC_CMD_WAIT_RETRIES_NUM * data->pcc_chan->latency;
++ ret = wait_for_completion_timeout(&data->done,
++ usecs_to_jiffies(usecs_lat));
++ if (ret == 0) {
++			pr_err("PCC command execution timed out!\n");
++ return AE_TIME;
++ }
++ }
+
+- mbox_client_txdone(data->pcc_chan->mchan, ret);
++ mbox_chan_txdone(data->pcc_chan->mchan, ret);
+
+ memcpy_fromio(value, data->pcc_comm_addr, data->ctx.length);
+
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index eaea733b368ae..03f5f92b603c4 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -496,6 +496,22 @@ static const struct dmi_system_id video_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE R830"),
+ },
+ },
++ {
++ .callback = video_disable_backlight_sysfs_if,
++ .ident = "Toshiba Satellite Z830",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE Z830"),
++ },
++ },
++ {
++ .callback = video_disable_backlight_sysfs_if,
++ .ident = "Toshiba Portege Z830",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PORTEGE Z830"),
++ },
++ },
+ /*
+ * Some machine's _DOD IDs don't have bit 31(Device ID Scheme) set
+ * but the IDs actually follow the Device ID Scheme.
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index d91ad378c00d6..80ad530583c9c 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -985,7 +985,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
+ ghes_estatus_cache_add(generic, estatus);
+ }
+
+- if (task_work_pending && current->mm != &init_mm) {
++ if (task_work_pending && current->mm) {
+ estatus_node->task_work.func = ghes_kick_task_work;
+ estatus_node->task_work_cpu = smp_processor_id();
+ ret = task_work_add(current, &estatus_node->task_work,
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 664070fc83498..d7cdd8406c84f 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -207,9 +207,26 @@ static const struct x86_cpu_id storage_d3_cpu_ids[] = {
+ {}
+ };
+
++static const struct dmi_system_id force_storage_d3_dmi[] = {
++ {
++ /*
++ * _ADR is ambiguous between GPP1.DEV0 and GPP1.NVME
++ * but .NVME is needed to get StorageD3Enable node
++ * https://bugzilla.kernel.org/show_bug.cgi?id=216440
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 14 7425 2-in-1"),
++ }
++ },
++ {}
++};
++
+ bool force_storage_d3(void)
+ {
+- return x86_match_cpu(storage_d3_cpu_ids);
++ const struct dmi_system_id *dmi_id = dmi_first_match(force_storage_d3_dmi);
++
++ return dmi_id || x86_match_cpu(storage_d3_cpu_ids);
+ }
+
+ /*
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index 32495ae96567a..986f1923a76da 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -451,14 +451,24 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
+ }
+ }
+
+- hpriv->nports = child_nodes = of_get_child_count(dev->of_node);
++ /*
++	 * Too many sub-nodes most likely means that something is wrong with
++	 * the firmware.
++ */
++ child_nodes = of_get_child_count(dev->of_node);
++ if (child_nodes > AHCI_MAX_PORTS) {
++ rc = -EINVAL;
++ goto err_out;
++ }
+
+ /*
+ * If no sub-node was found, we still need to set nports to
+ * one in order to be able to use the
+ * ahci_platform_[en|dis]able_[phys|regulators] functions.
+ */
+- if (!child_nodes)
++ if (child_nodes)
++ hpriv->nports = child_nodes;
++ else
+ hpriv->nports = 1;
+
+ hpriv->phys = devm_kcalloc(dev, hpriv->nports, sizeof(*hpriv->phys), GFP_KERNEL);
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index 579c851a2bd74..5e94767c28774 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -791,4 +791,23 @@ void __init init_cpu_topology(void)
+ else if (of_have_populated_dt() && parse_dt_topology())
+ reset_cpu_topology();
+ }
++
++void store_cpu_topology(unsigned int cpuid)
++{
++ struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
++
++ if (cpuid_topo->package_id != -1)
++ goto topology_populated;
++
++ cpuid_topo->thread_id = -1;
++ cpuid_topo->core_id = cpuid;
++ cpuid_topo->package_id = cpu_to_node(cpuid);
++
++ pr_debug("CPU%u: package %d core %d thread %d\n",
++ cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
++ cpuid_topo->thread_id);
++
++topology_populated:
++ update_siblings_masks(cpuid);
++}
+ #endif
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 20e9c53eec53f..3a3680b3c4fe2 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1414,10 +1414,12 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd)
+ mutex_unlock(&nbd->config_lock);
+ ret = wait_event_interruptible(config->recv_wq,
+ atomic_read(&config->recv_threads) == 0);
+- if (ret)
++ if (ret) {
+ sock_shutdown(nbd);
+- flush_workqueue(nbd->recv_workq);
++ nbd_clear_que(nbd);
++ }
+
++ flush_workqueue(nbd->recv_workq);
+ mutex_lock(&nbd->config_lock);
+ nbd_bdev_reset(nbd);
+ /* user requested, ignore socket errors */
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 818681c89db8b..d44a966675179 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2439,15 +2439,20 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
+ set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ &hdev->quirks);
++ if (ver.hw_variant == 0x08 && ver.fw_variant == 0x22)
++ set_bit(HCI_QUIRK_VALID_LE_STATES,
++ &hdev->quirks);
+
+ err = btintel_legacy_rom_setup(hdev, &ver);
+ break;
+ case 0x0b: /* SfP */
+- case 0x0c: /* WsP */
+ case 0x11: /* JfP */
+ case 0x12: /* ThP */
+ case 0x13: /* HrP */
+ case 0x14: /* CcP */
++ set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++ fallthrough;
++ case 0x0c: /* WsP */
+ /* Apply the device specific HCI quirks
+ *
+ * All Legacy bootloader devices support WBS
+@@ -2455,11 +2460,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ &hdev->quirks);
+
+- /* Valid LE States quirk for JfP/ThP familiy */
+- if (ver.hw_variant == 0x11 || ver.hw_variant == 0x12)
+- set_bit(HCI_QUIRK_VALID_LE_STATES,
+- &hdev->quirks);
+-
+ /* Setup MSFT Extension support */
+ btintel_set_msft_opcode(hdev, ver.hw_variant);
+
+@@ -2530,9 +2530,8 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ */
+ set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+
+- /* Valid LE States quirk for JfP/ThP familiy */
+- if (ver.hw_variant == 0x11 || ver.hw_variant == 0x12)
+- set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++ /* Set Valid LE States quirk */
++ set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+
+ /* Setup MSFT Extension support */
+ btintel_set_msft_opcode(hdev, ver.hw_variant);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index aaba2d7371781..6a320ece32765 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2451,15 +2451,29 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+
+ set_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+
++	/* WMT cmd/event doesn't follow the generic HCI cmd/event handling;
++	 * it needs to constantly poll the control pipe until the host has
++	 * received the WMT event. Thus, we should acquire a PM counter on
++	 * the USB interface specifically, to prevent it from entering
++	 * autosuspend while a WMT cmd/event is in progress.
++	 */
++ err = usb_autopm_get_interface(data->intf);
++ if (err < 0)
++ goto err_free_wc;
++
+ err = __hci_cmd_send(hdev, 0xfc6f, hlen, wc);
+
+ if (err < 0) {
+ clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
++ usb_autopm_put_interface(data->intf);
+ goto err_free_wc;
+ }
+
+ /* Submit control IN URB on demand to process the WMT event */
+ err = btusb_mtk_submit_wmt_recv_urb(hdev);
++
++ usb_autopm_put_interface(data->intf);
++
+ if (err < 0)
+ goto err_free_wc;
+
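The btusb hunk above brackets the whole WMT exchange in a runtime-PM reference. A kernel-style sketch of the bracket, with do_wmt_exchange() as a hypothetical stand-in for the send-and-poll sequence:

    /* Sketch only: hold a PM reference across the transaction so USB
     * autosuspend cannot interrupt the polling, and drop it on every
     * exit path, success or failure. */
    static int wmt_sync(struct usb_interface *intf)
    {
    	int err = usb_autopm_get_interface(intf);

    	if (err < 0)
    		return err;          /* could not wake/pin the interface */

    	err = do_wmt_exchange(intf); /* hypothetical: cmd + poll for event */

    	usb_autopm_put_interface(intf);
    	return err;
    }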
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index f537673ede174..865112e96ff9f 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -493,6 +493,11 @@ static int hci_uart_tty_open(struct tty_struct *tty)
+ BT_ERR("Can't allocate control structure");
+ return -ENFILE;
+ }
++ if (percpu_init_rwsem(&hu->proto_lock)) {
++ BT_ERR("Can't allocate semaphore structure");
++ kfree(hu);
++ return -ENOMEM;
++ }
+
+ tty->disc_data = hu;
+ hu->tty = tty;
+@@ -505,8 +510,6 @@ static int hci_uart_tty_open(struct tty_struct *tty)
+ INIT_WORK(&hu->init_ready, hci_uart_init_work);
+ INIT_WORK(&hu->write_work, hci_uart_write_work);
+
+- percpu_init_rwsem(&hu->proto_lock);
+-
+ /* Flush any pending characters in the driver */
+ tty_driver_flush_buffer(tty);
+
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index c0e5f42ec6b7d..f16fd79bc02b8 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -310,11 +310,12 @@ int hci_uart_register_device(struct hci_uart *hu,
+
+ serdev_device_set_client_ops(hu->serdev, &hci_serdev_client_ops);
+
++ if (percpu_init_rwsem(&hu->proto_lock))
++ return -ENOMEM;
++
+ err = serdev_device_open(hu->serdev);
+ if (err)
+- return err;
+-
+- percpu_init_rwsem(&hu->proto_lock);
++ goto err_rwsem;
+
+ err = p->open(hu);
+ if (err)
+@@ -389,6 +390,8 @@ err_alloc:
+ p->close(hu);
+ err_open:
+ serdev_device_close(hu->serdev);
++err_rwsem:
++ percpu_free_rwsem(&hu->proto_lock);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(hci_uart_register_device);
+@@ -410,5 +413,6 @@ void hci_uart_unregister_device(struct hci_uart *hu)
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+ serdev_device_close(hu->serdev);
+ }
++ percpu_free_rwsem(&hu->proto_lock);
+ }
+ EXPORT_SYMBOL_GPL(hci_uart_unregister_device);
+diff --git a/drivers/char/hw_random/arm_smccc_trng.c b/drivers/char/hw_random/arm_smccc_trng.c
+index b24ac39a903b3..e34c3ea692b6c 100644
+--- a/drivers/char/hw_random/arm_smccc_trng.c
++++ b/drivers/char/hw_random/arm_smccc_trng.c
+@@ -71,8 +71,6 @@ static int smccc_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ MAX_BITS_PER_CALL);
+
+ arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND, bits, &res);
+- if ((int)res.a0 < 0)
+- return (int)res.a0;
+
+ switch ((int)res.a0) {
+ case SMCCC_RET_SUCCESS:
+@@ -88,6 +86,8 @@ static int smccc_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ return copied;
+ cond_resched();
+ break;
++ default:
++ return -EIO;
+ }
+ }
+
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 16f227b995e8a..d7045dfaf16cf 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -507,16 +507,17 @@ static int hwrng_fillfn(void *unused)
+ rng->quality = current_quality; /* obsolete */
+ quality = rng->quality;
+ mutex_unlock(&reading_mutex);
++
++ if (rc <= 0)
++ hwrng_msleep(rng, 10000);
++
+ put_rng(rng);
+
+ if (!quality)
+ break;
+
+- if (rc <= 0) {
+- pr_warn("hwrng: no data available\n");
+- msleep_interruptible(10000);
++ if (rc <= 0)
+ continue;
+- }
+
+ /* If we cannot credit at least one bit of entropy,
+ * keep track of the remainder for the next iteration
+@@ -570,6 +571,7 @@ int hwrng_register(struct hwrng *rng)
+
+ init_completion(&rng->cleanup_done);
+ complete(&rng->cleanup_done);
++ init_completion(&rng->dying);
+
+ if (!current_rng ||
+ (!cur_rng_set_by_user && rng->quality > current_rng->quality)) {
+@@ -617,6 +619,7 @@ void hwrng_unregister(struct hwrng *rng)
+
+ old_rng = current_rng;
+ list_del(&rng->list);
++ complete_all(&rng->dying);
+ if (current_rng == rng) {
+ err = enable_best_rng();
+ if (err) {
+@@ -685,6 +688,14 @@ void devm_hwrng_unregister(struct device *dev, struct hwrng *rng)
+ }
+ EXPORT_SYMBOL_GPL(devm_hwrng_unregister);
+
++long hwrng_msleep(struct hwrng *rng, unsigned int msecs)
++{
++ unsigned long timeout = msecs_to_jiffies(msecs) + 1;
++
++ return wait_for_completion_interruptible_timeout(&rng->dying, timeout);
++}
++EXPORT_SYMBOL_GPL(hwrng_msleep);
++
+ static int __init hwrng_modinit(void)
+ {
+ int ret;
+diff --git a/drivers/char/hw_random/imx-rngc.c b/drivers/char/hw_random/imx-rngc.c
+index b05d676ca814c..2964efeb71c33 100644
+--- a/drivers/char/hw_random/imx-rngc.c
++++ b/drivers/char/hw_random/imx-rngc.c
+@@ -270,13 +270,6 @@ static int imx_rngc_probe(struct platform_device *pdev)
+ goto err;
+ }
+
+- ret = devm_request_irq(&pdev->dev,
+- irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
+- if (ret) {
+- dev_err(rngc->dev, "Can't get interrupt working.\n");
+- goto err;
+- }
+-
+ init_completion(&rngc->rng_op_done);
+
+ rngc->rng.name = pdev->name;
+@@ -290,6 +283,13 @@ static int imx_rngc_probe(struct platform_device *pdev)
+
+ imx_rngc_irq_mask_clear(rngc);
+
++ ret = devm_request_irq(&pdev->dev,
++ irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
++ if (ret) {
++ dev_err(rngc->dev, "Can't get interrupt working.\n");
++ return ret;
++ }
++
+ if (self_test) {
+ ret = imx_rngc_self_test(rngc);
+ if (ret) {
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 8dfb28d5ae3fa..5defbc479a5c7 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1178,7 +1178,7 @@ static void __cold entropy_timer(struct timer_list *timer)
+ */
+ static void __cold try_to_generate_entropy(void)
+ {
+- enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 30 };
++ enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
+ struct entropy_timer_state stack;
+ unsigned int i, num_different = 0;
+ unsigned long last = random_get_entropy();
+@@ -1197,7 +1197,7 @@ static void __cold try_to_generate_entropy(void)
+ timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+ while (!crng_ready() && !signal_pending(current)) {
+ if (!timer_pending(&stack.timer))
+- mod_timer(&stack.timer, jiffies + 1);
++ mod_timer(&stack.timer, jiffies);
+ mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
+ schedule();
+ stack.entropy = random_get_entropy();
+diff --git a/drivers/clk/baikal-t1/ccu-div.c b/drivers/clk/baikal-t1/ccu-div.c
+index 4062092d67f90..a6642f3d33d44 100644
+--- a/drivers/clk/baikal-t1/ccu-div.c
++++ b/drivers/clk/baikal-t1/ccu-div.c
+@@ -34,6 +34,7 @@
+ #define CCU_DIV_CTL_CLKDIV_MASK(_width) \
+ GENMASK((_width) + CCU_DIV_CTL_CLKDIV_FLD - 1, CCU_DIV_CTL_CLKDIV_FLD)
+ #define CCU_DIV_CTL_LOCK_SHIFTED BIT(27)
++#define CCU_DIV_CTL_GATE_REF_BUF BIT(28)
+ #define CCU_DIV_CTL_LOCK_NORMAL BIT(31)
+
+ #define CCU_DIV_RST_DELAY_US 1
+@@ -170,6 +171,40 @@ static int ccu_div_gate_is_enabled(struct clk_hw *hw)
+ return !!(val & CCU_DIV_CTL_EN);
+ }
+
++static int ccu_div_buf_enable(struct clk_hw *hw)
++{
++ struct ccu_div *div = to_ccu_div(hw);
++ unsigned long flags;
++
++ spin_lock_irqsave(&div->lock, flags);
++ regmap_update_bits(div->sys_regs, div->reg_ctl,
++ CCU_DIV_CTL_GATE_REF_BUF, 0);
++ spin_unlock_irqrestore(&div->lock, flags);
++
++ return 0;
++}
++
++static void ccu_div_buf_disable(struct clk_hw *hw)
++{
++ struct ccu_div *div = to_ccu_div(hw);
++ unsigned long flags;
++
++ spin_lock_irqsave(&div->lock, flags);
++ regmap_update_bits(div->sys_regs, div->reg_ctl,
++ CCU_DIV_CTL_GATE_REF_BUF, CCU_DIV_CTL_GATE_REF_BUF);
++ spin_unlock_irqrestore(&div->lock, flags);
++}
++
++static int ccu_div_buf_is_enabled(struct clk_hw *hw)
++{
++ struct ccu_div *div = to_ccu_div(hw);
++ u32 val = 0;
++
++ regmap_read(div->sys_regs, div->reg_ctl, &val);
++
++ return !(val & CCU_DIV_CTL_GATE_REF_BUF);
++}
++
+ static unsigned long ccu_div_var_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+@@ -323,6 +358,7 @@ static const struct ccu_div_dbgfs_bit ccu_div_bits[] = {
+ CCU_DIV_DBGFS_BIT_ATTR("div_en", CCU_DIV_CTL_EN),
+ CCU_DIV_DBGFS_BIT_ATTR("div_rst", CCU_DIV_CTL_RST),
+ CCU_DIV_DBGFS_BIT_ATTR("div_bypass", CCU_DIV_CTL_SET_CLKDIV),
++ CCU_DIV_DBGFS_BIT_ATTR("div_buf", CCU_DIV_CTL_GATE_REF_BUF),
+ CCU_DIV_DBGFS_BIT_ATTR("div_lock", CCU_DIV_CTL_LOCK_NORMAL)
+ };
+
+@@ -441,6 +477,9 @@ static void ccu_div_var_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ continue;
+ }
+
++ if (!strcmp("div_buf", name))
++ continue;
++
+ bits[didx] = ccu_div_bits[bidx];
+ bits[didx].div = div;
+
+@@ -477,6 +516,21 @@ static void ccu_div_gate_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ &ccu_div_dbgfs_fixed_clkdiv_fops);
+ }
+
++static void ccu_div_buf_debug_init(struct clk_hw *hw, struct dentry *dentry)
++{
++ struct ccu_div *div = to_ccu_div(hw);
++ struct ccu_div_dbgfs_bit *bit;
++
++ bit = kmalloc(sizeof(*bit), GFP_KERNEL);
++ if (!bit)
++ return;
++
++ *bit = ccu_div_bits[3];
++ bit->div = div;
++ debugfs_create_file_unsafe(bit->name, ccu_div_dbgfs_mode, dentry, bit,
++ &ccu_div_dbgfs_bit_fops);
++}
++
+ static void ccu_div_fixed_debug_init(struct clk_hw *hw, struct dentry *dentry)
+ {
+ struct ccu_div *div = to_ccu_div(hw);
+@@ -489,6 +543,7 @@ static void ccu_div_fixed_debug_init(struct clk_hw *hw, struct dentry *dentry)
+
+ #define ccu_div_var_debug_init NULL
+ #define ccu_div_gate_debug_init NULL
++#define ccu_div_buf_debug_init NULL
+ #define ccu_div_fixed_debug_init NULL
+
+ #endif /* !CONFIG_DEBUG_FS */
+@@ -520,6 +575,13 @@ static const struct clk_ops ccu_div_gate_ops = {
+ .debug_init = ccu_div_gate_debug_init
+ };
+
++static const struct clk_ops ccu_div_buf_ops = {
++ .enable = ccu_div_buf_enable,
++ .disable = ccu_div_buf_disable,
++ .is_enabled = ccu_div_buf_is_enabled,
++ .debug_init = ccu_div_buf_debug_init
++};
++
+ static const struct clk_ops ccu_div_fixed_ops = {
+ .recalc_rate = ccu_div_fixed_recalc_rate,
+ .round_rate = ccu_div_fixed_round_rate,
+@@ -566,6 +628,8 @@ struct ccu_div *ccu_div_hw_register(const struct ccu_div_init_data *div_init)
+ } else if (div_init->type == CCU_DIV_GATE) {
+ hw_init.ops = &ccu_div_gate_ops;
+ div->divider = div_init->divider;
++ } else if (div_init->type == CCU_DIV_BUF) {
++ hw_init.ops = &ccu_div_buf_ops;
+ } else if (div_init->type == CCU_DIV_FIXED) {
+ hw_init.ops = &ccu_div_fixed_ops;
+ div->divider = div_init->divider;
+@@ -579,6 +643,7 @@ struct ccu_div *ccu_div_hw_register(const struct ccu_div_init_data *div_init)
+ goto err_free_div;
+ }
+ parent_data.fw_name = div_init->parent_name;
++ parent_data.name = div_init->parent_name;
+ hw_init.parent_data = &parent_data;
+ hw_init.num_parents = 1;
+
+diff --git a/drivers/clk/baikal-t1/ccu-div.h b/drivers/clk/baikal-t1/ccu-div.h
+index 795665caefbdc..4eb49ff4803c6 100644
+--- a/drivers/clk/baikal-t1/ccu-div.h
++++ b/drivers/clk/baikal-t1/ccu-div.h
+@@ -13,6 +13,14 @@
+ #include <linux/bits.h>
+ #include <linux/of.h>
+
++/*
++ * CCU Divider private clock IDs
++ * @CCU_SYS_SATA_CLK: CCU SATA internal clock
++ * @CCU_SYS_XGMAC_CLK: CCU XGMAC internal clock
++ */
++#define CCU_SYS_SATA_CLK -1
++#define CCU_SYS_XGMAC_CLK -2
++
+ /*
+ * CCU Divider private flags
+ * @CCU_DIV_SKIP_ONE: Due to some reason divider can't be set to 1.
+@@ -31,11 +39,13 @@
+ * enum ccu_div_type - CCU Divider types
+ * @CCU_DIV_VAR: Clocks gate with variable divider.
+ * @CCU_DIV_GATE: Clocks gate with fixed divider.
++ * @CCU_DIV_BUF: Clock gate with no divider.
+ * @CCU_DIV_FIXED: Ungateable clock with fixed divider.
+ */
+ enum ccu_div_type {
+ CCU_DIV_VAR,
+ CCU_DIV_GATE,
++ CCU_DIV_BUF,
+ CCU_DIV_FIXED
+ };
+
+diff --git a/drivers/clk/baikal-t1/clk-ccu-div.c b/drivers/clk/baikal-t1/clk-ccu-div.c
+index f141fda12b09a..90f4fda406ee6 100644
+--- a/drivers/clk/baikal-t1/clk-ccu-div.c
++++ b/drivers/clk/baikal-t1/clk-ccu-div.c
+@@ -76,6 +76,16 @@
+ .divider = _divider \
+ }
+
++#define CCU_DIV_BUF_INFO(_id, _name, _pname, _base, _flags) \
++ { \
++ .id = _id, \
++ .name = _name, \
++ .parent_name = _pname, \
++ .base = _base, \
++ .type = CCU_DIV_BUF, \
++ .flags = _flags \
++ }
++
+ #define CCU_DIV_FIXED_INFO(_id, _name, _pname, _divider) \
+ { \
+ .id = _id, \
+@@ -188,11 +198,14 @@ static const struct ccu_div_rst_map axi_rst_map[] = {
+ * for the SoC devices registers IO-operations.
+ */
+ static const struct ccu_div_info sys_info[] = {
+- CCU_DIV_VAR_INFO(CCU_SYS_SATA_REF_CLK, "sys_sata_ref_clk",
++ CCU_DIV_VAR_INFO(CCU_SYS_SATA_CLK, "sys_sata_clk",
+ "sata_clk", CCU_SYS_SATA_REF_BASE, 4,
+ CLK_SET_RATE_GATE,
+ CCU_DIV_SKIP_ONE | CCU_DIV_LOCK_SHIFTED |
+ CCU_DIV_RESET_DOMAIN),
++ CCU_DIV_BUF_INFO(CCU_SYS_SATA_REF_CLK, "sys_sata_ref_clk",
++ "sys_sata_clk", CCU_SYS_SATA_REF_BASE,
++ CLK_SET_RATE_PARENT),
+ CCU_DIV_VAR_INFO(CCU_SYS_APB_CLK, "sys_apb_clk",
+ "pcie_clk", CCU_SYS_APB_BASE, 5,
+ CLK_IS_CRITICAL, CCU_DIV_RESET_DOMAIN),
+@@ -204,10 +217,12 @@ static const struct ccu_div_info sys_info[] = {
+ "eth_clk", CCU_SYS_GMAC1_BASE, 5),
+ CCU_DIV_FIXED_INFO(CCU_SYS_GMAC1_PTP_CLK, "sys_gmac1_ptp_clk",
+ "eth_clk", 10),
+- CCU_DIV_GATE_INFO(CCU_SYS_XGMAC_REF_CLK, "sys_xgmac_ref_clk",
+- "eth_clk", CCU_SYS_XGMAC_BASE, 8),
++ CCU_DIV_GATE_INFO(CCU_SYS_XGMAC_CLK, "sys_xgmac_clk",
++ "eth_clk", CCU_SYS_XGMAC_BASE, 1),
++ CCU_DIV_FIXED_INFO(CCU_SYS_XGMAC_REF_CLK, "sys_xgmac_ref_clk",
++ "sys_xgmac_clk", 8),
+ CCU_DIV_FIXED_INFO(CCU_SYS_XGMAC_PTP_CLK, "sys_xgmac_ptp_clk",
+- "eth_clk", 10),
++ "sys_xgmac_clk", 8),
+ CCU_DIV_GATE_INFO(CCU_SYS_USB_CLK, "sys_usb_clk",
+ "eth_clk", CCU_SYS_USB_BASE, 10),
+ CCU_DIV_VAR_INFO(CCU_SYS_PVT_CLK, "sys_pvt_clk",
+@@ -396,6 +411,9 @@ static int ccu_div_clk_register(struct ccu_div_data *data)
+ init.base = info->base;
+ init.sys_regs = data->sys_regs;
+ init.divider = info->divider;
++ } else if (init.type == CCU_DIV_BUF) {
++ init.base = info->base;
++ init.sys_regs = data->sys_regs;
+ } else {
+ init.divider = info->divider;
+ }
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 48a1eb9f2d551..e74fe6219d14e 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -30,6 +30,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
++#include <linux/math.h>
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+@@ -502,6 +503,8 @@ struct bcm2835_clock_data {
+ bool low_jitter;
+
+ u32 tcnt_mux;
++
++ bool round_up;
+ };
+
+ struct bcm2835_gate_data {
+@@ -966,9 +969,9 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ return div;
+ }
+
+-static long bcm2835_clock_rate_from_divisor(struct bcm2835_clock *clock,
+- unsigned long parent_rate,
+- u32 div)
++static unsigned long bcm2835_clock_rate_from_divisor(struct bcm2835_clock *clock,
++ unsigned long parent_rate,
++ u32 div)
+ {
+ const struct bcm2835_clock_data *data = clock->data;
+ u64 temp;
+@@ -993,12 +996,34 @@ static long bcm2835_clock_rate_from_divisor(struct bcm2835_clock *clock,
+ return temp;
+ }
+
++static unsigned long bcm2835_round_rate(unsigned long rate)
++{
++ unsigned long scaler;
++ unsigned long limit;
++
++ limit = rate / 100000;
++
++ scaler = 1;
++ while (scaler < limit)
++ scaler *= 10;
++
++ /*
++ * If increasing a clock by less than 0.1% changes it
++ * from ..999.. to ..000.., round up.
++ */
++ if ((rate + scaler - 1) / scaler % 1000 == 0)
++ rate = roundup(rate, scaler);
++
++ return rate;
++}
++
+ static unsigned long bcm2835_clock_get_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+ struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ struct bcm2835_cprman *cprman = clock->cprman;
+ const struct bcm2835_clock_data *data = clock->data;
++ unsigned long rate;
+ u32 div;
+
+ if (data->int_bits == 0 && data->frac_bits == 0)
+@@ -1006,7 +1031,12 @@ static unsigned long bcm2835_clock_get_rate(struct clk_hw *hw,
+
+ div = cprman_read(cprman, data->div_reg);
+
+- return bcm2835_clock_rate_from_divisor(clock, parent_rate, div);
++ rate = bcm2835_clock_rate_from_divisor(clock, parent_rate, div);
++
++ if (data->round_up)
++ rate = bcm2835_round_rate(rate);
++
++ return rate;
+ }
+
+ static void bcm2835_clock_wait_busy(struct bcm2835_clock *clock)
+@@ -1784,7 +1814,7 @@ static const struct bcm2835_clk_desc clk_desc_array[] = {
+ .load_mask = CM_PLLC_LOADPER,
+ .hold_mask = CM_PLLC_HOLDPER,
+ .fixed_divider = 1,
+- .flags = CLK_SET_RATE_PARENT),
++ .flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT),
+
+ /*
+ * PLLD is the display PLL, used to drive DSI display panels.
+@@ -2143,7 +2173,8 @@ static const struct bcm2835_clk_desc clk_desc_array[] = {
+ .div_reg = CM_UARTDIV,
+ .int_bits = 10,
+ .frac_bits = 12,
+- .tcnt_mux = 28),
++ .tcnt_mux = 28,
++ .round_up = true),
+
+ /* TV encoder clock. Only operating frequency is 108Mhz. */
+ [BCM2835_CLOCK_VEC] = REGISTER_PER_CLK(
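[note] bcm2835_round_rate() above compensates for truncation in the divider math: when nudging a computed rate up by less than 0.1% would carry a trailing ..999.. over to ..000.., the rate is reported rounded up (e.g. 47999625 Hz becomes 48000000 Hz), which keeps UART baud-rate calculations from landing just under the target. A standalone, runnable copy of the same arithmetic, with the kernel's roundup() expanded inline:

#include <stdio.h>

static unsigned long round_rate(unsigned long rate)
{
    unsigned long scaler = 1, limit = rate / 100000;

    while (scaler < limit)
        scaler *= 10;

    /* If raising the rate by < 0.1% flips ..999.. to ..000..,
     * report the rounded-up value instead. */
    if ((rate + scaler - 1) / scaler % 1000 == 0)
        rate = ((rate + scaler - 1) / scaler) * scaler;

    return rate;
}

int main(void)
{
    printf("%lu -> %lu\n", 47999625UL, round_rate(47999625UL)); /* 48000000 */
    printf("%lu -> %lu\n", 48012345UL, round_rate(48012345UL)); /* unchanged */
    return 0;
}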
+diff --git a/drivers/clk/berlin/bg2.c b/drivers/clk/berlin/bg2.c
+index bccdfa00fd373..67a9edbba29c4 100644
+--- a/drivers/clk/berlin/bg2.c
++++ b/drivers/clk/berlin/bg2.c
+@@ -500,12 +500,15 @@ static void __init berlin2_clock_setup(struct device_node *np)
+ int n, ret;
+
+ clk_data = kzalloc(struct_size(clk_data, hws, MAX_CLKS), GFP_KERNEL);
+- if (!clk_data)
++ if (!clk_data) {
++ of_node_put(parent_np);
+ return;
++ }
+ clk_data->num = MAX_CLKS;
+ hws = clk_data->hws;
+
+ gbase = of_iomap(parent_np, 0);
++ of_node_put(parent_np);
+ if (!gbase)
+ return;
+
+diff --git a/drivers/clk/berlin/bg2q.c b/drivers/clk/berlin/bg2q.c
+index e9518d35f262e..dd2784bb75b64 100644
+--- a/drivers/clk/berlin/bg2q.c
++++ b/drivers/clk/berlin/bg2q.c
+@@ -286,19 +286,23 @@ static void __init berlin2q_clock_setup(struct device_node *np)
+ int n, ret;
+
+ clk_data = kzalloc(struct_size(clk_data, hws, MAX_CLKS), GFP_KERNEL);
+- if (!clk_data)
++ if (!clk_data) {
++ of_node_put(parent_np);
+ return;
++ }
+ clk_data->num = MAX_CLKS;
+ hws = clk_data->hws;
+
+ gbase = of_iomap(parent_np, 0);
+ if (!gbase) {
++ of_node_put(parent_np);
+ pr_err("%pOF: Unable to map global base\n", np);
+ return;
+ }
+
+ /* BG2Q CPU PLL is not part of global registers */
+ cpupll_base = of_iomap(parent_np, 1);
++ of_node_put(parent_np);
+ if (!cpupll_base) {
+ pr_err("%pOF: Unable to map cpupll base\n", np);
+ iounmap(gbase);
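[note] The two Berlin fixes above, like the clk-qoriq, meson, oxnas, ST clkgen and Tegra hunks later in this patch, all close the same leak class: of_get_parent() returns the parent node with its refcount raised, so every exit path, including early errors, must drop it with of_node_put(). A minimal kernel-style sketch of the balanced pattern:

#include <linux/of.h>
#include <linux/of_address.h>

/* Sketch of the balanced get/put pattern around of_get_parent(). */
static void __iomem *map_parent_regs(struct device_node *np)
{
    struct device_node *parent_np = of_get_parent(np); /* takes a ref */
    void __iomem *base;

    if (!parent_np)
        return NULL;

    base = of_iomap(parent_np, 0);
    of_node_put(parent_np);     /* drop the ref on every path */

    return base;                /* may be NULL; caller checks */
}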
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 24dab2312bc6f..9c3305bcb27ae 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -622,7 +622,7 @@ static int aspeed_g6_clk_probe(struct platform_device *pdev)
+ regmap_write(map, 0x308, 0x12000); /* 3x3 = 9 */
+
+ /* P-Bus (BCLK) clock divider */
+- hw = clk_hw_register_divider_table(dev, "bclk", "hpll", 0,
++ hw = clk_hw_register_divider_table(dev, "bclk", "epll", 0,
+ scu_g6_base + ASPEED_G6_CLK_SELECTION1, 20, 3, 0,
+ ast2600_div_table,
+ &aspeed_g6_clk_lock);
+diff --git a/drivers/clk/clk-oxnas.c b/drivers/clk/clk-oxnas.c
+index cda5e258355bc..584e293156ad6 100644
+--- a/drivers/clk/clk-oxnas.c
++++ b/drivers/clk/clk-oxnas.c
+@@ -207,7 +207,7 @@ static const struct of_device_id oxnas_stdclk_dt_ids[] = {
+
+ static int oxnas_stdclk_probe(struct platform_device *pdev)
+ {
+- struct device_node *np = pdev->dev.of_node;
++ struct device_node *np = pdev->dev.of_node, *parent_np;
+ const struct oxnas_stdclk_data *data;
+ struct regmap *regmap;
+ int ret;
+@@ -215,7 +215,9 @@ static int oxnas_stdclk_probe(struct platform_device *pdev)
+
+ data = of_device_get_match_data(&pdev->dev);
+
+- regmap = syscon_node_to_regmap(of_get_parent(np));
++ parent_np = of_get_parent(np);
++ regmap = syscon_node_to_regmap(parent_np);
++ of_node_put(parent_np);
+ if (IS_ERR(regmap)) {
+ dev_err(&pdev->dev, "failed to have parent regmap\n");
+ return PTR_ERR(regmap);
+diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
+index 88898b97a4431..5eddb9f0d6bdb 100644
+--- a/drivers/clk/clk-qoriq.c
++++ b/drivers/clk/clk-qoriq.c
+@@ -1063,8 +1063,13 @@ static void __init _clockgen_init(struct device_node *np, bool legacy);
+ */
+ static void __init legacy_init_clockgen(struct device_node *np)
+ {
+- if (!clockgen.node)
+- _clockgen_init(of_get_parent(np), true);
++ if (!clockgen.node) {
++ struct device_node *parent_np;
++
++ parent_np = of_get_parent(np);
++ _clockgen_init(parent_np, true);
++ of_node_put(parent_np);
++ }
+ }
+
+ /* Legacy node */
+@@ -1159,6 +1164,7 @@ static struct clk * __init create_sysclk(const char *name)
+ sysclk = of_get_child_by_name(clockgen.node, "sysclk");
+ if (sysclk) {
+ clk = sysclk_from_fixed(sysclk, name);
++ of_node_put(sysclk);
+ if (!IS_ERR(clk))
+ return clk;
+ }
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index e7be3e54b9be4..03cfef494b49b 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -1204,7 +1204,7 @@ static const struct vc5_chip_info idt_5p49v6901_info = {
+ .model = IDT_VC6_5P49V6901,
+ .clk_fod_cnt = 4,
+ .clk_out_cnt = 5,
+- .flags = VC5_HAS_PFD_FREQ_DBL,
++ .flags = VC5_HAS_PFD_FREQ_DBL | VC5_HAS_BYPASS_SYNC_BIT,
+ };
+
+ static const struct vc5_chip_info idt_5p49v6965_info = {
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index c56e406138dbe..1e6870f3671f6 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -695,7 +695,11 @@ struct clk_hw *imx_clk_scu_alloc_dev(const char *name,
+ pr_warn("%s: failed to attached the power domain %d\n",
+ name, ret);
+
+- platform_device_add(pdev);
++ ret = platform_device_add(pdev);
++ if (ret) {
++ platform_device_put(pdev);
++ return ERR_PTR(ret);
++ }
+
+ /* For API backwards compatiblilty, simply return NULL for success */
+ return NULL;
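[note] The clk-scu fix above (the timer-gxp hunk later in this patch is the same) checks platform_device_add() and, on failure, releases the device with platform_device_put(): once allocated, the embedded struct device is refcounted, so dropping the last reference — never kfree() — is what frees it. A sketch of the pattern:

#include <linux/err.h>
#include <linux/platform_device.h>

static struct platform_device *spawn_child(const char *name, int id)
{
    struct platform_device *pdev;
    int ret;

    pdev = platform_device_alloc(name, id);
    if (!pdev)
        return ERR_PTR(-ENOMEM);

    ret = platform_device_add(pdev);
    if (ret) {
        /* put, not kfree: the device is refcounted once allocated */
        platform_device_put(pdev);
        return ERR_PTR(ret);
    }

    return pdev;
}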
+diff --git a/drivers/clk/mediatek/clk-mt8183-mfgcfg.c b/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
+index d774edaf760be..230299728859c 100644
+--- a/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
++++ b/drivers/clk/mediatek/clk-mt8183-mfgcfg.c
+@@ -18,9 +18,9 @@ static const struct mtk_gate_regs mfg_cg_regs = {
+ .sta_ofs = 0x0,
+ };
+
+-#define GATE_MFG(_id, _name, _parent, _shift) \
+- GATE_MTK(_id, _name, _parent, &mfg_cg_regs, _shift, \
+- &mtk_clk_gate_ops_setclr)
++#define GATE_MFG(_id, _name, _parent, _shift) \
++ GATE_MTK_FLAGS(_id, _name, _parent, &mfg_cg_regs, _shift, \
++ &mtk_clk_gate_ops_setclr, CLK_SET_RATE_PARENT)
+
+ static const struct mtk_gate mfg_clks[] = {
+ GATE_MFG(CLK_MFG_BG3D, "mfg_bg3d", "mfg_sel", 0)
+diff --git a/drivers/clk/mediatek/clk-mt8195-infra_ao.c b/drivers/clk/mediatek/clk-mt8195-infra_ao.c
+index 8ebe3b9415c48..0faa876815e83 100644
+--- a/drivers/clk/mediatek/clk-mt8195-infra_ao.c
++++ b/drivers/clk/mediatek/clk-mt8195-infra_ao.c
+@@ -54,8 +54,12 @@ static const struct mtk_gate_regs infra_ao4_cg_regs = {
+ #define GATE_INFRA_AO1(_id, _name, _parent, _shift) \
+ GATE_INFRA_AO1_FLAGS(_id, _name, _parent, _shift, 0)
+
++#define GATE_INFRA_AO2_FLAGS(_id, _name, _parent, _shift, _flag) \
++ GATE_MTK_FLAGS(_id, _name, _parent, &infra_ao2_cg_regs, _shift, \
++ &mtk_clk_gate_ops_setclr, _flag)
++
+ #define GATE_INFRA_AO2(_id, _name, _parent, _shift) \
+- GATE_MTK(_id, _name, _parent, &infra_ao2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++ GATE_INFRA_AO2_FLAGS(_id, _name, _parent, _shift, 0)
+
+ #define GATE_INFRA_AO3_FLAGS(_id, _name, _parent, _shift, _flag) \
+ GATE_MTK_FLAGS(_id, _name, _parent, &infra_ao3_cg_regs, _shift, \
+@@ -135,8 +139,11 @@ static const struct mtk_gate infra_ao_clks[] = {
+ GATE_INFRA_AO2(CLK_INFRA_AO_UNIPRO_SYS, "infra_ao_unipro_sys", "top_ufs", 11),
+ GATE_INFRA_AO2(CLK_INFRA_AO_UNIPRO_TICK, "infra_ao_unipro_tick", "top_ufs_tick1us", 12),
+ GATE_INFRA_AO2(CLK_INFRA_AO_UFS_MP_SAP_B, "infra_ao_ufs_mp_sap_b", "top_ufs_mp_sap_cfg", 13),
+- GATE_INFRA_AO2(CLK_INFRA_AO_PWRMCU, "infra_ao_pwrmcu", "top_pwrmcu", 15),
+- GATE_INFRA_AO2(CLK_INFRA_AO_PWRMCU_BUS_H, "infra_ao_pwrmcu_bus_h", "top_axi", 17),
++ /* pwrmcu is used by ATF for platform PM: clocks must never be disabled by the kernel */
++ GATE_INFRA_AO2_FLAGS(CLK_INFRA_AO_PWRMCU, "infra_ao_pwrmcu", "top_pwrmcu", 15,
++ CLK_IS_CRITICAL),
++ GATE_INFRA_AO2_FLAGS(CLK_INFRA_AO_PWRMCU_BUS_H, "infra_ao_pwrmcu_bus_h", "top_axi", 17,
++ CLK_IS_CRITICAL),
+ GATE_INFRA_AO2(CLK_INFRA_AO_APDMA_B, "infra_ao_apdma_b", "top_axi", 18),
+ GATE_INFRA_AO2(CLK_INFRA_AO_SPI4, "infra_ao_spi4", "top_spi", 25),
+ GATE_INFRA_AO2(CLK_INFRA_AO_SPI5, "infra_ao_spi5", "top_spi", 26),
+diff --git a/drivers/clk/mediatek/clk-mt8195-mfg.c b/drivers/clk/mediatek/clk-mt8195-mfg.c
+index 9411c556a5a97..c94cb71bd9b94 100644
+--- a/drivers/clk/mediatek/clk-mt8195-mfg.c
++++ b/drivers/clk/mediatek/clk-mt8195-mfg.c
+@@ -17,10 +17,12 @@ static const struct mtk_gate_regs mfg_cg_regs = {
+ };
+
+ #define GATE_MFG(_id, _name, _parent, _shift) \
+- GATE_MTK(_id, _name, _parent, &mfg_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
++ GATE_MTK_FLAGS(_id, _name, _parent, &mfg_cg_regs, \
++ _shift, &mtk_clk_gate_ops_setclr, \
++ CLK_SET_RATE_PARENT)
+
+ static const struct mtk_gate mfg_clks[] = {
+- GATE_MFG(CLK_MFG_BG3D, "mfg_bg3d", "top_mfg_core_tmp", 0),
++ GATE_MFG(CLK_MFG_BG3D, "mfg_bg3d", "mfg_ck_fast_ref", 0),
+ };
+
+ static const struct mtk_clk_desc mfg_desc = {
+diff --git a/drivers/clk/mediatek/clk-mt8195-vdo0.c b/drivers/clk/mediatek/clk-mt8195-vdo0.c
+index 261a7f76dd3cc..07b46bfd50406 100644
+--- a/drivers/clk/mediatek/clk-mt8195-vdo0.c
++++ b/drivers/clk/mediatek/clk-mt8195-vdo0.c
+@@ -37,6 +37,10 @@ static const struct mtk_gate_regs vdo0_2_cg_regs = {
+ #define GATE_VDO0_2(_id, _name, _parent, _shift) \
+ GATE_MTK(_id, _name, _parent, &vdo0_2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
++#define GATE_VDO0_2_FLAGS(_id, _name, _parent, _shift, _flags) \
++ GATE_MTK_FLAGS(_id, _name, _parent, &vdo0_2_cg_regs, _shift, \
++ &mtk_clk_gate_ops_setclr, _flags)
++
+ static const struct mtk_gate vdo0_clks[] = {
+ /* VDO0_0 */
+ GATE_VDO0_0(CLK_VDO0_DISP_OVL0, "vdo0_disp_ovl0", "top_vpp", 0),
+@@ -85,7 +89,8 @@ static const struct mtk_gate vdo0_clks[] = {
+ /* VDO0_2 */
+ GATE_VDO0_2(CLK_VDO0_DSI0_DSI, "vdo0_dsi0_dsi", "top_dsi_occ", 0),
+ GATE_VDO0_2(CLK_VDO0_DSI1_DSI, "vdo0_dsi1_dsi", "top_dsi_occ", 8),
+- GATE_VDO0_2(CLK_VDO0_DP_INTF0_DP_INTF, "vdo0_dp_intf0_dp_intf", "top_edp", 16),
++ GATE_VDO0_2_FLAGS(CLK_VDO0_DP_INTF0_DP_INTF, "vdo0_dp_intf0_dp_intf",
++ "top_edp", 16, CLK_SET_RATE_PARENT),
+ };
+
+ static int clk_mt8195_vdo0_probe(struct platform_device *pdev)
+diff --git a/drivers/clk/mediatek/clk-mt8195-vdo1.c b/drivers/clk/mediatek/clk-mt8195-vdo1.c
+index 3378487d2c904..d54d7726d1866 100644
+--- a/drivers/clk/mediatek/clk-mt8195-vdo1.c
++++ b/drivers/clk/mediatek/clk-mt8195-vdo1.c
+@@ -43,6 +43,10 @@ static const struct mtk_gate_regs vdo1_3_cg_regs = {
+ #define GATE_VDO1_2(_id, _name, _parent, _shift) \
+ GATE_MTK(_id, _name, _parent, &vdo1_2_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
++#define GATE_VDO1_2_FLAGS(_id, _name, _parent, _shift, _flags) \
++ GATE_MTK_FLAGS(_id, _name, _parent, &vdo1_2_cg_regs, _shift, \
++ &mtk_clk_gate_ops_setclr, _flags)
++
+ #define GATE_VDO1_3(_id, _name, _parent, _shift) \
+ GATE_MTK(_id, _name, _parent, &vdo1_3_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
+@@ -99,7 +103,7 @@ static const struct mtk_gate vdo1_clks[] = {
+ GATE_VDO1_2(CLK_VDO1_DISP_MONITOR_DPI0, "vdo1_disp_monitor_dpi0", "top_vpp", 1),
+ GATE_VDO1_2(CLK_VDO1_DPI1, "vdo1_dpi1", "top_vpp", 8),
+ GATE_VDO1_2(CLK_VDO1_DISP_MONITOR_DPI1, "vdo1_disp_monitor_dpi1", "top_vpp", 9),
+- GATE_VDO1_2(CLK_VDO1_DPINTF, "vdo1_dpintf", "top_vpp", 16),
++ GATE_VDO1_2_FLAGS(CLK_VDO1_DPINTF, "vdo1_dpintf", "top_dp", 16, CLK_SET_RATE_PARENT),
+ GATE_VDO1_2(CLK_VDO1_DISP_MONITOR_DPINTF, "vdo1_disp_monitor_dpintf", "top_vpp", 17),
+ /* VDO1_3 */
+ GATE_VDO1_3(CLK_VDO1_26M_SLOW, "vdo1_26m_slow", "clk26m", 8),
+diff --git a/drivers/clk/mediatek/clk-mtk.c b/drivers/clk/mediatek/clk-mtk.c
+index b9188000ab3c6..35845163edae4 100644
+--- a/drivers/clk/mediatek/clk-mtk.c
++++ b/drivers/clk/mediatek/clk-mtk.c
+@@ -80,7 +80,7 @@ err:
+ if (IS_ERR_OR_NULL(clk_data->hws[rc->id]))
+ continue;
+
+- clk_unregister_fixed_rate(clk_data->hws[rc->id]->clk);
++ clk_hw_unregister_fixed_rate(clk_data->hws[rc->id]);
+ clk_data->hws[rc->id] = ERR_PTR(-ENOENT);
+ }
+
+@@ -102,7 +102,7 @@ void mtk_clk_unregister_fixed_clks(const struct mtk_fixed_clk *clks, int num,
+ if (IS_ERR_OR_NULL(clk_data->hws[rc->id]))
+ continue;
+
+- clk_unregister_fixed_rate(clk_data->hws[rc->id]->clk);
++ clk_hw_unregister_fixed_rate(clk_data->hws[rc->id]);
+ clk_data->hws[rc->id] = ERR_PTR(-ENOENT);
+ }
+ }
+@@ -146,7 +146,7 @@ err:
+ if (IS_ERR_OR_NULL(clk_data->hws[ff->id]))
+ continue;
+
+- clk_unregister_fixed_factor(clk_data->hws[ff->id]->clk);
++ clk_hw_unregister_fixed_factor(clk_data->hws[ff->id]);
+ clk_data->hws[ff->id] = ERR_PTR(-ENOENT);
+ }
+
+@@ -168,7 +168,7 @@ void mtk_clk_unregister_factors(const struct mtk_fixed_factor *clks, int num,
+ if (IS_ERR_OR_NULL(clk_data->hws[ff->id]))
+ continue;
+
+- clk_unregister_fixed_factor(clk_data->hws[ff->id]->clk);
++ clk_hw_unregister_fixed_factor(clk_data->hws[ff->id]);
+ clk_data->hws[ff->id] = ERR_PTR(-ENOENT);
+ }
+ }
+@@ -393,7 +393,7 @@ err:
+ if (IS_ERR_OR_NULL(clk_data->hws[mcd->id]))
+ continue;
+
+- mtk_clk_unregister_composite(clk_data->hws[mcd->id]);
++ clk_hw_unregister_divider(clk_data->hws[mcd->id]);
+ clk_data->hws[mcd->id] = ERR_PTR(-ENOENT);
+ }
+
+@@ -414,7 +414,7 @@ void mtk_clk_unregister_dividers(const struct mtk_clk_divider *mcds, int num,
+ if (IS_ERR_OR_NULL(clk_data->hws[mcd->id]))
+ continue;
+
+- clk_unregister_divider(clk_data->hws[mcd->id]->clk);
++ clk_hw_unregister_divider(clk_data->hws[mcd->id]);
+ clk_data->hws[mcd->id] = ERR_PTR(-ENOENT);
+ }
+ }
+diff --git a/drivers/clk/meson/meson-aoclk.c b/drivers/clk/meson/meson-aoclk.c
+index 27cd2c1f3f612..434cd8f9de826 100644
+--- a/drivers/clk/meson/meson-aoclk.c
++++ b/drivers/clk/meson/meson-aoclk.c
+@@ -38,6 +38,7 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ struct meson_aoclk_reset_controller *rstc;
+ struct meson_aoclk_data *data;
+ struct device *dev = &pdev->dev;
++ struct device_node *np;
+ struct regmap *regmap;
+ int ret, clkid;
+
+@@ -49,7 +50,9 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ if (!rstc)
+ return -ENOMEM;
+
+- regmap = syscon_node_to_regmap(of_get_parent(dev->of_node));
++ np = of_get_parent(dev->of_node);
++ regmap = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(regmap)) {
+ dev_err(dev, "failed to get regmap\n");
+ return PTR_ERR(regmap);
+diff --git a/drivers/clk/meson/meson-eeclk.c b/drivers/clk/meson/meson-eeclk.c
+index 8d5a5dab955a8..0e5e6b57eb20e 100644
+--- a/drivers/clk/meson/meson-eeclk.c
++++ b/drivers/clk/meson/meson-eeclk.c
+@@ -18,6 +18,7 @@ int meson_eeclkc_probe(struct platform_device *pdev)
+ {
+ const struct meson_eeclkc_data *data;
+ struct device *dev = &pdev->dev;
++ struct device_node *np;
+ struct regmap *map;
+ int ret, i;
+
+@@ -26,7 +27,9 @@ int meson_eeclkc_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ /* Get the hhi system controller node */
+- map = syscon_node_to_regmap(of_get_parent(dev->of_node));
++ np = of_get_parent(dev->of_node);
++ map = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(map)) {
+ dev_err(dev,
+ "failed to get HHI regmap\n");
+diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
+index 8f3b7a94a6677..827e78fb16a84 100644
+--- a/drivers/clk/meson/meson8b.c
++++ b/drivers/clk/meson/meson8b.c
+@@ -3792,12 +3792,15 @@ static void __init meson8b_clkc_init_common(struct device_node *np,
+ struct clk_hw_onecell_data *clk_hw_onecell_data)
+ {
+ struct meson8b_clk_reset *rstc;
++ struct device_node *parent_np;
+ const char *notifier_clk_name;
+ struct clk *notifier_clk;
+ struct regmap *map;
+ int i, ret;
+
+- map = syscon_node_to_regmap(of_get_parent(np));
++ parent_np = of_get_parent(np);
++ map = syscon_node_to_regmap(parent_np);
++ of_node_put(parent_np);
+ if (IS_ERR(map)) {
+ pr_err("failed to get HHI regmap - Trying obsolete regs\n");
+ return;
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index bc4dcf356d828..b1b141abc01c2 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -637,6 +637,7 @@ config SM_DISPCC_6350
+
+ config SM_GCC_6115
+ tristate "SM6115 and SM4250 Global Clock Controller"
++ select QCOM_GDSC
+ help
+ Support for the global clock controller on SM6115 and SM4250 devices.
+ Say Y if you want to use peripheral devices such as UART, SPI,
+diff --git a/drivers/clk/qcom/apss-ipq6018.c b/drivers/clk/qcom/apss-ipq6018.c
+index d78ff2f310bfa..b5d93657e1ee3 100644
+--- a/drivers/clk/qcom/apss-ipq6018.c
++++ b/drivers/clk/qcom/apss-ipq6018.c
+@@ -57,7 +57,7 @@ static struct clk_branch apcs_alias0_core_clk = {
+ .parent_hws = (const struct clk_hw *[]){
+ &apcs_alias0_clk_src.clkr.hw },
+ .num_parents = 1,
+- .flags = CLK_SET_RATE_PARENT,
++ .flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ .ops = &clk_branch2_ops,
+ },
+ },
+diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c
+index 9b97425008ce1..db918c92a522c 100644
+--- a/drivers/clk/qcom/gcc-sdm660.c
++++ b/drivers/clk/qcom/gcc-sdm660.c
+@@ -757,7 +757,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ .name = "sdcc1_apps_clk_src",
+ .parent_data = gcc_parent_data_xo_gpll0_gpll4_gpll0_early_div,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_xo_gpll0_gpll4_gpll0_early_div),
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_floor_ops,
+ },
+ };
+
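[note] Switching sdcc1_apps_clk_src above to clk_rcg2_floor_ops changes rate rounding to pick the closest supported rate that does not exceed the request; for an SD-card interface, overshooting the asked-for clock can violate the card's timing, so floor semantics are the safe choice. A runnable toy version of floor-style selection over a made-up frequency table (not the qcom RCG2 code):

#include <stdio.h>

static const unsigned long freqs[] = {
    400000, 25000000, 50000000, 100000000, 192000000,
};

static unsigned long round_rate_floor(unsigned long req)
{
    unsigned long best = freqs[0];

    for (unsigned int i = 0; i < sizeof(freqs) / sizeof(freqs[0]); i++)
        if (freqs[i] <= req)
            best = freqs[i];    /* never exceed the request */

    return best;
}

int main(void)
{
    printf("%lu\n", round_rate_floor(52000000));  /* -> 50000000 */
    return 0;
}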
+diff --git a/drivers/clk/qcom/gcc-sm6115.c b/drivers/clk/qcom/gcc-sm6115.c
+index 68fe9f6f0d2f3..e24a977c25806 100644
+--- a/drivers/clk/qcom/gcc-sm6115.c
++++ b/drivers/clk/qcom/gcc-sm6115.c
+@@ -53,11 +53,25 @@ static struct pll_vco gpll10_vco[] = {
+ { 750000000, 1500000000, 1 },
+ };
+
++static const u8 clk_alpha_pll_regs_offset[][PLL_OFF_MAX_REGS] = {
++ [CLK_ALPHA_PLL_TYPE_DEFAULT] = {
++ [PLL_OFF_L_VAL] = 0x04,
++ [PLL_OFF_ALPHA_VAL] = 0x08,
++ [PLL_OFF_ALPHA_VAL_U] = 0x0c,
++ [PLL_OFF_TEST_CTL] = 0x10,
++ [PLL_OFF_TEST_CTL_U] = 0x14,
++ [PLL_OFF_USER_CTL] = 0x18,
++ [PLL_OFF_USER_CTL_U] = 0x1c,
++ [PLL_OFF_CONFIG_CTL] = 0x20,
++ [PLL_OFF_STATUS] = 0x24,
++ },
++};
++
+ static struct clk_alpha_pll gpll0 = {
+ .offset = 0x0,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(0),
+@@ -83,7 +97,7 @@ static struct clk_alpha_pll_postdiv gpll0_out_aux2 = {
+ .post_div_table = post_div_table_gpll0_out_aux2,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll0_out_aux2),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll0_out_aux2",
+ .parent_hws = (const struct clk_hw *[]){ &gpll0.clkr.hw },
+@@ -115,7 +129,7 @@ static struct clk_alpha_pll_postdiv gpll0_out_main = {
+ .post_div_table = post_div_table_gpll0_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll0_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll0_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll0.clkr.hw },
+@@ -137,7 +151,7 @@ static struct clk_alpha_pll gpll10 = {
+ .offset = 0xa000,
+ .vco_table = gpll10_vco,
+ .num_vco = ARRAY_SIZE(gpll10_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(10),
+@@ -163,7 +177,7 @@ static struct clk_alpha_pll_postdiv gpll10_out_main = {
+ .post_div_table = post_div_table_gpll10_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll10_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll10_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll10.clkr.hw },
+@@ -189,7 +203,7 @@ static struct clk_alpha_pll gpll11 = {
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+ .flags = SUPPORTS_DYNAMIC_UPDATE,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(11),
+@@ -215,7 +229,7 @@ static struct clk_alpha_pll_postdiv gpll11_out_main = {
+ .post_div_table = post_div_table_gpll11_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll11_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll11_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll11.clkr.hw },
+@@ -229,7 +243,7 @@ static struct clk_alpha_pll gpll3 = {
+ .offset = 0x3000,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(3),
+@@ -248,7 +262,7 @@ static struct clk_alpha_pll gpll4 = {
+ .offset = 0x4000,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(4),
+@@ -274,7 +288,7 @@ static struct clk_alpha_pll_postdiv gpll4_out_main = {
+ .post_div_table = post_div_table_gpll4_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll4_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll4_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll4.clkr.hw },
+@@ -287,7 +301,7 @@ static struct clk_alpha_pll gpll6 = {
+ .offset = 0x6000,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(6),
+@@ -313,7 +327,7 @@ static struct clk_alpha_pll_postdiv gpll6_out_main = {
+ .post_div_table = post_div_table_gpll6_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll6_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll6_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll6.clkr.hw },
+@@ -326,7 +340,7 @@ static struct clk_alpha_pll gpll7 = {
+ .offset = 0x7000,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr = {
+ .enable_reg = 0x79000,
+ .enable_mask = BIT(7),
+@@ -352,7 +366,7 @@ static struct clk_alpha_pll_postdiv gpll7_out_main = {
+ .post_div_table = post_div_table_gpll7_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll7_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll7_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll7.clkr.hw },
+@@ -380,7 +394,7 @@ static struct clk_alpha_pll gpll8 = {
+ .offset = 0x8000,
+ .vco_table = default_vco,
+ .num_vco = ARRAY_SIZE(default_vco),
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .flags = SUPPORTS_DYNAMIC_UPDATE,
+ .clkr = {
+ .enable_reg = 0x79000,
+@@ -407,7 +421,7 @@ static struct clk_alpha_pll_postdiv gpll8_out_main = {
+ .post_div_table = post_div_table_gpll8_out_main,
+ .num_post_div = ARRAY_SIZE(post_div_table_gpll8_out_main),
+ .width = 4,
+- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++ .regs = clk_alpha_pll_regs_offset[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll8_out_main",
+ .parent_hws = (const struct clk_hw *[]){ &gpll8.clkr.hw },
+diff --git a/drivers/clk/samsung/clk-exynosautov9.c b/drivers/clk/samsung/clk-exynosautov9.c
+index d9e1f8e4a7b45..487a71b32a009 100644
+--- a/drivers/clk/samsung/clk-exynosautov9.c
++++ b/drivers/clk/samsung/clk-exynosautov9.c
+@@ -1170,9 +1170,9 @@ static const struct samsung_cmu_info fsys2_cmu_info __initconst = {
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_2 0x2058
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_3 0x205c
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_4 0x2060
+-#define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_7 0x206c
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_5 0x2064
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_6 0x2068
++#define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_7 0x206c
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_8 0x2070
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_9 0x2074
+ #define CLK_CON_GAT_GOUT_BLK_PERIC0_UID_PERIC0_TOP0_IPCLKPORT_PCLK_10 0x204c
+@@ -1418,14 +1418,14 @@ static const struct samsung_cmu_info peric0_cmu_info __initconst = {
+ #define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_IPCLK_11 0x2020
+ #define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_0 0x2044
+ #define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_1 0x2048
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_2 0x2058
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_3 0x205c
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_4 0x2060
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_7 0x206c
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_5 0x2064
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_6 0x2068
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_8 0x2070
+-#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_9 0x2074
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_2 0x2054
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_3 0x2058
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_4 0x205c
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_5 0x2060
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_6 0x2064
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_7 0x2068
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_8 0x206c
++#define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_9 0x2070
+ #define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_10 0x204c
+ #define CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_11 0x2050
+
+@@ -1463,9 +1463,9 @@ static const unsigned long peric1_clk_regs[] __initconst = {
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_2,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_3,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_4,
+- CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_7,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_5,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_6,
++ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_7,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_8,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_9,
+ CLK_CON_GAT_GOUT_BLK_PERIC1_UID_PERIC1_TOP0_IPCLKPORT_PCLK_10,
+diff --git a/drivers/clk/sprd/common.c b/drivers/clk/sprd/common.c
+index d620bbbcdfc88..ce81e4087a8fc 100644
+--- a/drivers/clk/sprd/common.c
++++ b/drivers/clk/sprd/common.c
+@@ -41,7 +41,7 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ {
+ void __iomem *base;
+ struct device *dev = &pdev->dev;
+- struct device_node *node = dev->of_node;
++ struct device_node *node = dev->of_node, *np;
+ struct regmap *regmap;
+
+ if (of_find_property(node, "sprd,syscon", NULL)) {
+@@ -50,9 +50,10 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
+ pr_err("%s: failed to get syscon regmap\n", __func__);
+ return PTR_ERR(regmap);
+ }
+- } else if (of_device_is_compatible(of_get_parent(dev->of_node),
+- "syscon")) {
+- regmap = device_node_to_regmap(of_get_parent(dev->of_node));
++ } else if (of_device_is_compatible(np = of_get_parent(node), "syscon") ||
++ (of_node_put(np), 0)) {
++ regmap = device_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(regmap)) {
+ dev_err(dev, "failed to get regmap from its parent.\n");
+ return PTR_ERR(regmap);
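[note] The sprd fix above is the most compact refcount repair in this batch: the parent node must be released whether or not the "syscon" compatible matches, so the else-if condition chains || (of_node_put(np), 0) — the comma expression performs the put for its side effect and evaluates to 0 (false), leaving control flow untouched while guaranteeing the put on the no-match path. A standalone illustration of the idiom, with stand-in get/put helpers in place of the OF refcounting:

#include <stdio.h>

static int refs;
static int get(void)  { refs++; return refs; }
static void put(void) { refs--; }
static int matches(int handle) { return handle > 1; }

int main(void)
{
    int h;

    if (matches(h = get()) || (put(), 0))
        puts("matched: keep the reference, put it later");
    else
        puts("no match: reference already dropped by the comma expr");

    printf("refs = %d\n", refs);    /* 0: balanced on the miss path */
    return 0;
}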
+diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c
+index 582a22c049194..d820292a381d0 100644
+--- a/drivers/clk/st/clkgen-fsyn.c
++++ b/drivers/clk/st/clkgen-fsyn.c
+@@ -987,6 +987,7 @@ static void __init st_of_quadfs_setup(struct device_node *np,
+ const char *pll_name, *clk_parent_name;
+ void __iomem *reg;
+ spinlock_t *lock;
++ struct device_node *parent_np;
+
+ /*
+ * First check for reg property within the node to keep backward
+@@ -994,7 +995,9 @@ static void __init st_of_quadfs_setup(struct device_node *np,
+ */
+ reg = of_iomap(np, 0);
+ if (!reg) {
+- reg = of_iomap(of_get_parent(np), 0);
++ parent_np = of_get_parent(np);
++ reg = of_iomap(parent_np, 0);
++ of_node_put(parent_np);
+ if (!reg) {
+ pr_err("%s: Failed to get base address\n", __func__);
+ return;
+diff --git a/drivers/clk/st/clkgen-mux.c b/drivers/clk/st/clkgen-mux.c
+index ee39af7a0b721..596e939ad905e 100644
+--- a/drivers/clk/st/clkgen-mux.c
++++ b/drivers/clk/st/clkgen-mux.c
+@@ -56,6 +56,7 @@ static void __init st_of_clkgen_mux_setup(struct device_node *np,
+ void __iomem *reg;
+ const char **parents;
+ int num_parents = 0;
++ struct device_node *parent_np;
+
+ /*
+ * First check for reg property within the node to keep backward
+@@ -63,7 +64,9 @@ static void __init st_of_clkgen_mux_setup(struct device_node *np,
+ */
+ reg = of_iomap(np, 0);
+ if (!reg) {
+- reg = of_iomap(of_get_parent(np), 0);
++ parent_np = of_get_parent(np);
++ reg = of_iomap(parent_np, 0);
++ of_node_put(parent_np);
+ if (!reg) {
+ pr_err("%s: Failed to get base address\n", __func__);
+ return;
+diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c
+index ef718c4b38267..f7405a58877e2 100644
+--- a/drivers/clk/tegra/clk-tegra114.c
++++ b/drivers/clk/tegra/clk-tegra114.c
+@@ -1317,6 +1317,7 @@ static void __init tegra114_clock_init(struct device_node *np)
+ }
+
+ pmc_base = of_iomap(node, 0);
++ of_node_put(node);
+ if (!pmc_base) {
+ pr_err("Can't map pmc registers\n");
+ WARN_ON(1);
+diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
+index be3c33441cfc4..8a4514f6d5033 100644
+--- a/drivers/clk/tegra/clk-tegra20.c
++++ b/drivers/clk/tegra/clk-tegra20.c
+@@ -1131,6 +1131,7 @@ static void __init tegra20_clock_init(struct device_node *np)
+ }
+
+ pmc_base = of_iomap(node, 0);
++ of_node_put(node);
+ if (!pmc_base) {
+ pr_err("Can't map pmc registers\n");
+ BUG();
+diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
+index b9099012dc7b1..499f999e91e13 100644
+--- a/drivers/clk/tegra/clk-tegra210.c
++++ b/drivers/clk/tegra/clk-tegra210.c
+@@ -3748,6 +3748,7 @@ static void __init tegra210_clock_init(struct device_node *np)
+ }
+
+ pmc_base = of_iomap(node, 0);
++ of_node_put(node);
+ if (!pmc_base) {
+ pr_err("Can't map pmc registers\n");
+ WARN_ON(1);
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index aa0950c4f4985..5c278d6c985e9 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -253,14 +253,16 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ if (rc) {
+ pr_err("%s: failed to lookup atl clock %d\n", __func__,
+ i);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto pm_put;
+ }
+
+ clk = of_clk_get_from_provider(&clkspec);
+ if (IS_ERR(clk)) {
+ pr_err("%s: failed to get atl clock %d from provider\n",
+ __func__, i);
+- return PTR_ERR(clk);
++ ret = PTR_ERR(clk);
++ goto pm_put;
+ }
+
+ cdesc = to_atl_desc(__clk_get_hw(clk));
+@@ -293,8 +295,9 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ if (cdesc->enabled)
+ atl_clk_enable(__clk_get_hw(clk));
+ }
+- pm_runtime_put_sync(cinfo->dev);
+
++pm_put:
++ pm_runtime_put_sync(cinfo->dev);
+ return ret;
+ }
+
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 121d8610beb15..6b2de32ef88df 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -148,11 +148,12 @@ static struct device_node *ti_find_clock_provider(struct device_node *from,
+ break;
+ }
+ }
+- of_node_put(from);
+ kfree(tmp);
+
+- if (found)
++ if (found) {
++ of_node_put(from);
+ return np;
++ }
+
+ /* Fall back to using old node name base provider name */
+ return of_find_node_by_name(from, name);
+diff --git a/drivers/clk/zynqmp/clkc.c b/drivers/clk/zynqmp/clkc.c
+index eb25303eefed4..2c9da6623b84e 100644
+--- a/drivers/clk/zynqmp/clkc.c
++++ b/drivers/clk/zynqmp/clkc.c
+@@ -710,6 +710,13 @@ static void zynqmp_get_clock_info(void)
+ FIELD_PREP(CLK_ATTR_NODE_INDEX, i);
+
+ zynqmp_pm_clock_get_name(clock[i].clk_id, &name);
++
++ /*
++ * Terminate with NULL character in case name provided by firmware
++ * is longer and truncated due to size limit.
++ */
++ name.name[sizeof(name.name) - 1] = '\0';
++
+ if (!strcmp(name.name, RESERVED_CLK_NAME))
+ continue;
+ strncpy(clock[i].clk_name, name.name, MAX_NAME_LEN);
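[note] The zynqmp hunk above pins a terminating NUL into the last byte of the firmware-supplied name: if the name filled the buffer exactly (or was truncated to fit), the following strcmp()/strncpy() would otherwise read past the end. A runnable demonstration of the guard:

#include <stdio.h>
#include <string.h>

struct fw_name { char name[8]; };

int main(void)
{
    struct fw_name n;

    /* Simulate firmware filling the whole field, no terminator. */
    memcpy(n.name, "abcdefgh", sizeof(n.name));

    /* The fix: force termination before any str* call. */
    n.name[sizeof(n.name) - 1] = '\0';

    printf("safe to print now: %s\n", n.name);  /* "abcdefg" */
    return 0;
}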
+diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
+index 91a6b4cc910eb..0d3e1377b092c 100644
+--- a/drivers/clk/zynqmp/pll.c
++++ b/drivers/clk/zynqmp/pll.c
+@@ -102,26 +102,25 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *prate)
+ {
+ u32 fbdiv;
+- long rate_div, f;
++ u32 mult, div;
+
+- /* Enable the fractional mode if needed */
+- rate_div = (rate * FRAC_DIV) / *prate;
+- f = rate_div % FRAC_DIV;
+- if (f) {
+- if (rate > PS_PLL_VCO_MAX) {
+- fbdiv = rate / PS_PLL_VCO_MAX;
+- rate = rate / (fbdiv + 1);
+- }
+- if (rate < PS_PLL_VCO_MIN) {
+- fbdiv = DIV_ROUND_UP(PS_PLL_VCO_MIN, rate);
+- rate = rate * fbdiv;
+- }
+- return rate;
++ /* Let rate fall inside the range PS_PLL_VCO_MIN ~ PS_PLL_VCO_MAX */
++ if (rate > PS_PLL_VCO_MAX) {
++ div = DIV_ROUND_UP(rate, PS_PLL_VCO_MAX);
++ rate = rate / div;
++ }
++ if (rate < PS_PLL_VCO_MIN) {
++ mult = DIV_ROUND_UP(PS_PLL_VCO_MIN, rate);
++ rate = rate * mult;
+ }
+
+ fbdiv = DIV_ROUND_CLOSEST(rate, *prate);
+- fbdiv = clamp_t(u32, fbdiv, PLL_FBDIV_MIN, PLL_FBDIV_MAX);
+- return *prate * fbdiv;
++ if (fbdiv < PLL_FBDIV_MIN || fbdiv > PLL_FBDIV_MAX) {
++ fbdiv = clamp_t(u32, fbdiv, PLL_FBDIV_MIN, PLL_FBDIV_MAX);
++ rate = *prate * fbdiv;
++ }
++
++ return rate;
+ }
+
+ /**
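[note] The rewritten round_rate above first folds the request into the legal VCO window (dividing down past PS_PLL_VCO_MAX, multiplying up below PS_PLL_VCO_MIN) and then only rewrites the rate when the resulting feedback divider falls outside PLL_FBDIV_MIN..PLL_FBDIV_MAX; otherwise the caller's rate comes back unchanged, unlike the old fractional path. A standalone version of the same math — the MIN/MAX constants below are illustrative placeholders, not necessarily the real Zynq limits:

#include <stdio.h>

#define VCO_MIN   1500000000UL
#define VCO_MAX   3000000000UL
#define FBDIV_MIN 25UL
#define FBDIV_MAX 125UL

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))
#define DIV_ROUND_CLOSEST(n, d) (((n) + (d) / 2) / (d))

static unsigned long pll_round_rate(unsigned long rate, unsigned long prate)
{
    unsigned long fbdiv;

    /* Fold the request into the VCO's legal window. */
    if (rate > VCO_MAX)
        rate /= DIV_ROUND_UP(rate, VCO_MAX);
    if (rate < VCO_MIN)
        rate *= DIV_ROUND_UP(VCO_MIN, rate);

    /* Only adjust further if the feedback divider is out of range. */
    fbdiv = DIV_ROUND_CLOSEST(rate, prate);
    if (fbdiv < FBDIV_MIN || fbdiv > FBDIV_MAX) {
        fbdiv = fbdiv < FBDIV_MIN ? FBDIV_MIN : FBDIV_MAX;
        rate = prate * fbdiv;
    }

    return rate;
}

int main(void)
{
    printf("%lu\n", pll_round_rate(100000000UL, 33333333UL));
    return 0;
}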
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 9ab8221ee3c65..a7ff77550e173 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -44,8 +44,8 @@
+ #define CNTACR_RWVT BIT(4)
+ #define CNTACR_RWPT BIT(5)
+
+-#define CNTVCT_LO 0x00
+-#define CNTPCT_LO 0x08
++#define CNTPCT_LO 0x00
++#define CNTVCT_LO 0x08
+ #define CNTFRQ 0x10
+ #define CNTP_CVAL_LO 0x20
+ #define CNTP_CTL 0x2c
+@@ -473,6 +473,8 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+ .desc = "ARM erratum 858921",
+ .read_cntpct_el0 = arm64_858921_read_cntpct_el0,
+ .read_cntvct_el0 = arm64_858921_read_cntvct_el0,
++ .set_next_event_phys = erratum_set_next_event_phys,
++ .set_next_event_virt = erratum_set_next_event_virt,
+ },
+ #endif
+ #ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
+diff --git a/drivers/clocksource/timer-gxp.c b/drivers/clocksource/timer-gxp.c
+index 8b38b32123880..fe4fa8d7b3f13 100644
+--- a/drivers/clocksource/timer-gxp.c
++++ b/drivers/clocksource/timer-gxp.c
+@@ -171,6 +171,7 @@ static int gxp_timer_probe(struct platform_device *pdev)
+ {
+ struct platform_device *gxp_watchdog_device;
+ struct device *dev = &pdev->dev;
++ int ret;
+
+ if (!gxp_timer) {
+ pr_err("Gxp Timer not initialized, cannot create watchdog");
+@@ -187,7 +188,11 @@ static int gxp_timer_probe(struct platform_device *pdev)
+ gxp_watchdog_device->dev.platform_data = gxp_timer->counter;
+ gxp_watchdog_device->dev.parent = dev;
+
+- return platform_device_add(gxp_watchdog_device);
++ ret = platform_device_add(gxp_watchdog_device);
++ if (ret)
++ platform_device_put(gxp_watchdog_device);
++
++ return ret;
+ }
+
+ static const struct of_device_id gxp_timer_of_match[] = {
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 9ac75c1cde9c2..d63a28c5f95a9 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -152,6 +152,7 @@ static inline int amd_pstate_enable(bool enable)
+ static int pstate_init_perf(struct amd_cpudata *cpudata)
+ {
+ u64 cap1;
++ u32 highest_perf;
+
+ int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
+ &cap1);
+@@ -163,7 +164,11 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
+ *
+ * CPPC entry doesn't indicate the highest performance in some ASICs.
+ */
+- WRITE_ONCE(cpudata->highest_perf, amd_get_highest_perf());
++ highest_perf = amd_get_highest_perf();
++ if (highest_perf > AMD_CPPC_HIGHEST_PERF(cap1))
++ highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
++
++ WRITE_ONCE(cpudata->highest_perf, highest_perf);
+
+ WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1));
+ WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1));
+@@ -175,12 +180,17 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
+ static int cppc_init_perf(struct amd_cpudata *cpudata)
+ {
+ struct cppc_perf_caps cppc_perf;
++ u32 highest_perf;
+
+ int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
+ if (ret)
+ return ret;
+
+- WRITE_ONCE(cpudata->highest_perf, amd_get_highest_perf());
++ highest_perf = amd_get_highest_perf();
++ if (highest_perf > cppc_perf.highest_perf)
++ highest_perf = cppc_perf.highest_perf;
++
++ WRITE_ONCE(cpudata->highest_perf, highest_perf);
+
+ WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf);
+ WRITE_ONCE(cpudata->lowest_nonlinear_perf,
+@@ -312,7 +322,7 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
+ return -ENODEV;
+
+ cap_perf = READ_ONCE(cpudata->highest_perf);
+- min_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
++ min_perf = READ_ONCE(cpudata->lowest_perf);
+ max_perf = cap_perf;
+
+ freqs.old = policy->cur;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 57cdb36798854..fc3ebeb0bbe59 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2416,6 +2416,7 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
+ X86_MATCH(SKYLAKE_X, core_funcs),
+ X86_MATCH(COMETLAKE, core_funcs),
+ X86_MATCH(ICELAKE_X, core_funcs),
++ X86_MATCH(TIGERLAKE, core_funcs),
+ {}
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 36c79580fba25..9817fa8e9e4d7 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -317,14 +317,14 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ if (IS_ERR(opp)) {
+ dev_warn(dev, "Can't find the OPP for throttling: %pe!\n", opp);
+ } else {
+- throttled_freq = freq_hz / HZ_PER_KHZ;
+-
+- /* Update thermal pressure (the boost frequencies are accepted) */
+- arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
+-
+ dev_pm_opp_put(opp);
+ }
+
++ throttled_freq = freq_hz / HZ_PER_KHZ;
++
++ /* Update thermal pressure (the boost frequencies are accepted) */
++ arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
++
+ /*
+ * In the unlikely case policy is unregistered do not enable
+ * polling or h/w interrupt
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index 1151e5e2ba824..33c92fec4365f 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -97,8 +97,13 @@ static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int idx)
+ {
+ u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
++ u32 state = states[idx];
+
+- return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
++ if (state & SBI_HSM_SUSP_NON_RET_BIT)
++ return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, state);
++ else
++ return CPU_PM_CPU_IDLE_ENTER_RETENTION_PARAM(sbi_suspend,
++ idx, state);
+ }
+
+ static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
+diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
+index 8c32d0eb8fcf2..6872ac3440010 100644
+--- a/drivers/crypto/cavium/cpt/cptpf_main.c
++++ b/drivers/crypto/cavium/cpt/cptpf_main.c
+@@ -253,6 +253,7 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+ const struct firmware *fw_entry;
+ struct device *dev = &cpt->pdev->dev;
+ struct ucode_header *ucode;
++ unsigned int code_length;
+ struct microcode *mcode;
+ int j, ret = 0;
+
+@@ -263,11 +264,12 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+ ucode = (struct ucode_header *)fw_entry->data;
+ mcode = &cpt->mcode[cpt->next_mc_idx];
+ memcpy(mcode->version, (u8 *)fw_entry->data, CPT_UCODE_VERSION_SZ);
+- mcode->code_size = ntohl(ucode->code_length) * 2;
+- if (!mcode->code_size) {
++ code_length = ntohl(ucode->code_length);
++ if (code_length == 0 || code_length >= INT_MAX / 2) {
+ ret = -EINVAL;
+ goto fw_release;
+ }
++ mcode->code_size = code_length * 2;
+
+ mcode->is_ae = is_ae;
+ mcode->core_mask = 0ULL;
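[note] Both CPT microcode loaders touched by this patch (cavium here, marvell/octeontx further down) harden the same spot: code_length arrives from the firmware image, so ntohl(...) * 2 could overflow before the old zero-size check ever ran; bounding it against INT_MAX / 2 first makes the multiplication safe. A runnable userspace reduction:

#include <arpa/inet.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static int parse_len(uint32_t wire_be, int *out_size)
{
    uint32_t code_length = ntohl(wire_be);

    if (code_length == 0 || code_length >= INT_MAX / 2)
        return -1;      /* reject: doubling would overflow int */

    *out_size = (int)code_length * 2;
    return 0;
}

int main(void)
{
    int size;

    printf("%d\n", parse_len(htonl(0x80000000u), &size)); /* -1     */
    printf("%d\n", parse_len(htonl(1024u), &size));       /* 0      */
    printf("size = %d\n", size);                          /* 2048   */
    return 0;
}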
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 7d4b4ad1db1f3..9f753cb4f5f18 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -641,6 +641,10 @@ static void ccp_dma_release(struct ccp_device *ccp)
+ for (i = 0; i < ccp->cmd_q_count; i++) {
+ chan = ccp->ccp_dma_chan + i;
+ dma_chan = &chan->dma_chan;
++
++ if (dma_chan->client_count)
++ dma_release_channel(dma_chan);
++
+ tasklet_kill(&chan->cleanup_tasklet);
+ list_del_rcu(&dma_chan->device_node);
+ }
+@@ -766,8 +770,8 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ if (!dmaengine)
+ return;
+
+- dma_async_device_unregister(dma_dev);
+ ccp_dma_release(ccp);
++ dma_async_device_unregister(dma_dev);
+
+ kmem_cache_destroy(ccp->dma_desc_cache);
+ kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 9f588c9728f8b..6c49e6d06114f 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -231,7 +231,7 @@ static int sev_read_init_ex_file(void)
+ return 0;
+ }
+
+-static void sev_write_init_ex_file(void)
++static int sev_write_init_ex_file(void)
+ {
+ struct sev_device *sev = psp_master->sev_data;
+ struct file *fp;
+@@ -241,14 +241,16 @@ static void sev_write_init_ex_file(void)
+ lockdep_assert_held(&sev_cmd_mutex);
+
+ if (!sev_init_ex_buffer)
+- return;
++ return 0;
+
+ fp = open_file_as_root(init_ex_path, O_CREAT | O_WRONLY, 0600);
+ if (IS_ERR(fp)) {
++ int ret = PTR_ERR(fp);
++
+ dev_err(sev->dev,
+- "SEV: could not open file for write, error %ld\n",
+- PTR_ERR(fp));
+- return;
++ "SEV: could not open file for write, error %d\n",
++ ret);
++ return ret;
+ }
+
+ nwrite = kernel_write(fp, sev_init_ex_buffer, NV_LENGTH, &offset);
+@@ -259,18 +261,20 @@ static void sev_write_init_ex_file(void)
+ dev_err(sev->dev,
+ "SEV: failed to write %u bytes to non volatile memory area, ret %ld\n",
+ NV_LENGTH, nwrite);
+- return;
++ return -EIO;
+ }
+
+ dev_dbg(sev->dev, "SEV: write successful to NV file\n");
++
++ return 0;
+ }
+
+-static void sev_write_init_ex_file_if_required(int cmd_id)
++static int sev_write_init_ex_file_if_required(int cmd_id)
+ {
+ lockdep_assert_held(&sev_cmd_mutex);
+
+ if (!sev_init_ex_buffer)
+- return;
++ return 0;
+
+ /*
+ * Only a few platform commands modify the SPI/NV area, but none of the
+@@ -285,10 +289,10 @@ static void sev_write_init_ex_file_if_required(int cmd_id)
+ case SEV_CMD_PEK_GEN:
+ break;
+ default:
+- return;
++ return 0;
+ }
+
+- sev_write_init_ex_file();
++ return sev_write_init_ex_file();
+ }
+
+ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+@@ -361,7 +365,7 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ cmd, reg & PSP_CMDRESP_ERR_MASK);
+ ret = -EIO;
+ } else {
+- sev_write_init_ex_file_if_required(cmd);
++ ret = sev_write_init_ex_file_if_required(cmd);
+ }
+
+ print_hex_dump_debug("(out): ", DUMP_PREFIX_OFFSET, 16, 2, data,
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index b4ca2eb034d7d..eb82e9864d148 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -2229,8 +2229,10 @@ static ssize_t qm_cmd_write(struct file *filp, const char __user *buffer,
+ return ret;
+
+ /* Judge if the instance is being reset. */
+- if (unlikely(atomic_read(&qm->status.flags) == QM_STOP))
+- return 0;
++ if (unlikely(atomic_read(&qm->status.flags) == QM_STOP)) {
++ ret = 0;
++ goto put_dfx_access;
++ }
+
+ if (count > QM_DBG_WRITE_LEN) {
+ ret = -ENOSPC;
+diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
+index 67869513e48c1..d90b10ae005bc 100644
+--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
++++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
+@@ -122,12 +122,12 @@ static int sgl_sge_nr_set(const char *val, const struct kernel_param *kp)
+ if (ret || n == 0 || n > HISI_ACC_SGL_SGE_NR_MAX)
+ return -EINVAL;
+
+- return param_set_int(val, kp);
++ return param_set_ushort(val, kp);
+ }
+
+ static const struct kernel_param_ops sgl_sge_nr_ops = {
+ .set = sgl_sge_nr_set,
+- .get = param_get_int,
++ .get = param_get_ushort,
+ };
+
+ static u16 sgl_sge_nr = HZIP_SGL_SGE_NR;
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index bc60b58022564..2124416742f84 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -383,7 +383,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ u32 x;
+
+ x = ipad[i] ^ ipad[i + 4];
+- cache[i] ^= swab(x);
++ cache[i] ^= swab32(x);
+ }
+ }
+ cache_len = AES_BLOCK_SIZE;
+@@ -821,7 +821,7 @@ static int safexcel_ahash_final(struct ahash_request *areq)
+ u32 *result = (void *)areq->result;
+
+ /* K3 */
+- result[i] = swab(ctx->base.ipad.word[i + 4]);
++ result[i] = swab32(ctx->base.ipad.word[i + 4]);
+ }
+ areq->result[0] ^= 0x80; // 10- padding
+ crypto_cipher_encrypt_one(ctx->kaes, areq->result, areq->result);
+@@ -2106,7 +2106,7 @@ static int safexcel_xcbcmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ crypto_cipher_encrypt_one(ctx->kaes, (u8 *)key_tmp + AES_BLOCK_SIZE,
+ "\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3");
+ for (i = 0; i < 3 * AES_BLOCK_SIZE / sizeof(u32); i++)
+- ctx->base.ipad.word[i] = swab(key_tmp[i]);
++ ctx->base.ipad.word[i] = swab32(key_tmp[i]);
+
+ crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(ctx->kaes, crypto_ahash_get_flags(tfm) &
+@@ -2189,7 +2189,7 @@ static int safexcel_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ return ret;
+
+ for (i = 0; i < len / sizeof(u32); i++)
+- ctx->base.ipad.word[i + 8] = swab(aes.key_enc[i]);
++ ctx->base.ipad.word[i + 8] = swab32(aes.key_enc[i]);
+
+ /* precompute the CMAC key material */
+ crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
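swab() operates at machine-word width, so on a 64-bit kernel it byte-swaps 8 bytes even when handed a u32, which is why the safexcel ipad/key words were coming out wrong; swab32() always swaps exactly 32 bits. A runnable demonstration using the GCC/Clang byte-swap builtins as stand-ins for the kernel helpers:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t x = 0x11223344;

            /* A 32-bit swap keeps all four bytes: */
            printf("%#x\n", __builtin_bswap32(x));       /* 0x44332211 */

            /* Swapping the same value at 64-bit width shifts them out: */
            printf("%#llx\n", (unsigned long long)
                   __builtin_bswap64((uint64_t)x));      /* 0x4433221100000000 */
            return 0;
    }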
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c b/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
+index 40b482198ebc5..a765eefb18c2f 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptpf_ucode.c
+@@ -286,6 +286,7 @@ static int process_tar_file(struct device *dev,
+ struct tar_ucode_info_t *tar_info;
+ struct otx_cpt_ucode_hdr *ucode_hdr;
+ int ucode_type, ucode_size;
++ unsigned int code_length;
+
+ /*
+ * If size is less than microcode header size then don't report
+@@ -303,7 +304,13 @@ static int process_tar_file(struct device *dev,
+ if (get_ucode_type(ucode_hdr, &ucode_type))
+ return 0;
+
+- ucode_size = ntohl(ucode_hdr->code_length) * 2;
++ code_length = ntohl(ucode_hdr->code_length);
++ if (code_length >= INT_MAX / 2) {
++ dev_err(dev, "Invalid code_length %u\n", code_length);
++ return -EINVAL;
++ }
++
++ ucode_size = code_length * 2;
+ if (!ucode_size || (size < round_up(ucode_size, 16) +
+ sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
+ dev_err(dev, "Ucode %s invalid size\n", filename);
+@@ -886,6 +893,7 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
+ {
+ struct otx_cpt_ucode_hdr *ucode_hdr;
+ const struct firmware *fw;
++ unsigned int code_length;
+ int ret;
+
+ set_ucode_filename(ucode, ucode_filename);
+@@ -896,7 +904,13 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
+ ucode_hdr = (struct otx_cpt_ucode_hdr *) fw->data;
+ memcpy(ucode->ver_str, ucode_hdr->ver_str, OTX_CPT_UCODE_VER_STR_SZ);
+ ucode->ver_num = ucode_hdr->ver_num;
+- ucode->size = ntohl(ucode_hdr->code_length) * 2;
++ code_length = ntohl(ucode_hdr->code_length);
++ if (code_length >= INT_MAX / 2) {
++ dev_err(dev, "Ucode invalid code_length %u\n", code_length);
++ ret = -EINVAL;
++ goto release_fw;
++ }
++ ucode->size = code_length * 2;
+ if (!ucode->size || (fw->size < round_up(ucode->size, 16)
+ + sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
+ dev_err(dev, "Ucode %s invalid size\n", ucode_filename);
+diff --git a/drivers/crypto/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen4_hw_data.h
+index 43b8f864806bd..4fb4b3df5a188 100644
+--- a/drivers/crypto/qat/qat_common/adf_gen4_hw_data.h
++++ b/drivers/crypto/qat/qat_common/adf_gen4_hw_data.h
+@@ -107,7 +107,7 @@ do { \
+ * Timeout is in cycles. Clock speed may vary across products but this
+ * value should be a few milli-seconds.
+ */
+-#define ADF_SSM_WDT_DEFAULT_VALUE 0x200000
++#define ADF_SSM_WDT_DEFAULT_VALUE 0x7000000ULL
+ #define ADF_SSM_WDT_PKE_DEFAULT_VALUE 0x8000000
+ #define ADF_SSMWDTL_OFFSET 0x54
+ #define ADF_SSMWDTH_OFFSET 0x5C
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index 148edbe379e31..0828d856d6b00 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -673,11 +673,14 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
+ dma_addr_t blpout = qat_req->buf.bloutp;
+ size_t sz = qat_req->buf.sz;
+ size_t sz_out = qat_req->buf.sz_out;
++ int bl_dma_dir;
+ int i;
+
++ bl_dma_dir = blp != blpout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
++
+ for (i = 0; i < bl->num_bufs; i++)
+ dma_unmap_single(dev, bl->bufers[i].addr,
+- bl->bufers[i].len, DMA_BIDIRECTIONAL);
++ bl->bufers[i].len, bl_dma_dir);
+
+ dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
+
+@@ -691,7 +694,7 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
+ for (i = bufless; i < blout->num_bufs; i++) {
+ dma_unmap_single(dev, blout->bufers[i].addr,
+ blout->bufers[i].len,
+- DMA_BIDIRECTIONAL);
++ DMA_FROM_DEVICE);
+ }
+ dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
+
+@@ -716,6 +719,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ struct scatterlist *sg;
+ size_t sz_out, sz = struct_size(bufl, bufers, n);
+ int node = dev_to_node(&GET_DEV(inst->accel_dev));
++ int bufl_dma_dir;
+
+ if (unlikely(!n))
+ return -EINVAL;
+@@ -733,6 +737,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+ qat_req->buf.sgl_src_valid = true;
+ }
+
++ bufl_dma_dir = sgl != sglout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
++
+ for_each_sg(sgl, sg, n, i)
+ bufl->bufers[i].addr = DMA_MAPPING_ERROR;
+
+@@ -744,7 +750,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+
+ bufl->bufers[y].addr = dma_map_single(dev, sg_virt(sg),
+ sg->length,
+- DMA_BIDIRECTIONAL);
++ bufl_dma_dir);
+ bufl->bufers[y].len = sg->length;
+ if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
+ goto err_in;
+@@ -787,7 +793,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
+
+ bufers[y].addr = dma_map_single(dev, sg_virt(sg),
+ sg->length,
+- DMA_BIDIRECTIONAL);
++ DMA_FROM_DEVICE);
+ if (unlikely(dma_mapping_error(dev, bufers[y].addr)))
+ goto err_out;
+ bufers[y].len = sg->length;
+@@ -817,7 +823,7 @@ err_out:
+ if (!dma_mapping_error(dev, buflout->bufers[i].addr))
+ dma_unmap_single(dev, buflout->bufers[i].addr,
+ buflout->bufers[i].len,
+- DMA_BIDIRECTIONAL);
++ DMA_FROM_DEVICE);
+
+ if (!qat_req->buf.sgl_dst_valid)
+ kfree(buflout);
+@@ -831,7 +837,7 @@ err_in:
+ if (!dma_mapping_error(dev, bufl->bufers[i].addr))
+ dma_unmap_single(dev, bufl->bufers[i].addr,
+ bufl->bufers[i].len,
+- DMA_BIDIRECTIONAL);
++ bufl_dma_dir);
+
+ if (!qat_req->buf.sgl_src_valid)
+ kfree(bufl);
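The qat changes tighten the DMA directions: for an out-of-place request the device only reads the source buffer list and only writes the destination list, so DMA_TO_DEVICE and DMA_FROM_DEVICE suffice, and the cheaper directions avoid needless cache maintenance on non-coherent platforms. DMA_BIDIRECTIONAL remains only for the in-place case, where one list serves both roles. The selection logic as a hedged kernel-style sketch (the driver keys this off sgl != sglout):

    int dir = in_place ? DMA_BIDIRECTIONAL  /* device reads and writes the buffers */
                       : DMA_TO_DEVICE;     /* device only reads the source side */

    addr = dma_map_single(dev, buf, len, dir);
    /* ... */
    dma_unmap_single(dev, addr, len, dir);  /* unmap direction must match the map */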
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 457084b344c17..b07ae4ba165e7 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -26,10 +26,10 @@
+ #include <linux/kernel.h>
+ #include <linux/kthread.h>
+ #include <linux/module.h>
+-#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
++#include <linux/spinlock.h>
+
+ #define SHA_BUFFER_LEN PAGE_SIZE
+ #define SAHARA_MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE
+@@ -196,7 +196,7 @@ struct sahara_dev {
+ void __iomem *regs_base;
+ struct clk *clk_ipg;
+ struct clk *clk_ahb;
+- struct mutex queue_mutex;
++ spinlock_t queue_spinlock;
+ struct task_struct *kthread;
+ struct completion dma_completion;
+
+@@ -642,9 +642,9 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+
+ rctx->mode = mode;
+
+- mutex_lock(&dev->queue_mutex);
++ spin_lock_bh(&dev->queue_spinlock);
+ err = crypto_enqueue_request(&dev->queue, &req->base);
+- mutex_unlock(&dev->queue_mutex);
++ spin_unlock_bh(&dev->queue_spinlock);
+
+ wake_up_process(dev->kthread);
+
+@@ -1043,10 +1043,10 @@ static int sahara_queue_manage(void *data)
+ do {
+ __set_current_state(TASK_INTERRUPTIBLE);
+
+- mutex_lock(&dev->queue_mutex);
++ spin_lock_bh(&dev->queue_spinlock);
+ backlog = crypto_get_backlog(&dev->queue);
+ async_req = crypto_dequeue_request(&dev->queue);
+- mutex_unlock(&dev->queue_mutex);
++ spin_unlock_bh(&dev->queue_spinlock);
+
+ if (backlog)
+ backlog->complete(backlog, -EINPROGRESS);
+@@ -1092,9 +1092,9 @@ static int sahara_sha_enqueue(struct ahash_request *req, int last)
+ rctx->first = 1;
+ }
+
+- mutex_lock(&dev->queue_mutex);
++ spin_lock_bh(&dev->queue_spinlock);
+ ret = crypto_enqueue_request(&dev->queue, &req->base);
+- mutex_unlock(&dev->queue_mutex);
++ spin_unlock_bh(&dev->queue_spinlock);
+
+ wake_up_process(dev->kthread);
+
+@@ -1449,7 +1449,7 @@ static int sahara_probe(struct platform_device *pdev)
+
+ crypto_init_queue(&dev->queue, SAHARA_QUEUE_LENGTH);
+
+- mutex_init(&dev->queue_mutex);
++ spin_lock_init(&dev->queue_spinlock);
+
+ dev_ptr = dev;
+
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 38e8767ec3715..bf11d32205f38 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -124,17 +124,20 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
+ {
+ struct udmabuf *ubuf = buf->priv;
+ struct device *dev = ubuf->device->this_device;
++ int ret = 0;
+
+ if (!ubuf->sg) {
+ ubuf->sg = get_sg_table(dev, buf, direction);
+- if (IS_ERR(ubuf->sg))
+- return PTR_ERR(ubuf->sg);
++ if (IS_ERR(ubuf->sg)) {
++ ret = PTR_ERR(ubuf->sg);
++ ubuf->sg = NULL;
++ }
+ } else {
+ dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
+ direction);
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static int end_cpu_udmabuf(struct dma_buf *buf,
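The udmabuf fix guards a cache-on-first-use pointer: if get_sg_table() fails, the ERR_PTR value must not be left in ubuf->sg, or the next begin_cpu_udmabuf() call would take the "already cached" branch and hand a poisoned pointer to dma_sync_sg_for_cpu(). The general pattern, reduced to a sketch:

    if (!cached) {
            cached = create();              /* may return ERR_PTR(...) */
            if (IS_ERR(cached)) {
                    ret = PTR_ERR(cached);
                    cached = NULL;          /* never cache an error pointer */
            }
    }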
+diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
+index 43817ced3a3e1..0233b42143c77 100644
+--- a/drivers/dma/hisi_dma.c
++++ b/drivers/dma/hisi_dma.c
+@@ -180,7 +180,8 @@ static void hisi_dma_reset_qp_point(struct hisi_dma_dev *hdma_dev, u32 index)
+ hisi_dma_chan_write(hdma_dev->base, HISI_DMA_CQ_HEAD_PTR, index, 0);
+ }
+
+-static void hisi_dma_reset_hw_chan(struct hisi_dma_chan *chan)
++static void hisi_dma_reset_or_disable_hw_chan(struct hisi_dma_chan *chan,
++ bool disable)
+ {
+ struct hisi_dma_dev *hdma_dev = chan->hdma_dev;
+ u32 index = chan->qp_num, tmp;
+@@ -201,8 +202,11 @@ static void hisi_dma_reset_hw_chan(struct hisi_dma_chan *chan)
+ hisi_dma_do_reset(hdma_dev, index);
+ hisi_dma_reset_qp_point(hdma_dev, index);
+ hisi_dma_pause_dma(hdma_dev, index, false);
+- hisi_dma_enable_dma(hdma_dev, index, true);
+- hisi_dma_unmask_irq(hdma_dev, index);
++
++ if (!disable) {
++ hisi_dma_enable_dma(hdma_dev, index, true);
++ hisi_dma_unmask_irq(hdma_dev, index);
++ }
+
+ ret = readl_relaxed_poll_timeout(hdma_dev->base +
+ HISI_DMA_Q_FSM_STS + index * HISI_DMA_OFFSET, tmp,
+@@ -218,7 +222,7 @@ static void hisi_dma_free_chan_resources(struct dma_chan *c)
+ struct hisi_dma_chan *chan = to_hisi_dma_chan(c);
+ struct hisi_dma_dev *hdma_dev = chan->hdma_dev;
+
+- hisi_dma_reset_hw_chan(chan);
++ hisi_dma_reset_or_disable_hw_chan(chan, false);
+ vchan_free_chan_resources(&chan->vc);
+
+ memset(chan->sq, 0, sizeof(struct hisi_dma_sqe) * hdma_dev->chan_depth);
+@@ -267,7 +271,6 @@ static void hisi_dma_start_transfer(struct hisi_dma_chan *chan)
+
+ vd = vchan_next_desc(&chan->vc);
+ if (!vd) {
+- dev_err(&hdma_dev->pdev->dev, "no issued task!\n");
+ chan->desc = NULL;
+ return;
+ }
+@@ -299,7 +302,7 @@ static void hisi_dma_issue_pending(struct dma_chan *c)
+
+ spin_lock_irqsave(&chan->vc.lock, flags);
+
+- if (vchan_issue_pending(&chan->vc))
++ if (vchan_issue_pending(&chan->vc) && !chan->desc)
+ hisi_dma_start_transfer(chan);
+
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+@@ -394,7 +397,7 @@ static void hisi_dma_enable_qp(struct hisi_dma_dev *hdma_dev, u32 qp_index)
+
+ static void hisi_dma_disable_qp(struct hisi_dma_dev *hdma_dev, u32 qp_index)
+ {
+- hisi_dma_reset_hw_chan(&hdma_dev->chan[qp_index]);
++ hisi_dma_reset_or_disable_hw_chan(&hdma_dev->chan[qp_index], true);
+ }
+
+ static void hisi_dma_enable_qps(struct hisi_dma_dev *hdma_dev)
+@@ -432,18 +435,15 @@ static irqreturn_t hisi_dma_irq(int irq, void *data)
+ desc = chan->desc;
+ cqe = chan->cq + chan->cq_head;
+ if (desc) {
++ chan->cq_head = (chan->cq_head + 1) % hdma_dev->chan_depth;
++ hisi_dma_chan_write(hdma_dev->base, HISI_DMA_CQ_HEAD_PTR,
++ chan->qp_num, chan->cq_head);
+ if (FIELD_GET(STATUS_MASK, cqe->w0) == STATUS_SUCC) {
+- chan->cq_head = (chan->cq_head + 1) %
+- hdma_dev->chan_depth;
+- hisi_dma_chan_write(hdma_dev->base,
+- HISI_DMA_CQ_HEAD_PTR, chan->qp_num,
+- chan->cq_head);
+ vchan_cookie_complete(&desc->vd);
++ hisi_dma_start_transfer(chan);
+ } else {
+ dev_err(&hdma_dev->pdev->dev, "task error!\n");
+ }
+-
+- chan->desc = NULL;
+ }
+
+ spin_unlock(&chan->vc.lock);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 743ead5ebc579..5b9921475be6c 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -324,13 +324,11 @@ halt:
+ idxd->state = IDXD_DEV_HALTED;
+ idxd_wqs_quiesce(idxd);
+ idxd_wqs_unmap_portal(idxd);
+- spin_lock(&idxd->dev_lock);
+ idxd_device_clear_state(idxd);
+ dev_err(&idxd->pdev->dev,
+ "idxd halted, need %s.\n",
+ gensts.reset_type == IDXD_DEVICE_RESET_FLR ?
+ "FLR" : "system reset");
+- spin_unlock(&idxd->dev_lock);
+ return -ENXIO;
+ }
+ }
+diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
+index 37ff4ec7db76f..e2070df6cad28 100644
+--- a/drivers/dma/ioat/dma.c
++++ b/drivers/dma/ioat/dma.c
+@@ -656,7 +656,7 @@ static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete)
+ if (active - i == 0) {
+ dev_dbg(to_dev(ioat_chan), "%s: cancel completion timeout\n",
+ __func__);
+- mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++ mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ }
+
+ /* microsecond delay by sysfs variable per pending descriptor */
+@@ -682,7 +682,7 @@ static void ioat_cleanup(struct ioatdma_chan *ioat_chan)
+
+ if (chanerr &
+ (IOAT_CHANERR_HANDLE_MASK | IOAT_CHANERR_RECOVER_MASK)) {
+- mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++ mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ ioat_eh(ioat_chan);
+ }
+ }
+@@ -879,7 +879,7 @@ static void check_active(struct ioatdma_chan *ioat_chan)
+ }
+
+ if (test_and_clear_bit(IOAT_CHAN_ACTIVE, &ioat_chan->state))
+- mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
++ mod_timer_pending(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
+ }
+
+ static void ioat_reboot_chan(struct ioatdma_chan *ioat_chan)
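All three ioat hunks swap mod_timer() for mod_timer_pending(). The distinction: mod_timer() (re)arms the timer unconditionally, so a cleanup path racing with shutdown can resurrect a watchdog that del_timer_sync() already killed; mod_timer_pending() only updates the expiry of a timer that is still pending and is a no-op otherwise, so a stopped timer stays stopped.

    mod_timer(&t, jiffies + TIMEOUT);         /* arms even after deletion */
    mod_timer_pending(&t, jiffies + TIMEOUT); /* no-op once t was deleted */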
+diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
+index 994fc4d2aca42..dc147cc2436e9 100644
+--- a/drivers/dma/mxs-dma.c
++++ b/drivers/dma/mxs-dma.c
+@@ -670,7 +670,7 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
+ return mxs_chan->status;
+ }
+
+-static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
++static int mxs_dma_init(struct mxs_dma_engine *mxs_dma)
+ {
+ int ret;
+
+@@ -741,7 +741,7 @@ static struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec,
+ ofdma->of_node);
+ }
+
+-static int __init mxs_dma_probe(struct platform_device *pdev)
++static int mxs_dma_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
+ const struct mxs_dma_type *dma_type;
+@@ -839,10 +839,7 @@ static struct platform_driver mxs_dma_driver = {
+ .name = "mxs-dma",
+ .of_match_table = mxs_dma_dt_ids,
+ },
++ .probe = mxs_dma_probe,
+ };
+
+-static int __init mxs_dma_module_init(void)
+-{
+- return platform_driver_probe(&mxs_dma_driver, mxs_dma_probe);
+-}
+-subsys_initcall(mxs_dma_module_init);
++builtin_platform_driver(mxs_dma_driver);
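The mxs-dma change drops the __init annotations along with the platform_driver_probe() registration: __init code is freed after boot, so a later (for example, deferred) probe would jump into released memory. Keeping a normal .probe and registering with builtin_platform_driver(), which expands to a device_initcall that runs platform_driver_register(), makes the driver safe to probe at any time:

    static struct platform_driver mxs_dma_driver = {
            .driver = { .name = "mxs-dma" },
            .probe  = mxs_dma_probe,   /* no __init: may run after boot */
    };
    builtin_platform_driver(mxs_dma_driver);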
+diff --git a/drivers/dma/qcom/qcom_adm.c b/drivers/dma/qcom/qcom_adm.c
+index facdacf8aede6..d56caf1681ffb 100644
+--- a/drivers/dma/qcom/qcom_adm.c
++++ b/drivers/dma/qcom/qcom_adm.c
+@@ -379,13 +379,13 @@ static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
+ if (blk_size < 0) {
+ dev_err(adev->dev, "invalid burst value: %d\n",
+ burst);
+- return ERR_PTR(-EINVAL);
++ return NULL;
+ }
+
+ crci = achan->crci & 0xf;
+ if (!crci || achan->crci > 0x1f) {
+ dev_err(adev->dev, "invalid crci value\n");
+- return ERR_PTR(-EINVAL);
++ return NULL;
+ }
+ }
+
+@@ -403,8 +403,10 @@ static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
+ }
+
+ async_desc = kzalloc(sizeof(*async_desc), GFP_NOWAIT);
+- if (!async_desc)
+- return ERR_PTR(-ENOMEM);
++ if (!async_desc) {
++ dev_err(adev->dev, "not enough memory for async_desc struct\n");
++ return NULL;
++ }
+
+ async_desc->mux = achan->mux ? ADM_CRCI_CTL_MUX_SEL : 0;
+ async_desc->crci = crci;
+@@ -414,8 +416,10 @@ static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
+ sizeof(*cple) + 2 * ADM_DESC_ALIGN;
+
+ async_desc->cpl = kzalloc(async_desc->dma_len, GFP_NOWAIT);
+- if (!async_desc->cpl)
++ if (!async_desc->cpl) {
++ dev_err(adev->dev, "not enough memory for cpl struct\n");
+ goto free;
++ }
+
+ async_desc->adev = adev;
+
+@@ -437,8 +441,10 @@ static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
+ async_desc->dma_addr = dma_map_single(adev->dev, async_desc->cpl,
+ async_desc->dma_len,
+ DMA_TO_DEVICE);
+- if (dma_mapping_error(adev->dev, async_desc->dma_addr))
++ if (dma_mapping_error(adev->dev, async_desc->dma_addr)) {
++ dev_err(adev->dev, "dma mapping error for cpl\n");
+ goto free;
++ }
+
+ cple_addr = async_desc->dma_addr + ((void *)cple - async_desc->cpl);
+
+@@ -454,7 +460,7 @@ static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
+
+ free:
+ kfree(async_desc);
+- return ERR_PTR(-ENOMEM);
++ return NULL;
+ }
+
+ /**
+@@ -494,7 +500,7 @@ static int adm_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
+
+ spin_lock_irqsave(&achan->vc.lock, flag);
+ memcpy(&achan->slave, cfg, sizeof(struct dma_slave_config));
+- if (cfg->peripheral_size == sizeof(config))
++ if (cfg->peripheral_size == sizeof(*config))
+ achan->crci = config->crci;
+ spin_unlock_irqrestore(&achan->vc.lock, flag);
+
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 2f0d2c68c93c6..fcfcde947b307 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -300,8 +300,6 @@ struct udma_chan {
+
+ struct udma_tx_drain tx_drain;
+
+- u32 bcnt; /* number of bytes completed since the start of the channel */
+-
+ /* Channel configuration parameters */
+ struct udma_chan_config config;
+
+@@ -757,6 +755,20 @@ static void udma_reset_rings(struct udma_chan *uc)
+ }
+ }
+
++static void udma_decrement_byte_counters(struct udma_chan *uc, u32 val)
++{
++ if (uc->desc->dir == DMA_DEV_TO_MEM) {
++ udma_rchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
++ udma_rchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
++ udma_rchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
++ } else {
++ udma_tchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
++ udma_tchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
++ if (!uc->bchan)
++ udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
++ }
++}
++
+ static void udma_reset_counters(struct udma_chan *uc)
+ {
+ u32 val;
+@@ -790,8 +802,6 @@ static void udma_reset_counters(struct udma_chan *uc)
+ val = udma_rchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG);
+ udma_rchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
+ }
+-
+- uc->bcnt = 0;
+ }
+
+ static int udma_reset_chan(struct udma_chan *uc, bool hard)
+@@ -1115,7 +1125,7 @@ static void udma_check_tx_completion(struct work_struct *work)
+ if (uc->desc) {
+ struct udma_desc *d = uc->desc;
+
+- uc->bcnt += d->residue;
++ udma_decrement_byte_counters(uc, d->residue);
+ udma_start(uc);
+ vchan_cookie_complete(&d->vd);
+ break;
+@@ -1168,7 +1178,7 @@ static irqreturn_t udma_ring_irq_handler(int irq, void *data)
+ vchan_cyclic_callback(&d->vd);
+ } else {
+ if (udma_is_desc_really_done(uc, d)) {
+- uc->bcnt += d->residue;
++ udma_decrement_byte_counters(uc, d->residue);
+ udma_start(uc);
+ vchan_cookie_complete(&d->vd);
+ } else {
+@@ -1204,7 +1214,7 @@ static irqreturn_t udma_udma_irq_handler(int irq, void *data)
+ vchan_cyclic_callback(&d->vd);
+ } else {
+ /* TODO: figure out the real amount of data */
+- uc->bcnt += d->residue;
++ udma_decrement_byte_counters(uc, d->residue);
+ udma_start(uc);
+ vchan_cookie_complete(&d->vd);
+ }
+@@ -3809,7 +3819,6 @@ static enum dma_status udma_tx_status(struct dma_chan *chan,
+ bcnt = udma_tchanrt_read(uc, UDMA_CHAN_RT_BCNT_REG);
+ }
+
+- bcnt -= uc->bcnt;
+ if (bcnt && !(bcnt % uc->desc->residue))
+ residue = 0;
+ else
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index fe567be0f118b..804f542be3f28 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -280,14 +280,6 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
+ goto fail;
+ }
+
+- /*
+- * Now that we have done our final memory allocation (and free)
+- * we can get the memory map key needed for exit_boot_services().
+- */
+- status = efi_get_memory_map(&map);
+- if (status != EFI_SUCCESS)
+- goto fail_free_new_fdt;
+-
+ status = update_fdt((void *)fdt_addr, fdt_size,
+ (void *)*new_fdt_addr, MAX_FDT_SIZE, cmdline_ptr,
+ initrd_addr, initrd_size);
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index adaa492c3d2df..4e2575dfeb908 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -681,6 +681,15 @@ static struct notifier_block gsmi_die_notifier = {
+ static int gsmi_panic_callback(struct notifier_block *nb,
+ unsigned long reason, void *arg)
+ {
++
++ /*
++ * Panic callbacks are executed with all other CPUs stopped,
++ * so we must not attempt to spin waiting for gsmi_dev.lock
++ * to be released.
++ */
++ if (spin_is_locked(&gsmi_dev.lock))
++ return NOTIFY_DONE;
++
+ gsmi_shutdown_reason(GSMI_SHUTDOWN_PANIC);
+ return NOTIFY_DONE;
+ }
+diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
+index 6bff39ff21a0d..eabaf495a481e 100644
+--- a/drivers/fpga/dfl.c
++++ b/drivers/fpga/dfl.c
+@@ -1866,7 +1866,7 @@ long dfl_feature_ioctl_set_irq(struct platform_device *pdev,
+ return -EINVAL;
+
+ fds = memdup_user((void __user *)(arg + sizeof(hdr)),
+- hdr.count * sizeof(s32));
++ array_size(hdr.count, sizeof(s32)));
+ if (IS_ERR(fds))
+ return PTR_ERR(fds);
+
+diff --git a/drivers/fsi/fsi-core.c b/drivers/fsi/fsi-core.c
+index 3a7b78e367011..5858e6339a10b 100644
+--- a/drivers/fsi/fsi-core.c
++++ b/drivers/fsi/fsi-core.c
+@@ -1314,6 +1314,9 @@ int fsi_master_register(struct fsi_master *master)
+
+ mutex_init(&master->scan_lock);
+ master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL);
++ if (master->idx < 0)
++ return master->idx;
++
+ dev_set_name(&master->dev, "fsi%d", master->idx);
+ master->dev.class = &fsi_master_class;
+
+diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
+index c9cc75fbdfb9d..28c176d038a26 100644
+--- a/drivers/fsi/fsi-occ.c
++++ b/drivers/fsi/fsi-occ.c
+@@ -94,6 +94,7 @@ static int occ_open(struct inode *inode, struct file *file)
+ client->occ = occ;
+ mutex_init(&client->lock);
+ file->private_data = client;
++ get_device(occ->dev);
+
+ /* We allocate a 1-page buffer, make sure it all fits */
+ BUILD_BUG_ON((OCC_CMD_DATA_BYTES + 3) > PAGE_SIZE);
+@@ -197,6 +198,7 @@ static int occ_release(struct inode *inode, struct file *file)
+ {
+ struct occ_client *client = file->private_data;
+
++ put_device(client->occ->dev);
+ free_page((unsigned long)client->buffer);
+ kfree(client);
+
+@@ -493,12 +495,19 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
+ for (i = 1; i < req_len - 2; ++i)
+ checksum += byte_request[i];
+
+- mutex_lock(&occ->occ_lock);
++ rc = mutex_lock_interruptible(&occ->occ_lock);
++ if (rc)
++ return rc;
+
+ occ->client_buffer = response;
+ occ->client_buffer_size = user_resp_len;
+ occ->client_response_size = 0;
+
++ if (!occ->buffer) {
++ rc = -ENOENT;
++ goto done;
++ }
++
+ /*
+ * Get a sequence number and update the counter. Avoid a sequence
+ * number of 0 which would pass the response check below even if the
+@@ -671,10 +680,13 @@ static int occ_remove(struct platform_device *pdev)
+ {
+ struct occ *occ = platform_get_drvdata(pdev);
+
+- kvfree(occ->buffer);
+-
+ misc_deregister(&occ->mdev);
+
++ mutex_lock(&occ->occ_lock);
++ kvfree(occ->buffer);
++ occ->buffer = NULL;
++ mutex_unlock(&occ->occ_lock);
++
+ device_for_each_child(&pdev->dev, NULL, occ_unregister_child);
+
+ ida_simple_remove(&occ_ida, occ->idx);
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index bb953f6478647..fd31e36f5b9a5 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -19,6 +19,7 @@
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
++#include <linux/pinctrl/consumer.h>
+ #include <linux/pinctrl/pinconf-generic.h>
+ #include <linux/regmap.h>
+
+@@ -155,6 +156,12 @@ static int rockchip_gpio_set_direction(struct gpio_chip *chip,
+ unsigned long flags;
+ u32 data = input ? 0 : 1;
+
++
++ if (input)
++ pinctrl_gpio_direction_input(bank->pin_base + offset);
++ else
++ pinctrl_gpio_direction_output(bank->pin_base + offset);
++
+ raw_spin_lock_irqsave(&bank->slock, flags);
+ rockchip_gpio_writel_bit(bank, offset, data, bank->gpio_regs->port_ddr);
+ raw_spin_unlock_irqrestore(&bank->slock, flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index b7933c2ce765c..491d4846fc02c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -1674,10 +1674,12 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ adev->mode_info.dither_property,
+ AMDGPU_FMT_DITHER_DISABLE);
+
+- if (amdgpu_audio != 0)
++ if (amdgpu_audio != 0) {
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.audio_property,
+ AMDGPU_AUDIO_AUTO);
++ amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
++ }
+
+ subpixel_order = SubPixelHorizontalRGB;
+ connector->interlace_allowed = true;
+@@ -1799,6 +1801,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.audio_property,
+ AMDGPU_AUDIO_AUTO);
++ amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ }
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.dither_property,
+@@ -1852,6 +1855,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.audio_property,
+ AMDGPU_AUDIO_AUTO);
++ amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ }
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.dither_property,
+@@ -1902,6 +1906,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.audio_property,
+ AMDGPU_AUDIO_AUTO);
++ amdgpu_connector->audio = AMDGPU_AUDIO_AUTO;
+ }
+ drm_object_attach_property(&amdgpu_connector->base.base,
+ adev->mode_info.dither_property,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 0a8c15c3a04c3..7e6c59ec3018e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -35,8 +35,6 @@
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ #include <drm/drm_crtc_helper.h>
+-#include <drm/drm_damage_helper.h>
+-#include <drm/drm_drv.h>
+ #include <drm/drm_edid.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_fb_helper.h>
+@@ -497,12 +495,6 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
+ .create_handle = drm_gem_fb_create_handle,
+ };
+
+-static const struct drm_framebuffer_funcs amdgpu_fb_funcs_atomic = {
+- .destroy = drm_gem_fb_destroy,
+- .create_handle = drm_gem_fb_create_handle,
+- .dirty = drm_atomic_helper_dirtyfb,
+-};
+-
+ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
+ uint64_t bo_flags)
+ {
+@@ -1077,10 +1069,8 @@ static int amdgpu_display_gem_fb_verify_and_init(struct drm_device *dev,
+ if (ret)
+ goto err;
+
+- if (drm_drv_uses_atomic_modeset(dev))
+- ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs_atomic);
+- else
+- ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
++ ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
++
+ if (ret)
+ goto err;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 8890300766a5b..5e8ca32bc3a90 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2548,8 +2548,11 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
+ amdgpu_device_baco_exit(drm_dev);
+ }
+ ret = amdgpu_device_resume(drm_dev, false);
+- if (ret)
++ if (ret) {
++ if (amdgpu_device_supports_px(drm_dev))
++ pci_disable_device(pdev);
+ return ret;
++ }
+
+ if (amdgpu_device_supports_px(drm_dev))
+ drm_dev->switch_power_state = DRM_SWITCH_POWER_ON;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+index 1fd3cbca20a29..718db7d98e5a3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+@@ -211,12 +211,15 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p,
+ int r;
+
+ /* Wait for PD/PT moves to be completed */
+- dma_resv_for_each_fence(&cursor, bo->tbo.base.resv,
+- DMA_RESV_USAGE_KERNEL, fence) {
++ dma_resv_iter_begin(&cursor, bo->tbo.base.resv, DMA_RESV_USAGE_KERNEL);
++ dma_resv_for_each_fence_unlocked(&cursor, fence) {
+ r = amdgpu_sync_fence(&p->job->sync, fence);
+- if (r)
++ if (r) {
++ dma_resv_iter_end(&cursor);
+ return r;
++ }
+ }
++ dma_resv_iter_end(&cursor);
+
+ do {
+ ndw = p->num_dw_left;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
+index bc11b2de37aeb..a1d26c4d80b8c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
+@@ -169,17 +169,17 @@ static void mmhub_v3_0_init_system_aperture_regs(struct amdgpu_device *adev)
+ uint64_t value;
+ uint32_t tmp;
+
+- /* Disable AGP. */
+- WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_BASE, 0);
+- WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_TOP, 0);
+- WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_BOT, 0x00FFFFFF);
+-
+ if (!amdgpu_sriov_vf(adev)) {
+ /*
+ * the new L1 policy will block SRIOV guest from writing
+ * these regs, and they will be programed at host.
+ * so skip programing these regs.
+ */
++ /* Disable AGP. */
++ WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_BASE, 0);
++ WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_TOP, 0);
++ WREG32_SOC15(MMHUB, 0, regMMMC_VM_AGP_BOT, 0x00FFFFFF);
++
+ /* Program the system aperture low logical page number. */
+ WREG32_SOC15(MMHUB, 0, regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+ adev->gmc.vram_start >> 18);
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index 8d5c452a91007..6d3bfb0f03469 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -551,6 +551,10 @@ static int soc21_common_early_init(void *handle)
+ AMD_PG_SUPPORT_JPEG |
+ AMD_PG_SUPPORT_ATHUB |
+ AMD_PG_SUPPORT_MMHUB;
++ if (amdgpu_sriov_vf(adev)) {
++ adev->cg_flags = 0;
++ adev->pg_flags = 0;
++ }
+ adev->external_rev_id = adev->rev_id + 0x1; // TODO: need update
+ break;
+ case IP_VERSION(11, 0, 2):
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index e1797657b04c7..7d3fc5849466f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1232,6 +1232,24 @@ static void init_interrupts(struct device_queue_manager *dqm)
+ dqm->dev->kfd2kgd->init_interrupts(dqm->dev->adev, i);
+ }
+
++static void init_sdma_bitmaps(struct device_queue_manager *dqm)
++{
++ unsigned int num_sdma_queues =
++ min_t(unsigned int, sizeof(dqm->sdma_bitmap)*8,
++ get_num_sdma_queues(dqm));
++ unsigned int num_xgmi_sdma_queues =
++ min_t(unsigned int, sizeof(dqm->xgmi_sdma_bitmap)*8,
++ get_num_xgmi_sdma_queues(dqm));
++
++ if (num_sdma_queues)
++ dqm->sdma_bitmap = GENMASK_ULL(num_sdma_queues-1, 0);
++ if (num_xgmi_sdma_queues)
++ dqm->xgmi_sdma_bitmap = GENMASK_ULL(num_xgmi_sdma_queues-1, 0);
++
++ dqm->sdma_bitmap &= ~get_reserved_sdma_queues_bitmap(dqm);
++ pr_info("sdma_bitmap: %llx\n", dqm->sdma_bitmap);
++}
++
+ static int initialize_nocpsch(struct device_queue_manager *dqm)
+ {
+ int pipe, queue;
+@@ -1260,11 +1278,7 @@ static int initialize_nocpsch(struct device_queue_manager *dqm)
+
+ memset(dqm->vmid_pasid, 0, sizeof(dqm->vmid_pasid));
+
+- dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
+- dqm->sdma_bitmap &= ~(get_reserved_sdma_queues_bitmap(dqm));
+- pr_info("sdma_bitmap: %llx\n", dqm->sdma_bitmap);
+-
+- dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - get_num_xgmi_sdma_queues(dqm));
++ init_sdma_bitmaps(dqm);
+
+ return 0;
+ }
+@@ -1442,9 +1456,6 @@ static int set_sched_resources(struct device_queue_manager *dqm)
+
+ static int initialize_cpsch(struct device_queue_manager *dqm)
+ {
+- uint64_t num_sdma_queues;
+- uint64_t num_xgmi_sdma_queues;
+-
+ pr_debug("num of pipes: %d\n", get_pipes_per_mec(dqm));
+
+ mutex_init(&dqm->lock_hidden);
+@@ -1453,24 +1464,10 @@ static int initialize_cpsch(struct device_queue_manager *dqm)
+ dqm->active_cp_queue_count = 0;
+ dqm->gws_queue_count = 0;
+ dqm->active_runlist = false;
+-
+- num_sdma_queues = get_num_sdma_queues(dqm);
+- if (num_sdma_queues >= BITS_PER_TYPE(dqm->sdma_bitmap))
+- dqm->sdma_bitmap = ULLONG_MAX;
+- else
+- dqm->sdma_bitmap = (BIT_ULL(num_sdma_queues) - 1);
+-
+- dqm->sdma_bitmap &= ~(get_reserved_sdma_queues_bitmap(dqm));
+- pr_info("sdma_bitmap: %llx\n", dqm->sdma_bitmap);
+-
+- num_xgmi_sdma_queues = get_num_xgmi_sdma_queues(dqm);
+- if (num_xgmi_sdma_queues >= BITS_PER_TYPE(dqm->xgmi_sdma_bitmap))
+- dqm->xgmi_sdma_bitmap = ULLONG_MAX;
+- else
+- dqm->xgmi_sdma_bitmap = (BIT_ULL(num_xgmi_sdma_queues) - 1);
+-
+ INIT_WORK(&dqm->hw_exception_work, kfd_process_hw_exception);
+
++ init_sdma_bitmaps(dqm);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index c781f92db9590..42a8ebc597230 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1363,13 +1363,21 @@ static struct hpd_rx_irq_offload_work_queue *hpd_rx_irq_create_workqueue(struct
+
+ if (hpd_rx_offload_wq[i].wq == NULL) {
+ DRM_ERROR("create amdgpu_dm_hpd_rx_offload_wq fail!");
+- return NULL;
++ goto out_err;
+ }
+
+ spin_lock_init(&hpd_rx_offload_wq[i].offload_lock);
+ }
+
+ return hpd_rx_offload_wq;
++
++out_err:
++ for (i = 0; i < max_caps; i++) {
++ if (hpd_rx_offload_wq[i].wq)
++ destroy_workqueue(hpd_rx_offload_wq[i].wq);
++ }
++ kfree(hpd_rx_offload_wq);
++ return NULL;
+ }
+
+ struct amdgpu_stutter_quirk {
+@@ -9157,15 +9165,15 @@ static void amdgpu_dm_handle_vrr_transition(struct dm_crtc_state *old_state,
+ * We also need vupdate irq for the actual core vblank handling
+ * at end of vblank.
+ */
+- dm_set_vupdate_irq(new_state->base.crtc, true);
+- drm_crtc_vblank_get(new_state->base.crtc);
++ WARN_ON(dm_set_vupdate_irq(new_state->base.crtc, true) != 0);
++ WARN_ON(drm_crtc_vblank_get(new_state->base.crtc) != 0);
+ DRM_DEBUG_DRIVER("%s: crtc=%u VRR off->on: Get vblank ref\n",
+ __func__, new_state->base.crtc->base.id);
+ } else if (old_vrr_active && !new_vrr_active) {
+ /* Transition VRR active -> inactive:
+ * Allow vblank irq disable again for fixed refresh rate.
+ */
+- dm_set_vupdate_irq(new_state->base.crtc, false);
++ WARN_ON(dm_set_vupdate_irq(new_state->base.crtc, false) != 0);
+ drm_crtc_vblank_put(new_state->base.crtc);
+ DRM_DEBUG_DRIVER("%s: crtc=%u VRR on->off: Drop vblank ref\n",
+ __func__, new_state->base.crtc->base.id);
+@@ -9916,23 +9924,6 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ mutex_unlock(&dm->dc_lock);
+ }
+
+- /* Count number of newly disabled CRTCs for dropping PM refs later. */
+- for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
+- new_crtc_state, i) {
+- if (old_crtc_state->active && !new_crtc_state->active)
+- crtc_disable_count++;
+-
+- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+- dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+-
+- /* For freesync config update on crtc state and params for irq */
+- update_stream_irq_parameters(dm, dm_new_crtc_state);
+-
+- /* Handle vrr on->off / off->on transitions */
+- amdgpu_dm_handle_vrr_transition(dm_old_crtc_state,
+- dm_new_crtc_state);
+- }
+-
+ /**
+ * Enable interrupts for CRTCs that are newly enabled or went through
+ * a modeset. It was intentionally deferred until after the front end
+@@ -9942,16 +9933,29 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+ #ifdef CONFIG_DEBUG_FS
+- bool configure_crc = false;
+ enum amdgpu_dm_pipe_crc_source cur_crc_src;
+ #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+- struct crc_rd_work *crc_rd_wrk = dm->crc_rd_wrk;
++ struct crc_rd_work *crc_rd_wrk;
++#endif
++#endif
++ /* Count number of newly disabled CRTCs for dropping PM refs later. */
++ if (old_crtc_state->active && !new_crtc_state->active)
++ crtc_disable_count++;
++
++ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++ dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
++
++ /* For freesync config update on crtc state and params for irq */
++ update_stream_irq_parameters(dm, dm_new_crtc_state);
++
++#ifdef CONFIG_DEBUG_FS
++#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
++ crc_rd_wrk = dm->crc_rd_wrk;
+ #endif
+ spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+ cur_crc_src = acrtc->dm_irq_params.crc_src;
+ spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
+ #endif
+- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+
+ if (new_crtc_state->active &&
+ (!old_crtc_state->active ||
+@@ -9959,16 +9963,19 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ dc_stream_retain(dm_new_crtc_state->stream);
+ acrtc->dm_irq_params.stream = dm_new_crtc_state->stream;
+ manage_dm_interrupts(adev, acrtc, true);
++ }
++ /* Handle vrr on->off / off->on transitions */
++ amdgpu_dm_handle_vrr_transition(dm_old_crtc_state, dm_new_crtc_state);
+
+ #ifdef CONFIG_DEBUG_FS
++ if (new_crtc_state->active &&
++ (!old_crtc_state->active ||
++ drm_atomic_crtc_needs_modeset(new_crtc_state))) {
+ /**
+ * Frontend may have changed so reapply the CRC capture
+ * settings for the stream.
+ */
+- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+-
+ if (amdgpu_dm_is_valid_crc_source(cur_crc_src)) {
+- configure_crc = true;
+ #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+ if (amdgpu_dm_crc_window_is_activated(crtc)) {
+ spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+@@ -9980,14 +9987,12 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
+ }
+ #endif
+- }
+-
+- if (configure_crc)
+ if (amdgpu_dm_crtc_configure_crc_source(
+ crtc, dm_new_crtc_state, cur_crc_src))
+ DRM_DEBUG_DRIVER("Failed to configure crc source");
+-#endif
++ }
+ }
++#endif
+ }
+
+ for_each_new_crtc_in_state(state, crtc, new_crtc_state, j)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index 141fd2721501e..5b804f81db814 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -60,11 +60,15 @@ static bool link_supports_psrsu(struct dc_link *link)
+ */
+ void amdgpu_dm_set_psr_caps(struct dc_link *link)
+ {
+- if (!(link->connector_signal & SIGNAL_TYPE_EDP))
++ if (!(link->connector_signal & SIGNAL_TYPE_EDP)) {
++ link->psr_settings.psr_feature_enabled = false;
+ return;
++ }
+
+- if (link->type == dc_connection_none)
++ if (link->type == dc_connection_none) {
++ link->psr_settings.psr_feature_enabled = false;
+ return;
++ }
+
+ if (link->dpcd_caps.psr_info.psr_version == 0) {
+ link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 9dbd965d8afb3..6ca29b887fce9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2632,11 +2632,8 @@ static void copy_stream_update_to_stream(struct dc *dc,
+ if (update->abm_level)
+ stream->abm_level = *update->abm_level;
+
+- if (update->periodic_interrupt0)
+- stream->periodic_interrupt0 = *update->periodic_interrupt0;
+-
+- if (update->periodic_interrupt1)
+- stream->periodic_interrupt1 = *update->periodic_interrupt1;
++ if (update->periodic_interrupt)
++ stream->periodic_interrupt = *update->periodic_interrupt;
+
+ if (update->gamut_remap)
+ stream->gamut_remap_matrix = *update->gamut_remap;
+@@ -2723,13 +2720,8 @@ static void commit_planes_do_stream_update(struct dc *dc,
+
+ if (!pipe_ctx->top_pipe && !pipe_ctx->prev_odm_pipe && pipe_ctx->stream == stream) {
+
+- if (stream_update->periodic_interrupt0 &&
+- dc->hwss.setup_periodic_interrupt)
+- dc->hwss.setup_periodic_interrupt(dc, pipe_ctx, VLINE0);
+-
+- if (stream_update->periodic_interrupt1 &&
+- dc->hwss.setup_periodic_interrupt)
+- dc->hwss.setup_periodic_interrupt(dc, pipe_ctx, VLINE1);
++ if (stream_update->periodic_interrupt && dc->hwss.setup_periodic_interrupt)
++ dc->hwss.setup_periodic_interrupt(dc, pipe_ctx);
+
+ if ((stream_update->hdr_static_metadata && !stream->use_dynamic_meta) ||
+ stream_update->vrr_infopacket ||
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 58941f4defb35..a7f319d404a1f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -200,8 +200,7 @@ struct dc_stream_state {
+ /* DMCU info */
+ unsigned int abm_level;
+
+- struct periodic_interrupt_config periodic_interrupt0;
+- struct periodic_interrupt_config periodic_interrupt1;
++ struct periodic_interrupt_config periodic_interrupt;
+
+ /* from core_stream struct */
+ struct dc_context *ctx;
+@@ -268,8 +267,7 @@ struct dc_stream_update {
+ struct dc_info_packet *hdr_static_metadata;
+ unsigned int *abm_level;
+
+- struct periodic_interrupt_config *periodic_interrupt0;
+- struct periodic_interrupt_config *periodic_interrupt1;
++ struct periodic_interrupt_config *periodic_interrupt;
+
+ struct dc_info_packet *vrr_infopacket;
+ struct dc_info_packet *vsc_infopacket;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index d9ab279915355..33c87e53b6a3f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3623,7 +3623,7 @@ void dcn10_calc_vupdate_position(
+ {
+ const struct dc_crtc_timing *dc_crtc_timing = &pipe_ctx->stream->timing;
+ int vline_int_offset_from_vupdate =
+- pipe_ctx->stream->periodic_interrupt0.lines_offset;
++ pipe_ctx->stream->periodic_interrupt.lines_offset;
+ int vupdate_offset_from_vsync = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
+ int start_position;
+
+@@ -3648,18 +3648,10 @@ void dcn10_calc_vupdate_position(
+ static void dcn10_cal_vline_position(
+ struct dc *dc,
+ struct pipe_ctx *pipe_ctx,
+- enum vline_select vline,
+ uint32_t *start_line,
+ uint32_t *end_line)
+ {
+- enum vertical_interrupt_ref_point ref_point = INVALID_POINT;
+-
+- if (vline == VLINE0)
+- ref_point = pipe_ctx->stream->periodic_interrupt0.ref_point;
+- else if (vline == VLINE1)
+- ref_point = pipe_ctx->stream->periodic_interrupt1.ref_point;
+-
+- switch (ref_point) {
++ switch (pipe_ctx->stream->periodic_interrupt.ref_point) {
+ case START_V_UPDATE:
+ dcn10_calc_vupdate_position(
+ dc,
+@@ -3668,7 +3660,9 @@ static void dcn10_cal_vline_position(
+ end_line);
+ break;
+ case START_V_SYNC:
+- // Suppose to do nothing because vsync is 0;
++ // vsync is line 0 so start_line is just the requested line offset
++ *start_line = pipe_ctx->stream->periodic_interrupt.lines_offset;
++ *end_line = *start_line + 2;
+ break;
+ default:
+ ASSERT(0);
+@@ -3678,24 +3672,15 @@ static void dcn10_cal_vline_position(
+
+ void dcn10_setup_periodic_interrupt(
+ struct dc *dc,
+- struct pipe_ctx *pipe_ctx,
+- enum vline_select vline)
++ struct pipe_ctx *pipe_ctx)
+ {
+ struct timing_generator *tg = pipe_ctx->stream_res.tg;
++ uint32_t start_line = 0;
++ uint32_t end_line = 0;
+
+- if (vline == VLINE0) {
+- uint32_t start_line = 0;
+- uint32_t end_line = 0;
++ dcn10_cal_vline_position(dc, pipe_ctx, &start_line, &end_line);
+
+- dcn10_cal_vline_position(dc, pipe_ctx, vline, &start_line, &end_line);
+-
+- tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line);
+-
+- } else if (vline == VLINE1) {
+- pipe_ctx->stream_res.tg->funcs->setup_vertical_interrupt1(
+- tg,
+- pipe_ctx->stream->periodic_interrupt1.lines_offset);
+- }
++ tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line);
+ }
+
+ void dcn10_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+index 9ae07c77fdc01..0ef7bf7ddb75e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+@@ -175,8 +175,7 @@ void dcn10_set_cursor_attribute(struct pipe_ctx *pipe_ctx);
+ void dcn10_set_cursor_sdr_white_level(struct pipe_ctx *pipe_ctx);
+ void dcn10_setup_periodic_interrupt(
+ struct dc *dc,
+- struct pipe_ctx *pipe_ctx,
+- enum vline_select vline);
++ struct pipe_ctx *pipe_ctx);
+ enum dc_status dcn10_set_clock(struct dc *dc,
+ enum dc_clock_type clock_type,
+ uint32_t clk_khz,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
+index 23621ff08c905..52fb2bf3d5781 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
+@@ -150,9 +150,9 @@ static void dcn31_hpo_dp_stream_enc_dp_blank(
+ * 10us*5000=50ms. This covers 41.7ms of minimum 24 Hz mode +
+ * a little more because we may not trust delay accuracy.
+ */
+- //REG_WAIT(DP_SYM32_ENC_VID_STREAM_CONTROL,
+- // VID_STREAM_STATUS, 0,
+- // 10, 5000);
++ REG_WAIT(DP_SYM32_ENC_VID_STREAM_CONTROL,
++ VID_STREAM_STATUS, 0,
++ 10, 5000);
+
+ /* Disable SDP tranmission */
+ REG_UPDATE(DP_SYM32_ENC_SDP_CONTROL,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/calcs/bw_fixed.c b/drivers/gpu/drm/amd/display/dc/dml/calcs/bw_fixed.c
+index 6ca288fb5fb9e..2d46bc527b218 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/calcs/bw_fixed.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/calcs/bw_fixed.c
+@@ -26,12 +26,12 @@
+ #include "bw_fixed.h"
+
+
+-#define MIN_I64 \
+- (int64_t)(-(1LL << 63))
+-
+ #define MAX_I64 \
+ (int64_t)((1ULL << 63) - 1)
+
++#define MIN_I64 \
++ (-MAX_I64 - 1)
++
+ #define FRACTIONAL_PART_MASK \
+ ((1ULL << BW_FIXED_BITS_PER_FRACTIONAL_PART) - 1)
+
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+index 05053f3b4ab7b..21a9eedec0928 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+@@ -32,11 +32,6 @@
+ #include "inc/hw/link_encoder.h"
+ #include "core_status.h"
+
+-enum vline_select {
+- VLINE0,
+- VLINE1
+-};
+-
+ struct pipe_ctx;
+ struct dc_state;
+ struct dc_stream_status;
+@@ -116,8 +111,7 @@ struct hw_sequencer_funcs {
+ int group_index, int group_size,
+ struct pipe_ctx *grouped_pipes[]);
+ void (*setup_periodic_interrupt)(struct dc *dc,
+- struct pipe_ctx *pipe_ctx,
+- enum vline_select vline);
++ struct pipe_ctx *pipe_ctx);
+ void (*set_drr)(struct pipe_ctx **pipe_ctx, int num_pipes,
+ struct dc_crtc_timing_adjust adjust);
+ void (*set_static_screen_control)(struct pipe_ctx **pipe_ctx,
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+index 59172acb97380..292f533d8cf0d 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+@@ -235,7 +235,7 @@ void komeda_crtc_handle_event(struct komeda_crtc *kcrtc,
+ crtc->state->event = NULL;
+ drm_crtc_send_vblank_event(crtc, event);
+ } else {
+- DRM_WARN("CRTC[%d]: FLIP happen but no pending commit.\n",
++ DRM_WARN("CRTC[%d]: FLIP happened but no pending commit.\n",
+ drm_crtc_index(&kcrtc->base));
+ }
+ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+@@ -286,7 +286,7 @@ komeda_crtc_atomic_enable(struct drm_crtc *crtc,
+ komeda_crtc_do_flush(crtc, old);
+ }
+
+-static void
++void
+ komeda_crtc_flush_and_wait_for_flip_done(struct komeda_crtc *kcrtc,
+ struct completion *input_flip_done)
+ {
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+index 93b7f09b96ca9..327051bba5b68 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+@@ -69,6 +69,25 @@ static const struct drm_driver komeda_kms_driver = {
+ .minor = 1,
+ };
+
++static void komeda_kms_atomic_commit_hw_done(struct drm_atomic_state *state)
++{
++ struct drm_device *dev = state->dev;
++ struct komeda_kms_dev *kms = to_kdev(dev);
++ int i;
++
++ for (i = 0; i < kms->n_crtcs; i++) {
++ struct komeda_crtc *kcrtc = &kms->crtcs[i];
++
++ if (kcrtc->base.state->active) {
++ struct completion *flip_done = NULL;
++ if (kcrtc->base.state->event)
++ flip_done = kcrtc->base.state->event->base.completion;
++ komeda_crtc_flush_and_wait_for_flip_done(kcrtc, flip_done);
++ }
++ }
++ drm_atomic_helper_commit_hw_done(state);
++}
++
+ static void komeda_kms_commit_tail(struct drm_atomic_state *old_state)
+ {
+ struct drm_device *dev = old_state->dev;
+@@ -81,7 +100,7 @@ static void komeda_kms_commit_tail(struct drm_atomic_state *old_state)
+
+ drm_atomic_helper_commit_modeset_enables(dev, old_state);
+
+- drm_atomic_helper_commit_hw_done(old_state);
++ komeda_kms_atomic_commit_hw_done(old_state);
+
+ drm_atomic_helper_wait_for_flip_done(dev, old_state);
+
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_kms.h b/drivers/gpu/drm/arm/display/komeda/komeda_kms.h
+index 456f3c4357193..bf6e8fba50613 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_kms.h
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_kms.h
+@@ -182,6 +182,8 @@ void komeda_kms_cleanup_private_objs(struct komeda_kms_dev *kms);
+
+ void komeda_crtc_handle_event(struct komeda_crtc *kcrtc,
+ struct komeda_events *evts);
++void komeda_crtc_flush_and_wait_for_flip_done(struct komeda_crtc *kcrtc,
++ struct completion *input_flip_done);
+
+ struct komeda_kms_dev *komeda_kms_attach(struct komeda_dev *mdev);
+ void komeda_kms_detach(struct komeda_kms_dev *kms);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index a031a0cd1f181..94de73cbeb2dd 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -394,10 +394,7 @@ void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1);
+ #else
+ static inline int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ {
+- unsigned int offset = adv7511->type == ADV7533 ?
+- ADV7533_REG_CEC_OFFSET : 0;
+-
+- regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset,
++ regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
+ ADV7511_CEC_CTRL_POWER_DOWN);
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+index 0b266f28f150f..99964f5a5457b 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+@@ -359,7 +359,7 @@ int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ goto err_cec_alloc;
+ }
+
+- regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset, 0);
++ regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL, 0);
+ /* cec soft reset */
+ regmap_write(adv7511->regmap_cec,
+ ADV7511_REG_CEC_SOFT_RESET + offset, 0x01);
+@@ -386,7 +386,7 @@ err_cec_alloc:
+ dev_info(dev, "Initializing CEC failed with error %d, disabling CEC\n",
+ ret);
+ err_cec_parse_dt:
+- regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL + offset,
++ regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
+ ADV7511_CEC_CTRL_POWER_DOWN);
+ return ret == -EPROBE_DEFER ? ret : 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 38bf28720f3a2..6031bdd923420 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1340,9 +1340,6 @@ static int adv7511_remove(struct i2c_client *i2c)
+ {
+ struct adv7511 *adv7511 = i2c_get_clientdata(i2c);
+
+- i2c_unregister_device(adv7511->i2c_cec);
+- clk_disable_unprepare(adv7511->cec_clk);
+-
+ adv7511_uninit_regulators(adv7511);
+
+ drm_bridge_remove(&adv7511->bridge);
+@@ -1350,6 +1347,8 @@ static int adv7511_remove(struct i2c_client *i2c)
+ adv7511_audio_exit(adv7511);
+
+ cec_unregister_adapter(adv7511->cec_adap);
++ i2c_unregister_device(adv7511->i2c_cec);
++ clk_disable_unprepare(adv7511->cec_clk);
+
+ i2c_unregister_device(adv7511->i2c_packet);
+ i2c_unregister_device(adv7511->i2c_edid);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 4b673c4792d77..a09d1a39ab0ae 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -2954,6 +2954,9 @@ static void it6505_bridge_atomic_enable(struct drm_bridge *bridge,
+
+ it6505_int_mask_enable(it6505);
+ it6505_video_reset(it6505);
++
++ it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
++ DP_SET_POWER_D0);
+ }
+
+ static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
+@@ -2965,9 +2968,9 @@ static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
+ DRM_DEV_DEBUG_DRIVER(dev, "start");
+
+ if (it6505->powered) {
+- it6505_video_disable(it6505);
+ it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
+ DP_SET_POWER_D3);
++ it6505_video_disable(it6505);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index c0b182d1374e4..7f688ebd36ebc 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -807,13 +807,14 @@ static int lt9611_connector_init(struct drm_bridge *bridge, struct lt9611 *lt961
+
+ drm_connector_helper_add(<9611->connector,
+ <9611_bridge_connector_helper_funcs);
+- drm_connector_attach_encoder(<9611->connector, bridge->encoder);
+
+ if (!bridge->encoder) {
+ DRM_ERROR("Parent encoder object not found");
+ return -ENODEV;
+ }
+
++ drm_connector_attach_encoder(<9611->connector, bridge->encoder);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index cce98bf2a4e73..72248a565579e 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -296,7 +296,9 @@ static void ge_b850v3_lvds_remove(void)
+ * This check is to avoid both the drivers
+ * removing the bridge in their remove() function
+ */
+- if (!ge_b850v3_lvds_ptr)
++ if (!ge_b850v3_lvds_ptr ||
++ !ge_b850v3_lvds_ptr->stdp2690_i2c ||
++ !ge_b850v3_lvds_ptr->stdp4028_i2c)
+ goto out;
+
+ drm_bridge_remove(&ge_b850v3_lvds_ptr->bridge);
+diff --git a/drivers/gpu/drm/bridge/parade-ps8640.c b/drivers/gpu/drm/bridge/parade-ps8640.c
+index edb939b14c04e..38dcc606b4992 100644
+--- a/drivers/gpu/drm/bridge/parade-ps8640.c
++++ b/drivers/gpu/drm/bridge/parade-ps8640.c
+@@ -596,8 +596,8 @@ static int ps8640_probe(struct i2c_client *client)
+ if (!ps_bridge)
+ return -ENOMEM;
+
+- ps_bridge->supplies[0].supply = "vdd33";
+- ps_bridge->supplies[1].supply = "vdd12";
++ ps_bridge->supplies[0].supply = "vdd12";
++ ps_bridge->supplies[1].supply = "vdd33";
+ ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ps_bridge->supplies),
+ ps_bridge->supplies);
+ if (ret)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 3e1be9894ed17..0552e9a3c8380 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -3095,6 +3095,7 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
+ {
+ struct dw_hdmi *hdmi = dev_id;
+ u8 intr_stat, phy_int_pol, phy_pol_mask, phy_stat;
++ enum drm_connector_status status = connector_status_unknown;
+
+ intr_stat = hdmi_readb(hdmi, HDMI_IH_PHY_STAT0);
+ phy_int_pol = hdmi_readb(hdmi, HDMI_PHY_POL0);
+@@ -3133,13 +3134,15 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
+ cec_notifier_phys_addr_invalidate(hdmi->cec_notifier);
+ mutex_unlock(&hdmi->cec_notifier_mutex);
+ }
+- }
+
+- if (intr_stat & HDMI_IH_PHY_STAT0_HPD) {
+- enum drm_connector_status status = phy_int_pol & HDMI_PHY_HPD
+- ? connector_status_connected
+- : connector_status_disconnected;
++ if (phy_stat & HDMI_PHY_HPD)
++ status = connector_status_connected;
++
++ if (!(phy_stat & (HDMI_PHY_HPD | HDMI_PHY_RX_SENSE)))
++ status = connector_status_disconnected;
++ }
+
++ if (status != connector_status_unknown) {
+ dev_dbg(hdmi->dev, "EVENT=%s\n",
+ status == connector_status_connected ?
+ "plugin" : "plugout");
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 16affb42086ad..c41c6c464b7fc 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1986,9 +1986,10 @@ static int tc_probe_bridge_endpoint(struct tc_data *tc)
+
+ for_each_endpoint_of_node(dev->of_node, node) {
+ of_graph_parse_endpoint(node, &endpoint);
+- if (endpoint.port > 2)
++ if (endpoint.port > 2) {
++ of_node_put(node);
+ return -EINVAL;
+-
++ }
+ mode |= BIT(endpoint.port);
+ }
+
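The of_node_put() added here matters because for_each_endpoint_of_node() holds a reference on the node it hands out and only drops it when the loop advances; returning early leaks that reference. The same fix pattern recurs in the omapdrm, pl111 and omap-ssi hunks further down. A minimal userspace sketch of the idea, with an invented refcounting iterator standing in for the OF helpers:

#include <stdio.h>

static int refcount;

struct node { int port; };

static struct node *next_node(struct node *nodes, int n, int i)
{
	if (i > 0)
		refcount--;		/* iterator drops the previous ref */
	if (i >= n)
		return NULL;
	refcount++;			/* ...and takes one on the next */
	return &nodes[i];
}

static void node_put(struct node *np)
{
	(void)np;
	refcount--;
}

int main(void)
{
	struct node nodes[] = { { 1 }, { 7 }, { 2 } };
	struct node *np;
	int i;

	for (i = 0; (np = next_node(nodes, 3, i)); i++) {
		if (np->port > 2) {
			node_put(np);	/* early exit: drop our ref */
			break;
		}
	}
	printf("leaked refs: %d\n", refcount);	/* 0 with the put */
	return 0;
}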
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index e7c22c2ca90c4..f27cd710bc86b 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -2636,17 +2636,8 @@ int drm_dp_set_phy_test_pattern(struct drm_dp_aux *aux,
+ struct drm_dp_phy_test_params *data, u8 dp_rev)
+ {
+ int err, i;
+- u8 link_config[2];
+ u8 test_pattern;
+
+- link_config[0] = drm_dp_link_rate_to_bw_code(data->link_rate);
+- link_config[1] = data->num_lanes;
+- if (data->enhanced_frame_cap)
+- link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
+- err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, link_config, 2);
+- if (err < 0)
+- return err;
+-
+ test_pattern = data->phy_pattern;
+ if (dp_rev < 0x12) {
+ test_pattern = (test_pattern << 2) &
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 18f2b6075b780..28dd741f7da1b 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4916,14 +4916,14 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ seq_printf(m, "dpcd: %*ph\n", DP_RECEIVER_CAP_SIZE, buf);
+
+ ret = drm_dp_dpcd_read(mgr->aux, DP_FAUX_CAP, buf, 2);
+- if (ret) {
++ if (ret != 2) {
+ seq_printf(m, "faux/mst read failed\n");
+ goto out;
+ }
+ seq_printf(m, "faux/mst: %*ph\n", 2, buf);
+
+ ret = drm_dp_dpcd_read(mgr->aux, DP_MSTM_CTRL, buf, 1);
+- if (ret) {
++ if (ret != 1) {
+ seq_printf(m, "mst ctrl read failed\n");
+ goto out;
+ }
+@@ -4931,7 +4931,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+
+ /* dump the standard OUI branch header */
+ ret = drm_dp_dpcd_read(mgr->aux, DP_BRANCH_OUI, buf, DP_BRANCH_OUI_HEADER_SIZE);
+- if (ret) {
++ if (ret != DP_BRANCH_OUI_HEADER_SIZE) {
+ seq_printf(m, "branch oui read failed\n");
+ goto out;
+ }
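These three checks change because drm_dp_dpcd_read() returns the number of bytes transferred on success (and a negative errno on failure), so "if (ret)" rejected every successful read. A small standalone sketch of that calling convention; dpcd_read() here is an invented stand-in, not the real helper:

#include <stdio.h>

/* stand-in with drm_dp_dpcd_read()-like semantics:
 * >= 0 means "this many bytes were read", < 0 means -errno */
static int dpcd_read(unsigned int offset, unsigned char *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		buf[i] = (unsigned char)(offset + i);
	return (int)len;
}

int main(void)
{
	unsigned char buf[2];
	int ret = dpcd_read(0x0020, buf, sizeof(buf));

	if (ret != (int)sizeof(buf)) {	/* not "if (ret)" */
		fprintf(stderr, "read failed or was short: %d\n", ret);
		return 1;
	}
	printf("read %d bytes\n", ret);
	return 0;
}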
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index c96847fc0ebcb..36ca4092c1ab1 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -823,8 +823,8 @@ static int select_bus_fmt_recursive(struct drm_bridge *first_bridge,
+ struct drm_connector_state *conn_state,
+ u32 out_bus_fmt)
+ {
++ unsigned int i, num_in_bus_fmts = 0;
+ struct drm_bridge_state *cur_state;
+- unsigned int num_in_bus_fmts, i;
+ struct drm_bridge *prev_bridge;
+ u32 *in_bus_fmts;
+ int ret;
+@@ -945,7 +945,7 @@ drm_atomic_bridge_chain_select_bus_fmts(struct drm_bridge *bridge,
+ struct drm_connector *conn = conn_state->connector;
+ struct drm_encoder *encoder = bridge->encoder;
+ struct drm_bridge_state *last_bridge_state;
+- unsigned int i, num_out_bus_fmts;
++ unsigned int i, num_out_bus_fmts = 0;
+ struct drm_bridge *last_bridge;
+ u32 *out_bus_fmts;
+ int ret = 0;
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 51fcf12980235..7f1097947731d 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -472,7 +472,13 @@ EXPORT_SYMBOL(drm_invalid_op);
+ */
+ static int drm_copy_field(char __user *buf, size_t *buf_len, const char *value)
+ {
+- int len;
++ size_t len;
++
++ /* don't attempt to copy a NULL pointer */
++ if (WARN_ONCE(!value, "BUG: the value to copy was not set!")) {
++ *buf_len = 0;
++ return 0;
++ }
+
+ /* don't overflow userbuf */
+ len = strlen(value);
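The drm_copy_field() fix switches len to size_t and refuses a NULL source before strlen() can dereference it. A rough userspace sketch of the same shape, with copy_out() standing in for copy_to_user() (assumed semantics: returns the number of bytes not copied):

#include <stdio.h>
#include <string.h>

static size_t copy_out(char *dst, const char *src, size_t n)
{
	memcpy(dst, src, n);
	return 0;
}

static int copy_field(char *buf, size_t *buf_len, const char *value)
{
	size_t full, len;

	if (!value) {		/* don't dereference a NULL pointer */
		*buf_len = 0;
		return 0;
	}

	full = strlen(value);
	len = full > *buf_len ? *buf_len : full;	/* don't overflow */
	*buf_len = full;	/* tell the caller the untruncated length */

	return copy_out(buf, value, len) ? -14 /* -EFAULT */ : 0;
}

int main(void)
{
	char out[8];
	size_t n = sizeof(out);

	if (copy_field(out, &n, "nouveau") == 0)
		printf("copied, full length was %zu\n", n);
	return 0;
}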
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index c40bde96cfdf0..c317ee9fa4458 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -346,6 +346,7 @@ static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
+ {
+ struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
+
++ mipi_dsi_detach(dsi);
+ mipi_dsi_device_unregister(dsi);
+
+ return 0;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index d4e0f2e855488..2d82f236d6699 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -103,6 +103,12 @@ static const struct drm_dmi_panel_orientation_data lcd800x1280_rightside_up = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd1080x1920_leftside_up = {
++ .width = 1080,
++ .height = 1920,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd1200x1920_rightside_up = {
+ .width = 1200,
+ .height = 1920,
+@@ -128,6 +134,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* Anbernic Win600 */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Anbernic"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Win600"),
++ },
++ .driver_data = (void *)&lcd720x1280_rightside_up,
+ }, { /* Asus T100HA */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+@@ -152,6 +164,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* AYA NEO AIR */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_BOARD_NAME, "AIR"),
++ },
++ .driver_data = (void *)&lcd1080x1920_leftside_up,
+ }, { /* AYA NEO NEXT */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 91caf4523b34d..d8d037ed5dd54 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -123,7 +123,7 @@ find_raw_section(const void *_bdb, enum bdb_block_id section_id)
+ * Offset from the start of BDB to the start of the
+ * block data (just past the block header).
+ */
+-static u32 block_offset(const void *bdb, enum bdb_block_id section_id)
++static u32 raw_block_offset(const void *bdb, enum bdb_block_id section_id)
+ {
+ const void *block;
+
+@@ -134,18 +134,6 @@ static u32 block_offset(const void *bdb, enum bdb_block_id section_id)
+ return block - bdb;
+ }
+
+-/* size of the block excluding the header */
+-static u32 block_size(const void *bdb, enum bdb_block_id section_id)
+-{
+- const void *block;
+-
+- block = find_raw_section(bdb, section_id);
+- if (!block)
+- return 0;
+-
+- return get_blocksize(block);
+-}
+-
+ struct bdb_block_entry {
+ struct list_head node;
+ enum bdb_block_id section_id;
+@@ -230,9 +218,14 @@ static bool validate_lfp_data_ptrs(const void *bdb,
+ {
+ int fp_timing_size, dvo_timing_size, panel_pnp_id_size, panel_name_size;
+ int data_block_size, lfp_data_size;
++ const void *data_block;
+ int i;
+
+- data_block_size = block_size(bdb, BDB_LVDS_LFP_DATA);
++ data_block = find_raw_section(bdb, BDB_LVDS_LFP_DATA);
++ if (!data_block)
++ return false;
++
++ data_block_size = get_blocksize(data_block);
+ if (data_block_size == 0)
+ return false;
+
+@@ -260,21 +253,6 @@ static bool validate_lfp_data_ptrs(const void *bdb,
+ if (16 * lfp_data_size > data_block_size)
+ return false;
+
+- /*
+- * Except for vlv/chv machines all real VBTs seem to have 6
+- * unaccounted bytes in the fp_timing table. And it doesn't
+- * appear to be a really intentional hole as the fp_timing
+- * 0xffff terminator is always within those 6 missing bytes.
+- */
+- if (fp_timing_size + dvo_timing_size + panel_pnp_id_size != lfp_data_size &&
+- fp_timing_size + 6 + dvo_timing_size + panel_pnp_id_size != lfp_data_size)
+- return false;
+-
+- if (ptrs->ptr[0].fp_timing.offset + fp_timing_size > ptrs->ptr[0].dvo_timing.offset ||
+- ptrs->ptr[0].dvo_timing.offset + dvo_timing_size != ptrs->ptr[0].panel_pnp_id.offset ||
+- ptrs->ptr[0].panel_pnp_id.offset + panel_pnp_id_size != lfp_data_size)
+- return false;
+-
+ /* make sure the table entries have uniform size */
+ for (i = 1; i < 16; i++) {
+ if (ptrs->ptr[i].fp_timing.table_size != fp_timing_size ||
+@@ -288,6 +266,23 @@ static bool validate_lfp_data_ptrs(const void *bdb,
+ return false;
+ }
+
++ /*
++ * Except for vlv/chv machines all real VBTs seem to have 6
++ * unaccounted bytes in the fp_timing table. And it doesn't
++ * appear to be a really intentional hole as the fp_timing
++ * 0xffff terminator is always within those 6 missing bytes.
++ */
++ if (fp_timing_size + 6 + dvo_timing_size + panel_pnp_id_size == lfp_data_size)
++ fp_timing_size += 6;
++
++ if (fp_timing_size + dvo_timing_size + panel_pnp_id_size != lfp_data_size)
++ return false;
++
++ if (ptrs->ptr[0].fp_timing.offset + fp_timing_size != ptrs->ptr[0].dvo_timing.offset ||
++ ptrs->ptr[0].dvo_timing.offset + dvo_timing_size != ptrs->ptr[0].panel_pnp_id.offset ||
++ ptrs->ptr[0].panel_pnp_id.offset + panel_pnp_id_size != lfp_data_size)
++ return false;
++
+ /* make sure the tables fit inside the data block */
+ for (i = 0; i < 16; i++) {
+ if (ptrs->ptr[i].fp_timing.offset + fp_timing_size > data_block_size ||
+@@ -299,6 +294,15 @@ static bool validate_lfp_data_ptrs(const void *bdb,
+ if (ptrs->panel_name.offset + 16 * panel_name_size > data_block_size)
+ return false;
+
++ /* make sure fp_timing terminators are present at expected locations */
++ for (i = 0; i < 16; i++) {
++ const u16 *t = data_block + ptrs->ptr[i].fp_timing.offset +
++ fp_timing_size - 2;
++
++ if (*t != 0xffff)
++ return false;
++ }
++
+ return true;
+ }
+
+@@ -309,7 +313,7 @@ static bool fixup_lfp_data_ptrs(const void *bdb, void *ptrs_block)
+ u32 offset;
+ int i;
+
+- offset = block_offset(bdb, BDB_LVDS_LFP_DATA);
++ offset = raw_block_offset(bdb, BDB_LVDS_LFP_DATA);
+
+ for (i = 0; i < 16; i++) {
+ if (ptrs->ptr[i].fp_timing.offset < offset ||
+@@ -332,18 +336,6 @@ static bool fixup_lfp_data_ptrs(const void *bdb, void *ptrs_block)
+ return validate_lfp_data_ptrs(bdb, ptrs);
+ }
+
+-static const void *find_fp_timing_terminator(const u8 *data, int size)
+-{
+- int i;
+-
+- for (i = 0; i < size - 1; i++) {
+- if (data[i] == 0xff && data[i+1] == 0xff)
+- return &data[i];
+- }
+-
+- return NULL;
+-}
+-
+ static int make_lfp_data_ptr(struct lvds_lfp_data_ptr_table *table,
+ int table_size, int total_size)
+ {
+@@ -367,11 +359,22 @@ static void next_lfp_data_ptr(struct lvds_lfp_data_ptr_table *next,
+ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
+ const void *bdb)
+ {
+- int i, size, table_size, block_size, offset;
+- const void *t0, *t1, *block;
++ int i, size, table_size, block_size, offset, fp_timing_size;
+ struct bdb_lvds_lfp_data_ptrs *ptrs;
++ const void *block;
+ void *ptrs_block;
+
++ /*
++ * The hardcoded fp_timing_size is only valid for
++ * modernish VBTs. All older VBTs definitely should
++ * include block 41 and thus we don't need to
++ * generate one.
++ */
++ if (i915->vbt.version < 155)
++ return NULL;
++
++ fp_timing_size = 38;
++
+ block = find_raw_section(bdb, BDB_LVDS_LFP_DATA);
+ if (!block)
+ return NULL;
+@@ -380,17 +383,8 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
+
+ block_size = get_blocksize(block);
+
+- size = block_size;
+- t0 = find_fp_timing_terminator(block, size);
+- if (!t0)
+- return NULL;
+-
+- size -= t0 - block - 2;
+- t1 = find_fp_timing_terminator(t0 + 2, size);
+- if (!t1)
+- return NULL;
+-
+- size = t1 - t0;
++ size = fp_timing_size + sizeof(struct lvds_dvo_timing) +
++ sizeof(struct lvds_pnp_id);
+ if (size * 16 > block_size)
+ return NULL;
+
+@@ -408,7 +402,7 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
+ table_size = sizeof(struct lvds_dvo_timing);
+ size = make_lfp_data_ptr(&ptrs->ptr[0].dvo_timing, table_size, size);
+
+- table_size = t0 - block + 2;
++ table_size = fp_timing_size;
+ size = make_lfp_data_ptr(&ptrs->ptr[0].fp_timing, table_size, size);
+
+ if (ptrs->ptr[0].fp_timing.table_size)
+@@ -423,14 +417,14 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
+ return NULL;
+ }
+
+- size = t1 - t0;
++ size = fp_timing_size + sizeof(struct lvds_dvo_timing) +
++ sizeof(struct lvds_pnp_id);
+ for (i = 1; i < 16; i++) {
+ next_lfp_data_ptr(&ptrs->ptr[i].fp_timing, &ptrs->ptr[i-1].fp_timing, size);
+ next_lfp_data_ptr(&ptrs->ptr[i].dvo_timing, &ptrs->ptr[i-1].dvo_timing, size);
+ next_lfp_data_ptr(&ptrs->ptr[i].panel_pnp_id, &ptrs->ptr[i-1].panel_pnp_id, size);
+ }
+
+- size = t1 - t0;
+ table_size = sizeof(struct lvds_lfp_panel_name);
+
+ if (16 * (size + table_size) <= block_size) {
+diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+index 1bb766c79dcbe..5aaacc53fa4ca 100644
+--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+@@ -247,6 +247,7 @@ err_scratch1:
+ i915_gem_object_put(vm->scratch[1]);
+ err_scratch0:
+ i915_gem_object_put(vm->scratch[0]);
++ vm->scratch[0] = NULL;
+ return ret;
+ }
+
+@@ -268,9 +269,10 @@ static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
+ gen6_ppgtt_free_pd(ppgtt);
+ free_scratch(vm);
+
+- mutex_destroy(&ppgtt->flush);
++ if (ppgtt->base.pd)
++ free_pd(&ppgtt->base.vm, ppgtt->base.pd);
+
+- free_pd(&ppgtt->base.vm, ppgtt->base.pd);
++ mutex_destroy(&ppgtt->flush);
+ }
+
+ static void pd_vma_bind(struct i915_address_space *vm,
+@@ -449,19 +451,17 @@ struct i915_ppgtt *gen6_ppgtt_create(struct intel_gt *gt)
+
+ err = gen6_ppgtt_init_scratch(ppgtt);
+ if (err)
+- goto err_free;
++ goto err_put;
+
+ ppgtt->base.pd = gen6_alloc_top_pd(ppgtt);
+ if (IS_ERR(ppgtt->base.pd)) {
+ err = PTR_ERR(ppgtt->base.pd);
+- goto err_scratch;
++ goto err_put;
+ }
+
+ return &ppgtt->base;
+
+-err_scratch:
+- free_scratch(&ppgtt->base.vm);
+-err_free:
+- kfree(ppgtt);
++err_put:
++ i915_vm_put(&ppgtt->base.vm);
+ return ERR_PTR(err);
+ }
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index c7bd5d71b03e5..2128b7a72a257 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -196,7 +196,10 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
+ if (intel_vgpu_active(vm->i915))
+ gen8_ppgtt_notify_vgt(ppgtt, false);
+
+- __gen8_ppgtt_cleanup(vm, ppgtt->pd, gen8_pd_top_count(vm), vm->top);
++ if (ppgtt->pd)
++ __gen8_ppgtt_cleanup(vm, ppgtt->pd,
++ gen8_pd_top_count(vm), vm->top);
++
+ free_scratch(vm);
+ }
+
+@@ -803,8 +806,10 @@ static int gen8_init_scratch(struct i915_address_space *vm)
+ struct drm_i915_gem_object *obj;
+
+ obj = vm->alloc_pt_dma(vm, I915_GTT_PAGE_SIZE_4K);
+- if (IS_ERR(obj))
++ if (IS_ERR(obj)) {
++ ret = PTR_ERR(obj);
+ goto free_scratch;
++ }
+
+ ret = map_pt_dma(vm, obj);
+ if (ret) {
+@@ -823,7 +828,8 @@ static int gen8_init_scratch(struct i915_address_space *vm)
+ free_scratch:
+ while (i--)
+ i915_gem_object_put(vm->scratch[i]);
+- return -ENOMEM;
++ vm->scratch[0] = NULL;
++ return ret;
+ }
+
+ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
+@@ -901,6 +907,7 @@ err_pd:
+ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
+ unsigned long lmem_pt_obj_flags)
+ {
++ struct i915_page_directory *pd;
+ struct i915_ppgtt *ppgtt;
+ int err;
+
+@@ -946,21 +953,7 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
+ ppgtt->vm.alloc_scratch_dma = alloc_pt_dma;
+ }
+
+- err = gen8_init_scratch(&ppgtt->vm);
+- if (err)
+- goto err_free;
+-
+- ppgtt->pd = gen8_alloc_top_pd(&ppgtt->vm);
+- if (IS_ERR(ppgtt->pd)) {
+- err = PTR_ERR(ppgtt->pd);
+- goto err_free_scratch;
+- }
+-
+- if (!i915_vm_is_4lvl(&ppgtt->vm)) {
+- err = gen8_preallocate_top_level_pdp(ppgtt);
+- if (err)
+- goto err_free_pd;
+- }
++ ppgtt->vm.pte_encode = gen8_pte_encode;
+
+ ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND;
+ ppgtt->vm.insert_entries = gen8_ppgtt_insert;
+@@ -971,22 +964,31 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
+ ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc;
+ ppgtt->vm.clear_range = gen8_ppgtt_clear;
+ ppgtt->vm.foreach = gen8_ppgtt_foreach;
++ ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
+
+- ppgtt->vm.pte_encode = gen8_pte_encode;
++ err = gen8_init_scratch(&ppgtt->vm);
++ if (err)
++ goto err_put;
++
++ pd = gen8_alloc_top_pd(&ppgtt->vm);
++ if (IS_ERR(pd)) {
++ err = PTR_ERR(pd);
++ goto err_put;
++ }
++ ppgtt->pd = pd;
++
++ if (!i915_vm_is_4lvl(&ppgtt->vm)) {
++ err = gen8_preallocate_top_level_pdp(ppgtt);
++ if (err)
++ goto err_put;
++ }
+
+ if (intel_vgpu_active(gt->i915))
+ gen8_ppgtt_notify_vgt(ppgtt, true);
+
+- ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
+-
+ return ppgtt;
+
+-err_free_pd:
+- __gen8_ppgtt_cleanup(&ppgtt->vm, ppgtt->pd,
+- gen8_pd_top_count(&ppgtt->vm), ppgtt->vm.top);
+-err_free_scratch:
+- free_scratch(&ppgtt->vm);
+-err_free:
+- kfree(ppgtt);
++err_put:
++ i915_vm_put(&ppgtt->vm);
+ return ERR_PTR(err);
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
+index b67831833c9a3..2eaeba14319e9 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
+@@ -405,6 +405,9 @@ void free_scratch(struct i915_address_space *vm)
+ {
+ int i;
+
++ if (!vm->scratch[0])
++ return;
++
+ for (i = 0; i <= vm->top; i++)
+ i915_gem_object_put(vm->scratch[i]);
+ }
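Taken together, the gen6/gen8 hunks route every error path through i915_vm_put() and make the cleanup callbacks tolerate a partially built VM (pd may be NULL, scratch[0] is cleared on failure, free_scratch() bails out early). A compact sketch of that single-teardown-path idiom, with invented names and plain malloc/free in place of the GEM object machinery:

#include <stdlib.h>

struct vm {
	void *scratch;
	void *pd;
};

static void vm_destroy(struct vm *vm)
{
	/* each step tolerates construction not having got that far */
	free(vm->pd);		/* free(NULL) is a no-op */
	free(vm->scratch);
	free(vm);
}

static struct vm *vm_create(int fail_at)
{
	struct vm *vm = calloc(1, sizeof(*vm));

	if (!vm)
		return NULL;

	vm->scratch = (fail_at > 1) ? malloc(64) : NULL;
	if (!vm->scratch)
		goto err_put;

	vm->pd = (fail_at > 2) ? malloc(64) : NULL;
	if (!vm->pd)
		goto err_put;

	return vm;

err_put:			/* one exit label, no per-step unwinding */
	vm_destroy(vm);
	return NULL;
}

int main(void)
{
	struct vm *vm = vm_create(3);

	if (vm)
		vm_destroy(vm);
	return 0;
}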
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 7d5803f2343a9..c2571d2170d91 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -5307,10 +5307,22 @@ skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
+ modifier == I915_FORMAT_MOD_4_TILED ||
+ modifier == I915_FORMAT_MOD_Yf_TILED ||
+ modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
+- modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
++ modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_MC_CCS ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC;
+ wp->x_tiled = modifier == I915_FORMAT_MOD_X_TILED;
+ wp->rc_surface = modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
+- modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
++ modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS ||
++ modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_MC_CCS ||
++ modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC;
+ wp->is_planar = intel_format_info_is_yuv_semiplanar(format, modifier);
+
+ wp->width = width;
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index bd4ca11d3ff53..86b90d0f5780a 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -388,10 +388,14 @@ static void meson_drv_unbind(struct device *dev)
+ drm_dev_unregister(drm);
+ drm_kms_helper_poll_fini(drm);
+ drm_atomic_helper_shutdown(drm);
+- component_unbind_all(dev, drm);
+ free_irq(priv->vsync_irq, drm);
+ drm_dev_put(drm);
+
++ meson_encoder_hdmi_remove(priv);
++ meson_encoder_cvbs_remove(priv);
++
++ component_unbind_all(dev, drm);
++
+ if (priv->afbcd.ops)
+ priv->afbcd.ops->exit(priv);
+ }
+@@ -493,6 +497,13 @@ static int meson_drv_probe(struct platform_device *pdev)
+ return 0;
+ };
+
++static int meson_drv_remove(struct platform_device *pdev)
++{
++ component_master_del(&pdev->dev, &meson_drv_master_ops);
++
++ return 0;
++}
++
+ static struct meson_drm_match_data meson_drm_gxbb_data = {
+ .compat = VPU_COMPATIBLE_GXBB,
+ };
+@@ -530,6 +541,7 @@ static const struct dev_pm_ops meson_drv_pm_ops = {
+
+ static struct platform_driver meson_drm_platform_driver = {
+ .probe = meson_drv_probe,
++ .remove = meson_drv_remove,
+ .shutdown = meson_drv_shutdown,
+ .driver = {
+ .name = "meson-drm",
+diff --git a/drivers/gpu/drm/meson/meson_drv.h b/drivers/gpu/drm/meson/meson_drv.h
+index 177dac3ca3bea..c62ee358456fa 100644
+--- a/drivers/gpu/drm/meson/meson_drv.h
++++ b/drivers/gpu/drm/meson/meson_drv.h
+@@ -25,6 +25,12 @@ enum vpu_compatible {
+ VPU_COMPATIBLE_G12A = 3,
+ };
+
++enum {
++ MESON_ENC_CVBS = 0,
++ MESON_ENC_HDMI,
++ MESON_ENC_LAST,
++};
++
+ struct meson_drm_match_data {
+ enum vpu_compatible compat;
+ struct meson_afbcd_ops *afbcd_ops;
+@@ -51,6 +57,7 @@ struct meson_drm {
+ struct drm_crtc *crtc;
+ struct drm_plane *primary_plane;
+ struct drm_plane *overlay_plane;
++ void *encoders[MESON_ENC_LAST];
+
+ const struct meson_drm_soc_limits *limits;
+
+diff --git a/drivers/gpu/drm/meson/meson_encoder_cvbs.c b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+index 8110a6e39320f..5675bc2a92cf8 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_cvbs.c
++++ b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+@@ -281,5 +281,18 @@ int meson_encoder_cvbs_init(struct meson_drm *priv)
+ }
+ drm_connector_attach_encoder(connector, &meson_encoder_cvbs->encoder);
+
++ priv->encoders[MESON_ENC_CVBS] = meson_encoder_cvbs;
++
+ return 0;
+ }
++
++void meson_encoder_cvbs_remove(struct meson_drm *priv)
++{
++ struct meson_encoder_cvbs *meson_encoder_cvbs;
++
++ if (priv->encoders[MESON_ENC_CVBS]) {
++ meson_encoder_cvbs = priv->encoders[MESON_ENC_CVBS];
++ drm_bridge_remove(&meson_encoder_cvbs->bridge);
++ drm_bridge_remove(meson_encoder_cvbs->next_bridge);
++ }
++}
+diff --git a/drivers/gpu/drm/meson/meson_encoder_cvbs.h b/drivers/gpu/drm/meson/meson_encoder_cvbs.h
+index 61d9d183ce7fb..09710fec3c660 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_cvbs.h
++++ b/drivers/gpu/drm/meson/meson_encoder_cvbs.h
+@@ -25,5 +25,6 @@ struct meson_cvbs_mode {
+ extern struct meson_cvbs_mode meson_cvbs_modes[MESON_CVBS_MODES_COUNT];
+
+ int meson_encoder_cvbs_init(struct meson_drm *priv);
++void meson_encoder_cvbs_remove(struct meson_drm *priv);
+
+ #endif /* __MESON_VENC_CVBS_H */
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index a7692584487cc..af6025037ecc0 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -446,6 +446,8 @@ int meson_encoder_hdmi_init(struct meson_drm *priv)
+ meson_encoder_hdmi->cec_notifier = notifier;
+ }
+
++ priv->encoders[MESON_ENC_HDMI] = meson_encoder_hdmi;
++
+ dev_dbg(priv->dev, "HDMI encoder initialized\n");
+
+ return 0;
+@@ -454,3 +456,14 @@ err_put_node:
+ of_node_put(remote);
+ return ret;
+ }
++
++void meson_encoder_hdmi_remove(struct meson_drm *priv)
++{
++ struct meson_encoder_hdmi *meson_encoder_hdmi;
++
++ if (priv->encoders[MESON_ENC_HDMI]) {
++ meson_encoder_hdmi = priv->encoders[MESON_ENC_HDMI];
++ drm_bridge_remove(&meson_encoder_hdmi->bridge);
++ drm_bridge_remove(meson_encoder_hdmi->next_bridge);
++ }
++}
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.h b/drivers/gpu/drm/meson/meson_encoder_hdmi.h
+index ed19494f09563..a6cd38eb5f71c 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.h
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.h
+@@ -8,5 +8,6 @@
+ #define __MESON_ENCODER_HDMI_H
+
+ int meson_encoder_hdmi_init(struct meson_drm *priv);
++void meson_encoder_hdmi_remove(struct meson_drm *priv);
+
+ #endif /* __MESON_ENCODER_HDMI_H */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index e23e2552e8020..8902d3615ca93 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -383,12 +383,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms *dpu_kms)
+ struct icc_path *path1;
+ struct drm_device *dev = dpu_kms->dev;
+ struct device *dpu_dev = dev->dev;
+- struct device *mdss_dev = dpu_dev->parent;
+
+- /* Interconnects are a part of MDSS device tree binding, not the
+- * MDP/DPU device. */
+- path0 = of_icc_get(mdss_dev, "mdp0-mem");
+- path1 = of_icc_get(mdss_dev, "mdp1-mem");
++ path0 = msm_icc_get(dpu_dev, "mdp0-mem");
++ path1 = msm_icc_get(dpu_dev, "mdp1-mem");
+
+ if (IS_ERR_OR_NULL(path0))
+ return PTR_ERR_OR_ZERO(path0);
+@@ -828,12 +825,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ _dpu_kms_mmu_destroy(dpu_kms);
+
+ if (dpu_kms->catalog) {
+- for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+- u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+-
+- if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) {
+- dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
+- dpu_kms->hw_vbif[vbif_idx] = NULL;
++ for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
++ if (dpu_kms->hw_vbif[i]) {
++ dpu_hw_vbif_destroy(dpu_kms->hw_vbif[i]);
++ dpu_kms->hw_vbif[i] = NULL;
+ }
+ }
+ }
+@@ -1135,7 +1130,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+ u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+
+- dpu_kms->hw_vbif[i] = dpu_hw_vbif_init(vbif_idx,
++ dpu_kms->hw_vbif[vbif_idx] = dpu_hw_vbif_init(vbif_idx,
+ dpu_kms->vbif[vbif_idx], dpu_kms->catalog);
+ if (IS_ERR_OR_NULL(dpu_kms->hw_vbif[vbif_idx])) {
+ rc = PTR_ERR(dpu_kms->hw_vbif[vbif_idx]);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+index 21d20373eb8b3..a18fb649301c9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+@@ -11,6 +11,14 @@
+ #include "dpu_hw_vbif.h"
+ #include "dpu_trace.h"
+
++static struct dpu_hw_vbif *dpu_get_vbif(struct dpu_kms *dpu_kms, enum dpu_vbif vbif_idx)
++{
++ if (vbif_idx < ARRAY_SIZE(dpu_kms->hw_vbif))
++ return dpu_kms->hw_vbif[vbif_idx];
++
++ return NULL;
++}
++
+ /**
+ * _dpu_vbif_wait_for_xin_halt - wait for the xin to halt
+ * @vbif: Pointer to hardware vbif driver
+@@ -148,20 +156,15 @@ exit:
+ void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
+ struct dpu_vbif_set_ot_params *params)
+ {
+- struct dpu_hw_vbif *vbif = NULL;
++ struct dpu_hw_vbif *vbif;
+ struct dpu_hw_mdp *mdp;
+ bool forced_on = false;
+ u32 ot_lim;
+- int ret, i;
++ int ret;
+
+ mdp = dpu_kms->hw_mdp;
+
+- for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
+- if (dpu_kms->hw_vbif[i] &&
+- dpu_kms->hw_vbif[i]->idx == params->vbif_idx)
+- vbif = dpu_kms->hw_vbif[i];
+- }
+-
++ vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
+ if (!vbif || !mdp) {
+ DRM_DEBUG_ATOMIC("invalid arguments vbif %d mdp %d\n",
+ vbif != NULL, mdp != NULL);
+@@ -204,7 +207,7 @@ void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
+ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
+ struct dpu_vbif_set_qos_params *params)
+ {
+- struct dpu_hw_vbif *vbif = NULL;
++ struct dpu_hw_vbif *vbif;
+ struct dpu_hw_mdp *mdp;
+ bool forced_on = false;
+ const struct dpu_vbif_qos_tbl *qos_tbl;
+@@ -216,13 +219,7 @@ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
+ }
+ mdp = dpu_kms->hw_mdp;
+
+- for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
+- if (dpu_kms->hw_vbif[i] &&
+- dpu_kms->hw_vbif[i]->idx == params->vbif_idx) {
+- vbif = dpu_kms->hw_vbif[i];
+- break;
+- }
+- }
++ vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
+
+ if (!vbif || !vbif->cap) {
+ DPU_ERROR("invalid vbif %d\n", params->vbif_idx);
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index 3d5621a68f858..b0c372fef5d51 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -921,12 +921,9 @@ fail:
+
+ static int mdp5_setup_interconnect(struct platform_device *pdev)
+ {
+- /* Interconnects are a part of MDSS device tree binding, not the
+- * MDP5 device. */
+- struct device *mdss_dev = pdev->dev.parent;
+- struct icc_path *path0 = of_icc_get(mdss_dev, "mdp0-mem");
+- struct icc_path *path1 = of_icc_get(mdss_dev, "mdp1-mem");
+- struct icc_path *path_rot = of_icc_get(mdss_dev, "rotator-mem");
++ struct icc_path *path0 = msm_icc_get(&pdev->dev, "mdp0-mem");
++ struct icc_path *path1 = msm_icc_get(&pdev->dev, "mdp1-mem");
++ struct icc_path *path_rot = msm_icc_get(&pdev->dev, "rotator-mem");
+
+ if (IS_ERR(path0))
+ return PTR_ERR(path0);
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index 7257515871a9f..676279d0ca8d9 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -431,7 +431,7 @@ void dp_catalog_ctrl_config_msa(struct dp_catalog *dp_catalog,
+
+ if (rate == link_rate_hbr3)
+ pixel_div = 6;
+- else if (rate == 1620000 || rate == 270000)
++ else if (rate == 162000 || rate == 270000)
+ pixel_div = 2;
+ else if (rate == link_rate_hbr2)
+ pixel_div = 4;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 7c0314d6566af..c5f931b2574ce 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -1169,10 +1169,15 @@ void msm_drv_shutdown(struct platform_device *pdev)
+ struct msm_drm_private *priv = platform_get_drvdata(pdev);
+ struct drm_device *drm = priv ? priv->dev : NULL;
+
+- if (!priv || !priv->kms)
+- return;
+-
+- drm_atomic_helper_shutdown(drm);
++ /*
++ * Shutdown the hw if we're far enough along where things might be on.
++ * If we run this too early, we'll end up panicking in any variety of
++ * places. Since we don't register the drm device until late in
++ * msm_drm_init, drm_dev->registered is used as an indicator that the
++ * shutdown will be successful.
++ */
++ if (drm && drm->registered)
++ drm_atomic_helper_shutdown(drm);
+ }
+
+ static struct platform_driver msm_platform_driver = {
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 099a67d10c3a7..17e8b6571f6ff 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -442,6 +442,8 @@ void __iomem *msm_ioremap_size(struct platform_device *pdev, const char *name,
+ phys_addr_t *size);
+ void __iomem *msm_ioremap_quiet(struct platform_device *pdev, const char *name);
+
++struct icc_path *msm_icc_get(struct device *dev, const char *name);
++
+ #define msm_writel(data, addr) writel((data), (addr))
+ #define msm_readl(addr) readl((addr))
+
+diff --git a/drivers/gpu/drm/msm/msm_io_utils.c b/drivers/gpu/drm/msm/msm_io_utils.c
+index 7b504617833ad..d02cd29ce8299 100644
+--- a/drivers/gpu/drm/msm/msm_io_utils.c
++++ b/drivers/gpu/drm/msm/msm_io_utils.c
+@@ -5,6 +5,8 @@
+ * Author: Rob Clark <robdclark@gmail.com>
+ */
+
++#include <linux/interconnect.h>
++
+ #include "msm_drv.h"
+
+ /*
+@@ -124,3 +126,23 @@ void msm_hrtimer_work_init(struct msm_hrtimer_work *work,
+ work->worker = worker;
+ kthread_init_work(&work->work, fn);
+ }
++
++struct icc_path *msm_icc_get(struct device *dev, const char *name)
++{
++ struct device *mdss_dev = dev->parent;
++ struct icc_path *path;
++
++ path = of_icc_get(dev, name);
++ if (path)
++ return path;
++
++ /*
++ * If there are no interconnects attached to the corresponding device
++ * node, of_icc_get() will return NULL.
++ *
++ * If the MDP5/DPU device node doesn't have interconnects, lookup the
++ * path in the parent (MDSS) device.
++ */
++ return of_icc_get(mdss_dev, name);
++
++}
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index e29175e4b44ce..07a327ad5e2a8 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -281,8 +281,10 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain,
+ break;
+ }
+
+- if (WARN_ON(pi < 0))
++ if (WARN_ON(pi < 0)) {
++ kfree(nvbo);
+ return ERR_PTR(-EINVAL);
++ }
+
+ /* Disable compression if suitable settings couldn't be found. */
+ if (nvbo->comp && !vmm->page[pi].comp) {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index df83c4654e269..96be2ecb86d4d 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -503,7 +503,8 @@ nouveau_connector_set_encoder(struct drm_connector *connector,
+ connector->interlace_allowed =
+ nv_encoder->caps.dp_interlace;
+ else
+- connector->interlace_allowed = true;
++ connector->interlace_allowed =
++ drm->client.device.info.family < NV_DEVICE_INFO_V0_VOLTA;
+ connector->doublescan_allowed = true;
+ } else
+ if (nv_encoder->dcb->type == DCB_OUTPUT_LVDS ||
+diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
+index 347488685f745..9608121e49b7e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
++++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
+@@ -71,7 +71,6 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
+ ret = nouveau_bo_init(nvbo, size, align, NOUVEAU_GEM_DOMAIN_GART,
+ sg, robj);
+ if (ret) {
+- nouveau_bo_ref(NULL, &nvbo);
+ obj = ERR_PTR(ret);
+ goto unlock;
+ }
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 0399f3390a0ad..c4febb8619103 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1176,6 +1176,7 @@ static void __dss_uninit_ports(struct dss_device *dss, unsigned int num_ports)
+ default:
+ break;
+ }
++ of_node_put(port);
+ }
+ }
+
+@@ -1208,11 +1209,13 @@ static int dss_init_ports(struct dss_device *dss)
+ default:
+ break;
+ }
++ of_node_put(port);
+ }
+
+ return 0;
+
+ error:
++ of_node_put(port);
+ __dss_uninit_ports(dss, i);
+ return r;
+ }
+diff --git a/drivers/gpu/drm/pl111/pl111_versatile.c b/drivers/gpu/drm/pl111/pl111_versatile.c
+index bdd883f4f0da5..963a5d5e6987a 100644
+--- a/drivers/gpu/drm/pl111/pl111_versatile.c
++++ b/drivers/gpu/drm/pl111/pl111_versatile.c
+@@ -402,6 +402,7 @@ static int pl111_vexpress_clcd_init(struct device *dev, struct device_node *np,
+ if (of_device_is_compatible(child, "arm,pl111")) {
+ has_coretile_clcd = true;
+ ct_clcd = child;
++ of_node_put(child);
+ break;
+ }
+ if (of_device_is_compatible(child, "arm,hdlcd")) {
+diff --git a/drivers/gpu/drm/tiny/bochs.c b/drivers/gpu/drm/tiny/bochs.c
+index ed971c8bb4463..0cedb6f6f5591 100644
+--- a/drivers/gpu/drm/tiny/bochs.c
++++ b/drivers/gpu/drm/tiny/bochs.c
+@@ -306,6 +306,8 @@ static void bochs_hw_fini(struct drm_device *dev)
+ static void bochs_hw_blank(struct bochs_device *bochs, bool blank)
+ {
+ DRM_DEBUG_DRIVER("hw_blank %d\n", blank);
++ /* enable color bit (so VGA_IS1_RC access works) */
++ bochs_vga_writeb(bochs, VGA_MIS_W, VGA_MIS_COLOR);
+ /* discard ar_flip_flop */
+ (void)bochs_vga_readb(bochs, VGA_IS1_RC);
+ /* blank or unblank; we need only update index and set 0x20 */
+diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
+index e67c40a48fb46..b43e6ff06310f 100644
+--- a/drivers/gpu/drm/udl/udl_modeset.c
++++ b/drivers/gpu/drm/udl/udl_modeset.c
+@@ -382,9 +382,6 @@ udl_simple_display_pipe_enable(struct drm_simple_display_pipe *pipe,
+
+ udl_handle_damage(fb, &shadow_plane_state->data[0], 0, 0, fb->width, fb->height);
+
+- if (!crtc_state->mode_changed)
+- return;
+-
+ /* enable display */
+ udl_crtc_write_mode_to_hw(crtc);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_vec.c b/drivers/gpu/drm/vc4/vc4_vec.c
+index 11fc3d6f66b1e..4e2250b8fa23e 100644
+--- a/drivers/gpu/drm/vc4/vc4_vec.c
++++ b/drivers/gpu/drm/vc4/vc4_vec.c
+@@ -256,7 +256,7 @@ static void vc4_vec_ntsc_j_mode_set(struct vc4_vec *vec)
+ static const struct drm_display_mode ntsc_mode = {
+ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 13500,
+ 720, 720 + 14, 720 + 14 + 64, 720 + 14 + 64 + 60, 0,
+- 480, 480 + 3, 480 + 3 + 3, 480 + 3 + 3 + 16, 0,
++ 480, 480 + 7, 480 + 7 + 6, 525, 0,
+ DRM_MODE_FLAG_INTERLACE)
+ };
+
+@@ -278,7 +278,7 @@ static void vc4_vec_pal_m_mode_set(struct vc4_vec *vec)
+ static const struct drm_display_mode pal_mode = {
+ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 13500,
+ 720, 720 + 20, 720 + 20 + 64, 720 + 20 + 64 + 60, 0,
+- 576, 576 + 2, 576 + 2 + 3, 576 + 2 + 3 + 20, 0,
++ 576, 576 + 4, 576 + 4 + 6, 625, 0,
+ DRM_MODE_FLAG_INTERLACE)
+ };
+
+diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
+index f73352e7b8329..96e71813864a7 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_display.c
++++ b/drivers/gpu/drm/virtio/virtgpu_display.c
+@@ -348,6 +348,8 @@ int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)
+ vgdev->ddev->mode_config.max_width = XRES_MAX;
+ vgdev->ddev->mode_config.max_height = YRES_MAX;
+
++ vgdev->ddev->mode_config.fb_modifiers_not_supported = true;
++
+ for (i = 0 ; i < vgdev->num_scanouts; ++i)
+ vgdev_output_init(vgdev, i);
+
+diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
+index 580a788098361..7db48d17ee3a8 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
+@@ -228,8 +228,10 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
+
+ for (i = 0; i < objs->nents; ++i) {
+ ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
+- if (ret)
++ if (ret) {
++ virtio_gpu_array_unlock_resv(objs);
+ return ret;
++ }
+ }
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 9b2702116f93e..5d05093014ac3 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -47,7 +47,7 @@ static int virtio_gpu_fence_event_create(struct drm_device *dev,
+ struct virtio_gpu_fence_event *e = NULL;
+ int ret;
+
+- if (!(vfpriv->ring_idx_mask & (1 << ring_idx)))
++ if (!(vfpriv->ring_idx_mask & BIT_ULL(ring_idx)))
+ return 0;
+
+ e = kzalloc(sizeof(*e), GFP_KERNEL);
+@@ -168,7 +168,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
+ * array contains any fence from a foreign context.
+ */
+ ret = 0;
+- if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
++ if (!dma_fence_match_context(in_fence, fence_ctx + ring_idx))
+ ret = dma_fence_wait(in_fence, true);
+
+ dma_fence_put(in_fence);
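vfpriv->ring_idx_mask is 64 bits wide, and "1 << ring_idx" is an int shift that is undefined or truncated once ring_idx reaches 31, which is what BIT_ULL() avoids. A short demonstration in plain C (BIT_ULL re-declared locally to stay self-contained):

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(n)	(1ULL << (n))	/* same definition as the kernel's */

int main(void)
{
	uint64_t mask = 0;
	unsigned int ring_idx = 40;

	/* mask |= 1 << ring_idx;  -- int shift, broken once ring_idx >= 31 */
	mask |= BIT_ULL(ring_idx);	/* 64-bit shift, well-defined */

	printf("mask = 0x%016llx\n", (unsigned long long)mask);
	return 0;
}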
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index 1cc8f3fc8e4ba..75a159df0af66 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -170,6 +170,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
+ shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
+ if (IS_ERR(shmem->pages)) {
+ drm_gem_shmem_unpin(&bo->base);
++ shmem->pages = NULL;
+ return PTR_ERR(shmem->pages);
+ }
+
+@@ -248,6 +249,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
+
+ ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+ if (ret != 0) {
++ if (fence)
++ virtio_gpu_array_unlock_resv(objs);
+ virtio_gpu_array_put_free(objs);
+ virtio_gpu_free_object(&shmem_obj->base);
+ return ret;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index 6d3cc9e238a4a..7148f3813d8bd 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -266,14 +266,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ }
+
+ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
+- struct drm_plane_state *old_state)
++ struct drm_plane_state *state)
+ {
+ struct virtio_gpu_framebuffer *vgfb;
+
+- if (!plane->state->fb)
++ if (!state->fb)
+ return;
+
+- vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++ vgfb = to_virtio_gpu_framebuffer(state->fb);
+ if (vgfb->fence) {
+ dma_fence_put(&vgfb->fence->f);
+ vgfb->fence = NULL;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index 7c052efe88365..2edf31806b740 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -595,7 +595,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+ bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
+ struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
+
+- if (use_dma_api)
++ if (virtio_gpu_is_shmem(bo) && use_dma_api)
+ dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
+ shmem->pages, DMA_TO_DEVICE);
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 2aceac7856e21..089046fa21bea 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -1076,6 +1076,7 @@ int vmw_mksstat_add_ioctl(struct drm_device *dev, void *data,
+
+ if (desc_len < 0) {
+ atomic_set(&dev_priv->mksstat_user_pids[slot], 0);
++ __free_page(page);
+ return -EFAULT;
+ }
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 2e72922e36f56..91a4d3fc30e08 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1186,7 +1186,7 @@ static void mt_touch_report(struct hid_device *hid,
+ int contact_count = -1;
+
+ /* sticky fingers release in progress, abort */
+- if (test_and_set_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
++ if (test_and_set_bit_lock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
+ return;
+
+ scantime = *app->scantime;
+@@ -1267,7 +1267,7 @@ static void mt_touch_report(struct hid_device *hid,
+ del_timer(&td->release_timer);
+ }
+
+- clear_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
++ clear_bit_unlock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
+ }
+
+ static int mt_touch_input_configured(struct hid_device *hdev,
+@@ -1699,11 +1699,11 @@ static void mt_expired_timeout(struct timer_list *t)
+ * An input report came in just before we release the sticky fingers,
+ * it will take care of the sticky fingers.
+ */
+- if (test_and_set_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
++ if (test_and_set_bit_lock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags))
+ return;
+ if (test_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags))
+ mt_release_contacts(hdev);
+- clear_bit(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
++ clear_bit_unlock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags);
+ }
+
+ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
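Plain clear_bit() gives no ordering guarantees, while clear_bit_unlock() is a release; pairing it with test_and_set_bit_lock() (an acquire) makes MT_IO_FLAGS_RUNNING behave like a real lock, so work inside the "report in progress" window cannot be reordered past the flag updates. A userspace approximation with C11 atomics; this mimics the ordering, not the kernel bitops API:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_flag running = ATOMIC_FLAG_INIT;

static bool try_enter(void)
{
	/* like test_and_set_bit_lock(): true if already held */
	return atomic_flag_test_and_set_explicit(&running,
						 memory_order_acquire);
}

static void leave(void)
{
	/* like clear_bit_unlock(): release ordering on the way out */
	atomic_flag_clear_explicit(&running, memory_order_release);
}

int main(void)
{
	if (try_enter()) {
		puts("report already in progress, abort");
		return 0;
	}
	puts("processing report");
	leave();
	return 0;
}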
+diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
+index f33a03c96ba68..cce3248879525 100644
+--- a/drivers/hid/hid-nintendo.c
++++ b/drivers/hid/hid-nintendo.c
+@@ -761,12 +761,31 @@ static int joycon_read_stick_calibration(struct joycon_ctlr *ctlr, u16 cal_addr,
+ cal_y->max = cal_y->center + y_max_above;
+ cal_y->min = cal_y->center - y_min_below;
+
+- return 0;
++ /* check if calibration values are plausible */
++ if (cal_x->min >= cal_x->center || cal_x->center >= cal_x->max ||
++ cal_y->min >= cal_y->center || cal_y->center >= cal_y->max)
++ ret = -EINVAL;
++
++ return ret;
+ }
+
+ static const u16 DFLT_STICK_CAL_CEN = 2000;
+ static const u16 DFLT_STICK_CAL_MAX = 3500;
+ static const u16 DFLT_STICK_CAL_MIN = 500;
++static void joycon_use_default_calibration(struct hid_device *hdev,
++ struct joycon_stick_cal *cal_x,
++ struct joycon_stick_cal *cal_y,
++ const char *stick, int ret)
++{
++ hid_warn(hdev,
++ "Failed to read %s stick cal, using defaults; e=%d\n",
++ stick, ret);
++
++ cal_x->center = cal_y->center = DFLT_STICK_CAL_CEN;
++ cal_x->max = cal_y->max = DFLT_STICK_CAL_MAX;
++ cal_x->min = cal_y->min = DFLT_STICK_CAL_MIN;
++}
++
+ static int joycon_request_calibration(struct joycon_ctlr *ctlr)
+ {
+ u16 left_stick_addr = JC_CAL_FCT_DATA_LEFT_ADDR;
+@@ -794,38 +813,24 @@ static int joycon_request_calibration(struct joycon_ctlr *ctlr)
+ &ctlr->left_stick_cal_x,
+ &ctlr->left_stick_cal_y,
+ true);
+- if (ret) {
+- hid_warn(ctlr->hdev,
+- "Failed to read left stick cal, using dflts; e=%d\n",
+- ret);
+-
+- ctlr->left_stick_cal_x.center = DFLT_STICK_CAL_CEN;
+- ctlr->left_stick_cal_x.max = DFLT_STICK_CAL_MAX;
+- ctlr->left_stick_cal_x.min = DFLT_STICK_CAL_MIN;
+
+- ctlr->left_stick_cal_y.center = DFLT_STICK_CAL_CEN;
+- ctlr->left_stick_cal_y.max = DFLT_STICK_CAL_MAX;
+- ctlr->left_stick_cal_y.min = DFLT_STICK_CAL_MIN;
+- }
++ if (ret)
++ joycon_use_default_calibration(ctlr->hdev,
++ &ctlr->left_stick_cal_x,
++ &ctlr->left_stick_cal_y,
++ "left", ret);
+
+ /* read the right stick calibration data */
+ ret = joycon_read_stick_calibration(ctlr, right_stick_addr,
+ &ctlr->right_stick_cal_x,
+ &ctlr->right_stick_cal_y,
+ false);
+- if (ret) {
+- hid_warn(ctlr->hdev,
+- "Failed to read right stick cal, using dflts; e=%d\n",
+- ret);
+-
+- ctlr->right_stick_cal_x.center = DFLT_STICK_CAL_CEN;
+- ctlr->right_stick_cal_x.max = DFLT_STICK_CAL_MAX;
+- ctlr->right_stick_cal_x.min = DFLT_STICK_CAL_MIN;
+
+- ctlr->right_stick_cal_y.center = DFLT_STICK_CAL_CEN;
+- ctlr->right_stick_cal_y.max = DFLT_STICK_CAL_MAX;
+- ctlr->right_stick_cal_y.min = DFLT_STICK_CAL_MIN;
+- }
++ if (ret)
++ joycon_use_default_calibration(ctlr->hdev,
++ &ctlr->right_stick_cal_x,
++ &ctlr->right_stick_cal_y,
++ "right", ret);
+
+ hid_dbg(ctlr->hdev, "calibration:\n"
+ "l_x_c=%d l_x_max=%d l_x_min=%d\n"
+diff --git a/drivers/hid/hid-roccat.c b/drivers/hid/hid-roccat.c
+index 26373b82fe812..6da80e442fdd1 100644
+--- a/drivers/hid/hid-roccat.c
++++ b/drivers/hid/hid-roccat.c
+@@ -257,6 +257,8 @@ int roccat_report_event(int minor, u8 const *data)
+ if (!new_value)
+ return -ENOMEM;
+
++ mutex_lock(&device->cbuf_lock);
++
+ report = &device->cbuf[device->cbuf_end];
+
+ /* passing NULL is safe */
+@@ -276,6 +278,8 @@ int roccat_report_event(int minor, u8 const *data)
+ reader->cbuf_start = (reader->cbuf_start + 1) % ROCCAT_CBUF_SIZE;
+ }
+
++ mutex_unlock(&device->cbuf_lock);
++
+ wake_up_interruptible(&device->wait);
+ return 0;
+ }
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index c0fe66e50c58d..cf3315a408c8c 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -153,6 +153,7 @@ static int uclogic_input_configured(struct hid_device *hdev,
+ suffix = "Pad";
+ break;
+ case HID_DG_PEN:
++ case HID_DG_DIGITIZER:
+ suffix = "Pen";
+ break;
+ case HID_CP_CONSUMER_CONTROL:
+diff --git a/drivers/hsi/clients/ssi_protocol.c b/drivers/hsi/clients/ssi_protocol.c
+index 21f11a5b965b1..49ffd808d17ff 100644
+--- a/drivers/hsi/clients/ssi_protocol.c
++++ b/drivers/hsi/clients/ssi_protocol.c
+@@ -931,6 +931,7 @@ static int ssip_pn_open(struct net_device *dev)
+ if (err < 0) {
+ dev_err(&cl->device, "Register HSI port event failed (%d)\n",
+ err);
++ hsi_release_port(cl);
+ return err;
+ }
+ dev_dbg(&cl->device, "Configuring SSI port\n");
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index 44a3f5660c109..eb98201583185 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -524,6 +524,7 @@ static int ssi_probe(struct platform_device *pd)
+ if (!childpdev) {
+ err = -ENODEV;
+ dev_err(&pd->dev, "failed to create ssi controller port\n");
++ of_node_put(child);
+ goto out3;
+ }
+ }
+diff --git a/drivers/hsi/controllers/omap_ssi_port.c b/drivers/hsi/controllers/omap_ssi_port.c
+index a0cb5be246e1c..b9495b720f1bd 100644
+--- a/drivers/hsi/controllers/omap_ssi_port.c
++++ b/drivers/hsi/controllers/omap_ssi_port.c
+@@ -230,10 +230,10 @@ static int ssi_start_dma(struct hsi_msg *msg, int lch)
+ if (msg->ttype == HSI_MSG_READ) {
+ err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+ DMA_FROM_DEVICE);
+- if (err < 0) {
++ if (!err) {
+ dev_dbg(&ssi->device, "DMA map SG failed !\n");
+ pm_runtime_put_autosuspend(omap_port->pdev);
+- return err;
++ return -EIO;
+ }
+ csdp = SSI_DST_BURST_4x32_BIT | SSI_DST_MEMORY_PORT |
+ SSI_SRC_SINGLE_ACCESS0 | SSI_SRC_PERIPHERAL_PORT |
+@@ -247,10 +247,10 @@ static int ssi_start_dma(struct hsi_msg *msg, int lch)
+ } else {
+ err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+ DMA_TO_DEVICE);
+- if (err < 0) {
++ if (!err) {
+ dev_dbg(&ssi->device, "DMA map SG failed !\n");
+ pm_runtime_put_autosuspend(omap_port->pdev);
+- return err;
++ return -EIO;
+ }
+ csdp = SSI_SRC_BURST_4x32_BIT | SSI_SRC_MEMORY_PORT |
+ SSI_DST_SINGLE_ACCESS0 | SSI_DST_PERIPHERAL_PORT |
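dma_map_sg() returns the number of mapped entries and signals failure with 0, never a negative errno, so the old "if (err < 0)" could not fire and mapping failures were silently treated as success. A sketch of the convention with an invented map_sg() stand-in:

#include <stdio.h>

/* stand-in with dma_map_sg()-like semantics: 0 on failure,
 * otherwise the number of entries actually mapped */
static int map_sg(int nents, int fail)
{
	return fail ? 0 : nents;
}

int main(void)
{
	int err = map_sg(4, /* fail = */ 1);

	if (!err) {		/* 0 means failure, not "< 0" */
		fprintf(stderr, "DMA map SG failed!\n");
		return 1;	/* the driver returns -EIO here */
	}
	printf("mapped %d entries\n", err);
	return 0;
}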
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 1fe37418ff46c..f29ce49294daf 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -267,6 +267,7 @@ gsc_hwmon_get_devtree_pdata(struct device *dev)
+ pdata->nchannels = nchannels;
+
+ /* fan controller base address */
++ of_node_get(dev->parent->of_node);
+ fan = of_find_compatible_node(dev->parent->of_node, NULL, "gw,gsc-fan");
+ if (fan && of_property_read_u32(fan, "reg", &pdata->fan_base)) {
+ dev_err(dev, "fan node without base\n");
+diff --git a/drivers/hwmon/occ/p9_sbe.c b/drivers/hwmon/occ/p9_sbe.c
+index a91937e28e12b..775147f31cb10 100644
+--- a/drivers/hwmon/occ/p9_sbe.c
++++ b/drivers/hwmon/occ/p9_sbe.c
+@@ -14,6 +14,8 @@
+
+ #include "common.h"
+
++#define OCC_CHECKSUM_RETRIES 3
++
+ struct p9_sbe_occ {
+ struct occ occ;
+ bool sbe_error;
+@@ -81,18 +83,23 @@ done:
+ static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len,
+ void *resp, size_t resp_len)
+ {
++ size_t original_resp_len = resp_len;
+ struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ);
+- int rc;
++ int rc, i;
+
+- rc = fsi_occ_submit(ctx->sbe, cmd, len, resp, &resp_len);
+- if (rc < 0) {
++ for (i = 0; i < OCC_CHECKSUM_RETRIES; ++i) {
++ rc = fsi_occ_submit(ctx->sbe, cmd, len, resp, &resp_len);
++ if (rc >= 0)
++ break;
+ if (resp_len) {
+ if (p9_sbe_occ_save_ffdc(ctx, resp, resp_len))
+ sysfs_notify(&occ->bus_dev->kobj, NULL,
+ bin_attr_ffdc.attr.name);
++ return rc;
+ }
+-
+- return rc;
++ if (rc != -EBADE)
++ return rc;
++ resp_len = original_resp_len;
+ }
+
+ switch (((struct occ_response *)resp)->return_status) {
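The retry loop re-arms resp_len before every attempt because fsi_occ_submit() treats it as an in/out parameter, and only -EBADE (a checksum mismatch) is retried; other errors, or errors carrying FFDC data, still bail out at once. A generic sketch of that bounded-retry-with-reset shape, with an invented submit():

#include <stdio.h>

#define EBADE	52	/* invalid exchange, as in the kernel */
#define RETRIES	3

/* in/out length, like fsi_occ_submit(); fails twice, then succeeds */
static int submit(int attempt, int *len)
{
	if (attempt < 2) {
		*len = 0;		/* callee may clobber the length */
		return -EBADE;		/* pretend checksum mismatch */
	}
	*len = 8;
	return 0;
}

int main(void)
{
	const int full_len = 8;
	int len = 0, rc = -EBADE, attempts = 0;

	while (attempts < RETRIES) {
		len = full_len;		/* re-arm before every attempt */
		rc = submit(attempts++, &len);
		if (rc >= 0 || rc != -EBADE)
			break;		/* success, or not retryable */
	}
	printf("rc=%d len=%d after %d attempt(s)\n", rc, len, attempts);
	return 0;
}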
+diff --git a/drivers/hwmon/pmbus/mp2888.c b/drivers/hwmon/pmbus/mp2888.c
+index 8ecd4adfef40e..24e5194706cf6 100644
+--- a/drivers/hwmon/pmbus/mp2888.c
++++ b/drivers/hwmon/pmbus/mp2888.c
+@@ -34,7 +34,7 @@ struct mp2888_data {
+ int curr_sense_gain;
+ };
+
+-#define to_mp2888_data(x) container_of(x, struct mp2888_data, info)
++#define to_mp2888_data(x) container_of(x, struct mp2888_data, info)
+
+ static int mp2888_read_byte_data(struct i2c_client *client, int page, int reg)
+ {
+@@ -109,7 +109,7 @@ mp2888_read_phase(struct i2c_client *client, struct mp2888_data *data, int page,
+ * - Kcs is the DrMOS current sense gain of power stage, which is obtained from the
+ * register MP2888_MFR_VR_CONFIG1, bits 13-12 with the following selection of DrMOS
+ * (data->curr_sense_gain):
+- * 00b - 5µA/A, 01b - 8.5µA/A, 10b - 9.7µA/A, 11b - 10µA/A.
++ * 00b - 8.5µA/A, 01b - 9.7µA/A, 10b - 10µA/A, 11b - 5µA/A.
+ * - Rcs is the internal phase current sense resistor. This parameter depends on hardware
+ * assembly. By default it is set to 1kΩ. In case of different assembly, user should
+ * scale this parameter by dividing it by Rcs.
+@@ -118,10 +118,9 @@ mp2888_read_phase(struct i2c_client *client, struct mp2888_data *data, int page,
+ * because sampling of current occurrence of bit weight has a big deviation, especially for
+ * light load.
+ */
+- ret = DIV_ROUND_CLOSEST(ret * 100 - 9800, data->curr_sense_gain);
+- ret = (data->phase_curr_resolution) ? ret * 2 : ret;
++ ret = DIV_ROUND_CLOSEST(ret * 200 - 19600, data->curr_sense_gain);
+ /* Scale according to total current resolution. */
+- ret = (data->total_curr_resolution) ? ret * 8 : ret * 4;
++ ret = (data->total_curr_resolution) ? ret * 2 : ret;
+ return ret;
+ }
+
+@@ -212,7 +211,7 @@ static int mp2888_read_word_data(struct i2c_client *client, int page, int phase,
+ ret = pmbus_read_word_data(client, page, phase, reg);
+ if (ret < 0)
+ return ret;
+- ret = data->total_curr_resolution ? ret * 2 : ret;
++ ret = data->total_curr_resolution ? ret : DIV_ROUND_CLOSEST(ret, 2);
+ break;
+ case PMBUS_POUT_OP_WARN_LIMIT:
+ ret = pmbus_read_word_data(client, page, phase, reg);
+@@ -223,7 +222,7 @@ static int mp2888_read_word_data(struct i2c_client *client, int page, int phase,
+ * set 1. Actual power is reported with 0.5W or 1W respectively resolution. Scaling
+ * is needed to match both.
+ */
+- ret = data->total_curr_resolution ? ret * 4 : ret * 2;
++ ret = data->total_curr_resolution ? ret * 2 : ret;
+ break;
+ /*
+ * The below registers are not implemented by device or implemented not according to the
+diff --git a/drivers/hwmon/sht4x.c b/drivers/hwmon/sht4x.c
+index c19df3ade48e3..13ac2d8f22c79 100644
+--- a/drivers/hwmon/sht4x.c
++++ b/drivers/hwmon/sht4x.c
+@@ -129,7 +129,7 @@ unlock:
+
+ static ssize_t sht4x_interval_write(struct sht4x_data *data, long val)
+ {
+- data->update_interval = clamp_val(val, SHT4X_MIN_POLL_INTERVAL, UINT_MAX);
++ data->update_interval = clamp_val(val, SHT4X_MIN_POLL_INTERVAL, INT_MAX);
+
+ return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index 70b80e7109905..4d3a3b464ecd8 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -126,8 +126,9 @@
+ * status codes
+ */
+ #define STATUS_IDLE 0x0
+-#define STATUS_WRITE_IN_PROGRESS 0x1
+-#define STATUS_READ_IN_PROGRESS 0x2
++#define STATUS_ACTIVE 0x1
++#define STATUS_WRITE_IN_PROGRESS 0x2
++#define STATUS_READ_IN_PROGRESS 0x4
+
+ /*
+ * operation modes
+@@ -334,12 +335,14 @@ void i2c_dw_disable_int(struct dw_i2c_dev *dev);
+
+ static inline void __i2c_dw_enable(struct dw_i2c_dev *dev)
+ {
++ dev->status |= STATUS_ACTIVE;
+ regmap_write(dev->map, DW_IC_ENABLE, 1);
+ }
+
+ static inline void __i2c_dw_disable_nowait(struct dw_i2c_dev *dev)
+ {
+ regmap_write(dev->map, DW_IC_ENABLE, 0);
++ dev->status &= ~STATUS_ACTIVE;
+ }
+
+ void __i2c_dw_disable(struct dw_i2c_dev *dev);
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 44a94b225ed82..dc3c5a15a95b9 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -716,6 +716,19 @@ static int i2c_dw_irq_handler_master(struct dw_i2c_dev *dev)
+ u32 stat;
+
+ stat = i2c_dw_read_clear_intrbits(dev);
++
++ if (!(dev->status & STATUS_ACTIVE)) {
++ /*
++ * Unexpected interrupt in driver point of view. State
++ * variables are either unset or stale so acknowledge and
++ * disable interrupts for suppressing further interrupts if
++ * interrupt really came from this HW (E.g. firmware has left
++ * the HW active).
++ */
++ regmap_write(dev->map, DW_IC_INTR_MASK, 0);
++ return 0;
++ }
++
+ if (stat & DW_IC_INTR_TX_ABRT) {
+ dev->cmd_err |= DW_IC_ERR_TX_ABRT;
+ dev->status = STATUS_IDLE;
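The header change works because the status codes become distinct bits (0x1/0x2/0x4) rather than enumerated values, so STATUS_ACTIVE can be set and cleared independently of the in-progress flags, and the IRQ handler can cheaply drop interrupts that arrive while the adapter is disabled. A tiny flags sketch:

#include <stdio.h>

#define STATUS_IDLE			0x0
#define STATUS_ACTIVE			0x1	/* one bit per state... */
#define STATUS_WRITE_IN_PROGRESS	0x2	/* ...so they can be */
#define STATUS_READ_IN_PROGRESS		0x4	/* ...combined freely */

int main(void)
{
	unsigned int status = STATUS_IDLE;

	status |= STATUS_ACTIVE;		/* __i2c_dw_enable() */
	status |= STATUS_WRITE_IN_PROGRESS;	/* transfer starts */

	if (!(status & STATUS_ACTIVE))
		puts("spurious interrupt: mask and ignore");
	else
		puts("interrupt belongs to an active transfer");

	status &= ~STATUS_ACTIVE;	/* __i2c_dw_disable_nowait() */
	printf("status = 0x%x\n", status);
	return 0;
}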
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index 608e612094556..ca368482b2464 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -27,7 +27,6 @@
+ #include "i2c-ccgx-ucsi.h"
+
+ #define DRIVER_NAME "i2c-designware-pci"
+-#define AMD_CLK_RATE_HZ 100000
+
+ enum dw_pci_ctl_id_t {
+ medfield,
+@@ -100,11 +99,6 @@ static u32 mfld_get_clk_rate_khz(struct dw_i2c_dev *dev)
+ return 25000;
+ }
+
+-static u32 navi_amd_get_clk_rate_khz(struct dw_i2c_dev *dev)
+-{
+- return AMD_CLK_RATE_HZ;
+-}
+-
+ static int mfld_setup(struct pci_dev *pdev, struct dw_pci_controller *c)
+ {
+ struct dw_i2c_dev *dev = dev_get_drvdata(&pdev->dev);
+@@ -126,15 +120,6 @@ static int mfld_setup(struct pci_dev *pdev, struct dw_pci_controller *c)
+ return -ENODEV;
+ }
+
+-static int navi_amd_setup(struct pci_dev *pdev, struct dw_pci_controller *c)
+-{
+- struct dw_i2c_dev *dev = dev_get_drvdata(&pdev->dev);
+-
+- dev->flags |= MODEL_AMD_NAVI_GPU;
+- dev->timings.bus_freq_hz = I2C_MAX_STANDARD_MODE_FREQ;
+- return 0;
+-}
+-
+ static int mrfld_setup(struct pci_dev *pdev, struct dw_pci_controller *c)
+ {
+ /*
+@@ -159,6 +144,20 @@ static u32 ehl_get_clk_rate_khz(struct dw_i2c_dev *dev)
+ return 100000;
+ }
+
++static u32 navi_amd_get_clk_rate_khz(struct dw_i2c_dev *dev)
++{
++ return 100000;
++}
++
++static int navi_amd_setup(struct pci_dev *pdev, struct dw_pci_controller *c)
++{
++ struct dw_i2c_dev *dev = dev_get_drvdata(&pdev->dev);
++
++ dev->flags |= MODEL_AMD_NAVI_GPU;
++ dev->timings.bus_freq_hz = I2C_MAX_STANDARD_MODE_FREQ;
++ return 0;
++}
++
+ static struct dw_pci_controller dw_pci_controllers[] = {
+ [medfield] = {
+ .bus_num = -1,
+@@ -389,6 +388,7 @@ static const struct pci_device_id i2_designware_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x4bbe), elkhartlake },
+ { PCI_VDEVICE(INTEL, 0x4bbf), elkhartlake },
+ { PCI_VDEVICE(INTEL, 0x4bc0), elkhartlake },
++ /* AMD NAVI */
+ { PCI_VDEVICE(ATI, 0x7314), navi_amd },
+ { PCI_VDEVICE(ATI, 0x73a4), navi_amd },
+ { PCI_VDEVICE(ATI, 0x73e4), navi_amd },
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index ad5efd7497d1c..0e840eba4fd64 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -306,6 +306,7 @@ static u64 mlxbf_i2c_corepll_frequency;
+ * exact.
+ */
+ #define MLXBF_I2C_SMBUS_TIMEOUT (300 * 1000) /* 300ms */
++#define MLXBF_I2C_SMBUS_LOCK_POLL_TIMEOUT (300 * 1000) /* 300ms */
+
+ /* Encapsulates timing parameters. */
+ struct mlxbf_i2c_timings {
+@@ -514,6 +515,25 @@ static bool mlxbf_smbus_master_wait_for_idle(struct mlxbf_i2c_priv *priv)
+ return false;
+ }
+
++/*
++ * Wait for the lock to be released before acquiring it.
++ */
++static bool mlxbf_i2c_smbus_master_lock(struct mlxbf_i2c_priv *priv)
++{
++ if (mlxbf_smbus_poll(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_GW,
++ MLXBF_I2C_MASTER_LOCK_BIT, true,
++ MLXBF_I2C_SMBUS_LOCK_POLL_TIMEOUT))
++ return true;
++
++ return false;
++}
++
++static void mlxbf_i2c_smbus_master_unlock(struct mlxbf_i2c_priv *priv)
++{
++ /* Clear the gw to clear the lock */
++ writel(0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_GW);
++}
++
+ static bool mlxbf_i2c_smbus_transaction_success(u32 master_status,
+ u32 cause_status)
+ {
+@@ -705,10 +725,19 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ slave = request->slave & GENMASK(6, 0);
+ addr = slave << 1;
+
+- /* First of all, check whether the HW is idle. */
+- if (WARN_ON(!mlxbf_smbus_master_wait_for_idle(priv)))
++ /*
++	 * Try to acquire the SMBus GW lock before any reads of the GW register since
++ * a read sets the lock.
++ */
++ if (WARN_ON(!mlxbf_i2c_smbus_master_lock(priv)))
+ return -EBUSY;
+
++ /* Check whether the HW is idle */
++ if (WARN_ON(!mlxbf_smbus_master_wait_for_idle(priv))) {
++ ret = -EBUSY;
++ goto out_unlock;
++ }
++
+ /* Set first byte. */
+ data_desc[data_idx++] = addr;
+
+@@ -732,8 +761,10 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ write_en = 1;
+ write_len += operation->length;
+ if (data_idx + operation->length >
+- MLXBF_I2C_MASTER_DATA_DESC_SIZE)
+- return -ENOBUFS;
++ MLXBF_I2C_MASTER_DATA_DESC_SIZE) {
++ ret = -ENOBUFS;
++ goto out_unlock;
++ }
+ memcpy(data_desc + data_idx,
+ operation->buffer, operation->length);
+ data_idx += operation->length;
+@@ -765,7 +796,7 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ ret = mlxbf_i2c_smbus_enable(priv, slave, write_len, block_en,
+ pec_en, 0);
+ if (ret)
+- return ret;
++ goto out_unlock;
+ }
+
+ if (read_en) {
+@@ -792,6 +823,9 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
+ priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_FSM);
+ }
+
++out_unlock:
++ mlxbf_i2c_smbus_master_unlock(priv);
++
+ return ret;
+ }
+
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index edad1f30121dd..502253f53d966 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -93,6 +93,7 @@ enum ad7923_id {
+ .sign = 'u', \
+ .realbits = (bits), \
+ .storagebits = 16, \
++ .shift = 12 - (bits), \
+ .endianness = IIO_BE, \
+ }, \
+ }
+@@ -268,7 +269,8 @@ static int ad7923_read_raw(struct iio_dev *indio_dev,
+ return ret;
+
+ if (chan->address == EXTRACT(ret, 12, 4))
+- *val = EXTRACT(ret, 0, 12);
++ *val = EXTRACT(ret, chan->scan_type.shift,
++ chan->scan_type.realbits);
+ else
+ return -EIO;
+
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index b764823ce57e3..2c087d52f1644 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -77,7 +77,7 @@ struct at91_adc_reg_layout {
+ #define AT91_SAMA5D2_MR_ANACH BIT(23)
+ /* Tracking Time */
+ #define AT91_SAMA5D2_MR_TRACKTIM(v) ((v) << 24)
+-#define AT91_SAMA5D2_MR_TRACKTIM_MAX 0xff
++#define AT91_SAMA5D2_MR_TRACKTIM_MAX 0xf
+ /* Transfer Time */
+ #define AT91_SAMA5D2_MR_TRANSFER(v) ((v) << 28)
+ #define AT91_SAMA5D2_MR_TRANSFER_MAX 0x3
+@@ -1542,10 +1542,12 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
+ ret = at91_adc_read_position(st, chan->channel,
+ &tmp_val);
+ *val = tmp_val;
++ if (ret > 0)
++ ret = at91_adc_adjust_val_osr(st, val);
+ mutex_unlock(&st->lock);
+ iio_device_release_direct_mode(indio_dev);
+
+- return at91_adc_adjust_val_osr(st, val);
++ return ret;
+ }
+ if (chan->type == IIO_PRESSURE) {
+ ret = iio_device_claim_direct_mode(indio_dev);
+@@ -1556,10 +1558,12 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
+ ret = at91_adc_read_pressure(st, chan->channel,
+ &tmp_val);
+ *val = tmp_val;
++ if (ret > 0)
++ ret = at91_adc_adjust_val_osr(st, val);
+ mutex_unlock(&st->lock);
+ iio_device_release_direct_mode(indio_dev);
+
+- return at91_adc_adjust_val_osr(st, val);
++ return ret;
+ }
+
+ /* in this case we have a voltage channel */
+@@ -1646,16 +1650,20 @@ static int at91_adc_write_raw(struct iio_dev *indio_dev,
+ /* if no change, optimize out */
+ if (val == st->oversampling_ratio)
+ return 0;
++ mutex_lock(&st->lock);
+ st->oversampling_ratio = val;
+ /* update ratio */
+ at91_adc_config_emr(st);
++ mutex_unlock(&st->lock);
+ return 0;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+ if (val < st->soc_info.min_sample_rate ||
+ val > st->soc_info.max_sample_rate)
+ return -EINVAL;
+
++ mutex_lock(&st->lock);
+ at91_adc_setup_samp_freq(indio_dev, val);
++ mutex_unlock(&st->lock);
+ return 0;
+ default:
+ return -EINVAL;
+@@ -2108,6 +2116,9 @@ static __maybe_unused int at91_adc_suspend(struct device *dev)
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ struct at91_adc_state *st = iio_priv(indio_dev);
+
++ if (iio_buffer_enabled(indio_dev))
++ at91_adc_buffer_postdisable(indio_dev);
++
+ /*
+	 * Do a software reset of the ADC before we go to suspend.
+	 * This will ensure that all pins are free from being muxed by the ADC
+@@ -2151,14 +2162,11 @@ static __maybe_unused int at91_adc_resume(struct device *dev)
+ if (!iio_buffer_enabled(indio_dev))
+ return 0;
+
+- /* check if we are enabling triggered buffer or the touchscreen */
+- if (at91_adc_current_chan_is_touch(indio_dev))
+- return at91_adc_configure_touch(st, true);
+- else
+- return at91_adc_configure_trigger(st->trig, true);
++ ret = at91_adc_buffer_prepare(indio_dev);
++ if (ret)
++ goto vref_disable_resume;
+
+- /* not needed but more explicit */
+- return 0;
++ return at91_adc_configure_trigger(st->trig, true);
+
+ vref_disable_resume:
+ regulator_disable(st->vref);
+diff --git a/drivers/iio/adc/ltc2497.c b/drivers/iio/adc/ltc2497.c
+index f7c786f37ceb1..78b93c99cc47c 100644
+--- a/drivers/iio/adc/ltc2497.c
++++ b/drivers/iio/adc/ltc2497.c
+@@ -41,6 +41,19 @@ static int ltc2497_result_and_measure(struct ltc2497core_driverdata *ddata,
+ }
+
+ *val = (be32_to_cpu(st->buf) >> 14) - (1 << 17);
++
++ /*
++ * The part started a new conversion at the end of the above i2c
++ * transfer, so if the address didn't change since the last call
++ * everything is fine and we can return early.
++ * If not (which should only happen when some sort of bulk
++ * conversion is implemented) we have to program the new
++ * address. Note that this probably fails as the conversion that
++	 * was triggered above is likely not complete yet and the two
++ * operations have to be done in a single transfer.
++ */
++ if (ddata->addr_prev == address)
++ return 0;
+ }
+
+ ret = i2c_smbus_write_byte(st->client,
+diff --git a/drivers/iio/dac/ad5593r.c b/drivers/iio/dac/ad5593r.c
+index 34e1319a97126..356dc0bab1153 100644
+--- a/drivers/iio/dac/ad5593r.c
++++ b/drivers/iio/dac/ad5593r.c
+@@ -13,6 +13,8 @@
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+
++#include <asm/unaligned.h>
++
+ #define AD5593R_MODE_CONF (0 << 4)
+ #define AD5593R_MODE_DAC_WRITE (1 << 4)
+ #define AD5593R_MODE_ADC_READBACK (4 << 4)
+@@ -20,6 +22,24 @@
+ #define AD5593R_MODE_GPIO_READBACK (6 << 4)
+ #define AD5593R_MODE_REG_READBACK (7 << 4)
+
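++/*
++ * Read a 16-bit big-endian value, issuing the register-pointer write
++ * and the two-byte read as separate transfers.
++ */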
++static int ad5593r_read_word(struct i2c_client *i2c, u8 reg, u16 *value)
++{
++ int ret;
++ u8 buf[2];
++
++ ret = i2c_smbus_write_byte(i2c, reg);
++ if (ret < 0)
++ return ret;
++
++ ret = i2c_master_recv(i2c, buf, sizeof(buf));
++ if (ret < 0)
++ return ret;
++
++ *value = get_unaligned_be16(buf);
++
++ return 0;
++}
++
+ static int ad5593r_write_dac(struct ad5592r_state *st, unsigned chan, u16 value)
+ {
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+@@ -38,13 +58,7 @@ static int ad5593r_read_adc(struct ad5592r_state *st, unsigned chan, u16 *value)
+ if (val < 0)
+ return (int) val;
+
+- val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_ADC_READBACK);
+- if (val < 0)
+- return (int) val;
+-
+- *value = (u16) val;
+-
+- return 0;
++ return ad5593r_read_word(i2c, AD5593R_MODE_ADC_READBACK, value);
+ }
+
+ static int ad5593r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+@@ -58,25 +72,19 @@ static int ad5593r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+ static int ad5593r_reg_read(struct ad5592r_state *st, u8 reg, u16 *value)
+ {
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+- s32 val;
+-
+- val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_REG_READBACK | reg);
+- if (val < 0)
+- return (int) val;
+
+- *value = (u16) val;
+-
+- return 0;
++ return ad5593r_read_word(i2c, AD5593R_MODE_REG_READBACK | reg, value);
+ }
+
+ static int ad5593r_gpio_read(struct ad5592r_state *st, u8 *value)
+ {
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+- s32 val;
++ u16 val;
++ int ret;
+
+- val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_GPIO_READBACK);
+- if (val < 0)
+- return (int) val;
++ ret = ad5593r_read_word(i2c, AD5593R_MODE_GPIO_READBACK, &val);
++ if (ret)
++ return ret;
+
+ *value = (u8) val;
+
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index df74765d33dcb..87fd2a0d44f2a 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -165,9 +165,10 @@ static int __of_iio_channel_get(struct iio_channel *channel,
+
+ idev = bus_find_device(&iio_bus_type, NULL, iiospec.np,
+ iio_dev_node_match);
+- of_node_put(iiospec.np);
+- if (idev == NULL)
++ if (idev == NULL) {
++ of_node_put(iiospec.np);
+ return -EPROBE_DEFER;
++ }
+
+ indio_dev = dev_to_iio_dev(idev);
+ channel->indio_dev = indio_dev;
+@@ -175,6 +176,7 @@ static int __of_iio_channel_get(struct iio_channel *channel,
+ index = indio_dev->info->of_xlate(indio_dev, &iiospec);
+ else
+ index = __of_iio_simple_xlate(indio_dev, &iiospec);
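++	/* the xlate call above still uses iiospec.np, so only put it now */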
++ of_node_put(iiospec.np);
+ if (index < 0)
+ goto err_put;
+ channel->channel = &indio_dev->channels[index];
+@@ -410,6 +412,8 @@ struct iio_channel *devm_of_iio_channel_get_by_name(struct device *dev,
+ channel = of_iio_channel_get_by_name(np, channel_name);
+ if (IS_ERR(channel))
+ return channel;
++ if (!channel)
++ return ERR_PTR(-ENODEV);
+
+ ret = devm_add_action_or_reset(dev, devm_iio_channel_free, channel);
+ if (ret)
+diff --git a/drivers/iio/magnetometer/yamaha-yas530.c b/drivers/iio/magnetometer/yamaha-yas530.c
+index b2bc637150bfa..40192aa46b048 100644
+--- a/drivers/iio/magnetometer/yamaha-yas530.c
++++ b/drivers/iio/magnetometer/yamaha-yas530.c
+@@ -132,7 +132,7 @@ struct yas5xx {
+ unsigned int version;
+ char name[16];
+ struct yas5xx_calibration calibration;
+- u8 hard_offsets[3];
++ s8 hard_offsets[3];
+ struct iio_mount_matrix orientation;
+ struct regmap *map;
+ struct regulator_bulk_data regs[2];
+diff --git a/drivers/iio/pressure/dps310.c b/drivers/iio/pressure/dps310.c
+index 36fb7ae0d0a9d..984a3f511a1ae 100644
+--- a/drivers/iio/pressure/dps310.c
++++ b/drivers/iio/pressure/dps310.c
+@@ -89,6 +89,7 @@ struct dps310_data {
+ s32 c00, c10, c20, c30, c01, c11, c21;
+ s32 pressure_raw;
+ s32 temp_raw;
++ bool timeout_recovery_failed;
+ };
+
+ static const struct iio_chan_spec dps310_channels[] = {
+@@ -159,6 +160,102 @@ static int dps310_get_coefs(struct dps310_data *data)
+ return 0;
+ }
+
++/*
++ * Some versions of the chip will read a temperature in the ~60C range when
++ * it is actually ~20C. This is the manufacturer-recommended workaround
++ * to correct the issue. The registers used below are undocumented.
++ */
++static int dps310_temp_workaround(struct dps310_data *data)
++{
++ int rc;
++ int reg;
++
++	rc = regmap_read(data->regmap, 0x32, &reg);
++ if (rc)
++ return rc;
++
++ /*
++ * If bit 1 is set then the device is okay, and the workaround does not
++ * need to be applied
++ */
++ if (reg & BIT(1))
++ return 0;
++
++ rc = regmap_write(data->regmap, 0x0e, 0xA5);
++ if (rc)
++ return rc;
++
++ rc = regmap_write(data->regmap, 0x0f, 0x96);
++ if (rc)
++ return rc;
++
++ rc = regmap_write(data->regmap, 0x62, 0x02);
++ if (rc)
++ return rc;
++
++ rc = regmap_write(data->regmap, 0x0e, 0x00);
++ if (rc)
++ return rc;
++
++ return regmap_write(data->regmap, 0x0f, 0x00);
++}
++
++static int dps310_startup(struct dps310_data *data)
++{
++ int rc;
++ int ready;
++
++ /*
++ * Set up pressure sensor in single sample, one measurement per second
++ * mode
++ */
++ rc = regmap_write(data->regmap, DPS310_PRS_CFG, 0);
++ if (rc)
++ return rc;
++
++ /*
++ * Set up external (MEMS) temperature sensor in single sample, one
++ * measurement per second mode
++ */
++ rc = regmap_write(data->regmap, DPS310_TMP_CFG, DPS310_TMP_EXT);
++ if (rc)
++ return rc;
++
++ /* Temp and pressure shifts are disabled when PRC <= 8 */
++ rc = regmap_write_bits(data->regmap, DPS310_CFG_REG,
++ DPS310_PRS_SHIFT_EN | DPS310_TMP_SHIFT_EN, 0);
++ if (rc)
++ return rc;
++
++ /* MEAS_CFG doesn't update correctly unless first written with 0 */
++ rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
++ DPS310_MEAS_CTRL_BITS, 0);
++ if (rc)
++ return rc;
++
++ /* Turn on temperature and pressure measurement in the background */
++ rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
++ DPS310_MEAS_CTRL_BITS, DPS310_PRS_EN |
++ DPS310_TEMP_EN | DPS310_BACKGROUND);
++ if (rc)
++ return rc;
++
++ /*
++ * Calibration coefficients required for reporting temperature.
++ * They are available 40ms after the device has started
++ */
++ rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
++ ready & DPS310_COEF_RDY, 10000, 40000);
++ if (rc)
++ return rc;
++
++ rc = dps310_get_coefs(data);
++ if (rc)
++ return rc;
++
++ return dps310_temp_workaround(data);
++}
++
+ static int dps310_get_pres_precision(struct dps310_data *data)
+ {
+ int rc;
+@@ -297,11 +394,69 @@ static int dps310_get_temp_k(struct dps310_data *data)
+ return scale_factors[ilog2(rc)];
+ }
+
++static int dps310_reset_wait(struct dps310_data *data)
++{
++ int rc;
++
++ rc = regmap_write(data->regmap, DPS310_RESET, DPS310_RESET_MAGIC);
++ if (rc)
++ return rc;
++
++ /* Wait for device chip access: 2.5ms in specification */
++ usleep_range(2500, 12000);
++ return 0;
++}
++
++static int dps310_reset_reinit(struct dps310_data *data)
++{
++ int rc;
++
++ rc = dps310_reset_wait(data);
++ if (rc)
++ return rc;
++
++ return dps310_startup(data);
++}
++
++static int dps310_ready_status(struct dps310_data *data, int ready_bit, int timeout)
++{
++ int sleep = DPS310_POLL_SLEEP_US(timeout);
++ int ready;
++
++ return regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready, ready & ready_bit,
++ sleep, timeout);
++}
++
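++/*
++ * Poll for a ready bit; on a timeout, try one reset-and-reinit of the
++ * chip and poll again before reporting the failure.
++ */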
++static int dps310_ready(struct dps310_data *data, int ready_bit, int timeout)
++{
++ int rc;
++
++ rc = dps310_ready_status(data, ready_bit, timeout);
++ if (rc) {
++ if (rc == -ETIMEDOUT && !data->timeout_recovery_failed) {
++ /* Reset and reinitialize the chip. */
++ if (dps310_reset_reinit(data)) {
++ data->timeout_recovery_failed = true;
++ } else {
++ /* Try again to get sensor ready status. */
++ if (dps310_ready_status(data, ready_bit, timeout))
++ data->timeout_recovery_failed = true;
++ else
++ return 0;
++ }
++ }
++
++ return rc;
++ }
++
++ data->timeout_recovery_failed = false;
++ return 0;
++}
++
+ static int dps310_read_pres_raw(struct dps310_data *data)
+ {
+ int rc;
+ int rate;
+- int ready;
+ int timeout;
+ s32 raw;
+ u8 val[3];
+@@ -313,9 +468,7 @@ static int dps310_read_pres_raw(struct dps310_data *data)
+ timeout = DPS310_POLL_TIMEOUT_US(rate);
+
+ /* Poll for sensor readiness; base the timeout upon the sample rate. */
+- rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+- ready & DPS310_PRS_RDY,
+- DPS310_POLL_SLEEP_US(timeout), timeout);
++ rc = dps310_ready(data, DPS310_PRS_RDY, timeout);
+ if (rc)
+ goto done;
+
+@@ -352,7 +505,6 @@ static int dps310_read_temp_raw(struct dps310_data *data)
+ {
+ int rc;
+ int rate;
+- int ready;
+ int timeout;
+
+ if (mutex_lock_interruptible(&data->lock))
+@@ -362,10 +514,8 @@ static int dps310_read_temp_raw(struct dps310_data *data)
+ timeout = DPS310_POLL_TIMEOUT_US(rate);
+
+ /* Poll for sensor readiness; base the timeout upon the sample rate. */
+- rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+- ready & DPS310_TMP_RDY,
+- DPS310_POLL_SLEEP_US(timeout), timeout);
+- if (rc < 0)
++ rc = dps310_ready(data, DPS310_TMP_RDY, timeout);
++ if (rc)
+ goto done;
+
+ rc = dps310_read_temp_ready(data);
+@@ -660,7 +810,7 @@ static void dps310_reset(void *action_data)
+ {
+ struct dps310_data *data = action_data;
+
+- regmap_write(data->regmap, DPS310_RESET, DPS310_RESET_MAGIC);
++ dps310_reset_wait(data);
+ }
+
+ static const struct regmap_config dps310_regmap_config = {
+@@ -677,52 +827,12 @@ static const struct iio_info dps310_info = {
+ .write_raw = dps310_write_raw,
+ };
+
+-/*
+- * Some verions of chip will read temperatures in the ~60C range when
+- * its actually ~20C. This is the manufacturer recommended workaround
+- * to correct the issue. The registers used below are undocumented.
+- */
+-static int dps310_temp_workaround(struct dps310_data *data)
+-{
+- int rc;
+- int reg;
+-
+-	rc = regmap_read(data->regmap, 0x32, &reg);
+- if (rc < 0)
+- return rc;
+-
+- /*
+- * If bit 1 is set then the device is okay, and the workaround does not
+- * need to be applied
+- */
+- if (reg & BIT(1))
+- return 0;
+-
+- rc = regmap_write(data->regmap, 0x0e, 0xA5);
+- if (rc < 0)
+- return rc;
+-
+- rc = regmap_write(data->regmap, 0x0f, 0x96);
+- if (rc < 0)
+- return rc;
+-
+- rc = regmap_write(data->regmap, 0x62, 0x02);
+- if (rc < 0)
+- return rc;
+-
+- rc = regmap_write(data->regmap, 0x0e, 0x00);
+- if (rc < 0)
+- return rc;
+-
+- return regmap_write(data->regmap, 0x0f, 0x00);
+-}
+-
+ static int dps310_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+ {
+ struct dps310_data *data;
+ struct iio_dev *iio;
+- int rc, ready;
++ int rc;
+
+ iio = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!iio)
+@@ -747,54 +857,8 @@ static int dps310_probe(struct i2c_client *client,
+ if (rc)
+ return rc;
+
+- /*
+- * Set up pressure sensor in single sample, one measurement per second
+- * mode
+- */
+- rc = regmap_write(data->regmap, DPS310_PRS_CFG, 0);
+-
+- /*
+- * Set up external (MEMS) temperature sensor in single sample, one
+- * measurement per second mode
+- */
+- rc = regmap_write(data->regmap, DPS310_TMP_CFG, DPS310_TMP_EXT);
+- if (rc < 0)
+- return rc;
+-
+- /* Temp and pressure shifts are disabled when PRC <= 8 */
+- rc = regmap_write_bits(data->regmap, DPS310_CFG_REG,
+- DPS310_PRS_SHIFT_EN | DPS310_TMP_SHIFT_EN, 0);
+- if (rc < 0)
+- return rc;
+-
+- /* MEAS_CFG doesn't update correctly unless first written with 0 */
+- rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
+- DPS310_MEAS_CTRL_BITS, 0);
+- if (rc < 0)
+- return rc;
+-
+- /* Turn on temperature and pressure measurement in the background */
+- rc = regmap_write_bits(data->regmap, DPS310_MEAS_CFG,
+- DPS310_MEAS_CTRL_BITS, DPS310_PRS_EN |
+- DPS310_TEMP_EN | DPS310_BACKGROUND);
+- if (rc < 0)
+- return rc;
+-
+- /*
+- * Calibration coefficients required for reporting temperature.
+- * They are available 40ms after the device has started
+- */
+- rc = regmap_read_poll_timeout(data->regmap, DPS310_MEAS_CFG, ready,
+- ready & DPS310_COEF_RDY, 10000, 40000);
+- if (rc < 0)
+- return rc;
+-
+- rc = dps310_get_coefs(data);
+- if (rc < 0)
+- return rc;
+-
+- rc = dps310_temp_workaround(data);
+- if (rc < 0)
++ rc = dps310_startup(data);
++ if (rc)
+ return rc;
+
+ rc = devm_iio_device_register(&client->dev, iio);
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index b985e0d9bc05e..5c910f5c01b35 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1632,14 +1632,13 @@ static void cm_path_set_rec_type(struct ib_device *ib_device, u32 port_num,
+
+ static void cm_format_path_lid_from_req(struct cm_req_msg *req_msg,
+ struct sa_path_rec *primary_path,
+- struct sa_path_rec *alt_path)
++ struct sa_path_rec *alt_path,
++ struct ib_wc *wc)
+ {
+ u32 lid;
+
+ if (primary_path->rec_type != SA_PATH_REC_TYPE_OPA) {
+- sa_path_set_dlid(primary_path,
+- IBA_GET(CM_REQ_PRIMARY_LOCAL_PORT_LID,
+- req_msg));
++ sa_path_set_dlid(primary_path, wc->slid);
+ sa_path_set_slid(primary_path,
+ IBA_GET(CM_REQ_PRIMARY_REMOTE_PORT_LID,
+ req_msg));
+@@ -1676,7 +1675,8 @@ static void cm_format_path_lid_from_req(struct cm_req_msg *req_msg,
+
+ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
+ struct sa_path_rec *primary_path,
+- struct sa_path_rec *alt_path)
++ struct sa_path_rec *alt_path,
++ struct ib_wc *wc)
+ {
+ primary_path->dgid =
+ *IBA_GET_MEM_PTR(CM_REQ_PRIMARY_LOCAL_PORT_GID, req_msg);
+@@ -1734,7 +1734,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
+ if (sa_path_is_roce(alt_path))
+ alt_path->roce.route_resolved = false;
+ }
+- cm_format_path_lid_from_req(req_msg, primary_path, alt_path);
++ cm_format_path_lid_from_req(req_msg, primary_path, alt_path, wc);
+ }
+
+ static u16 cm_get_bth_pkey(struct cm_work *work)
+@@ -2148,7 +2148,7 @@ static int cm_req_handler(struct cm_work *work)
+ if (cm_req_has_alt_path(req_msg))
+ work->path[1].rec_type = work->path[0].rec_type;
+ cm_format_paths_from_req(req_msg, &work->path[0],
+- &work->path[1]);
++ &work->path[1], work->mad_recv_wc->wc);
+ if (cm_id_priv->av.ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
+ sa_path_set_dmac(&work->path[0],
+ cm_id_priv->av.ah_attr.roce.dmac);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 046376bd68e27..4796f6a8828ca 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -739,6 +739,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ mr->uobject = uobj;
+ atomic_inc(&pd->usecnt);
+ mr->iova = cmd.hca_va;
++ mr->length = cmd.length;
+
+ rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
+ rdma_restrack_set_name(&mr->res, NULL);
+@@ -861,8 +862,10 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ mr->pd = new_pd;
+ atomic_inc(&new_pd->usecnt);
+ }
+- if (cmd.flags & IB_MR_REREG_TRANS)
++ if (cmd.flags & IB_MR_REREG_TRANS) {
+ mr->iova = cmd.hca_va;
++ mr->length = cmd.length;
++ }
+ }
+
+ memset(&resp, 0, sizeof(resp));
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index e54b3f1b730e0..f8964c8cf0ade 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2149,6 +2149,8 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ mr->pd = pd;
+ mr->dm = NULL;
+ atomic_inc(&pd->usecnt);
++ mr->iova = virt_addr;
++ mr->length = length;
+
+ rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
+ rdma_restrack_parent_name(&mr->res, &pd->res);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 867972c2a894d..dedfa56f57731 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -249,7 +249,6 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ goto err_alloc_pbl;
+
+ mr->ibmr.rkey = mr->ibmr.lkey = mr->key;
+- mr->ibmr.length = length;
+
+ return &mr->ibmr;
+
+diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h
+index e03e03082a5fb..c1906cab5c8ad 100644
+--- a/drivers/infiniband/hw/irdma/defs.h
++++ b/drivers/infiniband/hw/irdma/defs.h
+@@ -314,6 +314,7 @@ enum irdma_cqp_op_type {
+ #define IRDMA_AE_IB_REMOTE_ACCESS_ERROR 0x020d
+ #define IRDMA_AE_IB_REMOTE_OP_ERROR 0x020e
+ #define IRDMA_AE_WQE_LSMM_TOO_LONG 0x0220
++#define IRDMA_AE_INVALID_REQUEST 0x0223
+ #define IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN 0x0301
+ #define IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER 0x0303
+ #define IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION 0x0304
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index 6bba1335993a1..971cc7a7f3bc0 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -138,59 +138,68 @@ static void irdma_set_flush_fields(struct irdma_sc_qp *qp,
+ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+
+ switch (info->ae_id) {
+- case IRDMA_AE_AMP_UNALLOCATED_STAG:
+ case IRDMA_AE_AMP_BOUNDS_VIOLATION:
+ case IRDMA_AE_AMP_INVALID_STAG:
+- qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR;
+- fallthrough;
++ case IRDMA_AE_AMP_RIGHTS_VIOLATION:
++ case IRDMA_AE_AMP_UNALLOCATED_STAG:
+ case IRDMA_AE_AMP_BAD_PD:
+- case IRDMA_AE_UDA_XMIT_BAD_PD:
++ case IRDMA_AE_AMP_BAD_QP:
++ case IRDMA_AE_AMP_BAD_STAG_KEY:
++ case IRDMA_AE_AMP_BAD_STAG_INDEX:
++ case IRDMA_AE_AMP_TO_WRAP:
++ case IRDMA_AE_PRIV_OPERATION_DENIED:
+ qp->flush_code = FLUSH_PROT_ERR;
++ qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR;
+ break;
+- case IRDMA_AE_AMP_BAD_QP:
++ case IRDMA_AE_UDA_XMIT_BAD_PD:
+ case IRDMA_AE_WQE_UNEXPECTED_OPCODE:
+ qp->flush_code = FLUSH_LOC_QP_OP_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
++ break;
++ case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG:
++ case IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT:
++ case IRDMA_AE_UDA_L4LEN_INVALID:
++ case IRDMA_AE_DDP_UBE_INVALID_MO:
++ case IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER:
++ qp->flush_code = FLUSH_LOC_LEN_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+ break;
+- case IRDMA_AE_AMP_BAD_STAG_KEY:
+- case IRDMA_AE_AMP_BAD_STAG_INDEX:
+- case IRDMA_AE_AMP_TO_WRAP:
+- case IRDMA_AE_AMP_RIGHTS_VIOLATION:
+ case IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS:
+- case IRDMA_AE_PRIV_OPERATION_DENIED:
+- case IRDMA_AE_IB_INVALID_REQUEST:
+ case IRDMA_AE_IB_REMOTE_ACCESS_ERROR:
+ qp->flush_code = FLUSH_REM_ACCESS_ERR;
+ qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR;
+ break;
+ case IRDMA_AE_LLP_SEGMENT_TOO_SMALL:
+- case IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER:
+- case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG:
+- case IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT:
+- case IRDMA_AE_UDA_L4LEN_INVALID:
++ case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR:
+ case IRDMA_AE_ROCE_RSP_LENGTH_ERROR:
+- qp->flush_code = FLUSH_LOC_LEN_ERR;
++ case IRDMA_AE_IB_REMOTE_OP_ERROR:
++ qp->flush_code = FLUSH_REM_OP_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+ break;
+ case IRDMA_AE_LCE_QP_CATASTROPHIC:
+ qp->flush_code = FLUSH_FATAL_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+ break;
+- case IRDMA_AE_DDP_UBE_INVALID_MO:
+ case IRDMA_AE_IB_RREQ_AND_Q1_FULL:
+- case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR:
+ qp->flush_code = FLUSH_GENERAL_ERR;
+ break;
+ case IRDMA_AE_LLP_TOO_MANY_RETRIES:
+ qp->flush_code = FLUSH_RETRY_EXC_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+ break;
+ case IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS:
+ case IRDMA_AE_AMP_MWBIND_BIND_DISABLED:
+ case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS:
+ qp->flush_code = FLUSH_MW_BIND_ERR;
++ qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR;
+ break;
+- case IRDMA_AE_IB_REMOTE_OP_ERROR:
+- qp->flush_code = FLUSH_REM_OP_ERR;
++ case IRDMA_AE_IB_INVALID_REQUEST:
++ qp->flush_code = FLUSH_REM_INV_REQ_ERR;
++ qp->event_type = IRDMA_QP_EVENT_REQ_ERR;
+ break;
+ default:
+- qp->flush_code = FLUSH_FATAL_ERR;
++ qp->flush_code = FLUSH_GENERAL_ERR;
++ qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC;
+ break;
+ }
+ }
+diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h
+index 9e7b8ecb137ab..517d41a1c2894 100644
+--- a/drivers/infiniband/hw/irdma/type.h
++++ b/drivers/infiniband/hw/irdma/type.h
+@@ -98,6 +98,7 @@ enum irdma_term_mpa_errors {
+ enum irdma_qp_event_type {
+ IRDMA_QP_EVENT_CATASTROPHIC,
+ IRDMA_QP_EVENT_ACCESS_ERR,
++ IRDMA_QP_EVENT_REQ_ERR,
+ };
+
+ enum irdma_hw_stats_index_32b {
+diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h
+index ddd0ebbdd7d54..2ef61923c9268 100644
+--- a/drivers/infiniband/hw/irdma/user.h
++++ b/drivers/infiniband/hw/irdma/user.h
+@@ -103,6 +103,7 @@ enum irdma_flush_opcode {
+ FLUSH_FATAL_ERR,
+ FLUSH_RETRY_EXC_ERR,
+ FLUSH_MW_BIND_ERR,
++ FLUSH_REM_INV_REQ_ERR,
+ };
+
+ enum irdma_cmpl_status {
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index f4d774451160d..c9513b9fc42d5 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -2478,6 +2478,9 @@ void irdma_ib_qp_event(struct irdma_qp *iwqp, enum irdma_qp_event_type event)
+ case IRDMA_QP_EVENT_ACCESS_ERR:
+ ibevent.event = IB_EVENT_QP_ACCESS_ERR;
+ break;
++ case IRDMA_QP_EVENT_REQ_ERR:
++ ibevent.event = IB_EVENT_QP_REQ_ERR;
++ break;
+ }
+ ibevent.device = iwqp->ibqp.device;
+ ibevent.element.qp = &iwqp->ibqp;
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index ab73d1715f991..c5652efb3df22 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -299,13 +299,19 @@ static void irdma_alloc_push_page(struct irdma_qp *iwqp)
+ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
+ struct ib_udata *udata)
+ {
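++/* smallest req/resp buffer sizes this verb's user ABI permits */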
++#define IRDMA_ALLOC_UCTX_MIN_REQ_LEN offsetofend(struct irdma_alloc_ucontext_req, rsvd8)
++#define IRDMA_ALLOC_UCTX_MIN_RESP_LEN offsetofend(struct irdma_alloc_ucontext_resp, rsvd)
+ struct ib_device *ibdev = uctx->device;
+ struct irdma_device *iwdev = to_iwdev(ibdev);
+- struct irdma_alloc_ucontext_req req;
++ struct irdma_alloc_ucontext_req req = {};
+ struct irdma_alloc_ucontext_resp uresp = {};
+ struct irdma_ucontext *ucontext = to_ucontext(uctx);
+ struct irdma_uk_attrs *uk_attrs;
+
++ if (udata->inlen < IRDMA_ALLOC_UCTX_MIN_REQ_LEN ||
++ udata->outlen < IRDMA_ALLOC_UCTX_MIN_RESP_LEN)
++ return -EINVAL;
++
+ if (ib_copy_from_udata(&req, udata, min(sizeof(req), udata->inlen)))
+ return -EINVAL;
+
+@@ -317,7 +323,7 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
+
+ uk_attrs = &iwdev->rf->sc_dev.hw_attrs.uk_attrs;
+ /* GEN_1 legacy support with libi40iw */
+- if (udata->outlen < sizeof(uresp)) {
++ if (udata->outlen == IRDMA_ALLOC_UCTX_MIN_RESP_LEN) {
+ if (uk_attrs->hw_rev != IRDMA_GEN_1)
+ return -EOPNOTSUPP;
+
+@@ -389,6 +395,7 @@ static void irdma_dealloc_ucontext(struct ib_ucontext *context)
+ */
+ static int irdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata)
+ {
++#define IRDMA_ALLOC_PD_MIN_RESP_LEN offsetofend(struct irdma_alloc_pd_resp, rsvd)
+ struct irdma_pd *iwpd = to_iwpd(pd);
+ struct irdma_device *iwdev = to_iwdev(pd->device);
+ struct irdma_sc_dev *dev = &iwdev->rf->sc_dev;
+@@ -398,6 +405,9 @@ static int irdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata)
+ u32 pd_id = 0;
+ int err;
+
++ if (udata && udata->outlen < IRDMA_ALLOC_PD_MIN_RESP_LEN)
++ return -EINVAL;
++
+ err = irdma_alloc_rsrc(rf, rf->allocated_pds, rf->max_pd, &pd_id,
+ &rf->next_pd);
+ if (err)
+@@ -814,12 +824,14 @@ static int irdma_create_qp(struct ib_qp *ibqp,
+ struct ib_qp_init_attr *init_attr,
+ struct ib_udata *udata)
+ {
++#define IRDMA_CREATE_QP_MIN_REQ_LEN offsetofend(struct irdma_create_qp_req, user_compl_ctx)
++#define IRDMA_CREATE_QP_MIN_RESP_LEN offsetofend(struct irdma_create_qp_resp, rsvd)
+ struct ib_pd *ibpd = ibqp->pd;
+ struct irdma_pd *iwpd = to_iwpd(ibpd);
+ struct irdma_device *iwdev = to_iwdev(ibpd->device);
+ struct irdma_pci_f *rf = iwdev->rf;
+ struct irdma_qp *iwqp = to_iwqp(ibqp);
+- struct irdma_create_qp_req req;
++ struct irdma_create_qp_req req = {};
+ struct irdma_create_qp_resp uresp = {};
+ u32 qp_num = 0;
+ int err_code;
+@@ -836,6 +848,10 @@ static int irdma_create_qp(struct ib_qp *ibqp,
+ if (err_code)
+ return err_code;
+
++ if (udata && (udata->inlen < IRDMA_CREATE_QP_MIN_REQ_LEN ||
++ udata->outlen < IRDMA_CREATE_QP_MIN_RESP_LEN))
++ return -EINVAL;
++
+ sq_size = init_attr->cap.max_send_wr;
+ rq_size = init_attr->cap.max_recv_wr;
+
+@@ -1120,6 +1136,8 @@ static int irdma_query_pkey(struct ib_device *ibdev, u32 port, u16 index,
+ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ int attr_mask, struct ib_udata *udata)
+ {
++#define IRDMA_MODIFY_QP_MIN_REQ_LEN offsetofend(struct irdma_modify_qp_req, rq_flush)
++#define IRDMA_MODIFY_QP_MIN_RESP_LEN offsetofend(struct irdma_modify_qp_resp, push_valid)
+ struct irdma_pd *iwpd = to_iwpd(ibqp->pd);
+ struct irdma_qp *iwqp = to_iwqp(ibqp);
+ struct irdma_device *iwdev = iwqp->iwdev;
+@@ -1138,6 +1156,13 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ roce_info = &iwqp->roce_info;
+ udp_info = &iwqp->udp_info;
+
++ if (udata) {
++ /* udata inlen/outlen can be 0 when supporting legacy libi40iw */
++ if ((udata->inlen && udata->inlen < IRDMA_MODIFY_QP_MIN_REQ_LEN) ||
++ (udata->outlen && udata->outlen < IRDMA_MODIFY_QP_MIN_RESP_LEN))
++ return -EINVAL;
++ }
++
+ if (attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ return -EOPNOTSUPP;
+
+@@ -1374,7 +1399,7 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+
+ if (iwqp->iwarp_state == IRDMA_QP_STATE_ERROR) {
+ spin_unlock_irqrestore(&iwqp->lock, flags);
+- if (udata) {
++ if (udata && udata->inlen) {
+ if (ib_copy_from_udata(&ureq, udata,
+ min(sizeof(ureq), udata->inlen)))
+ return -EINVAL;
+@@ -1426,7 +1451,7 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ } else {
+ iwqp->ibqp_state = attr->qp_state;
+ }
+- if (udata && dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) {
++ if (udata && udata->outlen && dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) {
+ struct irdma_ucontext *ucontext;
+
+ ucontext = rdma_udata_to_drv_context(udata,
+@@ -1466,6 +1491,8 @@ exit:
+ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ struct ib_udata *udata)
+ {
++#define IRDMA_MODIFY_QP_MIN_REQ_LEN offsetofend(struct irdma_modify_qp_req, rq_flush)
++#define IRDMA_MODIFY_QP_MIN_RESP_LEN offsetofend(struct irdma_modify_qp_resp, push_valid)
+ struct irdma_qp *iwqp = to_iwqp(ibqp);
+ struct irdma_device *iwdev = iwqp->iwdev;
+ struct irdma_sc_dev *dev = &iwdev->rf->sc_dev;
+@@ -1480,6 +1507,13 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ int err;
+ unsigned long flags;
+
++ if (udata) {
++ /* udata inlen/outlen can be 0 when supporting legacy libi40iw */
++ if ((udata->inlen && udata->inlen < IRDMA_MODIFY_QP_MIN_REQ_LEN) ||
++ (udata->outlen && udata->outlen < IRDMA_MODIFY_QP_MIN_RESP_LEN))
++ return -EINVAL;
++ }
++
+ if (attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ return -EOPNOTSUPP;
+
+@@ -1565,7 +1599,7 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ case IB_QPS_RESET:
+ if (iwqp->iwarp_state == IRDMA_QP_STATE_ERROR) {
+ spin_unlock_irqrestore(&iwqp->lock, flags);
+- if (udata) {
++ if (udata && udata->inlen) {
+ if (ib_copy_from_udata(&ureq, udata,
+ min(sizeof(ureq), udata->inlen)))
+ return -EINVAL;
+@@ -1662,7 +1696,7 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ }
+ }
+ }
+- if (attr_mask & IB_QP_STATE && udata &&
++ if (attr_mask & IB_QP_STATE && udata && udata->outlen &&
+ dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) {
+ struct irdma_ucontext *ucontext;
+
+@@ -1797,6 +1831,7 @@ static int irdma_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ static int irdma_resize_cq(struct ib_cq *ibcq, int entries,
+ struct ib_udata *udata)
+ {
++#define IRDMA_RESIZE_CQ_MIN_REQ_LEN offsetofend(struct irdma_resize_cq_req, user_cq_buffer)
+ struct irdma_cq *iwcq = to_iwcq(ibcq);
+ struct irdma_sc_dev *dev = iwcq->sc_cq.dev;
+ struct irdma_cqp_request *cqp_request;
+@@ -1819,6 +1854,9 @@ static int irdma_resize_cq(struct ib_cq *ibcq, int entries,
+ IRDMA_FEATURE_CQ_RESIZE))
+ return -EOPNOTSUPP;
+
++ if (udata && udata->inlen < IRDMA_RESIZE_CQ_MIN_REQ_LEN)
++ return -EINVAL;
++
+ if (entries > rf->max_cqe)
+ return -EINVAL;
+
+@@ -1951,6 +1989,8 @@ static int irdma_create_cq(struct ib_cq *ibcq,
+ const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata)
+ {
++#define IRDMA_CREATE_CQ_MIN_REQ_LEN offsetofend(struct irdma_create_cq_req, user_cq_buf)
++#define IRDMA_CREATE_CQ_MIN_RESP_LEN offsetofend(struct irdma_create_cq_resp, cq_size)
+ struct ib_device *ibdev = ibcq->device;
+ struct irdma_device *iwdev = to_iwdev(ibdev);
+ struct irdma_pci_f *rf = iwdev->rf;
+@@ -1969,6 +2009,11 @@ static int irdma_create_cq(struct ib_cq *ibcq,
+ err_code = cq_validate_flags(attr->flags, dev->hw_attrs.uk_attrs.hw_rev);
+ if (err_code)
+ return err_code;
++
++ if (udata && (udata->inlen < IRDMA_CREATE_CQ_MIN_REQ_LEN ||
++ udata->outlen < IRDMA_CREATE_CQ_MIN_RESP_LEN))
++ return -EINVAL;
++
+ err_code = irdma_alloc_rsrc(rf, rf->allocated_cqs, rf->max_cq, &cq_num,
+ &rf->next_cq);
+ if (err_code)
+@@ -2738,6 +2783,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
+ u64 virt, int access,
+ struct ib_udata *udata)
+ {
++#define IRDMA_MEM_REG_MIN_REQ_LEN offsetofend(struct irdma_mem_reg_req, sq_pages)
+ struct irdma_device *iwdev = to_iwdev(pd->device);
+ struct irdma_ucontext *ucontext;
+ struct irdma_pble_alloc *palloc;
+@@ -2755,6 +2801,9 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
+ if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size)
+ return ERR_PTR(-EINVAL);
+
++ if (udata->inlen < IRDMA_MEM_REG_MIN_REQ_LEN)
++ return ERR_PTR(-EINVAL);
++
+ region = ib_umem_get(pd->device, start, len, access);
+
+ if (IS_ERR(region)) {
+@@ -3307,6 +3356,8 @@ static enum ib_wc_status irdma_flush_err_to_ib_wc_status(enum irdma_flush_opcode
+ return IB_WC_RETRY_EXC_ERR;
+ case FLUSH_MW_BIND_ERR:
+ return IB_WC_MW_BIND_ERR;
++ case FLUSH_REM_INV_REQ_ERR:
++ return IB_WC_REM_INV_REQ_ERR;
+ case FLUSH_FATAL_ERR:
+ default:
+ return IB_WC_FATAL_ERR;
+@@ -4288,12 +4339,16 @@ static int irdma_create_user_ah(struct ib_ah *ibah,
+ struct rdma_ah_init_attr *attr,
+ struct ib_udata *udata)
+ {
++#define IRDMA_CREATE_AH_MIN_RESP_LEN offsetofend(struct irdma_create_ah_resp, rsvd)
+ struct irdma_ah *ah = container_of(ibah, struct irdma_ah, ibah);
+ struct irdma_device *iwdev = to_iwdev(ibah->pd->device);
+ struct irdma_create_ah_resp uresp;
+ struct irdma_ah *parent_ah;
+ int err;
+
++ if (udata && udata->outlen < IRDMA_CREATE_AH_MIN_RESP_LEN)
++ return -EINVAL;
++
+ err = irdma_setup_ah(ibah, attr);
+ if (err)
+ return err;
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 04a67b4816086..a40bf58bcdd3a 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ goto err_mr;
+
+ mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+- mr->ibmr.length = length;
+ mr->ibmr.page_size = 1U << shift;
+
+ return &mr->ibmr;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index bb13164124fdb..aa4a2a9cb0d58 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1826,6 +1826,9 @@ static int set_ucontext_resp(struct ib_ucontext *uctx,
+ if (MLX5_CAP_GEN(dev->mdev, drain_sigerr))
+ resp->comp_mask |= MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_SQD2RTS;
+
++ resp->comp_mask |=
++ MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_MKEY_UPDATE_TAG;
++
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 84da5674e1abc..9151852f04a15 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -795,7 +795,8 @@ static bool mkey_is_eq(struct mlx5_ib_mkey *mmkey, u32 key)
+ {
+ if (!mmkey)
+ return false;
+- if (mmkey->type == MLX5_MKEY_MW)
++ if (mmkey->type == MLX5_MKEY_MW ||
++ mmkey->type == MLX5_MKEY_INDIRECT_DEVX)
+ return mlx5_base_mkey(mmkey->key) == mlx5_base_mkey(key);
+ return mmkey->key == key;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index fd706dc3009de..3df2db893dd3b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -794,7 +794,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ rxe_cleanup_task(&qp->comp.task);
+
+ /* flush out any receive wr's or pending requests */
+- __rxe_do_task(&qp->req.task);
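++	/* the request task may never have been initialized if qp setup failed early */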
++ if (qp->req.task.func)
++ __rxe_do_task(&qp->req.task);
++
+ if (qp->sq.queue) {
+ __rxe_do_task(&qp->comp.task);
+ __rxe_do_task(&qp->req.task);
+@@ -830,8 +832,10 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+
+ free_rd_atomic_resources(qp);
+
+- kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+- sock_release(qp->sk);
++ if (qp->sk) {
++ kernel_sock_shutdown(qp->sk, SHUT_RDWR);
++ sock_release(qp->sk);
++ }
+ }
+
+ /* called when the last reference to the qp is dropped */
+diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
+index dbd4971039c0c..d6dbf5a0058dc 100644
+--- a/drivers/infiniband/sw/rxe/rxe_queue.c
++++ b/drivers/infiniband/sw/rxe/rxe_queue.c
+@@ -112,23 +112,25 @@ static int resize_finish(struct rxe_queue *q, struct rxe_queue *new_q,
+ unsigned int num_elem)
+ {
+ enum queue_type type = q->type;
++ u32 new_prod;
+ u32 prod;
+ u32 cons;
+
+ if (!queue_empty(q, q->type) && (num_elem < queue_count(q, type)))
+ return -EINVAL;
+
+- prod = queue_get_producer(new_q, type);
++ new_prod = queue_get_producer(new_q, type);
++ prod = queue_get_producer(q, type);
+ cons = queue_get_consumer(q, type);
+
+- while (!queue_empty(q, type)) {
+- memcpy(queue_addr_from_index(new_q, prod),
++ while ((prod - cons) & q->index_mask) {
++ memcpy(queue_addr_from_index(new_q, new_prod),
+ queue_addr_from_index(q, cons), new_q->elem_size);
+- prod = queue_next_index(new_q, prod);
++ new_prod = queue_next_index(new_q, new_prod);
+ cons = queue_next_index(q, cons);
+ }
+
+- new_q->buf->producer_index = prod;
++ new_q->buf->producer_index = new_prod;
+ q->buf->consumer_index = cons;
+
+ /* update private index copies */
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index e38bf958ab485..2ef21a1cba814 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -787,10 +787,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ if (!skb)
+ return RESPST_ERR_RNR;
+
+- err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
+- payload, RXE_FROM_MR_OBJ);
+- if (err)
+- pr_err("Failed copying memory\n");
++ rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
++ payload, RXE_FROM_MR_OBJ);
+ if (mr)
+ rxe_put(mr);
+
+@@ -801,10 +799,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ }
+
+ err = rxe_xmit_packet(qp, &ack_pkt, skb);
+- if (err) {
+- pr_err("Failed sending RDMA reply.\n");
++ if (err)
+ return RESPST_ERR_RNR;
+- }
+
+ res->read.va += payload;
+ res->read.resid -= payload;
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index df03d84c6868a..2f3a9cda3850f 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -418,6 +418,7 @@ struct siw_qp {
+ struct ib_qp base_qp;
+ struct siw_device *sdev;
+ struct kref ref;
++ struct completion qp_free;
+ struct list_head devq;
+ int tx_cpu;
+ struct siw_qp_attrs attrs;
+diff --git a/drivers/infiniband/sw/siw/siw_qp.c b/drivers/infiniband/sw/siw/siw_qp.c
+index 7e01f2438afc5..e6f634971228e 100644
+--- a/drivers/infiniband/sw/siw/siw_qp.c
++++ b/drivers/infiniband/sw/siw/siw_qp.c
+@@ -1342,6 +1342,6 @@ void siw_free_qp(struct kref *ref)
+ vfree(qp->orq);
+
+ siw_put_tx_cpu(qp->tx_cpu);
+-
++ complete(&qp->qp_free);
+ atomic_dec(&sdev->num_qp);
+ }
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 875ea6f1b04a2..fd721cc19682e 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -961,27 +961,28 @@ out:
+ static int siw_get_trailer(struct siw_qp *qp, struct siw_rx_stream *srx)
+ {
+ struct sk_buff *skb = srx->skb;
++ int avail = min(srx->skb_new, srx->fpdu_part_rem);
+ u8 *tbuf = (u8 *)&srx->trailer.crc - srx->pad;
+ __wsum crc_in, crc_own = 0;
+
+ siw_dbg_qp(qp, "expected %d, available %d, pad %u\n",
+ srx->fpdu_part_rem, srx->skb_new, srx->pad);
+
+- if (srx->skb_new < srx->fpdu_part_rem)
+- return -EAGAIN;
+-
+- skb_copy_bits(skb, srx->skb_offset, tbuf, srx->fpdu_part_rem);
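++	/* copy out however much of the trailer this skb holds */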
++ skb_copy_bits(skb, srx->skb_offset, tbuf, avail);
+
+- if (srx->mpa_crc_hd && srx->pad)
+- crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
++ srx->skb_new -= avail;
++ srx->skb_offset += avail;
++ srx->skb_copied += avail;
++ srx->fpdu_part_rem -= avail;
+
+- srx->skb_new -= srx->fpdu_part_rem;
+- srx->skb_offset += srx->fpdu_part_rem;
+- srx->skb_copied += srx->fpdu_part_rem;
++ if (srx->fpdu_part_rem)
++ return -EAGAIN;
+
+ if (!srx->mpa_crc_hd)
+ return 0;
+
++ if (srx->pad)
++ crypto_shash_update(srx->mpa_crc_hd, tbuf, srx->pad);
+ /*
+ * CRC32 is computed, transmitted and received directly in NBO,
+ * so there's never a reason to convert byte order.
+@@ -1083,10 +1084,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
+ * completely received.
+ */
+ if (iwarp_pktinfo[opcode].hdr_len > sizeof(struct iwarp_ctrl_tagged)) {
+- bytes = iwarp_pktinfo[opcode].hdr_len - MIN_DDP_HDR;
++ int hdrlen = iwarp_pktinfo[opcode].hdr_len;
+
+- if (srx->skb_new < bytes)
+- return -EAGAIN;
++ bytes = min_t(int, hdrlen - MIN_DDP_HDR, srx->skb_new);
+
+ skb_copy_bits(skb, srx->skb_offset,
+ (char *)c_hdr + srx->fpdu_part_rcvd, bytes);
+@@ -1096,6 +1096,9 @@ static int siw_get_hdr(struct siw_rx_stream *srx)
+ srx->skb_new -= bytes;
+ srx->skb_offset += bytes;
+ srx->skb_copied += bytes;
++
++ if (srx->fpdu_part_rcvd < hdrlen)
++ return -EAGAIN;
+ }
+
+ /*
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 09316072b7890..598dab44536bc 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -480,6 +480,8 @@ int siw_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
+ list_add_tail(&qp->devq, &sdev->qp_list);
+ spin_unlock_irqrestore(&sdev->lock, flags);
+
++ init_completion(&qp->qp_free);
++
+ return 0;
+
+ err_out_xa:
+@@ -624,6 +626,7 @@ int siw_destroy_qp(struct ib_qp *base_qp, struct ib_udata *udata)
+ qp->scq = qp->rcq = NULL;
+
+ siw_qp_put(qp);
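++	/* wait until the last reference is dropped and siw_free_qp() has run */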
++ wait_for_completion(&qp->qp_free);
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 3d9c108d73ad8..c3fa65977b3ed 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -2790,7 +2790,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
+ static int srp_abort(struct scsi_cmnd *scmnd)
+ {
+ struct srp_target_port *target = host_to_target(scmnd->device->host);
+- struct srp_request *req = (struct srp_request *) scmnd->host_scribble;
++ struct srp_request *req = scsi_cmd_priv(scmnd);
+ u32 tag;
+ u16 ch_idx;
+ struct srp_rdma_ch *ch;
+@@ -2798,8 +2798,6 @@ static int srp_abort(struct scsi_cmnd *scmnd)
+
+ shost_printk(KERN_ERR, target->scsi_host, "SRP abort called\n");
+
+- if (!req)
+- return SUCCESS;
+ tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmnd));
+ ch_idx = blk_mq_unique_tag_to_hwq(tag);
+ if (WARN_ON_ONCE(ch_idx >= target->ch_count))
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 88817a3376ef0..e119ff8396c9e 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2839,6 +2839,26 @@ static int arm_smmu_dev_disable_feature(struct device *dev,
+ }
+ }
+
++/*
++ * HiSilicon PCIe tune and trace device can be used to trace TLP headers on the
++ * PCIe link and save the data to memory by DMA. The hardware is restricted to
++ * use identity mapping only.
++ */
++#define IS_HISI_PTT_DEVICE(pdev) ((pdev)->vendor == PCI_VENDOR_ID_HUAWEI && \
++ (pdev)->device == 0xa12e)
++
++static int arm_smmu_def_domain_type(struct device *dev)
++{
++ if (dev_is_pci(dev)) {
++ struct pci_dev *pdev = to_pci_dev(dev);
++
++ if (IS_HISI_PTT_DEVICE(pdev))
++ return IOMMU_DOMAIN_IDENTITY;
++ }
++
++ return 0;
++}
++
+ static struct iommu_ops arm_smmu_ops = {
+ .capable = arm_smmu_capable,
+ .domain_alloc = arm_smmu_domain_alloc,
+@@ -2856,6 +2876,7 @@ static struct iommu_ops arm_smmu_ops = {
+ .sva_unbind = arm_smmu_sva_unbind,
+ .sva_get_pasid = arm_smmu_sva_get_pasid,
+ .page_response = arm_smmu_page_response,
++ .def_domain_type = arm_smmu_def_domain_type,
+ .pgsize_bitmap = -1UL, /* Restricted during device attach */
+ .owner = THIS_MODULE,
+ .default_domain_ops = &(const struct iommu_domain_ops) {
+diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
+index a99afb5d9011c..259f65291d909 100644
+--- a/drivers/iommu/omap-iommu-debug.c
++++ b/drivers/iommu/omap-iommu-debug.c
+@@ -32,12 +32,12 @@ static inline bool is_omap_iommu_detached(struct omap_iommu *obj)
+ ssize_t bytes; \
+ const char *str = "%20s: %08x\n"; \
+ const int maxcol = 32; \
+- bytes = snprintf(p, maxcol, str, __stringify(name), \
++ if (len < maxcol) \
++ goto out; \
++ bytes = scnprintf(p, maxcol, str, __stringify(name), \
+ iommu_read_reg(obj, MMU_##name)); \
+ p += bytes; \
+ len -= bytes; \
+- if (len < maxcol) \
+- goto out; \
+ } while (0)
+
+ static ssize_t
+diff --git a/drivers/isdn/mISDN/l1oip.h b/drivers/isdn/mISDN/l1oip.h
+index 7ea10db20e3a6..48133d0228120 100644
+--- a/drivers/isdn/mISDN/l1oip.h
++++ b/drivers/isdn/mISDN/l1oip.h
+@@ -59,6 +59,7 @@ struct l1oip {
+ int bundle; /* bundle channels in one frm */
+ int codec; /* codec to use for transmis. */
+ int limit; /* limit number of bchannels */
++ bool shutdown; /* if card is released */
+
+ /* timer */
+ struct timer_list keep_tl;
+diff --git a/drivers/isdn/mISDN/l1oip_core.c b/drivers/isdn/mISDN/l1oip_core.c
+index 2c40412466e6f..a77195e378b7b 100644
+--- a/drivers/isdn/mISDN/l1oip_core.c
++++ b/drivers/isdn/mISDN/l1oip_core.c
+@@ -275,7 +275,7 @@ l1oip_socket_send(struct l1oip *hc, u8 localcodec, u8 channel, u32 chanmask,
+ p = frame;
+
+ /* restart timer */
+- if (time_before(hc->keep_tl.expires, jiffies + 5 * HZ))
++ if (time_before(hc->keep_tl.expires, jiffies + 5 * HZ) && !hc->shutdown)
+ mod_timer(&hc->keep_tl, jiffies + L1OIP_KEEPALIVE * HZ);
+ else
+ hc->keep_tl.expires = jiffies + L1OIP_KEEPALIVE * HZ;
+@@ -601,7 +601,9 @@ multiframe:
+ goto multiframe;
+
+ /* restart timer */
+- if (time_before(hc->timeout_tl.expires, jiffies + 5 * HZ) || !hc->timeout_on) {
++ if ((time_before(hc->timeout_tl.expires, jiffies + 5 * HZ) ||
++ !hc->timeout_on) &&
++ !hc->shutdown) {
+ hc->timeout_on = 1;
+ mod_timer(&hc->timeout_tl, jiffies + L1OIP_TIMEOUT * HZ);
+ } else /* only adjust timer */
+@@ -1232,11 +1234,10 @@ release_card(struct l1oip *hc)
+ {
+ int ch;
+
+- if (timer_pending(&hc->keep_tl))
+- del_timer(&hc->keep_tl);
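++	/* keep the timer handlers from re-arming the timers during teardown */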
++ hc->shutdown = true;
+
+- if (timer_pending(&hc->timeout_tl))
+- del_timer(&hc->timeout_tl);
++ del_timer_sync(&hc->keep_tl);
++ del_timer_sync(&hc->timeout_tl);
+
+ cancel_work_sync(&hc->workq);
+
+diff --git a/drivers/leds/flash/leds-lm3601x.c b/drivers/leds/flash/leds-lm3601x.c
+index d0e1d4814042e..3d12727482017 100644
+--- a/drivers/leds/flash/leds-lm3601x.c
++++ b/drivers/leds/flash/leds-lm3601x.c
+@@ -444,8 +444,6 @@ static int lm3601x_remove(struct i2c_client *client)
+ {
+ struct lm3601x_led *led = i2c_get_clientdata(client);
+
+- mutex_destroy(&led->lock);
+-
+ return regmap_update_bits(led->regmap, LM3601X_ENABLE_REG,
+ LM3601X_ENABLE_MASK,
+ LM3601X_MODE_STANDBY);
+diff --git a/drivers/mailbox/bcm-flexrm-mailbox.c b/drivers/mailbox/bcm-flexrm-mailbox.c
+index 22acb51531cb5..658e47b21933a 100644
+--- a/drivers/mailbox/bcm-flexrm-mailbox.c
++++ b/drivers/mailbox/bcm-flexrm-mailbox.c
+@@ -632,15 +632,15 @@ static int flexrm_spu_dma_map(struct device *dev, struct brcm_message *msg)
+
+ rc = dma_map_sg(dev, msg->spu.src, sg_nents(msg->spu.src),
+ DMA_TO_DEVICE);
+- if (rc < 0)
+- return rc;
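++	/* dma_map_sg() returns 0 on failure, never a negative errno */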
++ if (!rc)
++ return -EIO;
+
+ rc = dma_map_sg(dev, msg->spu.dst, sg_nents(msg->spu.dst),
+ DMA_FROM_DEVICE);
+- if (rc < 0) {
++ if (!rc) {
+ dma_unmap_sg(dev, msg->spu.src, sg_nents(msg->spu.src),
+ DMA_TO_DEVICE);
+- return rc;
++ return -EIO;
+ }
+
+ return 0;
+diff --git a/drivers/mailbox/mailbox-mpfs.c b/drivers/mailbox/mailbox-mpfs.c
+index 4e34854d12389..cfacb3f320a64 100644
+--- a/drivers/mailbox/mailbox-mpfs.c
++++ b/drivers/mailbox/mailbox-mpfs.c
+@@ -62,6 +62,7 @@ struct mpfs_mbox {
+ struct mbox_controller controller;
+ struct device *dev;
+ int irq;
++ void __iomem *ctrl_base;
+ void __iomem *mbox_base;
+ void __iomem *int_reg;
+ struct mbox_chan chans[1];
+@@ -73,7 +74,7 @@ static bool mpfs_mbox_busy(struct mpfs_mbox *mbox)
+ {
+ u32 status;
+
+- status = readl_relaxed(mbox->mbox_base + SERVICES_SR_OFFSET);
++ status = readl_relaxed(mbox->ctrl_base + SERVICES_SR_OFFSET);
+
+ return status & SCB_STATUS_BUSY_MASK;
+ }
+@@ -99,29 +100,27 @@ static int mpfs_mbox_send_data(struct mbox_chan *chan, void *data)
+
+ for (index = 0; index < (msg->cmd_data_size / 4); index++)
+ writel_relaxed(word_buf[index],
+- mbox->mbox_base + MAILBOX_REG_OFFSET + index * 0x4);
++ mbox->mbox_base + msg->mbox_offset + index * 0x4);
+ if (extra_bits) {
+ u8 i;
+ u8 byte_off = ALIGN_DOWN(msg->cmd_data_size, 4);
+ u8 *byte_buf = msg->cmd_data + byte_off;
+
+- val = readl_relaxed(mbox->mbox_base +
+- MAILBOX_REG_OFFSET + index * 0x4);
++ val = readl_relaxed(mbox->mbox_base + msg->mbox_offset + index * 0x4);
+
+ for (i = 0u; i < extra_bits; i++) {
+ val &= ~(0xffu << (i * 8u));
+ val |= (byte_buf[i] << (i * 8u));
+ }
+
+- writel_relaxed(val,
+- mbox->mbox_base + MAILBOX_REG_OFFSET + index * 0x4);
++ writel_relaxed(val, mbox->mbox_base + msg->mbox_offset + index * 0x4);
+ }
+ }
+
+ opt_sel = ((msg->mbox_offset << 7u) | (msg->cmd_opcode & 0x7fu));
+ tx_trigger = (opt_sel << SCB_CTRL_POS) & SCB_CTRL_MASK;
+ tx_trigger |= SCB_CTRL_REQ_MASK | SCB_STATUS_NOTIFY_MASK;
+- writel_relaxed(tx_trigger, mbox->mbox_base + SERVICES_CR_OFFSET);
++ writel_relaxed(tx_trigger, mbox->ctrl_base + SERVICES_CR_OFFSET);
+
+ return 0;
+ }
+@@ -141,7 +140,7 @@ static void mpfs_mbox_rx_data(struct mbox_chan *chan)
+ if (!mpfs_mbox_busy(mbox)) {
+ for (i = 0; i < num_words; i++) {
+ response->resp_msg[i] =
+- readl_relaxed(mbox->mbox_base + MAILBOX_REG_OFFSET
++ readl_relaxed(mbox->mbox_base
+ + mbox->resp_offset + i * 0x4);
+ }
+ }
+@@ -200,14 +199,18 @@ static int mpfs_mbox_probe(struct platform_device *pdev)
+ if (!mbox)
+ return -ENOMEM;
+
+-	mbox->mbox_base = devm_platform_get_and_ioremap_resource(pdev, 0, &regs);
+- if (IS_ERR(mbox->mbox_base))
+- return PTR_ERR(mbox->mbox_base);
++	mbox->ctrl_base = devm_platform_get_and_ioremap_resource(pdev, 0, &regs);
++ if (IS_ERR(mbox->ctrl_base))
++ return PTR_ERR(mbox->ctrl_base);
+
+	mbox->int_reg = devm_platform_get_and_ioremap_resource(pdev, 1, &regs);
+ if (IS_ERR(mbox->int_reg))
+ return PTR_ERR(mbox->int_reg);
+
++	mbox->mbox_base = devm_platform_get_and_ioremap_resource(pdev, 2, &regs);
++ if (IS_ERR(mbox->mbox_base)) // account for the old dt-binding w/ 2 regs
++ mbox->mbox_base = mbox->ctrl_base + MAILBOX_REG_OFFSET;
++
+ mbox->irq = platform_get_irq(pdev, 0);
+ if (mbox->irq < 0)
+ return mbox->irq;
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 3f0ff3aab6f23..9c227e4a84654 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -157,6 +157,53 @@ static void __update_writeback_rate(struct cached_dev *dc)
+ dc->writeback_rate_target = target;
+ }
+
++static bool idle_counter_exceeded(struct cache_set *c)
++{
++ int counter, dev_nr;
++
++ /*
++	 * If c->idle_counter overflows (idle for a really long time),
++	 * reset it to 0 and do not set the maximum rate this time, for
++	 * code simplicity.
++ */
++ counter = atomic_inc_return(&c->idle_counter);
++ if (counter <= 0) {
++ atomic_set(&c->idle_counter, 0);
++ return false;
++ }
++
++ dev_nr = atomic_read(&c->attached_dev_nr);
++ if (dev_nr == 0)
++ return false;
++
++ /*
++	 * c->idle_counter is increased by the writeback thread of every
++	 * attached backing device; to represent a rough time period, the
++	 * counter should be divided by dev_nr. Otherwise the idle time
++	 * could not grow as more backing devices are attached.
++	 * The following calculation is equivalent to checking
++	 * (counter / dev_nr) < (dev_nr * 6)
++ */
++ if (counter < (dev_nr * dev_nr * 6))
++ return false;
++
++ return true;
++}
++
++/*
++ * Idle_counter is increased every time update_writeback_rate() is
++ * called. If all backing devices attached to the same cache set have
++ * identical dc->writeback_rate_update_seconds values, it takes about 6
++ * rounds of update_writeback_rate() on each backing device before
++ * c->at_max_writeback_rate is set to 1, and then the maximum writeback
++ * rate is set for each dc->writeback_rate.rate.
++ * In order to avoid the extra locking cost of counting the exact number
++ * of dirty cached devices, c->attached_dev_nr is used to calculate the
++ * idle threshold. It might be bigger if not all cached devices are in
++ * writeback mode, but it still works well with limited extra rounds of
++ * update_writeback_rate().
++ */
+ static bool set_at_max_writeback_rate(struct cache_set *c,
+ struct cached_dev *dc)
+ {
+@@ -167,21 +214,8 @@ static bool set_at_max_writeback_rate(struct cache_set *c,
+ /* Don't set max writeback rate if gc is running */
+ if (!c->gc_mark_valid)
+ return false;
+- /*
+- * Idle_counter is increased everytime when update_writeback_rate() is
+- * called. If all backing devices attached to the same cache set have
+- * identical dc->writeback_rate_update_seconds values, it is about 6
+- * rounds of update_writeback_rate() on each backing device before
+- * c->at_max_writeback_rate is set to 1, and then max wrteback rate set
+- * to each dc->writeback_rate.rate.
+- * In order to avoid extra locking cost for counting exact dirty cached
+- * devices number, c->attached_dev_nr is used to calculate the idle
+- * throushold. It might be bigger if not all cached device are in write-
+- * back mode, but it still works well with limited extra rounds of
+- * update_writeback_rate().
+- */
+- if (atomic_inc_return(&c->idle_counter) <
+- atomic_read(&c->attached_dev_nr) * 6)
++
++ if (!idle_counter_exceeded(c))
+ return false;
+
+ if (atomic_read(&c->at_max_writeback_rate) != 1)
+@@ -195,13 +229,10 @@ static bool set_at_max_writeback_rate(struct cache_set *c,
+ dc->writeback_rate_change = 0;
+
+ /*
+- * Check c->idle_counter and c->at_max_writeback_rate agagain in case
+- * new I/O arrives during before set_at_max_writeback_rate() returns.
+- * Then the writeback rate is set to 1, and its new value should be
+- * decided via __update_writeback_rate().
++ * In case new I/O arrives before
++ * set_at_max_writeback_rate() returns.
+ */
+- if ((atomic_read(&c->idle_counter) <
+- atomic_read(&c->attached_dev_nr) * 6) ||
++ if (!idle_counter_exceeded(c) ||
+ !atomic_read(&c->at_max_writeback_rate))
+ return false;
+
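The bcache hunk factors two open-coded comparisons into idle_counter_exceeded(), which also resets the counter when the atomic increment wraps negative. The arithmetic, reduced to a standalone sketch with plain ints in place of the kernel's atomics:

#include <stdbool.h>
#include <stdio.h>

/* (counter / dev_nr) < (dev_nr * 6) is rewritten without the division
 * as counter < dev_nr * dev_nr * 6; a wrapped (negative) counter is
 * reset instead of used. */
static bool idle_exceeded(int *counter, int dev_nr)
{
	(*counter)++;
	if (*counter <= 0) {		/* overflowed: start over */
		*counter = 0;
		return false;
	}
	if (dev_nr == 0)
		return false;
	return *counter >= dev_nr * dev_nr * 6;
}

int main(void)
{
	int counter = -5;	/* simulate a counter that already wrapped */
	printf("%d\n", idle_exceeded(&counter, 2));	/* 0: reset path */
	counter = 23;
	printf("%d\n", idle_exceeded(&counter, 2));	/* 1: 24 >= 2*2*6 */
	return 0;
}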
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 78addfe4a0c92..857c49399c28e 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -47,7 +47,7 @@ static void dump_zones(struct mddev *mddev)
+ int len = 0;
+
+ for (k = 0; k < conf->strip_zone[j].nb_dev; k++)
+- len += snprintf(line+len, 200-len, "%s%pg", k?"/":"",
++ len += scnprintf(line+len, 200-len, "%s%pg", k?"/":"",
+ conf->devlist[j * raid_disks + k]->bdev);
+ pr_debug("md: zone%d=[%s]\n", j, line);
+
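The raid0 one-liner matters because the return values differ: snprintf() reports the length the output would have had, so in a `len +=` loop a truncated write pushes len past the buffer (and makes the next `200-len` size argument wrap as a size_t), while scnprintf() reports the bytes actually stored. A userspace demonstration with a minimal stand-in for the kernel's scnprintf():

#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

/* minimal stand-in for the kernel's scnprintf() */
static int scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int ret;

	va_start(args, fmt);
	ret = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (ret >= (int)size)		/* truncated: report stored bytes */
		ret = size ? (int)size - 1 : 0;
	return ret;
}

int main(void)
{
	char line[8];
	int len = 0;

	len += snprintf(line, sizeof(line), "0123456789");
	printf("snprintf: len = %d, past the 8-byte buffer\n", len);	/* 10 */

	len = scnprintf(line, sizeof(line), "0123456789");
	printf("scnprintf: len = %d, bytes actually stored\n", len);	/* 7 */
	return 0;
}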
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 1c1310d539f2e..cdb5ae435b78b 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,6 +36,7 @@
+ */
+
+ #include <linux/blkdev.h>
++#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -3951,7 +3952,7 @@ static void handle_stripe_fill(struct stripe_head *sh,
+ * back cache (prexor with orig_page, and then xor with
+ * page) in the read path
+ */
+- if (s->injournal && s->failed) {
++ if (s->to_read && s->injournal && s->failed) {
+ if (test_bit(STRIPE_R5C_CACHING, &sh->state))
+ r5c_make_stripe_write_out(sh);
+ goto out;
+@@ -5446,7 +5447,6 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
+
+ if (is_badblock(rdev, sector, bio_sectors(raid_bio), &first_bad,
+ &bad_sectors)) {
+- bio_put(raid_bio);
+ rdev_dec_pending(rdev, mddev);
+ return 0;
+ }
+@@ -6553,7 +6553,18 @@ static void raid5d(struct md_thread *thread)
+ spin_unlock_irq(&conf->device_lock);
+ md_check_recovery(mddev);
+ spin_lock_irq(&conf->device_lock);
++
++ /*
++ * Waiting on MD_SB_CHANGE_PENDING below may deadlock,
++ * since md_check_recovery() is needed to clear
++ * the flag when using mdmon.
++ */
++ continue;
+ }
++
++ wait_event_lock_irq(mddev->sb_wait,
++ !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
++ conf->device_lock);
+ }
+ pr_debug("%d stripes handled\n", handled);
+
+diff --git a/drivers/media/pci/cx88/cx88-vbi.c b/drivers/media/pci/cx88/cx88-vbi.c
+index a075788c64d45..469aeaa725ad9 100644
+--- a/drivers/media/pci/cx88/cx88-vbi.c
++++ b/drivers/media/pci/cx88/cx88-vbi.c
+@@ -144,11 +144,10 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ return -EINVAL;
+ vb2_set_plane_payload(vb, 0, size);
+
+- cx88_risc_buffer(dev->pci, &buf->risc, sgt->sgl,
+- 0, VBI_LINE_LENGTH * lines,
+- VBI_LINE_LENGTH, 0,
+- lines);
+- return 0;
++ return cx88_risc_buffer(dev->pci, &buf->risc, sgt->sgl,
++ 0, VBI_LINE_LENGTH * lines,
++ VBI_LINE_LENGTH, 0,
++ lines);
+ }
+
+ static void buffer_finish(struct vb2_buffer *vb)
+diff --git a/drivers/media/pci/cx88/cx88-video.c b/drivers/media/pci/cx88/cx88-video.c
+index d3729be892529..b509c2a03852b 100644
+--- a/drivers/media/pci/cx88/cx88-video.c
++++ b/drivers/media/pci/cx88/cx88-video.c
+@@ -431,6 +431,7 @@ static int queue_setup(struct vb2_queue *q,
+
+ static int buffer_prepare(struct vb2_buffer *vb)
+ {
++ int ret;
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct cx8800_dev *dev = vb->vb2_queue->drv_priv;
+ struct cx88_core *core = dev->core;
+@@ -445,35 +446,35 @@ static int buffer_prepare(struct vb2_buffer *vb)
+
+ switch (core->field) {
+ case V4L2_FIELD_TOP:
+- cx88_risc_buffer(dev->pci, &buf->risc,
+- sgt->sgl, 0, UNSET,
+- buf->bpl, 0, core->height);
++ ret = cx88_risc_buffer(dev->pci, &buf->risc,
++ sgt->sgl, 0, UNSET,
++ buf->bpl, 0, core->height);
+ break;
+ case V4L2_FIELD_BOTTOM:
+- cx88_risc_buffer(dev->pci, &buf->risc,
+- sgt->sgl, UNSET, 0,
+- buf->bpl, 0, core->height);
++ ret = cx88_risc_buffer(dev->pci, &buf->risc,
++ sgt->sgl, UNSET, 0,
++ buf->bpl, 0, core->height);
+ break;
+ case V4L2_FIELD_SEQ_TB:
+- cx88_risc_buffer(dev->pci, &buf->risc,
+- sgt->sgl,
+- 0, buf->bpl * (core->height >> 1),
+- buf->bpl, 0,
+- core->height >> 1);
++ ret = cx88_risc_buffer(dev->pci, &buf->risc,
++ sgt->sgl,
++ 0, buf->bpl * (core->height >> 1),
++ buf->bpl, 0,
++ core->height >> 1);
+ break;
+ case V4L2_FIELD_SEQ_BT:
+- cx88_risc_buffer(dev->pci, &buf->risc,
+- sgt->sgl,
+- buf->bpl * (core->height >> 1), 0,
+- buf->bpl, 0,
+- core->height >> 1);
++ ret = cx88_risc_buffer(dev->pci, &buf->risc,
++ sgt->sgl,
++ buf->bpl * (core->height >> 1), 0,
++ buf->bpl, 0,
++ core->height >> 1);
+ break;
+ case V4L2_FIELD_INTERLACED:
+ default:
+- cx88_risc_buffer(dev->pci, &buf->risc,
+- sgt->sgl, 0, buf->bpl,
+- buf->bpl, buf->bpl,
+- core->height >> 1);
++ ret = cx88_risc_buffer(dev->pci, &buf->risc,
++ sgt->sgl, 0, buf->bpl,
++ buf->bpl, buf->bpl,
++ core->height >> 1);
+ break;
+ }
+ dprintk(2,
+@@ -481,7 +482,7 @@ static int buffer_prepare(struct vb2_buffer *vb)
+ buf, buf->vb.vb2_buf.index, __func__,
+ core->width, core->height, dev->fmt->depth, dev->fmt->fourcc,
+ (unsigned long)buf->risc.dma);
+- return 0;
++ return ret;
+ }
+
+ static void buffer_finish(struct vb2_buffer *vb)
+diff --git a/drivers/media/platform/amlogic/meson-ge2d/ge2d.c b/drivers/media/platform/amlogic/meson-ge2d/ge2d.c
+index 5e7b319f300df..142d421a8d769 100644
+--- a/drivers/media/platform/amlogic/meson-ge2d/ge2d.c
++++ b/drivers/media/platform/amlogic/meson-ge2d/ge2d.c
+@@ -1030,7 +1030,6 @@ static int ge2d_remove(struct platform_device *pdev)
+
+ video_unregister_device(ge2d->vfd);
+ v4l2_m2m_release(ge2d->m2m_dev);
+- video_device_release(ge2d->vfd);
+ v4l2_device_unregister(&ge2d->v4l2_dev);
+ clk_disable_unprepare(ge2d->clk);
+
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 44dbca0fe17f1..6d6842ff12e2a 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -808,14 +808,6 @@ static void vdec_init_fmt(struct vpu_inst *inst)
+ inst->cap_format.field = V4L2_FIELD_NONE;
+ else
+ inst->cap_format.field = V4L2_FIELD_SEQ_TB;
+- if (vdec->codec_info.color_primaries == V4L2_COLORSPACE_DEFAULT)
+- vdec->codec_info.color_primaries = V4L2_COLORSPACE_REC709;
+- if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
+- vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
+- if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
+- vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
+- if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
+- vdec->codec_info.full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ }
+
+ static void vdec_init_crop(struct vpu_inst *inst)
+@@ -1556,6 +1548,14 @@ static int vdec_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i
+ vdec->codec_info.frame_rate.numerator,
+ vdec->codec_info.frame_rate.denominator);
+ break;
++ case 9:
++ num = scnprintf(str, size, "colorspace: %d, %d, %d, %d (%d)\n",
++ vdec->codec_info.color_primaries,
++ vdec->codec_info.transfer_chars,
++ vdec->codec_info.matrix_coeffs,
++ vdec->codec_info.full_range,
++ vdec->codec_info.vui_present);
++ break;
+ default:
+ break;
+ }
+diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
+index 43d61d82f58c2..0f21a181c1de9 100644
+--- a/drivers/media/platform/amphion/venc.c
++++ b/drivers/media/platform/amphion/venc.c
+@@ -644,7 +644,7 @@ static int venc_ctrl_init(struct vpu_inst *inst)
+ BITRATE_DEFAULT_PEAK);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+- V4L2_CID_MPEG_VIDEO_GOP_SIZE, 0, (1 << 16) - 1, 1, 30);
++ V4L2_CID_MPEG_VIDEO_GOP_SIZE, 1, 8000, 1, 30);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_B_FRAMES, 0, 4, 1, 0);
+diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
+index f914de6ed81e9..beac0309ca8d9 100644
+--- a/drivers/media/platform/amphion/vpu.h
++++ b/drivers/media/platform/amphion/vpu.h
+@@ -119,7 +119,6 @@ struct vpu_mbox {
+ enum vpu_core_state {
+ VPU_CORE_DEINIT = 0,
+ VPU_CORE_ACTIVE,
+- VPU_CORE_SNAPSHOT,
+ VPU_CORE_HANG
+ };
+
+diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
+index 51a764713159a..21a416b8e483d 100644
+--- a/drivers/media/platform/amphion/vpu_core.c
++++ b/drivers/media/platform/amphion/vpu_core.c
+@@ -89,7 +89,7 @@ static int vpu_core_boot_done(struct vpu_core *core)
+ core->supported_instance_count = min(core->supported_instance_count, count);
+ }
+ core->fw_version = fw_version;
+- core->state = VPU_CORE_ACTIVE;
++ vpu_core_set_state(core, VPU_CORE_ACTIVE);
+
+ return 0;
+ }
+@@ -172,10 +172,26 @@ int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
+ return __vpu_alloc_dma(core->dev, buf);
+ }
+
+-static void vpu_core_check_hang(struct vpu_core *core)
++void vpu_core_set_state(struct vpu_core *core, enum vpu_core_state state)
+ {
+- if (core->hang_mask)
+- core->state = VPU_CORE_HANG;
++ if (state != core->state)
++ vpu_trace(core->dev, "vpu core state change from %d to %d\n", core->state, state);
++ core->state = state;
++ if (core->state == VPU_CORE_DEINIT)
++ core->hang_mask = 0;
++}
++
++static void vpu_core_update_state(struct vpu_core *core)
++{
++ if (!vpu_iface_get_power_state(core)) {
++ if (core->request_count)
++ vpu_core_set_state(core, VPU_CORE_HANG);
++ else
++ vpu_core_set_state(core, VPU_CORE_DEINIT);
++
++ } else if (core->state == VPU_CORE_ACTIVE && core->hang_mask) {
++ vpu_core_set_state(core, VPU_CORE_HANG);
++ }
+ }
+
+ static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type)
+@@ -188,11 +204,13 @@ static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 ty
+ dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n", c->instance_mask, c->state);
+ if (c->type != type)
+ continue;
++ mutex_lock(&c->lock);
++ vpu_core_update_state(c);
++ mutex_unlock(&c->lock);
+ if (c->state == VPU_CORE_DEINIT) {
+ core = c;
+ break;
+ }
+- vpu_core_check_hang(c);
+ if (c->state != VPU_CORE_ACTIVE)
+ continue;
+ if (c->request_count < request_count) {
+@@ -412,6 +430,12 @@ int vpu_inst_register(struct vpu_inst *inst)
+ }
+
+ mutex_lock(&core->lock);
++ if (core->state != VPU_CORE_ACTIVE) {
++ dev_err(core->dev, "vpu core is not active, state = %d\n", core->state);
++ ret = -EINVAL;
++ goto exit;
++ }
++
+ if (inst->id >= 0 && inst->id < core->supported_instance_count)
+ goto exit;
+
+@@ -453,7 +477,7 @@ int vpu_inst_unregister(struct vpu_inst *inst)
+ vpu_core_release_instance(core, inst->id);
+ inst->id = VPU_INST_NULL_ID;
+ }
+- vpu_core_check_hang(core);
++ vpu_core_update_state(core);
+ if (core->state == VPU_CORE_HANG && !core->instance_mask) {
+ int err;
+
+@@ -462,7 +486,7 @@ int vpu_inst_unregister(struct vpu_inst *inst)
+ err = vpu_core_sw_reset(core);
+ mutex_lock(&core->lock);
+ if (!err) {
+- core->state = VPU_CORE_ACTIVE;
++ vpu_core_set_state(core, VPU_CORE_ACTIVE);
+ core->hang_mask = 0;
+ }
+ }
+@@ -612,7 +636,7 @@ static int vpu_core_probe(struct platform_device *pdev)
+ mutex_init(&core->cmd_lock);
+ init_completion(&core->cmp);
+ init_waitqueue_head(&core->ack_wq);
+- core->state = VPU_CORE_DEINIT;
++ vpu_core_set_state(core, VPU_CORE_DEINIT);
+
+ core->res = of_device_get_match_data(dev);
+ if (!core->res)
+@@ -761,33 +785,18 @@ static int __maybe_unused vpu_core_resume(struct device *dev)
+ mutex_lock(&core->lock);
+ pm_runtime_resume_and_get(dev);
+ vpu_core_get_vpu(core);
+- if (core->state != VPU_CORE_SNAPSHOT)
+- goto exit;
+
+- if (!vpu_iface_get_power_state(core)) {
+- if (!list_empty(&core->instances)) {
++ if (core->request_count) {
++ if (!vpu_iface_get_power_state(core))
+ ret = vpu_core_boot(core, false);
+- if (ret) {
+- dev_err(core->dev, "%s boot fail\n", __func__);
+- core->state = VPU_CORE_DEINIT;
+- goto exit;
+- }
+- } else {
+- core->state = VPU_CORE_DEINIT;
+- }
+- } else {
+- if (!list_empty(&core->instances)) {
++ else
+ ret = vpu_core_sw_reset(core);
+- if (ret) {
+- dev_err(core->dev, "%s sw_reset fail\n", __func__);
+- core->state = VPU_CORE_HANG;
+- goto exit;
+- }
++ if (ret) {
++ dev_err(core->dev, "resume fail\n");
++ vpu_core_set_state(core, VPU_CORE_HANG);
+ }
+- core->state = VPU_CORE_ACTIVE;
+ }
+-
+-exit:
++ vpu_core_update_state(core);
+ pm_runtime_put_sync(dev);
+ mutex_unlock(&core->lock);
+
+@@ -801,18 +810,11 @@ static int __maybe_unused vpu_core_suspend(struct device *dev)
+ int ret = 0;
+
+ mutex_lock(&core->lock);
+- if (core->state == VPU_CORE_ACTIVE) {
+- if (!list_empty(&core->instances)) {
+- ret = vpu_core_snapshot(core);
+- if (ret) {
+- mutex_unlock(&core->lock);
+- return ret;
+- }
+- }
+-
+- core->state = VPU_CORE_SNAPSHOT;
+- }
++ if (core->request_count)
++ ret = vpu_core_snapshot(core);
+ mutex_unlock(&core->lock);
++ if (ret)
++ return ret;
+
+ vpu_core_cancel_work(core);
+
+diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
+index 00a662997da4f..65b562642603a 100644
+--- a/drivers/media/platform/amphion/vpu_core.h
++++ b/drivers/media/platform/amphion/vpu_core.h
+@@ -11,5 +11,6 @@ u32 csr_readl(struct vpu_core *core, u32 reg);
+ int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
+ void vpu_free_dma(struct vpu_buffer *buf);
+ struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
++void vpu_core_set_state(struct vpu_core *core, enum vpu_core_state state);
+
+ #endif
+diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
+index da62bd718fb84..ad41060ce46e5 100644
+--- a/drivers/media/platform/amphion/vpu_dbg.c
++++ b/drivers/media/platform/amphion/vpu_dbg.c
+@@ -15,6 +15,7 @@
+ #include <linux/debugfs.h>
+ #include "vpu.h"
+ #include "vpu_defs.h"
++#include "vpu_core.h"
+ #include "vpu_helpers.h"
+ #include "vpu_cmds.h"
+ #include "vpu_rpc.h"
+@@ -233,6 +234,10 @@ static int vpu_dbg_core(struct seq_file *s, void *data)
+ if (seq_write(s, str, num))
+ return 0;
+
++ num = scnprintf(str, sizeof(str), "power %s\n",
++ vpu_iface_get_power_state(core) ? "on" : "off");
++ if (seq_write(s, str, num))
++ return 0;
+ num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
+ if (seq_write(s, str, num))
+ return 0;
+@@ -346,10 +351,10 @@ static ssize_t vpu_dbg_core_write(struct file *file,
+
+ pm_runtime_resume_and_get(core->dev);
+ mutex_lock(&core->lock);
+- if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
++ if (vpu_iface_get_power_state(core) && !core->request_count) {
+ dev_info(core->dev, "reset\n");
+ if (!vpu_core_sw_reset(core)) {
+- core->state = VPU_CORE_ACTIVE;
++ vpu_core_set_state(core, VPU_CORE_ACTIVE);
+ core->hang_mask = 0;
+ }
+ }
+diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
+index 542bbe361bd87..10553dd93c29e 100644
+--- a/drivers/media/platform/amphion/vpu_malone.c
++++ b/drivers/media/platform/amphion/vpu_malone.c
+@@ -1277,7 +1277,7 @@ static int vpu_malone_insert_scode_vc1_g_pic(struct malone_scode_t *scode)
+ vbuf = to_vb2_v4l2_buffer(scode->vb);
+ data = vb2_plane_vaddr(scode->vb, 0);
+
+- if (vbuf->sequence == 0 || vpu_vb_is_codecconfig(vbuf))
++ if (scode->inst->total_input_count == 0 || vpu_vb_is_codecconfig(vbuf))
+ return 0;
+ if (MALONE_VC1_CONTAIN_NAL(*data))
+ return 0;
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index bc5b0a0168ec0..6aa73f1cde188 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1411,7 +1411,6 @@ static int mtk_jpeg_remove(struct platform_device *pdev)
+
+ pm_runtime_disable(&pdev->dev);
+ video_unregister_device(jpeg->vdev);
+- video_device_release(jpeg->vdev);
+ v4l2_m2m_release(jpeg->m2m_dev);
+ v4l2_device_unregister(&jpeg->v4l2_dev);
+
+diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-is.c b/drivers/media/platform/samsung/exynos4-is/fimc-is.c
+index e3072d69c49fa..a7704ff069d6c 100644
+--- a/drivers/media/platform/samsung/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/samsung/exynos4-is/fimc-is.c
+@@ -213,6 +213,7 @@ static int fimc_is_register_subdevs(struct fimc_is *is)
+
+ if (ret < 0 || index >= FIMC_IS_SENSORS_NUM) {
+ of_node_put(child);
++ of_node_put(i2c_bus);
+ return ret;
+ }
+ index++;
+diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
+index 761341934925e..f85d1eebaface 100644
+--- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
+@@ -1399,6 +1399,7 @@ static int s5p_mfc_probe(struct platform_device *pdev)
+ /* Deinit MFC if probe had failed */
+ err_enc_reg:
+ video_unregister_device(dev->vfd_dec);
++ dev->vfd_dec = NULL;
+ err_dec_reg:
+ video_device_release(dev->vfd_enc);
+ err_enc_alloc:
+@@ -1444,8 +1445,6 @@ static int s5p_mfc_remove(struct platform_device *pdev)
+
+ video_unregister_device(dev->vfd_enc);
+ video_unregister_device(dev->vfd_dec);
+- video_device_release(dev->vfd_enc);
+- video_device_release(dev->vfd_dec);
+ v4l2_device_unregister(&dev->v4l2_dev);
+ s5p_mfc_unconfigure_dma_memory(dev);
+
+diff --git a/drivers/media/platform/xilinx/xilinx-vipp.c b/drivers/media/platform/xilinx/xilinx-vipp.c
+index f34f8b077e03c..0a16c218a50a7 100644
+--- a/drivers/media/platform/xilinx/xilinx-vipp.c
++++ b/drivers/media/platform/xilinx/xilinx-vipp.c
+@@ -471,7 +471,7 @@ static int xvip_graph_dma_init(struct xvip_composite_device *xdev)
+ {
+ struct device_node *ports;
+ struct device_node *port;
+- int ret;
++ int ret = 0;
+
+ ports = of_get_child_by_name(xdev->dev->of_node, "ports");
+ if (ports == NULL) {
+@@ -481,13 +481,14 @@ static int xvip_graph_dma_init(struct xvip_composite_device *xdev)
+
+ for_each_child_of_node(ports, port) {
+ ret = xvip_graph_dma_init_one(xdev, port);
+- if (ret < 0) {
++ if (ret) {
+ of_node_put(port);
+- return ret;
++ break;
+ }
+ }
+
+- return 0;
++ of_node_put(ports);
++ return ret;
+ }
+
+ static void xvip_graph_cleanup(struct xvip_composite_device *xdev)
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 0e78233fc8a00..44071040d7643 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -963,36 +963,56 @@ static s32 __uvc_ctrl_get_value(struct uvc_control_mapping *mapping,
+ return value;
+ }
+
+-static int __uvc_ctrl_get(struct uvc_video_chain *chain,
+- struct uvc_control *ctrl, struct uvc_control_mapping *mapping,
+- s32 *value)
++static int __uvc_ctrl_load_cur(struct uvc_video_chain *chain,
++ struct uvc_control *ctrl)
+ {
++ u8 *data;
+ int ret;
+
+- if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0)
+- return -EACCES;
++ if (ctrl->loaded)
++ return 0;
+
+- if (!ctrl->loaded) {
+- if (ctrl->entity->get_cur) {
+- ret = ctrl->entity->get_cur(chain->dev,
+- ctrl->entity,
+- ctrl->info.selector,
+- uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+- ctrl->info.size);
+- } else {
+- ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR,
+- ctrl->entity->id,
+- chain->dev->intfnum,
+- ctrl->info.selector,
+- uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+- ctrl->info.size);
+- }
+- if (ret < 0)
+- return ret;
++ data = uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT);
+
++ if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0) {
++ memset(data, 0, ctrl->info.size);
+ ctrl->loaded = 1;
++
++ return 0;
+ }
+
++ if (ctrl->entity->get_cur)
++ ret = ctrl->entity->get_cur(chain->dev, ctrl->entity,
++ ctrl->info.selector, data,
++ ctrl->info.size);
++ else
++ ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR,
++ ctrl->entity->id, chain->dev->intfnum,
++ ctrl->info.selector, data,
++ ctrl->info.size);
++
++ if (ret < 0)
++ return ret;
++
++ ctrl->loaded = 1;
++
++ return ret;
++}
++
++static int __uvc_ctrl_get(struct uvc_video_chain *chain,
++ struct uvc_control *ctrl,
++ struct uvc_control_mapping *mapping,
++ s32 *value)
++{
++ int ret;
++
++ if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0)
++ return -EACCES;
++
++ ret = __uvc_ctrl_load_cur(chain, ctrl);
++ if (ret < 0)
++ return ret;
++
+ *value = __uvc_ctrl_get_value(mapping,
+ uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
+
+@@ -1783,21 +1803,10 @@ int uvc_ctrl_set(struct uvc_fh *handle,
+ * needs to be loaded from the device to perform the read-modify-write
+ * operation.
+ */
+- if (!ctrl->loaded && (ctrl->info.size * 8) != mapping->size) {
+- if ((ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) == 0) {
+- memset(uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+- 0, ctrl->info.size);
+- } else {
+- ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR,
+- ctrl->entity->id, chain->dev->intfnum,
+- ctrl->info.selector,
+- uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+- ctrl->info.size);
+- if (ret < 0)
+- return ret;
+- }
+-
+- ctrl->loaded = 1;
++ if ((ctrl->info.size * 8) != mapping->size) {
++ ret = __uvc_ctrl_load_cur(chain, ctrl);
++ if (ret < 0)
++ return ret;
+ }
+
+ /* Backup the current value in case we need to rollback later. */
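The uvc_ctrl refactor pulls two duplicated copies of "fetch the current value once" into a __uvc_ctrl_load_cur() helper shared by the get path and the read-modify-write set path; controls without the GET_CUR capability are zero-filled and still marked loaded. The shape of that lazy-load helper, as a standalone sketch with hypothetical names:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct ctrl {
	bool loaded;
	bool readable;		/* stands in for UVC_CTRL_FLAG_GET_CUR */
	unsigned char cur[4];
};

/* stands in for the USB query (get_cur/uvc_query_ctrl) */
static int query_hw(struct ctrl *c)
{
	memset(c->cur, 0xab, sizeof(c->cur));
	return 0;
}

static int load_cur(struct ctrl *c)
{
	if (c->loaded)
		return 0;
	if (!c->readable) {	/* nothing to fetch: zero-fill once */
		memset(c->cur, 0, sizeof(c->cur));
		c->loaded = true;
		return 0;
	}
	if (query_hw(c) < 0)
		return -1;
	c->loaded = true;
	return 0;
}

int main(void)
{
	struct ctrl c = { .loaded = false, .readable = true };

	load_cur(&c);	/* queries the "hardware" */
	load_cur(&c);	/* no-op: already loaded */
	printf("cur[0] = 0x%x\n", c.cur[0]);
	return 0;
}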
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 6c86faecbea21..28ee45e879ff1 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1538,10 +1538,6 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ if (IS_ERR_OR_NULL(gpio_privacy))
+ return PTR_ERR_OR_ZERO(gpio_privacy);
+
+- unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
+- if (!unit)
+- return -ENOMEM;
+-
+ irq = gpiod_to_irq(gpio_privacy);
+ if (irq < 0) {
+ if (irq != EPROBE_DEFER)
+@@ -1550,6 +1546,10 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ return irq;
+ }
+
++ unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
++ if (!unit)
++ return -ENOMEM;
++
+ unit->gpio.gpio_privacy = gpio_privacy;
+ unit->gpio.irq = irq;
+ unit->gpio.bControlSize = 1;
+diff --git a/drivers/memory/of_memory.c b/drivers/memory/of_memory.c
+index dbdf87bc0b78e..fcd20d85d3857 100644
+--- a/drivers/memory/of_memory.c
++++ b/drivers/memory/of_memory.c
+@@ -134,6 +134,7 @@ const struct lpddr2_timings *of_get_ddr_timings(struct device_node *np_ddr,
+ for_each_child_of_node(np_ddr, np_tim) {
+ if (of_device_is_compatible(np_tim, tim_compat)) {
+ if (of_do_get_timings(np_tim, &timings[i])) {
++ of_node_put(np_tim);
+ devm_kfree(dev, timings);
+ goto default_timings;
+ }
+@@ -284,6 +285,7 @@ const struct lpddr3_timings
+ if (of_device_is_compatible(np_tim, tim_compat)) {
+ if (of_lpddr3_do_get_timings(np_tim, &timings[i])) {
+ devm_kfree(dev, timings);
++ of_node_put(np_tim);
+ goto default_timings;
+ }
+ i++;
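This hunk and several others in the series (pl353-smc.c, fimc-is.c, xilinx-vipp.c) fix the same leak class: OF accessors and the for_each_child_of_node() iterator hold a reference on the node, and that reference is only dropped automatically when iteration advances, so any early break, goto or return must call of_node_put() by hand. A hedged kernel-context sketch of the rule, with an illustrative compatible string:

#include <linux/errno.h>
#include <linux/of.h>

static int find_special_child(struct device_node *parent)
{
	struct device_node *child;

	for_each_child_of_node(parent, child) {
		if (of_device_is_compatible(child, "vendor,example")) {
			/* leaving the loop early: drop the iterator's ref */
			of_node_put(child);
			return 0;
		}
		/* advancing the loop releases the previous child's ref */
	}
	return -ENODEV;
}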
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index f84b98278745c..d39ee7d06665b 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -122,6 +122,7 @@ static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id)
+ }
+
+ of_platform_device_create(child, NULL, &adev->dev);
++ of_node_put(child);
+
+ return 0;
+
+diff --git a/drivers/mfd/da9062-core.c b/drivers/mfd/da9062-core.c
+index 2774b2cbaea6d..c2acdbcd5d6b6 100644
+--- a/drivers/mfd/da9062-core.c
++++ b/drivers/mfd/da9062-core.c
+@@ -453,6 +453,7 @@ static const struct regmap_range da9061_aa_writeable_ranges[] = {
+ regmap_reg_range(DA9062AA_VBUCK1_B, DA9062AA_VBUCK4_B),
+ regmap_reg_range(DA9062AA_VBUCK3_B, DA9062AA_VBUCK3_B),
+ regmap_reg_range(DA9062AA_VLDO1_B, DA9062AA_VLDO4_B),
++ regmap_reg_range(DA9062AA_CONFIG_J, DA9062AA_CONFIG_J),
+ regmap_reg_range(DA9062AA_GP_ID_0, DA9062AA_GP_ID_19),
+ };
+
+diff --git a/drivers/mfd/fsl-imx25-tsadc.c b/drivers/mfd/fsl-imx25-tsadc.c
+index 37e5e02a1d059..823595bcc9b7c 100644
+--- a/drivers/mfd/fsl-imx25-tsadc.c
++++ b/drivers/mfd/fsl-imx25-tsadc.c
+@@ -69,7 +69,7 @@ static int mx25_tsadc_setup_irq(struct platform_device *pdev,
+ int irq;
+
+ irq = platform_get_irq(pdev, 0);
+- if (irq <= 0)
++ if (irq < 0)
+ return irq;
+
+ tsadc->domain = irq_domain_add_simple(np, 2, 0, &mx25_tsadc_domain_ops,
+@@ -84,6 +84,19 @@ static int mx25_tsadc_setup_irq(struct platform_device *pdev,
+ return 0;
+ }
+
++static int mx25_tsadc_unset_irq(struct platform_device *pdev)
++{
++ struct mx25_tsadc *tsadc = platform_get_drvdata(pdev);
++ int irq = platform_get_irq(pdev, 0);
++
++ if (irq >= 0) {
++ irq_set_chained_handler_and_data(irq, NULL, NULL);
++ irq_domain_remove(tsadc->domain);
++ }
++
++ return 0;
++}
++
+ static void mx25_tsadc_setup_clk(struct platform_device *pdev,
+ struct mx25_tsadc *tsadc)
+ {
+@@ -171,18 +184,21 @@ static int mx25_tsadc_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, tsadc);
+
+- return devm_of_platform_populate(dev);
++ ret = devm_of_platform_populate(dev);
++ if (ret)
++ goto err_irq;
++
++ return 0;
++
++err_irq:
++ mx25_tsadc_unset_irq(pdev);
++
++ return ret;
+ }
+
+ static int mx25_tsadc_remove(struct platform_device *pdev)
+ {
+- struct mx25_tsadc *tsadc = platform_get_drvdata(pdev);
+- int irq = platform_get_irq(pdev, 0);
+-
+- if (irq) {
+- irq_set_chained_handler_and_data(irq, NULL, NULL);
+- irq_domain_remove(tsadc->domain);
+- }
++ mx25_tsadc_unset_irq(pdev);
+
+ return 0;
+ }
+diff --git a/drivers/mfd/intel_soc_pmic_core.c b/drivers/mfd/intel_soc_pmic_core.c
+index 5e8c94e008ed1..85d070bce0e2b 100644
+--- a/drivers/mfd/intel_soc_pmic_core.c
++++ b/drivers/mfd/intel_soc_pmic_core.c
+@@ -77,6 +77,7 @@ static int intel_soc_pmic_i2c_probe(struct i2c_client *i2c,
+ return 0;
+
+ err_del_irq_chip:
++ pwm_remove_table(crc_pwm_lookup, ARRAY_SIZE(crc_pwm_lookup));
+ regmap_del_irq_chip(pmic->irq, pmic->irq_chip_data);
+ return ret;
+ }
+diff --git a/drivers/mfd/lp8788-irq.c b/drivers/mfd/lp8788-irq.c
+index 348439a3fbbd4..39006297f3d27 100644
+--- a/drivers/mfd/lp8788-irq.c
++++ b/drivers/mfd/lp8788-irq.c
+@@ -175,6 +175,7 @@ int lp8788_irq_init(struct lp8788 *lp, int irq)
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "lp8788-irq", irqd);
+ if (ret) {
++ irq_domain_remove(lp->irqdm);
+ dev_err(lp->dev, "failed to create a thread for IRQ_N\n");
+ return ret;
+ }
+@@ -188,4 +189,6 @@ void lp8788_irq_exit(struct lp8788 *lp)
+ {
+ if (lp->irq)
+ free_irq(lp->irq, lp->irqdm);
++ if (lp->irqdm)
++ irq_domain_remove(lp->irqdm);
+ }
+diff --git a/drivers/mfd/lp8788.c b/drivers/mfd/lp8788.c
+index c223d2c6a3635..998e8cc408a0e 100644
+--- a/drivers/mfd/lp8788.c
++++ b/drivers/mfd/lp8788.c
+@@ -195,8 +195,16 @@ static int lp8788_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ if (ret)
+ return ret;
+
+- return mfd_add_devices(lp->dev, -1, lp8788_devs,
+- ARRAY_SIZE(lp8788_devs), NULL, 0, NULL);
++ ret = mfd_add_devices(lp->dev, -1, lp8788_devs,
++ ARRAY_SIZE(lp8788_devs), NULL, 0, NULL);
++ if (ret)
++ goto err_exit_irq;
++
++ return 0;
++
++err_exit_irq:
++ lp8788_irq_exit(lp);
++ return ret;
+ }
+
+ static int lp8788_remove(struct i2c_client *cl)
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index bc0a2c38653e5..3ac4508a6742a 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1720,7 +1720,12 @@ static struct platform_driver sm501_plat_driver = {
+
+ static int __init sm501_base_init(void)
+ {
+- platform_driver_register(&sm501_plat_driver);
++ int ret;
++
++ ret = platform_driver_register(&sm501_plat_driver);
++ if (ret < 0)
++ return ret;
++
+ return pci_register_driver(&sm501_pci_driver);
+ }
+
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index 6777c419a8da2..d46dba2df5a10 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -257,6 +257,8 @@ static long afu_ioctl(struct file *file, unsigned int cmd,
+ if (IS_ERR(ev_ctx))
+ return PTR_ERR(ev_ctx);
+ rc = ocxl_irq_set_handler(ctx, irq_id, irq_handler, irq_free, ev_ctx);
++ if (rc)
++ eventfd_ctx_put(ev_ctx);
+ break;
+
+ case OCXL_IOCTL_GET_METADATA:
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 2f89ae55c1773..b2875bee8effb 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1140,8 +1140,12 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ {
+ struct mmc_blk_data *md = mq->blkdata;
+ struct mmc_card *card = md->queue.card;
++ unsigned int arg = card->erase_arg;
+
+- mmc_blk_issue_erase_rq(mq, req, MMC_BLK_DISCARD, card->erase_arg);
++ if (mmc_card_broken_sd_discard(card))
++ arg = SD_ERASE_ARG;
++
++ mmc_blk_issue_erase_rq(mq, req, MMC_BLK_DISCARD, arg);
+ }
+
+ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index 99045e138ba48..cfdd1ff40b865 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -73,6 +73,7 @@ struct mmc_fixup {
+ #define EXT_CSD_REV_ANY (-1u)
+
+ #define CID_MANFID_SANDISK 0x2
++#define CID_MANFID_SANDISK_SD 0x3
+ #define CID_MANFID_ATP 0x9
+ #define CID_MANFID_TOSHIBA 0x11
+ #define CID_MANFID_MICRON 0x13
+@@ -258,4 +259,9 @@ static inline int mmc_card_broken_hpi(const struct mmc_card *c)
+ return c->quirks & MMC_QUIRK_BROKEN_HPI;
+ }
+
++static inline int mmc_card_broken_sd_discard(const struct mmc_card *c)
++{
++ return c->quirks & MMC_QUIRK_BROKEN_SD_DISCARD;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index be43939880868..29b9497936df9 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -100,6 +100,12 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
+ MMC_QUIRK_TRIM_BROKEN),
+
++ /*
++ * Some SD cards report discard support even though they don't
++ */
++ MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
++ MMC_QUIRK_BROKEN_SD_DISCARD),
++
+ END_FIXUP
+ };
+
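The three mmc hunks tie together: a quirk flag is declared, a fixup table entry sets it for SanDisk SD cards (CID manufacturer 0x3, OEM ID 0x5344, which is ASCII "SD"), and the discard path downgrades the erase argument when the flag is present. A standalone sketch of the table-driven matching; the bit value and entry are illustrative, not the kernel's:

#include <stddef.h>
#include <stdio.h>

#define QUIRK_BROKEN_SD_DISCARD (1u << 0)	/* illustrative bit */

struct fixup { unsigned int manfid, oemid, quirk; };

static const struct fixup fixups[] = {
	{ 0x3, 0x5344, QUIRK_BROKEN_SD_DISCARD },
};

static unsigned int apply_quirks(unsigned int manfid, unsigned int oemid)
{
	unsigned int q = 0;

	for (size_t i = 0; i < sizeof(fixups) / sizeof(fixups[0]); i++)
		if (fixups[i].manfid == manfid && fixups[i].oemid == oemid)
			q |= fixups[i].quirk;
	return q;
}

int main(void)
{
	unsigned int quirks = apply_quirks(0x3, 0x5344);

	printf("discard broken: %s\n",
	       (quirks & QUIRK_BROKEN_SD_DISCARD) ? "yes, use plain erase" : "no");
	return 0;
}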
+diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c
+index a9a0837153d87..c88b039dc9fbd 100644
+--- a/drivers/mmc/host/au1xmmc.c
++++ b/drivers/mmc/host/au1xmmc.c
+@@ -1097,8 +1097,9 @@ out5:
+ if (host->platdata && host->platdata->cd_setup &&
+ !(mmc->caps & MMC_CAP_NEEDS_POLL))
+ host->platdata->cd_setup(mmc, 0);
+-out_clk:
++
+ clk_disable_unprepare(host->clk);
++out_clk:
+ clk_put(host->clk);
+ out_irq:
+ free_irq(host->irq, host);
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 6edbf5c161ab9..b970699743e0a 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -128,6 +128,7 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
+ struct clk *ref_clk = priv->clk;
+ unsigned int freq, diff, best_freq = 0, diff_min = ~0;
+ unsigned int new_clock, clkh_shift = 0;
++ unsigned int new_upper_limit;
+ int i;
+
+ /*
+@@ -153,13 +154,20 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
+ * greater than, new_clock. As we can divide by 1 << i for
+ * any i in [0, 9] we want the input clock to be as close as
+ * possible, but no greater than, new_clock << i.
++ *
++ * Allow an upper limit 1/1024 higher than the requested clock rate to
++ * fix the clock rate jumping to a lower rate due to rounding error
++ * (e.g. RZ/G2L has 3 clk sources: 533.333333 MHz, 400 MHz and
++ * 266.666666 MHz. A request for 533.333333 MHz would select the slower
++ * 400 MHz due to rounding error (533333333 Hz / 4 * 4 = 533333332 Hz
++ * < 533333333 Hz)).
+ */
+ for (i = min(9, ilog2(UINT_MAX / new_clock)); i >= 0; i--) {
+ freq = clk_round_rate(ref_clk, new_clock << i);
+- if (freq > (new_clock << i)) {
++ new_upper_limit = (new_clock << i) + ((new_clock << i) >> 10);
++ if (freq > new_upper_limit) {
+ /* Too fast; look for a slightly slower option */
+ freq = clk_round_rate(ref_clk, (new_clock << i) / 4 * 3);
+- if (freq > (new_clock << i))
++ if (freq > new_upper_limit)
+ continue;
+ }
+
+@@ -181,6 +189,7 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
+ static void renesas_sdhi_set_clock(struct tmio_mmc_host *host,
+ unsigned int new_clock)
+ {
++ unsigned int clk_margin;
+ u32 clk = 0, clock;
+
+ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
+@@ -194,7 +203,13 @@ static void renesas_sdhi_set_clock(struct tmio_mmc_host *host,
+ host->mmc->actual_clock = renesas_sdhi_clk_update(host, new_clock);
+ clock = host->mmc->actual_clock / 512;
+
+- for (clk = 0x80000080; new_clock >= (clock << 1); clk >>= 1)
++ /*
++ * Allow a margin 1/1024 higher than the clock rate in order to
++ * avoid the clk variable being set to 0 due to the margin
++ * provided for actual_clock in renesas_sdhi_clk_update().
++ */
++ clk_margin = new_clock >> 10;
++ for (clk = 0x80000080; new_clock + clk_margin >= (clock << 1); clk >>= 1)
+ clock <<= 1;
+
+ /* 1/1 clock is option */
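The 1/1024 slack in the renesas_sdhi hunks is computed as x + (x >> 10), roughly 0.1%, so a parent rate that integer arithmetic rounds just above the request is no longer rejected. A standalone check using the RZ/G2L target from the comment (the 1 Hz-high candidate is illustrative of the rounding, not taken from the driver):

#include <stdio.h>

int main(void)
{
	unsigned int target = 533333333;		/* requested rate, Hz */
	unsigned int freq = 533333334;			/* candidate rounded 1 Hz high */
	unsigned int limit = target + (target >> 10);	/* 533854166 Hz */

	/* old check: anything above the exact target is "too fast" */
	printf("old: %s\n",
	       freq > target ? "rejected, slower source chosen" : "accepted");
	/* new check: tolerate rounding within 1/1024 of the target */
	printf("new: %s (limit %u Hz)\n",
	       freq > limit ? "rejected" : "accepted", limit);
	return 0;
}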
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index f33e9349e4e62..3b88c9d3ddf90 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -309,7 +309,7 @@ static unsigned int sdhci_sprd_get_max_clock(struct sdhci_host *host)
+
+ static unsigned int sdhci_sprd_get_min_clock(struct sdhci_host *host)
+ {
+- return 400000;
++ return 100000;
+ }
+
+ static void sdhci_sprd_set_uhs_signaling(struct sdhci_host *host,
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 2d2d8260c6814..413925bce0ca8 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -773,7 +773,7 @@ static void tegra_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+ dev_err(dev, "failed to set clk rate to %luHz: %d\n",
+ host_clk, err);
+
+- tegra_host->curr_clk_rate = host_clk;
++ tegra_host->curr_clk_rate = clk_get_rate(pltfm_host->clk);
+ if (tegra_host->ddr_signaling)
+ host->max_clk = host_clk;
+ else
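The tegra hunk above stops caching the requested frequency and records what the clock actually runs at, since clk_set_rate() is allowed to round to the nearest rate the PLL can produce. The general pattern, as a hedged kernel-context sketch with a hypothetical helper name:

#include <linux/clk.h>

static unsigned long set_and_read_back(struct clk *clk, unsigned long want)
{
	if (clk_set_rate(clk, want))
		return 0;

	/* the framework may have rounded: trust the readback, not the request */
	return clk_get_rate(clk);
}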
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index 163ac9df8cca0..9b5c503e3a3fc 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -846,7 +846,7 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->clk_sdmmc)) {
+ dev_err(&pdev->dev, "Error getting clock\n");
+ ret = PTR_ERR(priv->clk_sdmmc);
+- goto fail5;
++ goto fail5_and_a_half;
+ }
+
+ ret = clk_prepare_enable(priv->clk_sdmmc);
+@@ -863,6 +863,9 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ return 0;
+ fail6:
+ clk_put(priv->clk_sdmmc);
++fail5_and_a_half:
++ dma_free_coherent(&pdev->dev, mmc->max_blk_count * 16,
++ priv->dma_desc_buffer, priv->dma_desc_device_addr);
+ fail5:
+ free_irq(dma_irq, priv);
+ fail4:
+diff --git a/drivers/mtd/devices/docg3.c b/drivers/mtd/devices/docg3.c
+index 5b0ae5ddad745..27c08f22dec8c 100644
+--- a/drivers/mtd/devices/docg3.c
++++ b/drivers/mtd/devices/docg3.c
+@@ -1974,9 +1974,14 @@ static int __init docg3_probe(struct platform_device *pdev)
+ dev_err(dev, "No I/O memory resource defined\n");
+ return ret;
+ }
+- base = devm_ioremap(dev, ress->start, DOC_IOSPACE_SIZE);
+
+ ret = -ENOMEM;
++ base = devm_ioremap(dev, ress->start, DOC_IOSPACE_SIZE);
++ if (!base) {
++ dev_err(dev, "devm_ioremap dev failed\n");
++ return ret;
++ }
++
+ cascade = devm_kcalloc(dev, DOC_MAX_NBFLOORS, sizeof(*cascade),
+ GFP_KERNEL);
+ if (!cascade)
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 6ef14442c71a0..330d2dafdd2d0 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -405,6 +405,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+
+ dma_async_issue_pending(nc->dmac);
+ wait_for_completion(&finished);
++ dma_unmap_single(nc->dev, buf_dma, len, dir);
+
+ return 0;
+
+diff --git a/drivers/mtd/nand/raw/fsl_elbc_nand.c b/drivers/mtd/nand/raw/fsl_elbc_nand.c
+index aab93b9e6052d..a18d121396aa5 100644
+--- a/drivers/mtd/nand/raw/fsl_elbc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_elbc_nand.c
+@@ -726,36 +726,40 @@ static int fsl_elbc_attach_chip(struct nand_chip *chip)
+ struct fsl_lbc_regs __iomem *lbc = ctrl->regs;
+ unsigned int al;
+
+- switch (chip->ecc.engine_type) {
+ /*
+ * if ECC was not chosen in DT, decide whether to use HW or SW ECC from
+ * CS Base Register
+ */
+- case NAND_ECC_ENGINE_TYPE_NONE:
++ if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_INVALID) {
+ /* If CS Base Register selects full hardware ECC then use it */
+ if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
+ BR_DECC_CHK_GEN) {
+- chip->ecc.read_page = fsl_elbc_read_page;
+- chip->ecc.write_page = fsl_elbc_write_page;
+- chip->ecc.write_subpage = fsl_elbc_write_subpage;
+-
+ chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
+- mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
+- chip->ecc.size = 512;
+- chip->ecc.bytes = 3;
+- chip->ecc.strength = 1;
+ } else {
+ /* otherwise fall back to default software ECC */
+ chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+ chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ }
++ }
++
++ switch (chip->ecc.engine_type) {
++ /* if HW ECC was chosen, set up the ECC and OOB layout */
++ case NAND_ECC_ENGINE_TYPE_ON_HOST:
++ chip->ecc.read_page = fsl_elbc_read_page;
++ chip->ecc.write_page = fsl_elbc_write_page;
++ chip->ecc.write_subpage = fsl_elbc_write_subpage;
++ mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
++ chip->ecc.size = 512;
++ chip->ecc.bytes = 3;
++ chip->ecc.strength = 1;
+ break;
+
+- /* if SW ECC was chosen in DT, we do not need to set anything here */
++ /* if none or SW ECC was chosen, we do not need to set anything here */
++ case NAND_ECC_ENGINE_TYPE_NONE:
+ case NAND_ECC_ENGINE_TYPE_SOFT:
++ case NAND_ECC_ENGINE_TYPE_ON_DIE:
+ break;
+
+- /* should we also implement *_ECC_ENGINE_CONTROLLER to do as above? */
+ default:
+ return -EINVAL;
+ }
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index e91b879b32bdb..056835fd45622 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -16,6 +16,7 @@
+ #include <linux/mtd/rawnand.h>
+ #include <linux/mtd/nand.h>
+
++#include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+@@ -580,6 +581,7 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct ebu_nand_controller *ebu_host;
++ struct device_node *chip_np;
+ struct nand_chip *nand;
+ struct mtd_info *mtd;
+ struct resource *res;
+@@ -604,7 +606,12 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ if (IS_ERR(ebu_host->hsnand))
+ return PTR_ERR(ebu_host->hsnand);
+
+- ret = device_property_read_u32(dev, "reg", &cs);
++ chip_np = of_get_next_child(dev->of_node, NULL);
++ if (!chip_np)
++ return dev_err_probe(dev, -EINVAL,
++ "Could not find child node for the NAND chip\n");
++
++ ret = of_property_read_u32(chip_np, "reg", &cs);
+ if (ret) {
+ dev_err(dev, "failed to get chip select: %d\n", ret);
+ return ret;
+@@ -660,7 +667,7 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ writel(ebu_host->cs[cs].addr_sel | EBU_ADDR_MASK(5) | EBU_ADDR_SEL_REGEN,
+ ebu_host->ebu + EBU_ADDR_SEL(cs));
+
+- nand_set_flash_node(&ebu_host->chip, dev->of_node);
++ nand_set_flash_node(&ebu_host->chip, chip_np);
+
+ mtd = nand_to_mtd(&ebu_host->chip);
+ if (!mtd->name) {
+@@ -716,7 +723,6 @@ static int ebu_nand_remove(struct platform_device *pdev)
+ }
+
+ static const struct of_device_id ebu_nand_match[] = {
+- { .compatible = "intel,nand-controller" },
+ { .compatible = "intel,lgm-ebunand" },
+ {}
+ };
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 0321801833393..b97adeee4cc14 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -454,7 +454,7 @@ static int meson_nfc_ecc_correct(struct nand_chip *nand, u32 *bitflips,
+ if (ECC_ERR_CNT(*info) != ECC_UNCORRECTABLE) {
+ mtd->ecc_stats.corrected += ECC_ERR_CNT(*info);
+ *bitflips = max_t(u32, *bitflips, ECC_ERR_CNT(*info));
+- *correct_bitmap |= 1 >> i;
++ *correct_bitmap |= BIT_ULL(i);
+ continue;
+ }
+ if ((nand->options & NAND_NEED_SCRAMBLING) &&
+@@ -800,7 +800,7 @@ static int meson_nfc_read_page_hwecc(struct nand_chip *nand, u8 *buf,
+ u8 *data = buf + i * ecc->size;
+ u8 *oob = nand->oob_poi + i * (ecc->bytes + 2);
+
+- if (correct_bitmap & (1 << i))
++ if (correct_bitmap & BIT_ULL(i))
+ continue;
+ ret = nand_check_erased_ecc_chunk(data, ecc->size,
+ oob, ecc->bytes + 2,
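The meson_nand fix is twofold: `1 >> i` evaluates to 0 for every i > 0, so corrected chunks were never recorded in the bitmap, and a plain `1 << i` would overflow a 32-bit int for chunk indexes of 32 and up. BIT_ULL() solves both by shifting 1ULL. A quick standalone demonstration:

#include <stdio.h>

#define BIT_ULL(n) (1ULL << (n))	/* mirrors the kernel's BIT_ULL() */

int main(void)
{
	unsigned long long map = 0;
	int i = 3;

	printf("1 >> %d = %d (the old bug: always 0 for i > 0)\n", i, 1 >> i);
	printf("BIT_ULL(40) = 0x%llx (safe beyond bit 31)\n", BIT_ULL(40));

	map |= BIT_ULL(i);
	printf("bit %d set: %s\n", i, (map & BIT_ULL(i)) ? "yes" : "no");
	return 0;
}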
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+index eefcbe3aadce7..d018cb5adf838 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+@@ -177,6 +177,8 @@ struct kvaser_usb_dev_cfg {
+ extern const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops;
+ extern const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops;
+
++void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv);
++
+ int kvaser_usb_recv_cmd(const struct kvaser_usb *dev, void *cmd, int len,
+ int *actual_len);
+
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index f211bfcb1d97e..bd4f7be49f39c 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -476,7 +476,7 @@ static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
+ /* This method might sleep. Do not call it in the atomic context
+ * of URB completions.
+ */
+-static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
++void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
+ {
+ usb_kill_anchored_urbs(&priv->tx_submitted);
+ kvaser_usb_reset_tx_urb_contexts(priv);
+@@ -712,6 +712,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ init_usb_anchor(&priv->tx_submitted);
+ init_completion(&priv->start_comp);
+ init_completion(&priv->stop_comp);
++ init_completion(&priv->flush_comp);
+ priv->can.ctrlmode_supported = 0;
+
+ priv->dev = dev;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 404093468b2f1..5939234ce2564 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -1914,7 +1914,7 @@ static int kvaser_usb_hydra_flush_queue(struct kvaser_usb_net_priv *priv)
+ {
+ int err;
+
+- init_completion(&priv->flush_comp);
++ reinit_completion(&priv->flush_comp);
+
+ err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_FLUSH_QUEUE,
+ priv->channel);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index f551fde16a709..7dad7f2efcc9a 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -310,6 +310,38 @@ struct kvaser_cmd {
+ } u;
+ } __packed;
+
++#define CMD_SIZE_ANY 0xff
++#define kvaser_fsize(field) sizeof_field(struct kvaser_cmd, field)
++
++static const u8 kvaser_usb_leaf_cmd_sizes_leaf[] = {
++ [CMD_START_CHIP_REPLY] = kvaser_fsize(u.simple),
++ [CMD_STOP_CHIP_REPLY] = kvaser_fsize(u.simple),
++ [CMD_GET_CARD_INFO_REPLY] = kvaser_fsize(u.cardinfo),
++ [CMD_TX_ACKNOWLEDGE] = kvaser_fsize(u.tx_acknowledge_header),
++ [CMD_GET_SOFTWARE_INFO_REPLY] = kvaser_fsize(u.leaf.softinfo),
++ [CMD_RX_STD_MESSAGE] = kvaser_fsize(u.leaf.rx_can),
++ [CMD_RX_EXT_MESSAGE] = kvaser_fsize(u.leaf.rx_can),
++ [CMD_LEAF_LOG_MESSAGE] = kvaser_fsize(u.leaf.log_message),
++ [CMD_CHIP_STATE_EVENT] = kvaser_fsize(u.leaf.chip_state_event),
++ [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.leaf.error_event),
++ /* ignored events: */
++ [CMD_FLUSH_QUEUE_REPLY] = CMD_SIZE_ANY,
++};
++
++static const u8 kvaser_usb_leaf_cmd_sizes_usbcan[] = {
++ [CMD_START_CHIP_REPLY] = kvaser_fsize(u.simple),
++ [CMD_STOP_CHIP_REPLY] = kvaser_fsize(u.simple),
++ [CMD_GET_CARD_INFO_REPLY] = kvaser_fsize(u.cardinfo),
++ [CMD_TX_ACKNOWLEDGE] = kvaser_fsize(u.tx_acknowledge_header),
++ [CMD_GET_SOFTWARE_INFO_REPLY] = kvaser_fsize(u.usbcan.softinfo),
++ [CMD_RX_STD_MESSAGE] = kvaser_fsize(u.usbcan.rx_can),
++ [CMD_RX_EXT_MESSAGE] = kvaser_fsize(u.usbcan.rx_can),
++ [CMD_CHIP_STATE_EVENT] = kvaser_fsize(u.usbcan.chip_state_event),
++ [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.usbcan.error_event),
++ /* ignored events: */
++ [CMD_USBCAN_CLOCK_OVERFLOW_EVENT] = CMD_SIZE_ANY,
++};
++
+ /* Summary of a kvaser error event, for a unified Leaf/Usbcan error
+ * handling. Some discrepancies between the two families exist:
+ *
+@@ -397,6 +429,43 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = {
+ .bittiming_const = &kvaser_usb_flexc_bittiming_const,
+ };
+
++static int kvaser_usb_leaf_verify_size(const struct kvaser_usb *dev,
++ const struct kvaser_cmd *cmd)
++{
++ /* buffer size >= cmd->len ensured by caller */
++ u8 min_size = 0;
++
++ switch (dev->driver_info->family) {
++ case KVASER_LEAF:
++ if (cmd->id < ARRAY_SIZE(kvaser_usb_leaf_cmd_sizes_leaf))
++ min_size = kvaser_usb_leaf_cmd_sizes_leaf[cmd->id];
++ break;
++ case KVASER_USBCAN:
++ if (cmd->id < ARRAY_SIZE(kvaser_usb_leaf_cmd_sizes_usbcan))
++ min_size = kvaser_usb_leaf_cmd_sizes_usbcan[cmd->id];
++ break;
++ }
++
++ if (min_size == CMD_SIZE_ANY)
++ return 0;
++
++ if (min_size) {
++ min_size += CMD_HEADER_LEN;
++ if (cmd->len >= min_size)
++ return 0;
++
++ dev_err_ratelimited(&dev->intf->dev,
++ "Received command %u too short (size %u, needed %u)",
++ cmd->id, cmd->len, min_size);
++ return -EIO;
++ }
++
++ dev_warn_ratelimited(&dev->intf->dev,
++ "Unhandled command (%d, size %d)\n",
++ cmd->id, cmd->len);
++ return -EINVAL;
++}
++
+ static void *
+ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ const struct sk_buff *skb, int *cmd_len,
+@@ -502,6 +571,9 @@ static int kvaser_usb_leaf_wait_cmd(const struct kvaser_usb *dev, u8 id,
+ end:
+ kfree(buf);
+
++ if (err == 0)
++ err = kvaser_usb_leaf_verify_size(dev, cmd);
++
+ return err;
+ }
+
+@@ -1132,6 +1204,9 @@ static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
+ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ const struct kvaser_cmd *cmd)
+ {
++ if (kvaser_usb_leaf_verify_size(dev, cmd) < 0)
++ return;
++
+ switch (cmd->id) {
+ case CMD_START_CHIP_REPLY:
+ kvaser_usb_leaf_start_chip_reply(dev, cmd);
+@@ -1350,9 +1425,13 @@ static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
+
+ switch (mode) {
+ case CAN_MODE_START:
++ kvaser_usb_unlink_tx_urbs(priv);
++
+ err = kvaser_usb_leaf_simple_cmd_async(priv, CMD_START_CHIP);
+ if (err)
+ return err;
++
++ priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ break;
+ default:
+ return -EOPNOTSUPP;
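The kvaser_usb_leaf change validates each received command against a per-ID minimum length (built with sizeof_field() on the command union) before any payload field is touched, with a CMD_SIZE_ANY sentinel for events that are ignored anyway. The lookup logic, as a standalone sketch with made-up IDs and sizes:

#include <stdio.h>

#define SIZE_ANY 0xff

/* made-up minimum payload sizes, indexed by command ID */
static const unsigned char min_sizes[] = {
	[1] = 4,		/* e.g. a "simple" reply */
	[2] = 24,		/* e.g. an RX frame */
	[3] = SIZE_ANY,		/* ignored event: any length is fine */
};

static int verify_size(unsigned int cmd_id, unsigned int cmd_len)
{
	unsigned char min = 0;

	if (cmd_id < sizeof(min_sizes))
		min = min_sizes[cmd_id];
	if (min == SIZE_ANY)
		return 0;
	if (min)
		return cmd_len >= min ? 0 : -1;	/* too short: drop it */
	return -1;				/* unknown command */
}

int main(void)
{
	printf("%d %d %d\n", verify_size(2, 24), verify_size(2, 8),
	       verify_size(3, 1));		/* prints: 0 -1 0 */
	return 0;
}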
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index a89b93cb4e26d..d5939586c82ee 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1912,11 +1912,14 @@ static int alx_suspend(struct device *dev)
+
+ if (!netif_running(alx->dev))
+ return 0;
++
++ rtnl_lock();
+ netif_device_detach(alx->dev);
+
+ mutex_lock(&alx->mtx);
+ __alx_stop(alx);
+ mutex_unlock(&alx->mtx);
++ rtnl_unlock();
+
+ return 0;
+ }
+@@ -1927,6 +1930,7 @@ static int alx_resume(struct device *dev)
+ struct alx_hw *hw = &alx->hw;
+ int err;
+
++ rtnl_lock();
+ mutex_lock(&alx->mtx);
+ alx_reset_phy(hw);
+
+@@ -1943,6 +1947,7 @@ static int alx_resume(struct device *dev)
+
+ unlock:
+ mutex_unlock(&alx->mtx);
++ rtnl_unlock();
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 5729a5ab059d7..4cbd3ba5acb97 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -789,6 +789,7 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp,
+ BNX2X_ERR("skb_put is about to fail... pad %d len %d rx_buf_size %d\n",
+ pad, len, fp->rx_buf_size);
+ bnx2x_panic();
++ bnx2x_frag_free(fp, new_data);
+ return;
+ }
+ #endif
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 8e316367f6ced..2132ce63193ce 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -505,9 +505,13 @@ static int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
+ ptp->tstamp_filters = flags;
+
+ if (netif_running(bp->dev)) {
+- rc = bnxt_close_nic(bp, false, false);
+- if (!rc)
+- rc = bnxt_open_nic(bp, false, false);
++ if (ptp->rx_filter == HWTSTAMP_FILTER_ALL) {
++ rc = bnxt_close_nic(bp, false, false);
++ if (!rc)
++ rc = bnxt_open_nic(bp, false, false);
++ } else {
++ bnxt_ptp_cfg_tstamp_filters(bp);
++ }
+ if (!rc && !ptp->tstamp_filters)
+ rc = -EIO;
+ }
+diff --git a/drivers/net/ethernet/engleder/tsnep_hw.h b/drivers/net/ethernet/engleder/tsnep_hw.h
+index 916ceac3ada23..e03aaafab559f 100644
+--- a/drivers/net/ethernet/engleder/tsnep_hw.h
++++ b/drivers/net/ethernet/engleder/tsnep_hw.h
+@@ -92,8 +92,7 @@
+
+ /* tsnep register */
+ #define TSNEP_INFO 0x0100
+-#define TSNEP_INFO_RX_ASSIGN 0x00010000
+-#define TSNEP_INFO_TX_TIME 0x00020000
++#define TSNEP_INFO_TX_TIME 0x00010000
+ #define TSNEP_CONTROL 0x0108
+ #define TSNEP_CONTROL_TX_RESET 0x00000001
+ #define TSNEP_CONTROL_TX_ENABLE 0x00000002
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+index 99fe2c210d0f6..61f4b6e50d29b 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+@@ -98,7 +98,7 @@ static int do_pd_setup(struct fs_enet_private *fep)
+ return -EINVAL;
+
+ fep->fec.fecp = of_iomap(ofdev->dev.of_node, 0);
+- if (!fep->fcc.fccp)
++ if (!fep->fec.fecp)
+ return -EINVAL;
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 981c43b204ff4..d3822c2646425 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1184,66 +1184,138 @@ static void iavf_up_complete(struct iavf_adapter *adapter)
+ }
+
+ /**
+- * iavf_down - Shutdown the connection processing
++ * iavf_clear_mac_vlan_filters - Remove mac and vlan filters not sent to PF
++ * yet and mark the others to be removed.
+ * @adapter: board private structure
+- *
+- * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
+ **/
+-void iavf_down(struct iavf_adapter *adapter)
++static void iavf_clear_mac_vlan_filters(struct iavf_adapter *adapter)
+ {
+- struct net_device *netdev = adapter->netdev;
+- struct iavf_vlan_filter *vlf;
+- struct iavf_cloud_filter *cf;
+- struct iavf_fdir_fltr *fdir;
+- struct iavf_mac_filter *f;
+- struct iavf_adv_rss *rss;
+-
+- if (adapter->state <= __IAVF_DOWN_PENDING)
+- return;
+-
+- netif_carrier_off(netdev);
+- netif_tx_disable(netdev);
+- adapter->link_up = false;
+- iavf_napi_disable_all(adapter);
+- iavf_irq_disable(adapter);
++ struct iavf_vlan_filter *vlf, *vlftmp;
++ struct iavf_mac_filter *f, *ftmp;
+
+ spin_lock_bh(&adapter->mac_vlan_list_lock);
+-
+ /* clear the sync flag on all filters */
+ __dev_uc_unsync(adapter->netdev, NULL);
+ __dev_mc_unsync(adapter->netdev, NULL);
+
+ /* remove all MAC filters */
+- list_for_each_entry(f, &adapter->mac_filter_list, list) {
+- f->remove = true;
++ list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list,
++ list) {
++ if (f->add) {
++ list_del(&f->list);
++ kfree(f);
++ } else {
++ f->remove = true;
++ }
+ }
+
+ /* remove all VLAN filters */
+- list_for_each_entry(vlf, &adapter->vlan_filter_list, list) {
+- vlf->remove = true;
++ list_for_each_entry_safe(vlf, vlftmp, &adapter->vlan_filter_list,
++ list) {
++ if (vlf->add) {
++ list_del(&vlf->list);
++ kfree(vlf);
++ } else {
++ vlf->remove = true;
++ }
+ }
+-
+ spin_unlock_bh(&adapter->mac_vlan_list_lock);
++}
++
++/**
++ * iavf_clear_cloud_filters - Remove cloud filters not sent to PF yet and
++ * mark the others to be removed.
++ * @adapter: board private structure
++ **/
++static void iavf_clear_cloud_filters(struct iavf_adapter *adapter)
++{
++ struct iavf_cloud_filter *cf, *cftmp;
+
+ /* remove all cloud filters */
+ spin_lock_bh(&adapter->cloud_filter_list_lock);
+- list_for_each_entry(cf, &adapter->cloud_filter_list, list) {
+- cf->del = true;
++ list_for_each_entry_safe(cf, cftmp, &adapter->cloud_filter_list,
++ list) {
++ if (cf->add) {
++ list_del(&cf->list);
++ kfree(cf);
++ adapter->num_cloud_filters--;
++ } else {
++ cf->del = true;
++ }
+ }
+ spin_unlock_bh(&adapter->cloud_filter_list_lock);
++}
++
++/**
++ * iavf_clear_fdir_filters - Remove fdir filters not sent to PF yet and mark
++ * the others to be removed.
++ * @adapter: board private structure
++ **/
++static void iavf_clear_fdir_filters(struct iavf_adapter *adapter)
++{
++ struct iavf_fdir_fltr *fdir, *fdirtmp;
+
+ /* remove all Flow Director filters */
+ spin_lock_bh(&adapter->fdir_fltr_lock);
+- list_for_each_entry(fdir, &adapter->fdir_list_head, list) {
+- fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
++ list_for_each_entry_safe(fdir, fdirtmp, &adapter->fdir_list_head,
++ list) {
++ if (fdir->state == IAVF_FDIR_FLTR_ADD_REQUEST) {
++ list_del(&fdir->list);
++ kfree(fdir);
++ adapter->fdir_active_fltr--;
++ } else {
++ fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
++ }
+ }
+ spin_unlock_bh(&adapter->fdir_fltr_lock);
++}
++
++/**
++ * iavf_clear_adv_rss_conf - Remove adv rss conf not sent to PF yet and mark
++ * the others to be removed.
++ * @adapter: board private structure
++ **/
++static void iavf_clear_adv_rss_conf(struct iavf_adapter *adapter)
++{
++ struct iavf_adv_rss *rss, *rsstmp;
+
+ /* remove all advance RSS configuration */
+ spin_lock_bh(&adapter->adv_rss_lock);
+- list_for_each_entry(rss, &adapter->adv_rss_list_head, list)
+- rss->state = IAVF_ADV_RSS_DEL_REQUEST;
++ list_for_each_entry_safe(rss, rsstmp, &adapter->adv_rss_list_head,
++ list) {
++ if (rss->state == IAVF_ADV_RSS_ADD_REQUEST) {
++ list_del(&rss->list);
++ kfree(rss);
++ } else {
++ rss->state = IAVF_ADV_RSS_DEL_REQUEST;
++ }
++ }
+ spin_unlock_bh(&adapter->adv_rss_lock);
++}
++
++/**
++ * iavf_down - Shutdown the connection processing
++ * @adapter: board private structure
++ *
++ * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
++ **/
++void iavf_down(struct iavf_adapter *adapter)
++{
++ struct net_device *netdev = adapter->netdev;
++
++ if (adapter->state <= __IAVF_DOWN_PENDING)
++ return;
++
++ netif_carrier_off(netdev);
++ netif_tx_disable(netdev);
++ adapter->link_up = false;
++ iavf_napi_disable_all(adapter);
++ iavf_irq_disable(adapter);
++
++ iavf_clear_mac_vlan_filters(adapter);
++ iavf_clear_cloud_filters(adapter);
++ iavf_clear_fdir_filters(adapter);
++ iavf_clear_adv_rss_conf(adapter);
+
+ if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
+ /* cancel any current operation */
+@@ -1252,11 +1324,16 @@ void iavf_down(struct iavf_adapter *adapter)
+ * here for this to complete. The watchdog is still running
+ * and it will take care of this.
+ */
+- adapter->aq_required = IAVF_FLAG_AQ_DEL_MAC_FILTER;
+- adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
+- adapter->aq_required |= IAVF_FLAG_AQ_DEL_CLOUD_FILTER;
+- adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
+- adapter->aq_required |= IAVF_FLAG_AQ_DEL_ADV_RSS_CFG;
++ if (!list_empty(&adapter->mac_filter_list))
++ adapter->aq_required |= IAVF_FLAG_AQ_DEL_MAC_FILTER;
++ if (!list_empty(&adapter->vlan_filter_list))
++ adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
++ if (!list_empty(&adapter->cloud_filter_list))
++ adapter->aq_required |= IAVF_FLAG_AQ_DEL_CLOUD_FILTER;
++ if (!list_empty(&adapter->fdir_list_head))
++ adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
++ if (!list_empty(&adapter->adv_rss_list_head))
++ adapter->aq_required |= IAVF_FLAG_AQ_DEL_ADV_RSS_CFG;
+ adapter->aq_required |= IAVF_FLAG_AQ_DISABLE_QUEUES;
+ }
+
+@@ -4082,6 +4159,7 @@ err_unlock:
+ static int iavf_close(struct net_device *netdev)
+ {
+ struct iavf_adapter *adapter = netdev_priv(netdev);
++ u64 aq_to_restore;
+ int status;
+
+ mutex_lock(&adapter->crit_lock);
+@@ -4094,6 +4172,29 @@ static int iavf_close(struct net_device *netdev)
+ set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ if (CLIENT_ENABLED(adapter))
+ adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_CLOSE;
++	/* We cannot send IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS before
++	 * IAVF_FLAG_AQ_DISABLE_QUEUES because in that case there is an rtnl
++	 * deadlock with adminq_task() until iavf_close() times out. We must
++	 * send IAVF_FLAG_AQ_GET_CONFIG before IAVF_FLAG_AQ_DISABLE_QUEUES so
++	 * that disabling queues is possible for the VF. Give only the
++	 * necessary flags to iavf_down() and save the others to be set right
++	 * before iavf_close() returns, when IAVF_FLAG_AQ_DISABLE_QUEUES has
++	 * already been sent and iavf is in the DOWN state.
++	 */
++ aq_to_restore = adapter->aq_required;
++ adapter->aq_required &= IAVF_FLAG_AQ_GET_CONFIG;
++
++	/* Remove flags which we do not want to send after close, or which we
++	 * want to send before disabling queues.
++	 */
++ aq_to_restore &= ~(IAVF_FLAG_AQ_GET_CONFIG |
++ IAVF_FLAG_AQ_ENABLE_QUEUES |
++ IAVF_FLAG_AQ_CONFIGURE_QUEUES |
++ IAVF_FLAG_AQ_ADD_VLAN_FILTER |
++ IAVF_FLAG_AQ_ADD_MAC_FILTER |
++ IAVF_FLAG_AQ_ADD_CLOUD_FILTER |
++ IAVF_FLAG_AQ_ADD_FDIR_FILTER |
++ IAVF_FLAG_AQ_ADD_ADV_RSS_CFG);
+
+ iavf_down(adapter);
+ iavf_change_state(adapter, __IAVF_DOWN_PENDING);
+@@ -4117,6 +4218,10 @@ static int iavf_close(struct net_device *netdev)
+ msecs_to_jiffies(500));
+ if (!status)
+ netdev_warn(netdev, "Device resources not yet released\n");
++
++ mutex_lock(&adapter->crit_lock);
++ adapter->aq_required |= aq_to_restore;
++ mutex_unlock(&adapter->crit_lock);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 4efa5e5846e01..4dfdec11ddc10 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -2826,6 +2826,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
+ tx_rings[i].count = new_tx_cnt;
+ tx_rings[i].desc = NULL;
+ tx_rings[i].tx_buf = NULL;
++ tx_rings[i].tx_tstamps = &pf->ptp.port.tx;
+ err = ice_setup_tx_ring(&tx_rings[i]);
+ if (err) {
+ while (i--)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index ad73a488fc5fb..11e603686a276 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1530,6 +1530,7 @@ u32 mvpp2_read(struct mvpp2 *priv, u32 offset);
+ void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name);
+
+ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv);
++void mvpp2_dbgfs_exit(void);
+
+ void mvpp23_rx_fifo_fc_en(struct mvpp2 *priv, int port, bool en);
+
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
+index 4a3baa7e01424..75e83ea2a926e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
+@@ -691,6 +691,13 @@ static int mvpp2_dbgfs_port_init(struct dentry *parent,
+ return 0;
+ }
+
++static struct dentry *mvpp2_root;
++
++void mvpp2_dbgfs_exit(void)
++{
++ debugfs_remove(mvpp2_root);
++}
++
+ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv)
+ {
+ debugfs_remove_recursive(priv->dbgfs_dir);
+@@ -700,10 +707,9 @@ void mvpp2_dbgfs_cleanup(struct mvpp2 *priv)
+
+ void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name)
+ {
+- struct dentry *mvpp2_dir, *mvpp2_root;
++ struct dentry *mvpp2_dir;
+ int ret, i;
+
+- mvpp2_root = debugfs_lookup(MVPP2_DRIVER_NAME, NULL);
+ if (!mvpp2_root)
+ mvpp2_root = debugfs_create_dir(MVPP2_DRIVER_NAME, NULL);
+
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b84128b549b44..eaa51cd7456b6 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7706,7 +7706,18 @@ static struct platform_driver mvpp2_driver = {
+ },
+ };
+
+-module_platform_driver(mvpp2_driver);
++static int __init mvpp2_driver_init(void)
++{
++ return platform_driver_register(&mvpp2_driver);
++}
++module_init(mvpp2_driver_init);
++
++static void __exit mvpp2_driver_exit(void)
++{
++ platform_driver_unregister(&mvpp2_driver);
++ mvpp2_dbgfs_exit();
++}
++module_exit(mvpp2_driver_exit);
+
+ MODULE_DESCRIPTION("Marvell PPv2 Ethernet Driver - www.marvell.com");
+ MODULE_AUTHOR("Marcin Wojtas <mw@semihalf.com>");
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_acl.c b/drivers/net/ethernet/marvell/prestera/prestera_acl.c
+index 3a141f2db8126..c0d4ddc18f87f 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_acl.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_acl.c
+@@ -162,10 +162,14 @@ err_rhashtable_init:
+ return ERR_PTR(err);
+ }
+
+-void prestera_acl_ruleset_keymask_set(struct prestera_acl_ruleset *ruleset,
+- void *keymask)
++int prestera_acl_ruleset_keymask_set(struct prestera_acl_ruleset *ruleset,
++ void *keymask)
+ {
+ ruleset->keymask = kmemdup(keymask, ACL_KEYMASK_SIZE, GFP_KERNEL);
++ if (!ruleset->keymask)
++ return -ENOMEM;
++
++ return 0;
+ }
+
+ int prestera_acl_ruleset_offload(struct prestera_acl_ruleset *ruleset)
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_acl.h b/drivers/net/ethernet/marvell/prestera/prestera_acl.h
+index f963e1e0c0f0b..21dbfe4fe5b8b 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_acl.h
++++ b/drivers/net/ethernet/marvell/prestera/prestera_acl.h
+@@ -185,8 +185,8 @@ struct prestera_acl_ruleset *
+ prestera_acl_ruleset_lookup(struct prestera_acl *acl,
+ struct prestera_flow_block *block,
+ u32 chain_index);
+-void prestera_acl_ruleset_keymask_set(struct prestera_acl_ruleset *ruleset,
+- void *keymask);
++int prestera_acl_ruleset_keymask_set(struct prestera_acl_ruleset *ruleset,
++ void *keymask);
+ bool prestera_acl_ruleset_is_offload(struct prestera_acl_ruleset *ruleset);
+ int prestera_acl_ruleset_offload(struct prestera_acl_ruleset *ruleset);
+ void prestera_acl_ruleset_put(struct prestera_acl_ruleset *ruleset);
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_flower.c b/drivers/net/ethernet/marvell/prestera/prestera_flower.c
+index 4d93ad6a284c0..553413248823b 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_flower.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_flower.c
+@@ -428,7 +428,9 @@ int prestera_flower_tmplt_create(struct prestera_flow_block *block,
+ }
+
+ /* preserve keymask/template to this ruleset */
+- prestera_acl_ruleset_keymask_set(ruleset, rule.re_key.match.mask);
++ err = prestera_acl_ruleset_keymask_set(ruleset, rule.re_key.match.mask);
++ if (err)
++ goto err_ruleset_keymask_set;
+
+ /* skip error, as it is not possible to reject template operation,
+ * so, keep the reference to the ruleset for rules to be added
+@@ -444,6 +446,8 @@ int prestera_flower_tmplt_create(struct prestera_flow_block *block,
+ list_add_rcu(&template->list, &block->template_list);
+ return 0;
+
++err_ruleset_keymask_set:
++ prestera_acl_ruleset_put(ruleset);
+ err_ruleset_get:
+ kfree(template);
+ err_malloc:
+diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c
+index 6a11e2ceb013b..da3ea905adbb8 100644
+--- a/drivers/net/ethernet/microchip/lan743x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan743x_ptp.c
+@@ -1049,6 +1049,10 @@ static int lan743x_ptpci_verify_pin_config(struct ptp_clock_info *ptp,
+ enum ptp_pin_function func,
+ unsigned int chan)
+ {
++ struct lan743x_ptp *lan_ptp =
++ container_of(ptp, struct lan743x_ptp, ptp_clock_info);
++ struct lan743x_adapter *adapter =
++ container_of(lan_ptp, struct lan743x_adapter, ptp);
+ int result = 0;
+
+ /* Confirm the requested function is supported. Parameter
+@@ -1057,7 +1061,10 @@ static int lan743x_ptpci_verify_pin_config(struct ptp_clock_info *ptp,
+ switch (func) {
+ case PTP_PF_NONE:
+ case PTP_PF_PEROUT:
++ break;
+ case PTP_PF_EXTTS:
++ if (!adapter->is_pci11x1x)
++ result = -1;
+ break;
+ case PTP_PF_PHYSYNC:
+ default:
+diff --git a/drivers/net/ethernet/sunplus/spl2sw_driver.c b/drivers/net/ethernet/sunplus/spl2sw_driver.c
+index 3773ce5e12cc0..37711331ba0f4 100644
+--- a/drivers/net/ethernet/sunplus/spl2sw_driver.c
++++ b/drivers/net/ethernet/sunplus/spl2sw_driver.c
+@@ -248,8 +248,8 @@ static int spl2sw_nvmem_get_mac_address(struct device *dev, struct device_node *
+
+ /* Check if mac address is valid */
+ if (!is_valid_ether_addr(mac)) {
+- kfree(mac);
+ dev_info(dev, "Invalid mac address in nvmem (%pM)!\n", mac);
++ kfree(mac);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index fb30bc5d56cb7..fce06663e1e11 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -33,6 +33,7 @@ config TI_DAVINCI_MDIO
+ tristate "TI DaVinci MDIO Support"
+ depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
+ select PHYLIB
++ select MDIO_BITBANG
+ help
+ This driver supports TI's DaVinci MDIO module.
+
+diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c
+index ea37726180431..946b9753ccfb3 100644
+--- a/drivers/net/ethernet/ti/davinci_mdio.c
++++ b/drivers/net/ethernet/ti/davinci_mdio.c
+@@ -26,6 +26,8 @@
+ #include <linux/of_device.h>
+ #include <linux/of_mdio.h>
+ #include <linux/pinctrl/consumer.h>
++#include <linux/mdio-bitbang.h>
++#include <linux/sys_soc.h>
+
+ /*
+ * This timeout definition is a worst-case ultra defensive measure against
+@@ -41,6 +43,7 @@
+
+ struct davinci_mdio_of_param {
+ int autosuspend_delay_ms;
++ bool manual_mode;
+ };
+
+ struct davinci_mdio_regs {
+@@ -49,6 +52,15 @@ struct davinci_mdio_regs {
+ #define CONTROL_IDLE BIT(31)
+ #define CONTROL_ENABLE BIT(30)
+ #define CONTROL_MAX_DIV (0xffff)
++#define CONTROL_CLKDIV GENMASK(15, 0)
++
++#define MDIO_MAN_MDCLK_O BIT(2)
++#define MDIO_MAN_OE BIT(1)
++#define MDIO_MAN_PIN BIT(0)
++#define MDIO_MANUALMODE BIT(31)
++
++#define MDIO_PIN 0
++
+
+ u32 alive;
+ u32 link;
+@@ -59,7 +71,9 @@ struct davinci_mdio_regs {
+ u32 userintmasked;
+ u32 userintmaskset;
+ u32 userintmaskclr;
+- u32 __reserved_1[20];
++ u32 manualif;
++ u32 poll;
++ u32 __reserved_1[18];
+
+ struct {
+ u32 access;
+@@ -79,6 +93,7 @@ static const struct mdio_platform_data default_pdata = {
+
+ struct davinci_mdio_data {
+ struct mdio_platform_data pdata;
++ struct mdiobb_ctrl bb_ctrl;
+ struct davinci_mdio_regs __iomem *regs;
+ struct clk *clk;
+ struct device *dev;
+@@ -90,6 +105,7 @@ struct davinci_mdio_data {
+ */
+ bool skip_scan;
+ u32 clk_div;
++ bool manual_mode;
+ };
+
+ static void davinci_mdio_init_clk(struct davinci_mdio_data *data)
+@@ -128,9 +144,122 @@ static void davinci_mdio_enable(struct davinci_mdio_data *data)
+ writel(data->clk_div | CONTROL_ENABLE, &data->regs->control);
+ }
+
+-static int davinci_mdio_reset(struct mii_bus *bus)
++static void davinci_mdio_disable(struct davinci_mdio_data *data)
++{
++ u32 reg;
++
++ /* Disable MDIO state machine */
++ reg = readl(&data->regs->control);
++
++ reg &= ~CONTROL_CLKDIV;
++ reg |= data->clk_div;
++
++ reg &= ~CONTROL_ENABLE;
++ writel(reg, &data->regs->control);
++}
++
++static void davinci_mdio_enable_manual_mode(struct davinci_mdio_data *data)
++{
++ u32 reg;
++ /* set manual mode */
++ reg = readl(&data->regs->poll);
++ reg |= MDIO_MANUALMODE;
++ writel(reg, &data->regs->poll);
++}
++
++static void davinci_set_mdc(struct mdiobb_ctrl *ctrl, int level)
++{
++ struct davinci_mdio_data *data;
++ u32 reg;
++
++ data = container_of(ctrl, struct davinci_mdio_data, bb_ctrl);
++ reg = readl(&data->regs->manualif);
++
++ if (level)
++ reg |= MDIO_MAN_MDCLK_O;
++ else
++ reg &= ~MDIO_MAN_MDCLK_O;
++
++ writel(reg, &data->regs->manualif);
++}
++
++static void davinci_set_mdio_dir(struct mdiobb_ctrl *ctrl, int output)
++{
++ struct davinci_mdio_data *data;
++ u32 reg;
++
++ data = container_of(ctrl, struct davinci_mdio_data, bb_ctrl);
++ reg = readl(&data->regs->manualif);
++
++ if (output)
++ reg |= MDIO_MAN_OE;
++ else
++ reg &= ~MDIO_MAN_OE;
++
++ writel(reg, &data->regs->manualif);
++}
++
++static void davinci_set_mdio_data(struct mdiobb_ctrl *ctrl, int value)
++{
++ struct davinci_mdio_data *data;
++ u32 reg;
++
++ data = container_of(ctrl, struct davinci_mdio_data, bb_ctrl);
++ reg = readl(&data->regs->manualif);
++
++ if (value)
++ reg |= MDIO_MAN_PIN;
++ else
++ reg &= ~MDIO_MAN_PIN;
++
++ writel(reg, &data->regs->manualif);
++}
++
++static int davinci_get_mdio_data(struct mdiobb_ctrl *ctrl)
++{
++ struct davinci_mdio_data *data;
++ unsigned long reg;
++
++ data = container_of(ctrl, struct davinci_mdio_data, bb_ctrl);
++ reg = readl(&data->regs->manualif);
++	return test_bit(MDIO_PIN, &reg);
++}
++
++static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg)
++{
++ int ret;
++
++ ret = pm_runtime_resume_and_get(bus->parent);
++ if (ret < 0)
++ return ret;
++
++ ret = mdiobb_read(bus, phy, reg);
++
++ pm_runtime_mark_last_busy(bus->parent);
++ pm_runtime_put_autosuspend(bus->parent);
++
++ return ret;
++}
++
++static int davinci_mdiobb_write(struct mii_bus *bus, int phy, int reg,
++ u16 val)
++{
++ int ret;
++
++ ret = pm_runtime_resume_and_get(bus->parent);
++ if (ret < 0)
++ return ret;
++
++ ret = mdiobb_write(bus, phy, reg, val);
++
++ pm_runtime_mark_last_busy(bus->parent);
++ pm_runtime_put_autosuspend(bus->parent);
++
++ return ret;
++}
++
++static int davinci_mdio_common_reset(struct davinci_mdio_data *data)
+ {
+- struct davinci_mdio_data *data = bus->priv;
+ u32 phy_mask, ver;
+ int ret;
+
+@@ -138,6 +267,11 @@ static int davinci_mdio_reset(struct mii_bus *bus)
+ if (ret < 0)
+ return ret;
+
++ if (data->manual_mode) {
++ davinci_mdio_disable(data);
++ davinci_mdio_enable_manual_mode(data);
++ }
++
+ /* wait for scan logic to settle */
+ msleep(PHY_MAX_ADDR * data->access_time);
+
+@@ -171,6 +305,23 @@ done:
+ return 0;
+ }
+
++static int davinci_mdio_reset(struct mii_bus *bus)
++{
++ struct davinci_mdio_data *data = bus->priv;
++
++ return davinci_mdio_common_reset(data);
++}
++
++static int davinci_mdiobb_reset(struct mii_bus *bus)
++{
++ struct mdiobb_ctrl *ctrl = bus->priv;
++ struct davinci_mdio_data *data;
++
++ data = container_of(ctrl, struct davinci_mdio_data, bb_ctrl);
++
++ return davinci_mdio_common_reset(data);
++}
++
+ /* wait until hardware is ready for another user access */
+ static inline int wait_for_user_access(struct davinci_mdio_data *data)
+ {
+@@ -318,6 +469,28 @@ static int davinci_mdio_probe_dt(struct mdio_platform_data *data,
+ return 0;
+ }
+
++struct k3_mdio_soc_data {
++ bool manual_mode;
++};
++
++static const struct k3_mdio_soc_data am65_mdio_soc_data = {
++ .manual_mode = true,
++};
++
++static const struct soc_device_attribute k3_mdio_socinfo[] = {
++ { .family = "AM62X", .revision = "SR1.0", .data = &am65_mdio_soc_data },
++ { .family = "AM64X", .revision = "SR1.0", .data = &am65_mdio_soc_data },
++ { .family = "AM64X", .revision = "SR2.0", .data = &am65_mdio_soc_data },
++ { .family = "AM65X", .revision = "SR1.0", .data = &am65_mdio_soc_data },
++ { .family = "AM65X", .revision = "SR2.0", .data = &am65_mdio_soc_data },
++ { .family = "J7200", .revision = "SR1.0", .data = &am65_mdio_soc_data },
++ { .family = "J7200", .revision = "SR2.0", .data = &am65_mdio_soc_data },
++ { .family = "J721E", .revision = "SR1.0", .data = &am65_mdio_soc_data },
++ { .family = "J721E", .revision = "SR2.0", .data = &am65_mdio_soc_data },
++ { .family = "J721S2", .revision = "SR1.0", .data = &am65_mdio_soc_data},
++ { /* sentinel */ },
++};
++
+ #if IS_ENABLED(CONFIG_OF)
+ static const struct davinci_mdio_of_param of_cpsw_mdio_data = {
+ .autosuspend_delay_ms = 100,
+@@ -331,6 +504,14 @@ static const struct of_device_id davinci_mdio_of_mtable[] = {
+ MODULE_DEVICE_TABLE(of, davinci_mdio_of_mtable);
+ #endif
+
++static const struct mdiobb_ops davinci_mdiobb_ops = {
++ .owner = THIS_MODULE,
++ .set_mdc = davinci_set_mdc,
++ .set_mdio_dir = davinci_set_mdio_dir,
++ .set_mdio_data = davinci_set_mdio_data,
++ .get_mdio_data = davinci_get_mdio_data,
++};
++
+ static int davinci_mdio_probe(struct platform_device *pdev)
+ {
+ struct mdio_platform_data *pdata = dev_get_platdata(&pdev->dev);
+@@ -345,7 +526,26 @@ static int davinci_mdio_probe(struct platform_device *pdev)
+ if (!data)
+ return -ENOMEM;
+
+- data->bus = devm_mdiobus_alloc(dev);
++ data->manual_mode = false;
++ data->bb_ctrl.ops = &davinci_mdiobb_ops;
++
++ if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
++ const struct soc_device_attribute *soc_match_data;
++
++ soc_match_data = soc_device_match(k3_mdio_socinfo);
++ if (soc_match_data && soc_match_data->data) {
++ const struct k3_mdio_soc_data *socdata =
++ soc_match_data->data;
++
++ data->manual_mode = socdata->manual_mode;
++ }
++ }
++
++ if (data->manual_mode)
++ data->bus = alloc_mdio_bitbang(&data->bb_ctrl);
++ else
++ data->bus = devm_mdiobus_alloc(dev);
++
+ if (!data->bus) {
+ dev_err(dev, "failed to alloc mii bus\n");
+ return -ENOMEM;
+@@ -371,11 +571,20 @@ static int davinci_mdio_probe(struct platform_device *pdev)
+ }
+
+ data->bus->name = dev_name(dev);
+- data->bus->read = davinci_mdio_read;
+- data->bus->write = davinci_mdio_write;
+- data->bus->reset = davinci_mdio_reset;
++
++ if (data->manual_mode) {
++ data->bus->read = davinci_mdiobb_read;
++ data->bus->write = davinci_mdiobb_write;
++ data->bus->reset = davinci_mdiobb_reset;
++
++ dev_info(dev, "Configuring MDIO in manual mode\n");
++ } else {
++ data->bus->read = davinci_mdio_read;
++ data->bus->write = davinci_mdio_write;
++ data->bus->reset = davinci_mdio_reset;
++ data->bus->priv = data;
++ }
+ data->bus->parent = dev;
+- data->bus->priv = data;
+
+ data->clk = devm_clk_get(dev, "fck");
+ if (IS_ERR(data->clk)) {
+@@ -433,9 +642,13 @@ static int davinci_mdio_remove(struct platform_device *pdev)
+ {
+ struct davinci_mdio_data *data = platform_get_drvdata(pdev);
+
+- if (data->bus)
++ if (data->bus) {
+ mdiobus_unregister(data->bus);
+
++ if (data->manual_mode)
++ free_mdio_bitbang(data->bus);
++ }
++
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+
+@@ -452,7 +665,9 @@ static int davinci_mdio_runtime_suspend(struct device *dev)
+ ctrl = readl(&data->regs->control);
+ ctrl &= ~CONTROL_ENABLE;
+ writel(ctrl, &data->regs->control);
+- wait_for_idle(data);
++
++ if (!data->manual_mode)
++ wait_for_idle(data);
+
+ return 0;
+ }
+@@ -461,7 +676,12 @@ static int davinci_mdio_runtime_resume(struct device *dev)
+ {
+ struct davinci_mdio_data *data = dev_get_drvdata(dev);
+
+- davinci_mdio_enable(data);
++ if (data->manual_mode) {
++ davinci_mdio_disable(data);
++ davinci_mdio_enable_manual_mode(data);
++ } else {
++ davinci_mdio_enable(data);
++ }
+ return 0;
+ }
+ #endif
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+index f2e2261b4b7d9..8ff4333de2ad9 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+@@ -402,6 +402,9 @@ struct axidma_bd {
+ * @rx_bd_num: Size of RX buffer descriptor ring
+ * @rx_bd_ci: Stores the index of the Rx buffer descriptor in the ring being
+ * accessed currently.
++ * @rx_packets: RX packet count for statistics
++ * @rx_bytes: RX byte count for statistics
++ * @rx_stat_sync: Synchronization object for RX stats
+ * @napi_tx: NAPI TX control structure
+ * @tx_dma_cr: Nominal content of TX DMA control register
+ * @tx_bd_v: Virtual address of the TX buffer descriptor ring
+@@ -411,6 +414,9 @@ struct axidma_bd {
+ * complete. Only updated at runtime by TX NAPI poll.
+ * @tx_bd_tail: Stores the index of the next Tx buffer descriptor in the ring
+ * to be populated.
++ * @tx_packets: TX packet count for statistics
++ * @tx_bytes: TX byte count for statistics
++ * @tx_stat_sync: Synchronization object for TX stats
+ * @dma_err_task: Work structure to process Axi DMA errors
+ * @tx_irq: Axidma TX IRQ number
+ * @rx_irq: Axidma RX IRQ number
+@@ -458,6 +464,9 @@ struct axienet_local {
+ dma_addr_t rx_bd_p;
+ u32 rx_bd_num;
+ u32 rx_bd_ci;
++ u64_stats_t rx_packets;
++ u64_stats_t rx_bytes;
++ struct u64_stats_sync rx_stat_sync;
+
+ struct napi_struct napi_tx;
+ u32 tx_dma_cr;
+@@ -466,6 +475,9 @@ struct axienet_local {
+ u32 tx_bd_num;
+ u32 tx_bd_ci;
+ u32 tx_bd_tail;
++ u64_stats_t tx_packets;
++ u64_stats_t tx_bytes;
++ struct u64_stats_sync tx_stat_sync;
+
+ struct work_struct dma_err_task;
+
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 1760930ec0c49..9262988d26a32 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -752,8 +752,10 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
+ if (lp->tx_bd_ci >= lp->tx_bd_num)
+ lp->tx_bd_ci %= lp->tx_bd_num;
+
+- ndev->stats.tx_packets += packets;
+- ndev->stats.tx_bytes += size;
++ u64_stats_update_begin(&lp->tx_stat_sync);
++ u64_stats_add(&lp->tx_packets, packets);
++ u64_stats_add(&lp->tx_bytes, size);
++ u64_stats_update_end(&lp->tx_stat_sync);
+
+ /* Matches barrier in axienet_start_xmit */
+ smp_mb();
+@@ -984,8 +986,10 @@ static int axienet_rx_poll(struct napi_struct *napi, int budget)
+ cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
+ }
+
+- lp->ndev->stats.rx_packets += packets;
+- lp->ndev->stats.rx_bytes += size;
++ u64_stats_update_begin(&lp->rx_stat_sync);
++ u64_stats_add(&lp->rx_packets, packets);
++ u64_stats_add(&lp->rx_bytes, size);
++ u64_stats_update_end(&lp->rx_stat_sync);
+
+ if (tail_p)
+ axienet_dma_out_addr(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p);
+@@ -1292,10 +1296,32 @@ static int axienet_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ return phylink_mii_ioctl(lp->phylink, rq, cmd);
+ }
+
++static void
++axienet_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
++{
++ struct axienet_local *lp = netdev_priv(dev);
++ unsigned int start;
++
++ netdev_stats_to_stats64(stats, &dev->stats);
++
++ do {
++ start = u64_stats_fetch_begin_irq(&lp->rx_stat_sync);
++ stats->rx_packets = u64_stats_read(&lp->rx_packets);
++ stats->rx_bytes = u64_stats_read(&lp->rx_bytes);
++ } while (u64_stats_fetch_retry_irq(&lp->rx_stat_sync, start));
++
++ do {
++ start = u64_stats_fetch_begin_irq(&lp->tx_stat_sync);
++ stats->tx_packets = u64_stats_read(&lp->tx_packets);
++ stats->tx_bytes = u64_stats_read(&lp->tx_bytes);
++ } while (u64_stats_fetch_retry_irq(&lp->tx_stat_sync, start));
++}
++
+ static const struct net_device_ops axienet_netdev_ops = {
+ .ndo_open = axienet_open,
+ .ndo_stop = axienet_stop,
+ .ndo_start_xmit = axienet_start_xmit,
++ .ndo_get_stats64 = axienet_get_stats64,
+ .ndo_change_mtu = axienet_change_mtu,
+ .ndo_set_mac_address = netdev_set_mac_address,
+ .ndo_validate_addr = eth_validate_addr,
+@@ -1850,6 +1876,9 @@ static int axienet_probe(struct platform_device *pdev)
+ lp->rx_bd_num = RX_BD_NUM_DEFAULT;
+ lp->tx_bd_num = TX_BD_NUM_DEFAULT;
+
++ u64_stats_init(&lp->rx_stat_sync);
++ u64_stats_init(&lp->tx_stat_sync);
++
+ netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll, NAPI_POLL_WEIGHT);
+ netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll, NAPI_POLL_WEIGHT);
+
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 25b38a374e3c3..dd5919ec408bf 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -1051,7 +1051,8 @@ struct net_device_context {
+ u32 vf_alloc;
+ /* Serial number of the VF to team with */
+ u32 vf_serial;
+-
++ /* completion variable to confirm vf association */
++ struct completion vf_add;
+ /* Is the current data path through the VF NIC? */
+ bool data_path_is_vf;
+
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 6e42cb03e226a..456db7c28a34c 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -1580,6 +1580,10 @@ static void netvsc_send_vf(struct net_device *ndev,
+
+ net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;
+ net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;
++
++ if (net_device_ctx->vf_alloc)
++ complete(&net_device_ctx->vf_add);
++
+ netdev_info(ndev, "VF slot %u %s\n",
+ net_device_ctx->vf_serial,
+ net_device_ctx->vf_alloc ? "added" : "removed");
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 15ebd54266049..8113ac17ab70a 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2313,6 +2313,18 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
+
+ }
+
++	/* Fallback path to check the synthetic VF with the
++	 * help of the MAC address
++	 */
++ list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
++ ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++ if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) {
++ netdev_notice(vf_netdev,
++ "falling back to mac addr based matching\n");
++ return ndev;
++ }
++ }
++
+ netdev_notice(vf_netdev,
+ "no netdev found for vf serial:%u\n", serial);
+ return NULL;
+@@ -2409,6 +2421,11 @@ static int netvsc_vf_changed(struct net_device *vf_netdev, unsigned long event)
+ if (net_device_ctx->data_path_is_vf == vf_is_up)
+ return NOTIFY_OK;
+
++ if (vf_is_up && !net_device_ctx->vf_alloc) {
++ netdev_info(ndev, "Waiting for the VF association from host\n");
++ wait_for_completion(&net_device_ctx->vf_add);
++ }
++
+ ret = netvsc_switch_datapath(ndev, vf_is_up);
+
+ if (ret) {
+@@ -2440,6 +2457,7 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev)
+
+ netvsc_vf_setxdp(vf_netdev, NULL);
+
++ reinit_completion(&net_device_ctx->vf_add);
+ netdev_rx_handler_unregister(vf_netdev);
+ netdev_upper_dev_unlink(vf_netdev, ndev);
+ RCU_INIT_POINTER(net_device_ctx->vf_netdev, NULL);
+@@ -2479,6 +2497,7 @@ static int netvsc_probe(struct hv_device *dev,
+
+ INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_link_change);
+
++ init_completion(&net_device_ctx->vf_add);
+ spin_lock_init(&net_device_ctx->lock);
+ INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
+ INIT_DELAYED_WORK(&net_device_ctx->vf_takeover, netvsc_vf_setup);
+diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
+index ff5d0e98a0881..ab3f045629802 100644
+--- a/drivers/net/thunderbolt.c
++++ b/drivers/net/thunderbolt.c
+@@ -612,18 +612,13 @@ static void tbnet_connected_work(struct work_struct *work)
+ return;
+ }
+
+- /* Both logins successful so enable the high-speed DMA paths and
+- * start the network device queue.
++ /* Both logins successful so enable the rings, high-speed DMA
++ * paths and start the network device queue.
++ *
++ * Note we enable the DMA paths last to make sure we have primed
++ * the Rx ring before any incoming packets are allowed to
++ * arrive.
+ */
+- ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
+- net->rx_ring.ring->hop,
+- net->remote_transmit_path,
+- net->tx_ring.ring->hop);
+- if (ret) {
+- netdev_err(net->dev, "failed to enable DMA paths\n");
+- return;
+- }
+-
+ tb_ring_start(net->tx_ring.ring);
+ tb_ring_start(net->rx_ring.ring);
+
+@@ -635,10 +630,21 @@ static void tbnet_connected_work(struct work_struct *work)
+ if (ret)
+ goto err_free_rx_buffers;
+
++ ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
++ net->rx_ring.ring->hop,
++ net->remote_transmit_path,
++ net->tx_ring.ring->hop);
++ if (ret) {
++ netdev_err(net->dev, "failed to enable DMA paths\n");
++ goto err_free_tx_buffers;
++ }
++
+ netif_carrier_on(net->dev);
+ netif_start_queue(net->dev);
+ return;
+
++err_free_tx_buffers:
++ tbnet_free_buffers(&net->tx_ring);
+ err_free_rx_buffers:
+ tbnet_free_buffers(&net->rx_ring);
+ err_stop_rings:
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 688905ea0a6d3..e7b0b59e2bc8c 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -1874,7 +1874,9 @@ static void intr_callback(struct urb *urb)
+ "Stop submitting intr, status %d\n", status);
+ return;
+ case -EOVERFLOW:
+- netif_info(tp, intr, tp->netdev, "intr status -EOVERFLOW\n");
++ if (net_ratelimit())
++ netif_info(tp, intr, tp->netdev,
++ "intr status -EOVERFLOW\n");
+ goto resubmit;
+ /* -EPIPE: should clear the halt */
+ default:
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 688177453b072..07c4a4f0ed33d 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -95,6 +95,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = true,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA988X_HW_2_0_VERSION,
+@@ -133,6 +134,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = true,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9887_HW_1_0_VERSION,
+@@ -172,6 +174,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA6174_HW_3_2_VERSION,
+@@ -206,6 +209,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .supports_peer_stats_info = true,
+ .dynamic_sar_support = true,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA6174_HW_2_1_VERSION,
+@@ -244,6 +248,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA6174_HW_2_1_VERSION,
+@@ -282,6 +287,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA6174_HW_3_0_VERSION,
+@@ -320,6 +326,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA6174_HW_3_2_VERSION,
+@@ -362,6 +369,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .supports_peer_stats_info = true,
+ .dynamic_sar_support = true,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA99X0_HW_2_0_DEV_VERSION,
+@@ -406,6 +414,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9984_HW_1_0_DEV_VERSION,
+@@ -457,6 +466,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9888_HW_2_0_DEV_VERSION,
+@@ -505,6 +515,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9377_HW_1_0_DEV_VERSION,
+@@ -543,6 +554,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9377_HW_1_1_DEV_VERSION,
+@@ -583,6 +595,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA9377_HW_1_1_DEV_VERSION,
+@@ -614,6 +627,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .credit_size_workaround = true,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = QCA4019_HW_1_0_DEV_VERSION,
+@@ -659,6 +673,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = false,
+ .hw_restart_disconnect = false,
++ .use_fw_tx_credits = true,
+ },
+ {
+ .id = WCN3990_HW_1_0_DEV_VERSION,
+@@ -690,6 +705,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ .tx_stats_over_pktlog = false,
+ .dynamic_sar_support = true,
+ .hw_restart_disconnect = true,
++ .use_fw_tx_credits = false,
+ },
+ };
+
+diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
+index fab398046a3f2..6d1784f74bea4 100644
+--- a/drivers/net/wireless/ath/ath10k/htc.c
++++ b/drivers/net/wireless/ath/ath10k/htc.c
+@@ -947,13 +947,18 @@ int ath10k_htc_wait_target(struct ath10k_htc *htc)
+ return -ECOMM;
+ }
+
+- htc->total_transmit_credits = __le16_to_cpu(msg->ready.credit_count);
++ if (ar->hw_params.use_fw_tx_credits)
++ htc->total_transmit_credits = __le16_to_cpu(msg->ready.credit_count);
++ else
++ htc->total_transmit_credits = 1;
++
+ htc->target_credit_size = __le16_to_cpu(msg->ready.credit_size);
+
+ ath10k_dbg(ar, ATH10K_DBG_HTC,
+- "Target ready! transmit resources: %d size:%d\n",
++ "Target ready! transmit resources: %d size:%d actual credits:%d\n",
+ htc->total_transmit_credits,
+- htc->target_credit_size);
++ htc->target_credit_size,
++ msg->ready.credit_count);
+
+ if ((htc->total_transmit_credits == 0) ||
+ (htc->target_credit_size == 0)) {
+diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
+index 93acf0dd580a6..1b99f3a39a113 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.h
++++ b/drivers/net/wireless/ath/ath10k/hw.h
+@@ -635,6 +635,8 @@ struct ath10k_hw_params {
+ bool dynamic_sar_support;
+
+ bool hw_restart_disconnect;
++
++ bool use_fw_tx_credits;
+ };
+
+ struct htt_resp;
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 6407f509e91b8..9a1c970f8f55e 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -864,11 +864,36 @@ static int ath10k_peer_delete(struct ath10k *ar, u32 vdev_id, const u8 *addr)
+ return 0;
+ }
+
++static void ath10k_peer_map_cleanup(struct ath10k *ar, struct ath10k_peer *peer)
++{
++ int peer_id, i;
++
++ lockdep_assert_held(&ar->conf_mutex);
++
++ for_each_set_bit(peer_id, peer->peer_ids,
++ ATH10K_MAX_NUM_PEER_IDS) {
++ ar->peer_map[peer_id] = NULL;
++ }
++
++ /* Double check that peer is properly un-referenced from
++ * the peer_map
++ */
++ for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
++ if (ar->peer_map[i] == peer) {
++ ath10k_warn(ar, "removing stale peer_map entry for %pM (ptr %pK idx %d)\n",
++ peer->addr, peer, i);
++ ar->peer_map[i] = NULL;
++ }
++ }
++
++ list_del(&peer->list);
++ kfree(peer);
++ ar->num_peers--;
++}
++
+ static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
+ {
+ struct ath10k_peer *peer, *tmp;
+- int peer_id;
+- int i;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+@@ -880,25 +905,7 @@ static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
+ ath10k_warn(ar, "removing stale peer %pM from vdev_id %d\n",
+ peer->addr, vdev_id);
+
+- for_each_set_bit(peer_id, peer->peer_ids,
+- ATH10K_MAX_NUM_PEER_IDS) {
+- ar->peer_map[peer_id] = NULL;
+- }
+-
+- /* Double check that peer is properly un-referenced from
+- * the peer_map
+- */
+- for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
+- if (ar->peer_map[i] == peer) {
+- ath10k_warn(ar, "removing stale peer_map entry for %pM (ptr %pK idx %d)\n",
+- peer->addr, peer, i);
+- ar->peer_map[i] = NULL;
+- }
+- }
+-
+- list_del(&peer->list);
+- kfree(peer);
+- ar->num_peers--;
++ ath10k_peer_map_cleanup(ar, peer);
+ }
+ spin_unlock_bh(&ar->data_lock);
+ }
+@@ -7586,10 +7593,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
+ /* Clean up the peer object as well since we
+ * must have failed to do this above.
+ */
+- list_del(&peer->list);
+- ar->peer_map[i] = NULL;
+- kfree(peer);
+- ar->num_peers--;
++ ath10k_peer_map_cleanup(ar, peer);
+ }
+ }
+ spin_unlock_bh(&ar->data_lock);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index c474147101382..911eee9646a45 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -1088,20 +1088,10 @@ err_core_free:
+ return ret;
+ }
+
+-static int ath11k_ahb_remove(struct platform_device *pdev)
++static void ath11k_ahb_remove_prepare(struct ath11k_base *ab)
+ {
+- struct ath11k_base *ab = platform_get_drvdata(pdev);
+ unsigned long left;
+
+- if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+- ath11k_ahb_power_down(ab);
+- ath11k_debugfs_soc_destroy(ab);
+- ath11k_qmi_deinit_service(ab);
+- goto qmi_fail;
+- }
+-
+- reinit_completion(&ab->driver_recovery);
+-
+ if (test_bit(ATH11K_FLAG_RECOVERY, &ab->dev_flags)) {
+ left = wait_for_completion_timeout(&ab->driver_recovery,
+ ATH11K_AHB_RECOVERY_TIMEOUT);
+@@ -1111,19 +1101,60 @@ static int ath11k_ahb_remove(struct platform_device *pdev)
+
+ set_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags);
+ cancel_work_sync(&ab->restart_work);
++ cancel_work_sync(&ab->qmi.event_work);
++}
++
++static void ath11k_ahb_free_resources(struct ath11k_base *ab)
++{
++ struct platform_device *pdev = ab->pdev;
+
+- ath11k_core_deinit(ab);
+-qmi_fail:
+ ath11k_ahb_free_irq(ab);
+ ath11k_hal_srng_deinit(ab);
+ ath11k_ahb_fw_resource_deinit(ab);
+ ath11k_ce_free_pipes(ab);
+ ath11k_core_free(ab);
+ platform_set_drvdata(pdev, NULL);
++}
++
++static int ath11k_ahb_remove(struct platform_device *pdev)
++{
++ struct ath11k_base *ab = platform_get_drvdata(pdev);
++
++ if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
++ ath11k_ahb_power_down(ab);
++ ath11k_debugfs_soc_destroy(ab);
++ ath11k_qmi_deinit_service(ab);
++ goto qmi_fail;
++ }
++
++ ath11k_ahb_remove_prepare(ab);
++ ath11k_core_deinit(ab);
++
++qmi_fail:
++ ath11k_ahb_free_resources(ab);
+
+ return 0;
+ }
+
++static void ath11k_ahb_shutdown(struct platform_device *pdev)
++{
++ struct ath11k_base *ab = platform_get_drvdata(pdev);
++
++ /* platform shutdown() & remove() are mutually exclusive.
++ * remove() is invoked during rmmod & shutdown() during
++ * system reboot/shutdown.
++ */
++ ath11k_ahb_remove_prepare(ab);
++
++ if (!(test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags)))
++ goto free_resources;
++
++ ath11k_core_deinit(ab);
++
++free_resources:
++ ath11k_ahb_free_resources(ab);
++}
++
+ static struct platform_driver ath11k_ahb_driver = {
+ .driver = {
+ .name = "ath11k",
+@@ -1131,6 +1162,7 @@ static struct platform_driver ath11k_ahb_driver = {
+ },
+ .probe = ath11k_ahb_probe,
+ .remove = ath11k_ahb_remove,
++ .shutdown = ath11k_ahb_shutdown,
+ };
+
+ static int ath11k_ahb_init(void)
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 6ddc698f4a2dc..209345bedd09b 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -1635,6 +1635,8 @@ static void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab)
+
+ wake_up(&ab->wmi_ab.tx_credits_wq);
+ wake_up(&ab->peer_mapping_wq);
++
++ reinit_completion(&ab->driver_recovery);
+ }
+
+ static void ath11k_core_post_reconfigure_recovery(struct ath11k_base *ab)
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index b3e133add1ce5..1459e3b1c4f50 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -5194,7 +5194,8 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
+ if (log_type != ATH11K_PKTLOG_TYPE_INVALID)
+ trace_ath11k_htt_rxdesc(ar, skb->data, log_type, rx_buf_sz);
+
+- memset(ppdu_info, 0, sizeof(struct hal_rx_mon_ppdu_info));
++ memset(ppdu_info, 0, sizeof(*ppdu_info));
++ ppdu_info->peer_id = HAL_INVALID_PEERID;
+ hal_status = ath11k_hal_rx_parse_mon_status(ab, ppdu_info, skb);
+
+ if (test_bit(ATH11K_FLAG_MONITOR_STARTED, &ar->monitor_flags) &&
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 06b86dcc3826b..94d9a7190953b 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4949,6 +4949,8 @@ static int ath11k_mac_set_txbf_conf(struct ath11k_vif *arvif)
+ if (vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE)) {
+ nsts = vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++ if (nsts > (ar->num_rx_chains - 1))
++ nsts = ar->num_rx_chains - 1;
+ value |= SM(nsts, WMI_TXBF_STS_CAP_OFFSET);
+ }
+
+@@ -4989,7 +4991,7 @@ static int ath11k_mac_set_txbf_conf(struct ath11k_vif *arvif)
+ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ {
+ bool subfer, subfee;
+- int sound_dim = 0;
++ int sound_dim = 0, nsts = 0;
+
+ subfer = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE));
+ subfee = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE));
+@@ -4999,6 +5001,11 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ subfer = false;
+ }
+
++ if (ar->num_rx_chains < 2) {
++ *vht_cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE);
++ subfee = false;
++ }
++
+ /* If SU Beaformer is not set, then disable MU Beamformer Capability */
+ if (!subfer)
+ *vht_cap &= ~(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE);
+@@ -5011,7 +5018,9 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ sound_dim >>= IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_SHIFT;
+ *vht_cap &= ~IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK;
+
+- /* TODO: Need to check invalid STS and Sound_dim values set by FW? */
++ nsts = (*vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
++ nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++ *vht_cap &= ~IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+
+ /* Enable Sounding Dimension Field only if SU BF is enabled */
+ if (subfer) {
+@@ -5023,9 +5032,15 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+ *vht_cap |= sound_dim;
+ }
+
+- /* Use the STS advertised by FW unless SU Beamformee is not supported*/
+- if (!subfee)
+- *vht_cap &= ~(IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
++ /* Enable Beamformee STS Field only if SU BF is enabled */
++ if (subfee) {
++ if (nsts > (ar->num_rx_chains - 1))
++ nsts = ar->num_rx_chains - 1;
++
++ nsts <<= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
++ nsts &= IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
++ *vht_cap |= nsts;
++ }
+ }
+
+ static struct ieee80211_sta_vht_cap
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index c44df17719f64..86995e8dc9135 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -402,8 +402,7 @@ int ath11k_mhi_register(struct ath11k_pci *ab_pci)
+ ret = ath11k_mhi_get_msi(ab_pci);
+ if (ret) {
+ ath11k_err(ab, "failed to get msi for mhi\n");
+- mhi_free_controller(mhi_ctrl);
+- return ret;
++ goto free_controller;
+ }
+
+ if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
+@@ -412,7 +411,7 @@ int ath11k_mhi_register(struct ath11k_pci *ab_pci)
+ if (test_bit(ATH11K_FLAG_FIXED_MEM_RGN, &ab->dev_flags)) {
+ ret = ath11k_mhi_read_addr_from_dt(mhi_ctrl);
+ if (ret < 0)
+- return ret;
++ goto free_controller;
+ } else {
+ mhi_ctrl->iova_start = 0;
+ mhi_ctrl->iova_stop = 0xFFFFFFFF;
+@@ -440,18 +439,22 @@ int ath11k_mhi_register(struct ath11k_pci *ab_pci)
+ default:
+ ath11k_err(ab, "failed assign mhi_config for unknown hw rev %d\n",
+ ab->hw_rev);
+- mhi_free_controller(mhi_ctrl);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto free_controller;
+ }
+
+ ret = mhi_register_controller(mhi_ctrl, ath11k_mhi_config);
+ if (ret) {
+ ath11k_err(ab, "failed to register to mhi bus, err = %d\n", ret);
+- mhi_free_controller(mhi_ctrl);
+- return ret;
++ goto free_controller;
+ }
+
+ return 0;
++
++free_controller:
++ mhi_free_controller(mhi_ctrl);
++ ab_pci->mhi_ctrl = NULL;
++ return ret;
+ }
+
+ void ath11k_mhi_unregister(struct ath11k_pci *ab_pci)
+diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
+index 9e22aaf34b88c..1ae7af02c364e 100644
+--- a/drivers/net/wireless/ath/ath11k/peer.c
++++ b/drivers/net/wireless/ath/ath11k/peer.c
+@@ -302,6 +302,21 @@ static int __ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, const u8 *addr)
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath11k_peer_find_by_addr(ab, addr);
++	/* Check if the found peer is the one we want to remove.
++	 * While the sta is transitioning to another band we may
++	 * have two peers with the same address assigned to
++	 * different vdev_ids. Make sure we are deleting the correct peer.
++	 */
++ if (peer && peer->vdev_id == vdev_id)
++ ath11k_peer_rhash_delete(ab, peer);
++
++	/* Fall back to a peer list search if the correct peer can't be found.
++ * Skip the deletion of the peer from the rhash since it has already
++ * been deleted in peer add.
++ */
++ if (!peer)
++ peer = ath11k_peer_find(ab, vdev_id, addr);
++
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ mutex_unlock(&ab->tbl_mtx_lock);
+@@ -312,8 +327,6 @@ static int __ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, const u8 *addr)
+ return -EINVAL;
+ }
+
+- ath11k_peer_rhash_delete(ab, peer);
+-
+ spin_unlock_bh(&ab->base_lock);
+ mutex_unlock(&ab->tbl_mtx_lock);
+
+@@ -372,8 +385,17 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath11k_peer_find_by_addr(ar->ab, param->peer_addr);
+ if (peer) {
+- spin_unlock_bh(&ar->ab->base_lock);
+- return -EINVAL;
++ if (peer->vdev_id == param->vdev_id) {
++ spin_unlock_bh(&ar->ab->base_lock);
++ return -EINVAL;
++ }
++
++ /* Assume sta is transitioning to another band.
++	 * Remove the peer from the rhash here.
++ */
++ mutex_lock(&ar->ab->tbl_mtx_lock);
++ ath11k_peer_rhash_delete(ar->ab, peer);
++ mutex_unlock(&ar->ab->tbl_mtx_lock);
+ }
+ spin_unlock_bh(&ar->ab->base_lock);
+
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index 61ead37a944a8..109f4b618428e 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -1696,6 +1696,13 @@ static struct qmi_elem_info qmi_wlanfw_wlan_ini_resp_msg_v01_ei[] = {
+ },
+ };
+
++static struct qmi_elem_info qmi_wlfw_fw_init_done_ind_msg_v01_ei[] = {
++ {
++ .data_type = QMI_EOTI,
++ .array_type = NO_ARRAY,
++ },
++};
++
+ static int ath11k_qmi_host_cap_send(struct ath11k_base *ab)
+ {
+ struct qmi_wlanfw_host_cap_req_msg_v01 req;
+@@ -3006,6 +3013,10 @@ static void ath11k_qmi_msg_fw_ready_cb(struct qmi_handle *qmi_hdl,
+ struct ath11k_base *ab = qmi->ab;
+
+ ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi firmware ready\n");
++
++ ab->qmi.cal_done = 1;
++ wake_up(&ab->qmi.cold_boot_waitq);
++
+ ath11k_qmi_driver_event_post(qmi, ATH11K_QMI_EVENT_FW_READY, NULL);
+ }
+
+@@ -3018,11 +3029,22 @@ static void ath11k_qmi_msg_cold_boot_cal_done_cb(struct qmi_handle *qmi_hdl,
+ struct ath11k_qmi, handle);
+ struct ath11k_base *ab = qmi->ab;
+
+- ab->qmi.cal_done = 1;
+- wake_up(&ab->qmi.cold_boot_waitq);
+ ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi cold boot calibration done\n");
+ }
+
++static void ath11k_qmi_msg_fw_init_done_cb(struct qmi_handle *qmi_hdl,
++ struct sockaddr_qrtr *sq,
++ struct qmi_txn *txn,
++ const void *decoded)
++{
++ struct ath11k_qmi *qmi = container_of(qmi_hdl,
++ struct ath11k_qmi, handle);
++ struct ath11k_base *ab = qmi->ab;
++
++ ath11k_qmi_driver_event_post(qmi, ATH11K_QMI_EVENT_FW_INIT_DONE, NULL);
++ ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi firmware init done\n");
++}
++
+ static const struct qmi_msg_handler ath11k_qmi_msg_handlers[] = {
+ {
+ .type = QMI_INDICATION,
+@@ -3053,6 +3075,14 @@ static const struct qmi_msg_handler ath11k_qmi_msg_handlers[] = {
+ sizeof(struct qmi_wlanfw_fw_cold_cal_done_ind_msg_v01),
+ .fn = ath11k_qmi_msg_cold_boot_cal_done_cb,
+ },
++ {
++ .type = QMI_INDICATION,
++ .msg_id = QMI_WLFW_FW_INIT_DONE_IND_V01,
++ .ei = qmi_wlfw_fw_init_done_ind_msg_v01_ei,
++ .decoded_size =
++ sizeof(struct qmi_wlfw_fw_init_done_ind_msg_v01),
++ .fn = ath11k_qmi_msg_fw_init_done_cb,
++ },
+ };
+
+ static int ath11k_qmi_ops_new_server(struct qmi_handle *qmi_hdl,
+@@ -3145,7 +3175,7 @@ static void ath11k_qmi_driver_event_work(struct work_struct *work)
+ }
+
+ break;
+- case ATH11K_QMI_EVENT_FW_READY:
++ case ATH11K_QMI_EVENT_FW_INIT_DONE:
+ clear_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags);
+ if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags)) {
+ ath11k_hal_dump_srng_stats(ab);
+@@ -3168,6 +3198,8 @@ static void ath11k_qmi_driver_event_work(struct work_struct *work)
+ set_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags);
+ }
+
++ break;
++ case ATH11K_QMI_EVENT_FW_READY:
+ break;
+ case ATH11K_QMI_EVENT_COLD_BOOT_CAL_DONE:
+ break;
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h
+index c83cf822be81a..2ec56a34fa810 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.h
++++ b/drivers/net/wireless/ath/ath11k/qmi.h
+@@ -31,8 +31,9 @@
+
+ #define QMI_WLFW_REQUEST_MEM_IND_V01 0x0035
+ #define QMI_WLFW_FW_MEM_READY_IND_V01 0x0037
+-#define QMI_WLFW_COLD_BOOT_CAL_DONE_IND_V01 0x0021
+-#define QMI_WLFW_FW_READY_IND_V01 0x0038
++#define QMI_WLFW_COLD_BOOT_CAL_DONE_IND_V01 0x003E
++#define QMI_WLFW_FW_READY_IND_V01 0x0021
++#define QMI_WLFW_FW_INIT_DONE_IND_V01 0x0038
+
+ #define QMI_WLANFW_MAX_DATA_SIZE_V01 6144
+ #define ATH11K_FIRMWARE_MODE_OFF 4
+@@ -69,6 +70,7 @@ enum ath11k_qmi_event_type {
+ ATH11K_QMI_EVENT_FORCE_FW_ASSERT,
+ ATH11K_QMI_EVENT_POWER_UP,
+ ATH11K_QMI_EVENT_POWER_DOWN,
++ ATH11K_QMI_EVENT_FW_INIT_DONE,
+ ATH11K_QMI_EVENT_MAX,
+ };
+
+@@ -291,6 +293,10 @@ struct qmi_wlanfw_fw_cold_cal_done_ind_msg_v01 {
+ char placeholder;
+ };
+
++struct qmi_wlfw_fw_init_done_ind_msg_v01 {
++ char placeholder;
++};
++
+ #define QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN 0
+ #define QMI_WLANFW_CAP_RESP_MSG_V01_MAX_LEN 235
+ #define QMI_WLANFW_CAP_REQ_V01 0x0024
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index cc84bd53ddae9..1c8aa503e6144 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -9003,12 +9003,13 @@ int ath11k_wmi_sta_keepalive(struct ath11k *ar,
+ cmd->interval = arg->interval;
+ cmd->method = arg->method;
+
++ arp = (struct wmi_sta_keepalive_arp_resp *)(cmd + 1);
++ arp->tlv_header = FIELD_PREP(WMI_TLV_TAG,
++ WMI_TAG_STA_KEEPALIVE_ARP_RESPONSE) |
++ FIELD_PREP(WMI_TLV_LEN, sizeof(*arp) - TLV_HDR_SIZE);
++
+ if (arg->method == WMI_STA_KEEPALIVE_METHOD_UNSOLICITED_ARP_RESPONSE ||
+ arg->method == WMI_STA_KEEPALIVE_METHOD_GRATUITOUS_ARP_REQUEST) {
+- arp = (struct wmi_sta_keepalive_arp_resp *)(cmd + 1);
+- arp->tlv_header = FIELD_PREP(WMI_TLV_TAG,
+- WMI_TAG_STA_KEEPALVE_ARP_RESPONSE) |
+- FIELD_PREP(WMI_TLV_LEN, sizeof(*arp) - TLV_HDR_SIZE);
+ arp->src_ip4_addr = arg->src_ip4_addr;
+ arp->dest_ip4_addr = arg->dest_ip4_addr;
+ ether_addr_copy(arp->dest_mac_addr.addr, arg->dest_mac_addr);
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index b1fad4707dc60..ca3b9a384d605 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -1214,7 +1214,7 @@ enum wmi_tlv_tag {
+ WMI_TAG_NS_OFFLOAD_TUPLE,
+ WMI_TAG_FTM_INTG_CMD,
+ WMI_TAG_STA_KEEPALIVE_CMD,
+- WMI_TAG_STA_KEEPALVE_ARP_RESPONSE,
++ WMI_TAG_STA_KEEPALIVE_ARP_RESPONSE,
+ WMI_TAG_P2P_SET_VENDOR_IE_DATA_CMD,
+ WMI_TAG_AP_PS_PEER_CMD,
+ WMI_TAG_PEER_RATE_RETRY_SCHED_CMD,
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index 994ec48b2f669..ca05b07a45e67 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -364,33 +364,27 @@ ret:
+ }
+
+ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
+- struct sk_buff *skb)
++ struct sk_buff *skb, u32 len)
+ {
+ uint32_t *pattern = (uint32_t *)skb->data;
+
+- switch (*pattern) {
+- case 0x33221199:
+- {
++ if (*pattern == 0x33221199 && len >= sizeof(struct htc_panic_bad_vaddr)) {
+ struct htc_panic_bad_vaddr *htc_panic;
+ htc_panic = (struct htc_panic_bad_vaddr *) skb->data;
+ dev_err(htc_handle->dev, "ath: firmware panic! "
+ "exccause: 0x%08x; pc: 0x%08x; badvaddr: 0x%08x.\n",
+ htc_panic->exccause, htc_panic->pc,
+ htc_panic->badvaddr);
+- break;
+- }
+- case 0x33221299:
+- {
++ return;
++ }
++ if (*pattern == 0x33221299) {
+ struct htc_panic_bad_epid *htc_panic;
+ htc_panic = (struct htc_panic_bad_epid *) skb->data;
+ dev_err(htc_handle->dev, "ath: firmware panic! "
+ "bad epid: 0x%08x\n", htc_panic->epid);
+- break;
+- }
+- default:
+- dev_err(htc_handle->dev, "ath: unknown panic pattern!\n");
+- break;
++ return;
+ }
++ dev_err(htc_handle->dev, "ath: unknown panic pattern!\n");
+ }
+
+ /*
+@@ -411,16 +405,26 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+ if (!htc_handle || !skb)
+ return;
+
++ /* A valid message requires len >= 8.
++ *
++ * sizeof(struct htc_frame_hdr) == 8
++ * sizeof(struct htc_ready_msg) == 8
++ * sizeof(struct htc_panic_bad_vaddr) == 16
++ * sizeof(struct htc_panic_bad_epid) == 8
++ */
++ if (unlikely(len < sizeof(struct htc_frame_hdr)))
++ goto invalid;
+ htc_hdr = (struct htc_frame_hdr *) skb->data;
+ epid = htc_hdr->endpoint_id;
+
+ if (epid == 0x99) {
+- ath9k_htc_fw_panic_report(htc_handle, skb);
++ ath9k_htc_fw_panic_report(htc_handle, skb, len);
+ kfree_skb(skb);
+ return;
+ }
+
+ if (epid < 0 || epid >= ENDPOINT_MAX) {
++invalid:
+ if (pipe_id != USB_REG_IN_PIPE)
+ dev_kfree_skb_any(skb);
+ else
+@@ -432,21 +436,30 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+
+ /* Handle trailer */
+ if (htc_hdr->flags & HTC_FLAGS_RECV_TRAILER) {
+- if (be32_to_cpu(*(__be32 *) skb->data) == 0x00C60000)
++ if (be32_to_cpu(*(__be32 *) skb->data) == 0x00C60000) {
+ /* Move past the Watchdog pattern */
+ htc_hdr = (struct htc_frame_hdr *)(skb->data + 4);
++ len -= 4;
++ }
+ }
+
+ /* Get the message ID */
++ if (unlikely(len < sizeof(struct htc_frame_hdr) + sizeof(__be16)))
++ goto invalid;
+ msg_id = (__be16 *) ((void *) htc_hdr +
+ sizeof(struct htc_frame_hdr));
+
+ /* Now process HTC messages */
+ switch (be16_to_cpu(*msg_id)) {
+ case HTC_MSG_READY_ID:
++ if (unlikely(len < sizeof(struct htc_ready_msg)))
++ goto invalid;
+ htc_process_target_rdy(htc_handle, htc_hdr);
+ break;
+ case HTC_MSG_CONNECT_SERVICE_RESPONSE_ID:
++ if (unlikely(len < sizeof(struct htc_frame_hdr) +
++ sizeof(struct htc_conn_svc_rspmsg)))
++ goto invalid;
+ htc_process_conn_rsp(htc_handle, htc_hdr);
+ break;
+ default:
+diff --git a/drivers/net/wireless/ath/ath9k/rng.c b/drivers/net/wireless/ath/ath9k/rng.c
+index cb5414265a9b5..58c0ab01771b0 100644
+--- a/drivers/net/wireless/ath/ath9k/rng.c
++++ b/drivers/net/wireless/ath/ath9k/rng.c
+@@ -83,7 +83,8 @@ static int ath9k_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
+ if (!wait || !max || likely(bytes_read) || fail_stats > 110)
+ break;
+
+- msleep_interruptible(ath9k_rng_delay_get(++fail_stats));
++ if (hwrng_msleep(rng, ath9k_rng_delay_get(++fail_stats)))
++ break;
+ }
+
+ if (wait && !bytes_read && max)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 87aef211b35f3..12ee8b7163fd6 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -296,6 +296,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ struct brcmf_pub *drvr = ifp->drvr;
+ struct ethhdr *eh;
+ int head_delta;
++ unsigned int tx_bytes = skb->len;
+
+ brcmf_dbg(DATA, "Enter, bsscfgidx=%d\n", ifp->bsscfgidx);
+
+@@ -370,7 +371,7 @@ done:
+ ndev->stats.tx_dropped++;
+ } else {
+ ndev->stats.tx_packets++;
+- ndev->stats.tx_bytes += skb->len;
++ ndev->stats.tx_bytes += tx_bytes;
+ }
+
+ /* Return ok: we always eat the packet */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
+index fabfbb0b40b0c..d0a7465be586d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
+@@ -158,12 +158,12 @@ static int brcmf_pno_set_random(struct brcmf_if *ifp, struct brcmf_pno_info *pi)
+ struct brcmf_pno_macaddr_le pfn_mac;
+ u8 *mac_addr = NULL;
+ u8 *mac_mask = NULL;
+- int err, i;
++ int err, i, ri;
+
+- for (i = 0; i < pi->n_reqs; i++)
+- if (pi->reqs[i]->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
+- mac_addr = pi->reqs[i]->mac_addr;
+- mac_mask = pi->reqs[i]->mac_addr_mask;
++ for (ri = 0; ri < pi->n_reqs; ri++)
++ if (pi->reqs[ri]->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
++ mac_addr = pi->reqs[ri]->mac_addr;
++ mac_mask = pi->reqs[ri]->mac_addr_mask;
+ break;
+ }
+
+@@ -185,7 +185,7 @@ static int brcmf_pno_set_random(struct brcmf_if *ifp, struct brcmf_pno_info *pi)
+ pfn_mac.mac[0] |= 0x02;
+
+ brcmf_dbg(SCAN, "enabling random mac: reqid=%llu mac=%pM\n",
+- pi->reqs[i]->reqid, pfn_mac.mac);
++ pi->reqs[ri]->reqid, pfn_mac.mac);
+ err = brcmf_fil_iovar_data_set(ifp, "pfn_macaddr", &pfn_mac,
+ sizeof(pfn_mac));
+ if (err)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index d722c3c177bee..4a1e6b92ff734 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -1194,12 +1194,16 @@ static void mt7615_sta_set_decap_offload(struct ieee80211_hw *hw,
+ struct mt7615_dev *dev = mt7615_hw_dev(hw);
+ struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
+
++ mt7615_mutex_acquire(dev);
++
+ if (enabled)
+ set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+ else
+ clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+
+ mt7615_mcu_set_sta_decap_offload(dev, vif, sta);
++
++ mt7615_mutex_release(dev);
+ }
+
+ #ifdef CONFIG_PM
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 7eb23805aa942..d10b441eac4f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -258,8 +258,10 @@ mt76_connac_mcu_add_nested_tlv(struct sk_buff *skb, int tag, int len,
+ ntlv = le16_to_cpu(ntlv_hdr->tlv_num);
+ ntlv_hdr->tlv_num = cpu_to_le16(ntlv + 1);
+
+- if (sta_hdr)
+- le16_add_cpu(&sta_hdr->len, len);
++ if (sta_hdr) {
++ len += le16_to_cpu(sta_hdr->len);
++ sta_hdr->len = cpu_to_le16(len);
++ }
+
+ return ptlv;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index fd76db8f5269c..6ef3431cad648 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -23,9 +23,9 @@ mt7915_implicit_txbf_set(void *data, u64 val)
+ {
+ struct mt7915_dev *dev = data;
+
+- if (test_bit(MT76_STATE_RUNNING, &dev->mphy.state))
+- return -EBUSY;
+-
++ /* The existing connected stations shall reconnect to apply
++ * new implicit txbf configuration.
++ */
+ dev->ibf = !!val;
+
+ return mt7915_mcu_set_txbf(dev, MT_BF_TYPE_UPDATE);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 89f10bf885ba8..4f3a3a88f0863 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -2536,8 +2536,9 @@ void mt7915_mac_add_twt_setup(struct ieee80211_hw *hw,
+ }
+
+ flowid = ffs(~msta->twt.flowid_mask) - 1;
+- le16p_replace_bits(&twt_agrt->req_type, flowid,
+- IEEE80211_TWT_REQTYPE_FLOWID);
++ twt_agrt->req_type &= ~cpu_to_le16(IEEE80211_TWT_REQTYPE_FLOWID);
++ twt_agrt->req_type |= le16_encode_bits(flowid,
++ IEEE80211_TWT_REQTYPE_FLOWID);
+
+ table_id = ffs(~dev->twt.table_mask) - 1;
+ exp = FIELD_GET(IEEE80211_TWT_REQTYPE_WAKE_INT_EXP, req_type);
+@@ -2587,8 +2588,9 @@ void mt7915_mac_add_twt_setup(struct ieee80211_hw *hw,
+ unlock:
+ mutex_unlock(&dev->mt76.mutex);
+ out:
+- le16p_replace_bits(&twt_agrt->req_type, setup_cmd,
+- IEEE80211_TWT_REQTYPE_SETUP_CMD);
++ twt_agrt->req_type &= ~cpu_to_le16(IEEE80211_TWT_REQTYPE_SETUP_CMD);
++ twt_agrt->req_type |=
++ le16_encode_bits(setup_cmd, IEEE80211_TWT_REQTYPE_SETUP_CMD);
+ twt->control = (twt->control & IEEE80211_TWT_CONTROL_WAKE_DUR_UNIT) |
+ (twt->control & IEEE80211_TWT_CONTROL_RX_DISABLED);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 17fa2acc0d070..ec8a5083466f7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -1460,7 +1460,7 @@ mt7915_mcu_add_rate_ctrl_fixed(struct mt7915_dev *dev,
+ struct sta_phy phy = {};
+ int ret, nrates = 0;
+
+-#define __sta_phy_bitrate_mask_check(_mcs, _gi, _he) \
++#define __sta_phy_bitrate_mask_check(_mcs, _gi, _ht, _he) \
+ do { \
+ u8 i, gi = mask->control[band]._gi; \
+ gi = (_he) ? gi : gi == NL80211_TXRATE_FORCE_SGI; \
+@@ -1473,15 +1473,17 @@ mt7915_mcu_add_rate_ctrl_fixed(struct mt7915_dev *dev,
+ continue; \
+ nrates += hweight16(mask->control[band]._mcs[i]); \
+ phy.mcs = ffs(mask->control[band]._mcs[i]) - 1; \
++ if (_ht) \
++ phy.mcs += 8 * i; \
+ } \
+ } while (0)
+
+ if (sta->deflink.he_cap.has_he) {
+- __sta_phy_bitrate_mask_check(he_mcs, he_gi, 1);
++ __sta_phy_bitrate_mask_check(he_mcs, he_gi, 0, 1);
+ } else if (sta->deflink.vht_cap.vht_supported) {
+- __sta_phy_bitrate_mask_check(vht_mcs, gi, 0);
++ __sta_phy_bitrate_mask_check(vht_mcs, gi, 0, 0);
+ } else if (sta->deflink.ht_cap.ht_supported) {
+- __sta_phy_bitrate_mask_check(ht_mcs, gi, 0);
++ __sta_phy_bitrate_mask_check(ht_mcs, gi, 1, 0);
+ } else {
+ nrates = hweight32(mask->control[band].legacy);
+ phy.mcs = ffs(mask->control[band].legacy) - 1;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 2a2ea7b9977a4..7e0cddc2aeab5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -1215,6 +1215,7 @@ void mt7921_mac_reset_work(struct work_struct *work)
+ void mt7921_reset(struct mt76_dev *mdev)
+ {
+ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ struct mt76_connac_pm *pm = &dev->pm;
+
+ if (!dev->hw_init_done)
+ return;
+@@ -1222,8 +1223,12 @@ void mt7921_reset(struct mt76_dev *mdev)
+ if (dev->hw_full_reset)
+ return;
+
++ if (pm->suspended)
++ return;
++
+ queue_work(dev->mt76.wq, &dev->reset_work);
+ }
++EXPORT_SYMBOL_GPL(mt7921_reset);
+
+ void mt7921_mac_update_mib_stats(struct mt7921_phy *phy)
+ {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index d3f310877248b..94dd0c1d4cb8b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -738,6 +738,7 @@ void mt7921_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+
+ mt7921_mac_wtbl_update(dev, msta->wcid.idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
++ memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
+
+ mt7921_mcu_sta_update(dev, sta, vif, true, MT76_STA_INFO_STATE_ASSOC);
+
+@@ -1390,6 +1391,8 @@ static void mt7921_sta_set_decap_offload(struct ieee80211_hw *hw,
+ struct mt7921_sta *msta = (struct mt7921_sta *)sta->drv_priv;
+ struct mt7921_dev *dev = mt7921_hw_dev(hw);
+
++ mt7921_mutex_acquire(dev);
++
+ if (enabled)
+ set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+ else
+@@ -1397,6 +1400,8 @@ static void mt7921_sta_set_decap_offload(struct ieee80211_hw *hw,
+
+ mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76, vif, &msta->wcid,
+ MCU_UNI_CMD(STA_REC_UPDATE));
++
++ mt7921_mutex_release(dev);
+ }
+
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -1499,17 +1504,23 @@ mt7921_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ struct mt7921_dev *dev = mt7921_hw_dev(hw);
+ int err;
+
++ mt7921_mutex_acquire(dev);
++
+ err = mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid,
+ true);
+ if (err)
+- return err;
++ goto out;
+
+ err = mt7921_mcu_set_bss_pm(dev, vif, true);
+ if (err)
+- return err;
++ goto out;
++
++ err = mt7921_mcu_sta_update(dev, NULL, vif, true,
++ MT76_STA_INFO_STATE_NONE);
++out:
++ mt7921_mutex_release(dev);
+
+- return mt7921_mcu_sta_update(dev, NULL, vif, true,
+- MT76_STA_INFO_STATE_NONE);
++ return err;
+ }
+
+ static void
+@@ -1520,11 +1531,16 @@ mt7921_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ struct mt7921_dev *dev = mt7921_hw_dev(hw);
+ int err;
+
++ mt7921_mutex_acquire(dev);
++
+ err = mt7921_mcu_set_bss_pm(dev, vif, false);
+ if (err)
+- return;
++ goto out;
+
+ mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid, false);
++
++out:
++ mt7921_mutex_release(dev);
+ }
+
+ const struct ieee80211_ops mt7921_ops = {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index b5fb22b8e0869..d8347b33641ee 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -289,6 +289,8 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ goto err_free_pci_vec;
+ }
+
++ pci_set_drvdata(pdev, mdev);
++
+ dev = container_of(mdev, struct mt7921_dev, mt76);
+ dev->hif_ops = &mt7921_pcie_ops;
+
+@@ -368,6 +370,7 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ int i, err;
+
+ pm->suspended = true;
++ flush_work(&dev->reset_work);
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+
+@@ -433,6 +436,9 @@ restore_napi:
+ restore_suspend:
+ pm->suspended = false;
+
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
++
+ return err;
+ }
+
+@@ -451,7 +457,7 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+
+ err = mt7921_mcu_drv_pmctrl(dev);
+ if (err < 0)
+- return err;
++ goto failed;
+
+ mt7921_wpdma_reinit_cond(dev);
+
+@@ -481,11 +487,12 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+ mt76_connac_mcu_set_deep_sleep(&dev->mt76, false);
+
+ err = mt76_connac_mcu_set_hif_suspend(mdev, false);
+- if (err)
+- return err;
+-
++failed:
+ pm->suspended = false;
+
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
++
+ return err;
+ }
+ #endif /* CONFIG_PM */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+index af26d59fa2f04..5610c63fe1e60 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+@@ -206,6 +206,7 @@ static int mt7921s_suspend(struct device *__dev)
+ pm->suspended = true;
+ set_bit(MT76_STATE_SUSPEND, &mdev->phy.state);
+
++ flush_work(&dev->reset_work);
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+
+@@ -261,6 +262,9 @@ restore_suspend:
+ clear_bit(MT76_STATE_SUSPEND, &mdev->phy.state);
+ pm->suspended = false;
+
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
++
+ return err;
+ }
+
+@@ -276,7 +280,7 @@ static int mt7921s_resume(struct device *__dev)
+
+ err = mt7921_mcu_drv_pmctrl(dev);
+ if (err < 0)
+- return err;
++ goto failed;
+
+ mt76_worker_enable(&mdev->tx_worker);
+ mt76_worker_enable(&mdev->sdio.txrx_worker);
+@@ -288,11 +292,12 @@ static int mt7921s_resume(struct device *__dev)
+ mt76_connac_mcu_set_deep_sleep(mdev, false);
+
+ err = mt76_connac_mcu_set_hif_suspend(mdev, false);
+- if (err)
+- return err;
+-
++failed:
+ pm->suspended = false;
+
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
++
+ return err;
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+index dc38baef273a7..25b4a8001b9e5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+@@ -292,11 +292,15 @@ static void mt7921u_disconnect(struct usb_interface *usb_intf)
+ static int mt7921u_suspend(struct usb_interface *intf, pm_message_t state)
+ {
+ struct mt7921_dev *dev = usb_get_intfdata(intf);
++ struct mt76_connac_pm *pm = &dev->pm;
+ int err;
+
++ pm->suspended = true;
++ flush_work(&dev->reset_work);
++
+ err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, true);
+ if (err)
+- return err;
++ goto failed;
+
+ mt76u_stop_rx(&dev->mt76);
+ mt76u_stop_tx(&dev->mt76);
+@@ -304,11 +308,20 @@ static int mt7921u_suspend(struct usb_interface *intf, pm_message_t state)
+ set_bit(MT76_STATE_SUSPEND, &dev->mphy.state);
+
+ return 0;
++
++failed:
++ pm->suspended = false;
++
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
++
++ return err;
+ }
+
+ static int mt7921u_resume(struct usb_interface *intf)
+ {
+ struct mt7921_dev *dev = usb_get_intfdata(intf);
++ struct mt76_connac_pm *pm = &dev->pm;
+ bool reinit = true;
+ int err, i;
+
+@@ -330,16 +343,23 @@ static int mt7921u_resume(struct usb_interface *intf)
+ if (reinit || mt7921_dma_need_reinit(dev)) {
+ err = mt7921u_dma_init(dev, true);
+ if (err)
+- return err;
++ goto failed;
+ }
+
+ clear_bit(MT76_STATE_SUSPEND, &dev->mphy.state);
+
+ err = mt76u_resume_rx(&dev->mt76);
+ if (err < 0)
+- return err;
++ goto failed;
++
++ err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, false);
++failed:
++ pm->suspended = false;
++
++ if (err < 0)
++ mt7921_reset(&dev->mt76);
+
+- return mt76_connac_mcu_set_hif_suspend(&dev->mt76, false);
++ return err;
+ }
+ #endif /* CONFIG_PM */
+
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index def7f325f5c54..140145e03f12c 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -480,14 +480,14 @@ static void mt76s_status_worker(struct mt76_worker *w)
+ if (ndata_frames > 0)
+ resched = true;
+
+- if (dev->drv->tx_status_data &&
++ if (dev->drv->tx_status_data && ndata_frames > 0 &&
+ !test_and_set_bit(MT76_READING_STATS, &dev->phy.state) &&
+ !test_bit(MT76_STATE_SUSPEND, &dev->phy.state))
+- queue_work(dev->wq, &dev->sdio.stat_work);
++ ieee80211_queue_work(dev->hw, &dev->sdio.stat_work);
+ } while (nframes > 0);
+
+ if (resched)
+- mt76_worker_schedule(&dev->sdio.txrx_worker);
++ mt76_worker_schedule(&dev->tx_worker);
+ }
+
+ static void mt76s_tx_status_data(struct work_struct *work)
+@@ -510,7 +510,7 @@ static void mt76s_tx_status_data(struct work_struct *work)
+ }
+
+ if (count && test_bit(MT76_STATE_RUNNING, &dev->phy.state))
+- queue_work(dev->wq, &sdio->stat_work);
++ ieee80211_queue_work(dev->hw, &sdio->stat_work);
+ else
+ clear_bit(MT76_READING_STATS, &dev->phy.state);
+ }
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+index cbdaf7992f98a..cf3ef44f18f25 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+@@ -4164,7 +4164,10 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev,
+ rt2800_bbp_write(rt2x00dev, 62, 0x37 - rt2x00dev->lna_gain);
+ rt2800_bbp_write(rt2x00dev, 63, 0x37 - rt2x00dev->lna_gain);
+ rt2800_bbp_write(rt2x00dev, 64, 0x37 - rt2x00dev->lna_gain);
+- rt2800_bbp_write(rt2x00dev, 86, 0);
++ if (rt2x00_rt(rt2x00dev, RT6352))
++ rt2800_bbp_write(rt2x00dev, 86, 0x38);
++ else
++ rt2800_bbp_write(rt2x00dev, 86, 0);
+ }
+
+ if (rf->channel <= 14) {
+@@ -4365,7 +4368,8 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev,
+ reg = (rf->channel <= 14 ? 0x1c : 0x24) + 2*rt2x00dev->lna_gain;
+ rt2800_bbp_write_with_rx_chain(rt2x00dev, 66, reg);
+
+- rt2800_iq_calibrate(rt2x00dev, rf->channel);
++ if (rt2x00_rt(rt2x00dev, RT5592))
++ rt2800_iq_calibrate(rt2x00dev, rf->channel);
+ }
+
+ bbp = rt2800_bbp_read(rt2x00dev, 4);
+@@ -5644,7 +5648,8 @@ static inline void rt2800_set_vgc(struct rt2x00_dev *rt2x00dev,
+ if (qual->vgc_level != vgc_level) {
+ if (rt2x00_rt(rt2x00dev, RT3572) ||
+ rt2x00_rt(rt2x00dev, RT3593) ||
+- rt2x00_rt(rt2x00dev, RT3883)) {
++ rt2x00_rt(rt2x00dev, RT3883) ||
++ rt2x00_rt(rt2x00dev, RT6352)) {
+ rt2800_bbp_write_with_rx_chain(rt2x00dev, 66,
+ vgc_level);
+ } else if (rt2x00_rt(rt2x00dev, RT5592)) {
+@@ -5867,7 +5872,7 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
+ rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000404);
+ } else if (rt2x00_rt(rt2x00dev, RT6352)) {
+ rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000401);
+- rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x000C0000);
++ rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x000C0001);
+ rt2800_register_write(rt2x00dev, TX_SW_CFG2, 0x00000000);
+ rt2800_register_write(rt2x00dev, TX_ALC_VGA3, 0x00000000);
+ rt2800_register_write(rt2x00dev, TX0_BB_GAIN_ATTEN, 0x0);
+@@ -6129,6 +6134,27 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
+ reg = rt2800_register_read(rt2x00dev, US_CYC_CNT);
+ rt2x00_set_field32(&reg, US_CYC_CNT_CLOCK_CYCLE, 125);
+ rt2800_register_write(rt2x00dev, US_CYC_CNT, reg);
++ } else if (rt2x00_is_soc(rt2x00dev)) {
++ struct clk *clk = clk_get_sys("bus", NULL);
++ int rate;
++
++ if (IS_ERR(clk)) {
++ clk = clk_get_sys("cpu", NULL);
++
++ if (IS_ERR(clk)) {
++ rate = 125;
++ } else {
++ rate = clk_get_rate(clk) / 3000000;
++ clk_put(clk);
++ }
++ } else {
++ rate = clk_get_rate(clk) / 1000000;
++ clk_put(clk);
++ }
++
++ reg = rt2800_register_read(rt2x00dev, US_CYC_CNT);
++ rt2x00_set_field32(&reg, US_CYC_CNT_CLOCK_CYCLE, rate);
++ rt2800_register_write(rt2x00dev, US_CYC_CNT, reg);
+ }
+
+ reg = rt2800_register_read(rt2x00dev, HT_FBK_CFG0);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index 7ddce3c3f0c48..782b089a2e1ba 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1425,7 +1425,7 @@ struct rtl8xxxu_fileops {
+ void (*set_tx_power) (struct rtl8xxxu_priv *priv, int channel,
+ bool ht40);
+ void (*update_rate_mask) (struct rtl8xxxu_priv *priv,
+- u32 ramask, u8 rateid, int sgi);
++ u32 ramask, u8 rateid, int sgi, int txbw_40mhz);
+ void (*report_connect) (struct rtl8xxxu_priv *priv,
+ u8 macid, bool connect);
+ void (*fill_txdesc) (struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+@@ -1511,9 +1511,9 @@ void rtl8xxxu_gen2_config_channel(struct ieee80211_hw *hw);
+ void rtl8xxxu_gen1_usb_quirks(struct rtl8xxxu_priv *priv);
+ void rtl8xxxu_gen2_usb_quirks(struct rtl8xxxu_priv *priv);
+ void rtl8xxxu_update_rate_mask(struct rtl8xxxu_priv *priv,
+- u32 ramask, u8 rateid, int sgi);
++ u32 ramask, u8 rateid, int sgi, int txbw_40mhz);
+ void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+- u32 ramask, u8 rateid, int sgi);
++ u32 ramask, u8 rateid, int sgi, int txbw_40mhz);
+ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
+ u8 macid, bool connect);
+ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 8b2ca9e8eac6b..57b5370a256b1 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -1878,13 +1878,6 @@ static int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
+
+ /* We have 8 bits to indicate validity */
+ map_addr = offset * 8;
+- if (map_addr >= EFUSE_MAP_LEN) {
+- dev_warn(dev, "%s: Illegal map_addr (%04x), "
+- "efuse corrupt!\n",
+- __func__, map_addr);
+- ret = -EINVAL;
+- goto exit;
+- }
+ for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++) {
+ /* Check word enable condition in the section */
+ if (word_mask & BIT(i)) {
+@@ -1895,6 +1888,13 @@ static int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
+ ret = rtl8xxxu_read_efuse8(priv, efuse_addr++, &val8);
+ if (ret)
+ goto exit;
++ if (map_addr >= EFUSE_MAP_LEN - 1) {
++ dev_warn(dev, "%s: Illegal map_addr (%04x), "
++ "efuse corrupt!\n",
++ __func__, map_addr);
++ ret = -EINVAL;
++ goto exit;
++ }
+ priv->efuse_wifi.raw[map_addr++] = val8;
+
+ ret = rtl8xxxu_read_efuse8(priv, efuse_addr++, &val8);
+@@ -2929,12 +2929,12 @@ bool rtl8xxxu_gen2_simularity_compare(struct rtl8xxxu_priv *priv,
+ }
+
+ if (!(simubitmap & 0x30) && priv->tx_paths > 1) {
+- /* path B RX OK */
++ /* path B TX OK */
+ for (i = 4; i < 6; i++)
+ result[3][i] = result[c1][i];
+ }
+
+- if (!(simubitmap & 0x30) && priv->tx_paths > 1) {
++ if (!(simubitmap & 0xc0) && priv->tx_paths > 1) {
+ /* path B RX OK */
+ for (i = 6; i < 8; i++)
+ result[3][i] = result[c1][i];
+@@ -4320,7 +4320,7 @@ static void rtl8xxxu_sw_scan_complete(struct ieee80211_hw *hw,
+ }
+
+ void rtl8xxxu_update_rate_mask(struct rtl8xxxu_priv *priv,
+- u32 ramask, u8 rateid, int sgi)
++ u32 ramask, u8 rateid, int sgi, int txbw_40mhz)
+ {
+ struct h2c_cmd h2c;
+
+@@ -4340,10 +4340,15 @@ void rtl8xxxu_update_rate_mask(struct rtl8xxxu_priv *priv,
+ }
+
+ void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+- u32 ramask, u8 rateid, int sgi)
++ u32 ramask, u8 rateid, int sgi, int txbw_40mhz)
+ {
+ struct h2c_cmd h2c;
+- u8 bw = RTL8XXXU_CHANNEL_WIDTH_20;
++ u8 bw;
++
++ if (txbw_40mhz)
++ bw = RTL8XXXU_CHANNEL_WIDTH_40;
++ else
++ bw = RTL8XXXU_CHANNEL_WIDTH_20;
+
+ memset(&h2c, 0, sizeof(struct h2c_cmd));
+
+@@ -4353,15 +4358,14 @@ void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+ h2c.b_macid_cfg.ramask2 = (ramask >> 16) & 0xff;
+ h2c.b_macid_cfg.ramask3 = (ramask >> 24) & 0xff;
+
+- h2c.ramask.arg = 0x80;
+ h2c.b_macid_cfg.data1 = rateid;
+ if (sgi)
+ h2c.b_macid_cfg.data1 |= BIT(7);
+
+ h2c.b_macid_cfg.data2 = bw;
+
+- dev_dbg(&priv->udev->dev, "%s: rate mask %08x, arg %02x, size %zi\n",
+- __func__, ramask, h2c.ramask.arg, sizeof(h2c.b_macid_cfg));
++ dev_dbg(&priv->udev->dev, "%s: rate mask %08x, rateid %02x, sgi %d, size %zi\n",
++ __func__, ramask, rateid, sgi, sizeof(h2c.b_macid_cfg));
+ rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.b_macid_cfg));
+ }
+
+@@ -4556,6 +4560,53 @@ rtl8xxxu_wireless_mode(struct ieee80211_hw *hw, struct ieee80211_sta *sta)
+ return network_type;
+ }
+
++static void rtl8xxxu_set_aifs(struct rtl8xxxu_priv *priv, u8 slot_time)
++{
++ u32 reg_edca_param[IEEE80211_NUM_ACS] = {
++ [IEEE80211_AC_VO] = REG_EDCA_VO_PARAM,
++ [IEEE80211_AC_VI] = REG_EDCA_VI_PARAM,
++ [IEEE80211_AC_BE] = REG_EDCA_BE_PARAM,
++ [IEEE80211_AC_BK] = REG_EDCA_BK_PARAM,
++ };
++ u32 val32;
++ u16 wireless_mode = 0;
++ u8 aifs, aifsn, sifs;
++ int i;
++
++ if (priv->vif) {
++ struct ieee80211_sta *sta;
++
++ rcu_read_lock();
++ sta = ieee80211_find_sta(priv->vif, priv->vif->bss_conf.bssid);
++ if (sta)
++ wireless_mode = rtl8xxxu_wireless_mode(priv->hw, sta);
++ rcu_read_unlock();
++ }
++
++ if (priv->hw->conf.chandef.chan->band == NL80211_BAND_5GHZ ||
++ (wireless_mode & WIRELESS_MODE_N_24G))
++ sifs = 16;
++ else
++ sifs = 10;
++
++ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
++ val32 = rtl8xxxu_read32(priv, reg_edca_param[i]);
++
++ /* It was set in conf_tx. */
++ aifsn = val32 & 0xff;
++
++ /* aifsn not set yet or already fixed */
++ if (aifsn < 2 || aifsn > 15)
++ continue;
++
++ aifs = aifsn * slot_time + sifs;
++
++ val32 &= ~0xff;
++ val32 |= aifs;
++ rtl8xxxu_write32(priv, reg_edca_param[i], val32);
++ }
++}
++
+ static void
+ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf, u32 changed)
+@@ -4622,7 +4673,11 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ RATE_INFO_FLAGS_SHORT_GI;
+ }
+
+- rarpt->txrate.bw |= RATE_INFO_BW_20;
++ if (rtl8xxxu_ht40_2g &&
++ (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40))
++ rarpt->txrate.bw = RATE_INFO_BW_40;
++ else
++ rarpt->txrate.bw = RATE_INFO_BW_20;
+ }
+ bit_rate = cfg80211_calculate_bitrate(&rarpt->txrate);
+ rarpt->bit_rate = bit_rate;
+@@ -4631,7 +4686,7 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ priv->vif = vif;
+ priv->rssi_level = RTL8XXXU_RATR_STA_INIT;
+
+- priv->fops->update_rate_mask(priv, ramask, 0, sgi);
++ priv->fops->update_rate_mask(priv, ramask, 0, sgi, rarpt->txrate.bw == RATE_INFO_BW_40);
+
+ rtl8xxxu_write8(priv, REG_BCN_MAX_ERR, 0xff);
+
+@@ -4671,6 +4726,8 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ else
+ val8 = 20;
+ rtl8xxxu_write8(priv, REG_SLOT, val8);
++
++ rtl8xxxu_set_aifs(priv, val8);
+ }
+
+ if (changed & BSS_CHANGED_BSSID) {
+@@ -5062,6 +5119,8 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ if (control && control->sta)
+ sta = control->sta;
+
++ queue = rtl8xxxu_queue_select(hw, skb);
++
+ tx_desc = skb_push(skb, tx_desc_size);
+
+ memset(tx_desc, 0, tx_desc_size);
+@@ -5074,7 +5133,6 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ is_broadcast_ether_addr(ieee80211_get_DA(hdr)))
+ tx_desc->txdw0 |= TXDESC_BROADMULTICAST;
+
+- queue = rtl8xxxu_queue_select(hw, skb);
+ tx_desc->txdw1 = cpu_to_le32(queue << TXDESC_QUEUE_SHIFT);
+
+ if (tx_info->control.hw_key) {
+@@ -6343,7 +6401,7 @@ static void rtl8xxxu_refresh_rate_mask(struct rtl8xxxu_priv *priv,
+ }
+
+ priv->rssi_level = rssi_level;
+- priv->fops->update_rate_mask(priv, rate_bitmap, ratr_idx, sgi);
++ priv->fops->update_rate_mask(priv, rate_bitmap, ratr_idx, sgi, txbw_40mhz);
+ }
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+index 15e6a6aded319..d18c092b61426 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+@@ -2386,11 +2386,10 @@ void rtl92d_phy_reload_iqk_setting(struct ieee80211_hw *hw, u8 channel)
+ rtl_dbg(rtlpriv, COMP_SCAN, DBG_LOUD,
+ "Just Read IQK Matrix reg for channel:%d....\n",
+ channel);
+- _rtl92d_phy_patha_fill_iqk_matrix(hw, true,
+- rtlphy->iqk_matrix[
+- indexforchannel].value, 0,
+- (rtlphy->iqk_matrix[
+- indexforchannel].value[0][2] == 0));
++ if (rtlphy->iqk_matrix[indexforchannel].value[0][0] != 0)
++ _rtl92d_phy_patha_fill_iqk_matrix(hw, true,
++ rtlphy->iqk_matrix[indexforchannel].value, 0,
++ rtlphy->iqk_matrix[indexforchannel].value[0][2] == 0);
+ if (IS_92D_SINGLEPHY(rtlhal->version)) {
+ if ((rtlphy->iqk_matrix[
+ indexforchannel].value[0][4] != 0)
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 645ef1d018953..c364482ab331d 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -2037,7 +2037,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ ret = rtw_load_firmware(rtwdev, RTW_NORMAL_FW);
+ if (ret) {
+ rtw_warn(rtwdev, "no firmware loaded\n");
+- return ret;
++ goto out;
+ }
+
+ if (chip->wow_fw_name) {
+@@ -2047,11 +2047,15 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ wait_for_completion(&rtwdev->fw.completion);
+ if (rtwdev->fw.firmware)
+ release_firmware(rtwdev->fw.firmware);
+- return ret;
++ goto out;
+ }
+ }
+
+ return 0;
++
++out:
++ destroy_workqueue(rtwdev->tx_wq);
++ return ret;
+ }
+ EXPORT_SYMBOL(rtw_core_init);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index a6a90572e74bf..7313eb80fb1e3 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -860,6 +860,7 @@ int rtw89_h2c_tx(struct rtw89_dev *rtwdev,
+ rtw89_debug(rtwdev, RTW89_DBG_FW,
+ "ignore h2c due to power is off with firmware state=%d\n",
+ test_bit(RTW89_FLAG_FW_RDY, rtwdev->flags));
++ dev_kfree_skb(skb);
+ return 0;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 4718aced1428a..e7f86d84d91d7 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -2288,6 +2288,7 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ {
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct cfg80211_scan_request *req = &scan_req->req;
++ u32 rx_fltr = rtwdev->hal.rx_fltr;
+ u8 mac_addr[ETH_ALEN];
+
+ rtwdev->scan_info.scanning_vif = vif;
+@@ -2302,13 +2303,13 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ ether_addr_copy(mac_addr, vif->addr);
+ rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, true);
+
+- rtwdev->hal.rx_fltr &= ~B_AX_A_BCN_CHK_EN;
+- rtwdev->hal.rx_fltr &= ~B_AX_A_BC;
+- rtwdev->hal.rx_fltr &= ~B_AX_A_A1_MATCH;
++ rx_fltr &= ~B_AX_A_BCN_CHK_EN;
++ rx_fltr &= ~B_AX_A_BC;
++ rx_fltr &= ~B_AX_A_A1_MATCH;
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(R_AX_RX_FLTR_OPT, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+- rtwdev->hal.rx_fltr);
++ rx_fltr);
+ }
+
+ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+@@ -2322,9 +2323,6 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ if (!vif)
+ return;
+
+- rtwdev->hal.rx_fltr |= B_AX_A_BCN_CHK_EN;
+- rtwdev->hal.rx_fltr |= B_AX_A_BC;
+- rtwdev->hal.rx_fltr |= B_AX_A_A1_MATCH;
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(R_AX_RX_FLTR_OPT, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 0ef7821b2e0fc..bc8132e919928 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -756,7 +756,8 @@ static irqreturn_t rtw89_pci_interrupt_threadfn(int irq, void *dev)
+
+ enable_intr:
+ spin_lock_irqsave(&rtwpci->irq_lock, flags);
+- rtw89_chip_enable_intr(rtwdev, rtwpci);
++ if (likely(rtwpci->running))
++ rtw89_chip_enable_intr(rtwdev, rtwpci);
+ spin_unlock_irqrestore(&rtwpci->irq_lock, flags);
+ return IRQ_HANDLED;
+ }
+@@ -921,10 +922,12 @@ u32 __rtw89_pci_check_and_reclaim_tx_resource_noio(struct rtw89_dev *rtwdev,
+ {
+ struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
+ struct rtw89_pci_tx_ring *tx_ring = &rtwpci->tx_rings[txch];
++ struct rtw89_pci_tx_wd_ring *wd_ring = &tx_ring->wd_ring;
+ u32 cnt;
+
+ spin_lock_bh(&rtwpci->trx_lock);
+ cnt = rtw89_pci_get_avail_txbd_num(tx_ring);
++ cnt = min(cnt, wd_ring->curr_num);
+ spin_unlock_bh(&rtwpci->trx_lock);
+
+ return cnt;
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 9e95ed9727108..5d88200cbd3e5 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -152,7 +152,10 @@ static void ser_state_run(struct rtw89_ser *ser, u8 evt)
+ rtw89_debug(rtwdev, RTW89_DBG_SER, "ser: %s receive %s\n",
+ ser_st_name(ser), ser_ev_name(ser, evt));
+
++ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_lps(rtwdev);
++ mutex_unlock(&rtwdev->mutex);
++
+ ser->st_tbl[ser->state].st_func(ser, evt);
+ }
+
+diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
+index e015bfb8d221f..84d82ddded567 100644
+--- a/drivers/net/wireless/silabs/wfx/main.c
++++ b/drivers/net/wireless/silabs/wfx/main.c
+@@ -181,7 +181,7 @@ int wfx_send_pds(struct wfx_dev *wdev, u8 *buf, size_t len)
+ while (len > 0) {
+ chunk_type = get_unaligned_le16(buf + 0);
+ chunk_len = get_unaligned_le16(buf + 2);
+- if (chunk_len > len) {
++ if (chunk_len < 4 || chunk_len > len) {
+ dev_err(wdev->dev, "PDS:%d: corrupted file\n", chunk_num);
+ return -EINVAL;
+ }
+diff --git a/drivers/net/wireless/st/cw1200/queue.c b/drivers/net/wireless/st/cw1200/queue.c
+index e06da4b3b0d46..805a3c1bf8fe2 100644
+--- a/drivers/net/wireless/st/cw1200/queue.c
++++ b/drivers/net/wireless/st/cw1200/queue.c
+@@ -91,23 +91,25 @@ static void __cw1200_queue_gc(struct cw1200_queue *queue,
+ bool unlock)
+ {
+ struct cw1200_queue_stats *stats = queue->stats;
+- struct cw1200_queue_item *item = NULL, *tmp;
++ struct cw1200_queue_item *item = NULL, *iter, *tmp;
+ bool wakeup_stats = false;
+
+- list_for_each_entry_safe(item, tmp, &queue->queue, head) {
+- if (time_is_after_jiffies(item->queue_timestamp + queue->ttl))
++ list_for_each_entry_safe(iter, tmp, &queue->queue, head) {
++ if (time_is_after_jiffies(iter->queue_timestamp + queue->ttl)) {
++ item = iter;
+ break;
++ }
+ --queue->num_queued;
+- --queue->link_map_cache[item->txpriv.link_id];
++ --queue->link_map_cache[iter->txpriv.link_id];
+ spin_lock_bh(&stats->lock);
+ --stats->num_queued;
+- if (!--stats->link_map_cache[item->txpriv.link_id])
++ if (!--stats->link_map_cache[iter->txpriv.link_id])
+ wakeup_stats = true;
+ spin_unlock_bh(&stats->lock);
+ cw1200_debug_tx_ttl(stats->priv);
+- cw1200_queue_register_post_gc(head, item);
+- item->skb = NULL;
+- list_move_tail(&item->head, &queue->free_pool);
++ cw1200_queue_register_post_gc(head, iter);
++ iter->skb = NULL;
++ list_move_tail(&iter->head, &queue->free_pool);
+ }
+
+ if (wakeup_stats)
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
+index 27151148c782c..4712f01a7e33e 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_wwan.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
+@@ -323,15 +323,16 @@ struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct device *dev)
+ ipc_wwan->dev = dev;
+ ipc_wwan->ipc_imem = ipc_imem;
+
++ mutex_init(&ipc_wwan->if_mutex);
++
+ /* WWAN core will create a netdev for the default IP MUX channel */
+ if (wwan_register_ops(ipc_wwan->dev, &iosm_wwan_ops, ipc_wwan,
+ IP_MUX_SESSION_DEFAULT)) {
++ mutex_destroy(&ipc_wwan->if_mutex);
+ kfree(ipc_wwan);
+ return NULL;
+ }
+
+- mutex_init(&ipc_wwan->if_mutex);
+-
+ return ipc_wwan;
+ }
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 326ad33537ede..698e65883d822 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1089,8 +1089,8 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ return effects;
+ }
+
+-static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
+- struct nvme_command *cmd, int status)
++void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
++ struct nvme_command *cmd, int status)
+ {
+ if (effects & NVME_CMD_EFFECTS_CSE_MASK) {
+ nvme_unfreeze(ctrl);
+@@ -1126,21 +1126,16 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
+ break;
+ }
+ }
++EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
+
+-int nvme_execute_passthru_rq(struct request *rq)
++int nvme_execute_passthru_rq(struct request *rq, u32 *effects)
+ {
+ struct nvme_command *cmd = nvme_req(rq)->cmd;
+ struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
+ struct nvme_ns *ns = rq->q->queuedata;
+- u32 effects;
+- int ret;
+
+- effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
+- ret = nvme_execute_rq(rq, false);
+- if (effects) /* nothing to be done for zero cmd effects */
+- nvme_passthru_end(ctrl, effects, cmd, ret);
+-
+- return ret;
++ *effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
++ return nvme_execute_rq(rq, false);
+ }
+ EXPORT_SYMBOL_NS_GPL(nvme_execute_passthru_rq, NVME_TARGET_PASSTHRU);
+
+@@ -2801,7 +2796,6 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ nvme_init_subnqn(subsys, ctrl, id);
+ memcpy(subsys->serial, id->sn, sizeof(subsys->serial));
+ memcpy(subsys->model, id->mn, sizeof(subsys->model));
+- memcpy(subsys->firmware_rev, id->fr, sizeof(subsys->firmware_rev));
+ subsys->vendor_id = le16_to_cpu(id->vid);
+ subsys->cmic = id->cmic;
+
+@@ -3020,6 +3014,8 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
+ ctrl->quirks |= core_quirks[i].quirks;
+ }
+ }
++ memcpy(ctrl->subsys->firmware_rev, id->fr,
++ sizeof(ctrl->subsys->firmware_rev));
+
+ if (force_apst && (ctrl->quirks & NVME_QUIRK_NO_DEEPEST_PS)) {
+ dev_warn(ctrl->device, "forcibly allowing all power states due to nvme_core.force_apst -- use at your own risk\n");
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index a2e89db1cd639..15a60e1f290a0 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -136,9 +136,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
+ u32 meta_seed, u64 *result, unsigned timeout, bool vec)
+ {
++ struct nvme_ctrl *ctrl;
+ struct request *req;
+ void *meta = NULL;
+ struct bio *bio;
++ u32 effects;
+ int ret;
+
+ req = nvme_alloc_user_request(q, cmd, ubuffer, bufflen, meta_buffer,
+@@ -147,8 +149,9 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ return PTR_ERR(req);
+
+ bio = req->bio;
++ ctrl = nvme_req(req)->ctrl;
+
+- ret = nvme_execute_passthru_rq(req);
++ ret = nvme_execute_passthru_rq(req, &effects);
+
+ if (result)
+ *result = le64_to_cpu(nvme_req(req)->result.u64);
+@@ -158,6 +161,10 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ if (bio)
+ blk_rq_unmap_user(bio);
+ blk_mq_free_request(req);
++
++ if (effects)
++ nvme_passthru_end(ctrl, effects, cmd, ret);
++
+ return ret;
+ }
+
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 432ea9793a849..3046d49855811 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -182,6 +182,7 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
+
+ for_each_node(node)
+ rcu_assign_pointer(head->current_path[node], NULL);
++ kblockd_schedule_work(&head->requeue_work);
+ }
+
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 5558f88121579..e154e23042037 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -994,7 +994,9 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl)
+
+ u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ u8 opcode);
+-int nvme_execute_passthru_rq(struct request *rq);
++int nvme_execute_passthru_rq(struct request *rq, u32 *effects);
++void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
++ struct nvme_command *cmd, int status);
+ struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
+ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
+ void nvme_put_ns(struct nvme_ns *ns);
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index 6f39a29828b12..94d3153bae54d 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -215,9 +215,11 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
+ {
+ struct nvmet_req *req = container_of(w, struct nvmet_req, p.work);
+ struct request *rq = req->p.rq;
++ struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
++ u32 effects;
+ int status;
+
+- status = nvme_execute_passthru_rq(rq);
++ status = nvme_execute_passthru_rq(rq, &effects);
+
+ if (status == NVME_SC_SUCCESS &&
+ req->cmd->common.opcode == nvme_admin_identify) {
+@@ -238,6 +240,9 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
+ req->cqe->result = nvme_req(rq)->result;
+ nvmet_req_complete(req, status);
+ blk_mq_free_request(rq);
++
++ if (effects)
++ nvme_passthru_end(ctrl, effects, req->cmd, status);
+ }
+
+ static void nvmet_passthru_req_done(struct request *rq,
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index a3694a32f6d52..7dcf88cde1893 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -935,10 +935,17 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ struct nvme_tcp_data_pdu *data = &queue->pdu.data;
+ struct nvmet_tcp_cmd *cmd;
+
+- if (likely(queue->nr_cmds))
++ if (likely(queue->nr_cmds)) {
++ if (unlikely(data->ttag >= queue->nr_cmds)) {
++ pr_err("queue %d: received out of bound ttag %u, nr_cmds %u\n",
++ queue->idx, data->ttag, queue->nr_cmds);
++ nvmet_tcp_fatal_error(queue);
++ return -EPROTO;
++ }
+ cmd = &queue->cmds[data->ttag];
+- else
++ } else {
+ cmd = &queue->connect;
++ }
+
+ if (le32_to_cpu(data->data_offset) != cmd->rbytes_done) {
+ pr_err("ttag %u unexpected data offset %u (expected %u)\n",
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 1e3c754efd0d8..2164efd12ba9b 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -829,21 +829,18 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ nvmem->dev.groups = nvmem_dev_groups;
+ #endif
+
+- if (nvmem->nkeepout) {
+- rval = nvmem_validate_keepouts(nvmem);
+- if (rval) {
+- ida_free(&nvmem_ida, nvmem->id);
+- kfree(nvmem);
+- return ERR_PTR(rval);
+- }
+- }
+-
+ dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+
+ rval = device_register(&nvmem->dev);
+ if (rval)
+ goto err_put_device;
+
++ if (nvmem->nkeepout) {
++ rval = nvmem_validate_keepouts(nvmem);
++ if (rval)
++ goto err_device_del;
++ }
++
+ if (config->compat) {
+ rval = nvmem_sysfs_setup_compat(nvmem, config);
+ if (rval)
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index 439ac5f5907a6..b492e67c3d871 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -214,6 +214,17 @@ static int pci_revert_fw_address(struct resource *res, struct pci_dev *dev,
+
+ root = pci_find_parent_resource(dev, res);
+ if (!root) {
++ /*
++ * If dev is behind a bridge, accesses will only reach it
++ * if res is inside the relevant bridge window.
++ */
++ if (pci_upstream_bridge(dev))
++ return -ENXIO;
++
++ /*
++ * On the root bus, assume the host bridge will forward
++ * everything.
++ */
+ if (res->flags & IORESOURCE_IO)
+ root = &ioport_resource;
+ else
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 1ec5baa673f92..54be0b37e01ef 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -639,8 +639,11 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
+ struct riscv_pmu *pmu = hlist_entry_safe(node, struct riscv_pmu, node);
+ struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
+
+- /* Enable the access for TIME csr only from the user mode now */
+- csr_write(CSR_SCOUNTEREN, 0x2);
++ /*
++ * Enable the access for CYCLE, TIME, and INSTRET CSRs from userspace,
++ * as is necessary to maintain uABI compatibility.
++ */
++ csr_write(CSR_SCOUNTEREN, 0x7);
+
+ /* Stop all the counters so that they can be enabled from perf */
+ pmu_sbi_stop_all(pmu);
+diff --git a/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c b/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
+index 1027ece6ca123..a3e1108b736d6 100644
+--- a/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
++++ b/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
+@@ -197,7 +197,7 @@ static int phy_axg_mipi_pcie_analog_probe(struct platform_device *pdev)
+ struct phy_provider *phy;
+ struct device *dev = &pdev->dev;
+ struct phy_axg_mipi_pcie_analog_priv *priv;
+- struct device_node *np = dev->of_node;
++ struct device_node *np = dev->of_node, *parent_np;
+ struct regmap *map;
+ int ret;
+
+@@ -206,7 +206,9 @@ static int phy_axg_mipi_pcie_analog_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ /* Get the hhi system controller node */
+- map = syscon_node_to_regmap(of_get_parent(dev->of_node));
++ parent_np = of_get_parent(dev->of_node);
++ map = syscon_node_to_regmap(parent_np);
++ of_node_put(parent_np);
+ if (IS_ERR(map)) {
+ dev_err(dev,
+ "failed to get HHI regmap\n");
+diff --git a/drivers/phy/mediatek/phy-mtk-tphy.c b/drivers/phy/mediatek/phy-mtk-tphy.c
+index 8ee7682b8e93e..bdffc21858f6b 100644
+--- a/drivers/phy/mediatek/phy-mtk-tphy.c
++++ b/drivers/phy/mediatek/phy-mtk-tphy.c
+@@ -906,7 +906,7 @@ static int phy_type_syscon_get(struct mtk_phy_instance *instance,
+ static int phy_type_set(struct mtk_phy_instance *instance)
+ {
+ int type;
+- u32 mask;
++ u32 offset;
+
+ if (!instance->type_sw)
+ return 0;
+@@ -929,8 +929,9 @@ static int phy_type_set(struct mtk_phy_instance *instance)
+ return 0;
+ }
+
+- mask = RG_PHY_SW_TYPE << (instance->type_sw_index * BITS_PER_BYTE);
+- regmap_update_bits(instance->type_sw, instance->type_sw_reg, mask, type);
++ offset = instance->type_sw_index * BITS_PER_BYTE;
++ regmap_update_bits(instance->type_sw, instance->type_sw_reg,
++ RG_PHY_SW_TYPE << offset, type << offset);
+
+ return 0;
+ }
+diff --git a/drivers/phy/qualcomm/phy-qcom-usb-hsic.c b/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
+index 716a77748ed83..20f6dd37c7c10 100644
+--- a/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
++++ b/drivers/phy/qualcomm/phy-qcom-usb-hsic.c
+@@ -54,8 +54,10 @@ static int qcom_usb_hsic_phy_power_on(struct phy *phy)
+
+ /* Configure pins for HSIC functionality */
+ pins_default = pinctrl_lookup_state(uphy->pctl, PINCTRL_STATE_DEFAULT);
+- if (IS_ERR(pins_default))
+- return PTR_ERR(pins_default);
++ if (IS_ERR(pins_default)) {
++ ret = PTR_ERR(pins_default);
++ goto err_ulpi;
++ }
+
+ ret = pinctrl_select_state(uphy->pctl, pins_default);
+ if (ret)
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+index 5223d4c9afdfc..39f14a5b78cdc 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+@@ -1124,7 +1124,7 @@ static int rockchip_usb2phy_otg_port_init(struct rockchip_usb2phy *rphy,
+ struct rockchip_usb2phy_port *rport,
+ struct device_node *child_np)
+ {
+- int ret;
++ int ret, id;
+
+ rport->port_id = USB2PHY_PORT_OTG;
+ rport->port_cfg = &rphy->phy_cfg->port_cfgs[USB2PHY_PORT_OTG];
+@@ -1162,13 +1162,15 @@ static int rockchip_usb2phy_otg_port_init(struct rockchip_usb2phy *rphy,
+
+ ret = devm_extcon_register_notifier(rphy->dev, rphy->edev,
+ EXTCON_USB_HOST, &rport->event_nb);
+- if (ret)
++ if (ret) {
+ dev_err(rphy->dev, "register USB HOST notifier failed\n");
++ goto out;
++ }
+
+ if (!of_property_read_bool(rphy->dev->of_node, "extcon")) {
+ /* do initial sync of usb state */
+- ret = property_enabled(rphy->grf, &rport->port_cfg->utmi_id);
+- extcon_set_state_sync(rphy->edev, EXTCON_USB_HOST, !ret);
++ id = property_enabled(rphy->grf, &rport->port_cfg->utmi_id);
++ extcon_set_state_sync(rphy->edev, EXTCON_USB_HOST, !id);
+ }
+ }
+
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 32e41395fc768..c84bd0e1ce5a6 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2393,11 +2393,24 @@ static int rockchip_pmx_set(struct pinctrl_dev *pctldev, unsigned selector,
+ return 0;
+ }
+
++static int rockchip_pmx_gpio_set_direction(struct pinctrl_dev *pctldev,
++ struct pinctrl_gpio_range *range,
++ unsigned offset,
++ bool input)
++{
++ struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
++ struct rockchip_pin_bank *bank;
++
++ bank = pin_to_bank(info, offset);
++ return rockchip_set_mux(bank, offset - bank->pin_base, RK_FUNC_GPIO);
++}
++
+ static const struct pinmux_ops rockchip_pmx_ops = {
+ .get_functions_count = rockchip_pmx_get_funcs_count,
+ .get_function_name = rockchip_pmx_get_func_name,
+ .get_function_groups = rockchip_pmx_get_groups,
+ .set_mux = rockchip_pmx_set,
++ .gpio_set_direction = rockchip_pmx_gpio_set_direction,
+ };
+
+ /*
+diff --git a/drivers/platform/chrome/chromeos_laptop.c b/drivers/platform/chrome/chromeos_laptop.c
+index 4e14b4d6635d7..a2cdbfbaeae6b 100644
+--- a/drivers/platform/chrome/chromeos_laptop.c
++++ b/drivers/platform/chrome/chromeos_laptop.c
+@@ -740,6 +740,7 @@ static int __init
+ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ const struct chromeos_laptop *src)
+ {
++ struct i2c_peripheral *i2c_peripherals;
+ struct i2c_peripheral *i2c_dev;
+ struct i2c_board_info *info;
+ int i;
+@@ -748,17 +749,15 @@ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ if (!src->num_i2c_peripherals)
+ return 0;
+
+- cros_laptop->i2c_peripherals = kmemdup(src->i2c_peripherals,
+- src->num_i2c_peripherals *
+- sizeof(*src->i2c_peripherals),
+- GFP_KERNEL);
+- if (!cros_laptop->i2c_peripherals)
++ i2c_peripherals = kmemdup(src->i2c_peripherals,
++ src->num_i2c_peripherals *
++ sizeof(*src->i2c_peripherals),
++ GFP_KERNEL);
++ if (!i2c_peripherals)
+ return -ENOMEM;
+
+- cros_laptop->num_i2c_peripherals = src->num_i2c_peripherals;
+-
+- for (i = 0; i < cros_laptop->num_i2c_peripherals; i++) {
+- i2c_dev = &cros_laptop->i2c_peripherals[i];
++ for (i = 0; i < src->num_i2c_peripherals; i++) {
++ i2c_dev = &i2c_peripherals[i];
+ info = &i2c_dev->board_info;
+
+ error = chromeos_laptop_setup_irq(i2c_dev);
+@@ -775,16 +774,19 @@ chromeos_laptop_prepare_i2c_peripherals(struct chromeos_laptop *cros_laptop,
+ }
+ }
+
++ cros_laptop->i2c_peripherals = i2c_peripherals;
++ cros_laptop->num_i2c_peripherals = src->num_i2c_peripherals;
++
+ return 0;
+
+ err_out:
+ while (--i >= 0) {
+- i2c_dev = &cros_laptop->i2c_peripherals[i];
++ i2c_dev = &i2c_peripherals[i];
+ info = &i2c_dev->board_info;
+ if (!IS_ERR_OR_NULL(info->fwnode))
+ fwnode_remove_software_node(info->fwnode);
+ }
+- kfree(cros_laptop->i2c_peripherals);
++ kfree(i2c_peripherals);
+ return error;
+ }
+
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 00381490dd3e3..4b0934ef77142 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -352,10 +352,16 @@ EXPORT_SYMBOL(cros_ec_suspend);
+
+ static void cros_ec_report_events_during_suspend(struct cros_ec_device *ec_dev)
+ {
++ bool wake_event;
++
+ while (ec_dev->mkbp_event_supported &&
+- cros_ec_get_next_event(ec_dev, NULL, NULL) > 0)
++ cros_ec_get_next_event(ec_dev, &wake_event, NULL) > 0) {
+ blocking_notifier_call_chain(&ec_dev->event_notifier,
+ 1, ec_dev);
++
++ if (wake_event && device_may_wakeup(ec_dev->dev))
++ pm_wakeup_event(ec_dev->dev, 0);
++ }
+ }
+
+ /**
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index fd33de546aee0..0de7c255254e0 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -327,6 +327,9 @@ static long cros_ec_chardev_ioctl_readmem(struct cros_ec_dev *ec,
+ if (copy_from_user(&s_mem, arg, sizeof(s_mem)))
+ return -EFAULT;
+
++ if (s_mem.bytes > sizeof(s_mem.buffer))
++ return -EINVAL;
++
+ num = ec_dev->cmd_readmem(ec_dev, s_mem.offset, s_mem.bytes,
+ s_mem.buffer);
+ if (num <= 0)
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 40dc048d18ad3..6a3aa15630c1a 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -750,6 +750,7 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ u8 event_type;
+ u32 host_event;
+ int ret;
++ u32 ver_mask;
+
+ /*
+ * Default value for wake_event.
+@@ -771,6 +772,37 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ return get_keyboard_state_event(ec_dev);
+
+ ret = get_next_event(ec_dev);
++ /*
++ * -ENOPROTOOPT is returned when EC returns EC_RES_INVALID_VERSION.
++ * This can occur when EC based device (e.g. Fingerprint MCU) jumps to
++ * the RO image which doesn't support newer version of the command. In
++ * this case we will attempt to update maximum supported version of the
++ * EC_CMD_GET_NEXT_EVENT.
++ */
++ if (ret == -ENOPROTOOPT) {
++ dev_dbg(ec_dev->dev,
++ "GET_NEXT_EVENT returned invalid version error.\n");
++ ret = cros_ec_get_host_command_version_mask(ec_dev,
++ EC_CMD_GET_NEXT_EVENT,
++ &ver_mask);
++ if (ret < 0 || ver_mask == 0)
++ /*
++ * Do not change the MKBP supported version if we can't
++ * obtain supported version correctly. Please note that
++ * calling EC_CMD_GET_NEXT_EVENT returned
++ * EC_RES_INVALID_VERSION which means that the command
++ * is present.
++ */
++ return -ENOPROTOOPT;
++
++ ec_dev->mkbp_event_supported = fls(ver_mask);
++ dev_dbg(ec_dev->dev, "MKBP support version changed to %u\n",
++ ec_dev->mkbp_event_supported - 1);
++
++ /* Try to get next event with new MKBP support version set. */
++ ret = get_next_event(ec_dev);
++ }
++
+ if (ret <= 0)
+ return ret;
+
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 7cb2e35c4dede..1305a22a88313 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -669,7 +669,7 @@ static int cros_typec_register_altmodes(struct cros_typec_data *typec, int port_
+ for (j = 0; j < sop_disc->svids[i].mode_count; j++) {
+ memset(&desc, 0, sizeof(desc));
+ desc.svid = sop_disc->svids[i].svid;
+- desc.mode = j;
++ desc.mode = j + 1;
+ desc.vdo = sop_disc->svids[i].mode_vdo[j];
+
+ if (is_partner)
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index bc7020e9df9e8..fc8dbbd6fc7c2 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -177,7 +177,8 @@ enum hp_thermal_profile_omen_v1 {
+ enum hp_thermal_profile {
+ HP_THERMAL_PROFILE_PERFORMANCE = 0x00,
+ HP_THERMAL_PROFILE_DEFAULT = 0x01,
+- HP_THERMAL_PROFILE_COOL = 0x02
++ HP_THERMAL_PROFILE_COOL = 0x02,
++ HP_THERMAL_PROFILE_QUIET = 0x03,
+ };
+
+ #define IS_HWBLOCKED(x) ((x & HPWMI_POWER_FW_OR_HW) != HPWMI_POWER_FW_OR_HW)
+@@ -1194,6 +1195,9 @@ static int hp_wmi_platform_profile_get(struct platform_profile_handler *pprof,
+ case HP_THERMAL_PROFILE_COOL:
+ *profile = PLATFORM_PROFILE_COOL;
+ break;
++ case HP_THERMAL_PROFILE_QUIET:
++ *profile = PLATFORM_PROFILE_QUIET;
++ break;
+ default:
+ return -EINVAL;
+ }
+@@ -1216,6 +1220,9 @@ static int hp_wmi_platform_profile_set(struct platform_profile_handler *pprof,
+ case PLATFORM_PROFILE_COOL:
+ tp = HP_THERMAL_PROFILE_COOL;
+ break;
++ case PLATFORM_PROFILE_QUIET:
++ tp = HP_THERMAL_PROFILE_QUIET;
++ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -1263,6 +1270,8 @@ static int thermal_profile_setup(void)
+
+ platform_profile_handler.profile_get = hp_wmi_platform_profile_get;
+ platform_profile_handler.profile_set = hp_wmi_platform_profile_set;
++
++ set_bit(PLATFORM_PROFILE_QUIET, platform_profile_handler.choices);
+ }
+
+ set_bit(PLATFORM_PROFILE_COOL, platform_profile_handler.choices);
+diff --git a/drivers/platform/x86/msi-laptop.c b/drivers/platform/x86/msi-laptop.c
+index 24ffc8e2d2d1e..0e804b6c2d242 100644
+--- a/drivers/platform/x86/msi-laptop.c
++++ b/drivers/platform/x86/msi-laptop.c
+@@ -596,11 +596,10 @@ static const struct dmi_system_id msi_dmi_table[] __initconst = {
+ {
+ .ident = "MSI S270",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT'L CO.,LTD"),
++ DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "MS-1013"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "0131"),
+- DMI_MATCH(DMI_CHASSIS_VENDOR,
+- "MICRO-STAR INT'L CO.,LTD")
++ DMI_MATCH(DMI_CHASSIS_VENDOR, "MICRO-STAR INT")
+ },
+ .driver_data = &quirk_old_ec_model,
+ .callback = dmi_check_cb
+@@ -633,8 +632,7 @@ static const struct dmi_system_id msi_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_SYS_VENDOR, "NOTEBOOK"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "SAM2000"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "0131"),
+- DMI_MATCH(DMI_CHASSIS_VENDOR,
+- "MICRO-STAR INT'L CO.,LTD")
++ DMI_MATCH(DMI_CHASSIS_VENDOR, "MICRO-STAR INT")
+ },
+ .driver_data = &quirk_old_ec_model,
+ .callback = dmi_check_cb
+@@ -1048,8 +1046,7 @@ static int __init msi_init(void)
+ return -EINVAL;
+
+ /* Register backlight stuff */
+-
+- if (quirks->old_ec_model ||
++ if (quirks->old_ec_model &&
+ acpi_video_get_backlight_type() == acpi_backlight_vendor) {
+ struct backlight_properties props;
+ memset(&props, 0, sizeof(struct backlight_properties));
+@@ -1117,6 +1114,8 @@ fail_create_attr:
+ fail_create_group:
+ if (quirks->load_scm_model) {
+ i8042_remove_filter(msi_laptop_i8042_filter);
++ cancel_delayed_work_sync(&msi_touchpad_dwork);
++ input_unregister_device(msi_laptop_input_dev);
+ cancel_delayed_work_sync(&msi_rfkill_dwork);
+ cancel_work_sync(&msi_rfkill_work);
+ rfkill_cleanup();
+@@ -1137,6 +1136,7 @@ static void __exit msi_cleanup(void)
+ {
+ if (quirks->load_scm_model) {
+ i8042_remove_filter(msi_laptop_i8042_filter);
++ cancel_delayed_work_sync(&msi_touchpad_dwork);
+ input_unregister_device(msi_laptop_input_dev);
+ cancel_delayed_work_sync(&msi_rfkill_dwork);
+ cancel_work_sync(&msi_rfkill_work);
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index 5c757c7f64dee..f4046572a9fe5 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -354,7 +354,7 @@ static bool pmc_clk_is_critical = true;
+
+ static int dmi_callback(const struct dmi_system_id *d)
+ {
+- pr_info("%s critclks quirk enabled\n", d->ident);
++ pr_info("%s: PMC critical clocks quirk enabled\n", d->ident);
+
+ return 1;
+ }
+diff --git a/drivers/power/supply/adp5061.c b/drivers/power/supply/adp5061.c
+index 003557043ab3a..daee1161c3059 100644
+--- a/drivers/power/supply/adp5061.c
++++ b/drivers/power/supply/adp5061.c
+@@ -427,11 +427,11 @@ static int adp5061_get_chg_type(struct adp5061_state *st,
+ if (ret < 0)
+ return ret;
+
+- chg_type = adp5061_chg_type[ADP5061_CHG_STATUS_1_CHG_STATUS(status1)];
+- if (chg_type > ADP5061_CHG_FAST_CV)
++ chg_type = ADP5061_CHG_STATUS_1_CHG_STATUS(status1);
++ if (chg_type >= ARRAY_SIZE(adp5061_chg_type))
+ val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+ else
+- val->intval = chg_type;
++ val->intval = adp5061_chg_type[chg_type];
+
+ return ret;
+ }
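The adp5061 fix above is the standard table-lookup hardening pattern: treat the hardware status field as untrusted, bounds-check the raw index against the table size, and only then translate it. A minimal standalone sketch of the same idiom (table contents and names are illustrative, not taken from the driver):

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical status-to-type table; the real driver maps to
 * POWER_SUPPLY_STATUS_* values. */
static const int chg_type[] = { 10, 11, 12, 13 };

static int decode_chg_type(unsigned int raw_status)
{
	/* Reject out-of-range indices instead of reading past the table. */
	if (raw_status >= ARRAY_SIZE(chg_type))
		return -1; /* "unknown" in the driver */
	return chg_type[raw_status];
}

int main(void)
{
	printf("%d %d\n", decode_chg_type(2), decode_chg_type(9));
	return 0;
}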
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index a9c99d9e8b428..c053fac05cc24 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -994,6 +994,9 @@ static u64 rapl_compute_time_window_core(struct rapl_package *rp, u64 value,
+ y = value & 0x1f;
+ value = (1 << y) * (4 + f) * rp->time_unit / 4;
+ } else {
++ if (value < rp->time_unit)
++ return 0;
++
+ do_div(value, rp->time_unit);
+ y = ilog2(value);
+ f = div64_u64(4 * (value - (1 << y)), 1 << y);
+@@ -1035,7 +1038,6 @@ static const struct rapl_defaults rapl_defaults_spr_server = {
+ .check_unit = rapl_check_unit_core,
+ .set_floor_freq = set_floor_freq_default,
+ .compute_time_window = rapl_compute_time_window_core,
+- .dram_domain_energy_unit = 15300,
+ .psys_domain_energy_unit = 1000000000,
+ .spr_psys_bits = true,
+ };
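The added guard in rapl_compute_time_window_core() matters because a value smaller than the time unit divides down to 0, and the kernel's ilog2(0) is undefined. A self-contained sketch of the precondition (plain C stand-ins, not the RAPL code):

#include <stdint.h>
#include <stdio.h>

/* floor(log2(v)); this loop version degenerates to 0 for v <= 1, whereas
 * the kernel's ilog2(0) is undefined -- hence the caller-side guard. */
static unsigned int ilog2_u64(uint64_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

static uint64_t decode_window(uint64_t value, uint64_t time_unit)
{
	if (value < time_unit)	/* quotient would be 0: bail out early */
		return 0;
	return ilog2_u64(value / time_unit);
}

int main(void)
{
	printf("%llu\n", (unsigned long long)decode_window(976, 977));
	return 0;
}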
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index a9daaf4d5aaab..9567d2fc3f00c 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2680,7 +2680,7 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+ * return -ETIMEDOUT.
+ */
+ if (rdev->desc->poll_enabled_time) {
+- unsigned int time_remaining = delay;
++ int time_remaining = delay;
+
+ while (time_remaining > 0) {
+ _regulator_delay_helper(rdev->desc->poll_enabled_time);
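The regulator change above is a one-word signedness fix: with `unsigned int`, subtracting a poll interval larger than the remaining delay wraps to a huge positive value, so `while (time_remaining > 0)` effectively never exits; with `int` it goes negative and terminates. The failure mode in isolation:

#include <stdio.h>

int main(void)
{
	unsigned int u = 100;	/* remaining delay, unsigned (the bug) */
	int s = 100;		/* remaining delay, signed (the fix) */
	const unsigned int step = 150;	/* poll interval > remainder */

	u -= step;		/* wraps to 4294967246 */
	s -= (int)step;		/* goes to -50 */

	printf("unsigned still 'positive': %u (loop keeps spinning)\n", u);
	printf("signed terminates: %d\n", s);
	return 0;
}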
+diff --git a/drivers/regulator/qcom_rpm-regulator.c b/drivers/regulator/qcom_rpm-regulator.c
+index 7f9d66ac37ff8..3c41b71a1f529 100644
+--- a/drivers/regulator/qcom_rpm-regulator.c
++++ b/drivers/regulator/qcom_rpm-regulator.c
+@@ -802,6 +802,12 @@ static const struct rpm_regulator_data rpm_pm8018_regulators[] = {
+ };
+
+ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
++ { "s0", QCOM_RPM_PM8058_SMPS0, &pm8058_smps, "vdd_s0" },
++ { "s1", QCOM_RPM_PM8058_SMPS1, &pm8058_smps, "vdd_s1" },
++ { "s2", QCOM_RPM_PM8058_SMPS2, &pm8058_smps, "vdd_s2" },
++ { "s3", QCOM_RPM_PM8058_SMPS3, &pm8058_smps, "vdd_s3" },
++ { "s4", QCOM_RPM_PM8058_SMPS4, &pm8058_smps, "vdd_s4" },
++
+ { "l0", QCOM_RPM_PM8058_LDO0, &pm8058_nldo, "vdd_l0_l1_lvs" },
+ { "l1", QCOM_RPM_PM8058_LDO1, &pm8058_nldo, "vdd_l0_l1_lvs" },
+ { "l2", QCOM_RPM_PM8058_LDO2, &pm8058_pldo, "vdd_l2_l11_l12" },
+@@ -829,12 +835,6 @@ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
+ { "l24", QCOM_RPM_PM8058_LDO24, &pm8058_nldo, "vdd_l23_l24_l25" },
+ { "l25", QCOM_RPM_PM8058_LDO25, &pm8058_nldo, "vdd_l23_l24_l25" },
+
+- { "s0", QCOM_RPM_PM8058_SMPS0, &pm8058_smps, "vdd_s0" },
+- { "s1", QCOM_RPM_PM8058_SMPS1, &pm8058_smps, "vdd_s1" },
+- { "s2", QCOM_RPM_PM8058_SMPS2, &pm8058_smps, "vdd_s2" },
+- { "s3", QCOM_RPM_PM8058_SMPS3, &pm8058_smps, "vdd_s3" },
+- { "s4", QCOM_RPM_PM8058_SMPS4, &pm8058_smps, "vdd_s4" },
+-
+ { "lvs0", QCOM_RPM_PM8058_LVS0, &pm8058_switch, "vdd_l0_l1_lvs" },
+ { "lvs1", QCOM_RPM_PM8058_LVS1, &pm8058_switch, "vdd_l0_l1_lvs" },
+
+@@ -843,6 +843,12 @@ static const struct rpm_regulator_data rpm_pm8058_regulators[] = {
+ };
+
+ static const struct rpm_regulator_data rpm_pm8901_regulators[] = {
++ { "s0", QCOM_RPM_PM8901_SMPS0, &pm8901_ftsmps, "vdd_s0" },
++ { "s1", QCOM_RPM_PM8901_SMPS1, &pm8901_ftsmps, "vdd_s1" },
++ { "s2", QCOM_RPM_PM8901_SMPS2, &pm8901_ftsmps, "vdd_s2" },
++ { "s3", QCOM_RPM_PM8901_SMPS3, &pm8901_ftsmps, "vdd_s3" },
++ { "s4", QCOM_RPM_PM8901_SMPS4, &pm8901_ftsmps, "vdd_s4" },
++
+ { "l0", QCOM_RPM_PM8901_LDO0, &pm8901_nldo, "vdd_l0" },
+ { "l1", QCOM_RPM_PM8901_LDO1, &pm8901_pldo, "vdd_l1" },
+ { "l2", QCOM_RPM_PM8901_LDO2, &pm8901_pldo, "vdd_l2" },
+@@ -851,12 +857,6 @@ static const struct rpm_regulator_data rpm_pm8901_regulators[] = {
+ { "l5", QCOM_RPM_PM8901_LDO5, &pm8901_pldo, "vdd_l5" },
+ { "l6", QCOM_RPM_PM8901_LDO6, &pm8901_pldo, "vdd_l6" },
+
+- { "s0", QCOM_RPM_PM8901_SMPS0, &pm8901_ftsmps, "vdd_s0" },
+- { "s1", QCOM_RPM_PM8901_SMPS1, &pm8901_ftsmps, "vdd_s1" },
+- { "s2", QCOM_RPM_PM8901_SMPS2, &pm8901_ftsmps, "vdd_s2" },
+- { "s3", QCOM_RPM_PM8901_SMPS3, &pm8901_ftsmps, "vdd_s3" },
+- { "s4", QCOM_RPM_PM8901_SMPS4, &pm8901_ftsmps, "vdd_s4" },
+-
+ { "lvs0", QCOM_RPM_PM8901_LVS0, &pm8901_switch, "lvs0_in" },
+ { "lvs1", QCOM_RPM_PM8901_LVS1, &pm8901_switch, "lvs1_in" },
+ { "lvs2", QCOM_RPM_PM8901_LVS2, &pm8901_switch, "lvs2_in" },
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 02a04ab34a230..9d86470df79bb 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -518,12 +518,13 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
+ struct fw_rsc_vdev *rsc = ptr;
+ struct device *dev = &rproc->dev;
+ struct rproc_vdev *rvdev;
++ size_t rsc_size;
+ int i, ret;
+ char name[16];
+
+ /* make sure resource isn't truncated */
+- if (struct_size(rsc, vring, rsc->num_of_vrings) + rsc->config_len >
+- avail) {
++ rsc_size = struct_size(rsc, vring, rsc->num_of_vrings);
++ if (size_add(rsc_size, rsc->config_len) > avail) {
+ dev_err(dev, "vdev rsc is truncated\n");
+ return -EINVAL;
+ }
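size_add() here is the saturating helper from include/linux/overflow.h: if the sum of two attacker-controlled sizes would wrap, it clamps to SIZE_MAX so the `> avail` comparison still fails closed, whereas the old open-coded addition could wrap to a small number and pass. Outside the kernel the same guard can be open-coded; a sketch (names are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns true and stores a + b in *out only if the sum fits in size_t. */
static bool size_add_ok(size_t a, size_t b, size_t *out)
{
	if (a > SIZE_MAX - b)
		return false;	/* would wrap */
	*out = a + b;
	return true;
}

/* Reject a descriptor whose (header + config) size wraps or exceeds the
 * available bytes; mirrors the truncation check above. */
static bool rsc_fits(size_t hdr_size, size_t config_len, size_t avail)
{
	size_t total;

	return size_add_ok(hdr_size, config_len, &total) && total <= avail;
}

int main(void)
{
	return rsc_fits((size_t)-1, 2, 64);	/* overflow is rejected */
}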
+diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
+index 4f2189111494a..0408ce58183c1 100644
+--- a/drivers/rpmsg/rpmsg_char.c
++++ b/drivers/rpmsg/rpmsg_char.c
+@@ -76,7 +76,9 @@ int rpmsg_chrdev_eptdev_destroy(struct device *dev, void *data)
+
+ mutex_lock(&eptdev->ept_lock);
+ if (eptdev->ept) {
+- rpmsg_destroy_ept(eptdev->ept);
++ /* The default endpoint is released by the rpmsg core */
++ if (!eptdev->default_ept)
++ rpmsg_destroy_ept(eptdev->ept);
+ eptdev->ept = NULL;
+ }
+ mutex_unlock(&eptdev->ept_lock);
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index cd823ff5deab2..6cb9cca9565b9 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2006,7 +2006,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ retval = pci_enable_device(pdev);
+ if (retval) {
+ TW_PRINTK(host, TW_DRIVER, 0x34, "Failed to enable pci device");
+- goto out_disable_device;
++ return -ENODEV;
+ }
+
+ pci_set_master(pdev);
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index 32abdf0fa9aab..af281e271f886 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -1455,7 +1455,7 @@ void cxgbi_conn_tx_open(struct cxgbi_sock *csk)
+ if (conn) {
+ log_debug(1 << CXGBI_DBG_SOCK,
+ "csk 0x%p, cid %d.\n", csk, conn->id);
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
+ }
+ }
+ EXPORT_SYMBOL_GPL(cxgbi_conn_tx_open);
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 52c6f70d60ec4..7e99070ea6118 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -52,6 +52,10 @@ static struct iscsi_transport iscsi_sw_tcp_transport;
+ static unsigned int iscsi_max_lun = ~0;
+ module_param_named(max_lun, iscsi_max_lun, uint, S_IRUGO);
+
++static bool iscsi_recv_from_iscsi_q;
++module_param_named(recv_from_iscsi_q, iscsi_recv_from_iscsi_q, bool, 0644);
++MODULE_PARM_DESC(recv_from_iscsi_q, "Set to true to read iSCSI data/headers from the iscsi_q workqueue. The default is false which will perform reads from the network softirq context.");
++
+ static int iscsi_sw_tcp_dbg;
+ module_param_named(debug_iscsi_tcp, iscsi_sw_tcp_dbg, int,
+ S_IRUGO | S_IWUSR);
+@@ -122,20 +126,13 @@ static inline int iscsi_sw_sk_state_check(struct sock *sk)
+ return 0;
+ }
+
+-static void iscsi_sw_tcp_data_ready(struct sock *sk)
++static void iscsi_sw_tcp_recv_data(struct iscsi_conn *conn)
+ {
+- struct iscsi_conn *conn;
+- struct iscsi_tcp_conn *tcp_conn;
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
++ struct sock *sk = tcp_sw_conn->sock->sk;
+ read_descriptor_t rd_desc;
+
+- read_lock_bh(&sk->sk_callback_lock);
+- conn = sk->sk_user_data;
+- if (!conn) {
+- read_unlock_bh(&sk->sk_callback_lock);
+- return;
+- }
+- tcp_conn = conn->dd_data;
+-
+ /*
+ * Use rd_desc to pass 'conn' to iscsi_tcp_recv.
+ * We set count to 1 because we want the network layer to
+@@ -144,13 +141,48 @@ static void iscsi_sw_tcp_data_ready(struct sock *sk)
+ */
+ rd_desc.arg.data = conn;
+ rd_desc.count = 1;
+- tcp_read_sock(sk, &rd_desc, iscsi_sw_tcp_recv);
+
+- iscsi_sw_sk_state_check(sk);
++ tcp_read_sock(sk, &rd_desc, iscsi_sw_tcp_recv);
+
+ /* If we had to (atomically) map a highmem page,
+ * unmap it now. */
+ iscsi_tcp_segment_unmap(&tcp_conn->in.segment);
++
++ iscsi_sw_sk_state_check(sk);
++}
++
++static void iscsi_sw_tcp_recv_data_work(struct work_struct *work)
++{
++ struct iscsi_conn *conn = container_of(work, struct iscsi_conn,
++ recvwork);
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
++ struct sock *sk = tcp_sw_conn->sock->sk;
++
++ lock_sock(sk);
++ iscsi_sw_tcp_recv_data(conn);
++ release_sock(sk);
++}
++
++static void iscsi_sw_tcp_data_ready(struct sock *sk)
++{
++ struct iscsi_sw_tcp_conn *tcp_sw_conn;
++ struct iscsi_tcp_conn *tcp_conn;
++ struct iscsi_conn *conn;
++
++ read_lock_bh(&sk->sk_callback_lock);
++ conn = sk->sk_user_data;
++ if (!conn) {
++ read_unlock_bh(&sk->sk_callback_lock);
++ return;
++ }
++ tcp_conn = conn->dd_data;
++ tcp_sw_conn = tcp_conn->dd_data;
++
++ if (tcp_sw_conn->queue_recv)
++ iscsi_conn_queue_recv(conn);
++ else
++ iscsi_sw_tcp_recv_data(conn);
+ read_unlock_bh(&sk->sk_callback_lock);
+ }
+
+@@ -205,7 +237,7 @@ static void iscsi_sw_tcp_write_space(struct sock *sk)
+ old_write_space(sk);
+
+ ISCSI_SW_TCP_DBG(conn, "iscsi_write_space\n");
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
+ }
+
+ static void iscsi_sw_tcp_conn_set_callbacks(struct iscsi_conn *conn)
+@@ -276,6 +308,9 @@ static int iscsi_sw_tcp_xmit_segment(struct iscsi_tcp_conn *tcp_conn,
+ if (segment->total_copied + segment->size < segment->total_size)
+ flags |= MSG_MORE;
+
++ if (tcp_sw_conn->queue_recv)
++ flags |= MSG_DONTWAIT;
++
+ /* Use sendpage if we can; else fall back to sendmsg */
+ if (!segment->data) {
+ sg = segment->sg;
+@@ -557,6 +592,10 @@ iscsi_sw_tcp_conn_create(struct iscsi_cls_session *cls_session,
+ conn = cls_conn->dd_data;
+ tcp_conn = conn->dd_data;
+ tcp_sw_conn = tcp_conn->dd_data;
++ INIT_WORK(&conn->recvwork, iscsi_sw_tcp_recv_data_work);
++ tcp_sw_conn->queue_recv = iscsi_recv_from_iscsi_q;
++
++ mutex_init(&tcp_sw_conn->sock_lock);
+
+ tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(tfm))
+@@ -592,11 +631,15 @@ free_conn:
+
+ static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
+ {
+- struct iscsi_session *session = conn->session;
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
+ struct socket *sock = tcp_sw_conn->sock;
+
++ /*
++ * The iscsi transport class will make sure we are not called in
++ * parallel with start, stop, bind and destroys. However, this can be
++ * called twice if userspace does a stop then a destroy.
++ */
+ if (!sock)
+ return;
+
+@@ -610,9 +653,11 @@ static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
+ iscsi_sw_tcp_conn_restore_callbacks(conn);
+ sock_put(sock->sk);
+
+- spin_lock_bh(&session->frwd_lock);
++ iscsi_suspend_rx(conn);
++
++ mutex_lock(&tcp_sw_conn->sock_lock);
+ tcp_sw_conn->sock = NULL;
+- spin_unlock_bh(&session->frwd_lock);
++ mutex_unlock(&tcp_sw_conn->sock_lock);
+ sockfd_put(sock);
+ }
+
+@@ -664,7 +709,6 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
+ struct iscsi_cls_conn *cls_conn, uint64_t transport_eph,
+ int is_leading)
+ {
+- struct iscsi_session *session = cls_session->dd_data;
+ struct iscsi_conn *conn = cls_conn->dd_data;
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
+@@ -684,10 +728,10 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
+ if (err)
+ goto free_socket;
+
+- spin_lock_bh(&session->frwd_lock);
++ mutex_lock(&tcp_sw_conn->sock_lock);
+ /* bind iSCSI connection and socket */
+ tcp_sw_conn->sock = sock;
+- spin_unlock_bh(&session->frwd_lock);
++ mutex_unlock(&tcp_sw_conn->sock_lock);
+
+ /* setup Socket parameters */
+ sk = sock->sk;
+@@ -724,8 +768,15 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
+ break;
+ case ISCSI_PARAM_DATADGST_EN:
+ iscsi_set_param(cls_conn, param, buf, buflen);
++
++ mutex_lock(&tcp_sw_conn->sock_lock);
++ if (!tcp_sw_conn->sock) {
++ mutex_unlock(&tcp_sw_conn->sock_lock);
++ return -ENOTCONN;
++ }
+ tcp_sw_conn->sendpage = conn->datadgst_en ?
+ sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
++ mutex_unlock(&tcp_sw_conn->sock_lock);
+ break;
+ case ISCSI_PARAM_MAX_R2T:
+ return iscsi_tcp_set_max_r2t(conn, buf);
+@@ -740,8 +791,8 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf)
+ {
+ struct iscsi_conn *conn = cls_conn->dd_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
++ struct iscsi_sw_tcp_conn *tcp_sw_conn;
++ struct iscsi_tcp_conn *tcp_conn;
+ struct sockaddr_in6 addr;
+ struct socket *sock;
+ int rc;
+@@ -751,21 +802,36 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ case ISCSI_PARAM_CONN_ADDRESS:
+ case ISCSI_PARAM_LOCAL_PORT:
+ spin_lock_bh(&conn->session->frwd_lock);
+- if (!tcp_sw_conn || !tcp_sw_conn->sock) {
++ if (!conn->session->leadconn) {
+ spin_unlock_bh(&conn->session->frwd_lock);
+ return -ENOTCONN;
+ }
+- sock = tcp_sw_conn->sock;
+- sock_hold(sock->sk);
++ /*
++ * The conn has been setup and bound, so just grab a ref
++ * incase a destroy runs while we are in the net layer.
++ */
++ iscsi_get_conn(conn->cls_conn);
+ spin_unlock_bh(&conn->session->frwd_lock);
+
++ tcp_conn = conn->dd_data;
++ tcp_sw_conn = tcp_conn->dd_data;
++
++ mutex_lock(&tcp_sw_conn->sock_lock);
++ sock = tcp_sw_conn->sock;
++ if (!sock) {
++ rc = -ENOTCONN;
++ goto sock_unlock;
++ }
++
+ if (param == ISCSI_PARAM_LOCAL_PORT)
+ rc = kernel_getsockname(sock,
+ (struct sockaddr *)&addr);
+ else
+ rc = kernel_getpeername(sock,
+ (struct sockaddr *)&addr);
+- sock_put(sock->sk);
++sock_unlock:
++ mutex_unlock(&tcp_sw_conn->sock_lock);
++ iscsi_put_conn(conn->cls_conn);
+ if (rc < 0)
+ return rc;
+
+@@ -803,17 +869,21 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ }
+ tcp_conn = conn->dd_data;
+ tcp_sw_conn = tcp_conn->dd_data;
+- sock = tcp_sw_conn->sock;
+- if (!sock) {
+- spin_unlock_bh(&session->frwd_lock);
+- return -ENOTCONN;
+- }
+- sock_hold(sock->sk);
++ /*
++ * The conn has been setup and bound, so just grab a ref
++ * incase a destroy runs while we are in the net layer.
++ */
++ iscsi_get_conn(conn->cls_conn);
+ spin_unlock_bh(&session->frwd_lock);
+
+- rc = kernel_getsockname(sock,
+- (struct sockaddr *)&addr);
+- sock_put(sock->sk);
++ mutex_lock(&tcp_sw_conn->sock_lock);
++ sock = tcp_sw_conn->sock;
++ if (!sock)
++ rc = -ENOTCONN;
++ else
++ rc = kernel_getsockname(sock, (struct sockaddr *)&addr);
++ mutex_unlock(&tcp_sw_conn->sock_lock);
++ iscsi_put_conn(conn->cls_conn);
+ if (rc < 0)
+ return rc;
+
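The iscsi_tcp rework above hinges on one dispatch decision in ->sk_data_ready: consume the socket inline in softirq context (the old behaviour, still the default), or hand the receive path to the iscsi_q workqueue when recv_from_iscsi_q is set. Stripped of the iSCSI specifics, the shape is just (names illustrative, stubs stand in for driver calls):

#include <stdbool.h>
#include <stdio.h>

struct conn {
	bool queue_recv;	/* mirrors the recv_from_iscsi_q parameter */
};

static void recv_data(struct conn *c)
{
	puts("recv inline");	/* tcp_read_sock() etc. in the driver */
}

static void queue_recv_work(struct conn *c)
{
	puts("recv deferred");	/* queue_work() on the iscsi_q workqueue */
}

/* Called from atomic (softirq) context, like ->sk_data_ready. */
static void data_ready(struct conn *c)
{
	if (c->queue_recv)
		queue_recv_work(c);	/* heavy lifting in process context */
	else
		recv_data(c);		/* lowest latency, runs in softirq */
}

int main(void)
{
	struct conn c = { .queue_recv = true };

	data_ready(&c);
	return 0;
}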
+diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
+index 791453195099c..68e14a344904f 100644
+--- a/drivers/scsi/iscsi_tcp.h
++++ b/drivers/scsi/iscsi_tcp.h
+@@ -28,6 +28,11 @@ struct iscsi_sw_tcp_send {
+
+ struct iscsi_sw_tcp_conn {
+ struct socket *sock;
++ /* Taken when accessing the sock from the netlink/sysfs interface */
++ struct mutex sock_lock;
++
++ struct work_struct recvwork;
++ bool queue_recv;
+
+ struct iscsi_sw_tcp_send out;
+ /* old values for socket callbacks */
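The new sock_lock encodes the rule the iscsi_tcp.c hunks apply everywhere: a sysfs/netlink path may only dereference tcp_sw_conn->sock after re-checking it under the mutex, since bind sets it and release NULLs it concurrently. The distilled pattern (pthread stand-ins for the kernel mutex; the types are placeholders, not the driver's):

#include <pthread.h>
#include <stddef.h>

struct sock;	/* opaque; stands in for the kernel's struct sock */

struct conn {
	pthread_mutex_t sock_lock;
	struct sock *sock;	/* set by bind, cleared by release */
};

static int conn_query(struct conn *c)
{
	int rc = 0;

	pthread_mutex_lock(&c->sock_lock);
	if (!c->sock)
		rc = -1;	/* -ENOTCONN in the driver */
	/* else: c->sock is guaranteed live until we unlock */
	pthread_mutex_unlock(&c->sock_lock);
	return rc;
}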
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 3ddb701cd29c7..8f73c8d6ef222 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -83,7 +83,7 @@ MODULE_PARM_DESC(debug_libiscsi_eh,
+ "%s " dbg_fmt, __func__, ##arg); \
+ } while (0);
+
+-inline void iscsi_conn_queue_work(struct iscsi_conn *conn)
++inline void iscsi_conn_queue_xmit(struct iscsi_conn *conn)
+ {
+ struct Scsi_Host *shost = conn->session->host;
+ struct iscsi_host *ihost = shost_priv(shost);
+@@ -91,7 +91,17 @@ inline void iscsi_conn_queue_work(struct iscsi_conn *conn)
+ if (ihost->workq)
+ queue_work(ihost->workq, &conn->xmitwork);
+ }
+-EXPORT_SYMBOL_GPL(iscsi_conn_queue_work);
++EXPORT_SYMBOL_GPL(iscsi_conn_queue_xmit);
++
++inline void iscsi_conn_queue_recv(struct iscsi_conn *conn)
++{
++ struct Scsi_Host *shost = conn->session->host;
++ struct iscsi_host *ihost = shost_priv(shost);
++
++ if (ihost->workq && !test_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags))
++ queue_work(ihost->workq, &conn->recvwork);
++}
++EXPORT_SYMBOL_GPL(iscsi_conn_queue_recv);
+
+ static void __iscsi_update_cmdsn(struct iscsi_session *session,
+ uint32_t exp_cmdsn, uint32_t max_cmdsn)
+@@ -765,7 +775,7 @@ __iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ goto free_task;
+ } else {
+ list_add_tail(&task->running, &conn->mgmtqueue);
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
+ }
+
+ return task;
+@@ -1513,7 +1523,7 @@ void iscsi_requeue_task(struct iscsi_task *task)
+ */
+ iscsi_put_task(task);
+ }
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
+ spin_unlock_bh(&conn->session->frwd_lock);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_requeue_task);
+@@ -1782,7 +1792,7 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
+ }
+ } else {
+ list_add_tail(&task->running, &conn->cmdqueue);
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
+ }
+
+ session->queued_cmdsn++;
+@@ -1943,7 +1953,7 @@ EXPORT_SYMBOL_GPL(iscsi_suspend_queue);
+
+ /**
+ * iscsi_suspend_tx - suspend iscsi_data_xmit
+- * @conn: iscsi conn tp stop processing IO on.
++ * @conn: iscsi conn to stop processing IO on.
+ *
+ * This function sets the suspend bit to prevent iscsi_data_xmit
+ * from sending new IO, and if work is queued on the xmit thread
+@@ -1956,15 +1966,30 @@ void iscsi_suspend_tx(struct iscsi_conn *conn)
+
+ set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+ if (ihost->workq)
+- flush_workqueue(ihost->workq);
++ flush_work(&conn->xmitwork);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_suspend_tx);
+
+ static void iscsi_start_tx(struct iscsi_conn *conn)
+ {
+ clear_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+- iscsi_conn_queue_work(conn);
++ iscsi_conn_queue_xmit(conn);
++}
++
++/**
++ * iscsi_suspend_rx - Prevent recvwork from running again.
++ * @conn: iscsi conn to stop.
++ */
++void iscsi_suspend_rx(struct iscsi_conn *conn)
++{
++ struct Scsi_Host *shost = conn->session->host;
++ struct iscsi_host *ihost = shost_priv(shost);
++
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags);
++ if (ihost->workq)
++ flush_work(&conn->recvwork);
+ }
++EXPORT_SYMBOL_GPL(iscsi_suspend_rx);
+
+ /*
+ * We want to make sure a ping is in flight. It has timed out.
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 260e735d06fa4..1ec5f4c8e4301 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -67,7 +67,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
+ res = i->dft->lldd_execute_task(task, GFP_KERNEL);
+
+ if (res) {
+- del_timer(&task->slow_task->timer);
++ del_timer_sync(&task->slow_task->timer);
+ pr_notice("executing SMP task failed:%d\n", res);
+ break;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 212f9b9621878..c9fb075952e79 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -1576,10 +1576,7 @@ struct lpfc_hba {
+ u32 cgn_acqe_cnt;
+
+ /* RX monitor handling for CMF */
+- struct rxtable_entry *rxtable; /* RX_monitor information */
+- atomic_t rxtable_idx_head;
+-#define LPFC_RXMONITOR_TABLE_IN_USE (LPFC_MAX_RXMONITOR_ENTRY + 73)
+- atomic_t rxtable_idx_tail;
++ struct lpfc_rx_info_monitor *rx_monitor;
+ atomic_t rx_max_read_cnt; /* Maximum read bytes */
+ uint64_t rx_block_cnt;
+
+@@ -1628,7 +1625,7 @@ struct lpfc_hba {
+
+ #define LPFC_MAX_RXMONITOR_ENTRY 800
+ #define LPFC_MAX_RXMONITOR_DUMP 32
+-struct rxtable_entry {
++struct rx_info_entry {
+ uint64_t cmf_bytes; /* Total no of read bytes for CMF_SYNC_WQE */
+ uint64_t total_bytes; /* Total no of read bytes requested */
+ uint64_t rcv_bytes; /* Total no of read bytes completed */
+@@ -1643,6 +1640,13 @@ struct rxtable_entry {
+ uint32_t timer_interval;
+ };
+
++struct lpfc_rx_info_monitor {
++ struct rx_info_entry *ring; /* info organized in a circular buffer */
++ u32 head_idx, tail_idx; /* index to head/tail of ring */
++ spinlock_t lock; /* spinlock for ring */
++ u32 entries; /* storing number entries/size of ring */
++};
++
+ static inline struct Scsi_Host *
+ lpfc_shost_from_vport(struct lpfc_vport *vport)
+ {
+diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
+index f5d74958b6643..27389e055398c 100644
+--- a/drivers/scsi/lpfc/lpfc_crtn.h
++++ b/drivers/scsi/lpfc/lpfc_crtn.h
+@@ -92,6 +92,14 @@ void lpfc_cgn_dump_rxmonitor(struct lpfc_hba *phba);
+ void lpfc_cgn_update_stat(struct lpfc_hba *phba, uint32_t dtag);
+ void lpfc_unblock_requests(struct lpfc_hba *phba);
+ void lpfc_block_requests(struct lpfc_hba *phba);
++int lpfc_rx_monitor_create_ring(struct lpfc_rx_info_monitor *rx_monitor,
++ u32 entries);
++void lpfc_rx_monitor_destroy_ring(struct lpfc_rx_info_monitor *rx_monitor);
++void lpfc_rx_monitor_record(struct lpfc_rx_info_monitor *rx_monitor,
++ struct rx_info_entry *entry);
++u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
++ struct lpfc_rx_info_monitor *rx_monitor, char *buf,
++ u32 buf_len, u32 max_read_entries);
+
+ void lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 13dfe285493d1..b555ccb5ae345 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -1509,7 +1509,7 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_sli_ct_request *CTrsp;
+ int did;
+ struct lpfc_nodelist *ndlp = NULL;
+- struct lpfc_nodelist *ns_ndlp = NULL;
++ struct lpfc_nodelist *ns_ndlp = cmdiocb->ndlp;
+ uint32_t fc4_data_0, fc4_data_1;
+ u32 ulp_status = get_job_ulpstatus(phba, rspiocb);
+ u32 ulp_word4 = get_job_word4(phba, rspiocb);
+@@ -1522,15 +1522,12 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ ulp_status, ulp_word4, did);
+
+ /* Ignore response if link flipped after this request was made */
+- if ((uint32_t) cmdiocb->event_tag != phba->fc_eventTag) {
++ if ((uint32_t)cmdiocb->event_tag != phba->fc_eventTag) {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+ "9046 Event tag mismatch. Ignoring NS rsp\n");
+ goto out;
+ }
+
+- /* Preserve the nameserver node to release the reference. */
+- ns_ndlp = cmdiocb->ndlp;
+-
+ if (ulp_status == IOSTAT_SUCCESS) {
+ /* Good status, continue checking */
+ CTrsp = (struct lpfc_sli_ct_request *)outp->virt;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 25deacc92b020..24fbf21ea051b 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -5531,7 +5531,7 @@ lpfc_rx_monitor_open(struct inode *inode, struct file *file)
+ if (!debug)
+ goto out;
+
+- debug->buffer = vmalloc(MAX_DEBUGFS_RX_TABLE_SIZE);
++ debug->buffer = vmalloc(MAX_DEBUGFS_RX_INFO_SIZE);
+ if (!debug->buffer) {
+ kfree(debug);
+ goto out;
+@@ -5552,57 +5552,18 @@ lpfc_rx_monitor_read(struct file *file, char __user *buf, size_t nbytes,
+ struct lpfc_rx_monitor_debug *debug = file->private_data;
+ struct lpfc_hba *phba = (struct lpfc_hba *)debug->i_private;
+ char *buffer = debug->buffer;
+- struct rxtable_entry *entry;
+- int i, len = 0, head, tail, last, start;
+-
+- head = atomic_read(&phba->rxtable_idx_head);
+- while (head == LPFC_RXMONITOR_TABLE_IN_USE) {
+- /* Table is getting updated */
+- msleep(20);
+- head = atomic_read(&phba->rxtable_idx_head);
+- }
+
+- tail = atomic_xchg(&phba->rxtable_idx_tail, head);
+- if (!phba->rxtable || head == tail) {
+- len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
+- "Rxtable is empty\n");
+- goto out;
+- }
+- last = (head > tail) ? head : LPFC_MAX_RXMONITOR_ENTRY;
+- start = tail;
+-
+- len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
+- " MaxBPI Tot_Data_CMF Tot_Data_Cmd "
+- "Tot_Data_Cmpl Lat(us) Avg_IO Max_IO "
+- "Bsy IO_cnt Info BWutil(ms)\n");
+-get_table:
+- for (i = start; i < last; i++) {
+- entry = &phba->rxtable[i];
+- len += scnprintf(buffer + len, MAX_DEBUGFS_RX_TABLE_SIZE - len,
+- "%3d:%12lld %12lld %12lld %12lld "
+- "%7lldus %8lld %7lld "
+- "%2d %4d %2d %2d(%2d)\n",
+- i, entry->max_bytes_per_interval,
+- entry->cmf_bytes,
+- entry->total_bytes,
+- entry->rcv_bytes,
+- entry->avg_io_latency,
+- entry->avg_io_size,
+- entry->max_read_cnt,
+- entry->cmf_busy,
+- entry->io_cnt,
+- entry->cmf_info,
+- entry->timer_utilization,
+- entry->timer_interval);
++ if (!phba->rx_monitor) {
++ scnprintf(buffer, MAX_DEBUGFS_RX_INFO_SIZE,
++ "Rx Monitor Info is empty.\n");
++ } else {
++ lpfc_rx_monitor_report(phba, phba->rx_monitor, buffer,
++ MAX_DEBUGFS_RX_INFO_SIZE,
++ LPFC_MAX_RXMONITOR_ENTRY);
+ }
+
+- if (head != last) {
+- start = 0;
+- last = head;
+- goto get_table;
+- }
+-out:
+- return simple_read_from_buffer(buf, nbytes, ppos, buffer, len);
++ return simple_read_from_buffer(buf, nbytes, ppos, buffer,
++ strlen(buffer));
+ }
+
+ static int
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.h b/drivers/scsi/lpfc/lpfc_debugfs.h
+index 6dd361c1fd318..f71e5b6311ac0 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.h
++++ b/drivers/scsi/lpfc/lpfc_debugfs.h
+@@ -282,7 +282,7 @@ struct lpfc_idiag {
+ void *ptr_private;
+ };
+
+-#define MAX_DEBUGFS_RX_TABLE_SIZE (128 * LPFC_MAX_RXMONITOR_ENTRY)
++#define MAX_DEBUGFS_RX_INFO_SIZE (128 * LPFC_MAX_RXMONITOR_ENTRY)
+ struct lpfc_rx_monitor_debug {
+ char *i_private;
+ char *buffer;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 2ddc431cbd337..df8216a07d9b7 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -5571,38 +5571,12 @@ lpfc_async_link_speed_to_read_top(struct lpfc_hba *phba, uint8_t speed_code)
+ void
+ lpfc_cgn_dump_rxmonitor(struct lpfc_hba *phba)
+ {
+- struct rxtable_entry *entry;
+- int cnt = 0, head, tail, last, start;
+-
+- head = atomic_read(&phba->rxtable_idx_head);
+- tail = atomic_read(&phba->rxtable_idx_tail);
+- if (!phba->rxtable || head == tail) {
+- lpfc_printf_log(phba, KERN_ERR, LOG_CGN_MGMT,
+- "4411 Rxtable is empty\n");
+- return;
+- }
+- last = tail;
+- start = head;
+-
+- /* Display the last LPFC_MAX_RXMONITOR_DUMP entries from the rxtable */
+- while (start != last) {
+- if (start)
+- start--;
+- else
+- start = LPFC_MAX_RXMONITOR_ENTRY - 1;
+- entry = &phba->rxtable[start];
++ if (!phba->rx_monitor) {
+ lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
+- "4410 %02d: MBPI %lld Xmit %lld Cmpl %lld "
+- "Lat %lld ASz %lld Info %02d BWUtil %d "
+- "Int %d slot %d\n",
+- cnt, entry->max_bytes_per_interval,
+- entry->total_bytes, entry->rcv_bytes,
+- entry->avg_io_latency, entry->avg_io_size,
+- entry->cmf_info, entry->timer_utilization,
+- entry->timer_interval, start);
+- cnt++;
+- if (cnt >= LPFC_MAX_RXMONITOR_DUMP)
+- return;
++ "4411 Rx Monitor Info is empty.\n");
++ } else {
++ lpfc_rx_monitor_report(phba, phba->rx_monitor, NULL, 0,
++ LPFC_MAX_RXMONITOR_DUMP);
+ }
+ }
+
+@@ -6009,9 +5983,8 @@ lpfc_cmf_timer(struct hrtimer *timer)
+ {
+ struct lpfc_hba *phba = container_of(timer, struct lpfc_hba,
+ cmf_timer);
+- struct rxtable_entry *entry;
++ struct rx_info_entry entry;
+ uint32_t io_cnt;
+- uint32_t head, tail;
+ uint32_t busy, max_read;
+ uint64_t total, rcv, lat, mbpi, extra, cnt;
+ int timer_interval = LPFC_CMF_INTERVAL;
+@@ -6131,40 +6104,30 @@ lpfc_cmf_timer(struct hrtimer *timer)
+ }
+
+ /* Save rxmonitor information for debug */
+- if (phba->rxtable) {
+- head = atomic_xchg(&phba->rxtable_idx_head,
+- LPFC_RXMONITOR_TABLE_IN_USE);
+- entry = &phba->rxtable[head];
+- entry->total_bytes = total;
+- entry->cmf_bytes = total + extra;
+- entry->rcv_bytes = rcv;
+- entry->cmf_busy = busy;
+- entry->cmf_info = phba->cmf_active_info;
++ if (phba->rx_monitor) {
++ entry.total_bytes = total;
++ entry.cmf_bytes = total + extra;
++ entry.rcv_bytes = rcv;
++ entry.cmf_busy = busy;
++ entry.cmf_info = phba->cmf_active_info;
+ if (io_cnt) {
+- entry->avg_io_latency = div_u64(lat, io_cnt);
+- entry->avg_io_size = div_u64(rcv, io_cnt);
++ entry.avg_io_latency = div_u64(lat, io_cnt);
++ entry.avg_io_size = div_u64(rcv, io_cnt);
+ } else {
+- entry->avg_io_latency = 0;
+- entry->avg_io_size = 0;
++ entry.avg_io_latency = 0;
++ entry.avg_io_size = 0;
+ }
+- entry->max_read_cnt = max_read;
+- entry->io_cnt = io_cnt;
+- entry->max_bytes_per_interval = mbpi;
++ entry.max_read_cnt = max_read;
++ entry.io_cnt = io_cnt;
++ entry.max_bytes_per_interval = mbpi;
+ if (phba->cmf_active_mode == LPFC_CFG_MANAGED)
+- entry->timer_utilization = phba->cmf_last_ts;
++ entry.timer_utilization = phba->cmf_last_ts;
+ else
+- entry->timer_utilization = ms;
+- entry->timer_interval = ms;
++ entry.timer_utilization = ms;
++ entry.timer_interval = ms;
+ phba->cmf_last_ts = 0;
+
+- /* Increment rxtable index */
+- head = (head + 1) % LPFC_MAX_RXMONITOR_ENTRY;
+- tail = atomic_read(&phba->rxtable_idx_tail);
+- if (head == tail) {
+- tail = (tail + 1) % LPFC_MAX_RXMONITOR_ENTRY;
+- atomic_set(&phba->rxtable_idx_tail, tail);
+- }
+- atomic_set(&phba->rxtable_idx_head, head);
++ lpfc_rx_monitor_record(phba->rx_monitor, &entry);
+ }
+
+ if (phba->cmf_active_mode == LPFC_CFG_MONITOR) {
+diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
+index 870e53b8f81dd..5d36b35148646 100644
+--- a/drivers/scsi/lpfc/lpfc_mem.c
++++ b/drivers/scsi/lpfc/lpfc_mem.c
+@@ -344,9 +344,12 @@ lpfc_mem_free_all(struct lpfc_hba *phba)
+ phba->cgn_i = NULL;
+ }
+
+- /* Free RX table */
+- kfree(phba->rxtable);
+- phba->rxtable = NULL;
++ /* Free RX Monitor */
++ if (phba->rx_monitor) {
++ lpfc_rx_monitor_destroy_ring(phba->rx_monitor);
++ kfree(phba->rx_monitor);
++ phba->rx_monitor = NULL;
++ }
+
+ /* Free the iocb lookup array */
+ kfree(psli->iocbq_lookup);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index e2127e85ff325..2269253aeb3df 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7954,6 +7954,172 @@ static void lpfc_sli4_dip(struct lpfc_hba *phba)
+ }
+ }
+
++/**
++ * lpfc_rx_monitor_create_ring - Initialize ring buffer for rx_monitor
++ * @rx_monitor: Pointer to lpfc_rx_info_monitor object
++ * @entries: Number of rx_info_entry objects to allocate in ring
++ *
++ * Return:
++ * 0 - Success
++ * ENOMEM - Failure to kmalloc
++ **/
++int lpfc_rx_monitor_create_ring(struct lpfc_rx_info_monitor *rx_monitor,
++ u32 entries)
++{
++ rx_monitor->ring = kmalloc_array(entries, sizeof(struct rx_info_entry),
++ GFP_KERNEL);
++ if (!rx_monitor->ring)
++ return -ENOMEM;
++
++ rx_monitor->head_idx = 0;
++ rx_monitor->tail_idx = 0;
++ spin_lock_init(&rx_monitor->lock);
++ rx_monitor->entries = entries;
++
++ return 0;
++}
++
++/**
++ * lpfc_rx_monitor_destroy_ring - Free ring buffer for rx_monitor
++ * @rx_monitor: Pointer to lpfc_rx_info_monitor object
++ **/
++void lpfc_rx_monitor_destroy_ring(struct lpfc_rx_info_monitor *rx_monitor)
++{
++ spin_lock(&rx_monitor->lock);
++ kfree(rx_monitor->ring);
++ rx_monitor->ring = NULL;
++ rx_monitor->entries = 0;
++ rx_monitor->head_idx = 0;
++ rx_monitor->tail_idx = 0;
++ spin_unlock(&rx_monitor->lock);
++}
++
++/**
++ * lpfc_rx_monitor_record - Insert an entry into rx_monitor's ring
++ * @rx_monitor: Pointer to lpfc_rx_info_monitor object
++ * @entry: Pointer to rx_info_entry
++ *
++ * Used to insert an rx_info_entry into rx_monitor's ring. Note that this is a
++ * deep copy of rx_info_entry not a shallow copy of the rx_info_entry ptr.
++ *
++ * This is called from lpfc_cmf_timer, which is in timer/softirq context.
++ *
++ * In cases of old data overflow, we do a best effort of FIFO order.
++ **/
++void lpfc_rx_monitor_record(struct lpfc_rx_info_monitor *rx_monitor,
++ struct rx_info_entry *entry)
++{
++ struct rx_info_entry *ring = rx_monitor->ring;
++ u32 *head_idx = &rx_monitor->head_idx;
++ u32 *tail_idx = &rx_monitor->tail_idx;
++ spinlock_t *ring_lock = &rx_monitor->lock;
++ u32 ring_size = rx_monitor->entries;
++
++ spin_lock(ring_lock);
++ memcpy(&ring[*tail_idx], entry, sizeof(*entry));
++ *tail_idx = (*tail_idx + 1) % ring_size;
++
++ /* Best effort of FIFO saved data */
++ if (*tail_idx == *head_idx)
++ *head_idx = (*head_idx + 1) % ring_size;
++
++ spin_unlock(ring_lock);
++}
++
++/**
++ * lpfc_rx_monitor_report - Read out rx_monitor's ring
++ * @phba: Pointer to lpfc_hba object
++ * @rx_monitor: Pointer to lpfc_rx_info_monitor object
++ * @buf: Pointer to char buffer that will contain rx monitor info data
++ * @buf_len: Length buf including null char
++ * @max_read_entries: Maximum number of entries to read out of ring
++ *
++ * Used to dump/read what's in rx_monitor's ring buffer.
++ *
++ * If buf is NULL || buf_len == 0, then it is implied that we want to log the
++ * information to kmsg instead of filling out buf.
++ *
++ * Return:
++ * Number of entries read out of the ring
++ **/
++u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
++ struct lpfc_rx_info_monitor *rx_monitor, char *buf,
++ u32 buf_len, u32 max_read_entries)
++{
++ struct rx_info_entry *ring = rx_monitor->ring;
++ struct rx_info_entry *entry;
++ u32 *head_idx = &rx_monitor->head_idx;
++ u32 *tail_idx = &rx_monitor->tail_idx;
++ spinlock_t *ring_lock = &rx_monitor->lock;
++ u32 ring_size = rx_monitor->entries;
++ u32 cnt = 0;
++ char tmp[DBG_LOG_STR_SZ] = {0};
++ bool log_to_kmsg = (!buf || !buf_len) ? true : false;
++
++ if (!log_to_kmsg) {
++ /* clear the buffer to be sure */
++ memset(buf, 0, buf_len);
++
++ scnprintf(buf, buf_len, "\t%-16s%-16s%-16s%-16s%-8s%-8s%-8s"
++ "%-8s%-8s%-8s%-16s\n",
++ "MaxBPI", "Tot_Data_CMF",
++ "Tot_Data_Cmd", "Tot_Data_Cmpl",
++ "Lat(us)", "Avg_IO", "Max_IO", "Bsy",
++ "IO_cnt", "Info", "BWutil(ms)");
++ }
++
++ /* Needs to be _bh because record is called from timer interrupt
++ * context
++ */
++ spin_lock_bh(ring_lock);
++ while (*head_idx != *tail_idx) {
++ entry = &ring[*head_idx];
++
++ /* Read out this entry's data. */
++ if (!log_to_kmsg) {
++ /* If !log_to_kmsg, then store to buf. */
++ scnprintf(tmp, sizeof(tmp),
++ "%03d:\t%-16llu%-16llu%-16llu%-16llu%-8llu"
++ "%-8llu%-8llu%-8u%-8u%-8u%u(%u)\n",
++ *head_idx, entry->max_bytes_per_interval,
++ entry->cmf_bytes, entry->total_bytes,
++ entry->rcv_bytes, entry->avg_io_latency,
++ entry->avg_io_size, entry->max_read_cnt,
++ entry->cmf_busy, entry->io_cnt,
++ entry->cmf_info, entry->timer_utilization,
++ entry->timer_interval);
++
++ /* Check for buffer overflow */
++ if ((strlen(buf) + strlen(tmp)) >= buf_len)
++ break;
++
++ /* Append entry's data to buffer */
++ strlcat(buf, tmp, buf_len);
++ } else {
++ lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
++ "4410 %02u: MBPI %llu Xmit %llu "
++ "Cmpl %llu Lat %llu ASz %llu Info %02u "
++ "BWUtil %u Int %u slot %u\n",
++ cnt, entry->max_bytes_per_interval,
++ entry->total_bytes, entry->rcv_bytes,
++ entry->avg_io_latency,
++ entry->avg_io_size, entry->cmf_info,
++ entry->timer_utilization,
++ entry->timer_interval, *head_idx);
++ }
++
++ *head_idx = (*head_idx + 1) % ring_size;
++
++ /* Don't feed more than max_read_entries */
++ cnt++;
++ if (cnt >= max_read_entries)
++ break;
++ }
++ spin_unlock_bh(ring_lock);
++
++ return cnt;
++}
++
+ /**
+ * lpfc_cmf_setup - Initialize idle_stat tracking
+ * @phba: Pointer to HBA context object.
+@@ -8128,19 +8294,29 @@ no_cmf:
+ phba->cmf_interval_rate = LPFC_CMF_INTERVAL;
+
+ /* Allocate RX Monitor Buffer */
+- if (!phba->rxtable) {
+- phba->rxtable = kmalloc_array(LPFC_MAX_RXMONITOR_ENTRY,
+- sizeof(struct rxtable_entry),
+- GFP_KERNEL);
+- if (!phba->rxtable) {
++ if (!phba->rx_monitor) {
++ phba->rx_monitor = kzalloc(sizeof(*phba->rx_monitor),
++ GFP_KERNEL);
++
++ if (!phba->rx_monitor) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "2644 Failed to alloc memory "
+ "for RX Monitor Buffer\n");
+ return -ENOMEM;
+ }
++
++ /* Instruct the rx_monitor object to instantiate its ring */
++ if (lpfc_rx_monitor_create_ring(phba->rx_monitor,
++ LPFC_MAX_RXMONITOR_ENTRY)) {
++ kfree(phba->rx_monitor);
++ phba->rx_monitor = NULL;
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "2645 Failed to alloc memory "
++ "for RX Monitor's Ring\n");
++ return -ENOMEM;
++ }
+ }
+- atomic_set(&phba->rxtable_idx_head, 0);
+- atomic_set(&phba->rxtable_idx_tail, 0);
++
+ return 0;
+ }
+
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 991eb01bb1e08..0ccaefc35d6b4 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -3608,6 +3608,10 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ pm8001_dbg(pm8001_ha, FAIL, " TASK NULL. RETURNING !!!\n");
+ return -1;
+ }
++
++ if (t->task_proto == SAS_PROTOCOL_INTERNAL_ABORT)
++ atomic_dec(&pm8001_dev->running_req);
++
+ ts = &t->task_status;
+ if (status != 0)
+ pm8001_dbg(pm8001_ha, FAIL, "task abort failed status 0x%x ,tag = 0x%x, scp= 0x%x\n",
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index bbc4d5890ae6a..e045c6e250902 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1921,6 +1921,27 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ fc_vport_setlink(vn_port);
+ }
+
++ /* Set symbolic node name */
++ if (base_qedf->pdev->device == QL45xxx)
++ snprintf(fc_host_symbolic_name(vn_port->host), 256,
++ "Marvell FastLinQ 45xxx FCoE v%s", QEDF_VERSION);
++
++ if (base_qedf->pdev->device == QL41xxx)
++ snprintf(fc_host_symbolic_name(vn_port->host), 256,
++ "Marvell FastLinQ 41xxx FCoE v%s", QEDF_VERSION);
++
++ /* Set supported speed */
++ fc_host_supported_speeds(vn_port->host) = n_port->link_supported_speeds;
++
++ /* Set speed */
++ vn_port->link_speed = n_port->link_speed;
++
++ /* Set port type */
++ fc_host_port_type(vn_port->host) = FC_PORTTYPE_NPIV;
++
++ /* Set maxframe size */
++ fc_host_maxframe_size(vn_port->host) = n_port->mfs;
++
+ QEDF_INFO(&(base_qedf->dbg_ctx), QEDF_LOG_NPIV, "vn_port=%p.\n",
+ vn_port);
+
+diff --git a/drivers/slimbus/Kconfig b/drivers/slimbus/Kconfig
+index 1235b7dc8496c..2ed821f75816c 100644
+--- a/drivers/slimbus/Kconfig
++++ b/drivers/slimbus/Kconfig
+@@ -22,7 +22,8 @@ config SLIM_QCOM_CTRL
+
+ config SLIM_QCOM_NGD_CTRL
+ tristate "Qualcomm SLIMbus Satellite Non-Generic Device Component"
+- depends on HAS_IOMEM && DMA_ENGINE && NET && QCOM_RPROC_COMMON
++ depends on HAS_IOMEM && DMA_ENGINE && NET
++ depends on QCOM_RPROC_COMMON || COMPILE_TEST
+ depends on ARCH_QCOM || COMPILE_TEST
+ select QCOM_QMI_HELPERS
+ select QCOM_PDR_HELPERS
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 0aa8408464add..d29a1a9cf12fa 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1470,7 +1470,13 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+ ngd->pdev->dev.of_node = node;
+ ctrl->ngd = ngd;
+
+- platform_device_add(ngd->pdev);
++ ret = platform_device_add(ngd->pdev);
++ if (ret) {
++ platform_device_put(ngd->pdev);
++ kfree(ngd);
++ of_node_put(node);
++ return ret;
++ }
+ ngd->base = ctrl->base + ngd->id * data->offset +
+ (ngd->id - 1) * data->size;
+
+@@ -1576,17 +1582,27 @@ static int qcom_slim_ngd_ctrl_probe(struct platform_device *pdev)
+ ctrl->pdr = pdr_handle_alloc(slim_pd_status, ctrl);
+ if (IS_ERR(ctrl->pdr)) {
+ dev_err(dev, "Failed to init PDR handle\n");
+- return PTR_ERR(ctrl->pdr);
++ ret = PTR_ERR(ctrl->pdr);
++ goto err_pdr_alloc;
+ }
+
+ pds = pdr_add_lookup(ctrl->pdr, "avs/audio", "msm/adsp/audio_pd");
+ if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) {
++ ret = PTR_ERR(pds);
+ dev_err(dev, "pdr add lookup failed: %d\n", ret);
+- return PTR_ERR(pds);
++ goto err_pdr_lookup;
+ }
+
+ platform_driver_register(&qcom_slim_ngd_driver);
+ return of_qcom_slim_ngd_register(dev, ctrl);
++
++err_pdr_alloc:
++ qcom_unregister_ssr_notifier(ctrl->notifier, &ctrl->nb);
++
++err_pdr_lookup:
++ pdr_handle_release(ctrl->pdr);
++
++ return ret;
+ }
+
+ static int qcom_slim_ngd_ctrl_remove(struct platform_device *pdev)
+diff --git a/drivers/soc/qcom/smem_state.c b/drivers/soc/qcom/smem_state.c
+index 31faf4aa868e6..e848cc9a3cf80 100644
+--- a/drivers/soc/qcom/smem_state.c
++++ b/drivers/soc/qcom/smem_state.c
+@@ -136,6 +136,7 @@ static void qcom_smem_state_release(struct kref *ref)
+ struct qcom_smem_state *state = container_of(ref, struct qcom_smem_state, refcount);
+
+ list_del(&state->list);
++ of_node_put(state->of_node);
+ kfree(state);
+ }
+
+@@ -205,7 +206,7 @@ struct qcom_smem_state *qcom_smem_state_register(struct device_node *of_node,
+
+ kref_init(&state->refcount);
+
+- state->of_node = of_node;
++ state->of_node = of_node_get(of_node);
+ state->ops = *ops;
+ state->priv = priv;
+
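The smem_state fix pairs an of_node_get() when the pointer is stored with an of_node_put() in the kref release handler: an object that caches a pointer to a refcounted resource must hold its own reference for exactly that window. The rule in miniature (toy refcount, not the kernel API):

#include <stdlib.h>

struct node {
	int refs;
};

static struct node *node_get(struct node *n)
{
	n->refs++;
	return n;
}

static void node_put(struct node *n)
{
	if (--n->refs == 0)
		free(n);
}

struct state {
	struct node *node;	/* owns one reference while registered */
};

static void state_register(struct state *s, struct node *n)
{
	s->node = node_get(n);	/* storing the pointer => take a ref */
}

static void state_release(struct state *s)
{
	node_put(s->node);	/* the last user may free the node here */
	s->node = NULL;
}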
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 9df9bba242f3e..3e8994d6110e6 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -526,7 +526,7 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ for (id = 0; id < smsm->num_hosts; id++) {
+ ret = smsm_parse_ipc(smsm, id);
+ if (ret < 0)
+- return ret;
++ goto out_put;
+ }
+
+ /* Acquire the main SMSM state vector */
+@@ -534,13 +534,14 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ smsm->num_entries * sizeof(u32));
+ if (ret < 0 && ret != -EEXIST) {
+ dev_err(&pdev->dev, "unable to allocate shared state entry\n");
+- return ret;
++ goto out_put;
+ }
+
+ states = qcom_smem_get(QCOM_SMEM_HOST_ANY, SMEM_SMSM_SHARED_STATE, NULL);
+ if (IS_ERR(states)) {
+ dev_err(&pdev->dev, "Unable to acquire shared state entry\n");
+- return PTR_ERR(states);
++ ret = PTR_ERR(states);
++ goto out_put;
+ }
+
+ /* Acquire the list of interrupt mask vectors */
+@@ -548,13 +549,14 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ ret = qcom_smem_alloc(QCOM_SMEM_HOST_ANY, SMEM_SMSM_CPU_INTR_MASK, size);
+ if (ret < 0 && ret != -EEXIST) {
+ dev_err(&pdev->dev, "unable to allocate smsm interrupt mask\n");
+- return ret;
++ goto out_put;
+ }
+
+ intr_mask = qcom_smem_get(QCOM_SMEM_HOST_ANY, SMEM_SMSM_CPU_INTR_MASK, NULL);
+ if (IS_ERR(intr_mask)) {
+ dev_err(&pdev->dev, "unable to acquire shared memory interrupt mask\n");
+- return PTR_ERR(intr_mask);
++ ret = PTR_ERR(intr_mask);
++ goto out_put;
+ }
+
+ /* Setup the reference to the local state bits */
+@@ -565,7 +567,8 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ smsm->state = qcom_smem_state_register(local_node, &smsm_state_ops, smsm);
+ if (IS_ERR(smsm->state)) {
+ dev_err(smsm->dev, "failed to register qcom_smem_state\n");
+- return PTR_ERR(smsm->state);
++ ret = PTR_ERR(smsm->state);
++ goto out_put;
+ }
+
+ /* Register handlers for remote processor entries of interest. */
+@@ -595,16 +598,19 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ }
+
+ platform_set_drvdata(pdev, smsm);
++ of_node_put(local_node);
+
+ return 0;
+
+ unwind_interfaces:
++ of_node_put(node);
+ for (id = 0; id < smsm->num_entries; id++)
+ if (smsm->entries[id].domain)
+ irq_domain_remove(smsm->entries[id].domain);
+
+ qcom_smem_state_unregister(smsm->state);
+-
++out_put:
++ of_node_put(local_node);
+ return ret;
+ }
+
+diff --git a/drivers/soc/tegra/Kconfig b/drivers/soc/tegra/Kconfig
+index 5725c8ef0406a..6f601227da3cb 100644
+--- a/drivers/soc/tegra/Kconfig
++++ b/drivers/soc/tegra/Kconfig
+@@ -136,7 +136,6 @@ config SOC_TEGRA_FUSE
+ def_bool y
+ depends on ARCH_TEGRA
+ select SOC_BUS
+- select TEGRA20_APB_DMA if ARCH_TEGRA_2x_SOC
+
+ config SOC_TEGRA_FLOWCTRL
+ bool
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index 4fbb19557f5ed..42c5fae80efbf 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -544,9 +544,12 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
+ return SDW_CMD_IGNORED;
+ }
+
+- /* fill response */
+- for (i = 0; i < count; i++)
+- msg->buf[i + offset] = FIELD_GET(CDNS_MCP_RESP_RDATA, cdns->response_buf[i]);
++ if (msg->flags == SDW_MSG_FLAG_READ) {
++ /* fill response */
++ for (i = 0; i < count; i++)
++ msg->buf[i + offset] = FIELD_GET(CDNS_MCP_RESP_RDATA,
++ cdns->response_buf[i]);
++ }
+
+ return SDW_CMD_OK;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 505c5ef061e3f..865d91ecb8627 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -1401,7 +1401,6 @@ int intel_link_startup(struct auxiliary_device *auxdev)
+ ret = intel_register_dai(sdw);
+ if (ret) {
+ dev_err(dev, "DAI registration failed: %d\n", ret);
+- snd_soc_unregister_component(dev);
+ goto err_interrupt;
+ }
+
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 72b1a5a2298c5..106c09ffa4251 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1619,7 +1619,7 @@ static int cqspi_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
+- return ret;
++ goto probe_pm_failed;
+
+ ret = clk_prepare_enable(cqspi->clk);
+ if (ret) {
+@@ -1712,6 +1712,7 @@ probe_reset_failed:
+ clk_disable_unprepare(cqspi->clk);
+ probe_clk_failed:
+ pm_runtime_put_sync(dev);
++probe_pm_failed:
+ pm_runtime_disable(dev);
+ return ret;
+ }
+diff --git a/drivers/spi/spi-dw-bt1.c b/drivers/spi/spi-dw-bt1.c
+index c065534161237..3fb89dee595e7 100644
+--- a/drivers/spi/spi-dw-bt1.c
++++ b/drivers/spi/spi-dw-bt1.c
+@@ -293,8 +293,10 @@ static int dw_spi_bt1_probe(struct platform_device *pdev)
+ pm_runtime_enable(&pdev->dev);
+
+ ret = dw_spi_add_host(&pdev->dev, dws);
+- if (ret)
++ if (ret) {
++ pm_runtime_disable(&pdev->dev);
+ goto err_disable_clk;
++ }
+
+ platform_set_drvdata(pdev, dwsbt1);
+
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index e4cb52e1fe261..6974a1c947aad 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -537,7 +537,7 @@ static unsigned long meson_spicc_pow2_recalc_rate(struct clk_hw *hw,
+ struct clk_divider *divider = to_clk_divider(hw);
+ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+
+- if (!spicc->master->cur_msg || !spicc->master->busy)
++ if (!spicc->master->cur_msg)
+ return 0;
+
+ return clk_divider_ops.recalc_rate(hw, parent_rate);
+@@ -549,7 +549,7 @@ static int meson_spicc_pow2_determine_rate(struct clk_hw *hw,
+ struct clk_divider *divider = to_clk_divider(hw);
+ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+
+- if (!spicc->master->cur_msg || !spicc->master->busy)
++ if (!spicc->master->cur_msg)
+ return -EINVAL;
+
+ return clk_divider_ops.determine_rate(hw, req);
+@@ -561,7 +561,7 @@ static int meson_spicc_pow2_set_rate(struct clk_hw *hw, unsigned long rate,
+ struct clk_divider *divider = to_clk_divider(hw);
+ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
+
+- if (!spicc->master->cur_msg || !spicc->master->busy)
++ if (!spicc->master->cur_msg)
+ return -EINVAL;
+
+ return clk_divider_ops.set_rate(hw, rate, parent_rate);
+diff --git a/drivers/spi/spi-mt7621.c b/drivers/spi/spi-mt7621.c
+index b4b9b7309b5e9..351b0ef52bbc8 100644
+--- a/drivers/spi/spi-mt7621.c
++++ b/drivers/spi/spi-mt7621.c
+@@ -340,11 +340,9 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ return PTR_ERR(base);
+
+ clk = devm_clk_get(&pdev->dev, NULL);
+- if (IS_ERR(clk)) {
+- dev_err(&pdev->dev, "unable to get SYS clock, err=%d\n",
+- status);
+- return PTR_ERR(clk);
+- }
++ if (IS_ERR(clk))
++ return dev_err_probe(&pdev->dev, PTR_ERR(clk),
++ "unable to get SYS clock\n");
+
+ status = clk_prepare_enable(clk);
+ if (status)
+diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
+index 20b0471729651..061f7394e5b9b 100644
+--- a/drivers/spi/spi-omap-100k.c
++++ b/drivers/spi/spi-omap-100k.c
+@@ -412,6 +412,7 @@ static int omap1_spi100k_probe(struct platform_device *pdev)
+ return status;
+
+ err_fck:
++ pm_runtime_disable(&pdev->dev);
+ clk_disable_unprepare(spi100k->fck);
+ err_ick:
+ clk_disable_unprepare(spi100k->ick);
+diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
+index 00d6084306b4a..7d89510dc3f00 100644
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -1198,8 +1198,10 @@ static int spi_qup_pm_resume_runtime(struct device *device)
+ return ret;
+
+ ret = clk_prepare_enable(controller->cclk);
+- if (ret)
++ if (ret) {
++ clk_disable_unprepare(controller->iclk);
+ return ret;
++ }
+
+ /* Disable clocks auto gaiting */
+ config = readl_relaxed(controller->base + QUP_CONFIG);
+@@ -1245,14 +1247,25 @@ static int spi_qup_resume(struct device *device)
+ return ret;
+
+ ret = clk_prepare_enable(controller->cclk);
+- if (ret)
++ if (ret) {
++ clk_disable_unprepare(controller->iclk);
+ return ret;
++ }
+
+ ret = spi_qup_set_state(controller, QUP_STATE_RESET);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+- return spi_master_resume(master);
++ ret = spi_master_resume(master);
++ if (ret)
++ goto disable_clk;
++
++ return 0;
++
++disable_clk:
++ clk_disable_unprepare(controller->cclk);
++ clk_disable_unprepare(controller->iclk);
++ return ret;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index 8fa21afc6a35b..b77c98bcf93f0 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -83,6 +83,7 @@
+ #define S3C64XX_SPI_ST_TX_FIFORDY (1<<0)
+
+ #define S3C64XX_SPI_PACKET_CNT_EN (1<<16)
++#define S3C64XX_SPI_PACKET_CNT_MASK GENMASK(15, 0)
+
+ #define S3C64XX_SPI_PND_TX_UNDERRUN_CLR (1<<4)
+ #define S3C64XX_SPI_PND_TX_OVERRUN_CLR (1<<3)
+@@ -663,6 +664,13 @@ static int s3c64xx_spi_prepare_message(struct spi_master *master,
+ return 0;
+ }
+
++static size_t s3c64xx_spi_max_transfer_size(struct spi_device *spi)
++{
++ struct spi_controller *ctlr = spi->controller;
++
++ return ctlr->can_dma ? S3C64XX_SPI_PACKET_CNT_MASK : SIZE_MAX;
++}
++
+ static int s3c64xx_spi_transfer_one(struct spi_master *master,
+ struct spi_device *spi,
+ struct spi_transfer *xfer)
+@@ -1100,6 +1108,7 @@ static int s3c64xx_spi_probe(struct platform_device *pdev)
+ master->prepare_transfer_hardware = s3c64xx_spi_prepare_transfer;
+ master->prepare_message = s3c64xx_spi_prepare_message;
+ master->transfer_one = s3c64xx_spi_transfer_one;
++ master->max_transfer_size = s3c64xx_spi_max_transfer_size;
+ master->num_chipselect = sci->num_cs;
+ master->use_gpio_descriptors = true;
+ master->dma_alignment = 8;
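The new s3c64xx hook advertises the hardware limit to the SPI core: with DMA the transfer length is bounded by the 16-bit packet counter, i.e. GENMASK(15, 0) == 0xFFFF units, and the core splits anything larger. The arithmetic, stand-alone:

#include <stdint.h>
#include <stdio.h>

/* 32-bit GENMASK(h, l): bits l..h set, as in include/linux/bits.h. */
#define GENMASK32(h, l) ((~0u >> (31 - (h))) & (~0u << (l)))

int main(void)
{
	uint32_t packet_cnt_mask = GENMASK32(15, 0);

	printf("max DMA transfer: %u units\n", packet_cnt_mask);	/* 65535 */
	return 0;
}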
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 2c616024f7c02..f595e516058c2 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1047,6 +1047,8 @@ void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
+ if (sgt->orig_nents) {
+ dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+ sg_free_table(sgt);
++ sgt->orig_nents = 0;
++ sgt->nents = 0;
+ }
+ }
+
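The two lines added to spi_unmap_buf() make the unmap idempotent: once orig_nents is zeroed after sg_free_table(), a second call sees an empty table and returns without double-unmapping. The same defensive shape in plain C (a sketch, not the SPI core):

#include <stdlib.h>

struct mapping {
	void *buf;
	unsigned int nents;	/* nonzero only while mapped */
};

static void mapping_release(struct mapping *m)
{
	if (!m->nents)
		return;		/* already torn down: no-op */

	free(m->buf);
	m->buf = NULL;
	m->nents = 0;		/* makes a repeated call harmless */
}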
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index 2113be40b5a97..58f580e7aacc5 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -992,7 +992,8 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pmic_arb)
+ * version 5, there is more than one APID mapped to each PPID.
+ * The owner field for each of these mappings specifies the EE which is
+ * allowed to write to the APID. The owner of the last (highest) APID
+- * for a given PPID will receive interrupts from the PPID.
++ * which has the IRQ owner bit set for a given PPID will receive
++ * interrupts from the PPID.
+ */
+ for (i = 0; ; i++, apidd++) {
+ offset = pmic_arb->ver_ops->apid_map_offset(i);
+@@ -1015,16 +1016,16 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pmic_arb)
+ apid = pmic_arb->ppid_to_apid[ppid] & ~PMIC_ARB_APID_VALID;
+ prev_apidd = &pmic_arb->apid_data[apid];
+
+- if (valid && is_irq_ee &&
+- prev_apidd->write_ee == pmic_arb->ee) {
++ if (!valid || apidd->write_ee == pmic_arb->ee) {
++ /* First PPID mapping or one for this EE */
++ pmic_arb->ppid_to_apid[ppid] = i | PMIC_ARB_APID_VALID;
++ } else if (valid && is_irq_ee &&
++ prev_apidd->write_ee == pmic_arb->ee) {
+ /*
+ * Duplicate PPID mapping after the one for this EE;
+ * override the irq owner
+ */
+ prev_apidd->irq_ee = apidd->irq_ee;
+- } else if (!valid || is_irq_ee) {
+- /* First PPID mapping or duplicate for another EE */
+- pmic_arb->ppid_to_apid[ppid] = i | PMIC_ARB_APID_VALID;
+ }
+
+ apidd->ppid = ppid;
+diff --git a/drivers/staging/greybus/audio_helper.c b/drivers/staging/greybus/audio_helper.c
+index 843760675876a..79bb2bd8e0007 100644
+--- a/drivers/staging/greybus/audio_helper.c
++++ b/drivers/staging/greybus/audio_helper.c
+@@ -3,7 +3,6 @@
+ * Greybus Audio Sound SoC helper APIs
+ */
+
+-#include <linux/debugfs.h>
+ #include <sound/core.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
+@@ -116,10 +115,6 @@ int gbaudio_dapm_free_controls(struct snd_soc_dapm_context *dapm,
+ {
+ int i;
+ struct snd_soc_dapm_widget *w, *next_w;
+-#ifdef CONFIG_DEBUG_FS
+- struct dentry *parent = dapm->debugfs_dapm;
+- struct dentry *debugfs_w = NULL;
+-#endif
+
+ mutex_lock(&dapm->card->dapm_mutex);
+ for (i = 0; i < num; i++) {
+@@ -139,12 +134,6 @@ int gbaudio_dapm_free_controls(struct snd_soc_dapm_context *dapm,
+ continue;
+ }
+ widget++;
+-#ifdef CONFIG_DEBUG_FS
+- if (!parent)
+- debugfs_w = debugfs_lookup(w->name, parent);
+- debugfs_remove(debugfs_w);
+- debugfs_w = NULL;
+-#endif
+ gbaudio_dapm_free_widget(w);
+ }
+ mutex_unlock(&dapm->card->dapm_mutex);
+diff --git a/drivers/staging/media/meson/vdec/vdec_hevc.c b/drivers/staging/media/meson/vdec/vdec_hevc.c
+index 9530e580e57a2..afced435c9070 100644
+--- a/drivers/staging/media/meson/vdec/vdec_hevc.c
++++ b/drivers/staging/media/meson/vdec/vdec_hevc.c
+@@ -167,8 +167,12 @@ static int vdec_hevc_start(struct amvdec_session *sess)
+
+ clk_set_rate(core->vdec_hevc_clk, 666666666);
+ ret = clk_prepare_enable(core->vdec_hevc_clk);
+- if (ret)
++ if (ret) {
++ if (core->platform->revision == VDEC_REVISION_G12A ||
++ core->platform->revision == VDEC_REVISION_SM1)
++ clk_disable_unprepare(core->vdec_hevcf_clk);
+ return ret;
++ }
+
+ if (core->platform->revision == VDEC_REVISION_SM1)
+ regmap_update_bits(core->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
+index 68b3dcdb5df38..e88a1fe1315cd 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
+@@ -422,6 +422,8 @@ static int cedrus_probe(struct platform_device *pdev)
+ if (!dev)
+ return -ENOMEM;
+
++ platform_set_drvdata(pdev, dev);
++
+ dev->vfd = cedrus_video_device;
+ dev->dev = &pdev->dev;
+ dev->pdev = pdev;
+@@ -495,8 +497,6 @@ static int cedrus_probe(struct platform_device *pdev)
+ goto err_m2m_mc;
+ }
+
+- platform_set_drvdata(pdev, dev);
+-
+ return 0;
+
+ err_m2m_mc:
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 04419381ea56b..9b6c2eff35af1 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -234,8 +234,9 @@ static void cedrus_h265_skip_bits(struct cedrus_dev *dev, int num)
+ cedrus_write(dev, VE_DEC_H265_TRIGGER,
+ VE_DEC_H265_TRIGGER_FLUSH_BITS |
+ VE_DEC_H265_TRIGGER_TYPE_N_BITS(tmp));
+- while (cedrus_read(dev, VE_DEC_H265_STATUS) & VE_DEC_H265_STATUS_VLD_BUSY)
+- udelay(1);
++
++ if (cedrus_wait_for(dev, VE_DEC_H265_STATUS, VE_DEC_H265_STATUS_VLD_BUSY))
++ dev_err_ratelimited(dev->dev, "timed out waiting to skip bits\n");
+
+ count += tmp;
+ }
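
The cedrus_h265 hunk above replaces an unbounded udelay() busy-wait on VE_DEC_H265_STATUS with cedrus_wait_for(), which gives up and reports a timeout instead of hanging the CPU. For reference, a minimal kernel-style sketch of the same bounded-poll idiom using the generic readl_poll_timeout() helper from <linux/iopoll.h>; the register offset and bit name here are hypothetical, not taken from the driver:

    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/iopoll.h>

    #define STATUS_REG   0x28        /* hypothetical status register offset */
    #define STATUS_BUSY  BIT(0)      /* hypothetical busy flag */

    /* Poll every ~1us until BUSY clears; return -ETIMEDOUT after 100ms. */
    static int wait_until_idle(void __iomem *base)
    {
        u32 val;

        return readl_poll_timeout(base + STATUS_REG, val,
                                  !(val & STATUS_BUSY), 1, 100000);
    }

readl_poll_timeout() may sleep between polls (readl_poll_timeout_atomic() exists for atomic context); on -ETIMEDOUT the caller can log a rate-limited error, as the patched cedrus_h265_skip_bits() does.
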
+diff --git a/drivers/staging/rtl8723bs/core/rtw_cmd.c b/drivers/staging/rtl8723bs/core/rtw_cmd.c
+index b4170f64d1186..03c2c66dbf665 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_cmd.c
++++ b/drivers/staging/rtl8723bs/core/rtw_cmd.c
+@@ -161,8 +161,6 @@ static struct cmd_hdl wlancmds[] = {
+
+ int rtw_init_cmd_priv(struct cmd_priv *pcmdpriv)
+ {
+- int res = 0;
+-
+ init_completion(&pcmdpriv->cmd_queue_comp);
+ init_completion(&pcmdpriv->terminate_cmdthread_comp);
+
+@@ -175,18 +173,16 @@ int rtw_init_cmd_priv(struct cmd_priv *pcmdpriv)
+
+ pcmdpriv->cmd_allocated_buf = rtw_zmalloc(MAX_CMDSZ + CMDBUFF_ALIGN_SZ);
+
+- if (!pcmdpriv->cmd_allocated_buf) {
+- res = -ENOMEM;
+- goto exit;
+- }
++ if (!pcmdpriv->cmd_allocated_buf)
++ return -ENOMEM;
+
+ pcmdpriv->cmd_buf = pcmdpriv->cmd_allocated_buf + CMDBUFF_ALIGN_SZ - ((SIZE_PTR)(pcmdpriv->cmd_allocated_buf) & (CMDBUFF_ALIGN_SZ-1));
+
+ pcmdpriv->rsp_allocated_buf = rtw_zmalloc(MAX_RSPSZ + 4);
+
+ if (!pcmdpriv->rsp_allocated_buf) {
+- res = -ENOMEM;
+- goto exit;
++ kfree(pcmdpriv->cmd_allocated_buf);
++ return -ENOMEM;
+ }
+
+ pcmdpriv->rsp_buf = pcmdpriv->rsp_allocated_buf + 4 - ((SIZE_PTR)(pcmdpriv->rsp_allocated_buf) & 3);
+@@ -196,8 +192,8 @@ int rtw_init_cmd_priv(struct cmd_priv *pcmdpriv)
+ pcmdpriv->rsp_cnt = 0;
+
+ mutex_init(&pcmdpriv->sctx_mutex);
+-exit:
+- return res;
++
++ return 0;
+ }
+
+ static void c2h_wk_callback(struct work_struct *work);
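
The rtw_cmd.c hunk above fixes a leak: when the second allocation (rsp_allocated_buf) fails, the first one (cmd_allocated_buf) is now freed before returning, instead of both failure paths funnelling through a shared exit label that freed nothing. A minimal userspace sketch of the same rollback pattern, with generic names standing in for the driver's buffers:

    #include <stdlib.h>

    struct two_bufs {
        void *cmd;
        void *rsp;
    };

    /* Allocate both buffers, or free the first if the second fails. */
    static int two_bufs_init(struct two_bufs *p, size_t len)
    {
        p->cmd = malloc(len);
        if (!p->cmd)
            return -1;

        p->rsp = malloc(len);
        if (!p->rsp) {
            free(p->cmd);    /* roll back the earlier allocation */
            p->cmd = NULL;
            return -1;
        }
        return 0;
    }
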
+diff --git a/drivers/staging/rtl8723bs/os_dep/os_intfs.c b/drivers/staging/rtl8723bs/os_dep/os_intfs.c
+index 380d8c9e1239e..68bba3c0e757a 100644
+--- a/drivers/staging/rtl8723bs/os_dep/os_intfs.c
++++ b/drivers/staging/rtl8723bs/os_dep/os_intfs.c
+@@ -664,51 +664,36 @@ void rtw_reset_drv_sw(struct adapter *padapter)
+
+ u8 rtw_init_drv_sw(struct adapter *padapter)
+ {
+- u8 ret8 = _SUCCESS;
+-
+ rtw_init_default_value(padapter);
+
+ rtw_init_hal_com_default_value(padapter);
+
+- if (rtw_init_cmd_priv(&padapter->cmdpriv)) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (rtw_init_cmd_priv(&padapter->cmdpriv))
++ return _FAIL;
+
+ padapter->cmdpriv.padapter = padapter;
+
+- if (rtw_init_evt_priv(&padapter->evtpriv)) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (rtw_init_evt_priv(&padapter->evtpriv))
++ goto free_cmd_priv;
+
+-
+- if (rtw_init_mlme_priv(padapter) == _FAIL) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (rtw_init_mlme_priv(padapter) == _FAIL)
++ goto free_evt_priv;
+
+ init_mlme_ext_priv(padapter);
+
+- if (_rtw_init_xmit_priv(&padapter->xmitpriv, padapter) == _FAIL) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (_rtw_init_xmit_priv(&padapter->xmitpriv, padapter) == _FAIL)
++ goto free_mlme_ext;
+
+- if (_rtw_init_recv_priv(&padapter->recvpriv, padapter) == _FAIL) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (_rtw_init_recv_priv(&padapter->recvpriv, padapter) == _FAIL)
++ goto free_xmit_priv;
+ /* add for CONFIG_IEEE80211W, none 11w also can use */
+ spin_lock_init(&padapter->security_key_mutex);
+
+ /* We don't need to memset padapter->XXX to zero, because adapter is allocated by vzalloc(). */
+ /* memset((unsigned char *)&padapter->securitypriv, 0, sizeof (struct security_priv)); */
+
+- if (_rtw_init_sta_priv(&padapter->stapriv) == _FAIL) {
+- ret8 = _FAIL;
+- goto exit;
+- }
++ if (_rtw_init_sta_priv(&padapter->stapriv) == _FAIL)
++ goto free_recv_priv;
+
+ padapter->stapriv.padapter = padapter;
+ padapter->setband = GHZ24_50;
+@@ -719,9 +704,26 @@ u8 rtw_init_drv_sw(struct adapter *padapter)
+
+ rtw_hal_dm_init(padapter);
+
+-exit:
++ return _SUCCESS;
++
++free_recv_priv:
++ _rtw_free_recv_priv(&padapter->recvpriv);
++
++free_xmit_priv:
++ _rtw_free_xmit_priv(&padapter->xmitpriv);
++
++free_mlme_ext:
++ free_mlme_ext_priv(&padapter->mlmeextpriv);
+
+- return ret8;
++ rtw_free_mlme_priv(&padapter->mlmepriv);
++
++free_evt_priv:
++ rtw_free_evt_priv(&padapter->evtpriv);
++
++free_cmd_priv:
++ rtw_free_cmd_priv(&padapter->cmdpriv);
++
++ return _FAIL;
+ }
+
+ void rtw_cancel_all_timer(struct adapter *padapter)
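
rtw_init_drv_sw() above is reworked into the standard kernel error-unwind ladder: each successfully initialized subsystem gets a label, and a failure jumps to the label that tears down everything initialized so far, in reverse order. A self-contained sketch of the idiom with hypothetical init_a/init_b/init_c stages (not the driver's real functions):

    #include <errno.h>

    struct ctx { int a, b, c; };

    /* Hypothetical subsystem init/teardown pairs, stubbed for illustration. */
    static int init_a(struct ctx *c) { c->a = 1; return 0; }
    static int init_b(struct ctx *c) { c->b = 1; return 0; }
    static int init_c(struct ctx *c) { c->c = 1; return 0; }
    static void teardown_b(struct ctx *c) { c->b = 0; }
    static void teardown_a(struct ctx *c) { c->a = 0; }

    static int driver_sw_init(struct ctx *c)
    {
        if (init_a(c))
            return -ENOMEM;
        if (init_b(c))
            goto free_a;
        if (init_c(c))
            goto free_b;

        return 0;            /* everything up, nothing to unwind */

    free_b:                  /* labels tear down in reverse order of setup */
        teardown_b(c);
    free_a:
        teardown_a(c);
        return -ENOMEM;
    }

    int main(void)
    {
        struct ctx c = { 0 };

        return driver_sw_init(&c) ? 1 : 0;
    }
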
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index afaf331fe125d..a91c834c96c08 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -564,7 +564,7 @@ err_free_rd:
+ kfree(desc->rd_info);
+
+ err_free_desc:
+- while (--i) {
++ while (i--) {
+ desc = &priv->aRD0Ring[i];
+ device_free_rx_buf(priv, desc);
+ kfree(desc->rd_info);
+@@ -610,7 +610,7 @@ err_free_rd:
+ kfree(desc->rd_info);
+
+ err_free_desc:
+- while (--i) {
++ while (i--) {
+ desc = &priv->aRD1Ring[i];
+ device_free_rx_buf(priv, desc);
+ kfree(desc->rd_info);
+@@ -675,7 +675,7 @@ static int device_init_td0_ring(struct vnt_private *priv)
+ return 0;
+
+ err_free_desc:
+- while (--i) {
++ while (i--) {
+ desc = &priv->apTD0Rings[i];
+ kfree(desc->td_info);
+ }
+@@ -715,7 +715,7 @@ static int device_init_td1_ring(struct vnt_private *priv)
+ return 0;
+
+ err_free_desc:
+- while (--i) {
++ while (i--) {
+ desc = &priv->apTD1Rings[i];
+ kfree(desc->td_info);
+ }
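
All four vt6655 hunks fix the same cleanup-loop off-by-one: `while (--i)` stops before index 0 is freed, and if the very first allocation failed with i == 0, the pre-decrement makes i negative and the body runs once with an invalid index. `while (i--)` visits i-1 down to 0 and does nothing when i is already 0. A tiny runnable demonstration:

    #include <stdio.h>

    int main(void)
    {
        int i;

        i = 3;               /* entries 0..2 were set up before the failure */
        printf("while (--i):");
        while (--i)
            printf(" %d", i);    /* 2 1 -- entry 0 is never cleaned up */

        i = 3;
        printf("\nwhile (i--):");
        while (i--)
            printf(" %d", i);    /* 2 1 0 -- every entry is cleaned up */
        printf("\n");
        return 0;
    }
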
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index dc19e7c80751a..ca5746f53d9ea 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -530,17 +530,17 @@ __cpufreq_cooling_register(struct device_node *np,
+ struct thermal_cooling_device_ops *cooling_ops;
+ char *name;
+
++ if (IS_ERR_OR_NULL(policy)) {
++ pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy);
++ return ERR_PTR(-EINVAL);
++ }
++
+ dev = get_cpu_device(policy->cpu);
+ if (unlikely(!dev)) {
+ pr_warn("No cpu device for cpu %d\n", policy->cpu);
+ return ERR_PTR(-ENODEV);
+ }
+
+- if (IS_ERR_OR_NULL(policy)) {
+- pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy);
+- return ERR_PTR(-EINVAL);
+- }
+-
+ i = cpufreq_table_count_valid_entries(policy);
+ if (!i) {
+ pr_debug("%s: CPUFreq table not found or has no valid entries\n",
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index c841ab37e7c6d..46cd799af148d 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -532,8 +532,10 @@ static int start_power_clamp(void)
+
+ /* prefer BSP */
+ control_cpu = 0;
+- if (!cpu_online(control_cpu))
+- control_cpu = smp_processor_id();
++ if (!cpu_online(control_cpu)) {
++ control_cpu = get_cpu();
++ put_cpu();
++ }
+
+ clamping = true;
+ schedule_delayed_work(&poll_pkg_cstate_work, 0);
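
The powerclamp hunk replaces smp_processor_id(), which is not safe in preemptible context because the task can migrate to another CPU the instant the call returns (and DEBUG_PREEMPT warns about it), with get_cpu()/put_cpu(), which pin the task for the duration of the read. A minimal kernel-style sketch of the pattern:

    #include <linux/smp.h>

    static int pick_online_cpu(void)
    {
        int cpu;

        cpu = get_cpu();    /* disables preemption, returns current CPU */
        /* cpu cannot change while preemption is off */
        put_cpu();          /* re-enables preemption */

        return cpu;         /* from here on, only a hint */
    }

Once put_cpu() runs the value is only a hint, which is sufficient here: powerclamp merely needs some CPU that was online at the time of the check.
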
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index f136cb3502384..327f37202c69f 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -604,7 +604,7 @@ static const struct tsens_ops ops_8939 = {
+ struct tsens_plat_data data_8939 = {
+ .num_sensors = 10,
+ .ops = &ops_8939,
+- .hw_ids = (unsigned int []){ 0, 1, 2, 4, 5, 6, 7, 8, 9, 10 },
++ .hw_ids = (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, 10 },
+
+ .feat = &tsens_v0_1_feat,
+ .fields = tsens_v0_1_regfields,
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index 1333b158a95eb..407a89047473b 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -28,7 +28,11 @@
+ #define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring")
+
+ #define RING_FIRST_USABLE_HOPID 1
+-
++/*
+ * Used with QUIRK_E2E to specify an unused HopID to which the Rx
+ * credits are transferred.
++ */
++#define RING_E2E_RESERVED_HOPID RING_FIRST_USABLE_HOPID
+ /*
+ * Minimal number of vectors when we use MSI-X. Two for control channel
+ * Rx/Tx and the rest four are for cross domain DMA paths.
+@@ -38,7 +42,9 @@
+
+ #define NHI_MAILBOX_TIMEOUT 500 /* ms */
+
++/* Host interface quirks */
+ #define QUIRK_AUTO_CLEAR_INT BIT(0)
++#define QUIRK_E2E BIT(1)
+
+ static int ring_interrupt_index(struct tb_ring *ring)
+ {
+@@ -458,8 +464,18 @@ static void ring_release_msix(struct tb_ring *ring)
+
+ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
+ {
++ unsigned int start_hop = RING_FIRST_USABLE_HOPID;
+ int ret = 0;
+
++ if (nhi->quirks & QUIRK_E2E) {
++ start_hop = RING_FIRST_USABLE_HOPID + 1;
++ if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
++ dev_dbg(&nhi->pdev->dev, "quirking E2E TX HopID %u -> %u\n",
++ ring->e2e_tx_hop, RING_E2E_RESERVED_HOPID);
++ ring->e2e_tx_hop = RING_E2E_RESERVED_HOPID;
++ }
++ }
++
+ spin_lock_irq(&nhi->lock);
+
+ if (ring->hop < 0) {
+@@ -469,7 +485,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
+ * Automatically allocate HopID from the non-reserved
+ * range 1 .. hop_count - 1.
+ */
+- for (i = RING_FIRST_USABLE_HOPID; i < nhi->hop_count; i++) {
++ for (i = start_hop; i < nhi->hop_count; i++) {
+ if (ring->is_tx) {
+ if (!nhi->tx_rings[i]) {
+ ring->hop = i;
+@@ -484,6 +500,11 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
+ }
+ }
+
++ if (ring->hop > 0 && ring->hop < start_hop) {
++ dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
++ ret = -EINVAL;
++ goto err_unlock;
++ }
+ if (ring->hop < 0 || ring->hop >= nhi->hop_count) {
+ dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
+ ret = -EINVAL;
+@@ -1097,12 +1118,26 @@ static void nhi_shutdown(struct tb_nhi *nhi)
+
+ static void nhi_check_quirks(struct tb_nhi *nhi)
+ {
+- /*
+- * Intel hardware supports auto clear of the interrupt status
+- * reqister right after interrupt is being issued.
+- */
+- if (nhi->pdev->vendor == PCI_VENDOR_ID_INTEL)
++ if (nhi->pdev->vendor == PCI_VENDOR_ID_INTEL) {
++ /*
++ * Intel hardware supports auto clear of the interrupt
++ * status register right after interrupt is being
++ * issued.
++ */
+ nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
++
++ switch (nhi->pdev->device) {
++ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
++ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
++ /*
++ * Falcon Ridge controller needs the end-to-end
++ * flow control workaround to avoid losing Rx
++ * packets when RING_FLAG_E2E is set.
++ */
++ nhi->quirks |= QUIRK_E2E;
++ break;
++ }
++ }
+ }
+
+ static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 0508da6f63d9e..744a274d108b0 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2822,6 +2822,26 @@ static void tb_switch_credits_init(struct tb_switch *sw)
+ tb_sw_info(sw, "failed to determine preferred buffer allocation, using defaults\n");
+ }
+
++static int tb_switch_port_hotplug_enable(struct tb_switch *sw)
++{
++ struct tb_port *port;
++
++ if (tb_switch_is_icm(sw))
++ return 0;
++
++ tb_switch_for_each_port(sw, port) {
++ int res;
++
++ if (!port->cap_usb4)
++ continue;
++
++ res = usb4_port_hotplug_enable(port);
++ if (res)
++ return res;
++ }
++ return 0;
++}
++
+ /**
+ * tb_switch_add() - Add a switch to the domain
+ * @sw: Switch to add
+@@ -2891,6 +2911,10 @@ int tb_switch_add(struct tb_switch *sw)
+ return ret;
+ }
+
++ ret = tb_switch_port_hotplug_enable(sw);
++ if (ret)
++ return ret;
++
+ ret = device_add(&sw->dev);
+ if (ret) {
+ dev_err(&sw->dev, "failed to add device: %d\n", ret);
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 4602c69913fa0..eef6336bd1663 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -1170,6 +1170,7 @@ int usb4_switch_add_ports(struct tb_switch *sw);
+ void usb4_switch_remove_ports(struct tb_switch *sw);
+
+ int usb4_port_unlock(struct tb_port *port);
++int usb4_port_hotplug_enable(struct tb_port *port);
+ int usb4_port_configure(struct tb_port *port);
+ void usb4_port_unconfigure(struct tb_port *port);
+ int usb4_port_configure_xdomain(struct tb_port *port);
+diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
+index 6a16f61a72a1b..4e0c2a1ccab0c 100644
+--- a/drivers/thunderbolt/tb_regs.h
++++ b/drivers/thunderbolt/tb_regs.h
+@@ -302,6 +302,7 @@ struct tb_regs_port_header {
+ #define ADP_CS_5 0x05
+ #define ADP_CS_5_LCA_MASK GENMASK(28, 22)
+ #define ADP_CS_5_LCA_SHIFT 22
++#define ADP_CS_5_DHP BIT(31)
+
+ /* TMU adapter registers */
+ #define TMU_ADP_CS_3 0x03
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index 3a2e7126db9dc..f0b5a8f1ed3a3 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -1046,6 +1046,26 @@ int usb4_port_unlock(struct tb_port *port)
+ return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
+ }
+
++/**
++ * usb4_port_hotplug_enable() - Enables hotplug for a port
++ * @port: USB4 port to operate on
++ *
++ * Enables hot plug events on a given port. This is only intended
++ * to be used on lane, DP-IN, and DP-OUT adapters.
++ */
++int usb4_port_hotplug_enable(struct tb_port *port)
++{
++ int ret;
++ u32 val;
++
++ ret = tb_port_read(port, &val, TB_CFG_PORT, ADP_CS_5, 1);
++ if (ret)
++ return ret;
++
++ val &= ~ADP_CS_5_DHP;
++ return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_5, 1);
++}
++
+ static int usb4_port_set_configured(struct tb_port *port, bool configured)
+ {
+ int ret;
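
usb4_port_hotplug_enable() above is a plain read-modify-write: read ADP_CS_5, clear the ADP_CS_5_DHP ("disable hotplug") bit, and write the word back so every other bit is preserved. A runnable userspace sketch of the same three-step RMW, with a plain variable standing in for the MMIO register and a hypothetical bit position:

    #include <stdio.h>
    #include <stdint.h>

    #define DHP_BIT (UINT32_C(1) << 31)    /* hypothetical "disable hotplug" bit */

    /* Read-modify-write: clear one bit, preserve all the others. */
    static void enable_hotplug(volatile uint32_t *reg)
    {
        uint32_t val = *reg;    /* read   */
        val &= ~DHP_BIT;        /* modify */
        *reg = val;             /* write  */
    }

    int main(void)
    {
        uint32_t reg = 0x80001234;    /* stands in for the MMIO register */

        enable_hotplug(&reg);
        printf("0x%08x\n", (unsigned)reg);   /* 0x00001234: only bit 31 changed */
        return 0;
    }
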
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 82726cda60663..f05544e93eae1 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -298,10 +298,9 @@ static void serial8250_backup_timeout(struct timer_list *t)
+ jiffies + uart_poll_timeout(&up->port) + HZ / 5);
+ }
+
+-static int univ8250_setup_irq(struct uart_8250_port *up)
++static void univ8250_setup_timer(struct uart_8250_port *up)
+ {
+ struct uart_port *port = &up->port;
+- int retval = 0;
+
+ /*
+ * The above check will only give an accurate result the first time
+@@ -322,10 +321,16 @@ static int univ8250_setup_irq(struct uart_8250_port *up)
+ */
+ if (!port->irq)
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
+- else
+- retval = serial_link_irq_chain(up);
++}
+
+- return retval;
++static int univ8250_setup_irq(struct uart_8250_port *up)
++{
++ struct uart_port *port = &up->port;
++
++ if (port->irq)
++ return serial_link_irq_chain(up);
++
++ return 0;
+ }
+
+ static void univ8250_release_irq(struct uart_8250_port *up)
+@@ -381,6 +386,7 @@ static struct uart_ops univ8250_port_ops;
+ static const struct uart_8250_ops univ8250_driver_ops = {
+ .setup_irq = univ8250_setup_irq,
+ .release_irq = univ8250_release_irq,
++ .setup_timer = univ8250_setup_timer,
+ };
+
+ static struct uart_8250_port serial8250_ports[UART_NR];
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index f6732c1ed2385..defb293958f2a 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1232,6 +1232,10 @@ static void pci_oxsemi_tornado_set_mctrl(struct uart_port *port,
+ serial8250_do_set_mctrl(port, mctrl);
+ }
+
++/*
++ * We require EFR features for clock programming, so set UPF_FULL_PROBE
++ * for full probing regardless of the CONFIG_SERIAL_8250_16550A_VARIANTS setting.
++ */
+ static int pci_oxsemi_tornado_setup(struct serial_private *priv,
+ const struct pciserial_board *board,
+ struct uart_8250_port *up, int idx)
+@@ -1239,6 +1243,7 @@ static int pci_oxsemi_tornado_setup(struct serial_private *priv,
+ struct pci_dev *dev = priv->dev;
+
+ if (pci_oxsemi_tornado_p(dev)) {
++ up->port.flags |= UPF_FULL_PROBE;
+ up->port.get_divisor = pci_oxsemi_tornado_get_divisor;
+ up->port.set_divisor = pci_oxsemi_tornado_set_divisor;
+ up->port.set_mctrl = pci_oxsemi_tornado_set_mctrl;
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 2b86c55ed374e..c66a029882e6b 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1029,7 +1029,8 @@ static void autoconfig_16550a(struct uart_8250_port *up)
+ up->port.type = PORT_16550A;
+ up->capabilities |= UART_CAP_FIFO;
+
+- if (!IS_ENABLED(CONFIG_SERIAL_8250_16550A_VARIANTS))
++ if (!IS_ENABLED(CONFIG_SERIAL_8250_16550A_VARIANTS) &&
++ !(up->port.flags & UPF_FULL_PROBE))
+ return;
+
+ /*
+@@ -2302,6 +2303,10 @@ int serial8250_do_startup(struct uart_port *port)
+ if (port->irq && (up->port.flags & UPF_SHARE_IRQ))
+ up->port.irqflags |= IRQF_SHARED;
+
++ retval = up->ops->setup_irq(up);
++ if (retval)
++ goto out;
++
+ if (port->irq && !(up->port.flags & UPF_NO_THRE_TEST)) {
+ unsigned char iir1;
+
+@@ -2344,9 +2349,7 @@ int serial8250_do_startup(struct uart_port *port)
+ }
+ }
+
+- retval = up->ops->setup_irq(up);
+- if (retval)
+- goto out;
++ up->ops->setup_timer(up);
+
+ /*
+ * Now, initialize the UART
+@@ -3322,8 +3325,13 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ unsigned int baud, quot, frac = 0;
+
+ termios.c_cflag = port->cons->cflag;
+- if (port->state->port.tty && termios.c_cflag == 0)
++ termios.c_ispeed = port->cons->ispeed;
++ termios.c_ospeed = port->cons->ospeed;
++ if (port->state->port.tty && termios.c_cflag == 0) {
+ termios.c_cflag = port->state->port.tty->termios.c_cflag;
++ termios.c_ispeed = port->state->port.tty->termios.c_ispeed;
++ termios.c_ospeed = port->state->port.tty->termios.c_ospeed;
++ }
+
+ baud = serial8250_get_baud_rate(port, &termios, NULL);
+ quot = serial8250_get_divisor(port, baud, &frac);
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index db07d6a5d764d..fa5c4633086e6 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1214,12 +1214,6 @@ static int cpm_uart_init_port(struct device_node *np,
+ pinfo->port.fifosize = pinfo->tx_nrfifos * pinfo->tx_fifosize;
+ spin_lock_init(&pinfo->port.lock);
+
+- pinfo->port.irq = irq_of_parse_and_map(np, 0);
+- if (pinfo->port.irq == NO_IRQ) {
+- ret = -EINVAL;
+- goto out_pram;
+- }
+-
+ for (i = 0; i < NUM_GPIOS; i++) {
+ struct gpio_desc *gpiod;
+
+@@ -1229,7 +1223,7 @@ static int cpm_uart_init_port(struct device_node *np,
+
+ if (IS_ERR(gpiod)) {
+ ret = PTR_ERR(gpiod);
+- goto out_irq;
++ goto out_pram;
+ }
+
+ if (gpiod) {
+@@ -1255,8 +1249,6 @@ static int cpm_uart_init_port(struct device_node *np,
+
+ return cpm_uart_request_port(&pinfo->port);
+
+-out_irq:
+- irq_dispose_mapping(pinfo->port.irq);
+ out_pram:
+ cpm_uart_unmap_pram(pinfo, pram);
+ out_mem:
+@@ -1436,11 +1428,17 @@ static int cpm_uart_probe(struct platform_device *ofdev)
+ /* initialize the device pointer for the port */
+ pinfo->port.dev = &ofdev->dev;
+
++ pinfo->port.irq = irq_of_parse_and_map(ofdev->dev.of_node, 0);
++ if (!pinfo->port.irq)
++ return -EINVAL;
++
+ ret = cpm_uart_init_port(ofdev->dev.of_node, pinfo);
+- if (ret)
+- return ret;
++ if (!ret)
++ return uart_add_one_port(&cpm_reg, &pinfo->port);
+
+- return uart_add_one_port(&cpm_reg, &pinfo->port);
++ irq_dispose_mapping(pinfo->port.irq);
++
++ return ret;
+ }
+
+ static int cpm_uart_remove(struct platform_device *ofdev)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index cb83c66bd8a82..a6471af9653c9 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1768,6 +1768,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ if (sport->lpuart_dma_rx_use) {
+ del_timer_sync(&sport->lpuart_timer);
+ lpuart_dma_rx_free(&sport->port);
++ sport->lpuart_dma_rx_use = false;
+ }
+
+ if (sport->lpuart_dma_tx_use) {
+@@ -1776,6 +1777,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ sport->dma_tx_in_progress = false;
+ dmaengine_terminate_all(sport->dma_tx_chan);
+ }
++ sport->lpuart_dma_tx_use = false;
+ }
+
+ if (sport->dma_tx_chan)
+diff --git a/drivers/tty/serial/jsm/jsm_driver.c b/drivers/tty/serial/jsm/jsm_driver.c
+index 0ea799bf8dbb1..417a5b6bffc34 100644
+--- a/drivers/tty/serial/jsm/jsm_driver.c
++++ b/drivers/tty/serial/jsm/jsm_driver.c
+@@ -211,7 +211,8 @@ static int jsm_probe_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ break;
+ default:
+- return -ENXIO;
++ rc = -ENXIO;
++ goto out_kfree_brd;
+ }
+
+ rc = request_irq(brd->irq, brd->bd_ops->intr, IRQF_SHARED, "JSM", brd);
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 0973b03eeeaa4..aa2d141760b6e 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -62,6 +62,53 @@ static void stm32_usart_clr_bits(struct uart_port *port, u32 reg, u32 bits)
+ writel_relaxed(val, port->membase + reg);
+ }
+
++static unsigned int stm32_usart_tx_empty(struct uart_port *port)
++{
++ struct stm32_port *stm32_port = to_stm32_port(port);
++ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++
++ if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
++ return TIOCSER_TEMT;
++
++ return 0;
++}
++
++static void stm32_usart_rs485_rts_enable(struct uart_port *port)
++{
++ struct stm32_port *stm32_port = to_stm32_port(port);
++ struct serial_rs485 *rs485conf = &port->rs485;
++
++ if (stm32_port->hw_flow_control ||
++ !(rs485conf->flags & SER_RS485_ENABLED))
++ return;
++
++ if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
++ mctrl_gpio_set(stm32_port->gpios,
++ stm32_port->port.mctrl | TIOCM_RTS);
++ } else {
++ mctrl_gpio_set(stm32_port->gpios,
++ stm32_port->port.mctrl & ~TIOCM_RTS);
++ }
++}
++
++static void stm32_usart_rs485_rts_disable(struct uart_port *port)
++{
++ struct stm32_port *stm32_port = to_stm32_port(port);
++ struct serial_rs485 *rs485conf = &port->rs485;
++
++ if (stm32_port->hw_flow_control ||
++ !(rs485conf->flags & SER_RS485_ENABLED))
++ return;
++
++ if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
++ mctrl_gpio_set(stm32_port->gpios,
++ stm32_port->port.mctrl & ~TIOCM_RTS);
++ } else {
++ mctrl_gpio_set(stm32_port->gpios,
++ stm32_port->port.mctrl | TIOCM_RTS);
++ }
++}
++
+ static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
+ u32 delay_DDE, u32 baud)
+ {
+@@ -145,6 +192,12 @@ static int stm32_usart_config_rs485(struct uart_port *port,
+
+ stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+
++ /* Adjust RTS polarity in case it's driven in software */
++ if (stm32_usart_tx_empty(port))
++ stm32_usart_rs485_rts_disable(port);
++ else
++ stm32_usart_rs485_rts_enable(port);
++
+ return 0;
+ }
+
+@@ -460,42 +513,6 @@ static void stm32_usart_tc_interrupt_disable(struct uart_port *port)
+ stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_TCIE);
+ }
+
+-static void stm32_usart_rs485_rts_enable(struct uart_port *port)
+-{
+- struct stm32_port *stm32_port = to_stm32_port(port);
+- struct serial_rs485 *rs485conf = &port->rs485;
+-
+- if (stm32_port->hw_flow_control ||
+- !(rs485conf->flags & SER_RS485_ENABLED))
+- return;
+-
+- if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+- mctrl_gpio_set(stm32_port->gpios,
+- stm32_port->port.mctrl | TIOCM_RTS);
+- } else {
+- mctrl_gpio_set(stm32_port->gpios,
+- stm32_port->port.mctrl & ~TIOCM_RTS);
+- }
+-}
+-
+-static void stm32_usart_rs485_rts_disable(struct uart_port *port)
+-{
+- struct stm32_port *stm32_port = to_stm32_port(port);
+- struct serial_rs485 *rs485conf = &port->rs485;
+-
+- if (stm32_port->hw_flow_control ||
+- !(rs485conf->flags & SER_RS485_ENABLED))
+- return;
+-
+- if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
+- mctrl_gpio_set(stm32_port->gpios,
+- stm32_port->port.mctrl & ~TIOCM_RTS);
+- } else {
+- mctrl_gpio_set(stm32_port->gpios,
+- stm32_port->port.mctrl | TIOCM_RTS);
+- }
+-}
+-
+ static void stm32_usart_transmit_chars_pio(struct uart_port *port)
+ {
+ struct stm32_port *stm32_port = to_stm32_port(port);
+@@ -738,17 +755,6 @@ static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)
+ return IRQ_HANDLED;
+ }
+
+-static unsigned int stm32_usart_tx_empty(struct uart_port *port)
+-{
+- struct stm32_port *stm32_port = to_stm32_port(port);
+- const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-
+- if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
+- return TIOCSER_TEMT;
+-
+- return 0;
+-}
+-
+ static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ {
+ struct stm32_port *stm32_port = to_stm32_port(port);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 9e01fe6c0ab8c..e08d2c3305ba9 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -361,6 +361,8 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
+ }
+
++ isrstatus &= port->read_status_mask;
++ isrstatus &= ~port->ignore_status_mask;
+ /*
+ * Skip RX processing if RX is disabled as RXEMPTY will never be set
+ * as read bytes will not be removed from the FIFO.
+diff --git a/drivers/usb/common/debug.c b/drivers/usb/common/debug.c
+index 075f6b1b2a1a1..f204cec8d380a 100644
+--- a/drivers/usb/common/debug.c
++++ b/drivers/usb/common/debug.c
+@@ -208,30 +208,28 @@ static void usb_decode_set_isoch_delay(__u8 wValue, char *str, size_t size)
+ snprintf(str, size, "Set Isochronous Delay(Delay = %d ns)", wValue);
+ }
+
+-/**
+- * usb_decode_ctrl - Returns human readable representation of control request.
+- * @str: buffer to return a human-readable representation of control request.
+- * This buffer should have about 200 bytes.
+- * @size: size of str buffer.
+- * @bRequestType: matches the USB bmRequestType field
+- * @bRequest: matches the USB bRequest field
+- * @wValue: matches the USB wValue field (CPU byte order)
+- * @wIndex: matches the USB wIndex field (CPU byte order)
+- * @wLength: matches the USB wLength field (CPU byte order)
+- *
+- * Function returns decoded, formatted and human-readable description of
+- * control request packet.
+- *
+- * The usage scenario for this is for tracepoints, so function as a return
+- * use the same value as in parameters. This approach allows to use this
+- * function in TP_printk
+- *
+- * Important: wValue, wIndex, wLength parameters before invoking this function
+- * should be processed by le16_to_cpu macro.
+- */
+-const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
+- __u8 bRequest, __u16 wValue, __u16 wIndex,
+- __u16 wLength)
++static void usb_decode_ctrl_generic(char *str, size_t size, __u8 bRequestType,
++ __u8 bRequest, __u16 wValue, __u16 wIndex,
++ __u16 wLength)
++{
++ u8 recip = bRequestType & USB_RECIP_MASK;
++ u8 type = bRequestType & USB_TYPE_MASK;
++
++ snprintf(str, size,
++ "Type=%s Recipient=%s Dir=%s bRequest=%u wValue=%u wIndex=%u wLength=%u",
++ (type == USB_TYPE_STANDARD) ? "Standard" :
++ (type == USB_TYPE_VENDOR) ? "Vendor" :
++ (type == USB_TYPE_CLASS) ? "Class" : "Unknown",
++ (recip == USB_RECIP_DEVICE) ? "Device" :
++ (recip == USB_RECIP_INTERFACE) ? "Interface" :
++ (recip == USB_RECIP_ENDPOINT) ? "Endpoint" : "Unknown",
++ (bRequestType & USB_DIR_IN) ? "IN" : "OUT",
++ bRequest, wValue, wIndex, wLength);
++}
++
++static void usb_decode_ctrl_standard(char *str, size_t size, __u8 bRequestType,
++ __u8 bRequest, __u16 wValue, __u16 wIndex,
++ __u16 wLength)
+ {
+ switch (bRequest) {
+ case USB_REQ_GET_STATUS:
+@@ -272,14 +270,48 @@ const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
+ usb_decode_set_isoch_delay(wValue, str, size);
+ break;
+ default:
+- snprintf(str, size, "%02x %02x %02x %02x %02x %02x %02x %02x",
+- bRequestType, bRequest,
+- (u8)(cpu_to_le16(wValue) & 0xff),
+- (u8)(cpu_to_le16(wValue) >> 8),
+- (u8)(cpu_to_le16(wIndex) & 0xff),
+- (u8)(cpu_to_le16(wIndex) >> 8),
+- (u8)(cpu_to_le16(wLength) & 0xff),
+- (u8)(cpu_to_le16(wLength) >> 8));
++ usb_decode_ctrl_generic(str, size, bRequestType, bRequest,
++ wValue, wIndex, wLength);
++ break;
++ }
++}
++
++/**
++ * usb_decode_ctrl - Returns human readable representation of control request.
++ * @str: buffer to return a human-readable representation of control request.
++ * This buffer should have about 200 bytes.
++ * @size: size of str buffer.
++ * @bRequestType: matches the USB bmRequestType field
++ * @bRequest: matches the USB bRequest field
++ * @wValue: matches the USB wValue field (CPU byte order)
++ * @wIndex: matches the USB wIndex field (CPU byte order)
++ * @wLength: matches the USB wLength field (CPU byte order)
++ *
++ * Function returns decoded, formatted and human-readable description of
++ * control request packet.
++ *
++ * The intended usage scenario is tracepoints, so the function returns
++ * the same str pointer it was given as a parameter. This approach
++ * allows the function to be used directly in TP_printk().
++ *
++ * Important: the wValue, wIndex and wLength parameters should be passed
++ * through the le16_to_cpu macro before invoking this function.
++ */
++const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
++ __u8 bRequest, __u16 wValue, __u16 wIndex,
++ __u16 wLength)
++{
++ switch (bRequestType & USB_TYPE_MASK) {
++ case USB_TYPE_STANDARD:
++ usb_decode_ctrl_standard(str, size, bRequestType, bRequest,
++ wValue, wIndex, wLength);
++ break;
++ case USB_TYPE_VENDOR:
++ case USB_TYPE_CLASS:
++ default:
++ usb_decode_ctrl_generic(str, size, bRequestType, bRequest,
++ wValue, wIndex, wLength);
++ break;
+ }
+
+ return str;
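
The reworked usb_decode_ctrl() dispatches on the type bits of bmRequestType, so class and vendor requests are decoded generically instead of being misread as standard requests with a hex-dump fallback. A runnable userspace rendition of the same dispatch; the mask values mirror the standard USB bmRequestType encoding (direction in bit 7, type in bits 6:5, recipient in bits 4:0) and are not copied from the kernel headers:

    #include <stdio.h>
    #include <stdint.h>

    #define DIR_IN       0x80    /* bmRequestType bit 7 */
    #define TYPE_MASK    0x60    /* bits 6:5 */
    #define TYPE_STD     0x00
    #define TYPE_CLASS   0x20
    #define TYPE_VENDOR  0x40
    #define RECIP_MASK   0x1f    /* bits 4:0 */

    static void decode(uint8_t bmRequestType, uint8_t bRequest,
                       uint16_t wValue, uint16_t wIndex, uint16_t wLength)
    {
        uint8_t type = bmRequestType & TYPE_MASK;

        printf("Type=%s Dir=%s Recip=%u bRequest=%u wValue=%u wIndex=%u wLength=%u\n",
               type == TYPE_STD ? "Standard" :
               type == TYPE_VENDOR ? "Vendor" :
               type == TYPE_CLASS ? "Class" : "Unknown",
               bmRequestType & DIR_IN ? "IN" : "OUT",
               (unsigned)(bmRequestType & RECIP_MASK),
               (unsigned)bRequest, (unsigned)wValue,
               (unsigned)wIndex, (unsigned)wLength);
    }

    int main(void)
    {
        decode(0x21, 0x20, 0, 0, 7);        /* CDC SET_LINE_CODING: Class, OUT */
        decode(0x80, 0x06, 0x0100, 0, 18);  /* GET_DESCRIPTOR(device): Standard, IN */
        return 0;
    }
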
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index f99a65a64588f..999b7c9697fcd 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -437,6 +437,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x1532, 0x0116), .driver_info =
+ USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+
++ /* Lenovo ThinkPad OneLink+ Dock twin hub controllers (VIA Labs VL812) */
++ { USB_DEVICE(0x17ef, 0x1018), .driver_info = USB_QUIRK_RESET_RESUME },
++ { USB_DEVICE(0x17ef, 0x1019), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ /* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
+ { USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index ebf3afad378ba..21fa2e2795d84 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -407,6 +407,10 @@ static void dwc3_ref_clk_period(struct dwc3 *dwc)
+ reg |= FIELD_PREP(DWC3_GFLADJ_REFCLK_FLADJ_MASK, fladj)
+ | FIELD_PREP(DWC3_GFLADJ_240MHZDECR, decr >> 1)
+ | FIELD_PREP(DWC3_GFLADJ_240MHZDECR_PLS1, decr & 1);
++
++ if (dwc->gfladj_refclk_lpm_sel)
++ reg |= DWC3_GFLADJ_REFCLK_LPM_SEL;
++
+ dwc3_writel(dwc->regs, DWC3_GFLADJ, reg);
+ }
+
+@@ -788,7 +792,7 @@ static int dwc3_phy_setup(struct dwc3 *dwc)
+ else
+ reg |= DWC3_GUSB2PHYCFG_ENBLSLPM;
+
+- if (dwc->dis_u2_freeclk_exists_quirk)
++ if (dwc->dis_u2_freeclk_exists_quirk || dwc->gfladj_refclk_lpm_sel)
+ reg &= ~DWC3_GUSB2PHYCFG_U2_FREECLK_EXISTS;
+
+ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+@@ -1145,6 +1149,21 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ dwc3_writel(dwc->regs, DWC3_GUCTL2, reg);
+ }
+
++ /*
++ * When configured in host mode, the controller fails to send the proper
++ * CRC checksum in the CRC5 field after issuing a U3/L2 exit. Because of
++ * this behaviour a Transaction Error is generated, resulting in a reset
++ * and re-enumeration of the attached USB device. termsel, xcvrsel and
++ * opmode all become 0 at the end of resume. Enabling bit 10 of GUCTL1
++ * corrects this problem. This option is to support certain legacy ULPI
++ * PHYs.
++ */
++ if (dwc->resume_hs_terminations) {
++ reg = dwc3_readl(dwc->regs, DWC3_GUCTL1);
++ reg |= DWC3_GUCTL1_RESUME_OPMODE_HS_HOST;
++ dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
++ }
++
+ if (!DWC3_VER_IS_PRIOR(DWC3, 250A)) {
+ reg = dwc3_readl(dwc->regs, DWC3_GUCTL1);
+
+@@ -1488,8 +1507,12 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ "snps,dis-del-phy-power-chg-quirk");
+ dwc->dis_tx_ipgap_linecheck_quirk = device_property_read_bool(dev,
+ "snps,dis-tx-ipgap-linecheck-quirk");
++ dwc->resume_hs_terminations = device_property_read_bool(dev,
++ "snps,resume-hs-terminations");
+ dwc->parkmode_disable_ss_quirk = device_property_read_bool(dev,
+ "snps,parkmode-disable-ss-quirk");
++ dwc->gfladj_refclk_lpm_sel = device_property_read_bool(dev,
++ "snps,gfladj-refclk-lpm-sel-quirk");
+
+ dwc->tx_de_emphasis_quirk = device_property_read_bool(dev,
+ "snps,tx_de_emphasis_quirk");
+@@ -1678,8 +1701,10 @@ static int dwc3_probe(struct platform_device *pdev)
+ dwc3_get_properties(dwc);
+
+ dwc->reset = devm_reset_control_array_get_optional_shared(dev);
+- if (IS_ERR(dwc->reset))
+- return PTR_ERR(dwc->reset);
++ if (IS_ERR(dwc->reset)) {
++ ret = PTR_ERR(dwc->reset);
++ goto put_usb_psy;
++ }
+
+ if (dev->of_node) {
+ /*
+@@ -1689,45 +1714,57 @@ static int dwc3_probe(struct platform_device *pdev)
+ * check for them to retain backwards compatibility.
+ */
+ dwc->bus_clk = devm_clk_get_optional(dev, "bus_early");
+- if (IS_ERR(dwc->bus_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->bus_clk),
+- "could not get bus clock\n");
++ if (IS_ERR(dwc->bus_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->bus_clk),
++ "could not get bus clock\n");
++ goto put_usb_psy;
++ }
+
+ if (dwc->bus_clk == NULL) {
+ dwc->bus_clk = devm_clk_get_optional(dev, "bus_clk");
+- if (IS_ERR(dwc->bus_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->bus_clk),
+- "could not get bus clock\n");
++ if (IS_ERR(dwc->bus_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->bus_clk),
++ "could not get bus clock\n");
++ goto put_usb_psy;
++ }
+ }
+
+ dwc->ref_clk = devm_clk_get_optional(dev, "ref");
+- if (IS_ERR(dwc->ref_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->ref_clk),
+- "could not get ref clock\n");
++ if (IS_ERR(dwc->ref_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->ref_clk),
++ "could not get ref clock\n");
++ goto put_usb_psy;
++ }
+
+ if (dwc->ref_clk == NULL) {
+ dwc->ref_clk = devm_clk_get_optional(dev, "ref_clk");
+- if (IS_ERR(dwc->ref_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->ref_clk),
+- "could not get ref clock\n");
++ if (IS_ERR(dwc->ref_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->ref_clk),
++ "could not get ref clock\n");
++ goto put_usb_psy;
++ }
+ }
+
+ dwc->susp_clk = devm_clk_get_optional(dev, "suspend");
+- if (IS_ERR(dwc->susp_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->susp_clk),
+- "could not get suspend clock\n");
++ if (IS_ERR(dwc->susp_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->susp_clk),
++ "could not get suspend clock\n");
++ goto put_usb_psy;
++ }
+
+ if (dwc->susp_clk == NULL) {
+ dwc->susp_clk = devm_clk_get_optional(dev, "suspend_clk");
+- if (IS_ERR(dwc->susp_clk))
+- return dev_err_probe(dev, PTR_ERR(dwc->susp_clk),
+- "could not get suspend clock\n");
++ if (IS_ERR(dwc->susp_clk)) {
++ ret = dev_err_probe(dev, PTR_ERR(dwc->susp_clk),
++ "could not get suspend clock\n");
++ goto put_usb_psy;
++ }
+ }
+ }
+
+ ret = reset_control_deassert(dwc->reset);
+ if (ret)
+- return ret;
++ goto put_usb_psy;
+
+ ret = dwc3_clk_enable(dwc);
+ if (ret)
+@@ -1827,7 +1864,7 @@ disable_clks:
+ dwc3_clk_disable(dwc);
+ assert_reset:
+ reset_control_assert(dwc->reset);
+-
++put_usb_psy:
+ if (dwc->usb_psy)
+ power_supply_put(dwc->usb_psy);
+
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 81c486b3941ce..b9fa0fa5ba7c0 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -262,6 +262,7 @@
+ #define DWC3_GUCTL1_DEV_FORCE_20_CLK_FOR_30_CLK BIT(26)
+ #define DWC3_GUCTL1_DEV_L1_EXIT_BY_HW BIT(24)
+ #define DWC3_GUCTL1_PARKMODE_DISABLE_SS BIT(17)
++#define DWC3_GUCTL1_RESUME_OPMODE_HS_HOST BIT(10)
+
+ /* Global Status Register */
+ #define DWC3_GSTS_OTG_IP BIT(10)
+@@ -390,6 +391,7 @@
+ #define DWC3_GFLADJ_30MHZ_SDBND_SEL BIT(7)
+ #define DWC3_GFLADJ_30MHZ_MASK 0x3f
+ #define DWC3_GFLADJ_REFCLK_FLADJ_MASK GENMASK(21, 8)
++#define DWC3_GFLADJ_REFCLK_LPM_SEL BIT(23)
+ #define DWC3_GFLADJ_240MHZDECR GENMASK(30, 24)
+ #define DWC3_GFLADJ_240MHZDECR_PLS1 BIT(31)
+
+@@ -1093,6 +1095,8 @@ struct dwc3_scratchpad_array {
+ * change quirk.
+ * @dis_tx_ipgap_linecheck_quirk: set if we disable u2mac linestate
+ * check during HS transmit.
++ * @resume_hs_terminations: set if we enable the quirk for fixing improper
++ * CRC generation after resume from suspend.
+ * @parkmode_disable_ss_quirk: set if we need to disable all SuperSpeed
+ * instances in park mode.
+ * @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk
+@@ -1308,7 +1312,9 @@ struct dwc3 {
+ unsigned dis_u2_freeclk_exists_quirk:1;
+ unsigned dis_del_phy_power_chg_quirk:1;
+ unsigned dis_tx_ipgap_linecheck_quirk:1;
++ unsigned resume_hs_terminations:1;
+ unsigned parkmode_disable_ss_quirk:1;
++ unsigned gfladj_refclk_lpm_sel:1;
+
+ unsigned tx_de_emphasis_quirk:1;
+ unsigned tx_de_emphasis:2;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index e0fa4b186ec6d..36184a7625273 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -2645,10 +2645,10 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
+ unsigned i = 0;
+ vla_group(d);
+ vla_item(d, struct usb_gadget_strings *, stringtabs,
+- lang_count + 1);
++ size_add(lang_count, 1));
+ vla_item(d, struct usb_gadget_strings, stringtab, lang_count);
+ vla_item(d, struct usb_string, strings,
+- lang_count*(needed_count+1));
++ size_mul(lang_count, (needed_count + 1)));
+
+ char *vlabuf = kmalloc(vla_group_size(d), GFP_KERNEL);
+
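
The f_fs hunk above replaces open-coded `lang_count + 1` and `lang_count * (needed_count + 1)` with size_add()/size_mul() from <linux/overflow.h>, which saturate at SIZE_MAX on overflow so the subsequent kmalloc() fails cleanly rather than allocating an undersized buffer that later writes would overrun. A userspace approximation using the compiler overflow builtins; the helper names are made up to mirror the kernel's:

    #include <stdint.h>
    #include <stddef.h>

    /* Saturating helpers in the spirit of the kernel's size_add()/size_mul(). */
    static size_t size_add_sat(size_t a, size_t b)
    {
        size_t r;

        if (__builtin_add_overflow(a, b, &r))
            return SIZE_MAX;    /* saturate: a later malloc(SIZE_MAX) fails */
        return r;
    }

    static size_t size_mul_sat(size_t a, size_t b)
    {
        size_t r;

        if (__builtin_mul_overflow(a, b, &r))
            return SIZE_MAX;
        return r;
    }

    int main(void)
    {
        /* Wrapping arithmetic would yield a small number here; this saturates. */
        return size_mul_sat(SIZE_MAX / 2, 3) == SIZE_MAX ? 0 : 1;
    }
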
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index abec5c58f5251..a881c69b1f2bf 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -89,7 +89,7 @@ struct printer_dev {
+ u8 printer_cdev_open;
+ wait_queue_head_t wait;
+ unsigned q_len;
+- char *pnp_string; /* We don't own memory! */
++ char **pnp_string; /* We don't own memory! */
+ struct usb_function function;
+ };
+
+@@ -1000,16 +1000,16 @@ static int printer_func_setup(struct usb_function *f,
+ if ((wIndex>>8) != dev->interface)
+ break;
+
+- if (!dev->pnp_string) {
++ if (!*dev->pnp_string) {
+ value = 0;
+ break;
+ }
+- value = strlen(dev->pnp_string);
++ value = strlen(*dev->pnp_string);
+ buf[0] = (value >> 8) & 0xFF;
+ buf[1] = value & 0xFF;
+- memcpy(buf + 2, dev->pnp_string, value);
++ memcpy(buf + 2, *dev->pnp_string, value);
+ DBG(dev, "1284 PNP String: %x %s\n", value,
+- dev->pnp_string);
++ *dev->pnp_string);
+ break;
+
+ case GET_PORT_STATUS: /* Get Port Status */
+@@ -1475,7 +1475,7 @@ static struct usb_function *gprinter_alloc(struct usb_function_instance *fi)
+ kref_init(&dev->kref);
+ ++opts->refcnt;
+ dev->minor = opts->minor;
+- dev->pnp_string = opts->pnp_string;
++ dev->pnp_string = &opts->pnp_string;
+ dev->q_len = opts->q_len;
+ mutex_unlock(&opts->lock);
+
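
The f_printer change swaps a cached `char *` for a `char **`: the old code snapshotted opts->pnp_string at bind time, so a PNP string written through configfs afterwards was never seen by printer_func_setup(). Dereferencing a pointer-to-pointer picks up the current value on every use. A runnable demonstration of the difference:

    #include <stdio.h>

    int main(void)
    {
        char *setting = "old";
        char *snapshot = setting;    /* copies the pointer value once */
        char **live = &setting;      /* follows every later update */

        setting = "new";             /* e.g. userspace rewrites the option */

        printf("snapshot: %s\n", snapshot);    /* still "old" */
        printf("live:     %s\n", *live);       /* "new" */
        return 0;
    }
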
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 71669e0e4d007..7ec223849d949 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -421,7 +421,7 @@ uvc_register_video(struct uvc_device *uvc)
+ int ret;
+
+ /* TODO reference counting. */
+- memset(&uvc->vdev, 0, sizeof(uvc->video));
++ memset(&uvc->vdev, 0, sizeof(uvc->vdev));
+ uvc->vdev.v4l2_dev = &uvc->v4l2_dev;
+ uvc->vdev.v4l2_dev->dev = &cdev->gadget->dev;
+ uvc->vdev.fops = &uvc_v4l2_fops;
+@@ -897,10 +897,14 @@ static void uvc_function_unbind(struct usb_configuration *c,
+ {
+ struct usb_composite_dev *cdev = c->cdev;
+ struct uvc_device *uvc = to_uvc(f);
++ struct uvc_video *video = &uvc->video;
+ long wait_ret = 1;
+
+ uvcg_info(f, "%s()\n", __func__);
+
++ if (video->async_wq)
++ destroy_workqueue(video->async_wq);
++
+ /*
+ * If we know we're connected via v4l2, then there should be a cleanup
+ * of the device from userspace either via UVC_EVENT_DISCONNECT or
+diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
+index 58e383afdd440..1a31e6c6a5ffb 100644
+--- a/drivers/usb/gadget/function/uvc.h
++++ b/drivers/usb/gadget/function/uvc.h
+@@ -88,6 +88,7 @@ struct uvc_video {
+ struct usb_ep *ep;
+
+ struct work_struct pump;
++ struct workqueue_struct *async_wq;
+
+ /* Frame parameters */
+ u8 bpp;
+diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
+index fd8f73bb726dd..fddc392b8ab95 100644
+--- a/drivers/usb/gadget/function/uvc_v4l2.c
++++ b/drivers/usb/gadget/function/uvc_v4l2.c
+@@ -170,7 +170,7 @@ uvc_v4l2_qbuf(struct file *file, void *fh, struct v4l2_buffer *b)
+ return ret;
+
+ if (uvc->state == UVC_STATE_STREAMING)
+- schedule_work(&video->pump);
++ queue_work(video->async_wq, &video->pump);
+
+ return ret;
+ }
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index c00ce0e91f5d5..bb037fcc90e69 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -277,7 +277,7 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ spin_unlock_irqrestore(&video->req_lock, flags);
+
+ if (uvc->state == UVC_STATE_STREAMING)
+- schedule_work(&video->pump);
++ queue_work(video->async_wq, &video->pump);
+ }
+
+ static int
+@@ -485,7 +485,7 @@ int uvcg_video_enable(struct uvc_video *video, int enable)
+
+ video->req_int_count = 0;
+
+- schedule_work(&video->pump);
++ queue_work(video->async_wq, &video->pump);
+
+ return ret;
+ }
+@@ -499,6 +499,11 @@ int uvcg_video_init(struct uvc_video *video, struct uvc_device *uvc)
+ spin_lock_init(&video->req_lock);
+ INIT_WORK(&video->pump, uvcg_video_pump);
+
++ /* Allocate a work queue for the asynchronous video pump handler. */
++ video->async_wq = alloc_workqueue("uvcgadget", WQ_UNBOUND | WQ_HIGHPRI, 0);
++ if (!video->async_wq)
++ return -EINVAL;
++
+ video->uvc = uvc;
+ video->fcc = V4L2_PIX_FMT_YUYV;
+ video->bpp = 16;
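
The UVC gadget hunks move the video pump work item from the shared system workqueue (schedule_work()) onto a dedicated workqueue created with WQ_UNBOUND | WQ_HIGHPRI, so a busy pump neither stalls unrelated system work nor stays pinned to the submitting CPU. A minimal kernel-style sketch of that lifecycle, with hypothetical names:

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    struct pump_ctx {
        struct workqueue_struct *wq;
        struct work_struct work;
    };

    static void pump_fn(struct work_struct *w)
    {
        /* ... move one burst of data ... */
    }

    static int pump_init(struct pump_ctx *p)
    {
        INIT_WORK(&p->work, pump_fn);
        /* unbound + high priority: not tied to the queueing CPU */
        p->wq = alloc_workqueue("pump", WQ_UNBOUND | WQ_HIGHPRI, 0);
        return p->wq ? 0 : -ENOMEM;
    }

    static void pump_kick(struct pump_ctx *p)
    {
        queue_work(p->wq, &p->work);    /* instead of schedule_work() */
    }

    static void pump_exit(struct pump_ctx *p)
    {
        destroy_workqueue(p->wq);       /* drains pending work first */
    }

Note the matching teardown: the uvc_function_unbind() hunk calls destroy_workqueue() so no pump work can run after the function is gone.
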
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index e61155fa63796..f1367b53b2600 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -988,7 +988,7 @@ xhci_alloc_dbc(struct device *dev, void __iomem *base, const struct dbc_driver *
+ dbc->driver = driver;
+
+ if (readl(&dbc->regs->control) & DBC_CTRL_DBC_ENABLE)
+- return NULL;
++ goto err;
+
+ INIT_DELAYED_WORK(&dbc->event_work, xhci_dbc_handle_events);
+ spin_lock_init(&dbc->lock);
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 8c19e151a9454..9e56aa28efcd4 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -641,7 +641,7 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
+ num_stream_ctxs, &stream_info->ctx_array_dma,
+ mem_flags);
+ if (!stream_info->stream_ctx_array)
+- goto cleanup_ctx;
++ goto cleanup_ring_array;
+ memset(stream_info->stream_ctx_array, 0,
+ sizeof(struct xhci_stream_ctx)*num_stream_ctxs);
+
+@@ -702,6 +702,11 @@ cleanup_rings:
+ }
+ xhci_free_command(xhci, stream_info->free_streams_command);
+ cleanup_ctx:
++ xhci_free_stream_ctx(xhci,
++ stream_info->num_stream_ctxs,
++ stream_info->stream_ctx_array,
++ stream_info->ctx_array_dma);
++cleanup_ring_array:
+ kfree(stream_info->stream_rings);
+ cleanup_info:
+ kfree(stream_info);
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index a8641b6536eea..5fb55bf194931 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -123,7 +123,7 @@ static const struct xhci_plat_priv xhci_plat_renesas_rcar_gen3 = {
+ };
+
+ static const struct xhci_plat_priv xhci_plat_brcm = {
+- .quirks = XHCI_RESET_ON_RESUME,
++ .quirks = XHCI_RESET_ON_RESUME | XHCI_SUSPEND_RESUME_CLKS,
+ };
+
+ static const struct of_device_id usb_xhci_of_match[] = {
+@@ -437,7 +437,16 @@ static int __maybe_unused xhci_plat_suspend(struct device *dev)
+ * xhci_suspend() needs `do_wakeup` to know whether host is allowed
+ * to do wakeup during suspend.
+ */
+- return xhci_suspend(xhci, device_may_wakeup(dev));
++ ret = xhci_suspend(xhci, device_may_wakeup(dev));
++ if (ret)
++ return ret;
++
++ if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
++ clk_disable_unprepare(xhci->clk);
++ clk_disable_unprepare(xhci->reg_clk);
++ }
++
++ return 0;
+ }
+
+ static int __maybe_unused xhci_plat_resume(struct device *dev)
+@@ -446,6 +455,11 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ int ret;
+
++ if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
++ clk_prepare_enable(xhci->clk);
++ clk_prepare_enable(xhci->reg_clk);
++ }
++
+ ret = xhci_priv_resume_quirk(hcd);
+ if (ret)
+ return ret;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 38649284ff889..a7ef675f00fdd 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1183,7 +1183,8 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ /* re-initialize the HC on Restore Error, or Host Controller Error */
+ if (temp & (STS_SRE | STS_HCE)) {
+ reinit_xhc = true;
+- xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
++ if (!xhci->broken_suspend)
++ xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
+ }
+
+ if (reinit_xhc) {
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 7caa0db5e826d..6dfbf73ee840d 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1899,6 +1899,7 @@ struct xhci_hcd {
+ #define XHCI_NO_SOFT_RETRY BIT_ULL(40)
+ #define XHCI_BROKEN_D3COLD BIT_ULL(41)
+ #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42)
++#define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+diff --git a/drivers/usb/misc/idmouse.c b/drivers/usb/misc/idmouse.c
+index e9437a176518a..ea39243efee39 100644
+--- a/drivers/usb/misc/idmouse.c
++++ b/drivers/usb/misc/idmouse.c
+@@ -177,10 +177,6 @@ static int idmouse_create_image(struct usb_idmouse *dev)
+ bytes_read += bulk_read;
+ }
+
+- /* reset the device */
+-reset:
+- ftip_command(dev, FTIP_RELEASE, 0, 0);
+-
+ /* check for valid image */
+ /* right border should be black (0x00) */
+ for (bytes_read = sizeof(HEADER)-1 + WIDTH-1; bytes_read < IMGSIZE; bytes_read += WIDTH)
+@@ -192,6 +188,10 @@ reset:
+ if (dev->bulk_in_buffer[bytes_read] != 0xFF)
+ return -EAGAIN;
+
++ /* reset the device */
++reset:
++ ftip_command(dev, FTIP_RELEASE, 0, 0);
++
+ /* should be IMGSIZE == 65040 */
+ dev_dbg(&dev->interface->dev, "read %d bytes fingerprint data\n",
+ bytes_read);
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index c4a2c37abf628..3ea5145a842b1 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -971,8 +971,6 @@ int ssusb_gadget_init(struct ssusb_mtk *ssusb)
+ goto irq_err;
+ }
+
+- device_init_wakeup(dev, true);
+-
+ /* power down device IP for power saving by default */
+ mtu3_stop(mtu);
+
+diff --git a/drivers/usb/mtu3/mtu3_plat.c b/drivers/usb/mtu3/mtu3_plat.c
+index 4309ed939178a..845b25320fd28 100644
+--- a/drivers/usb/mtu3/mtu3_plat.c
++++ b/drivers/usb/mtu3/mtu3_plat.c
+@@ -332,6 +332,8 @@ static int mtu3_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+ pm_runtime_get_sync(dev);
+
++ device_init_wakeup(dev, true);
++
+ ret = ssusb_rscs_init(ssusb);
+ if (ret)
+ goto comm_init_err;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index 51274b87f46c9..dc67fff8e9418 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -760,6 +760,9 @@ static void rxstate(struct musb *musb, struct musb_request *req)
+ musb_writew(epio, MUSB_RXCSR, csr);
+
+ buffer_aint_mapped:
++ fifo_count = min_t(unsigned int,
++ request->length - request->actual,
++ (unsigned int)fifo_count);
+ musb_read_fifo(musb_ep->hw_ep, fifo_count, (u8 *)
+ (request->buf + request->actual));
+ request->actual += fifo_count;
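
The musb hunk clamps the hardware-reported FIFO count to the space remaining in the request buffer before copying, so a count larger than `request->length - request->actual` can no longer overrun request->buf. A small userspace sketch of the same guard, with generic names:

    #include <stdint.h>
    #include <string.h>

    /* Copy at most the space left in dst, whatever count the hardware reports. */
    static size_t bounded_copy(uint8_t *dst, size_t dst_len, size_t used,
                               const uint8_t *src, size_t hw_count)
    {
        size_t n = dst_len - used;    /* caller guarantees used <= dst_len */

        if (hw_count < n)             /* min(remaining space, hardware count) */
            n = hw_count;
        memcpy(dst + used, src, n);
        return n;
    }
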
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 4993227ab2930..20dcbccb290b3 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -1275,12 +1275,6 @@ UNUSUAL_DEV( 0x090a, 0x1200, 0x0000, 0x9999,
+ USB_SC_RBC, USB_PR_BULK, NULL,
+ 0 ),
+
+-UNUSUAL_DEV(0x090c, 0x1000, 0x1100, 0x1100,
+- "Samsung",
+- "Flash Drive FIT",
+- USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+- US_FL_MAX_SECTORS_64),
+-
+ /* aeb */
+ UNUSUAL_DEV( 0x090c, 0x1132, 0x0000, 0xffff,
+ "Feiya",
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 6364f0d467ea3..74fb5a4c6f21b 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1067,11 +1067,9 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+
+ cap->fwnode = ucsi_find_fwnode(con);
+ con->usb_role_sw = fwnode_usb_role_switch_get(cap->fwnode);
+- if (IS_ERR(con->usb_role_sw)) {
+- dev_err(ucsi->dev, "con%d: failed to get usb role switch\n",
+- con->num);
+- return PTR_ERR(con->usb_role_sw);
+- }
++ if (IS_ERR(con->usb_role_sw))
++ return dev_err_probe(ucsi->dev, PTR_ERR(con->usb_role_sw),
++ "con%d: failed to get usb role switch\n", con->num);
+
+ /* Delay other interactions with the con until registration is complete */
+ mutex_lock(&con->lock);
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 368330417bde2..5703775af1297 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -393,7 +393,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
+ return NULL;
+ }
+
+- pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
++ pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);
+ if (!pkt->buf) {
+ kfree(pkt);
+ return NULL;
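
In the vhost-vsock hunk, pkt->len is guest-controlled, and large kmalloc() requests for physically contiguous memory can fail under fragmentation; kvmalloc() transparently falls back to vmalloc() for big sizes. The one obligation it adds is that the buffer must be released with kvfree(), which handles both cases. A kernel-style sketch:

    #include <linux/slab.h>

    static void *alloc_pkt_buf(size_t len)
    {
        /* kmalloc for small sizes, transparent vmalloc fallback for large */
        return kvmalloc(len, GFP_KERNEL);
    }

    static void free_pkt_buf(void *buf)
    {
        kvfree(buf);    /* correct for both kmalloc- and vmalloc-backed memory */
    }
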
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index d7aa5511c3617..e65bdc499c236 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -137,6 +137,8 @@ static int ufx_submit_urb(struct ufx_data *dev, struct urb * urb, size_t len);
+ static int ufx_alloc_urb_list(struct ufx_data *dev, int count, size_t size);
+ static void ufx_free_urb_list(struct ufx_data *dev);
+
++static DEFINE_MUTEX(disconnect_mutex);
++
+ /* reads a control register */
+ static int ufx_reg_read(struct ufx_data *dev, u32 index, u32 *data)
+ {
+@@ -1071,9 +1073,13 @@ static int ufx_ops_open(struct fb_info *info, int user)
+ if (user == 0 && !console)
+ return -EBUSY;
+
++ mutex_lock(&disconnect_mutex);
++
+ /* If the USB device is gone, we don't accept new opens */
+- if (dev->virtualized)
++ if (dev->virtualized) {
++ mutex_unlock(&disconnect_mutex);
+ return -ENODEV;
++ }
+
+ dev->fb_count++;
+
+@@ -1097,6 +1103,8 @@ static int ufx_ops_open(struct fb_info *info, int user)
+ pr_debug("open /dev/fb%d user=%d fb_info=%p count=%d",
+ info->node, user, info, dev->fb_count);
+
++ mutex_unlock(&disconnect_mutex);
++
+ return 0;
+ }
+
+@@ -1741,6 +1749,8 @@ static void ufx_usb_disconnect(struct usb_interface *interface)
+ {
+ struct ufx_data *dev;
+
++ mutex_lock(&disconnect_mutex);
++
+ dev = usb_get_intfdata(interface);
+
+ pr_debug("USB disconnect starting\n");
+@@ -1761,6 +1771,8 @@ static void ufx_usb_disconnect(struct usb_interface *interface)
+ kref_put(&dev->kref, ufx_free);
+
+ /* consider ufx_data freed */
++
++ mutex_unlock(&disconnect_mutex);
+ }
+
+ static struct usb_driver ufx_driver = {
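
The smscufx change serializes fbdev open against USB disconnect with a file-scope mutex: an open either completes before teardown starts, or it observes dev->virtualized under the lock and fails with -ENODEV, closing the window where disconnect could free state an in-flight open was still using. A condensed kernel-style sketch of the pattern, with generic names:

    #include <linux/errno.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(teardown_mutex);
    static bool device_gone;

    static int dev_open(void)
    {
        int ret = 0;

        mutex_lock(&teardown_mutex);
        if (device_gone)
            ret = -ENODEV;    /* already unplugged: refuse new opens */
        /* else: take references and set up state under the lock */
        mutex_unlock(&teardown_mutex);
        return ret;
    }

    static void dev_disconnect(void)
    {
        mutex_lock(&teardown_mutex);
        device_gone = true;    /* no open can race with the teardown below */
        /* ... tear down state, drop references ... */
        mutex_unlock(&teardown_mutex);
    }
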
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index 38a861e22c339..7753e586e65a0 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1298,7 +1298,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+
+ /* limit fbsize to max visible screen size */
+ if (fix->smem_len > yres*fix->line_length)
+- fix->smem_len = yres*fix->line_length;
++ fix->smem_len = ALIGN(yres*fix->line_length, 4*1024*1024);
+
+ fix->accel = FB_ACCEL_NONE;
+
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 40ef379c28ab0..9c286b2a19001 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -44,9 +44,10 @@ struct gntdev_unmap_notify {
+ };
+
+ struct gntdev_grant_map {
++ atomic_t in_use;
+ struct mmu_interval_notifier notifier;
++ bool notifier_init;
+ struct list_head next;
+- struct vm_area_struct *vma;
+ int index;
+ int count;
+ int flags;
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 84b143eef395b..4d9a3050de6a3 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -286,6 +286,9 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ */
+ }
+
++ if (use_ptemod && map->notifier_init)
++ mmu_interval_notifier_remove(&map->notifier);
++
+ if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
+ notify_remote_via_evtchn(map->notify.event);
+ evtchn_put(map->notify.event);
+@@ -298,7 +301,7 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
+ {
+ struct gntdev_grant_map *map = data;
+- unsigned int pgnr = (addr - map->vma->vm_start) >> PAGE_SHIFT;
++ unsigned int pgnr = (addr - map->pages_vm_start) >> PAGE_SHIFT;
+ int flags = map->flags | GNTMAP_application_map | GNTMAP_contains_pte |
+ (1 << _GNTMAP_guest_avail0);
+ u64 pte_maddr;
+@@ -367,8 +370,7 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+ for (i = 0; i < map->count; i++) {
+ if (map->map_ops[i].status == GNTST_okay) {
+ map->unmap_ops[i].handle = map->map_ops[i].handle;
+- if (!use_ptemod)
+- alloced++;
++ alloced++;
+ } else if (!err)
+ err = -EINVAL;
+
+@@ -377,8 +379,7 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
+
+ if (use_ptemod) {
+ if (map->kmap_ops[i].status == GNTST_okay) {
+- if (map->map_ops[i].status == GNTST_okay)
+- alloced++;
++ alloced++;
+ map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+ } else if (!err)
+ err = -EINVAL;
+@@ -394,8 +395,14 @@ static void __unmap_grant_pages_done(int result,
+ unsigned int i;
+ struct gntdev_grant_map *map = data->data;
+ unsigned int offset = data->unmap_ops - map->unmap_ops;
++ int successful_unmaps = 0;
++ int live_grants;
+
+ for (i = 0; i < data->count; i++) {
++ if (map->unmap_ops[offset + i].status == GNTST_okay &&
++ map->unmap_ops[offset + i].handle != INVALID_GRANT_HANDLE)
++ successful_unmaps++;
++
+ WARN_ON(map->unmap_ops[offset + i].status != GNTST_okay &&
+ map->unmap_ops[offset + i].handle != INVALID_GRANT_HANDLE);
+ pr_debug("unmap handle=%d st=%d\n",
+@@ -403,6 +410,10 @@ static void __unmap_grant_pages_done(int result,
+ map->unmap_ops[offset+i].status);
+ map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
+ if (use_ptemod) {
++ if (map->kunmap_ops[offset + i].status == GNTST_okay &&
++ map->kunmap_ops[offset + i].handle != INVALID_GRANT_HANDLE)
++ successful_unmaps++;
++
+ WARN_ON(map->kunmap_ops[offset + i].status != GNTST_okay &&
+ map->kunmap_ops[offset + i].handle != INVALID_GRANT_HANDLE);
+ pr_debug("kunmap handle=%u st=%d\n",
+@@ -411,11 +422,15 @@ static void __unmap_grant_pages_done(int result,
+ map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
+ }
+ }
++
+ /*
+ * Decrease the live-grant counter. This must happen after the loop to
+ * prevent premature reuse of the grants by gnttab_mmap().
+ */
+- atomic_sub(data->count, &map->live_grants);
++ live_grants = atomic_sub_return(successful_unmaps, &map->live_grants);
++ if (WARN_ON(live_grants < 0))
++ pr_err("%s: live_grants became negative (%d) after unmapping %d pages!\n",
++ __func__, live_grants, successful_unmaps);
+
+ /* Release reference taken by __unmap_grant_pages */
+ gntdev_put_map(NULL, map);
+@@ -496,11 +511,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
+ struct gntdev_priv *priv = file->private_data;
+
+ pr_debug("gntdev_vma_close %p\n", vma);
+- if (use_ptemod) {
+- WARN_ON(map->vma != vma);
+- mmu_interval_notifier_remove(&map->notifier);
+- map->vma = NULL;
+- }
++
+ vma->vm_private_data = NULL;
+ gntdev_put_map(priv, map);
+ }
+@@ -528,29 +539,30 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
+ struct gntdev_grant_map *map =
+ container_of(mn, struct gntdev_grant_map, notifier);
+ unsigned long mstart, mend;
++ unsigned long map_start, map_end;
+
+ if (!mmu_notifier_range_blockable(range))
+ return false;
+
++ map_start = map->pages_vm_start;
++ map_end = map->pages_vm_start + (map->count << PAGE_SHIFT);
++
+ /*
+ * If the VMA is split or otherwise changed the notifier is not
+ * updated, but we don't want to process VA's outside the modified
+ * VMA. FIXME: It would be much more understandable to just prevent
+ * modifying the VMA in the first place.
+ */
+- if (map->vma->vm_start >= range->end ||
+- map->vma->vm_end <= range->start)
++ if (map_start >= range->end || map_end <= range->start)
+ return true;
+
+- mstart = max(range->start, map->vma->vm_start);
+- mend = min(range->end, map->vma->vm_end);
++ mstart = max(range->start, map_start);
++ mend = min(range->end, map_end);
+ pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n",
+- map->index, map->count,
+- map->vma->vm_start, map->vma->vm_end,
+- range->start, range->end, mstart, mend);
+- unmap_grant_pages(map,
+- (mstart - map->vma->vm_start) >> PAGE_SHIFT,
+- (mend - mstart) >> PAGE_SHIFT);
++ map->index, map->count, map_start, map_end,
++ range->start, range->end, mstart, mend);
++ unmap_grant_pages(map, (mstart - map_start) >> PAGE_SHIFT,
++ (mend - mstart) >> PAGE_SHIFT);
+
+ return true;
+ }
+@@ -1030,18 +1042,15 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ return -EINVAL;
+
+ pr_debug("map %d+%d at %lx (pgoff %lx)\n",
+- index, count, vma->vm_start, vma->vm_pgoff);
++ index, count, vma->vm_start, vma->vm_pgoff);
+
+ mutex_lock(&priv->lock);
+ map = gntdev_find_map_index(priv, index, count);
+ if (!map)
+ goto unlock_out;
+- if (use_ptemod && map->vma)
++ if (!atomic_add_unless(&map->in_use, 1, 1))
+ goto unlock_out;
+- if (atomic_read(&map->live_grants)) {
+- err = -EAGAIN;
+- goto unlock_out;
+- }
++
+ refcount_inc(&map->users);
+
+ vma->vm_ops = &gntdev_vmops;
+@@ -1062,15 +1071,16 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ map->flags |= GNTMAP_readonly;
+ }
+
++ map->pages_vm_start = vma->vm_start;
++
+ if (use_ptemod) {
+- map->vma = vma;
+ err = mmu_interval_notifier_insert_locked(
+ &map->notifier, vma->vm_mm, vma->vm_start,
+ vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
+- if (err) {
+- map->vma = NULL;
++ if (err)
+ goto out_unlock_put;
+- }
++
++ map->notifier_init = true;
+ }
+ mutex_unlock(&priv->lock);
+
+@@ -1087,7 +1097,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ */
+ mmu_interval_read_begin(&map->notifier);
+
+- map->pages_vm_start = vma->vm_start;
+ err = apply_to_page_range(vma->vm_mm, vma->vm_start,
+ vma->vm_end - vma->vm_start,
+ find_grant_ptes, map);
+@@ -1116,13 +1125,8 @@ unlock_out:
+ out_unlock_put:
+ mutex_unlock(&priv->lock);
+ out_put_map:
+- if (use_ptemod) {
++ if (use_ptemod)
+ unmap_grant_pages(map, 0, map->count);
+- if (map->vma) {
+- mmu_interval_notifier_remove(&map->notifier);
+- map->vma = NULL;
+- }
+- }
+ gntdev_put_map(priv, map);
+ return err;
+ }
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index a8ecd83abb11e..bcf9952813d48 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2191,7 +2191,16 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
+ int need_clear = 0;
+ u64 cache_gen;
+
+- if (!root)
++ /*
++ * Either no extent root (with ibadroots rescue option) or we have
++ * unsupported RO options. The fs can never be mounted read-write, so no
++ * need to waste time searching block group items.
++ *
++ * This also allows new extent tree related changes to be RO compat,
++ * no need for a full incompat flag.
++ */
++ if (!root || (btrfs_super_compat_ro_flags(info->super_copy) &
++ ~BTRFS_FEATURE_COMPAT_RO_SUPP))
+ return fill_dummy_bgs(info);
+
+ key.objectid = 0;
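
The new btrfs_read_block_groups() check masks the on-disk compat_ro flags
with the complement of BTRFS_FEATURE_COMPAT_RO_SUPP, so any single unknown
bit forces the dummy/read-only path. A tiny standalone version of that
feature-flag gating, with invented flag names:

#include <stdint.h>
#include <stdio.h>

#define FEATURE_A          (1ULL << 0)
#define FEATURE_B          (1ULL << 1)
#define SUPPORTED_RO_FLAGS (FEATURE_A | FEATURE_B)

static int read_block_groups(uint64_t compat_ro_flags)
{
        uint64_t unknown = compat_ro_flags & ~SUPPORTED_RO_FLAGS;

        if (unknown) {
                printf("unknown RO flags 0x%llx, filling dummy groups\n",
                       (unsigned long long)unknown);
                return 1;       /* degraded path, like fill_dummy_bgs() */
        }
        return 0;               /* normal path */
}

int main(void)
{
        read_block_groups(FEATURE_A);               /* normal */
        read_block_groups(FEATURE_A | (1ULL << 7)); /* degraded */
        return 0;
}
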
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 92f3f5ed8bf1e..7ce39983dc8ab 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4904,6 +4904,9 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ !test_bit(BTRFS_ROOT_RESET_LOCKDEP_CLASS, &root->state))
+ lockdep_owner = BTRFS_FS_TREE_OBJECTID;
+
++ /* btrfs_clean_tree_block() accesses generation field. */
++ btrfs_set_header_generation(buf, trans->transid);
++
+ /*
+ * This needs to stay, because we could allocate a freed block from an
+ * old tree into a new tree, so we need to make sure this new block is
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 78df9b8557ddd..51da01074bb07 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -523,6 +523,7 @@ void btrfs_drop_extent_cache(struct btrfs_inode *inode, u64 start, u64 end,
+ testend = 0;
+ }
+ while (1) {
++ bool ends_after_range = false;
+ int no_splits = 0;
+
+ modified = false;
+@@ -539,10 +540,12 @@ void btrfs_drop_extent_cache(struct btrfs_inode *inode, u64 start, u64 end,
+ write_unlock(&em_tree->lock);
+ break;
+ }
++ if (testend && em->start + em->len > start + len)
++ ends_after_range = true;
+ flags = em->flags;
+ gen = em->generation;
+ if (skip_pinned && test_bit(EXTENT_FLAG_PINNED, &em->flags)) {
+- if (testend && em->start + em->len >= start + len) {
++ if (ends_after_range) {
+ free_extent_map(em);
+ write_unlock(&em_tree->lock);
+ break;
+@@ -592,7 +595,7 @@ void btrfs_drop_extent_cache(struct btrfs_inode *inode, u64 start, u64 end,
+ split = split2;
+ split2 = NULL;
+ }
+- if (testend && em->start + em->len > start + len) {
++ if (ends_after_range) {
+ u64 diff = start + len - em->start;
+
+ split->start = start + len;
+@@ -630,14 +633,42 @@ void btrfs_drop_extent_cache(struct btrfs_inode *inode, u64 start, u64 end,
+ } else {
+ ret = add_extent_mapping(em_tree, split,
+ modified);
+- ASSERT(ret == 0); /* Logic error */
++ /* Logic error, shouldn't happen. */
++ ASSERT(ret == 0);
++ if (WARN_ON(ret != 0) && modified)
++ btrfs_set_inode_full_sync(inode);
+ }
+ free_extent_map(split);
+ split = NULL;
+ }
+ next:
+- if (extent_map_in_tree(em))
++ if (extent_map_in_tree(em)) {
++ /*
++ * If the extent map is still in the tree it means that
++ * either of the following is true:
++ *
++ * 1) It fits entirely in our range (doesn't end beyond
++ * it or starts before it);
++ *
++ * 2) It starts before our range and/or ends after our
++ * range, and we were not able to allocate the extent
++ * maps for split operations, @split and @split2.
++ *
++ * If we are at case 2) then we just remove the entire
++ * extent map - this is fine since anyone who needs to
++ * access the subranges outside our range will just
++ * load it again from the subvolume tree's file extent
++ * item. However, if the extent map was in the list of
++ * modified extents, then we must mark the inode for a
++ * full fsync, otherwise a fast fsync will miss this
++ * extent if it's new and needs to be logged.
++ */
++ if ((em->start < start || ends_after_range) && modified) {
++ ASSERT(no_splits);
++ btrfs_set_inode_full_sync(inode);
++ }
+ remove_extent_mapping(em_tree, em);
++ }
+ write_unlock(&em_tree->lock);
+
+ /* once for us */
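
The ends_after_range flag introduced above captures "does this extent map
end beyond the dropped range" once, while em is guaranteed intact, instead
of re-deriving it later when the map may already have been freed or shrunk;
the comparison is also tightened from >= to >, so an extent ending exactly
at the range boundary no longer counts as extending past it. The
evaluate-before-invalidate shape in isolation (hypothetical names):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct extent { unsigned long start, len; };

static void drop_range(struct extent *em, unsigned long start,
                       unsigned long len)
{
        /* capture the predicate while em is still valid */
        bool ends_after_range = em->start + em->len > start + len;

        free(em);               /* em must not be dereferenced past here */

        if (ends_after_range)
                puts("tail of the extent survives the drop");
}

int main(void)
{
        struct extent *em = malloc(sizeof(*em));

        if (!em)
                return 1;
        em->start = 0;
        em->len = 100;
        drop_range(em, 0, 50);  /* [50,100) survives */
        return 0;
}
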
+@@ -2201,14 +2232,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+
+ atomic_inc(&root->log_batch);
+
+- /*
+- * Always check for the full sync flag while holding the inode's lock,
+- * to avoid races with other tasks. The flag must be either set all the
+- * time during logging or always off all the time while logging.
+- */
+- full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+- &BTRFS_I(inode)->runtime_flags);
+-
+ /*
+ * Before we acquired the inode's lock and the mmap lock, someone may
+ * have dirtied more pages in the target range. We need to make sure
+@@ -2233,6 +2256,17 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ goto out;
+ }
+
++ /*
++ * Always check for the full sync flag while holding the inode's lock,
++ * to avoid races with other tasks. The flag must be either set all the
++ * time during logging or always off all the time while logging.
++ * We check the flag here after starting delalloc above, because when
++ * running delalloc the full sync flag may be set if we need to drop
++ * extra extent map ranges due to temporary memory allocation failures.
++ */
++ full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++ &BTRFS_I(inode)->runtime_flags);
++
+ /*
+ * We have to do this here to avoid the priority inversion of waiting on
+ * IO of a lower priority task while holding a transaction open.
+@@ -3810,6 +3844,7 @@ const struct file_operations btrfs_file_operations = {
+ .mmap = btrfs_file_mmap,
+ .open = btrfs_file_open,
+ .release = btrfs_release_file,
++ .get_unmapped_area = thp_get_unmapped_area,
+ .fsync = btrfs_sync_file,
+ .fallocate = btrfs_fallocate,
+ .unlocked_ioctl = btrfs_ioctl,
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index b1ae3ba2ca2c3..5bdfc47eb6dd5 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -48,6 +48,25 @@ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ struct btrfs_free_space *info, u64 offset,
+ u64 bytes, bool update_stats);
+
++static void __btrfs_remove_free_space_cache_locked(
++ struct btrfs_free_space_ctl *ctl)
++{
++ struct btrfs_free_space *info;
++ struct rb_node *node;
++
++ while ((node = rb_last(&ctl->free_space_offset)) != NULL) {
++ info = rb_entry(node, struct btrfs_free_space, offset_index);
++ if (!info->bitmap) {
++ unlink_free_space(ctl, info, true);
++ kmem_cache_free(btrfs_free_space_cachep, info);
++ } else {
++ free_bitmap(ctl, info);
++ }
++
++ cond_resched_lock(&ctl->tree_lock);
++ }
++}
++
+ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ struct btrfs_path *path,
+ u64 offset)
+@@ -693,6 +712,12 @@ static void recalculate_thresholds(struct btrfs_free_space_ctl *ctl)
+
+ max_bitmaps = max_t(u64, max_bitmaps, 1);
+
++ if (ctl->total_bitmaps > max_bitmaps)
++ btrfs_err(block_group->fs_info,
++"invalid free space control: bg start=%llu len=%llu total_bitmaps=%u unit=%u max_bitmaps=%llu bytes_per_bg=%llu",
++ block_group->start, block_group->length,
++ ctl->total_bitmaps, ctl->unit, max_bitmaps,
++ bytes_per_bg);
+ ASSERT(ctl->total_bitmaps <= max_bitmaps);
+
+ /*
+@@ -875,7 +900,14 @@ out:
+ return ret;
+ free_cache:
+ io_ctl_drop_pages(&io_ctl);
+- __btrfs_remove_free_space_cache(ctl);
++
++ /*
++ * We need to call the _locked variant so we don't try to update the
++ * discard counters.
++ */
++ spin_lock(&ctl->tree_lock);
++ __btrfs_remove_free_space_cache_locked(ctl);
++ spin_unlock(&ctl->tree_lock);
+ goto out;
+ }
+
+@@ -1001,7 +1033,13 @@ int load_free_space_cache(struct btrfs_block_group *block_group)
+ if (ret == 0)
+ ret = 1;
+ } else {
++ /*
++ * We need to call the _locked variant so we don't try to update
++ * the discard counters.
++ */
++ spin_lock(&tmp_ctl.tree_lock);
+ __btrfs_remove_free_space_cache(&tmp_ctl);
++ spin_unlock(&tmp_ctl.tree_lock);
+ btrfs_warn(fs_info,
+ "block group %llu has wrong amount of free space",
+ block_group->start);
+@@ -2964,25 +3002,6 @@ static void __btrfs_return_cluster_to_free_space(
+ btrfs_put_block_group(block_group);
+ }
+
+-static void __btrfs_remove_free_space_cache_locked(
+- struct btrfs_free_space_ctl *ctl)
+-{
+- struct btrfs_free_space *info;
+- struct rb_node *node;
+-
+- while ((node = rb_last(&ctl->free_space_offset)) != NULL) {
+- info = rb_entry(node, struct btrfs_free_space, offset_index);
+- if (!info->bitmap) {
+- unlink_free_space(ctl, info, true);
+- kmem_cache_free(btrfs_free_space_cachep, info);
+- } else {
+- free_bitmap(ctl, info);
+- }
+-
+- cond_resched_lock(&ctl->tree_lock);
+- }
+-}
+-
+ void __btrfs_remove_free_space_cache(struct btrfs_free_space_ctl *ctl)
+ {
+ spin_lock(&ctl->tree_lock);
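
Moving __btrfs_remove_free_space_cache_locked() above its new callers
follows the kernel's "_locked" suffix convention: paths that already hold
tree_lock (here, the error paths that must not touch the discard counters)
call the _locked variant directly, while the plain function takes the lock
itself. The convention in miniature, with a pthread mutex standing in for
the spinlock and hypothetical names:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static int entries;

static void remove_cache_locked(void)   /* caller holds tree_lock */
{
        while (entries > 0)
                entries--;              /* drop one entry per pass */
}

static void remove_cache(void)          /* takes the lock itself */
{
        pthread_mutex_lock(&tree_lock);
        remove_cache_locked();
        pthread_mutex_unlock(&tree_lock);
}

int main(void)
{
        entries = 3;

        /* a caller that already holds the lock uses the _locked form */
        pthread_mutex_lock(&tree_lock);
        remove_cache_locked();
        pthread_mutex_unlock(&tree_lock);

        remove_cache();                 /* everyone else uses this */
        printf("entries left: %d\n", entries);
        return 0;
}
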
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index db723c0026bd2..ba323dcb0a0b8 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1174,6 +1174,21 @@ out_add_root:
+ fs_info->qgroup_rescan_running = true;
+ btrfs_queue_work(fs_info->qgroup_rescan_workers,
+ &fs_info->qgroup_rescan_work);
++ } else {
++ /*
++ * We have set both BTRFS_FS_QUOTA_ENABLED and
++ * BTRFS_QGROUP_STATUS_FLAG_ON, so we can only fail with
++ * -EINPROGRESS. That can happen because someone started the
++ * rescan worker by calling the quota rescan ioctl before we
++ * attempted to initialize the rescan worker. Failure due to
++ * quotas being disabled in the meantime is not possible,
++ * because we are holding a write lock on fs_info->subvol_sem,
++ * which is also acquired when disabling quotas.
++ * Ignore such an error; any other error would require undoing
++ * everything we did in the transaction we just committed.
++ */
++ ASSERT(ret == -EINPROGRESS);
++ ret = 0;
+ }
+
+ out_free_path:
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index e7b0323e6efd8..a14f97bf2a408 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -731,6 +731,13 @@ static void scrub_print_warning(const char *errstr, struct scrub_block *sblock)
+ dev = sblock->sectors[0]->dev;
+ fs_info = sblock->sctx->fs_info;
+
++ /* Super block error, no need to search extent tree. */
++ if (sblock->sectors[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
++ btrfs_warn_in_rcu(fs_info, "%s on device %s, physical %llu",
++ errstr, rcu_str_deref(dev->name),
++ sblock->sectors[0]->physical);
++ return;
++ }
+ path = btrfs_alloc_path();
+ if (!path)
+ return;
+@@ -806,7 +813,7 @@ static inline void scrub_put_recover(struct btrfs_fs_info *fs_info,
+ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
+ {
+ struct scrub_ctx *sctx = sblock_to_check->sctx;
+- struct btrfs_device *dev;
++ struct btrfs_device *dev = sblock_to_check->sectors[0]->dev;
+ struct btrfs_fs_info *fs_info;
+ u64 logical;
+ unsigned int failed_mirror_index;
+@@ -827,13 +834,15 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
+ fs_info = sctx->fs_info;
+ if (sblock_to_check->sectors[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
+ /*
+- * if we find an error in a super block, we just report it.
++ * If we find an error in a super block, we just report it.
+ * They will get written with the next transaction commit
+ * anyway
+ */
++ scrub_print_warning("super block error", sblock_to_check);
+ spin_lock(&sctx->stat_lock);
+ ++sctx->stat.super_errors;
+ spin_unlock(&sctx->stat_lock);
++ btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_CORRUPTION_ERRS);
+ return 0;
+ }
+ logical = sblock_to_check->sectors[0]->logical;
+@@ -842,7 +851,6 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
+ is_metadata = !(sblock_to_check->sectors[0]->flags &
+ BTRFS_EXTENT_FLAG_DATA);
+ have_csum = sblock_to_check->sectors[0]->have_csum;
+- dev = sblock_to_check->sectors[0]->dev;
+
+ if (!sctx->is_dev_replace && btrfs_repair_one_zone(fs_info, logical))
+ return 0;
+@@ -1773,7 +1781,7 @@ static int scrub_checksum(struct scrub_block *sblock)
+ else if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK)
+ ret = scrub_checksum_tree_block(sblock);
+ else if (flags & BTRFS_EXTENT_FLAG_SUPER)
+- (void)scrub_checksum_super(sblock);
++ ret = scrub_checksum_super(sblock);
+ else
+ WARN_ON(1);
+ if (ret)
+@@ -1912,23 +1920,6 @@ static int scrub_checksum_super(struct scrub_block *sblock)
+ if (memcmp(calculated_csum, s->csum, sctx->fs_info->csum_size))
+ ++fail_cor;
+
+- if (fail_cor + fail_gen) {
+- /*
+- * if we find an error in a super block, we just report it.
+- * They will get written with the next transaction commit
+- * anyway
+- */
+- spin_lock(&sctx->stat_lock);
+- ++sctx->stat.super_errors;
+- spin_unlock(&sctx->stat_lock);
+- if (fail_cor)
+- btrfs_dev_stat_inc_and_print(sector->dev,
+- BTRFS_DEV_STAT_CORRUPTION_ERRS);
+- else
+- btrfs_dev_stat_inc_and_print(sector->dev,
+- BTRFS_DEV_STAT_GENERATION_ERRS);
+- }
+-
+ return fail_cor + fail_gen;
+ }
+
+@@ -4121,6 +4112,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ int ret;
+ struct btrfs_device *dev;
+ unsigned int nofs_flag;
++ bool need_commit = false;
+
+ if (btrfs_fs_closing(fs_info))
+ return -EAGAIN;
+@@ -4224,6 +4216,12 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ */
+ nofs_flag = memalloc_nofs_save();
+ if (!is_dev_replace) {
++ u64 old_super_errors;
++
++ spin_lock(&sctx->stat_lock);
++ old_super_errors = sctx->stat.super_errors;
++ spin_unlock(&sctx->stat_lock);
++
+ btrfs_info(fs_info, "scrub: started on devid %llu", devid);
+ /*
+ * by holding device list mutex, we can
+@@ -4232,6 +4230,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ ret = scrub_supers(sctx, dev);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
++
++ spin_lock(&sctx->stat_lock);
++ /*
++ * Super block errors found, but we can not commit transaction
++ * at current context, since btrfs_commit_transaction() needs
++ * to pause the current running scrub (hold by ourselves).
++ */
++ if (sctx->stat.super_errors > old_super_errors && !sctx->readonly)
++ need_commit = true;
++ spin_unlock(&sctx->stat_lock);
+ }
+
+ if (!ret)
+@@ -4258,6 +4266,25 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ scrub_workers_put(fs_info);
+ scrub_put_ctx(sctx);
+
++ /*
++ * We found some super block errors before, now try to force a
++ * transaction commit, as scrub has finished.
++ */
++ if (need_commit) {
++ struct btrfs_trans_handle *trans;
++
++ trans = btrfs_start_transaction(fs_info->tree_root, 0);
++ if (IS_ERR(trans)) {
++ ret = PTR_ERR(trans);
++ btrfs_err(fs_info,
++ "scrub: failed to start transaction to fix super block errors: %d", ret);
++ return ret;
++ }
++ ret = btrfs_commit_transaction(trans);
++ if (ret < 0)
++ btrfs_err(fs_info,
++ "scrub: failed to commit transaction to fix super block errors: %d", ret);
++ }
+ return ret;
+ out:
+ scrub_workers_put(fs_info);
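
btrfs_scrub_dev() now snapshots stat.super_errors under stat_lock before
scrubbing the super blocks, compares afterwards, and only records
need_commit; the actual transaction commit is deferred until the scrub has
been torn down, since committing would try to pause the very scrub we hold.
The snapshot-compare-defer shape, reduced to a userspace sketch with
hypothetical names:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t stat_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long super_errors;

static void scrub_supers(void)
{
        pthread_mutex_lock(&stat_lock);
        super_errors++;                 /* pretend one error was found */
        pthread_mutex_unlock(&stat_lock);
}

int main(void)
{
        unsigned long old;
        bool need_commit = false;

        pthread_mutex_lock(&stat_lock);
        old = super_errors;             /* snapshot before the operation */
        pthread_mutex_unlock(&stat_lock);

        scrub_supers();

        pthread_mutex_lock(&stat_lock);
        if (super_errors > old)
                need_commit = true;     /* just remember; act later */
        pthread_mutex_unlock(&stat_lock);

        if (need_commit)                /* all scrub state released here */
                puts("committing transaction after scrub finished");
        return 0;
}
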
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 6627dd7875ee0..b2a5291e5e4b2 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -625,6 +625,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ int saved_compress_level;
+ bool saved_compress_force;
+ int no_compress = 0;
++ const bool remounting = test_bit(BTRFS_FS_STATE_REMOUNTING, &info->fs_state);
+
+ if (btrfs_fs_compat_ro(info, FREE_SPACE_TREE))
+ btrfs_set_opt(info->mount_opt, FREE_SPACE_TREE);
+@@ -1136,10 +1137,12 @@ out:
+ }
+ if (!ret)
+ ret = btrfs_check_mountopts_zoned(info);
+- if (!ret && btrfs_test_opt(info, SPACE_CACHE))
+- btrfs_info(info, "disk space caching is enabled");
+- if (!ret && btrfs_test_opt(info, FREE_SPACE_TREE))
+- btrfs_info(info, "using free space tree");
++ if (!ret && !remounting) {
++ if (btrfs_test_opt(info, SPACE_CACHE))
++ btrfs_info(info, "disk space caching is enabled");
++ if (btrfs_test_opt(info, FREE_SPACE_TREE))
++ btrfs_info(info, "using free space tree");
++ }
+ return ret;
+ }
+
+@@ -2113,6 +2116,15 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ ret = -EINVAL;
+ goto restore;
+ }
++ if (btrfs_super_compat_ro_flags(fs_info->super_copy) &
++ ~BTRFS_FEATURE_COMPAT_RO_SUPP) {
++ btrfs_err(fs_info,
++ "can not remount read-write due to unsupported optional flags 0x%llx",
++ btrfs_super_compat_ro_flags(fs_info->super_copy) &
++ ~BTRFS_FEATURE_COMPAT_RO_SUPP);
++ ret = -EINVAL;
++ goto restore;
++ }
+ if (fs_info->fs_devices->rw_devices == 0) {
+ ret = -EACCES;
+ goto restore;
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index d59aebefa71cd..66305d11cd6d0 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -642,7 +642,7 @@ cifs_chan_is_iface_active(struct cifs_ses *ses,
+ int
+ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server);
+ int
+-SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon);
++SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon, bool in_mount);
+
+ void extract_unc_hostname(const char *unc, const char **h, size_t *len);
+ int copy_path_name(char *dst, const char *src);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index bdc3efdb12219..999d6fd31ac9a 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -155,7 +155,7 @@ static void smb2_query_server_interfaces(struct work_struct *work)
+ /*
+ * query server network interfaces, in case they change
+ */
+- rc = SMB3_request_interfaces(0, tcon);
++ rc = SMB3_request_interfaces(0, tcon, false);
+ if (rc) {
+ cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n",
+ __func__, rc);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 02dd591acabb3..34739cf0c25df 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4024,6 +4024,15 @@ static ssize_t __cifs_readv(
+ len = ctx->len;
+ }
+
++ if (direct) {
++ rc = filemap_write_and_wait_range(file->f_inode->i_mapping,
++ offset, offset + len - 1);
++ if (rc) {
++ kref_put(&ctx->refcount, cifs_aio_ctx_release);
++ return -EAGAIN;
++ }
++ }
++
+ /* grab a lock here due to read response handlers can access ctx */
+ mutex_lock(&ctx->aio_mutex);
+
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index cc180d37b8ce1..60b006d982c25 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -511,8 +511,7 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+
+ static int
+ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+- size_t buf_len,
+- struct cifs_ses *ses)
++ size_t buf_len, struct cifs_ses *ses, bool in_mount)
+ {
+ struct network_interface_info_ioctl_rsp *p;
+ struct sockaddr_in *addr4;
+@@ -542,6 +541,20 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ }
+ spin_unlock(&ses->iface_lock);
+
++ /*
++ * A Samba server, for example, can return an empty interface list
++ * in some cases, which would only be a problem if we were
++ * requesting multichannel
++ */
++ if (bytes_left == 0) {
++ /* avoid spamming logs every 10 minutes, so log only during mount */
++ if ((ses->chan_max > 1) && in_mount)
++ cifs_dbg(VFS,
++ "empty network interface list returned by server %s\n",
++ ses->server->hostname);
++ rc = -EINVAL;
++ goto out;
++ }
++
+ while (bytes_left >= sizeof(*p)) {
+ memset(&tmp_iface, 0, sizeof(tmp_iface));
+ tmp_iface.speed = le64_to_cpu(p->LinkSpeed);
+@@ -672,7 +685,7 @@ out:
+ }
+
+ int
+-SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
++SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon, bool in_mount)
+ {
+ int rc;
+ unsigned int ret_data_len = 0;
+@@ -692,7 +705,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
+ goto out;
+ }
+
+- rc = parse_server_interfaces(out_buf, ret_data_len, ses);
++ rc = parse_server_interfaces(out_buf, ret_data_len, ses, in_mount);
+ if (rc)
+ goto out;
+
+@@ -1022,7 +1035,7 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon,
+ if (rc)
+ return;
+
+- SMB3_request_interfaces(xid, tcon);
++ SMB3_request_interfaces(xid, tcon, true /* called during mount */);
+
+ SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ FS_ATTRIBUTE_INFORMATION);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 31d37afae741f..816ae63d795a3 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1168,9 +1168,9 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ pneg_inbuf->Dialects[0] =
+ cpu_to_le16(server->vals->protocol_id);
+ pneg_inbuf->DialectCount = cpu_to_le16(1);
+- /* structure is big enough for 3 dialects, sending only 1 */
++ /* structure is big enough for 4 dialects, sending only 1 */
+ inbuflen = sizeof(*pneg_inbuf) -
+- sizeof(pneg_inbuf->Dialects[0]) * 2;
++ sizeof(pneg_inbuf->Dialects[0]) * 3;
+ }
+
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
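
The corrected comment and multiplier now agree with the structure, which
has room for four dialects: when only one is sent, the three unused
trailing slots are subtracted from the wire length. The same arithmetic on
an invented struct (this is not the real SMB2 request layout):

#include <stdint.h>
#include <stdio.h>

struct negotiate_req {
        uint8_t  guid[16];
        uint16_t dialect_count;
        uint16_t dialects[4];   /* room for 4, may send fewer */
};

int main(void)
{
        struct negotiate_req req = { .dialect_count = 1 };
        size_t inbuflen;

        req.dialects[0] = 0x0311;
        /* trim the 3 unused trailing slots from the buffer length */
        inbuflen = sizeof(req) - sizeof(req.dialects[0]) * 3;
        printf("sending %zu of %zu bytes\n", inbuflen, sizeof(req));
        return 0;
}
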
+@@ -2410,7 +2410,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ unsigned int acelen, acl_size, ace_count;
+ unsigned int owner_offset = 0;
+ unsigned int group_offset = 0;
+- struct smb3_acl acl;
++ struct smb3_acl acl = {};
+
+ *len = roundup(sizeof(struct crt_sd_ctxt) + (sizeof(struct cifs_ace) * 4), 8);
+
+@@ -2483,6 +2483,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
+ acl.AclSize = cpu_to_le16(acl_size);
+ acl.AceCount = cpu_to_le16(ace_count);
++ /* acl.Sbz1 and acl.Sbz2 must be zero (MBZ); the empty initializer above already zeroed them */
+ memcpy(aclptr, &acl, sizeof(struct smb3_acl));
+
+ buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 55e79f6ee78d1..334d8471346ff 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -225,9 +225,9 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ struct smb_rqst drqst;
+
+ ses = smb2_find_smb_ses(server, le64_to_cpu(shdr->SessionId));
+- if (!ses) {
++ if (unlikely(!ses)) {
+ cifs_server_dbg(VFS, "%s: Could not find session\n", __func__);
+- return 0;
++ return -ENOENT;
+ }
+
+ memset(smb2_signature, 0x0, SMB2_HMACSHA256_SIZE);
+@@ -557,8 +557,10 @@ smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ u8 key[SMB3_SIGN_KEY_SIZE];
+
+ rc = smb2_get_sign_key(le64_to_cpu(shdr->SessionId), server, key);
+- if (rc)
+- return 0;
++ if (unlikely(rc)) {
++ cifs_server_dbg(VFS, "%s: Could not get signing key\n", __func__);
++ return rc;
++ }
+
+ if (allocate_crypto) {
+ rc = cifs_alloc_hash("cmac(aes)", &hash, &sdesc);
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index bfac462dd3e8f..730739869ba02 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -200,13 +200,13 @@ void dlm_add_cb(struct dlm_lkb *lkb, uint32_t flags, int mode, int status,
+ if (!prev_seq) {
+ kref_get(&lkb->lkb_ref);
+
++ mutex_lock(&ls->ls_cb_mutex);
+ if (test_bit(LSFL_CB_DELAY, &ls->ls_flags)) {
+- mutex_lock(&ls->ls_cb_mutex);
+ list_add(&lkb->lkb_cb_list, &ls->ls_cb_delay);
+- mutex_unlock(&ls->ls_cb_mutex);
+ } else {
+ queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
+ }
++ mutex_unlock(&ls->ls_cb_mutex);
+ }
+ out:
+ mutex_unlock(&lkb->lkb_cb_mutex);
+@@ -288,7 +288,9 @@ void dlm_callback_stop(struct dlm_ls *ls)
+
+ void dlm_callback_suspend(struct dlm_ls *ls)
+ {
++ mutex_lock(&ls->ls_cb_mutex);
+ set_bit(LSFL_CB_DELAY, &ls->ls_flags);
++ mutex_unlock(&ls->ls_cb_mutex);
+
+ if (ls->ls_callback_wq)
+ flush_workqueue(ls->ls_callback_wq);
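
The two dlm hunks close a check-then-act race: dlm_add_cb() used to test
LSFL_CB_DELAY before taking ls_cb_mutex, so a callback could reach the
workqueue while dlm_callback_suspend() was setting the flag. Now the test
and the flag update are serialized by the same mutex. The race and its fix,
reduced to pthreads with hypothetical names:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cb_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool cb_delay;
static int delayed, queued;

static void add_cb(void)
{
        pthread_mutex_lock(&cb_mutex);  /* covers test *and* action */
        if (cb_delay)
                delayed++;              /* park on the delay list */
        else
                queued++;               /* hand to the workqueue */
        pthread_mutex_unlock(&cb_mutex);
}

static void callback_suspend(void)
{
        pthread_mutex_lock(&cb_mutex);
        cb_delay = true;                /* now serialized with add_cb() */
        pthread_mutex_unlock(&cb_mutex);
}

int main(void)
{
        add_cb();
        callback_suspend();
        add_cb();
        printf("queued=%d delayed=%d\n", queued, delayed);
        return 0;
}
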
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 226822f49d309..86aa37ab0e79f 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -2920,17 +2920,9 @@ static int set_unlock_args(uint32_t flags, void *astarg, struct dlm_args *args)
+ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
+ struct dlm_args *args)
+ {
+- int rv = -EINVAL;
++ int rv = -EBUSY;
+
+ if (args->flags & DLM_LKF_CONVERT) {
+- if (lkb->lkb_flags & DLM_IFL_MSTCPY)
+- goto out;
+-
+- if (args->flags & DLM_LKF_QUECVT &&
+- !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
+- goto out;
+-
+- rv = -EBUSY;
+ if (lkb->lkb_status != DLM_LKSTS_GRANTED)
+ goto out;
+
+@@ -2940,6 +2932,14 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
+
+ if (is_overlap(lkb))
+ goto out;
++
++ rv = -EINVAL;
++ if (lkb->lkb_flags & DLM_IFL_MSTCPY)
++ goto out;
++
++ if (args->flags & DLM_LKF_QUECVT &&
++ !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
++ goto out;
+ }
+
+ lkb->lkb_exflags = args->flags;
+@@ -3672,7 +3672,7 @@ static void send_args(struct dlm_rsb *r, struct dlm_lkb *lkb,
+ case cpu_to_le32(DLM_MSG_REQUEST_REPLY):
+ case cpu_to_le32(DLM_MSG_CONVERT_REPLY):
+ case cpu_to_le32(DLM_MSG_GRANT):
+- if (!lkb->lkb_lvbptr)
++ if (!lkb->lkb_lvbptr || !(lkb->lkb_exflags & DLM_LKF_VALBLK))
+ break;
+ memcpy(ms->m_extra, lkb->lkb_lvbptr, r->res_ls->ls_lvblen);
+ break;
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 19e82f08c0e0c..c80ee6a95d171 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1336,6 +1336,8 @@ struct dlm_msg *dlm_lowcomms_new_msg(int nodeid, int len, gfp_t allocation,
+ return NULL;
+ }
+
++ /* take an extra reference for dlm_lowcomms_commit_msg() */
++ kref_get(&msg->ref);
+ /* we assume commit must be called if this succeeds */
+ msg->idx = idx;
+ return msg;
+@@ -1375,6 +1377,8 @@ void dlm_lowcomms_commit_msg(struct dlm_msg *msg)
+ {
+ _dlm_lowcomms_commit_msg(msg);
+ srcu_read_unlock(&connections_srcu, msg->idx);
++ /* drop the reference taken in dlm_lowcomms_new_msg() */
++ kref_put(&msg->ref, dlm_msg_release);
+ }
+ #endif
+
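
The lowcomms fix is a lifetime rule: dlm_lowcomms_new_msg() takes an extra
reference on behalf of the commit call, and dlm_lowcomms_commit_msg() drops
it, so the message cannot be freed in the window between the two. A compact
stand-in using C11 atomics (names invented for the sketch):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct msg { atomic_int ref; };

static void msg_put(struct msg *m)
{
        /* fetch_sub returns the old value; free on the last reference */
        if (atomic_fetch_sub(&m->ref, 1) == 1) {
                puts("releasing message");
                free(m);
        }
}

static struct msg *new_msg(void)
{
        struct msg *m = calloc(1, sizeof(*m));

        if (!m)
                return NULL;
        atomic_store(&m->ref, 1);       /* caller's reference */
        atomic_fetch_add(&m->ref, 1);   /* held for commit_msg() */
        return m;
}

static void commit_msg(struct msg *m)
{
        /* ... queue for transmission ... */
        msg_put(m);                     /* drop new_msg()'s extra ref */
}

int main(void)
{
        struct msg *m = new_msg();

        if (!m)
                return 1;
        commit_msg(m);
        msg_put(m);                     /* caller done with the handle */
        return 0;
}
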
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 95a403720e8c7..16cf9a2835574 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -214,7 +214,7 @@ static int erofs_fill_symlink(struct inode *inode, void *kaddr,
+
+ /* if it cannot be handled with fast symlink scheme */
+ if (vi->datalayout != EROFS_INODE_FLAT_INLINE ||
+- inode->i_size >= EROFS_BLKSIZ) {
++ inode->i_size >= EROFS_BLKSIZ || inode->i_size < 0) {
+ inode->i_op = &erofs_symlink_iops;
+ return 0;
+ }
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 95addc5c9d34d..ddf8f737cfb58 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -877,7 +877,7 @@ static void erofs_kill_sb(struct super_block *sb)
+ WARN_ON(sb->s_magic != EROFS_SUPER_MAGIC);
+
+ if (erofs_is_fscache_mode(sb))
+- generic_shutdown_super(sb);
++ kill_anon_super(sb);
+ else
+ kill_block_super(sb);
+
+diff --git a/fs/eventfd.c b/fs/eventfd.c
+index 3627dd7d25db8..c0ffee99ad238 100644
+--- a/fs/eventfd.c
++++ b/fs/eventfd.c
+@@ -69,17 +69,17 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+ * it returns false, the eventfd_signal() call should be deferred to a
+ * safe context.
+ */
+- if (WARN_ON_ONCE(current->in_eventfd_signal))
++ if (WARN_ON_ONCE(current->in_eventfd))
+ return 0;
+
+ spin_lock_irqsave(&ctx->wqh.lock, flags);
+- current->in_eventfd_signal = 1;
++ current->in_eventfd = 1;
+ if (ULLONG_MAX - ctx->count < n)
+ n = ULLONG_MAX - ctx->count;
+ ctx->count += n;
+ if (waitqueue_active(&ctx->wqh))
+ wake_up_locked_poll(&ctx->wqh, EPOLLIN);
+- current->in_eventfd_signal = 0;
++ current->in_eventfd = 0;
+ spin_unlock_irqrestore(&ctx->wqh.lock, flags);
+
+ return n;
+@@ -253,8 +253,10 @@ static ssize_t eventfd_read(struct kiocb *iocb, struct iov_iter *to)
+ __set_current_state(TASK_RUNNING);
+ }
+ eventfd_ctx_do_read(ctx, &ucnt);
++ current->in_eventfd = 1;
+ if (waitqueue_active(&ctx->wqh))
+ wake_up_locked_poll(&ctx->wqh, EPOLLOUT);
++ current->in_eventfd = 0;
+ spin_unlock_irq(&ctx->wqh.lock);
+ if (unlikely(copy_to_iter(&ucnt, sizeof(ucnt), to) != sizeof(ucnt)))
+ return -EFAULT;
+@@ -301,8 +303,10 @@ static ssize_t eventfd_write(struct file *file, const char __user *buf, size_t c
+ }
+ if (likely(res > 0)) {
+ ctx->count += ucnt;
++ current->in_eventfd = 1;
+ if (waitqueue_active(&ctx->wqh))
+ wake_up_locked_poll(&ctx->wqh, EPOLLIN);
++ current->in_eventfd = 0;
+ }
+ spin_unlock_irq(&ctx->wqh.lock);
+
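
Renaming in_eventfd_signal to in_eventfd lets the read and write paths
share the same recursion guard around their wakeups: the flag is raised
across the wake_up call so a poll callback that re-enters eventfd is
detected instead of deadlocking on wqh.lock. The guard in miniature, kept
thread-local here where the kernel keeps it in task_struct (hypothetical
names):

#include <stdbool.h>
#include <stdio.h>

static _Thread_local bool in_signal;

static void wake_up(void)
{
        /* a waiter's callback could call back into signal_ctx() here */
}

static int signal_ctx(void)
{
        if (in_signal) {
                fprintf(stderr, "recursive signal detected\n");
                return 0;               /* refuse, like WARN_ON_ONCE */
        }
        in_signal = true;
        wake_up();                      /* guarded region */
        in_signal = false;
        return 1;
}

int main(void)
{
        printf("first call ok: %d\n", signal_ctx());
        return 0;
}
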
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index cdffa2a041af8..f53ab39bb8e87 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -163,7 +163,7 @@ static void ext2_put_super (struct super_block * sb)
+ db_count = sbi->s_gdb_count;
+ for (i = 0; i < db_count; i++)
+ brelse(sbi->s_group_desc[i]);
+- kfree(sbi->s_group_desc);
++ kvfree(sbi->s_group_desc);
+ kfree(sbi->s_debts);
+ percpu_counter_destroy(&sbi->s_freeblocks_counter);
+ percpu_counter_destroy(&sbi->s_freeinodes_counter);
+@@ -1053,6 +1053,13 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ sbi->s_blocks_per_group);
+ goto failed_mount;
+ }
++ /* At least inode table, bitmaps, and sb have to fit in one group */
++ if (sbi->s_blocks_per_group <= sbi->s_itb_per_group + 3) {
++ ext2_msg(sb, KERN_ERR,
++ "error: #blocks per group smaller than metadata size: %lu <= %lu",
++ sbi->s_blocks_per_group, sbi->s_itb_per_group + 3);
++ goto failed_mount;
++ }
+ if (sbi->s_frags_per_group > sb->s_blocksize * 8) {
+ ext2_msg(sb, KERN_ERR,
+ "error: #fragments per group too big: %lu",
+@@ -1066,9 +1073,14 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ sbi->s_inodes_per_group);
+ goto failed_mount;
+ }
++ if (sb_bdev_nr_blocks(sb) < le32_to_cpu(es->s_blocks_count)) {
++ ext2_msg(sb, KERN_ERR,
++ "bad geometry: block count %u exceeds size of device (%u blocks)",
++ le32_to_cpu(es->s_blocks_count),
++ (unsigned)sb_bdev_nr_blocks(sb));
++ goto failed_mount;
++ }
+
+- if (EXT2_BLOCKS_PER_GROUP(sb) == 0)
+- goto cantfind_ext2;
+ sbi->s_groups_count = ((le32_to_cpu(es->s_blocks_count) -
+ le32_to_cpu(es->s_first_data_block) - 1)
+ / EXT2_BLOCKS_PER_GROUP(sb)) + 1;
+@@ -1081,7 +1093,7 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ }
+ db_count = (sbi->s_groups_count + EXT2_DESC_PER_BLOCK(sb) - 1) /
+ EXT2_DESC_PER_BLOCK(sb);
+- sbi->s_group_desc = kmalloc_array(db_count,
++ sbi->s_group_desc = kvmalloc_array(db_count,
+ sizeof(struct buffer_head *),
+ GFP_KERNEL);
+ if (sbi->s_group_desc == NULL) {
+@@ -1207,7 +1219,7 @@ failed_mount2:
+ for (i = 0; i < db_count; i++)
+ brelse(sbi->s_group_desc[i]);
+ failed_mount_group_desc:
+- kfree(sbi->s_group_desc);
++ kvfree(sbi->s_group_desc);
+ kfree(sbi->s_debts);
+ failed_mount:
+ brelse(bh);
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 795a60ad18978..a25a3af139521 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -874,22 +874,25 @@ static int ext4_fc_write_inode(struct inode *inode, u32 *crc)
+ tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_INODE);
+ tl.fc_len = cpu_to_le16(inode_len + sizeof(fc_inode.fc_ino));
+
++ ret = -ECANCELED;
+ dst = ext4_fc_reserve_space(inode->i_sb,
+ sizeof(tl) + inode_len + sizeof(fc_inode.fc_ino), crc);
+ if (!dst)
+- return -ECANCELED;
++ goto err;
+
+ if (!ext4_fc_memcpy(inode->i_sb, dst, &tl, sizeof(tl), crc))
+- return -ECANCELED;
++ goto err;
+ dst += sizeof(tl);
+ if (!ext4_fc_memcpy(inode->i_sb, dst, &fc_inode, sizeof(fc_inode), crc))
+- return -ECANCELED;
++ goto err;
+ dst += sizeof(fc_inode);
+ if (!ext4_fc_memcpy(inode->i_sb, dst, (u8 *)ext4_raw_inode(&iloc),
+ inode_len, crc))
+- return -ECANCELED;
+-
+- return 0;
++ goto err;
++ ret = 0;
++err:
++ brelse(iloc.bh);
++ return ret;
+ }
+
+ /*
+@@ -1491,13 +1494,15 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
+ if (state->fc_modified_inodes[i] == ino)
+ return 0;
+ if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
+- state->fc_modified_inodes = krealloc(
+- state->fc_modified_inodes,
++ int *fc_modified_inodes;
++
++ fc_modified_inodes = krealloc(state->fc_modified_inodes,
+ sizeof(int) * (state->fc_modified_inodes_size +
+ EXT4_FC_REPLAY_REALLOC_INCREMENT),
+ GFP_KERNEL);
+- if (!state->fc_modified_inodes)
++ if (!fc_modified_inodes)
+ return -ENOMEM;
++ state->fc_modified_inodes = fc_modified_inodes;
+ state->fc_modified_inodes_size +=
+ EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ }
+@@ -1682,15 +1687,18 @@ int ext4_fc_record_regions(struct super_block *sb, int ino,
+ if (replay && state->fc_regions_used != state->fc_regions_valid)
+ state->fc_regions_used = state->fc_regions_valid;
+ if (state->fc_regions_used == state->fc_regions_size) {
++ struct ext4_fc_alloc_region *fc_regions;
++
++ fc_regions = krealloc(state->fc_regions,
++ sizeof(struct ext4_fc_alloc_region) *
++ (state->fc_regions_size +
++ EXT4_FC_REPLAY_REALLOC_INCREMENT),
++ GFP_KERNEL);
++ if (!fc_regions)
++ return -ENOMEM;
+ state->fc_regions_size +=
+ EXT4_FC_REPLAY_REALLOC_INCREMENT;
+- state->fc_regions = krealloc(
+- state->fc_regions,
+- state->fc_regions_size *
+- sizeof(struct ext4_fc_alloc_region),
+- GFP_KERNEL);
+- if (!state->fc_regions)
+- return -ENOMEM;
++ state->fc_regions = fc_regions;
+ }
+ region = &state->fc_regions[state->fc_regions_used++];
+ region->ino = ino;
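
Both fast-commit hunks fix the same classic realloc bug: writing
krealloc()'s result straight back into the only pointer loses the old
buffer when the allocation fails, leaking it and leaving a NULL behind.
The userspace shape of the fix is to grow through a temporary:

#include <stdlib.h>
#include <string.h>

static int grow(int **arr, size_t *cap, size_t extra)
{
        int *tmp = realloc(*arr, (*cap + extra) * sizeof(**arr));

        if (!tmp)
                return -1;      /* *arr is still valid; nothing leaked */
        *arr = tmp;
        *cap += extra;
        return 0;
}

int main(void)
{
        int *inodes = NULL;
        size_t cap = 0;

        if (grow(&inodes, &cap, 16))
                return 1;
        memset(inodes, 0, cap * sizeof(*inodes));
        free(inodes);
        return 0;
}
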
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 109d07629f81f..847a2f806b8f6 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -528,6 +528,12 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ ret = -EAGAIN;
+ goto out;
+ }
++ /*
++ * Make sure inline data cannot be created anymore since we are going
++ * to allocate blocks for DIO. We know the inode does not have any
++ * inline data now because ext4_dio_supported() checked for that.
++ */
++ ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+
+ offset = iocb->ki_pos;
+ count = ret;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 560cf8dc59359..7e5e8457026a1 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1188,6 +1188,13 @@ retry_grab:
+ page = grab_cache_page_write_begin(mapping, index);
+ if (!page)
+ return -ENOMEM;
++ /*
++ * As with page allocation, we preallocate buffer heads before
++ * starting the handle.
++ */
++ if (!page_has_buffers(page))
++ create_empty_buffers(page, inode->i_sb->s_blocksize, 0);
++
+ unlock_page(page);
+
+ retry_journal:
+@@ -5340,6 +5347,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ int error, rc = 0;
+ int orphan = 0;
+ const unsigned int ia_valid = attr->ia_valid;
++ bool inc_ivers = true;
+
+ if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
+ return -EIO;
+@@ -5425,8 +5433,8 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ return -EINVAL;
+ }
+
+- if (IS_I_VERSION(inode) && attr->ia_size != inode->i_size)
+- inode_inc_iversion(inode);
++ if (attr->ia_size == inode->i_size)
++ inc_ivers = false;
+
+ if (shrink) {
+ if (ext4_should_order_data(inode)) {
+@@ -5528,6 +5536,8 @@ out_mmap_sem:
+ }
+
+ if (!error) {
++ if (inc_ivers)
++ inode_inc_iversion(inode);
+ setattr_copy(mnt_userns, inode, attr);
+ mark_inode_dirty(inode);
+ }
+@@ -5731,9 +5741,6 @@ int ext4_mark_iloc_dirty(handle_t *handle,
+ }
+ ext4_fc_track_inode(handle, inode);
+
+- if (IS_I_VERSION(inode))
+- inode_inc_iversion(inode);
+-
+ /* the do_update_inode consumes one bh->b_count */
+ get_bh(iloc->bh);
+
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index cb01c1da0f9d5..b6f694d195997 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -442,6 +442,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ swap_inode_data(inode, inode_bl);
+
+ inode->i_ctime = inode_bl->i_ctime = current_time(inode);
++ inode_inc_iversion(inode);
+
+ inode->i_generation = prandom_u32();
+ inode_bl->i_generation = prandom_u32();
+@@ -655,6 +656,7 @@ static int ext4_ioctl_setflags(struct inode *inode,
+ ext4_set_inode_flags(inode, false);
+
+ inode->i_ctime = current_time(inode);
++ inode_inc_iversion(inode);
+
+ err = ext4_mark_iloc_dirty(handle, inode, &iloc);
+ flags_err:
+@@ -765,6 +767,7 @@ static int ext4_ioctl_setproject(struct inode *inode, __u32 projid)
+
+ EXT4_I(inode)->i_projid = kprojid;
+ inode->i_ctime = current_time(inode);
++ inode_inc_iversion(inode);
+ out_dirty:
+ rc = ext4_mark_iloc_dirty(handle, inode, &iloc);
+ if (!err)
+@@ -1178,6 +1181,7 @@ static long __ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ err = ext4_reserve_inode_write(handle, inode, &iloc);
+ if (err == 0) {
+ inode->i_ctime = current_time(inode);
++ inode_inc_iversion(inode);
+ inode->i_generation = generation;
+ err = ext4_mark_iloc_dirty(handle, inode, &iloc);
+ }
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 3a31b662f6619..4183a4cb4a21e 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -85,15 +85,20 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ return bh;
+ inode->i_size += inode->i_sb->s_blocksize;
+ EXT4_I(inode)->i_disksize = inode->i_size;
++ err = ext4_mark_inode_dirty(handle, inode);
++ if (err)
++ goto out;
+ BUFFER_TRACE(bh, "get_write_access");
+ err = ext4_journal_get_write_access(handle, inode->i_sb, bh,
+ EXT4_JTR_NONE);
+- if (err) {
+- brelse(bh);
+- ext4_std_error(inode->i_sb, err);
+- return ERR_PTR(err);
+- }
++ if (err)
++ goto out;
+ return bh;
++
++out:
++ brelse(bh);
++ ext4_std_error(inode->i_sb, err);
++ return ERR_PTR(err);
+ }
+
+ static int ext4_dx_csum_verify(struct inode *inode,
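
ext4_append() now funnels both failure cases through a single out: label
that releases the buffer and reports the error exactly once, instead of
duplicating the cleanup inline. The goto-cleanup idiom in isolation, with
hypothetical helpers (get_access() is forced to fail so the error path
actually runs):

#include <stdio.h>
#include <stdlib.h>

struct buf { char data[64]; };

static int mark_dirty(struct buf *b) { (void)b; return 0; }
static int get_access(struct buf *b) { (void)b; return -5; /* fail */ }

static struct buf *append_block(void)
{
        struct buf *bh = malloc(sizeof(*bh));
        int err;

        if (!bh)
                return NULL;
        err = mark_dirty(bh);
        if (err)
                goto out;
        err = get_access(bh);
        if (err)
                goto out;
        return bh;

out:
        free(bh);               /* the brelse() equivalent, done once */
        fprintf(stderr, "append failed: %d\n", err);
        return NULL;
}

int main(void)
{
        return append_block() ? 0 : 1;
}
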
+@@ -126,7 +131,7 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ struct ext4_dir_entry *dirent;
+ int is_dx_block = 0;
+
+- if (block >= inode->i_size) {
++ if (block >= inode->i_size >> inode->i_blkbits) {
+ ext4_error_inode(inode, func, line, block,
+ "Attempting to read directory block (%u) that is past i_size (%llu)",
+ block, inode->i_size);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index cb5a64293881e..b2d84bd112a4a 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -2100,7 +2100,7 @@ retry:
+ goto out;
+ }
+
+- if (ext4_blocks_count(es) == n_blocks_count)
++ if (ext4_blocks_count(es) == n_blocks_count && n_blocks_count_retry == 0)
+ goto out;
+
+ err = ext4_alloc_flex_bg_array(sb, n_group + 1);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 845f2f8aee5f9..5cacd513d0dff 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -205,19 +205,12 @@ int ext4_read_bh(struct buffer_head *bh, int op_flags, bh_end_io_t *end_io)
+
+ int ext4_read_bh_lock(struct buffer_head *bh, int op_flags, bool wait)
+ {
+- if (trylock_buffer(bh)) {
+- if (wait)
+- return ext4_read_bh(bh, op_flags, NULL);
++ lock_buffer(bh);
++ if (!wait) {
+ ext4_read_bh_nowait(bh, op_flags, NULL);
+ return 0;
+ }
+- if (wait) {
+- wait_on_buffer(bh);
+- if (buffer_uptodate(bh))
+- return 0;
+- return -EIO;
+- }
+- return 0;
++ return ext4_read_bh(bh, op_flags, NULL);
+ }
+
+ /*
+@@ -264,7 +257,8 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
+ struct buffer_head *bh = sb_getblk_gfp(sb, block, 0);
+
+ if (likely(bh)) {
+- ext4_read_bh_lock(bh, REQ_RAHEAD, false);
++ if (trylock_buffer(bh))
++ ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
+ brelse(bh);
+ }
+ }
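
The pair of ext4 hunks above splits two callers cleanly: paths that need
the data now always block in lock_buffer(), while the purely opportunistic
readahead path switches to trylock_buffer() plus a nowait read, simply
skipping buffers somebody else holds. That best-effort split in miniature,
with a pthread mutex standing in for the buffer lock:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

static void read_block_sync(void)       /* must have the data */
{
        pthread_mutex_lock(&buf_lock);  /* willing to wait */
        /* ... submit I/O and wait for completion ... */
        pthread_mutex_unlock(&buf_lock);
}

static void readahead_block(void)       /* best effort only */
{
        if (pthread_mutex_trylock(&buf_lock) != 0)
                return;                 /* busy: not worth waiting for */
        /* ... submit async I/O ... */
        pthread_mutex_unlock(&buf_lock);
}

int main(void)
{
        readahead_block();
        read_block_sync();
        puts("done");
        return 0;
}
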
+@@ -1585,7 +1579,7 @@ enum {
+ Opt_inlinecrypt,
+ Opt_usrjquota, Opt_grpjquota, Opt_quota,
+ Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
+- Opt_usrquota, Opt_grpquota, Opt_prjquota, Opt_i_version,
++ Opt_usrquota, Opt_grpquota, Opt_prjquota,
+ Opt_dax, Opt_dax_always, Opt_dax_inode, Opt_dax_never,
+ Opt_stripe, Opt_delalloc, Opt_nodelalloc, Opt_warn_on_error,
+ Opt_nowarn_on_error, Opt_mblk_io_submit, Opt_debug_want_extra_isize,
+@@ -1694,7 +1688,7 @@ static const struct fs_parameter_spec ext4_param_specs[] = {
+ fsparam_flag ("barrier", Opt_barrier),
+ fsparam_u32 ("barrier", Opt_barrier),
+ fsparam_flag ("nobarrier", Opt_nobarrier),
+- fsparam_flag ("i_version", Opt_i_version),
++ fsparam_flag ("i_version", Opt_removed),
+ fsparam_flag ("dax", Opt_dax),
+ fsparam_enum ("dax", Opt_dax_type, ext4_param_dax),
+ fsparam_u32 ("stripe", Opt_stripe),
+@@ -2140,11 +2134,6 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ case Opt_abort:
+ ctx_set_mount_flag(ctx, EXT4_MF_FS_ABORTED);
+ return 0;
+- case Opt_i_version:
+- ext4_msg(NULL, KERN_WARNING, deprecated_msg, param->key, "5.20");
+- ext4_msg(NULL, KERN_WARNING, "Use iversion instead\n");
+- ctx_set_flags(ctx, SB_I_VERSION);
+- return 0;
+ case Opt_inlinecrypt:
+ #ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ ctx_set_flags(ctx, SB_INLINECRYPT);
+@@ -2814,14 +2803,6 @@ static void ext4_apply_options(struct fs_context *fc, struct super_block *sb)
+ sb->s_flags &= ~ctx->mask_s_flags;
+ sb->s_flags |= ctx->vals_s_flags;
+
+- /*
+- * i_version differs from common mount option iversion so we have
+- * to let vfs know that it was set, otherwise it would get cleared
+- * on remount
+- */
+- if (ctx->mask_s_flags & SB_I_VERSION)
+- fc->sb_flags |= SB_I_VERSION;
+-
+ #define APPLY(X) ({ if (ctx->spec & EXT4_SPEC_##X) sbi->X = ctx->X; })
+ APPLY(s_commit_interval);
+ APPLY(s_stripe);
+@@ -2970,8 +2951,6 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
+ SEQ_OPTS_PRINT("min_batch_time=%u", sbi->s_min_batch_time);
+ if (nodefs || sbi->s_max_batch_time != EXT4_DEF_MAX_BATCH_TIME)
+ SEQ_OPTS_PRINT("max_batch_time=%u", sbi->s_max_batch_time);
+- if (sb->s_flags & SB_I_VERSION)
+- SEQ_OPTS_PUTS("i_version");
+ if (nodefs || sbi->s_stripe)
+ SEQ_OPTS_PRINT("stripe=%lu", sbi->s_stripe);
+ if (nodefs || EXT4_MOUNT_DATA_FLAGS &
+@@ -3758,6 +3737,7 @@ static int ext4_lazyinit_thread(void *arg)
+ unsigned long next_wakeup, cur;
+
+ BUG_ON(NULL == eli);
++ set_freezable();
+
+ cont_thread:
+ while (true) {
+@@ -3973,9 +3953,9 @@ int ext4_register_li_request(struct super_block *sb,
+ goto out;
+ }
+
+- if (test_opt(sb, NO_PREFETCH_BLOCK_BITMAPS) &&
+- (first_not_zeroed == ngroups || sb_rdonly(sb) ||
+- !test_opt(sb, INIT_INODE_TABLE)))
++ if (sb_rdonly(sb) ||
++ (test_opt(sb, NO_PREFETCH_BLOCK_BITMAPS) &&
++ (first_not_zeroed == ngroups || !test_opt(sb, INIT_INODE_TABLE))))
+ goto out;
+
+ elr = ext4_li_request_new(sb, first_not_zeroed);
+@@ -4630,6 +4610,9 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+
++ /* i_version is always enabled now */
++ sb->s_flags |= SB_I_VERSION;
++
+ if (le32_to_cpu(es->s_rev_level) == EXT4_GOOD_OLD_REV &&
+ (ext4_has_compat_features(sb) ||
+ ext4_has_ro_compat_features(sb) ||
+@@ -6643,7 +6626,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+ handle_t *handle;
+
+ /* Data block + inode block */
+- handle = ext4_journal_start(d_inode(sb->s_root), EXT4_HT_QUOTA, 2);
++ handle = ext4_journal_start_sb(sb, EXT4_HT_QUOTA, 2);
+ if (IS_ERR(handle))
+ return PTR_ERR(handle);
+ ret = dquot_commit_info(sb, type);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 533216e80fa2b..36d6ba7190b6d 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2412,6 +2412,7 @@ retry_inode:
+ if (!error) {
+ ext4_xattr_update_super_block(handle, inode->i_sb);
+ inode->i_ctime = current_time(inode);
++ inode_inc_iversion(inode);
+ if (!value)
+ no_expand = 0;
+ error = ext4_mark_iloc_dirty(handle, inode, &is.iloc);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 6d8b2bf14de0f..5c77ae07150d3 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -140,7 +140,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ unsigned int segno, offset;
+ bool exist;
+
+- if (type != DATA_GENERIC_ENHANCE && type != DATA_GENERIC_ENHANCE_READ)
++ if (type == DATA_GENERIC)
+ return true;
+
+ segno = GET_SEGNO(sbi, blkaddr);
+@@ -148,6 +148,13 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ se = get_seg_entry(sbi, segno);
+
+ exist = f2fs_test_bit(offset, se->cur_valid_map);
++ if (exist && type == DATA_GENERIC_ENHANCE_UPDATE) {
++ f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
++ blkaddr, exist);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ return exist;
++ }
++
+ if (!exist && type == DATA_GENERIC_ENHANCE) {
+ f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+ blkaddr, exist);
+@@ -185,6 +192,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ case DATA_GENERIC:
+ case DATA_GENERIC_ENHANCE:
+ case DATA_GENERIC_ENHANCE_READ:
++ case DATA_GENERIC_ENHANCE_UPDATE:
+ if (unlikely(blkaddr >= MAX_BLKADDR(sbi) ||
+ blkaddr < MAIN_BLKADDR(sbi))) {
+ f2fs_warn(sbi, "access invalid blkaddr:%u",
+@@ -1055,7 +1063,8 @@ void f2fs_remove_dirty_inode(struct inode *inode)
+ spin_unlock(&sbi->inode_lock[type]);
+ }
+
+-int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
++int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type,
++ bool from_cp)
+ {
+ struct list_head *head;
+ struct inode *inode;
+@@ -1090,11 +1099,15 @@ retry:
+ if (inode) {
+ unsigned long cur_ino = inode->i_ino;
+
+- F2FS_I(inode)->cp_task = current;
++ if (from_cp)
++ F2FS_I(inode)->cp_task = current;
++ F2FS_I(inode)->wb_task = current;
+
+ filemap_fdatawrite(inode->i_mapping);
+
+- F2FS_I(inode)->cp_task = NULL;
++ F2FS_I(inode)->wb_task = NULL;
++ if (from_cp)
++ F2FS_I(inode)->cp_task = NULL;
+
+ iput(inode);
+ /* We need to give cpu to another writers. */
+@@ -1223,7 +1236,7 @@ retry_flush_dents:
+ /* write all the dirty dentry pages */
+ if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
+ f2fs_unlock_all(sbi);
+- err = f2fs_sync_dirty_inodes(sbi, DIR_INODE);
++ err = f2fs_sync_dirty_inodes(sbi, DIR_INODE, true);
+ if (err)
+ return err;
+ cond_resched();
+@@ -1894,15 +1907,27 @@ int f2fs_start_ckpt_thread(struct f2fs_sb_info *sbi)
+ void f2fs_stop_ckpt_thread(struct f2fs_sb_info *sbi)
+ {
+ struct ckpt_req_control *cprc = &sbi->cprc_info;
++ struct task_struct *ckpt_task;
+
+- if (cprc->f2fs_issue_ckpt) {
+- struct task_struct *ckpt_task = cprc->f2fs_issue_ckpt;
++ if (!cprc->f2fs_issue_ckpt)
++ return;
+
+- cprc->f2fs_issue_ckpt = NULL;
+- kthread_stop(ckpt_task);
++ ckpt_task = cprc->f2fs_issue_ckpt;
++ cprc->f2fs_issue_ckpt = NULL;
++ kthread_stop(ckpt_task);
+
+- flush_remained_ckpt_reqs(sbi, NULL);
+- }
++ f2fs_flush_ckpt_thread(sbi);
++}
++
++void f2fs_flush_ckpt_thread(struct f2fs_sb_info *sbi)
++{
++ struct ckpt_req_control *cprc = &sbi->cprc_info;
++
++ flush_remained_ckpt_reqs(sbi, NULL);
++
++ /* Let's wait for the previously dispatched checkpoint. */
++ while (atomic_read(&cprc->queued_ckpt))
++ io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+ }
+
+ void f2fs_init_ckpt_req_control(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index f2a2726134779..d3768115e3b8d 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2843,7 +2843,7 @@ out:
+ }
+ unlock_page(page);
+ if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) &&
+- !F2FS_I(inode)->cp_task && allow_balance)
++ !F2FS_I(inode)->wb_task && allow_balance)
+ f2fs_balance_fs(sbi, need_balance_fs);
+
+ if (unlikely(f2fs_cp_error(sbi))) {
+@@ -3141,7 +3141,7 @@ static inline bool __should_serialize_io(struct inode *inode,
+ struct writeback_control *wbc)
+ {
+ /* to avoid deadlock in path of data flush */
+- if (F2FS_I(inode)->cp_task)
++ if (F2FS_I(inode)->wb_task)
+ return false;
+
+ if (!S_ISREG(inode->i_mode))
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 866e72b29bd5a..761fd42c93f23 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -804,9 +804,8 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ if (!f2fs_may_extent_tree(inode))
+ return;
+
+- set_inode_flag(inode, FI_NO_EXTENT);
+-
+ write_lock(&et->lock);
++ set_inode_flag(inode, FI_NO_EXTENT);
+ __free_extent_tree(sbi, et);
+ if (et->largest.len) {
+ et->largest.len = 0;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 7006fa7dd5cb8..7896cbadbcd75 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -266,6 +266,10 @@ enum {
+ * condition of read on truncated area
+ * by extent_cache
+ */
++ DATA_GENERIC_ENHANCE_UPDATE, /*
++ * strong check on range and segment
++ * bitmap for update case
++ */
+ META_GENERIC,
+ };
+
+@@ -780,6 +784,7 @@ struct f2fs_inode_info {
+ unsigned int clevel; /* maximum level of given file name */
+ struct task_struct *task; /* lookup and create consistency */
+ struct task_struct *cp_task; /* separate cp/wb IO stats*/
++ struct task_struct *wb_task; /* indicate inode is in context of writeback */
+ nid_t i_xattr_nid; /* node id that contains xattrs */
+ loff_t last_disk_size; /* lastly written file size */
+ spinlock_t i_size_lock; /* protect last_disk_size */
+@@ -3676,6 +3681,7 @@ static inline bool f2fs_need_rand_seg(struct f2fs_sb_info *sbi)
+ * checkpoint.c
+ */
+ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io);
++void f2fs_flush_ckpt_thread(struct f2fs_sb_info *sbi);
+ struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+ struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+ struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index);
+@@ -3705,7 +3711,8 @@ int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi);
+ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi);
+ void f2fs_update_dirty_folio(struct inode *inode, struct folio *folio);
+ void f2fs_remove_dirty_inode(struct inode *inode);
+-int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
++int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type,
++ bool from_cp);
+ void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type);
+ u64 f2fs_get_sectors_written(struct f2fs_sb_info *sbi);
+ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index d5fb426e07474..e88ed284e05c0 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -97,14 +97,10 @@ static int gc_thread_func(void *data)
+ */
+ if (sbi->gc_mode == GC_URGENT_HIGH) {
+ spin_lock(&sbi->gc_urgent_high_lock);
+- if (sbi->gc_urgent_high_limited) {
+- if (!sbi->gc_urgent_high_remaining) {
+- sbi->gc_urgent_high_limited = false;
+- spin_unlock(&sbi->gc_urgent_high_lock);
+- sbi->gc_mode = GC_NORMAL;
+- continue;
+- }
+- sbi->gc_urgent_high_remaining--;
++ if (sbi->gc_urgent_high_limited &&
++ !sbi->gc_urgent_high_remaining--) {
++ sbi->gc_urgent_high_limited = false;
++ sbi->gc_mode = GC_NORMAL;
+ }
+ spin_unlock(&sbi->gc_urgent_high_lock);
+ }
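
The rewritten gc condition leans on two C guarantees: && short-circuits, so
gc_urgent_high_remaining-- is only evaluated while limiting is active, and
!x-- tests the pre-decrement value, firing exactly when the budget is
already exhausted (the unsigned wrap afterwards is harmless because limited
is cleared first and the counter is never read again). A small demo of the
same condition:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        bool limited = true;
        unsigned int remaining = 2;     /* budget: two urgent passes */

        for (int round = 1; round <= 4; round++) {
                if (limited && !remaining--) {
                        limited = false;        /* budget used up */
                        printf("round %d: switch to normal mode\n", round);
                        continue;
                }
                printf("round %d: %s\n", round,
                       limited ? "urgent pass" : "normal pass");
        }
        return 0;
}
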
+@@ -1079,7 +1075,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ {
+ struct page *node_page;
+ nid_t nid;
+- unsigned int ofs_in_node;
++ unsigned int ofs_in_node, max_addrs;
+ block_t source_blkaddr;
+
+ nid = le32_to_cpu(sum->nid);
+@@ -1105,6 +1101,14 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ return false;
+ }
+
++ max_addrs = IS_INODE(node_page) ? DEF_ADDRS_PER_INODE :
++ DEF_ADDRS_PER_BLOCK;
++ if (ofs_in_node >= max_addrs) {
++ f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%u, nid:%u, max:%u",
++ ofs_in_node, dni->ino, dni->nid, max_addrs);
++ return false;
++ }
++
+ *nofs = ofs_of_node(node_page);
+ source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+ f2fs_put_page(node_page, 1);
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 3cb7f8a43b4d7..4907bb084fd66 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -474,7 +474,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ struct dnode_of_data tdn = *dn;
+ nid_t ino, nid;
+ struct inode *inode;
+- unsigned int offset;
++ unsigned int offset, ofs_in_node, max_addrs;
+ block_t bidx;
+ int i;
+
+@@ -501,15 +501,24 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ got_it:
+ /* Use the locked dnode page and inode */
+ nid = le32_to_cpu(sum.nid);
++ ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++
++ max_addrs = ADDRS_PER_PAGE(dn->node_page, dn->inode);
++ if (ofs_in_node >= max_addrs) {
++ f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%lu, nid:%u, max:%u",
++ ofs_in_node, dn->inode->i_ino, nid, max_addrs);
++ return -EFSCORRUPTED;
++ }
++
+ if (dn->inode->i_ino == nid) {
+ tdn.nid = nid;
+ if (!dn->inode_page_locked)
+ lock_page(dn->inode_page);
+ tdn.node_page = dn->inode_page;
+- tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++ tdn.ofs_in_node = ofs_in_node;
+ goto truncate_out;
+ } else if (dn->nid == nid) {
+- tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
++ tdn.ofs_in_node = ofs_in_node;
+ goto truncate_out;
+ }
+
+@@ -698,6 +707,14 @@ retry_prev:
+ goto err;
+ }
+
++ if (f2fs_is_valid_blkaddr(sbi, dest,
++ DATA_GENERIC_ENHANCE_UPDATE)) {
++ f2fs_err(sbi, "Inconsistent dest blkaddr:%u, ino:%lu, ofs:%u",
++ dest, inode->i_ino, dn.ofs_in_node);
++ err = -EFSCORRUPTED;
++ goto err;
++ }
++
+ /* write dummy data page */
+ f2fs_replace_block(sbi, &dn, src, dest,
+ ni.version, false, false);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 52df19a0638b1..b740ff81024e4 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -469,7 +469,7 @@ do_sync:
+ mutex_lock(&sbi->flush_lock);
+
+ blk_start_plug(&plug);
+- f2fs_sync_dirty_inodes(sbi, FILE_INODE);
++ f2fs_sync_dirty_inodes(sbi, FILE_INODE, false);
+ blk_finish_plug(&plug);
+
+ mutex_unlock(&sbi->flush_lock);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 37221e94e5eff..77e190a52565c 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -298,10 +298,10 @@ static void f2fs_destroy_casefold_cache(void) { }
+
+ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
+ {
+- block_t limit = min((sbi->user_block_count << 1) / 1000,
++ block_t limit = min((sbi->user_block_count >> 3),
+ sbi->user_block_count - sbi->reserved_blocks);
+
+- /* limit is 0.2% */
++ /* limit is 12.5% */
+ if (test_opt(sbi, RESERVE_ROOT) &&
+ F2FS_OPTION(sbi).root_reserved_blocks > limit) {
+ F2FS_OPTION(sbi).root_reserved_blocks = limit;
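
The new ceiling is easy to check numerically: (count << 1) / 1000 caps
root-reserved blocks at 0.2% of the user blocks, while count >> 3 caps them
at 12.5%, matching the updated comment. For a filesystem with 1,000,000
user blocks that moves the cap from 2,000 to 125,000 blocks:

#include <stdio.h>

int main(void)
{
        unsigned long long count = 1000000;

        printf("old limit: %llu blocks (0.2%%)\n", (count << 1) / 1000);
        printf("new limit: %llu blocks (12.5%%)\n", count >> 3);
        return 0;
}
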
+@@ -1637,9 +1637,8 @@ static int f2fs_freeze(struct super_block *sb)
+ if (is_sbi_flag_set(F2FS_SB(sb), SBI_IS_DIRTY))
+ return -EINVAL;
+
+- /* ensure no checkpoint required */
+- if (!llist_empty(&F2FS_SB(sb)->cprc_info.issue_list))
+- return -EINVAL;
++ /* Let's flush checkpoints and stop the thread. */
++ f2fs_flush_ckpt_thread(F2FS_SB(sb));
+
+ /* to avoid deadlock on f2fs_evict_inode->SB_FREEZE_FS */
+ set_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+@@ -2146,6 +2145,9 @@ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+ f2fs_up_write(&sbi->gc_lock);
+
+ f2fs_sync_fs(sbi->sb, 1);
++
++ /* Let's ensure there's no pending checkpoint anymore */
++ f2fs_flush_ckpt_thread(sbi);
+ }
+
+ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+@@ -2311,6 +2313,9 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ f2fs_stop_ckpt_thread(sbi);
+ need_restart_ckpt = true;
+ } else {
++ /* Flush the previous checkpoint, if one exists. */
++ f2fs_flush_ckpt_thread(sbi);
++
+ err = f2fs_start_ckpt_thread(sbi);
+ if (err) {
+ f2fs_err(sbi,
+diff --git a/fs/file_table.c b/fs/file_table.c
+index 5424e3a8df5fa..543a501b02470 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -321,12 +321,7 @@ static void __fput(struct file *file)
+ }
+ fops_put(file->f_op);
+ put_pid(file->f_owner.pid);
+- if ((mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ)
+- i_readcount_dec(inode);
+- if (mode & FMODE_WRITER) {
+- put_write_access(inode);
+- __mnt_drop_write(mnt);
+- }
++ put_file_access(file);
+ dput(dentry);
+ if (unlikely(mode & FMODE_NEED_UNMOUNT))
+ dissolve_on_fput(mnt);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 08a1993ab7fd3..443f83382b9bd 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1718,9 +1718,14 @@ static int writeback_single_inode(struct inode *inode,
+ */
+ if (!(inode->i_state & I_DIRTY_ALL))
+ inode_cgwb_move_to_attached(inode, wb);
+- else if (!(inode->i_state & I_SYNC_QUEUED) &&
+- (inode->i_state & I_DIRTY))
+- redirty_tail_locked(inode, wb);
++ else if (!(inode->i_state & I_SYNC_QUEUED)) {
++ if ((inode->i_state & I_DIRTY))
++ redirty_tail_locked(inode, wb);
++ else if (inode->i_state & I_DIRTY_TIME) {
++ inode->dirtied_when = jiffies;
++ inode_io_list_move_locked(inode, wb, &wb->b_dirty_time);
++ }
++ }
+
+ spin_unlock(&wb->list_lock);
+ inode_sync_complete(inode);
+@@ -2369,6 +2374,20 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ trace_writeback_mark_inode_dirty(inode, flags);
+
+ if (flags & I_DIRTY_INODE) {
++ /*
++ * Inode timestamp update will piggyback on this dirtying.
++ * We tell ->dirty_inode callback that timestamps need to
++ * be updated by setting I_DIRTY_TIME in flags.
++ */
++ if (inode->i_state & I_DIRTY_TIME) {
++ spin_lock(&inode->i_lock);
++ if (inode->i_state & I_DIRTY_TIME) {
++ inode->i_state &= ~I_DIRTY_TIME;
++ flags |= I_DIRTY_TIME;
++ }
++ spin_unlock(&inode->i_lock);
++ }
++
+ /*
+ * Notify the filesystem about the inode being dirtied, so that
+ * (if needed) it can update on-disk fields and journal the
+@@ -2378,7 +2397,8 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ */
+ trace_writeback_dirty_inode_start(inode, flags);
+ if (sb->s_op->dirty_inode)
+- sb->s_op->dirty_inode(inode, flags & I_DIRTY_INODE);
++ sb->s_op->dirty_inode(inode,
++ flags & (I_DIRTY_INODE | I_DIRTY_TIME));
+ trace_writeback_dirty_inode(inode, flags);
+
+ /* I_DIRTY_INODE supersedes I_DIRTY_TIME. */
+@@ -2399,21 +2419,15 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ */
+ smp_mb();
+
+- if (((inode->i_state & flags) == flags) ||
+- (dirtytime && (inode->i_state & I_DIRTY_INODE)))
++ if ((inode->i_state & flags) == flags)
+ return;
+
+ spin_lock(&inode->i_lock);
+- if (dirtytime && (inode->i_state & I_DIRTY_INODE))
+- goto out_unlock_inode;
+ if ((inode->i_state & flags) != flags) {
+ const int was_dirty = inode->i_state & I_DIRTY;
+
+ inode_attach_wb(inode, NULL);
+
+- /* I_DIRTY_INODE supersedes I_DIRTY_TIME. */
+- if (flags & I_DIRTY_INODE)
+- inode->i_state &= ~I_DIRTY_TIME;
+ inode->i_state |= flags;
+
+ /*
+@@ -2486,7 +2500,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ out_unlock:
+ if (wb)
+ spin_unlock(&wb->list_lock);
+-out_unlock_inode:
+ spin_unlock(&inode->i_lock);
+ }
+ EXPORT_SYMBOL(__mark_inode_dirty);
+diff --git a/fs/internal.h b/fs/internal.h
+index 3e206d3e317c4..4372d67a37533 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -102,6 +102,16 @@ extern void chroot_fs_refs(const struct path *, const struct path *);
+ extern struct file *alloc_empty_file(int, const struct cred *);
+ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
+
++static inline void put_file_access(struct file *file)
++{
++ if ((file->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ) {
++ i_readcount_dec(file->f_inode);
++ } else if (file->f_mode & FMODE_WRITER) {
++ put_write_access(file->f_inode);
++ __mnt_drop_write(file->f_path.mnt);
++ }
++}
++
+ /*
+ * super.c
+ */
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index d2a9f699e17ed..f2f102dd5e6b3 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1412,7 +1412,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ if (!count)
+ folio_end_writeback(folio);
+ done:
+- mapping_set_error(folio->mapping, error);
++ mapping_set_error(inode->i_mapping, error);
+ return error;
+ }
+
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index af1a9191368cb..56057682402be 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -570,7 +570,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ journal->j_running_transaction = NULL;
+ start_time = ktime_get();
+ commit_transaction->t_log_start = journal->j_head;
+- wake_up(&journal->j_wait_transaction_locked);
++ wake_up_all(&journal->j_wait_transaction_locked);
+ write_unlock(&journal->j_state_lock);
+
+ jbd_debug(3, "JBD2: commit phase 2a\n");
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index c0cbeeaec2d1a..e4c1994f01ad6 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -926,10 +926,16 @@ int jbd2_fc_wait_bufs(journal_t *journal, int num_blks)
+ for (i = j_fc_off - 1; i >= j_fc_off - num_blks; i--) {
+ bh = journal->j_fc_wbuf[i];
+ wait_on_buffer(bh);
++ /*
++ * Update j_fc_off so jbd2_fc_release_bufs can release the
++ * remaining buffer heads.
++ */
++ if (unlikely(!buffer_uptodate(bh))) {
++ journal->j_fc_off = i + 1;
++ return -EIO;
++ }
+ put_bh(bh);
+ journal->j_fc_wbuf[i] = NULL;
+- if (unlikely(!buffer_uptodate(bh)))
+- return -EIO;
+ }
+
+ return 0;
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index 8ca3527189f87..3c5dd010e39d2 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -256,6 +256,7 @@ static int fc_do_one_pass(journal_t *journal,
+ err = journal->j_fc_replay_callback(journal, bh, pass,
+ next_fc_block - journal->j_fc_first,
+ expected_commit_id);
++ brelse(bh);
+ next_fc_block++;
+ if (err < 0 || err == JBD2_FC_REPLAY_STOP)
+ break;
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index e0377f558eb14..f3ab28cf4a65d 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -168,7 +168,7 @@ static void wait_transaction_locked(journal_t *journal)
+ int need_to_start;
+ tid_t tid = journal->j_running_transaction->t_tid;
+
+- prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
++ prepare_to_wait_exclusive(&journal->j_wait_transaction_locked, &wait,
+ TASK_UNINTERRUPTIBLE);
+ need_to_start = !tid_geq(journal->j_commit_request, tid);
+ read_unlock(&journal->j_state_lock);
+@@ -194,7 +194,7 @@ static void wait_transaction_switching(journal_t *journal)
+ read_unlock(&journal->j_state_lock);
+ return;
+ }
+- prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
++ prepare_to_wait_exclusive(&journal->j_wait_transaction_locked, &wait,
+ TASK_UNINTERRUPTIBLE);
+ read_unlock(&journal->j_state_lock);
+ /*
+@@ -920,7 +920,7 @@ void jbd2_journal_unlock_updates (journal_t *journal)
+ write_lock(&journal->j_state_lock);
+ --journal->j_barrier_count;
+ write_unlock(&journal->j_state_lock);
+- wake_up(&journal->j_wait_transaction_locked);
++ wake_up_all(&journal->j_wait_transaction_locked);
+ }
+
+ static void warn_dirty_buffer(struct buffer_head *bh)
+diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c
+index 4cd03d661df0b..9996e8dc421b4 100644
+--- a/fs/ksmbd/server.c
++++ b/fs/ksmbd/server.c
+@@ -235,10 +235,8 @@ send:
+ if (work->sess && work->sess->enc && work->encrypted &&
+ conn->ops->encrypt_resp) {
+ rc = conn->ops->encrypt_resp(work);
+- if (rc < 0) {
++ if (rc < 0)
+ conn->ops->set_rsp_status(work, STATUS_DATA_ERROR);
+- goto send;
+- }
+ }
+
+ ksmbd_conn_write(work);
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 35f5ea1c9dfcd..466256bf7ba42 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -3798,11 +3798,6 @@ static int __query_dir(struct dir_context *ctx, const char *name, int namlen,
+ return 0;
+ }
+
+-static void restart_ctx(struct dir_context *ctx)
+-{
+- ctx->pos = 0;
+-}
+-
+ static int verify_info_level(int info_level)
+ {
+ switch (info_level) {
+@@ -3911,7 +3906,6 @@ int smb2_query_dir(struct ksmbd_work *work)
+ if (srch_flag & SMB2_REOPEN || srch_flag & SMB2_RESTART_SCANS) {
+ ksmbd_debug(SMB, "Restart directory scan\n");
+ generic_file_llseek(dir_fp->filp, 0, SEEK_SET);
+- restart_ctx(&dir_fp->readdir_data.ctx);
+ }
+
+ memset(&d_info, 0, sizeof(struct ksmbd_dir_info));
+@@ -3958,11 +3952,9 @@ int smb2_query_dir(struct ksmbd_work *work)
+ */
+ if (!d_info.out_buf_len && !d_info.num_entry)
+ goto no_buf_len;
+- if (rc == 0)
+- restart_ctx(&dir_fp->readdir_data.ctx);
+- if (rc == -ENOSPC)
++ if (rc > 0 || rc == -ENOSPC)
+ rc = 0;
+- if (rc)
++ else if (rc)
+ goto err_out;
+
+ d_info.wptr = d_info.rptr;
+@@ -4019,6 +4011,8 @@ err_out2:
+ rsp->hdr.Status = STATUS_NO_MEMORY;
+ else if (rc == -EFAULT)
+ rsp->hdr.Status = STATUS_INVALID_INFO_CLASS;
++ else if (rc == -EIO)
++ rsp->hdr.Status = STATUS_FILE_CORRUPT_ERROR;
+ if (!rsp->hdr.Status)
+ rsp->hdr.Status = STATUS_UNEXPECTED_IO_ERROR;
+
+@@ -7633,11 +7627,16 @@ int smb2_ioctl(struct ksmbd_work *work)
+ goto out;
+ }
+
+- if (in_buf_len < sizeof(struct validate_negotiate_info_req))
+- return -EINVAL;
++ if (in_buf_len < offsetof(struct validate_negotiate_info_req,
++ Dialects)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+- if (out_buf_len < sizeof(struct validate_negotiate_info_rsp))
+- return -EINVAL;
++ if (out_buf_len < sizeof(struct validate_negotiate_info_rsp)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ ret = fsctl_validate_negotiate_info(conn,
+ (struct validate_negotiate_info_req *)&req->Buffer[0],
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index 7f8ab14fb8ec1..d96da872d70a1 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -4,6 +4,8 @@
+ * Copyright (C) 2018 Namjae Jeon <linkinjeon@kernel.org>
+ */
+
++#include <linux/user_namespace.h>
++
+ #include "smb_common.h"
+ #include "server.h"
+ #include "misc.h"
+@@ -625,8 +627,8 @@ int ksmbd_override_fsids(struct ksmbd_work *work)
+ if (!cred)
+ return -ENOMEM;
+
+- cred->fsuid = make_kuid(current_user_ns(), uid);
+- cred->fsgid = make_kgid(current_user_ns(), gid);
++ cred->fsuid = make_kuid(&init_user_ns, uid);
++ cred->fsgid = make_kgid(&init_user_ns, gid);
+
+ gi = groups_alloc(0);
+ if (!gi) {
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 981a3a7a6e160..57854ca022d18 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -147,7 +147,6 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
+ {
+ struct nfsd3_readargs *argp = rqstp->rq_argp;
+ struct nfsd3_readres *resp = rqstp->rq_resp;
+- u32 max_blocksize = svc_max_payload(rqstp);
+ unsigned int len;
+ int v;
+
+@@ -156,7 +155,8 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
+ (unsigned long) argp->count,
+ (unsigned long long) argp->offset);
+
+- argp->count = min_t(u32, argp->count, max_blocksize);
++ argp->count = min_t(u32, argp->count, svc_max_payload(rqstp));
++ argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
+ if (argp->offset > (u64)OFFSET_MAX)
+ argp->offset = (u64)OFFSET_MAX;
+ if (argp->offset + argp->count > (u64)OFFSET_MAX)
+@@ -550,13 +550,14 @@ static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp,
+ {
+ struct xdr_buf *buf = &resp->dirlist;
+ struct xdr_stream *xdr = &resp->xdr;
+-
+- count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
++ unsigned int sendbuf = min_t(unsigned int, rqstp->rq_res.buflen,
++ svc_max_payload(rqstp));
+
+ memset(buf, 0, sizeof(*buf));
+
+ /* Reserve room for the NULL ptr & eof flag (-2 words) */
+- buf->buflen = count - XDR_UNIT * 2;
++ buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), sendbuf);
++ buf->buflen -= XDR_UNIT * 2;
+ buf->pages = rqstp->rq_next_page;
+ rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 3895eb52d2b10..c12e66cc58a27 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -2663,9 +2663,6 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ status = nfserr_minor_vers_mismatch;
+ if (nfsd_minorversion(nn, args->minorversion, NFSD_TEST) <= 0)
+ goto out;
+- status = nfserr_resource;
+- if (args->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+- goto out;
+
+ status = nfs41_check_op_ordering(args);
+ if (status) {
+@@ -2678,10 +2675,20 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+
+ rqstp->rq_lease_breaker = (void **)&cstate->clp;
+
+- trace_nfsd_compound(rqstp, args->opcnt);
++ trace_nfsd_compound(rqstp, args->client_opcnt);
+ while (!status && resp->opcnt < args->opcnt) {
+ op = &args->ops[resp->opcnt++];
+
++ if (unlikely(resp->opcnt == NFSD_MAX_OPS_PER_COMPOUND)) {
++ /* If there are still more operations to process,
++ * stop here and report NFS4ERR_RESOURCE. */
++ if (cstate->minorversion == 0 &&
++ args->client_opcnt > resp->opcnt) {
++ op->status = nfserr_resource;
++ goto encode_op;
++ }
++ }
++
+ /*
+ * The XDR decode routines may have pre-set op->status;
+ * for example, if there is a miscellaneous XDR error
+@@ -2757,8 +2764,8 @@ encode_op:
+ status = op->status;
+ }
+
+- trace_nfsd_compound_status(args->opcnt, resp->opcnt, status,
+- nfsd4_op_name(op->opnum));
++ trace_nfsd_compound_status(args->client_opcnt, resp->opcnt,
++ status, nfsd4_op_name(op->opnum));
+
+ nfsd4_cstate_clear_replay(cstate);
+ nfsd4_increment_op_stats(op->opnum);
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index c634483d85d2a..8f24485e0f04f 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -815,8 +815,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ princhash.data = memdup_user(
+ &ci->cc_princhash.cp_data,
+ princhashlen);
+- if (IS_ERR_OR_NULL(princhash.data))
++ if (IS_ERR_OR_NULL(princhash.data)) {
++ kfree(name.data);
+ return -EFAULT;
++ }
+ princhash.len = princhashlen;
+ } else
+ princhash.len = 0;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 9409a0dc1b767..c16646f9db31f 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1049,6 +1049,7 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp)
+
+ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ {
++ WARN_ON(!list_empty(&stid->sc_cp_list));
+ kmem_cache_free(deleg_slab, stid);
+ atomic_long_dec(&num_delegations);
+ }
+@@ -1463,6 +1464,7 @@ static void nfs4_free_ol_stateid(struct nfs4_stid *stid)
+ release_all_access(stp);
+ if (stp->st_stateowner)
+ nfs4_put_stateowner(stp->st_stateowner);
++ WARN_ON(!list_empty(&stid->sc_cp_list));
+ kmem_cache_free(stateid_slab, stid);
+ }
+
+@@ -6608,6 +6610,7 @@ static void nfsd4_close_open_stateid(struct nfs4_ol_stateid *s)
+ struct nfs4_client *clp = s->st_stid.sc_client;
+ bool unhashed;
+ LIST_HEAD(reaplist);
++ struct nfs4_ol_stateid *stp;
+
+ spin_lock(&clp->cl_lock);
+ unhashed = unhash_open_stateid(s, &reaplist);
+@@ -6616,6 +6619,8 @@ static void nfsd4_close_open_stateid(struct nfs4_ol_stateid *s)
+ if (unhashed)
+ put_ol_stateid_locked(s, &reaplist);
+ spin_unlock(&clp->cl_lock);
++ list_for_each_entry(stp, &reaplist, st_locks)
++ nfs4_free_cpntf_statelist(clp->net, &stp->st_stid);
+ free_ol_stateid_reaplist(&reaplist);
+ } else {
+ spin_unlock(&clp->cl_lock);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 2acea7792bb26..1e5822d000430 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -2347,16 +2347,10 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
+
+ if (xdr_stream_decode_u32(argp->xdr, &argp->minorversion) < 0)
+ return false;
+- if (xdr_stream_decode_u32(argp->xdr, &argp->opcnt) < 0)
++ if (xdr_stream_decode_u32(argp->xdr, &argp->client_opcnt) < 0)
+ return false;
+-
+- /*
+- * NFS4ERR_RESOURCE is a more helpful error than GARBAGE_ARGS
+- * here, so we return success at the xdr level so that
+- * nfsd4_proc can handle this is an NFS-level error.
+- */
+- if (argp->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+- return true;
++ argp->opcnt = min_t(u32, argp->client_opcnt,
++ NFSD_MAX_OPS_PER_COMPOUND);
+
+ if (argp->opcnt > ARRAY_SIZE(argp->iops)) {
+ argp->ops = kzalloc(argp->opcnt * sizeof(*argp->ops), GFP_KERNEL);
+@@ -4001,7 +3995,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ if (resp->xdr->buf->page_len &&
+ test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags)) {
+ WARN_ON_ONCE(1);
+- return nfserr_resource;
++ return nfserr_serverfault;
+ }
+ xdr_commit_encode(xdr);
+
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index fcdab8a8a41f4..f65eba938a57d 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -182,6 +182,7 @@ nfsd_proc_read(struct svc_rqst *rqstp)
+ argp->count, argp->offset);
+
+ argp->count = min_t(u32, argp->count, NFSSVC_MAXBLKSIZE_V2);
++ argp->count = min_t(u32, argp->count, rqstp->rq_res.buflen);
+
+ v = 0;
+ len = argp->count;
+@@ -556,12 +557,11 @@ static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp,
+ struct xdr_buf *buf = &resp->dirlist;
+ struct xdr_stream *xdr = &resp->xdr;
+
+- count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
+-
+ memset(buf, 0, sizeof(*buf));
+
+ /* Reserve room for the NULL ptr & eof flag (-2 words) */
+- buf->buflen = count - XDR_UNIT * 2;
++ buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), (u32)PAGE_SIZE);
++ buf->buflen -= XDR_UNIT * 2;
+ buf->pages = rqstp->rq_next_page;
+ rqstp->rq_next_page++;
+
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index 7b744011f2d3d..77286e8c9ab02 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -689,9 +689,10 @@ struct nfsd4_compoundargs {
+ struct svcxdr_tmpbuf *to_free;
+ struct svc_rqst *rqstp;
+
+- u32 taglen;
+ char * tag;
++ u32 taglen;
+ u32 minorversion;
++ u32 client_opcnt;
+ u32 opcnt;
+ struct nfsd4_op *ops;
+ struct nfsd4_op iops[8];
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 803ff4c63c318..3b94ad22d9c0b 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -1941,8 +1941,6 @@ const struct inode_operations ntfs_link_inode_operations = {
+ .setattr = ntfs3_setattr,
+ .listxattr = ntfs_listxattr,
+ .permission = ntfs_permission,
+- .get_acl = ntfs_get_acl,
+- .set_acl = ntfs_set_acl,
+ };
+
+ const struct address_space_operations ntfs_aops = {
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index e3d443ccb9be6..19ce48726b007 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -625,67 +625,6 @@ int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ return ntfs_set_acl_ex(mnt_userns, inode, acl, type, false);
+ }
+
+-static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
+- struct inode *inode, int type, void *buffer,
+- size_t size)
+-{
+- struct posix_acl *acl;
+- int err;
+-
+- if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
+- ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
+- return -EOPNOTSUPP;
+- }
+-
+- acl = ntfs_get_acl(inode, type, false);
+- if (IS_ERR(acl))
+- return PTR_ERR(acl);
+-
+- if (!acl)
+- return -ENODATA;
+-
+- err = posix_acl_to_xattr(&init_user_ns, acl, buffer, size);
+- posix_acl_release(acl);
+-
+- return err;
+-}
+-
+-static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
+- struct inode *inode, int type, const void *value,
+- size_t size)
+-{
+- struct posix_acl *acl;
+- int err;
+-
+- if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
+- ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
+- return -EOPNOTSUPP;
+- }
+-
+- if (!inode_owner_or_capable(mnt_userns, inode))
+- return -EPERM;
+-
+- if (!value) {
+- acl = NULL;
+- } else {
+- acl = posix_acl_from_xattr(&init_user_ns, value, size);
+- if (IS_ERR(acl))
+- return PTR_ERR(acl);
+-
+- if (acl) {
+- err = posix_acl_valid(&init_user_ns, acl);
+- if (err)
+- goto release_and_out;
+- }
+- }
+-
+- err = ntfs_set_acl(mnt_userns, inode, acl, type);
+-
+-release_and_out:
+- posix_acl_release(acl);
+- return err;
+-}
+-
+ /*
+ * ntfs_init_acl - Initialize the ACLs of a new inode.
+ *
+@@ -852,23 +791,6 @@ static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de,
+ goto out;
+ }
+
+-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
+- if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
+- !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
+- sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
+- (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
+- !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
+- sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
+- /* TODO: init_user_ns? */
+- err = ntfs_xattr_get_acl(
+- &init_user_ns, inode,
+- name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
+- ? ACL_TYPE_ACCESS
+- : ACL_TYPE_DEFAULT,
+- buffer, size);
+- goto out;
+- }
+-#endif
+ /* Deal with NTFS extended attribute. */
+ err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
+
+@@ -981,22 +903,6 @@ set_new_fa:
+ goto out;
+ }
+
+-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
+- if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
+- !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
+- sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
+- (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
+- !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
+- sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
+- err = ntfs_xattr_set_acl(
+- mnt_userns, inode,
+- name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
+- ? ACL_TYPE_ACCESS
+- : ACL_TYPE_DEFAULT,
+- value, size);
+- goto out;
+- }
+-#endif
+ /* Deal with NTFS extended attribute. */
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+
+@@ -1086,7 +992,7 @@ static bool ntfs_xattr_user_list(struct dentry *dentry)
+ }
+
+ // clang-format off
+-static const struct xattr_handler ntfs_xattr_handler = {
++static const struct xattr_handler ntfs_other_xattr_handler = {
+ .prefix = "",
+ .get = ntfs_getxattr,
+ .set = ntfs_setxattr,
+@@ -1094,7 +1000,11 @@ static const struct xattr_handler ntfs_xattr_handler = {
+ };
+
+ const struct xattr_handler *ntfs_xattr_handlers[] = {
+- &ntfs_xattr_handler,
++#ifdef CONFIG_NTFS3_FS_POSIX_ACL
++ &posix_acl_access_xattr_handler,
++ &posix_acl_default_xattr_handler,
++#endif
++ &ntfs_other_xattr_handler,
+ NULL,
+ };
+ // clang-format on
+diff --git a/fs/open.c b/fs/open.c
+index 1d57fbde2feb1..5874258b54bd8 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -810,7 +810,9 @@ static int do_dentry_open(struct file *f,
+ return 0;
+ }
+
+- if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
++ if ((f->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ) {
++ i_readcount_inc(inode);
++ } else if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
+ error = get_write_access(inode);
+ if (unlikely(error))
+ goto cleanup_file;
+@@ -850,8 +852,6 @@ static int do_dentry_open(struct file *f,
+ goto cleanup_all;
+ }
+ f->f_mode |= FMODE_OPENED;
+- if ((f->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ)
+- i_readcount_inc(inode);
+ if ((f->f_mode & FMODE_READ) &&
+ likely(f->f_op->read || f->f_op->read_iter))
+ f->f_mode |= FMODE_CAN_READ;
+@@ -902,10 +902,7 @@ cleanup_all:
+ if (WARN_ON_ONCE(error > 0))
+ error = -EINVAL;
+ fops_put(f->f_op);
+- if (f->f_mode & FMODE_WRITER) {
+- put_write_access(inode);
+- __mnt_drop_write(f->f_path.mnt);
+- }
++ put_file_access(f);
+ cleanup_file:
+ path_put(&f->f_path);
+ f->f_path.mnt = NULL;
+diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
+index 5f2405994280a..7e65d67de9f33 100644
+--- a/fs/quota/quota_tree.c
++++ b/fs/quota/quota_tree.c
+@@ -71,6 +71,35 @@ static ssize_t write_blk(struct qtree_mem_dqinfo *info, uint blk, char *buf)
+ return ret;
+ }
+
++static inline int do_check_range(struct super_block *sb, const char *val_name,
++ uint val, uint min_val, uint max_val)
++{
++ if (val < min_val || val > max_val) {
++ quota_error(sb, "Getting %s %u out of range %u-%u",
++ val_name, val, min_val, max_val);
++ return -EUCLEAN;
++ }
++
++ return 0;
++}
++
++static int check_dquot_block_header(struct qtree_mem_dqinfo *info,
++ struct qt_disk_dqdbheader *dh)
++{
++ int err = 0;
++
++ err = do_check_range(info->dqi_sb, "dqdh_next_free",
++ le32_to_cpu(dh->dqdh_next_free), 0,
++ info->dqi_blocks - 1);
++ if (err)
++ return err;
++ err = do_check_range(info->dqi_sb, "dqdh_prev_free",
++ le32_to_cpu(dh->dqdh_prev_free), 0,
++ info->dqi_blocks - 1);
++
++ return err;
++}
++
+ /* Remove empty block from list and return it */
+ static int get_free_dqblk(struct qtree_mem_dqinfo *info)
+ {
+@@ -85,6 +114,9 @@ static int get_free_dqblk(struct qtree_mem_dqinfo *info)
+ ret = read_blk(info, blk, buf);
+ if (ret < 0)
+ goto out_buf;
++ ret = check_dquot_block_header(info, dh);
++ if (ret)
++ goto out_buf;
+ info->dqi_free_blk = le32_to_cpu(dh->dqdh_next_free);
+ }
+ else {
+@@ -232,6 +264,9 @@ static uint find_free_dqentry(struct qtree_mem_dqinfo *info,
+ *err = read_blk(info, blk, buf);
+ if (*err < 0)
+ goto out_buf;
++ *err = check_dquot_block_header(info, dh);
++ if (*err)
++ goto out_buf;
+ } else {
+ blk = get_free_dqblk(info);
+ if ((int)blk < 0) {
+@@ -424,6 +459,9 @@ static int free_dqentry(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ goto out_buf;
+ }
+ dh = (struct qt_disk_dqdbheader *)buf;
++ ret = check_dquot_block_header(info, dh);
++ if (ret)
++ goto out_buf;
+ le16_add_cpu(&dh->dqdh_entries, -1);
+ if (!le16_to_cpu(dh->dqdh_entries)) { /* Block got free? */
+ ret = remove_free_dqentry(info, buf, blk);
+diff --git a/fs/splice.c b/fs/splice.c
+index 93a2c9bf62494..047b79db8eb52 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -814,15 +814,17 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ {
+ struct pipe_inode_info *pipe;
+ long ret, bytes;
++ umode_t i_mode;
+ size_t len;
+ int i, flags, more;
+
+ /*
+- * We require the input to be seekable, as we don't want to randomly
+- * drop data for eg socket -> socket splicing. Use the piped splicing
+- * for that!
++ * We require the input to be a regular file, as we don't want to
++ * randomly drop data for eg socket -> socket splicing. Use the
++ * piped splicing for that!
+ */
+- if (unlikely(!(in->f_mode & FMODE_LSEEK)))
++ i_mode = file_inode(in)->i_mode;
++ if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
+ return -EINVAL;
+
+ /*
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index ab0576d372d6e..fa0a2fa5debbd 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -991,7 +991,7 @@ static int resolve_userfault_fork(struct userfaultfd_ctx *new,
+ int fd;
+
+ fd = anon_inode_getfd_secure("[userfaultfd]", &userfaultfd_fops, new,
+- O_RDWR | (new->flags & UFFD_SHARED_FCNTL_FLAGS), inode);
++ O_RDONLY | (new->flags & UFFD_SHARED_FCNTL_FLAGS), inode);
+ if (fd < 0)
+ return fd;
+
+@@ -2096,7 +2096,7 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
+ mmgrab(ctx->mm);
+
+ fd = anon_inode_getfd_secure("[userfaultfd]", &userfaultfd_fops, ctx,
+- O_RDWR | (flags & UFFD_SHARED_FCNTL_FLAGS), NULL);
++ O_RDONLY | (flags & UFFD_SHARED_FCNTL_FLAGS), NULL);
+ if (fd < 0) {
+ mmdrop(ctx->mm);
+ kmem_cache_free(userfaultfd_ctx_cachep, ctx);
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index aa977c7ea370b..a845e3b237a66 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -650,7 +650,7 @@ xfs_fs_destroy_inode(
+ static void
+ xfs_fs_dirty_inode(
+ struct inode *inode,
+- int flag)
++ int flags)
+ {
+ struct xfs_inode *ip = XFS_I(inode);
+ struct xfs_mount *mp = ip->i_mount;
+@@ -658,7 +658,13 @@ xfs_fs_dirty_inode(
+
+ if (!(inode->i_sb->s_flags & SB_LAZYTIME))
+ return;
+- if (flag != I_DIRTY_SYNC || !(inode->i_state & I_DIRTY_TIME))
++
++ /*
++ * Only do the timestamp update if the inode is dirty (I_DIRTY_SYNC)
++ * and has dirty timestamp (I_DIRTY_TIME). I_DIRTY_TIME can be passed
++ * in flags possibly together with I_DIRTY_SYNC.
++ */
++ if ((flags & ~I_DIRTY_TIME) != I_DIRTY_SYNC || !(flags & I_DIRTY_TIME))
+ return;
+
+ if (xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp))
+diff --git a/include/dt-bindings/clock/samsung,exynosautov9.h b/include/dt-bindings/clock/samsung,exynosautov9.h
+index ea9f91b4eb1a3..a7db6516593fe 100644
+--- a/include/dt-bindings/clock/samsung,exynosautov9.h
++++ b/include/dt-bindings/clock/samsung,exynosautov9.h
+@@ -226,21 +226,21 @@
+ #define CLK_GOUT_PERIC0_IPCLK_8 28
+ #define CLK_GOUT_PERIC0_IPCLK_9 29
+ #define CLK_GOUT_PERIC0_IPCLK_10 30
+-#define CLK_GOUT_PERIC0_IPCLK_11 30
+-#define CLK_GOUT_PERIC0_PCLK_0 31
+-#define CLK_GOUT_PERIC0_PCLK_1 32
+-#define CLK_GOUT_PERIC0_PCLK_2 33
+-#define CLK_GOUT_PERIC0_PCLK_3 34
+-#define CLK_GOUT_PERIC0_PCLK_4 35
+-#define CLK_GOUT_PERIC0_PCLK_5 36
+-#define CLK_GOUT_PERIC0_PCLK_6 37
+-#define CLK_GOUT_PERIC0_PCLK_7 38
+-#define CLK_GOUT_PERIC0_PCLK_8 39
+-#define CLK_GOUT_PERIC0_PCLK_9 40
+-#define CLK_GOUT_PERIC0_PCLK_10 41
+-#define CLK_GOUT_PERIC0_PCLK_11 42
++#define CLK_GOUT_PERIC0_IPCLK_11 31
++#define CLK_GOUT_PERIC0_PCLK_0 32
++#define CLK_GOUT_PERIC0_PCLK_1 33
++#define CLK_GOUT_PERIC0_PCLK_2 34
++#define CLK_GOUT_PERIC0_PCLK_3 35
++#define CLK_GOUT_PERIC0_PCLK_4 36
++#define CLK_GOUT_PERIC0_PCLK_5 37
++#define CLK_GOUT_PERIC0_PCLK_6 38
++#define CLK_GOUT_PERIC0_PCLK_7 39
++#define CLK_GOUT_PERIC0_PCLK_8 40
++#define CLK_GOUT_PERIC0_PCLK_9 41
++#define CLK_GOUT_PERIC0_PCLK_10 42
++#define CLK_GOUT_PERIC0_PCLK_11 43
+
+-#define PERIC0_NR_CLK 43
++#define PERIC0_NR_CLK 44
+
+ /* CMU_PERIC1 */
+ #define CLK_MOUT_PERIC1_BUS_USER 1
+@@ -272,21 +272,21 @@
+ #define CLK_GOUT_PERIC1_IPCLK_8 28
+ #define CLK_GOUT_PERIC1_IPCLK_9 29
+ #define CLK_GOUT_PERIC1_IPCLK_10 30
+-#define CLK_GOUT_PERIC1_IPCLK_11 30
+-#define CLK_GOUT_PERIC1_PCLK_0 31
+-#define CLK_GOUT_PERIC1_PCLK_1 32
+-#define CLK_GOUT_PERIC1_PCLK_2 33
+-#define CLK_GOUT_PERIC1_PCLK_3 34
+-#define CLK_GOUT_PERIC1_PCLK_4 35
+-#define CLK_GOUT_PERIC1_PCLK_5 36
+-#define CLK_GOUT_PERIC1_PCLK_6 37
+-#define CLK_GOUT_PERIC1_PCLK_7 38
+-#define CLK_GOUT_PERIC1_PCLK_8 39
+-#define CLK_GOUT_PERIC1_PCLK_9 40
+-#define CLK_GOUT_PERIC1_PCLK_10 41
+-#define CLK_GOUT_PERIC1_PCLK_11 42
++#define CLK_GOUT_PERIC1_IPCLK_11 31
++#define CLK_GOUT_PERIC1_PCLK_0 32
++#define CLK_GOUT_PERIC1_PCLK_1 33
++#define CLK_GOUT_PERIC1_PCLK_2 34
++#define CLK_GOUT_PERIC1_PCLK_3 35
++#define CLK_GOUT_PERIC1_PCLK_4 36
++#define CLK_GOUT_PERIC1_PCLK_5 37
++#define CLK_GOUT_PERIC1_PCLK_6 38
++#define CLK_GOUT_PERIC1_PCLK_7 39
++#define CLK_GOUT_PERIC1_PCLK_8 40
++#define CLK_GOUT_PERIC1_PCLK_9 41
++#define CLK_GOUT_PERIC1_PCLK_10 42
++#define CLK_GOUT_PERIC1_PCLK_11 43
+
+-#define PERIC1_NR_CLK 43
++#define PERIC1_NR_CLK 44
+
+ /* CMU_PERIS */
+ #define CLK_MOUT_PERIS_BUS_USER 1
+diff --git a/include/linux/ata.h b/include/linux/ata.h
+index 21292b5bbb550..e3050e153a716 100644
+--- a/include/linux/ata.h
++++ b/include/linux/ata.h
+@@ -566,6 +566,18 @@ struct ata_bmdma_prd {
+ ((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
+ ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
+ ((id)[ATA_ID_FEATURE_SUPP] & (1 << 2)))
++#define ata_id_has_devslp(id) \
++ ((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++ ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++ ((id)[ATA_ID_FEATURE_SUPP] & (1 << 8)))
++#define ata_id_has_ncq_autosense(id) \
++ ((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++ ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++ ((id)[ATA_ID_FEATURE_SUPP] & (1 << 7)))
++#define ata_id_has_dipm(id) \
++ ((((id)[ATA_ID_SATA_CAPABILITY] != 0x0000) && \
++ ((id)[ATA_ID_SATA_CAPABILITY] != 0xffff)) && \
++ ((id)[ATA_ID_FEATURE_SUPP] & (1 << 3)))
+ #define ata_id_iordy_disable(id) ((id)[ATA_ID_CAPABILITY] & (1 << 10))
+ #define ata_id_has_iordy(id) ((id)[ATA_ID_CAPABILITY] & (1 << 11))
+ #define ata_id_u32(id,n) \
+@@ -578,9 +590,6 @@ struct ata_bmdma_prd {
+
+ #define ata_id_cdb_intr(id) (((id)[ATA_ID_CONFIG] & 0x60) == 0x20)
+ #define ata_id_has_da(id) ((id)[ATA_ID_SATA_CAPABILITY_2] & (1 << 4))
+-#define ata_id_has_devslp(id) ((id)[ATA_ID_FEATURE_SUPP] & (1 << 8))
+-#define ata_id_has_ncq_autosense(id) \
+- ((id)[ATA_ID_FEATURE_SUPP] & (1 << 7))
+
+ static inline bool ata_id_has_hipm(const u16 *id)
+ {
+@@ -592,17 +601,6 @@ static inline bool ata_id_has_hipm(const u16 *id)
+ return val & (1 << 9);
+ }
+
+-static inline bool ata_id_has_dipm(const u16 *id)
+-{
+- u16 val = id[ATA_ID_FEATURE_SUPP];
+-
+- if (val == 0 || val == 0xffff)
+- return false;
+-
+- return val & (1 << 3);
+-}
+-
+-
+ static inline bool ata_id_has_fua(const u16 *id)
+ {
+ if ((id[ATA_ID_CFSSE] & 0xC000) != 0x4000)
+@@ -771,16 +769,21 @@ static inline bool ata_id_has_read_log_dma_ext(const u16 *id)
+
+ static inline bool ata_id_has_sense_reporting(const u16 *id)
+ {
+- if (!(id[ATA_ID_CFS_ENABLE_2] & (1 << 15)))
++ if (!(id[ATA_ID_CFS_ENABLE_2] & BIT(15)))
++ return false;
++ if ((id[ATA_ID_COMMAND_SET_3] & (BIT(15) | BIT(14))) != BIT(14))
+ return false;
+- return id[ATA_ID_COMMAND_SET_3] & (1 << 6);
++ return id[ATA_ID_COMMAND_SET_3] & BIT(6);
+ }
+
+ static inline bool ata_id_sense_reporting_enabled(const u16 *id)
+ {
+- if (!(id[ATA_ID_CFS_ENABLE_2] & (1 << 15)))
++ if (!ata_id_has_sense_reporting(id))
++ return false;
++ /* ata_id_has_sense_reporting() == true, word 86 must have bit 15 set */
++ if ((id[ATA_ID_COMMAND_SET_4] & (BIT(15) | BIT(14))) != BIT(14))
+ return false;
+- return id[ATA_ID_COMMAND_SET_4] & (1 << 6);
++ return id[ATA_ID_COMMAND_SET_4] & BIT(6);
+ }
+
+ /**
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index 992ee987f2738..c436874109409 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -509,7 +509,7 @@ static inline void bio_set_dev(struct bio *bio, struct block_device *bdev)
+ {
+ bio_clear_flag(bio, BIO_REMAPPED);
+ if (bio->bi_bdev != bdev)
+- bio_clear_flag(bio, BIO_THROTTLED);
++ bio_clear_flag(bio, BIO_BPS_THROTTLED);
+ bio->bi_bdev = bdev;
+ bio_associate_blkg(bio);
+ }
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index a24d4078fb214..5b5dc01c006d5 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -323,7 +323,7 @@ enum {
+ BIO_QUIET, /* Make BIO Quiet */
+ BIO_CHAIN, /* chained bio, ->bi_remaining in effect */
+ BIO_REFFED, /* bio has elevated ->bi_cnt */
+- BIO_THROTTLED, /* This bio has already been subjected to
++ BIO_BPS_THROTTLED, /* This bio has already been subjected to
+ * throttling rules. Don't do it again. */
+ BIO_TRACE_COMPLETION, /* bio_endio() should trace the final completion
+ * of this bio. */
+diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
+index 695d1224a71ba..5d268e76d8e6c 100644
+--- a/include/linux/bpf-cgroup-defs.h
++++ b/include/linux/bpf-cgroup-defs.h
+@@ -47,8 +47,8 @@ struct cgroup_bpf {
+ * have either zero or one element
+ * when BPF_F_ALLOW_MULTI the list can have up to BPF_CGROUP_MAX_PROGS
+ */
+- struct list_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
+- u32 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
++ struct hlist_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
++ u8 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
+
+ /* list of cgroup shared storages */
+ struct list_head storages;
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index 669d96d074ada..6673acfbf2ef2 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -95,7 +95,7 @@ struct bpf_cgroup_link {
+ };
+
+ struct bpf_prog_list {
+- struct list_head node;
++ struct hlist_node node;
+ struct bpf_prog *prog;
+ struct bpf_cgroup_link *link;
+ struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE];
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index ed352c00330cd..33ec4658c1ee5 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -869,6 +869,7 @@ struct bpf_dispatcher {
+ struct bpf_dispatcher_prog progs[BPF_DISPATCHER_MAX];
+ int num_progs;
+ void *image;
++ void *rw_image;
+ u32 image_off;
+ struct bpf_ksym ksym;
+ };
+@@ -888,7 +889,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampolin
+ struct bpf_trampoline *bpf_trampoline_get(u64 key,
+ struct bpf_attach_target_info *tgt_info);
+ void bpf_trampoline_put(struct bpf_trampoline *tr);
+-int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs);
++int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+ #define BPF_DISPATCHER_INIT(_name) { \
+ .mutex = __MUTEX_INITIALIZER(_name.mutex), \
+ .func = &_name##_func, \
+@@ -2273,12 +2274,9 @@ extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
+ extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
+ extern const struct bpf_func_proto bpf_sk_setsockopt_proto;
+ extern const struct bpf_func_proto bpf_sk_getsockopt_proto;
+-extern const struct bpf_func_proto bpf_kallsyms_lookup_name_proto;
+ extern const struct bpf_func_proto bpf_find_vma_proto;
+ extern const struct bpf_func_proto bpf_loop_proto;
+-extern const struct bpf_func_proto bpf_strncmp_proto;
+ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
+-extern const struct bpf_func_proto bpf_kptr_xchg_proto;
+
+ const struct bpf_func_proto *tracing_prog_func_proto(
+ enum bpf_func_id func_id, const struct bpf_prog *prog);
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index e8439f6cbe57a..e66ee8d87e27e 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -212,6 +212,17 @@ struct bpf_reference_state {
+ * is used purely to inform the user of a reference leak.
+ */
+ int insn_idx;
++ /* There can be a case like:
++ * main (frame 0)
++ * cb (frame 1)
++ * func (frame 3)
++ * cb (frame 4)
++ * Hence for frame 4, if callback_ref just stored boolean, it would be
++ * impossible to distinguish nested callback refs. Hence store the
++ * frameno and compare that to callback_ref in check_reference_leak when
++ * exiting a callback function.
++ */
++ int callback_ref;
+ };
+
+ /* state of the program:
+diff --git a/include/linux/dynamic_debug.h b/include/linux/dynamic_debug.h
+index dce631e678dd6..8d9eec5f6d8bb 100644
+--- a/include/linux/dynamic_debug.h
++++ b/include/linux/dynamic_debug.h
+@@ -55,9 +55,6 @@ struct _ddebug {
+
+ #if defined(CONFIG_DYNAMIC_DEBUG_CORE)
+
+-/* exported for module authors to exercise >control */
+-int dynamic_debug_exec_queries(const char *query, const char *modname);
+-
+ int ddebug_add_module(struct _ddebug *tab, unsigned int n,
+ const char *modname);
+ extern int ddebug_remove_module(const char *mod_name);
+@@ -201,7 +198,7 @@ static inline int ddebug_remove_module(const char *mod)
+ static inline int ddebug_dyndbg_module_param_cb(char *param, char *val,
+ const char *modname)
+ {
+- if (strstr(param, "dyndbg")) {
++ if (!strcmp(param, "dyndbg")) {
+ /* avoid pr_warn(), which wants pr_fmt() fully defined */
+ printk(KERN_WARNING "dyndbg param is supported only in "
+ "CONFIG_DYNAMIC_DEBUG builds\n");
+@@ -221,12 +218,6 @@ static inline int ddebug_dyndbg_module_param_cb(char *param, char *val,
+ rowsize, groupsize, buf, len, ascii); \
+ } while (0)
+
+-static inline int dynamic_debug_exec_queries(const char *query, const char *modname)
+-{
+- pr_warn("kernel not built with CONFIG_DYNAMIC_DEBUG_CORE\n");
+- return 0;
+-}
+-
+ #endif /* !CONFIG_DYNAMIC_DEBUG_CORE */
+
+ #endif
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index 305d5f19093b9..30eb30d6909b0 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -46,7 +46,7 @@ void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
+
+ static inline bool eventfd_signal_allowed(void)
+ {
+- return !current->in_eventfd_signal;
++ return !current->in_eventfd;
+ }
+
+ #else /* CONFIG_EVENTFD */
+diff --git a/include/linux/export-internal.h b/include/linux/export-internal.h
+index c2b1d4fd59873..fe7e6ba918f10 100644
+--- a/include/linux/export-internal.h
++++ b/include/linux/export-internal.h
+@@ -10,8 +10,10 @@
+ #include <linux/compiler.h>
+ #include <linux/types.h>
+
+-/* __used is needed to keep __crc_* for LTO */
+ #define SYMBOL_CRC(sym, crc, sec) \
+- u32 __section("___kcrctab" sec "+" #sym) __used __crc_##sym = crc
++ asm(".section \"___kcrctab" sec "+" #sym "\",\"a\"" "\n" \
++ "__crc_" #sym ":" "\n" \
++ ".long " #crc "\n" \
++ ".previous" "\n")
+
+ #endif /* __LINUX_EXPORT_INTERNAL_H__ */
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 8fd2e2f58eeb2..e11335c70982e 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1052,6 +1052,8 @@ extern long bpf_jit_limit_max;
+
+ typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
+
++void bpf_jit_fill_hole_with_zero(void *area, unsigned int size);
++
+ struct bpf_binary_header *
+ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+ unsigned int alignment,
+@@ -1064,6 +1066,9 @@ void bpf_jit_free(struct bpf_prog *fp);
+ struct bpf_binary_header *
+ bpf_jit_binary_pack_hdr(const struct bpf_prog *fp);
+
++void *bpf_prog_pack_alloc(u32 size, bpf_jit_fill_hole_t bpf_fill_ill_insns);
++void bpf_prog_pack_free(struct bpf_binary_header *hdr);
++
+ static inline bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp)
+ {
+ return list_empty(&fp->aux->ksym.lnode) ||
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index 3b401fa0f3746..fce2fb2fc9626 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -19,7 +19,8 @@ void __write_overflow_field(size_t avail, size_t wanted) __compiletime_warning("
+ unsigned char *__p = (unsigned char *)(p); \
+ size_t __ret = (size_t)-1; \
+ size_t __p_size = __builtin_object_size(p, 1); \
+- if (__p_size != (size_t)-1) { \
++ if (__p_size != (size_t)-1 && \
++ __builtin_constant_p(*__p)) { \
+ size_t __p_len = __p_size - 1; \
+ if (__builtin_constant_p(__p[__p_len]) && \
+ __p[__p_len] == '\0') \
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 9ad5e3520fae5..fb19f532c76c6 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2241,13 +2241,14 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+ * don't have to write inode on fdatasync() when only
+ * e.g. the timestamps have changed.
+ * I_DIRTY_PAGES Inode has dirty pages. Inode itself may be clean.
+- * I_DIRTY_TIME The inode itself only has dirty timestamps, and the
++ * I_DIRTY_TIME The inode itself has dirty timestamps, and the
+ * lazytime mount option is enabled. We keep track of this
+ * separately from I_DIRTY_SYNC in order to implement
+ * lazytime. This gets cleared if I_DIRTY_INODE
+- * (I_DIRTY_SYNC and/or I_DIRTY_DATASYNC) gets set. I.e.
+- * either I_DIRTY_TIME *or* I_DIRTY_INODE can be set in
+- * i_state, but not both. I_DIRTY_PAGES may still be set.
++ * (I_DIRTY_SYNC and/or I_DIRTY_DATASYNC) gets set. But
++ * I_DIRTY_TIME can still be set if I_DIRTY_SYNC is already
++ * in place because writeback might already be in progress
++ * and we don't want to lose the time update.
+ * I_NEW Serves as both a mutex and completion notification.
+ * New inodes set I_NEW. If two processes both create
+ * the same inode, one of them will release its inode and
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 756b66ff025e5..ab940b97decca 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -203,8 +203,8 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+ struct page *follow_huge_pd(struct vm_area_struct *vma,
+ unsigned long address, hugepd_t hpd,
+ int flags, int pdshift);
+-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+- pmd_t *pmd, int flags);
++struct page *follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address,
++ int flags);
+ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
+ pud_t *pud, int flags);
+ struct page *follow_huge_pgd(struct mm_struct *mm, unsigned long address,
+@@ -308,8 +308,8 @@ static inline struct page *follow_huge_pd(struct vm_area_struct *vma,
+ return NULL;
+ }
+
+-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
+- unsigned long address, pmd_t *pmd, int flags)
++static inline struct page *follow_huge_pmd_pte(struct vm_area_struct *vma,
++ unsigned long address, int flags)
+ {
+ return NULL;
+ }
+diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h
+index aa1d4da03538b..77c2885c4c130 100644
+--- a/include/linux/hw_random.h
++++ b/include/linux/hw_random.h
+@@ -50,6 +50,7 @@ struct hwrng {
+ struct list_head list;
+ struct kref ref;
+ struct completion cleanup_done;
++ struct completion dying;
+ };
+
+ struct device;
+@@ -61,4 +62,6 @@ extern int devm_hwrng_register(struct device *dev, struct hwrng *rng);
+ extern void hwrng_unregister(struct hwrng *rng);
+ extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng);
+
++extern long hwrng_msleep(struct hwrng *rng, unsigned int msecs);
++
+ #endif /* LINUX_HWRANDOM_H_ */
+diff --git a/include/linux/iova.h b/include/linux/iova.h
+index 320a70e402330..f1dba47cfc976 100644
+--- a/include/linux/iova.h
++++ b/include/linux/iova.h
+@@ -75,7 +75,7 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
+ return iova >> iova_shift(iovad);
+ }
+
+-#if IS_ENABLED(CONFIG_IOMMU_IOVA)
++#if IS_REACHABLE(CONFIG_IOMMU_IOVA)
+ int iova_cache_get(void);
+ void iova_cache_put(void);
+
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index 37f9758751020..12c7f2d3e2107 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -292,6 +292,7 @@ struct mmc_card {
+ #define MMC_QUIRK_BROKEN_IRQ_POLLING (1<<11) /* Polling SDIO_CCCR_INTx could create a fake interrupt */
+ #define MMC_QUIRK_TRIM_BROKEN (1<<12) /* Skip trim */
+ #define MMC_QUIRK_BROKEN_HPI (1<<13) /* Disable broken HPI support */
++#define MMC_QUIRK_BROKEN_SD_DISCARD (1<<14) /* Disable broken SD discard support */
+
+ bool reenable_cmdq; /* Re-enable Command Queue */
+
+diff --git a/include/linux/once.h b/include/linux/once.h
+index f54523052bbcb..aebc038e79e58 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -5,10 +5,18 @@
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
+
++/* Helpers used from arbitrary contexts.
++ * Hard irqs are blocked, be cautious.
++ */
+ bool __do_once_start(bool *done, unsigned long *flags);
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+ unsigned long *flags, struct module *mod);
+
++/* Variant for process contexts only. */
++bool __do_once_slow_start(bool *done);
++void __do_once_slow_done(bool *done, struct static_key_true *once_key,
++ struct module *mod);
++
+ /* Call a function exactly once. The idea of DO_ONCE() is to perform
+ * a function call such as initialization of random seeds, etc, only
+ * once, where DO_ONCE() can live in the fast-path. After @func has
+@@ -52,9 +60,29 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ ___ret; \
+ })
+
++/* Variant of DO_ONCE() for process/sleepable contexts. */
++#define DO_ONCE_SLOW(func, ...) \
++ ({ \
++ bool ___ret = false; \
++ static bool __section(".data.once") ___done = false; \
++ static DEFINE_STATIC_KEY_TRUE(___once_key); \
++ if (static_branch_unlikely(&___once_key)) { \
++ ___ret = __do_once_slow_start(&___done); \
++ if (unlikely(___ret)) { \
++ func(__VA_ARGS__); \
++ __do_once_slow_done(&___done, &___once_key, \
++ THIS_MODULE); \
++ } \
++ } \
++ ___ret; \
++ })
++
+ #define get_random_once(buf, nbytes) \
+ DO_ONCE(get_random_bytes, (buf), (nbytes))
+ #define get_random_once_wait(buf, nbytes) \
+ DO_ONCE(get_random_bytes_wait, (buf), (nbytes)) \
+
++#define get_random_slow_once(buf, nbytes) \
++ DO_ONCE_SLOW(get_random_bytes, (buf), (nbytes))
++
+ #endif /* _LINUX_ONCE_H */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index dac53fd3afea3..2504df9a0453e 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -101,7 +101,7 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
+ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ struct file *filp, poll_table *poll_table);
+-
++void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
+
+ #define RING_BUFFER_ALL_CPUS -1
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 6d877c7e22ffd..e02dc270fa2ce 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -934,7 +934,7 @@ struct task_struct {
+ #endif
+ #ifdef CONFIG_EVENTFD
+ /* Recursion prevention for eventfd_signal() */
+- unsigned in_eventfd_signal:1;
++ unsigned in_eventfd:1;
+ #endif
+ #ifdef CONFIG_IOMMU_SVA
+ unsigned pasid_activated:1;
+diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
+index ff84a3ed10ea9..b0183e90fe90b 100644
+--- a/include/linux/serial_8250.h
++++ b/include/linux/serial_8250.h
+@@ -74,6 +74,7 @@ struct uart_8250_port;
+ struct uart_8250_ops {
+ int (*setup_irq)(struct uart_8250_port *);
+ void (*release_irq)(struct uart_8250_port *);
++ void (*setup_timer)(struct uart_8250_port *);
+ };
+
+ struct uart_8250_em485 {
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 037a8d81a66cf..690a8de5324eb 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -101,7 +101,7 @@ struct uart_icount {
+ __u32 buf_overrun;
+ };
+
+-typedef unsigned int __bitwise upf_t;
++typedef u64 __bitwise upf_t;
+ typedef unsigned int __bitwise upstat_t;
+
+ struct uart_port {
+@@ -208,6 +208,7 @@ struct uart_port {
+ #define UPF_FIXED_PORT ((__force upf_t) (1 << 29))
+ #define UPF_DEAD ((__force upf_t) (1 << 30))
+ #define UPF_IOREMAP ((__force upf_t) (1 << 31))
++#define UPF_FULL_PROBE ((__force upf_t) (1ULL << 32))
+
+ #define __UPF_CHANGE_MASK 0x17fff
+ #define UPF_CHANGE_MASK ((__force upf_t) __UPF_CHANGE_MASK)
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 63d0a21b63162..f8a240817b4cf 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -965,6 +965,7 @@ typedef unsigned char *sk_buff_data_t;
+ * @csum_level: indicates the number of consecutive checksums found in
+ * the packet minus one that have been verified as
+ * CHECKSUM_UNNECESSARY (max 3)
++ * @scm_io_uring: SKB holds io_uring registered files
+ * @dst_pending_confirm: need to confirm neighbour
+ * @decrypted: Decrypted SKB
+ * @slow_gro: state present at GRO time, slower prepare step required
+@@ -1144,6 +1145,7 @@ struct sk_buff {
+ #endif
+ __u8 slow_gro:1;
+ __u8 csum_not_inet:1;
++ __u8 scm_io_uring:1;
+
+ #ifdef CONFIG_NET_SCHED
+ __u16 tc_index; /* traffic control index */
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index daecb009c05b5..0ca8a8ffb47e4 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -544,16 +544,27 @@ static inline void svc_reserve_auth(struct svc_rqst *rqstp, int space)
+ }
+
+ /**
+- * svcxdr_init_decode - Prepare an xdr_stream for svc Call decoding
++ * svcxdr_init_decode - Prepare an xdr_stream for Call decoding
+ * @rqstp: controlling server RPC transaction context
+ *
++ * This function currently assumes the RPC header in rq_arg has
++ * already been decoded. Upon return, xdr->p points to the
++ * location of the upper layer header.
+ */
+ static inline void svcxdr_init_decode(struct svc_rqst *rqstp)
+ {
+ struct xdr_stream *xdr = &rqstp->rq_arg_stream;
+- struct kvec *argv = rqstp->rq_arg.head;
++ struct xdr_buf *buf = &rqstp->rq_arg;
++ struct kvec *argv = buf->head;
+
+- xdr_init_decode(xdr, &rqstp->rq_arg, argv->iov_base, NULL);
++ /*
++ * svc_getnl() and friends do not keep the xdr_buf's ::len
++ * field up to date. Refresh that field before initializing
++ * the argument decoding stream.
++ */
++ buf->len = buf->head->iov_len + buf->page_len + buf->tail->iov_len;
++
++ xdr_init_decode(xdr, buf, argv->iov_base, NULL);
+ xdr_set_scratch_page(xdr, rqstp->rq_scratch_page);
+ }
+
+@@ -576,7 +587,7 @@ static inline void svcxdr_init_encode(struct svc_rqst *rqstp)
+ xdr->end = resv->iov_base + PAGE_SIZE - rqstp->rq_auth_slack;
+ buf->len = resv->iov_len;
+ xdr->page_ptr = buf->pages - 1;
+- buf->buflen = PAGE_SIZE * (1 + rqstp->rq_page_end - buf->pages);
++ buf->buflen = PAGE_SIZE * (rqstp->rq_page_end - buf->pages);
+ buf->buflen -= rqstp->rq_auth_slack;
+ xdr->rqst = NULL;
+ }
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index 1168302b79274..bb31d60addace 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -265,7 +265,7 @@ struct tcp_sock {
+ u32 packets_out; /* Packets which are "in flight" */
+ u32 retrans_out; /* Retransmitted packets out */
+ u32 max_packets_out; /* max packets_out in last window */
+- u32 max_packets_seq; /* right edge of max_packets_out flight */
++ u32 cwnd_usage_seq; /* right edge of cwnd usage tracking flight */
+
+ u16 urg_data; /* Saved octet of OOB data and control flags */
+ u8 ecn_flags; /* ECN status bits. */
+diff --git a/include/linux/trace.h b/include/linux/trace.h
+index bf169612ffe12..b5e16e438448f 100644
+--- a/include/linux/trace.h
++++ b/include/linux/trace.h
+@@ -2,8 +2,6 @@
+ #ifndef _LINUX_TRACE_H
+ #define _LINUX_TRACE_H
+
+-#ifdef CONFIG_TRACING
+-
+ #define TRACE_EXPORT_FUNCTION BIT(0)
+ #define TRACE_EXPORT_EVENT BIT(1)
+ #define TRACE_EXPORT_MARKER BIT(2)
+@@ -28,6 +26,8 @@ struct trace_export {
+ int flags;
+ };
+
++#ifdef CONFIG_TRACING
++
+ int register_ftrace_export(struct trace_export *export);
+ int unregister_ftrace_export(struct trace_export *export);
+
+@@ -48,6 +48,38 @@ void osnoise_arch_unregister(void);
+ void osnoise_trace_irq_entry(int id);
+ void osnoise_trace_irq_exit(int id, const char *desc);
+
++#else /* CONFIG_TRACING */
++static inline int register_ftrace_export(struct trace_export *export)
++{
++ return -EINVAL;
++}
++static inline int unregister_ftrace_export(struct trace_export *export)
++{
++ return 0;
++}
++static inline void trace_printk_init_buffers(void)
++{
++}
++static inline int trace_array_printk(struct trace_array *tr, unsigned long ip,
++ const char *fmt, ...)
++{
++ return 0;
++}
++static inline int trace_array_init_printk(struct trace_array *tr)
++{
++ return -EINVAL;
++}
++static inline void trace_array_put(struct trace_array *tr)
++{
++}
++static inline struct trace_array *trace_array_get_by_name(const char *name)
++{
++ return NULL;
++}
++static inline int trace_array_destroy(struct trace_array *tr)
++{
++ return 0;
++}
+ #endif /* CONFIG_TRACING */
+
+ #endif /* _LINUX_TRACE_H */
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index b18759a673c66..b6a54d92d4a09 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -92,6 +92,7 @@ struct trace_iterator {
+ unsigned int temp_size;
+ char *fmt; /* modified format holder */
+ unsigned int fmt_size;
++ long wait_index;
+
+ /* trace_seq for __print_flags() and __print_symbolic() etc. */
+ struct trace_seq tmp_seq;
+diff --git a/include/net/ieee802154_netdev.h b/include/net/ieee802154_netdev.h
+index a8994f307fc38..03b64bf876a46 100644
+--- a/include/net/ieee802154_netdev.h
++++ b/include/net/ieee802154_netdev.h
+@@ -185,21 +185,27 @@ static inline int
+ ieee802154_sockaddr_check_size(struct sockaddr_ieee802154 *daddr, int len)
+ {
+ struct ieee802154_addr_sa *sa;
++ int ret = 0;
+
+ sa = &daddr->addr;
+ if (len < IEEE802154_MIN_NAMELEN)
+ return -EINVAL;
+ switch (sa->addr_type) {
++ case IEEE802154_ADDR_NONE:
++ break;
+ case IEEE802154_ADDR_SHORT:
+ if (len < IEEE802154_NAMELEN_SHORT)
+- return -EINVAL;
++ ret = -EINVAL;
+ break;
+ case IEEE802154_ADDR_LONG:
+ if (len < IEEE802154_NAMELEN_LONG)
+- return -EINVAL;
++ ret = -EINVAL;
++ break;
++ default:
++ ret = -EINVAL;
+ break;
+ }
+- return 0;
++ return ret;
+ }
+
+ static inline void ieee802154_addr_from_sa(struct ieee802154_addr *a,
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 78a64e1b33a7e..788b1f17b5e35 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1289,11 +1289,14 @@ static inline bool tcp_is_cwnd_limited(const struct sock *sk)
+ {
+ const struct tcp_sock *tp = tcp_sk(sk);
+
++ if (tp->is_cwnd_limited)
++ return true;
++
+ /* If in slow start, ensure cwnd grows to twice what was ACKed. */
+ if (tcp_in_slow_start(tp))
+ return tcp_snd_cwnd(tp) < 2 * tp->max_packets_out;
+
+- return tp->is_cwnd_limited;
++ return false;
+ }
+
+ /* BBR congestion control needs pacing.
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 9758a4a9923f5..5a10e5acfad26 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -213,6 +213,8 @@ struct iscsi_conn {
+ struct list_head cmdqueue; /* data-path cmd queue */
+ struct list_head requeue; /* tasks needing another run */
+ struct work_struct xmitwork; /* per-conn. xmit workqueue */
++ /* recv */
++ struct work_struct recvwork;
+ unsigned long flags; /* ISCSI_CONN_FLAGs */
+
+ /* negotiated params */
+@@ -452,8 +454,10 @@ extern int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ extern int iscsi_conn_get_addr_param(struct sockaddr_storage *addr,
+ enum iscsi_param param, char *buf);
+ extern void iscsi_suspend_tx(struct iscsi_conn *conn);
++extern void iscsi_suspend_rx(struct iscsi_conn *conn);
+ extern void iscsi_suspend_queue(struct iscsi_conn *conn);
+-extern void iscsi_conn_queue_work(struct iscsi_conn *conn);
++extern void iscsi_conn_queue_xmit(struct iscsi_conn *conn);
++extern void iscsi_conn_queue_recv(struct iscsi_conn *conn);
+
+ #define iscsi_conn_printk(prefix, _c, fmt, a...) \
+ iscsi_cls_conn_printk(prefix, ((struct iscsi_conn *)_c)->cls_conn, \
+diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
+index 86be4a92b67bf..a96b7d2770e15 100644
+--- a/include/uapi/rdma/mlx5-abi.h
++++ b/include/uapi/rdma/mlx5-abi.h
+@@ -104,6 +104,7 @@ enum mlx5_ib_alloc_ucontext_resp_mask {
+ MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_ECE = 1UL << 2,
+ MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_SQD2RTS = 1UL << 3,
+ MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_REAL_TIME_TS = 1UL << 4,
++ MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_MKEY_UPDATE_TAG = 1UL << 5,
+ };
+
+ enum mlx5_user_cmds_supp_uhw {
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 15a6f1e93e5af..096b6d14f40d7 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -4215,6 +4215,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ return -EAGAIN;
+ }
+
++ req->cqe.res = iov_iter_count(&s->iter);
+ /*
+ * Now retry read with the IOCB_WAITQ parts set in the iocb. If
+ * we get -EIOCBQUEUED, then we'll get a notification when the
+@@ -5849,10 +5850,13 @@ static int io_setup_async_msg(struct io_kiocb *req,
+ async_msg = req->async_data;
+ req->flags |= REQ_F_NEED_CLEANUP;
+ memcpy(async_msg, kmsg, sizeof(*kmsg));
+- async_msg->msg.msg_name = &async_msg->addr;
++ if (async_msg->msg.msg_name)
++ async_msg->msg.msg_name = &async_msg->addr;
+ /* if were using fast_iov, set it to the new one */
+- if (!async_msg->free_iov)
+- async_msg->msg.msg_iter.iov = async_msg->fast_iov;
++ if (!kmsg->free_iov) {
++ size_t fast_idx = kmsg->msg.msg_iter.iov - kmsg->fast_iov;
++ async_msg->msg.msg_iter.iov = &async_msg->fast_iov[fast_idx];
++ }
+
+ return -EAGAIN;
+ }
+@@ -9480,6 +9484,7 @@ static int io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
+
+ UNIXCB(skb).fp = fpl;
+ skb->sk = sk;
++ skb->scm_io_uring = 1;
+ skb->destructor = unix_destruct_scm;
+ refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+ }
+@@ -10706,12 +10711,6 @@ static void io_flush_apoll_cache(struct io_ring_ctx *ctx)
+ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ {
+ io_sq_thread_finish(ctx);
+-
+- if (ctx->mm_account) {
+- mmdrop(ctx->mm_account);
+- ctx->mm_account = NULL;
+- }
+-
+ io_rsrc_refs_drop(ctx);
+ /* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
+ io_wait_rsrc_data(ctx->buf_data);
+@@ -10750,6 +10749,10 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ #endif
+ WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
+
++ if (ctx->mm_account) {
++ mmdrop(ctx->mm_account);
++ ctx->mm_account = NULL;
++ }
+ io_mem_free(ctx->rings);
+ io_mem_free(ctx->sq_sqes);
+
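Among the io_uring fixes above, the io_setup_async_msg() change is a
classic self-referential-copy bug: memcpy() duplicates a struct whose
iterator may point into the source's embedded fast_iov[] array, and the
old code unconditionally reset the copy's iterator to slot zero instead of
carrying the offset across. A userspace sketch of the corrected pattern;
msg_state and heap_iov are stand-ins for the patch's io_async_msghdr and
free_iov:

#include <stddef.h>
#include <sys/uio.h>

#define FAST_IOV 8

struct msg_state {
        struct iovec  fast_iov[FAST_IOV];
        struct iovec *heap_iov;         /* NULL when fast_iov is in use */
        struct iovec *cur;              /* current iterator position */
};

static void clone_state(struct msg_state *dst, const struct msg_state *src)
{
        *dst = *src;
        if (!src->heap_iov) {
                size_t idx = (size_t)(src->cur - src->fast_iov);

                dst->cur = &dst->fast_iov[idx]; /* same offset, new array */
        }
}
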
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 12ad7860bb88f..83370fef88796 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -1746,6 +1746,7 @@ out_filesystem:
+ unregister_filesystem(&mqueue_fs_type);
+ out_sysctl:
+ kmem_cache_destroy(mqueue_inode_cachep);
++ retire_mq_sysctls(&init_ipc_ns);
+ return error;
+ }
+
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 0c33e04c293ad..73121c0185cea 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1016,7 +1016,6 @@ static void audit_reset_context(struct audit_context *ctx)
+ WARN_ON(!list_empty(&ctx->killed_trees));
+ audit_free_module(ctx);
+ ctx->fds[0] = -1;
+- audit_proctitle_free(ctx);
+ ctx->type = 0; /* reset last for audit_free_*() */
+ }
+
+@@ -1102,6 +1101,7 @@ static inline void audit_free_context(struct audit_context *context)
+ {
+ /* resetting is extra work, but it is likely just noise */
+ audit_reset_context(context);
++ audit_proctitle_free(context);
+ free_tree_refs(context);
+ kfree(context->filterkey);
+ kfree(context);
+@@ -2094,7 +2094,7 @@ void __audit_syscall_exit(int success, long return_code)
+ /* run through both filters to ensure we set the filterkey properly */
+ audit_filter_syscall(current, context);
+ audit_filter_inodes(current, context);
+- if (context->current_state < AUDIT_STATE_RECORD)
++ if (context->current_state != AUDIT_STATE_RECORD)
+ goto out;
+
+ audit_log_exit();
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index 8ce40fd869f6a..d13ffb00e9813 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -555,11 +555,11 @@ void bpf_local_storage_map_free(struct bpf_local_storage_map *smap,
+ struct bpf_local_storage_elem, map_node))) {
+ if (busy_counter) {
+ migrate_disable();
+- __this_cpu_inc(*busy_counter);
++ this_cpu_inc(*busy_counter);
+ }
+ bpf_selem_unlink(selem, false);
+ if (busy_counter) {
+- __this_cpu_dec(*busy_counter);
++ this_cpu_dec(*busy_counter);
+ migrate_enable();
+ }
+ cond_resched_rcu();
+diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
+index e9014dc626820..6f290623347e0 100644
+--- a/kernel/bpf/bpf_task_storage.c
++++ b/kernel/bpf/bpf_task_storage.c
+@@ -26,20 +26,20 @@ static DEFINE_PER_CPU(int, bpf_task_storage_busy);
+ static void bpf_task_storage_lock(void)
+ {
+ migrate_disable();
+- __this_cpu_inc(bpf_task_storage_busy);
++ this_cpu_inc(bpf_task_storage_busy);
+ }
+
+ static void bpf_task_storage_unlock(void)
+ {
+- __this_cpu_dec(bpf_task_storage_busy);
++ this_cpu_dec(bpf_task_storage_busy);
+ migrate_enable();
+ }
+
+ static bool bpf_task_storage_trylock(void)
+ {
+ migrate_disable();
+- if (unlikely(__this_cpu_inc_return(bpf_task_storage_busy) != 1)) {
+- __this_cpu_dec(bpf_task_storage_busy);
++ if (unlikely(this_cpu_inc_return(bpf_task_storage_busy) != 1)) {
++ this_cpu_dec(bpf_task_storage_busy);
+ migrate_enable();
+ return false;
+ }
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index eb12d4f705cce..ff4a2c0b14ead 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3120,7 +3120,7 @@ static int btf_struct_resolve(struct btf_verifier_env *env,
+ if (v->next_member) {
+ const struct btf_type *last_member_type;
+ const struct btf_member *last_member;
+- u16 last_member_type_id;
++ u32 last_member_type_id;
+
+ last_member = btf_type_member(v->t) + v->next_member - 1;
+ last_member_type_id = last_member->type;
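
The one-character btf.c fix above widens last_member_type_id from u16 to
u32: BTF type IDs are 32-bit, so storing one in a u16 silently truncated
any ID above 65535 and could resolve a member against the wrong type. The
truncation is easy to demonstrate:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t type_id = 70000;       /* BTF type IDs can exceed 65535 */
        uint16_t truncated = (uint16_t)type_id;

        printf("%u -> %u\n", (unsigned)type_id, (unsigned)truncated);
        return 0;                       /* prints: 70000 -> 4464 */
}
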
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 34dfa45ef4f3b..13526b01f1fd1 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -157,11 +157,12 @@ static void cgroup_bpf_release(struct work_struct *work)
+ mutex_lock(&cgroup_mutex);
+
+ for (atype = 0; atype < ARRAY_SIZE(cgrp->bpf.progs); atype++) {
+- struct list_head *progs = &cgrp->bpf.progs[atype];
+- struct bpf_prog_list *pl, *pltmp;
++ struct hlist_head *progs = &cgrp->bpf.progs[atype];
++ struct bpf_prog_list *pl;
++ struct hlist_node *pltmp;
+
+- list_for_each_entry_safe(pl, pltmp, progs, node) {
+- list_del(&pl->node);
++ hlist_for_each_entry_safe(pl, pltmp, progs, node) {
++ hlist_del(&pl->node);
+ if (pl->prog)
+ bpf_prog_put(pl->prog);
+ if (pl->link)
+@@ -217,12 +218,12 @@ static struct bpf_prog *prog_list_prog(struct bpf_prog_list *pl)
+ /* count number of elements in the list.
+ * it's slow but the list cannot be long
+ */
+-static u32 prog_list_length(struct list_head *head)
++static u32 prog_list_length(struct hlist_head *head)
+ {
+ struct bpf_prog_list *pl;
+ u32 cnt = 0;
+
+- list_for_each_entry(pl, head, node) {
++ hlist_for_each_entry(pl, head, node) {
+ if (!prog_list_prog(pl))
+ continue;
+ cnt++;
+@@ -291,7 +292,7 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ if (cnt > 0 && !(p->bpf.flags[atype] & BPF_F_ALLOW_MULTI))
+ continue;
+
+- list_for_each_entry(pl, &p->bpf.progs[atype], node) {
++ hlist_for_each_entry(pl, &p->bpf.progs[atype], node) {
+ if (!prog_list_prog(pl))
+ continue;
+
+@@ -342,7 +343,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
+ cgroup_bpf_get(p);
+
+ for (i = 0; i < NR; i++)
+- INIT_LIST_HEAD(&cgrp->bpf.progs[i]);
++ INIT_HLIST_HEAD(&cgrp->bpf.progs[i]);
+
+ INIT_LIST_HEAD(&cgrp->bpf.storages);
+
+@@ -418,7 +419,7 @@ cleanup:
+
+ #define BPF_CGROUP_MAX_PROGS 64
+
+-static struct bpf_prog_list *find_attach_entry(struct list_head *progs,
++static struct bpf_prog_list *find_attach_entry(struct hlist_head *progs,
+ struct bpf_prog *prog,
+ struct bpf_cgroup_link *link,
+ struct bpf_prog *replace_prog,
+@@ -428,12 +429,12 @@ static struct bpf_prog_list *find_attach_entry(struct list_head *progs,
+
+ /* single-attach case */
+ if (!allow_multi) {
+- if (list_empty(progs))
++ if (hlist_empty(progs))
+ return NULL;
+- return list_first_entry(progs, typeof(*pl), node);
++ return hlist_entry(progs->first, typeof(*pl), node);
+ }
+
+- list_for_each_entry(pl, progs, node) {
++ hlist_for_each_entry(pl, progs, node) {
+ if (prog && pl->prog == prog && prog != replace_prog)
+ /* disallow attaching the same prog twice */
+ return ERR_PTR(-EINVAL);
+@@ -444,7 +445,7 @@ static struct bpf_prog_list *find_attach_entry(struct list_head *progs,
+
+ /* direct prog multi-attach w/ replacement case */
+ if (replace_prog) {
+- list_for_each_entry(pl, progs, node) {
++ hlist_for_each_entry(pl, progs, node) {
+ if (pl->prog == replace_prog)
+ /* a match found */
+ return pl;
+@@ -480,7 +481,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
+ struct bpf_cgroup_storage *new_storage[MAX_BPF_CGROUP_STORAGE_TYPE] = {};
+ enum cgroup_bpf_attach_type atype;
+ struct bpf_prog_list *pl;
+- struct list_head *progs;
++ struct hlist_head *progs;
+ int err;
+
+ if (((flags & BPF_F_ALLOW_OVERRIDE) && (flags & BPF_F_ALLOW_MULTI)) ||
+@@ -503,7 +504,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
+ if (!hierarchy_allows_attach(cgrp, atype))
+ return -EPERM;
+
+- if (!list_empty(progs) && cgrp->bpf.flags[atype] != saved_flags)
++ if (!hlist_empty(progs) && cgrp->bpf.flags[atype] != saved_flags)
+ /* Disallow attaching non-overridable on top
+ * of existing overridable in this cgroup.
+ * Disallow attaching multi-prog if overridable or none
+@@ -525,12 +526,22 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
+ if (pl) {
+ old_prog = pl->prog;
+ } else {
++ struct hlist_node *last = NULL;
++
+ pl = kmalloc(sizeof(*pl), GFP_KERNEL);
+ if (!pl) {
+ bpf_cgroup_storages_free(new_storage);
+ return -ENOMEM;
+ }
+- list_add_tail(&pl->node, progs);
++ if (hlist_empty(progs))
++ hlist_add_head(&pl->node, progs);
++ else
++ hlist_for_each(last, progs) {
++ if (last->next)
++ continue;
++ hlist_add_behind(&pl->node, last);
++ break;
++ }
+ }
+
+ pl->prog = prog;
+@@ -556,7 +567,7 @@ cleanup:
+ }
+ bpf_cgroup_storages_free(new_storage);
+ if (!old_prog) {
+- list_del(&pl->node);
++ hlist_del(&pl->node);
+ kfree(pl);
+ }
+ return err;
+@@ -587,7 +598,7 @@ static void replace_effective_prog(struct cgroup *cgrp,
+ struct cgroup_subsys_state *css;
+ struct bpf_prog_array *progs;
+ struct bpf_prog_list *pl;
+- struct list_head *head;
++ struct hlist_head *head;
+ struct cgroup *cg;
+ int pos;
+
+@@ -603,7 +614,7 @@ static void replace_effective_prog(struct cgroup *cgrp,
+ continue;
+
+ head = &cg->bpf.progs[atype];
+- list_for_each_entry(pl, head, node) {
++ hlist_for_each_entry(pl, head, node) {
+ if (!prog_list_prog(pl))
+ continue;
+ if (pl->link == link)
+@@ -637,7 +648,7 @@ static int __cgroup_bpf_replace(struct cgroup *cgrp,
+ enum cgroup_bpf_attach_type atype;
+ struct bpf_prog *old_prog;
+ struct bpf_prog_list *pl;
+- struct list_head *progs;
++ struct hlist_head *progs;
+ bool found = false;
+
+ atype = to_cgroup_bpf_attach_type(link->type);
+@@ -649,7 +660,7 @@ static int __cgroup_bpf_replace(struct cgroup *cgrp,
+ if (link->link.prog->type != new_prog->type)
+ return -EINVAL;
+
+- list_for_each_entry(pl, progs, node) {
++ hlist_for_each_entry(pl, progs, node) {
+ if (pl->link == link) {
+ found = true;
+ break;
+@@ -688,7 +699,7 @@ out_unlock:
+ return ret;
+ }
+
+-static struct bpf_prog_list *find_detach_entry(struct list_head *progs,
++static struct bpf_prog_list *find_detach_entry(struct hlist_head *progs,
+ struct bpf_prog *prog,
+ struct bpf_cgroup_link *link,
+ bool allow_multi)
+@@ -696,14 +707,14 @@ static struct bpf_prog_list *find_detach_entry(struct list_head *progs,
+ struct bpf_prog_list *pl;
+
+ if (!allow_multi) {
+- if (list_empty(progs))
++ if (hlist_empty(progs))
+ /* report error when trying to detach and nothing is attached */
+ return ERR_PTR(-ENOENT);
+
+ /* to maintain backward compatibility NONE and OVERRIDE cgroups
+ * allow detaching with invalid FD (prog==NULL) in legacy mode
+ */
+- return list_first_entry(progs, typeof(*pl), node);
++ return hlist_entry(progs->first, typeof(*pl), node);
+ }
+
+ if (!prog && !link)
+@@ -713,7 +724,7 @@ static struct bpf_prog_list *find_detach_entry(struct list_head *progs,
+ return ERR_PTR(-EINVAL);
+
+ /* find the prog or link and detach it */
+- list_for_each_entry(pl, progs, node) {
++ hlist_for_each_entry(pl, progs, node) {
+ if (pl->prog == prog && pl->link == link)
+ return pl;
+ }
+@@ -737,7 +748,7 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
+ struct cgroup_subsys_state *css;
+ struct bpf_prog_array *progs;
+ struct bpf_prog_list *pl;
+- struct list_head *head;
++ struct hlist_head *head;
+ struct cgroup *cg;
+ int pos;
+
+@@ -754,7 +765,7 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
+ continue;
+
+ head = &cg->bpf.progs[atype];
+- list_for_each_entry(pl, head, node) {
++ hlist_for_each_entry(pl, head, node) {
+ if (!prog_list_prog(pl))
+ continue;
+ if (pl->prog == prog && pl->link == link)
+@@ -793,7 +804,7 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ enum cgroup_bpf_attach_type atype;
+ struct bpf_prog *old_prog;
+ struct bpf_prog_list *pl;
+- struct list_head *progs;
++ struct hlist_head *progs;
+ u32 flags;
+
+ atype = to_cgroup_bpf_attach_type(type);
+@@ -824,9 +835,10 @@ static int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
+ }
+
+ /* now can actually delete it from this cgroup list */
+- list_del(&pl->node);
++ hlist_del(&pl->node);
++
+ kfree(pl);
+- if (list_empty(progs))
++ if (hlist_empty(progs))
+ /* last program was detached, reset flags to zero */
+ cgrp->bpf.flags[atype] = 0;
+ if (old_prog)
+@@ -854,7 +866,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
+ enum bpf_attach_type type = attr->query.attach_type;
+ enum cgroup_bpf_attach_type atype;
+ struct bpf_prog_array *effective;
+- struct list_head *progs;
++ struct hlist_head *progs;
+ struct bpf_prog *prog;
+ int cnt, ret = 0, i;
+ u32 flags;
+@@ -893,7 +905,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
+ u32 id;
+
+ i = 0;
+- list_for_each_entry(pl, progs, node) {
++ hlist_for_each_entry(pl, progs, node) {
+ prog = prog_list_prog(pl);
+ id = prog->aux->id;
+ if (copy_to_user(prog_ids + i, &id, sizeof(id)))
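
The cgroup.c conversion above moves the per-attach-type program lists from
list_head to hlist_head, halving the footprint of the heads array in
struct cgroup_bpf. The trade-off is visible in __cgroup_bpf_attach(): an
hlist has no tail pointer, so appending walks to the last node with
hlist_for_each() before hlist_add_behind(). That is acceptable here because
attachment is capped at BPF_CGROUP_MAX_PROGS (64) and the file already
notes the walk "is slow but the list cannot be long". A simplified model of
the tail append (a real hlist node also carries a pprev back-pointer,
omitted here):

#include <stddef.h>

struct node {
        struct node *next;
};

static void append(struct node **head, struct node *n)
{
        struct node *last;

        n->next = NULL;
        if (!*head) {
                *head = n;              /* empty list: become the head */
                return;
        }
        for (last = *head; last->next; last = last->next)
                ;                       /* walk to the last node */
        last->next = n;
}
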
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index cf44ff50b1f23..be736aa979275 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -822,6 +822,11 @@ struct bpf_prog_pack {
+ unsigned long bitmap[];
+ };
+
++void bpf_jit_fill_hole_with_zero(void *area, unsigned int size)
++{
++ memset(area, 0, size);
++}
++
+ #define BPF_PROG_SIZE_TO_NBITS(size) (round_up(size, BPF_PROG_CHUNK_SIZE) / BPF_PROG_CHUNK_SIZE)
+
+ static size_t bpf_prog_pack_size = -1;
+@@ -892,7 +897,7 @@ static struct bpf_prog_pack *alloc_new_pack(bpf_jit_fill_hole_t bpf_fill_ill_ins
+ return pack;
+ }
+
+-static void *bpf_prog_pack_alloc(u32 size, bpf_jit_fill_hole_t bpf_fill_ill_insns)
++void *bpf_prog_pack_alloc(u32 size, bpf_jit_fill_hole_t bpf_fill_ill_insns)
+ {
+ unsigned int nbits = BPF_PROG_SIZE_TO_NBITS(size);
+ struct bpf_prog_pack *pack;
+@@ -936,7 +941,7 @@ out:
+ return ptr;
+ }
+
+-static void bpf_prog_pack_free(struct bpf_binary_header *hdr)
++void bpf_prog_pack_free(struct bpf_binary_header *hdr)
+ {
+ struct bpf_prog_pack *pack = NULL, *tmp;
+ unsigned int nbits;
+diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
+index 2444bd15cc2d0..fa64b80b8bcab 100644
+--- a/kernel/bpf/dispatcher.c
++++ b/kernel/bpf/dispatcher.c
+@@ -85,12 +85,12 @@ static bool bpf_dispatcher_remove_prog(struct bpf_dispatcher *d,
+ return false;
+ }
+
+-int __weak arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs)
++int __weak arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs)
+ {
+ return -ENOTSUPP;
+ }
+
+-static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image)
++static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image, void *buf)
+ {
+ s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0];
+ int i;
+@@ -99,12 +99,12 @@ static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image)
+ if (d->progs[i].prog)
+ *ipsp++ = (s64)(uintptr_t)d->progs[i].prog->bpf_func;
+ }
+- return arch_prepare_bpf_dispatcher(image, &ips[0], d->num_progs);
++ return arch_prepare_bpf_dispatcher(image, buf, &ips[0], d->num_progs);
+ }
+
+ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs)
+ {
+- void *old, *new;
++ void *old, *new, *tmp;
+ u32 noff;
+ int err;
+
+@@ -117,8 +117,14 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs)
+ }
+
+ new = d->num_progs ? d->image + noff : NULL;
++ tmp = d->num_progs ? d->rw_image + noff : NULL;
+ if (new) {
+- if (bpf_dispatcher_prepare(d, new))
++ /* Prepare the dispatcher in d->rw_image. Then use
++ * bpf_arch_text_copy to update d->image, which is RO+X.
++ */
++ if (bpf_dispatcher_prepare(d, new, tmp))
++ return;
++ if (IS_ERR(bpf_arch_text_copy(new, tmp, PAGE_SIZE / 2)))
+ return;
+ }
+
+@@ -140,9 +146,18 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+
+ mutex_lock(&d->mutex);
+ if (!d->image) {
+- d->image = bpf_jit_alloc_exec_page();
++ d->image = bpf_prog_pack_alloc(PAGE_SIZE, bpf_jit_fill_hole_with_zero);
+ if (!d->image)
+ goto out;
++ d->rw_image = bpf_jit_alloc_exec(PAGE_SIZE);
++ if (!d->rw_image) {
++ u32 size = PAGE_SIZE;
++
++ bpf_arch_text_copy(d->image, &size, sizeof(size));
++ bpf_prog_pack_free((struct bpf_binary_header *)d->image);
++ d->image = NULL;
++ goto out;
++ }
+ bpf_image_ksym_add(d->image, &d->ksym);
+ }
+
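The dispatcher change above splits the image in two: d->rw_image is an
ordinary writable buffer where bpf_dispatcher_prepare() emits the
trampoline, and d->image lives in the read-only+executable prog pack and
is only updated through bpf_arch_text_copy(). The kernel thus never holds
a mapping that is writable and executable at once. A userspace analog of
the stage-then-seal idea (the kernel patches the live RO+X region via
text poking instead of remapping):

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

static void *install_code(const void *code, size_t len)
{
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
                return NULL;
        memcpy(buf, code, len);         /* stage while still writable */
        if (mprotect(buf, len, PROT_READ | PROT_EXEC)) {
                munmap(buf, len);       /* seal failed: give it back */
                return NULL;
        }
        return buf;                     /* never W and X at the same time */
}
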
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 4dd5e0005afa1..e20f3d0e3fc7d 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -162,17 +162,25 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
+ unsigned long *pflags)
+ {
+ unsigned long flags;
++ bool use_raw_lock;
+
+ hash = hash & HASHTAB_MAP_LOCK_MASK;
+
+- migrate_disable();
++ use_raw_lock = htab_use_raw_lock(htab);
++ if (use_raw_lock)
++ preempt_disable();
++ else
++ migrate_disable();
+ if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
+ __this_cpu_dec(*(htab->map_locked[hash]));
+- migrate_enable();
++ if (use_raw_lock)
++ preempt_enable();
++ else
++ migrate_enable();
+ return -EBUSY;
+ }
+
+- if (htab_use_raw_lock(htab))
++ if (use_raw_lock)
+ raw_spin_lock_irqsave(&b->raw_lock, flags);
+ else
+ spin_lock_irqsave(&b->lock, flags);
+@@ -185,13 +193,18 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
+ struct bucket *b, u32 hash,
+ unsigned long flags)
+ {
++ bool use_raw_lock = htab_use_raw_lock(htab);
++
+ hash = hash & HASHTAB_MAP_LOCK_MASK;
+- if (htab_use_raw_lock(htab))
++ if (use_raw_lock)
+ raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+ else
+ spin_unlock_irqrestore(&b->lock, flags);
+ __this_cpu_dec(*(htab->map_locked[hash]));
+- migrate_enable();
++ if (use_raw_lock)
++ preempt_enable();
++ else
++ migrate_enable();
+ }
+
+ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node);
+@@ -1691,8 +1704,11 @@ again_nocopy:
+ /* do not grab the lock unless need it (bucket_cnt > 0). */
+ if (locked) {
+ ret = htab_lock_bucket(htab, b, batch, &flags);
+- if (ret)
+- goto next_batch;
++ if (ret) {
++ rcu_read_unlock();
++ bpf_enable_instrumentation();
++ goto after_loop;
++ }
+ }
+
+ bucket_cnt = 0;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 9853db0ce487b..ed7649b047041 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -584,7 +584,7 @@ BPF_CALL_3(bpf_strncmp, const char *, s1, u32, s1_sz, const char *, s2)
+ return strncmp(s1, s2, s1_sz);
+ }
+
+-const struct bpf_func_proto bpf_strncmp_proto = {
++static const struct bpf_func_proto bpf_strncmp_proto = {
+ .func = bpf_strncmp,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+@@ -1402,7 +1402,7 @@ BPF_CALL_2(bpf_kptr_xchg, void *, map_value, void *, ptr)
+ */
+ #define BPF_PTR_POISON ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
+
+-const struct bpf_func_proto bpf_kptr_xchg_proto = {
++static const struct bpf_func_proto bpf_kptr_xchg_proto = {
+ .func = bpf_kptr_xchg,
+ .gpl_only = false,
+ .ret_type = RET_PTR_TO_BTF_ID_OR_NULL,
+@@ -1468,6 +1468,8 @@ BPF_CALL_4(bpf_dynptr_from_mem, void *, data, u32, size, u64, flags, struct bpf_
+ {
+ int err;
+
++ BTF_TYPE_EMIT(struct bpf_dynptr);
++
+ err = bpf_dynptr_check_size(size);
+ if (err)
+ goto error;
+@@ -1487,7 +1489,7 @@ error:
+ return err;
+ }
+
+-const struct bpf_func_proto bpf_dynptr_from_mem_proto = {
++static const struct bpf_func_proto bpf_dynptr_from_mem_proto = {
+ .func = bpf_dynptr_from_mem,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+@@ -1514,7 +1516,7 @@ BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, struct bpf_dynptr_kern *, src
+ return 0;
+ }
+
+-const struct bpf_func_proto bpf_dynptr_read_proto = {
++static const struct bpf_func_proto bpf_dynptr_read_proto = {
+ .func = bpf_dynptr_read,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+@@ -1542,7 +1544,7 @@ BPF_CALL_5(bpf_dynptr_write, struct bpf_dynptr_kern *, dst, u32, offset, void *,
+ return 0;
+ }
+
+-const struct bpf_func_proto bpf_dynptr_write_proto = {
++static const struct bpf_func_proto bpf_dynptr_write_proto = {
+ .func = bpf_dynptr_write,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+@@ -1570,7 +1572,7 @@ BPF_CALL_3(bpf_dynptr_data, struct bpf_dynptr_kern *, ptr, u32, offset, u32, len
+ return (unsigned long)(ptr->data + ptr->offset + offset);
+ }
+
+-const struct bpf_func_proto bpf_dynptr_data_proto = {
++static const struct bpf_func_proto bpf_dynptr_data_proto = {
+ .func = bpf_dynptr_data,
+ .gpl_only = false,
+ .ret_type = RET_PTR_TO_DYNPTR_MEM_OR_NULL,
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index d334aeb234076..494ba88054e81 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -4361,7 +4361,9 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
+ if (attr->task_fd_query.flags != 0)
+ return -EINVAL;
+
++ rcu_read_lock();
+ task = get_pid_task(find_vpid(pid), PIDTYPE_PID);
++ rcu_read_unlock();
+ if (!task)
+ return -ENOENT;
+
+@@ -5138,7 +5140,7 @@ BPF_CALL_4(bpf_kallsyms_lookup_name, const char *, name, int, name_sz, int, flag
+ return *res ? 0 : -ENOENT;
+ }
+
+-const struct bpf_func_proto bpf_kallsyms_lookup_name_proto = {
++static const struct bpf_func_proto bpf_kallsyms_lookup_name_proto = {
+ .func = bpf_kallsyms_lookup_name,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 93c7675f0c9e7..fe4f4d9d043b4 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -585,7 +585,7 @@ u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_tramp_run_ctx *ru
+
+ run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
+
+- if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
++ if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+ inc_misses_counter(prog);
+ return 0;
+ }
+@@ -620,7 +620,7 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start, struct bpf_tramp_
+ bpf_reset_run_ctx(run_ctx->saved_run_ctx);
+
+ update_prog_stats(prog, start);
+- __this_cpu_dec(*(prog->active));
++ this_cpu_dec(*(prog->active));
+ migrate_enable();
+ rcu_read_unlock();
+ }
+@@ -631,7 +631,7 @@ u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog, struct bpf_tramp_r
+ migrate_disable();
+ might_fault();
+
+- if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
++ if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+ inc_misses_counter(prog);
+ return 0;
+ }
+@@ -647,7 +647,7 @@ void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start,
+ bpf_reset_run_ctx(run_ctx->saved_run_ctx);
+
+ update_prog_stats(prog, start);
+- __this_cpu_dec(*(prog->active));
++ this_cpu_dec(*(prog->active));
+ migrate_enable();
+ rcu_read_unlock_trace();
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 339147061127a..b908ff6e520fe 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -467,25 +467,11 @@ static bool type_is_rdonly_mem(u32 type)
+ return type & MEM_RDONLY;
+ }
+
+-static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
+-{
+- return type == ARG_PTR_TO_SOCK_COMMON;
+-}
+-
+ static bool type_may_be_null(u32 type)
+ {
+ return type & PTR_MAYBE_NULL;
+ }
+
+-static bool may_be_acquire_function(enum bpf_func_id func_id)
+-{
+- return func_id == BPF_FUNC_sk_lookup_tcp ||
+- func_id == BPF_FUNC_sk_lookup_udp ||
+- func_id == BPF_FUNC_skc_lookup_tcp ||
+- func_id == BPF_FUNC_map_lookup_elem ||
+- func_id == BPF_FUNC_ringbuf_reserve;
+-}
+-
+ static bool is_acquire_function(enum bpf_func_id func_id,
+ const struct bpf_map *map)
+ {
+@@ -518,6 +504,26 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
+ func_id == BPF_FUNC_skc_to_tcp_request_sock;
+ }
+
++static bool is_dynptr_acquire_function(enum bpf_func_id func_id)
++{
++ return func_id == BPF_FUNC_dynptr_data;
++}
++
++static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
++ const struct bpf_map *map)
++{
++ int ref_obj_uses = 0;
++
++ if (is_ptr_cast_function(func_id))
++ ref_obj_uses++;
++ if (is_acquire_function(func_id, map))
++ ref_obj_uses++;
++ if (is_dynptr_acquire_function(func_id))
++ ref_obj_uses++;
++
++ return ref_obj_uses > 1;
++}
++
+ static bool is_cmpxchg_insn(const struct bpf_insn *insn)
+ {
+ return BPF_CLASS(insn->code) == BPF_STX &&
+@@ -1086,6 +1092,7 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
+ id = ++env->id_gen;
+ state->refs[new_ofs].id = id;
+ state->refs[new_ofs].insn_idx = insn_idx;
++ state->refs[new_ofs].callback_ref = state->in_callback_fn ? state->frameno : 0;
+
+ return id;
+ }
+@@ -1098,6 +1105,9 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
+ last_idx = state->acquired_refs - 1;
+ for (i = 0; i < state->acquired_refs; i++) {
+ if (state->refs[i].id == ptr_id) {
++ /* Cannot release caller references in callbacks */
++ if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
++ return -EINVAL;
+ if (last_idx && i != last_idx)
+ memcpy(&state->refs[i], &state->refs[last_idx],
+ sizeof(*state->refs));
+@@ -6456,33 +6466,6 @@ static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
+ return true;
+ }
+
+-static bool check_refcount_ok(const struct bpf_func_proto *fn, int func_id)
+-{
+- int count = 0;
+-
+- if (arg_type_may_be_refcounted(fn->arg1_type))
+- count++;
+- if (arg_type_may_be_refcounted(fn->arg2_type))
+- count++;
+- if (arg_type_may_be_refcounted(fn->arg3_type))
+- count++;
+- if (arg_type_may_be_refcounted(fn->arg4_type))
+- count++;
+- if (arg_type_may_be_refcounted(fn->arg5_type))
+- count++;
+-
+- /* A reference acquiring function cannot acquire
+- * another refcounted ptr.
+- */
+- if (may_be_acquire_function(func_id) && count)
+- return false;
+-
+- /* We only support one arg being unreferenced at the moment,
+- * which is sufficient for the helper functions we have right now.
+- */
+- return count <= 1;
+-}
+-
+ static bool check_btf_id_ok(const struct bpf_func_proto *fn)
+ {
+ int i;
+@@ -6506,8 +6489,7 @@ static int check_func_proto(const struct bpf_func_proto *fn, int func_id,
+ {
+ return check_raw_mode_ok(fn) &&
+ check_arg_pair_ok(fn) &&
+- check_btf_id_ok(fn) &&
+- check_refcount_ok(fn, func_id) ? 0 : -EINVAL;
++ check_btf_id_ok(fn) ? 0 : -EINVAL;
+ }
+
+ /* Packet data might have moved, any old PTR_TO_PACKET[_META,_END]
+@@ -6941,10 +6923,17 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ caller->regs[BPF_REG_0] = *r0;
+ }
+
+- /* Transfer references to the caller */
+- err = copy_reference_state(caller, callee);
+- if (err)
+- return err;
++ /* callback_fn frame should have released its own additions to parent's
++ * reference state at this point, or check_reference_leak would
++ * complain, hence it must be the same as the caller. There is no need
++ * to copy it back.
++ */
++ if (!callee->in_callback_fn) {
++ /* Transfer references to the caller */
++ err = copy_reference_state(caller, callee);
++ if (err)
++ return err;
++ }
+
+ *insn_idx = callee->callsite + 1;
+ if (env->log.level & BPF_LOG_LEVEL) {
+@@ -7066,13 +7055,20 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
+ static int check_reference_leak(struct bpf_verifier_env *env)
+ {
+ struct bpf_func_state *state = cur_func(env);
++ bool refs_lingering = false;
+ int i;
+
++ if (state->frameno && !state->in_callback_fn)
++ return 0;
++
+ for (i = 0; i < state->acquired_refs; i++) {
++ if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
++ continue;
+ verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
+ state->refs[i].id, state->refs[i].insn_idx);
++ refs_lingering = true;
+ }
+- return state->acquired_refs ? -EINVAL : 0;
++ return refs_lingering ? -EINVAL : 0;
+ }
+
+ static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
+@@ -7410,6 +7406,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ if (type_may_be_null(regs[BPF_REG_0].type))
+ regs[BPF_REG_0].id = ++env->id_gen;
+
++ if (helper_multiple_ref_obj_use(func_id, meta.map_ptr)) {
++ verbose(env, "verifier internal error: func %s#%d sets ref_obj_id more than once\n",
++ func_id_name(func_id), func_id);
++ return -EFAULT;
++ }
++
+ if (is_ptr_cast_function(func_id)) {
+ /* For release_reference() */
+ regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
+@@ -7422,10 +7424,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ regs[BPF_REG_0].id = id;
+ /* For release_reference() */
+ regs[BPF_REG_0].ref_obj_id = id;
+- } else if (func_id == BPF_FUNC_dynptr_data) {
++ } else if (is_dynptr_acquire_function(func_id)) {
+ int dynptr_id = 0, i;
+
+- /* Find the id of the dynptr we're acquiring a reference to */
++ /* Find the id of the dynptr we're tracking the reference of */
+ for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
+ if (arg_type_is_dynptr(fn->arg_type[i])) {
+ if (dynptr_id) {
+@@ -12260,6 +12262,16 @@ static int do_check(struct bpf_verifier_env *env)
+ return -EINVAL;
+ }
+
++ /* We must do check_reference_leak here before
++ * prepare_func_exit to handle the case when
++ * state->curframe > 0, it may be a callback
++ * function, for which reference_state must
++ * match caller reference state when it exits.
++ */
++ err = check_reference_leak(env);
++ if (err)
++ return err;
++
+ if (state->curframe) {
+ /* exit from nested function */
+ err = prepare_func_exit(env, &env->insn_idx);
+@@ -12269,10 +12281,6 @@ static int do_check(struct bpf_verifier_env *env)
+ continue;
+ }
+
+- err = check_reference_leak(env);
+- if (err)
+- return err;
+-
+ err = check_return_code(env);
+ if (err)
+ return err;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 80c23f48f3b4b..90019724c7191 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -6615,8 +6615,12 @@ struct cgroup *cgroup_get_from_path(const char *path)
+ {
+ struct kernfs_node *kn;
+ struct cgroup *cgrp = ERR_PTR(-ENOENT);
++ struct cgroup *root_cgrp;
+
+- kn = kernfs_walk_and_get(cgrp_dfl_root.cgrp.kn, path);
++ spin_lock_irq(&css_set_lock);
++ root_cgrp = current_cgns_cgroup_from_root(&cgrp_dfl_root);
++ kn = kernfs_walk_and_get(root_cgrp->kn, path);
++ spin_unlock_irq(&css_set_lock);
+ if (!kn)
+ goto out;
+
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 1f3a55297f39d..50bf837571ac8 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -33,6 +33,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
+ #include <linux/kmod.h>
++#include <linux/kthread.h>
+ #include <linux/list.h>
+ #include <linux/mempolicy.h>
+ #include <linux/mm.h>
+@@ -1127,10 +1128,18 @@ static void update_tasks_cpumask(struct cpuset *cs)
+ {
+ struct css_task_iter it;
+ struct task_struct *task;
++ bool top_cs = cs == &top_cpuset;
+
+ css_task_iter_start(&cs->css, 0, &it);
+- while ((task = css_task_iter_next(&it)))
++ while ((task = css_task_iter_next(&it))) {
++ /*
++ * Percpu kthreads in top_cpuset are ignored
++ */
++ if (top_cs && (task->flags & PF_KTHREAD) &&
++ kthread_is_per_cpu(task))
++ continue;
+ set_cpus_allowed_ptr(task, cs->effective_cpus);
++ }
+ css_task_iter_end(&it);
+ }
+
+@@ -2092,12 +2101,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
+ update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ }
+
+- /*
+- * Update cpumask of parent's tasks except when it is the top
+- * cpuset as some system daemons cannot be mapped to other CPUs.
+- */
+- if (parent != &top_cpuset)
+- update_tasks_cpumask(parent);
++ update_tasks_cpumask(parent);
+
+ if (parent->child_ecpus_count)
+ update_sibling_cpumasks(parent, cs, &tmpmask);
+diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
+index 5d03a2ad10661..30187b1d82759 100644
+--- a/kernel/livepatch/transition.c
++++ b/kernel/livepatch/transition.c
+@@ -610,9 +610,23 @@ void klp_reverse_transition(void)
+ /* Called from copy_process() during fork */
+ void klp_copy_process(struct task_struct *child)
+ {
+- child->patch_state = current->patch_state;
+
+- /* TIF_PATCH_PENDING gets copied in setup_thread_stack() */
++ /*
++ * The parent process may have gone through a KLP transition since
++ * the thread flag was copied in setup_thread_stack earlier. Bring
++ * the task flag up to date with the parent here.
++ *
++ * The operation is serialized against all klp_*_transition()
++ * operations by the tasklist_lock. The only exception is
++ * klp_update_patch_state(current), but we cannot race with
++ * that because we are current.
++ */
++ if (test_tsk_thread_flag(current, TIF_PATCH_PENDING))
++ set_tsk_thread_flag(child, TIF_PATCH_PENDING);
++ else
++ clear_tsk_thread_flag(child, TIF_PATCH_PENDING);
++
++ child->patch_state = current->patch_state;
+ }
+
+ /*
+diff --git a/kernel/module/tracking.c b/kernel/module/tracking.c
+index 7f8133044d092..af52cabfe6321 100644
+--- a/kernel/module/tracking.c
++++ b/kernel/module/tracking.c
+@@ -21,6 +21,9 @@ int try_add_tainted_module(struct module *mod)
+
+ module_assert_mutex_or_preempt();
+
++ if (!mod->taints)
++ goto out;
++
+ list_for_each_entry_rcu(mod_taint, &unloaded_tainted_modules, list,
+ lockdep_is_held(&module_mutex)) {
+ if (!strcmp(mod_taint->name, mod->name) &&
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index c25ba442044a6..54a3a19c4c0ba 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3508,15 +3508,16 @@ static void fill_page_cache_func(struct work_struct *work)
+ bnode = (struct kvfree_rcu_bulk_data *)
+ __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+
+- if (bnode) {
+- raw_spin_lock_irqsave(&krcp->lock, flags);
+- pushed = put_cached_bnode(krcp, bnode);
+- raw_spin_unlock_irqrestore(&krcp->lock, flags);
++ if (!bnode)
++ break;
+
+- if (!pushed) {
+- free_page((unsigned long) bnode);
+- break;
+- }
++ raw_spin_lock_irqsave(&krcp->lock, flags);
++ pushed = put_cached_bnode(krcp, bnode);
++ raw_spin_unlock_irqrestore(&krcp->lock, flags);
++
++ if (!pushed) {
++ free_page((unsigned long) bnode);
++ break;
+ }
+ }
+
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index c8ba0fe17267c..d164938528cde 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -641,7 +641,8 @@ static void rcu_read_unlock_special(struct task_struct *t)
+
+ expboost = (t->rcu_blocked_node && READ_ONCE(t->rcu_blocked_node->exp_tasks)) ||
+ (rdp->grpmask & READ_ONCE(rnp->expmask)) ||
+- IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) ||
++ (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
++ ((rdp->grpmask & READ_ONCE(rnp->qsmask)) || t->rcu_blocked_node)) ||
+ (IS_ENABLED(CONFIG_RCU_BOOST) && irqs_were_disabled &&
+ t->rcu_blocked_node);
+ // Need to defer quiescent state until everything is enabled.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 88589d74a892e..af13fdf1d86c3 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1026,6 +1026,22 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
+ .arg1_type = ARG_PTR_TO_CTX,
+ };
+
++#ifdef CONFIG_X86_KERNEL_IBT
++static unsigned long get_entry_ip(unsigned long fentry_ip)
++{
++ u32 instr;
++
++ /* Being extra safe in here in case entry ip is on the page-edge. */
++ if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
++ return fentry_ip;
++ if (is_endbr(instr))
++ fentry_ip -= ENDBR_INSN_SIZE;
++ return fentry_ip;
++}
++#else
++#define get_entry_ip(fentry_ip) fentry_ip
++#endif
++
+ BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
+ {
+ struct kprobe *kp = kprobe_running();
+@@ -2414,13 +2430,13 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
+ }
+
+ static void
+-kprobe_multi_link_handler(struct fprobe *fp, unsigned long entry_ip,
++kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip,
+ struct pt_regs *regs)
+ {
+ struct bpf_kprobe_multi_link *link;
+
+ link = container_of(fp, struct bpf_kprobe_multi_link, fp);
+- kprobe_multi_link_prog_run(link, entry_ip, regs);
++ kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs);
+ }
+
+ static int symbols_cmp_r(const void *a, const void *b, const void *priv)
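
get_entry_ip() above compensates for CONFIG_X86_KERNEL_IBT: with IBT,
functions begin with a 4-byte ENDBR64 instruction, so the address fprobe
hands to the kprobe-multi handler is 4 bytes past the symbol address that
callers expect. The helper peeks at the preceding 4 bytes with
get_kernel_nofault() (safe even if the load would cross into an unmapped
page) and backs up over the ENDBR when it finds one. A simplified sketch,
with a plain load standing in for the nofault read:

#include <stdint.h>
#include <string.h>

#define ENDBR64_INSN 0xfa1e0ff3u        /* bytes f3 0f 1e fa, read LE */

static uintptr_t entry_ip(uintptr_t fentry_ip)
{
        uint32_t instr;

        memcpy(&instr, (const void *)(fentry_ip - sizeof(instr)),
               sizeof(instr));
        if (instr == ENDBR64_INSN)
                fentry_ip -= sizeof(instr);     /* skip back over ENDBR */
        return fentry_ip;
}
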
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4baa99363b166..2efc1ddd8e26e 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1644,6 +1644,18 @@ ftrace_find_tramp_ops_any_other(struct dyn_ftrace *rec, struct ftrace_ops *op_ex
+ static struct ftrace_ops *
+ ftrace_find_tramp_ops_next(struct dyn_ftrace *rec, struct ftrace_ops *ops);
+
++static bool skip_record(struct dyn_ftrace *rec)
++{
++ /*
++ * At boot up, weak functions are set to disable. Function tracing
++ * can be enabled before they are, and they still need to be disabled now.
++ * If the record is disabled, still continue if it is marked as already
++ * enabled (this is needed to keep the accounting working).
++ */
++ return rec->flags & FTRACE_FL_DISABLED &&
++ !(rec->flags & FTRACE_FL_ENABLED);
++}
++
+ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
+ int filter_hash,
+ bool inc)
+@@ -1693,7 +1705,7 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
+ int in_hash = 0;
+ int match = 0;
+
+- if (rec->flags & FTRACE_FL_DISABLED)
++ if (skip_record(rec))
+ continue;
+
+ if (all) {
+@@ -2090,7 +2102,7 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
+
+ ftrace_bug_type = FTRACE_BUG_UNKNOWN;
+
+- if (rec->flags & FTRACE_FL_DISABLED)
++ if (skip_record(rec))
+ return FTRACE_UPDATE_IGNORE;
+
+ /*
+@@ -2205,7 +2217,7 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
+ if (update) {
+ /* If there's no more users, clear all flags */
+ if (!ftrace_rec_count(rec))
+- rec->flags = 0;
++ rec->flags &= FTRACE_FL_DISABLED;
+ else
+ /*
+ * Just disable the record, but keep the ops TRAMP
+@@ -2599,7 +2611,7 @@ void __weak ftrace_replace_code(int mod_flags)
+
+ do_for_each_ftrace_rec(pg, rec) {
+
+- if (rec->flags & FTRACE_FL_DISABLED)
++ if (skip_record(rec))
+ continue;
+
+ failed = __ftrace_replace_code(rec, enable);
+@@ -6037,8 +6049,12 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+
+ if (filter_hash) {
+ orig_hash = &iter->ops->func_hash->filter_hash;
+- if (iter->tr && !list_empty(&iter->tr->mod_trace))
+- iter->hash->flags |= FTRACE_HASH_FL_MOD;
++ if (iter->tr) {
++ if (list_empty(&iter->tr->mod_trace))
++ iter->hash->flags &= ~FTRACE_HASH_FL_MOD;
++ else
++ iter->hash->flags |= FTRACE_HASH_FL_MOD;
++ }
+ } else
+ orig_hash = &iter->ops->func_hash->notrace_hash;
+
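The ftrace.c hunks above all hinge on skip_record(): at boot, weak
functions get FTRACE_FL_DISABLED, but a record that is still accounted as
enabled must keep being processed or the enable/disable bookkeeping goes
out of sync, which is also why ftrace_check_record() now clears flags with
"rec->flags &= FTRACE_FL_DISABLED" instead of zeroing them outright. The
predicate itself is a two-flag test (bit values here are illustrative, not
the kernel's):

#include <stdbool.h>
#include <stdio.h>

#define FL_ENABLED  (1u << 0)
#define FL_DISABLED (1u << 1)

static bool skip_record(unsigned int flags)
{
        /* skip only when disabled AND not currently accounted enabled */
        return (flags & FL_DISABLED) && !(flags & FL_ENABLED);
}

int main(void)
{
        printf("%d\n", skip_record(FL_DISABLED));               /* 1 */
        printf("%d\n", skip_record(FL_DISABLED | FL_ENABLED));  /* 0 */
        return 0;
}
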
+diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
+index 18b0f1cbb947f..80e04a1e19772 100644
+--- a/kernel/trace/kprobe_event_gen_test.c
++++ b/kernel/trace/kprobe_event_gen_test.c
+@@ -35,6 +35,45 @@
+ static struct trace_event_file *gen_kprobe_test;
+ static struct trace_event_file *gen_kretprobe_test;
+
++#define KPROBE_GEN_TEST_FUNC "do_sys_open"
++
++/* X86 */
++#if defined(CONFIG_X86_64) || defined(CONFIG_X86_32)
++#define KPROBE_GEN_TEST_ARG0 "dfd=%ax"
++#define KPROBE_GEN_TEST_ARG1 "filename=%dx"
++#define KPROBE_GEN_TEST_ARG2 "flags=%cx"
++#define KPROBE_GEN_TEST_ARG3 "mode=+4($stack)"
++
++/* ARM64 */
++#elif defined(CONFIG_ARM64)
++#define KPROBE_GEN_TEST_ARG0 "dfd=%x0"
++#define KPROBE_GEN_TEST_ARG1 "filename=%x1"
++#define KPROBE_GEN_TEST_ARG2 "flags=%x2"
++#define KPROBE_GEN_TEST_ARG3 "mode=%x3"
++
++/* ARM */
++#elif defined(CONFIG_ARM)
++#define KPROBE_GEN_TEST_ARG0 "dfd=%r0"
++#define KPROBE_GEN_TEST_ARG1 "filename=%r1"
++#define KPROBE_GEN_TEST_ARG2 "flags=%r2"
++#define KPROBE_GEN_TEST_ARG3 "mode=%r3"
++
++/* RISCV */
++#elif defined(CONFIG_RISCV)
++#define KPROBE_GEN_TEST_ARG0 "dfd=%a0"
++#define KPROBE_GEN_TEST_ARG1 "filename=%a1"
++#define KPROBE_GEN_TEST_ARG2 "flags=%a2"
++#define KPROBE_GEN_TEST_ARG3 "mode=%a3"
++
++/* others */
++#else
++#define KPROBE_GEN_TEST_ARG0 NULL
++#define KPROBE_GEN_TEST_ARG1 NULL
++#define KPROBE_GEN_TEST_ARG2 NULL
++#define KPROBE_GEN_TEST_ARG3 NULL
++#endif
++
++
+ /*
+ * Test to make sure we can create a kprobe event, then add more
+ * fields.
+@@ -58,14 +97,14 @@ static int __init test_gen_kprobe_cmd(void)
+ * fields.
+ */
+ ret = kprobe_event_gen_cmd_start(&cmd, "gen_kprobe_test",
+- "do_sys_open",
+- "dfd=%ax", "filename=%dx");
++ KPROBE_GEN_TEST_FUNC,
++ KPROBE_GEN_TEST_ARG0, KPROBE_GEN_TEST_ARG1);
+ if (ret)
+ goto free;
+
+ /* Use kprobe_event_add_fields to add the rest of the fields */
+
+- ret = kprobe_event_add_fields(&cmd, "flags=%cx", "mode=+4($stack)");
++ ret = kprobe_event_add_fields(&cmd, KPROBE_GEN_TEST_ARG2, KPROBE_GEN_TEST_ARG3);
+ if (ret)
+ goto free;
+
+@@ -128,7 +167,7 @@ static int __init test_gen_kretprobe_cmd(void)
+ * Define the kretprobe event.
+ */
+ ret = kretprobe_event_gen_cmd_start(&cmd, "gen_kretprobe_test",
+- "do_sys_open",
++ KPROBE_GEN_TEST_FUNC,
+ "$retval");
+ if (ret)
+ goto free;
+@@ -206,7 +245,7 @@ static void __exit kprobe_event_gen_test_exit(void)
+ WARN_ON(kprobe_event_delete("gen_kprobe_test"));
+
+ /* Disable the event or you can't remove it */
+- WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr,
++ WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
+ "kprobes",
+ "gen_kretprobe_test", false));
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index d59b6a328b7fe..c3f354cfc5ba1 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -413,6 +413,7 @@ struct rb_irq_work {
+ struct irq_work work;
+ wait_queue_head_t waiters;
+ wait_queue_head_t full_waiters;
++ long wait_index;
+ bool waiters_pending;
+ bool full_waiters_pending;
+ bool wakeup_full;
+@@ -917,12 +918,44 @@ static void rb_wake_up_waiters(struct irq_work *work)
+ struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
+
+ wake_up_all(&rbwork->waiters);
+- if (rbwork->wakeup_full) {
++ if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
+ rbwork->wakeup_full = false;
++ rbwork->full_waiters_pending = false;
+ wake_up_all(&rbwork->full_waiters);
+ }
+ }
+
++/**
++ * ring_buffer_wake_waiters - wake up any waiters on this ring buffer
++ * @buffer: The ring buffer to wake waiters on
++ *
++ * In the case of a file that represents a ring buffer is closing,
++ * it is prudent to wake up any waiters that are on this.
++ */
++void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
++{
++ struct ring_buffer_per_cpu *cpu_buffer;
++ struct rb_irq_work *rbwork;
++
++ if (cpu == RING_BUFFER_ALL_CPUS) {
++
++ /* Wake up individual ones too. One level recursion */
++ for_each_buffer_cpu(buffer, cpu)
++ ring_buffer_wake_waiters(buffer, cpu);
++
++ rbwork = &buffer->irq_work;
++ } else {
++ cpu_buffer = buffer->buffers[cpu];
++ rbwork = &cpu_buffer->irq_work;
++ }
++
++ rbwork->wait_index++;
++ /* make sure the waiters see the new index */
++ smp_wmb();
++
++ rb_wake_up_waiters(&rbwork->work);
++}
++
+ /**
+ * ring_buffer_wait - wait for input to the ring buffer
+ * @buffer: buffer to wait on
+@@ -938,6 +971,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ struct ring_buffer_per_cpu *cpu_buffer;
+ DEFINE_WAIT(wait);
+ struct rb_irq_work *work;
++ long wait_index;
+ int ret = 0;
+
+ /*
+@@ -956,6 +990,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ work = &cpu_buffer->irq_work;
+ }
+
++ wait_index = READ_ONCE(work->wait_index);
+
+ while (true) {
+ if (full)
+@@ -1011,7 +1046,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ nr_pages = cpu_buffer->nr_pages;
+ dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
+ if (!cpu_buffer->shortest_full ||
+- cpu_buffer->shortest_full < full)
++ cpu_buffer->shortest_full > full)
+ cpu_buffer->shortest_full = full;
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ if (!pagebusy &&
+@@ -1020,6 +1055,11 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ }
+
+ schedule();
++
++ /* Make sure to see the new wait index */
++ smp_rmb();
++ if (wait_index != work->wait_index)
++ break;
+ }
+
+ if (full)
+@@ -2608,6 +2648,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ /* Mark the rest of the page with padding */
+ rb_event_set_padding(event);
+
++ /* Make sure the padding is visible before the write update */
++ smp_wmb();
++
+ /* Set the write back to the previous setting */
+ local_sub(length, &tail_page->write);
+ return;
+@@ -2619,6 +2662,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ /* time delta must be non zero */
+ event->time_delta = 1;
+
++ /* Make sure the padding is visible before the tail_page->write update */
++ smp_wmb();
++
+ /* Set write to end of buffer */
+ length = (tail + length) - BUF_PAGE_SIZE;
+ local_sub(length, &tail_page->write);
+@@ -4587,6 +4633,33 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ arch_spin_unlock(&cpu_buffer->lock);
+ local_irq_restore(flags);
+
++ /*
++ * The writer has preempt disable, wait for it. But not forever
++ * Although, 1 second is pretty much "forever"
++ */
++#define USECS_WAIT 1000000
++ for (nr_loops = 0; nr_loops < USECS_WAIT; nr_loops++) {
++ /* If the write is past the end of page, a writer is still updating it */
++ if (likely(!reader || rb_page_write(reader) <= BUF_PAGE_SIZE))
++ break;
++
++ udelay(1);
++
++ /* Get the latest version of the reader write value */
++ smp_rmb();
++ }
++
++ /* The writer is not moving forward? Something is wrong */
++ if (RB_WARN_ON(cpu_buffer, nr_loops == USECS_WAIT))
++ reader = NULL;
++
++ /*
++ * Make sure we see any padding after the write update
++ * (see rb_reset_tail())
++ */
++ smp_rmb();
++
++
+ return reader;
+ }
+
+@@ -5616,7 +5689,15 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
+ unsigned int pos = 0;
+ unsigned int size;
+
+- if (full)
++ /*
++ * If a full page is expected, this can still be returned
++ * if there's been a previous partial read and the
++ * rest of the page can be read and the commit page is off
++ * the reader page.
++ */
++ if (full &&
++ (!read || (len < (commit - read)) ||
++ cpu_buffer->reader_page == cpu_buffer->commit_page))
+ goto out_unlock;
+
+ if (len > (commit - read))
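
The ring_buffer.c changes above introduce wait_index, a generation counter
for waiters: ring_buffer_wake_waiters() bumps the index, issues smp_wmb(),
and wakes everyone, while a sleeper in ring_buffer_wait() re-reads the
index after smp_rmb() and leaves its loop if the generation moved, so
readers blocked on a buffer being closed or disabled do not sleep forever.
A mutex-based userspace analog, where the lock supplies the ordering the
kernel gets from the explicit barriers:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static long wait_index;
static bool data_ready;

static bool wait_for_data(void)
{
        bool ok;
        long idx;

        pthread_mutex_lock(&lock);
        idx = wait_index;               /* snapshot our generation */
        while (!data_ready && idx == wait_index)
                pthread_cond_wait(&cond, &lock);
        ok = data_ready;
        pthread_mutex_unlock(&lock);
        return ok;                      /* false: kicked by wake_waiters() */
}

static void wake_waiters(void)
{
        pthread_mutex_lock(&lock);
        wait_index++;                   /* new generation: waiters bail out */
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
}
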
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b8dd546270750..818fdf24f5ec6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1193,12 +1193,14 @@ void *tracing_cond_snapshot_data(struct trace_array *tr)
+ {
+ void *cond_data = NULL;
+
++ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+
+ if (tr->cond_snapshot)
+ cond_data = tr->cond_snapshot->cond_data;
+
+ arch_spin_unlock(&tr->max_lock);
++ local_irq_enable();
+
+ return cond_data;
+ }
+@@ -1334,9 +1336,11 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ goto fail_unlock;
+ }
+
++ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+ tr->cond_snapshot = cond_snapshot;
+ arch_spin_unlock(&tr->max_lock);
++ local_irq_enable();
+
+ mutex_unlock(&trace_types_lock);
+
+@@ -1363,6 +1367,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
+ {
+ int ret = 0;
+
++ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+
+ if (!tr->cond_snapshot)
+@@ -1373,6 +1378,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
+ }
+
+ arch_spin_unlock(&tr->max_lock);
++ local_irq_enable();
+
+ return ret;
+ }
+@@ -2200,6 +2206,11 @@ static size_t tgid_map_max;
+
+ #define SAVED_CMDLINES_DEFAULT 128
+ #define NO_CMDLINE_MAP UINT_MAX
++/*
++ * Preemption must be disabled before acquiring trace_cmdline_lock.
++ * The various trace_arrays' max_lock must be acquired in a context
++ * where interrupt is disabled.
++ */
+ static arch_spinlock_t trace_cmdline_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ struct saved_cmdlines_buffer {
+ unsigned map_pid_to_cmdline[PID_MAX_DEFAULT+1];
+@@ -2412,7 +2423,11 @@ static int trace_save_cmdline(struct task_struct *tsk)
+ * the lock, but we also don't want to spin
+ * nor do we want to disable interrupts,
+ * so if we miss here, then better luck next time.
++ *
++ * This is called within the scheduler and wake up, so interrupts
++ * had better been disabled and run queue lock been held.
+ */
++ lockdep_assert_preemption_disabled();
+ if (!arch_spin_trylock(&trace_cmdline_lock))
+ return 0;
+
+@@ -5890,9 +5905,11 @@ tracing_saved_cmdlines_size_read(struct file *filp, char __user *ubuf,
+ char buf[64];
+ int r;
+
++ preempt_disable();
+ arch_spin_lock(&trace_cmdline_lock);
+ r = scnprintf(buf, sizeof(buf), "%u\n", savedcmd->cmdline_num);
+ arch_spin_unlock(&trace_cmdline_lock);
++ preempt_enable();
+
+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+ }
+@@ -5917,10 +5934,12 @@ static int tracing_resize_saved_cmdlines(unsigned int val)
+ return -ENOMEM;
+ }
+
++ preempt_disable();
+ arch_spin_lock(&trace_cmdline_lock);
+ savedcmd_temp = savedcmd;
+ savedcmd = s;
+ arch_spin_unlock(&trace_cmdline_lock);
++ preempt_enable();
+ free_saved_cmdlines_buffer(savedcmd_temp);
+
+ return 0;
+@@ -6373,10 +6392,12 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ if (t->use_max_tr) {
++ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+ if (tr->cond_snapshot)
+ ret = -EBUSY;
+ arch_spin_unlock(&tr->max_lock);
++ local_irq_enable();
+ if (ret)
+ goto out;
+ }
+@@ -6407,12 +6428,12 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (tr->current_trace->reset)
+ tr->current_trace->reset(tr);
+
++#ifdef CONFIG_TRACER_MAX_TRACE
++ had_max_tr = tr->current_trace->use_max_tr;
++
+ /* Current trace needs to be nop_trace before synchronize_rcu */
+ tr->current_trace = &nop_trace;
+
+-#ifdef CONFIG_TRACER_MAX_TRACE
+- had_max_tr = tr->allocated_snapshot;
+-
+ if (had_max_tr && !t->use_max_tr) {
+ /*
+ * We need to make sure that the update_max_tr sees that
+@@ -6425,11 +6446,13 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ free_snapshot(tr);
+ }
+
+- if (t->use_max_tr && !had_max_tr) {
++ if (t->use_max_tr && !tr->allocated_snapshot) {
+ ret = tracing_alloc_snapshot_instance(tr);
+ if (ret < 0)
+ goto out;
+ }
++#else
++ tr->current_trace = &nop_trace;
+ #endif
+
+ if (t->init) {
+@@ -7436,10 +7459,12 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ goto out;
+ }
+
++ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+ if (tr->cond_snapshot)
+ ret = -EBUSY;
+ arch_spin_unlock(&tr->max_lock);
++ local_irq_enable();
+ if (ret)
+ goto out;
+
+@@ -8137,6 +8162,12 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
+
+ __trace_array_put(iter->tr);
+
++ iter->wait_index++;
++ /* Make sure the waiters see the new wait_index */
++ smp_wmb();
++
++ ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
++
+ if (info->spare)
+ ring_buffer_free_read_page(iter->array_buffer->buffer,
+ info->spare_cpu, info->spare);
+@@ -8290,6 +8321,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+
+ /* did we read anything? */
+ if (!spd.nr_pages) {
++ long wait_index;
++
+ if (ret)
+ goto out;
+
+@@ -8297,10 +8330,21 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ if ((file->f_flags & O_NONBLOCK) || (flags & SPLICE_F_NONBLOCK))
+ goto out;
+
++ wait_index = READ_ONCE(iter->wait_index);
++
+ ret = wait_on_pipe(iter, iter->tr->buffer_percent);
+ if (ret)
+ goto out;
+
++ /* No need to wait after waking up when tracing is off */
++ if (!tracer_tracing_is_on(iter->tr))
++ goto out;
++
++ /* Make sure we see the new wait_index */
++ smp_rmb();
++ if (wait_index != iter->wait_index)
++ goto out;
++
+ goto again;
+ }
+
+@@ -8311,12 +8355,34 @@ out:
+ return ret;
+ }
+
++/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
++static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++ struct ftrace_buffer_info *info = file->private_data;
++ struct trace_iterator *iter = &info->iter;
++
++ if (cmd)
++ return -ENOIOCTLCMD;
++
++ mutex_lock(&trace_types_lock);
++
++ iter->wait_index++;
++ /* Make sure the waiters see the new wait_index */
++ smp_wmb();
++
++ ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
++
++ mutex_unlock(&trace_types_lock);
++ return 0;
++}
++
+ static const struct file_operations tracing_buffers_fops = {
+ .open = tracing_buffers_open,
+ .read = tracing_buffers_read,
+ .poll = tracing_buffers_poll,
+ .release = tracing_buffers_release,
+ .splice_read = tracing_buffers_splice_read,
++ .unlocked_ioctl = tracing_buffers_ioctl,
+ .llseek = no_llseek,
+ };
+
+@@ -9005,6 +9071,8 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
+ tracer_tracing_off(tr);
+ if (tr->current_trace->stop)
+ tr->current_trace->stop(tr);
++ /* Wake up any waiters */
++ ring_buffer_wake_waiters(buffer, RING_BUFFER_ALL_CPUS);
+ }
+ mutex_unlock(&trace_types_lock);
+ }
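
A recurring shape in the trace.c hunks above is bracketing
arch_spin_lock(&tr->max_lock) with local_irq_disable()/local_irq_enable()
(or preempt_disable() around trace_cmdline_lock). An arch spinlock is the
rawest primitive: it performs no irq or preemption management itself, so
if an interrupt taken on the same CPU can reach the same lock, the holder
must mask interrupts first or the handler spins on a lock that can never
be released. A minimal test-and-set lock makes the property concrete;
tas_lock_t is a toy, not the kernel's arch_spinlock_t:

#include <stdatomic.h>

typedef struct {
        atomic_flag locked;
} tas_lock_t;

#define TAS_LOCK_INIT { ATOMIC_FLAG_INIT }

static void tas_lock(tas_lock_t *l)
{
        /* no irq masking happens here: that is the caller's job */
        while (atomic_flag_test_and_set_explicit(&l->locked,
                                                 memory_order_acquire))
                ;                       /* spin until released */
}

static void tas_unlock(tas_lock_t *l)
{
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
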
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index 4c57fc89fa17b..29a66a9d42ae5 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -16,6 +16,7 @@
+ #include "trace_dynevent.h"
+ #include "trace_probe.h"
+ #include "trace_probe_tmpl.h"
++#include "trace_probe_kernel.h"
+
+ #define EPROBE_EVENT_SYSTEM "eprobes"
+
+@@ -452,29 +453,14 @@ NOKPROBE_SYMBOL(process_fetch_insn)
+ static nokprobe_inline int
+ fetch_store_strlen_user(unsigned long addr)
+ {
+- const void __user *uaddr = (__force const void __user *)addr;
+-
+- return strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
++ return kern_fetch_store_strlen_user(addr);
+ }
+
+ /* Return the length of string -- including null terminal byte */
+ static nokprobe_inline int
+ fetch_store_strlen(unsigned long addr)
+ {
+- int ret, len = 0;
+- u8 c;
+-
+-#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+- if (addr < TASK_SIZE)
+- return fetch_store_strlen_user(addr);
+-#endif
+-
+- do {
+- ret = copy_from_kernel_nofault(&c, (u8 *)addr + len, 1);
+- len++;
+- } while (c && ret == 0 && len < MAX_STRING_SIZE);
+-
+- return (ret < 0) ? ret : len;
++ return kern_fetch_store_strlen(addr);
+ }
+
+ /*
+@@ -484,21 +470,7 @@ fetch_store_strlen(unsigned long addr)
+ static nokprobe_inline int
+ fetch_store_string_user(unsigned long addr, void *dest, void *base)
+ {
+- const void __user *uaddr = (__force const void __user *)addr;
+- int maxlen = get_loc_len(*(u32 *)dest);
+- void *__dest;
+- long ret;
+-
+- if (unlikely(!maxlen))
+- return -ENOMEM;
+-
+- __dest = get_loc_data(dest, base);
+-
+- ret = strncpy_from_user_nofault(__dest, uaddr, maxlen);
+- if (ret >= 0)
+- *(u32 *)dest = make_data_loc(ret, __dest - base);
+-
+- return ret;
++ return kern_fetch_store_string_user(addr, dest, base);
+ }
+
+ /*
+@@ -508,29 +480,7 @@ fetch_store_string_user(unsigned long addr, void *dest, void *base)
+ static nokprobe_inline int
+ fetch_store_string(unsigned long addr, void *dest, void *base)
+ {
+- int maxlen = get_loc_len(*(u32 *)dest);
+- void *__dest;
+- long ret;
+-
+-#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+- if ((unsigned long)addr < TASK_SIZE)
+- return fetch_store_string_user(addr, dest, base);
+-#endif
+-
+- if (unlikely(!maxlen))
+- return -ENOMEM;
+-
+- __dest = get_loc_data(dest, base);
+-
+- /*
+- * Try to get string again, since the string can be changed while
+- * probing.
+- */
+- ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen);
+- if (ret >= 0)
+- *(u32 *)dest = make_data_loc(ret, __dest - base);
+-
+- return ret;
++ return kern_fetch_store_string(addr, dest, base);
+ }
+
+ static nokprobe_inline int
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 5e8c07aef071b..e310052dc83ce 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -17,6 +17,8 @@
+ /* for gfp flag names */
+ #include <linux/trace_events.h>
+ #include <trace/events/mmflags.h>
++#include "trace_probe.h"
++#include "trace_probe_kernel.h"
+
+ #include "trace_synth.h"
+
+@@ -409,6 +411,7 @@ static unsigned int trace_string(struct synth_trace_event *entry,
+ {
+ unsigned int len = 0;
+ char *str_field;
++ int ret;
+
+ if (is_dynamic) {
+ u32 data_offset;
+@@ -417,19 +420,27 @@ static unsigned int trace_string(struct synth_trace_event *entry,
+ data_offset += event->n_u64 * sizeof(u64);
+ data_offset += data_size;
+
+- str_field = (char *)entry + data_offset;
+-
+- len = strlen(str_val) + 1;
+- strscpy(str_field, str_val, len);
++ len = kern_fetch_store_strlen((unsigned long)str_val);
+
+ data_offset |= len << 16;
+ *(u32 *)&entry->fields[*n_u64] = data_offset;
+
++ ret = kern_fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry);
++
+ (*n_u64)++;
+ } else {
+ str_field = (char *)&entry->fields[*n_u64];
+
+- strscpy(str_field, str_val, STR_VAR_LEN_MAX);
++#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++ if ((unsigned long)str_val < TASK_SIZE)
++ ret = strncpy_from_user_nofault(str_field, str_val, STR_VAR_LEN_MAX);
++ else
++#endif
++ ret = strncpy_from_kernel_nofault(str_field, str_val, STR_VAR_LEN_MAX);
++
++ if (ret < 0)
++ strcpy(str_field, FAULT_STRING);
++
+ (*n_u64) += STR_VAR_LEN_MAX / sizeof(u64);
+ }
+
+@@ -462,7 +473,7 @@ static notrace void trace_event_raw_event_synth(void *__data,
+ val_idx = var_ref_idx[field_pos];
+ str_val = (char *)(long)var_ref_vals[val_idx];
+
+- len = strlen(str_val) + 1;
++ len = kern_fetch_store_strlen((unsigned long)str_val);
+
+ fields_size += len;
+ }
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index a245ea673715d..656e2c239c532 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -20,6 +20,7 @@
+ #include "trace_kprobe_selftest.h"
+ #include "trace_probe.h"
+ #include "trace_probe_tmpl.h"
++#include "trace_probe_kernel.h"
+
+ #define KPROBE_EVENT_SYSTEM "kprobes"
+ #define KRETPROBE_MAXACTIVE_MAX 4096
+@@ -1219,29 +1220,14 @@ static const struct file_operations kprobe_profile_ops = {
+ static nokprobe_inline int
+ fetch_store_strlen_user(unsigned long addr)
+ {
+- const void __user *uaddr = (__force const void __user *)addr;
+-
+- return strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
++ return kern_fetch_store_strlen_user(addr);
+ }
+
+ /* Return the length of string -- including null terminal byte */
+ static nokprobe_inline int
+ fetch_store_strlen(unsigned long addr)
+ {
+- int ret, len = 0;
+- u8 c;
+-
+-#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+- if (addr < TASK_SIZE)
+- return fetch_store_strlen_user(addr);
+-#endif
+-
+- do {
+- ret = copy_from_kernel_nofault(&c, (u8 *)addr + len, 1);
+- len++;
+- } while (c && ret == 0 && len < MAX_STRING_SIZE);
+-
+- return (ret < 0) ? ret : len;
++ return kern_fetch_store_strlen(addr);
+ }
+
+ /*
+@@ -1251,21 +1237,7 @@ fetch_store_strlen(unsigned long addr)
+ static nokprobe_inline int
+ fetch_store_string_user(unsigned long addr, void *dest, void *base)
+ {
+- const void __user *uaddr = (__force const void __user *)addr;
+- int maxlen = get_loc_len(*(u32 *)dest);
+- void *__dest;
+- long ret;
+-
+- if (unlikely(!maxlen))
+- return -ENOMEM;
+-
+- __dest = get_loc_data(dest, base);
+-
+- ret = strncpy_from_user_nofault(__dest, uaddr, maxlen);
+- if (ret >= 0)
+- *(u32 *)dest = make_data_loc(ret, __dest - base);
+-
+- return ret;
++ return kern_fetch_store_string_user(addr, dest, base);
+ }
+
+ /*
+@@ -1275,29 +1247,7 @@ fetch_store_string_user(unsigned long addr, void *dest, void *base)
+ static nokprobe_inline int
+ fetch_store_string(unsigned long addr, void *dest, void *base)
+ {
+- int maxlen = get_loc_len(*(u32 *)dest);
+- void *__dest;
+- long ret;
+-
+-#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+- if ((unsigned long)addr < TASK_SIZE)
+- return fetch_store_string_user(addr, dest, base);
+-#endif
+-
+- if (unlikely(!maxlen))
+- return -ENOMEM;
+-
+- __dest = get_loc_data(dest, base);
+-
+- /*
+- * Try to get string again, since the string can be changed while
+- * probing.
+- */
+- ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen);
+- if (ret >= 0)
+- *(u32 *)dest = make_data_loc(ret, __dest - base);
+-
+- return ret;
++ return kern_fetch_store_string(addr, dest, base);
+ }
+
+ static nokprobe_inline int
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 313439920a8ce..78d536d3ff3db 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1786,8 +1786,9 @@ static int start_per_cpu_kthreads(void)
+ for_each_cpu(cpu, current_mask) {
+ retval = start_kthread(cpu);
+ if (retval) {
++ cpus_read_unlock();
+ stop_per_cpu_kthreads();
+- break;
++ return retval;
+ }
+ }
+
+diff --git a/kernel/trace/trace_probe_kernel.h b/kernel/trace/trace_probe_kernel.h
+new file mode 100644
+index 0000000000000..77dbd9ff97826
+--- /dev/null
++++ b/kernel/trace/trace_probe_kernel.h
+@@ -0,0 +1,115 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __TRACE_PROBE_KERNEL_H_
++#define __TRACE_PROBE_KERNEL_H_
++
++#define FAULT_STRING "(fault)"
++
++/*
++ * This depends on trace_probe.h, but can not include it due to
++ * the way trace_probe_tmpl.h is used by trace_kprobe.c and trace_eprobe.c.
++ * Which means that any other user must include trace_probe.h before including
++ * this file.
++ */
++/* Return the length of string -- including null terminal byte */
++static nokprobe_inline int
++kern_fetch_store_strlen_user(unsigned long addr)
++{
++ const void __user *uaddr = (__force const void __user *)addr;
++ int ret;
++
++ ret = strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
++ /*
++ * strnlen_user_nofault returns zero on fault, insert the
++ * FAULT_STRING when that occurs.
++ */
++ if (ret <= 0)
++ return strlen(FAULT_STRING) + 1;
++ return ret;
++}
++
++/* Return the length of string -- including null terminal byte */
++static nokprobe_inline int
++kern_fetch_store_strlen(unsigned long addr)
++{
++ int ret, len = 0;
++ u8 c;
++
++#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++ if (addr < TASK_SIZE)
++ return kern_fetch_store_strlen_user(addr);
++#endif
++
++ do {
++ ret = copy_from_kernel_nofault(&c, (u8 *)addr + len, 1);
++ len++;
++ } while (c && ret == 0 && len < MAX_STRING_SIZE);
++
++ /* For faults, return enough to hold the FAULT_STRING */
++ return (ret < 0) ? strlen(FAULT_STRING) + 1 : len;
++}
++
++static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base, int len)
++{
++ if (ret >= 0) {
++ *(u32 *)dest = make_data_loc(ret, __dest - base);
++ } else {
++ strscpy(__dest, FAULT_STRING, len);
++ ret = strlen(__dest) + 1;
++ }
++}
++
++/*
++ * Fetch a null-terminated string from user. Caller MUST set *(u32 *)buf
++ * with max length and relative data location.
++ */
++static nokprobe_inline int
++kern_fetch_store_string_user(unsigned long addr, void *dest, void *base)
++{
++ const void __user *uaddr = (__force const void __user *)addr;
++ int maxlen = get_loc_len(*(u32 *)dest);
++ void *__dest;
++ long ret;
++
++ if (unlikely(!maxlen))
++ return -ENOMEM;
++
++ __dest = get_loc_data(dest, base);
++
++ ret = strncpy_from_user_nofault(__dest, uaddr, maxlen);
++ set_data_loc(ret, dest, __dest, base, maxlen);
++
++ return ret;
++}
++
++/*
++ * Fetch a null-terminated string. Caller MUST set *(u32 *)buf with max
++ * length and relative data location.
++ */
++static nokprobe_inline int
++kern_fetch_store_string(unsigned long addr, void *dest, void *base)
++{
++ int maxlen = get_loc_len(*(u32 *)dest);
++ void *__dest;
++ long ret;
++
++#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++ if ((unsigned long)addr < TASK_SIZE)
++ return kern_fetch_store_string_user(addr, dest, base);
++#endif
++
++ if (unlikely(!maxlen))
++ return -ENOMEM;
++
++ __dest = get_loc_data(dest, base);
++
++ /*
++ * Try to get string again, since the string can be changed while
++ * probing.
++ */
++ ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen);
++ set_data_loc(ret, dest, __dest, base, maxlen);
++
++ return ret;
++}
++
++#endif /* __TRACE_PROBE_KERNEL_H_ */
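
kern_fetch_store_strlen() above walks the string one nofault byte copy at a time, counting the terminating NUL, and on fault reports just enough room to store FAULT_STRING rather than an error. A self-contained userspace approximation of that loop, with a trivial stand-in for copy_from_kernel_nofault():

#include <string.h>

#define MAX_STRING_SIZE 1024
#define FAULT_STRING "(fault)"

/* Stand-in for copy_from_kernel_nofault(): copies one byte, returns 0 on
 * success or a negative error on fault (never fails in this sketch). */
static int read_byte(const char *addr, char *out)
{
	*out = *addr;
	return 0;
}

/* Mirrors the loop above: byte-at-a-time bounded strlen that counts the
 * trailing NUL, falling back to room for FAULT_STRING on fault. */
static int bounded_strlen(const char *addr)
{
	int ret, len = 0;
	char c = 0;

	do {
		ret = read_byte(addr + len, &c);
		len++;
	} while (c && ret == 0 && len < MAX_STRING_SIZE);

	return (ret < 0) ? (int)strlen(FAULT_STRING) + 1 : len;
}
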
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index c399ab486557f..8c31c98f0bfce 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -231,6 +231,11 @@ config DEBUG_INFO
+ in the "Debug information" choice below, indicating that debug
+ information will be generated for build targets.
+
++# Clang is known to generate .{s,u}leb128 with symbol deltas with DWARF5, which
++# some targets may not support: https://sourceware.org/bugzilla/show_bug.cgi?id=27215
++config AS_HAS_NON_CONST_LEB128
++ def_bool $(as-instr,.uleb128 .Lexpr_end4 - .Lexpr_start3\n.Lexpr_start3:\n.Lexpr_end4:)
++
+ choice
+ prompt "Debug information"
+ depends on DEBUG_KERNEL
+@@ -253,6 +258,7 @@ config DEBUG_INFO_NONE
+ config DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
+ bool "Rely on the toolchain's implicit default DWARF version"
+ select DEBUG_INFO
++ depends on !CC_IS_CLANG || AS_IS_LLVM || CLANG_VERSION < 140000 || (AS_IS_GNU && AS_VERSION >= 23502 && AS_HAS_NON_CONST_LEB128)
+ help
+ The implicit default version of DWARF debug info produced by a
+ toolchain changes over time.
+@@ -264,7 +270,7 @@ config DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
+ config DEBUG_INFO_DWARF4
+ bool "Generate DWARF Version 4 debuginfo"
+ select DEBUG_INFO
+- depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
++ depends on !CC_IS_CLANG || AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)
+ help
+ Generate DWARF v4 debug info. This requires gcc 4.5+, binutils 2.35.2
+ if using clang without clang's integrated assembler, and gdb 7.0+.
+@@ -276,7 +282,7 @@ config DEBUG_INFO_DWARF4
+ config DEBUG_INFO_DWARF5
+ bool "Generate DWARF Version 5 debuginfo"
+ select DEBUG_INFO
+- depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
++ depends on !CC_IS_CLANG || AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502 && AS_HAS_NON_CONST_LEB128)
+ help
+ Generate DWARF v5 debug info. Requires binutils 2.35.2, gcc 5.0+ (gcc
+ 5.0+ accepts the -gdwarf-5 flag but only had partial support for some
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index dd7f56af9aed3..c9b3d9e5d470f 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -211,10 +211,11 @@ static int ddebug_change(const struct ddebug_query *query,
+ continue;
+ #ifdef CONFIG_JUMP_LABEL
+ if (dp->flags & _DPRINTK_FLAGS_PRINT) {
+- if (!(modifiers->flags & _DPRINTK_FLAGS_PRINT))
++ if (!(newflags & _DPRINTK_FLAGS_PRINT))
+ static_branch_disable(&dp->key.dd_key_true);
+- } else if (modifiers->flags & _DPRINTK_FLAGS_PRINT)
++ } else if (newflags & _DPRINTK_FLAGS_PRINT) {
+ static_branch_enable(&dp->key.dd_key_true);
++ }
+ #endif
+ dp->flags = newflags;
+ v4pr_info("changed %s:%d [%s]%s =%s\n",
+@@ -383,10 +384,6 @@ static int ddebug_parse_query(char *words[], int nwords,
+ return -EINVAL;
+ }
+
+- if (modname)
+- /* support $modname.dyndbg=<multiple queries> */
+- query->module = modname;
+-
+ for (i = 0; i < nwords; i += 2) {
+ char *keyword = words[i];
+ char *arg = words[i+1];
+@@ -427,6 +424,13 @@ static int ddebug_parse_query(char *words[], int nwords,
+ if (rc)
+ return rc;
+ }
++ if (!query->module && modname)
++ /*
++ * support $modname.dyndbg=<multiple queries>, when
++ * not given in the query itself
++ */
++ query->module = modname;
++
+ vpr_info_dq(query, "parsed");
+ return 0;
+ }
+@@ -553,35 +557,6 @@ static int ddebug_exec_queries(char *query, const char *modname)
+ return nfound;
+ }
+
+-/**
+- * dynamic_debug_exec_queries - select and change dynamic-debug prints
+- * @query: query-string described in admin-guide/dynamic-debug-howto
+- * @modname: string containing module name, usually &module.mod_name
+- *
+- * This uses the >/proc/dynamic_debug/control reader, allowing module
+- * authors to modify their dynamic-debug callsites. The modname is
+- * canonically struct module.mod_name, but can also be null or a
+- * module-wildcard, for example: "drm*".
+- */
+-int dynamic_debug_exec_queries(const char *query, const char *modname)
+-{
+- int rc;
+- char *qry; /* writable copy of query */
+-
+- if (!query) {
+- pr_err("non-null query/command string expected\n");
+- return -EINVAL;
+- }
+- qry = kstrndup(query, PAGE_SIZE, GFP_KERNEL);
+- if (!qry)
+- return -ENOMEM;
+-
+- rc = ddebug_exec_queries(qry, modname);
+- kfree(qry);
+- return rc;
+-}
+-EXPORT_SYMBOL_GPL(dynamic_debug_exec_queries);
+-
+ #define PREFIX_SIZE 64
+
+ static int remaining(int wrote)
+diff --git a/lib/once.c b/lib/once.c
+index 59149bf3bfb4a..351f66aad310a 100644
+--- a/lib/once.c
++++ b/lib/once.c
+@@ -66,3 +66,33 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ once_disable_jump(once_key, mod);
+ }
+ EXPORT_SYMBOL(__do_once_done);
++
++static DEFINE_MUTEX(once_mutex);
++
++bool __do_once_slow_start(bool *done)
++ __acquires(once_mutex)
++{
++ mutex_lock(&once_mutex);
++ if (*done) {
++ mutex_unlock(&once_mutex);
++ /* Keep sparse happy by restoring an even lock count on
++ * this mutex. In case we return here, we don't call into
++ * __do_once_done but return early in the DO_ONCE_SLOW() macro.
++ */
++ __acquire(once_mutex);
++ return false;
++ }
++
++ return true;
++}
++EXPORT_SYMBOL(__do_once_slow_start);
++
++void __do_once_slow_done(bool *done, struct static_key_true *once_key,
++ struct module *mod)
++ __releases(once_mutex)
++{
++ *done = true;
++ mutex_unlock(&once_mutex);
++ once_disable_jump(once_key, mod);
++}
++EXPORT_SYMBOL(__do_once_slow_done);
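
__do_once_slow_start()/__do_once_slow_done() are the sleepable variant of run-once: take a mutex, re-check the done flag, run the payload with the mutex held, then set the flag and unlock. A compact pthread sketch of the same contract (illustrative names, minus the sparse annotations and static-key teardown):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t once_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Returns true when the caller won the race and must run the payload;
 * in that case the mutex stays held until do_once_slow_done(). */
static bool do_once_slow_start(bool *done)
{
	pthread_mutex_lock(&once_mutex);
	if (*done) {
		pthread_mutex_unlock(&once_mutex);
		return false;	/* someone else already did the work */
	}
	return true;
}

static void do_once_slow_done(bool *done)
{
	*done = true;
	pthread_mutex_unlock(&once_mutex);
}

/* Usage:
 *	static bool inited;
 *	if (do_once_slow_start(&inited)) {
 *		expensive_init();
 *		do_once_slow_done(&inited);
 *	}
 */
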
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 3c7b9d6dca95d..1d16c6c796386 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -304,6 +304,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
+
+ if (pmd_huge(*pmd)) {
+ ptl = pmd_lock(walk->mm, pmd);
++ if (!pmd_present(*pmd)) {
++ spin_unlock(ptl);
++ return 0;
++ }
++
+ if (pmd_huge(*pmd)) {
+ damon_pmdp_mkold(pmd, walk->mm, addr);
+ spin_unlock(ptl);
+@@ -431,6 +436,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ if (pmd_huge(*pmd)) {
+ ptl = pmd_lock(walk->mm, pmd);
++ if (!pmd_present(*pmd)) {
++ spin_unlock(ptl);
++ return 0;
++ }
++
+ if (!pmd_huge(*pmd)) {
+ spin_unlock(ptl);
+ goto regular_page;
+diff --git a/mm/gup.c b/mm/gup.c
+index 0d500cdfa6e0e..31898908e04b4 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -531,6 +531,18 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
+ if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+ (FOLL_PIN | FOLL_GET)))
+ return ERR_PTR(-EINVAL);
++
++ /*
++ * Considering PTE level hugetlb, like continuous-PTE hugetlb on
++ * ARM64 architecture.
++ */
++ if (is_vm_hugetlb_page(vma)) {
++ page = follow_huge_pmd_pte(vma, address, flags);
++ if (page)
++ return page;
++ return no_page_table(vma, flags);
++ }
++
+ retry:
+ if (unlikely(pmd_bad(*pmd)))
+ return no_page_table(vma, flags);
+@@ -663,7 +675,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
+ if (pmd_none(pmdval))
+ return no_page_table(vma, flags);
+ if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
+- page = follow_huge_pmd(mm, address, pmd, flags);
++ page = follow_huge_pmd_pte(vma, address, flags);
+ if (page)
+ return page;
+ return no_page_table(vma, flags);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index b508efbdcdbed..ff232eaca1b93 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5050,6 +5050,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
+ * unmapped and its refcount is dropped, so just clear pte here.
+ */
+ if (unlikely(!pte_present(pte))) {
++#ifdef CONFIG_PTE_MARKER_UFFD_WP
+ /*
+ * If the pte was wr-protected by uffd-wp in any of the
+ * swap forms, meanwhile the caller does not want to
+@@ -5061,6 +5062,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
+ set_huge_pte_at(mm, address, ptep,
+ make_pte_marker(PTE_MARKER_UFFD_WP));
+ else
++#endif
+ huge_pte_clear(mm, address, ptep, sz);
+ spin_unlock(ptl);
+ continue;
+@@ -5089,11 +5091,13 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
+ tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
+ if (huge_pte_dirty(pte))
+ set_page_dirty(page);
++#ifdef CONFIG_PTE_MARKER_UFFD_WP
+ /* Leave a uffd-wp pte marker if needed */
+ if (huge_pte_uffd_wp(pte) &&
+ !(zap_flags & ZAP_FLAG_DROP_MARKER))
+ set_huge_pte_at(mm, address, ptep,
+ make_pte_marker(PTE_MARKER_UFFD_WP));
++#endif
+ hugetlb_count_sub(pages_per_huge_page(h), mm);
+ page_remove_rmap(page, vma, true);
+
+@@ -5463,7 +5467,6 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_area_struct *vma,
+ unsigned long addr,
+ unsigned long reason)
+ {
+- vm_fault_t ret;
+ u32 hash;
+ struct vm_fault vmf = {
+ .vma = vma,
+@@ -5481,18 +5484,14 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_area_struct *vma,
+ };
+
+ /*
+- * hugetlb_fault_mutex and i_mmap_rwsem must be
+- * dropped before handling userfault. Reacquire
+- * after handling fault to make calling code simpler.
++ * vma_lock and hugetlb_fault_mutex must be dropped before handling
++ * userfault. Also mmap_lock will be dropped during handling
++ * userfault, any vma operation should be careful from here.
+ */
+ hash = hugetlb_fault_mutex_hash(mapping, idx);
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ i_mmap_unlock_read(mapping);
+- ret = handle_userfault(&vmf, reason);
+- i_mmap_lock_read(mapping);
+- mutex_lock(&hugetlb_fault_mutex_table[hash]);
+-
+- return ret;
++ return handle_userfault(&vmf, reason);
+ }
+
+ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+@@ -5510,6 +5509,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ spinlock_t *ptl;
+ unsigned long haddr = address & huge_page_mask(h);
+ bool new_page, new_pagecache_page = false;
++ u32 hash = hugetlb_fault_mutex_hash(mapping, idx);
+
+ /*
+ * Currently, we are forced to kill the process in the event the
+@@ -5520,7 +5520,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ if (is_vma_resv_set(vma, HPAGE_RESV_UNMAPPED)) {
+ pr_warn_ratelimited("PID %d killed due to inadequate hugepage pool\n",
+ current->pid);
+- return ret;
++ goto out;
+ }
+
+ /*
+@@ -5537,12 +5537,10 @@ retry:
+ page = find_lock_page(mapping, idx);
+ if (!page) {
+ /* Check for page in userfault range */
+- if (userfaultfd_missing(vma)) {
+- ret = hugetlb_handle_userfault(vma, mapping, idx,
++ if (userfaultfd_missing(vma))
++ return hugetlb_handle_userfault(vma, mapping, idx,
+ flags, haddr, address,
+ VM_UFFD_MISSING);
+- goto out;
+- }
+
+ page = alloc_huge_page(vma, haddr, 0);
+ if (IS_ERR(page)) {
+@@ -5602,10 +5600,9 @@ retry:
+ if (userfaultfd_minor(vma)) {
+ unlock_page(page);
+ put_page(page);
+- ret = hugetlb_handle_userfault(vma, mapping, idx,
++ return hugetlb_handle_userfault(vma, mapping, idx,
+ flags, haddr, address,
+ VM_UFFD_MINOR);
+- goto out;
+ }
+ }
+
+@@ -5663,6 +5660,8 @@ retry:
+
+ unlock_page(page);
+ out:
++ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
++ i_mmap_unlock_read(mapping);
+ return ret;
+
+ backout:
+@@ -5761,11 +5760,13 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+
+ entry = huge_ptep_get(ptep);
+ /* PTE markers should be handled the same way as none pte */
+- if (huge_pte_none_mostly(entry)) {
+- ret = hugetlb_no_page(mm, vma, mapping, idx, address, ptep,
++ if (huge_pte_none_mostly(entry))
++ /*
++ * hugetlb_no_page will drop vma lock and hugetlb fault
++ * mutex internally, which make us return immediately.
++ */
++ return hugetlb_no_page(mm, vma, mapping, idx, address, ptep,
+ entry, flags);
+- goto out_mutex;
+- }
+
+ ret = 0;
+
+@@ -6906,12 +6907,13 @@ follow_huge_pd(struct vm_area_struct *vma,
+ }
+
+ struct page * __weak
+-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+- pmd_t *pmd, int flags)
++follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address, int flags)
+ {
++ struct hstate *h = hstate_vma(vma);
++ struct mm_struct *mm = vma->vm_mm;
+ struct page *page = NULL;
+ spinlock_t *ptl;
+- pte_t pte;
++ pte_t *ptep, pte;
+
+ /*
+ * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+@@ -6921,17 +6923,15 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+ return NULL;
+
+ retry:
+- ptl = pmd_lockptr(mm, pmd);
+- spin_lock(ptl);
+- /*
+- * make sure that the address range covered by this pmd is not
+- * unmapped from other threads.
+- */
+- if (!pmd_huge(*pmd))
+- goto out;
+- pte = huge_ptep_get((pte_t *)pmd);
++ ptep = huge_pte_offset(mm, address, huge_page_size(h));
++ if (!ptep)
++ return NULL;
++
++ ptl = huge_pte_lock(h, mm, ptep);
++ pte = huge_ptep_get(ptep);
+ if (pte_present(pte)) {
+- page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
++ page = pte_page(pte) +
++ ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ /*
+ * try_grab_page() should always succeed here, because: a) we
+ * hold the pmd (ptl) lock, and b) we've just checked that the
+@@ -6947,7 +6947,7 @@ retry:
+ } else {
+ if (is_hugetlb_entry_migration(pte)) {
+ spin_unlock(ptl);
+- __migration_entry_wait_huge((pte_t *)pmd, ptl);
++ __migration_entry_wait_huge(ptep, ptl);
+ goto retry;
+ }
+ /*
+diff --git a/mm/memory.c b/mm/memory.c
+index e644f6fad3892..9658a9abf7c0d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1385,10 +1385,12 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *pte,
+ struct zap_details *details, pte_t pteval)
+ {
++#ifdef CONFIG_PTE_MARKER_UFFD_WP
+ if (zap_drop_file_uffd_wp(details))
+ return;
+
+ pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
++#endif
+ }
+
+ static unsigned long zap_pte_range(struct mmu_gather *tlb,
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 3b284b091bb7e..921a16f83c2b8 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1845,7 +1845,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
+ if (!arch_validate_flags(vma->vm_flags)) {
+ error = -EINVAL;
+ if (file)
+- goto unmap_and_free_vma;
++ goto close_and_free_vma;
+ else
+ goto free_vma;
+ }
+@@ -1892,6 +1892,9 @@ out:
+
+ return addr;
+
++close_and_free_vma:
++ if (vma->vm_ops && vma->vm_ops->close)
++ vma->vm_ops->close(vma);
+ unmap_and_free_vma:
+ fput(vma->vm_file);
+ vma->vm_file = NULL;
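
The close_and_free_vma label extends mmap_region()'s unwind ladder: since ->mmap() already succeeded when arch_validate_flags() fails, the fix adds a rung that calls ->close() and then falls through to the existing unmap/free rungs, keeping teardown in reverse order of setup. The ladder shape in isolation, with made-up step names:

/* Error-unwind ladder sketch; each label undoes one setup step and
 * falls through to the earlier ones. All functions here are stubs. */
static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static int step_c(void) { return -1; }
static void teardown_a(void) { }
static void teardown_b(void) { }

static int setup_object(void)
{
	int err;

	err = step_a();
	if (err)
		goto out;
	err = step_b();
	if (err)
		goto undo_a;
	err = step_c();
	if (err)
		goto undo_b;	/* new failure site: unwind b, then a */
	return 0;

undo_b:
	teardown_b();
undo_a:
	teardown_a();
out:
	return err;
}
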
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 0d38d5b637621..6bc74c67407e7 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -222,6 +222,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
+ } else {
+ /* It must be an none page, or what else?.. */
+ WARN_ON_ONCE(!pte_none(oldpte));
++#ifdef CONFIG_PTE_MARKER_UFFD_WP
+ if (unlikely(uffd_wp && !vma_is_anonymous(vma))) {
+ /*
+ * For file-backed mem, we need to be able to
+@@ -233,6 +234,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
+ make_pte_marker(PTE_MARKER_UFFD_WP));
+ pages++;
+ }
++#endif
+ }
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ arch_leave_lazy_mmu_mode();
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 48029a390c65a..8ecc0a18df762 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3370,15 +3370,27 @@ static inline int __get_blocks(struct hci_dev *hdev, struct sk_buff *skb)
+ return DIV_ROUND_UP(skb->len - HCI_ACL_HDR_SIZE, hdev->block_len);
+ }
+
+-static void __check_timeout(struct hci_dev *hdev, unsigned int cnt)
++static void __check_timeout(struct hci_dev *hdev, unsigned int cnt, u8 type)
+ {
+- if (!hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+- /* ACL tx timeout must be longer than maximum
+- * link supervision timeout (40.9 seconds) */
+- if (!cnt && time_after(jiffies, hdev->acl_last_tx +
+- HCI_ACL_TX_TIMEOUT))
+- hci_link_tx_to(hdev, ACL_LINK);
++ unsigned long last_tx;
++
++ if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED))
++ return;
++
++ switch (type) {
++ case LE_LINK:
++ last_tx = hdev->le_last_tx;
++ break;
++ default:
++ last_tx = hdev->acl_last_tx;
++ break;
+ }
++
++ /* tx timeout must be longer than maximum link supervision timeout
++ * (40.9 seconds)
++ */
++ if (!cnt && time_after(jiffies, last_tx + HCI_ACL_TX_TIMEOUT))
++ hci_link_tx_to(hdev, type);
+ }
+
+ /* Schedule SCO */
+@@ -3436,7 +3448,7 @@ static void hci_sched_acl_pkt(struct hci_dev *hdev)
+ struct sk_buff *skb;
+ int quote;
+
+- __check_timeout(hdev, cnt);
++ __check_timeout(hdev, cnt, ACL_LINK);
+
+ while (hdev->acl_cnt &&
+ (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) {
+@@ -3479,8 +3491,6 @@ static void hci_sched_acl_blk(struct hci_dev *hdev)
+ int quote;
+ u8 type;
+
+- __check_timeout(hdev, cnt);
+-
+ BT_DBG("%s", hdev->name);
+
+ if (hdev->dev_type == HCI_AMP)
+@@ -3488,6 +3498,8 @@ static void hci_sched_acl_blk(struct hci_dev *hdev)
+ else
+ type = ACL_LINK;
+
++ __check_timeout(hdev, cnt, type);
++
+ while (hdev->block_cnt > 0 &&
+ (chan = hci_chan_sent(hdev, type, &quote))) {
+ u32 priority = (skb_peek(&chan->data_q))->priority;
+@@ -3561,7 +3573,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+
+ cnt = hdev->le_pkts ? hdev->le_cnt : hdev->acl_cnt;
+
+- __check_timeout(hdev, cnt);
++ __check_timeout(hdev, cnt, LE_LINK);
+
+ tmp = cnt;
+ while (cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 3b4cee67bbd60..a7c0cd2fabfb2 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4033,6 +4033,7 @@ setup_failed:
+ hci_dev_test_flag(hdev, HCI_MGMT) &&
+ hdev->dev_type == HCI_PRIMARY) {
+ ret = hci_powered_update_sync(hdev);
++ mgmt_power_on(hdev, ret);
+ }
+ } else {
+ /* Init failed, cleanup */
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 4e3e0451b08c1..08542dfc2dc53 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -48,6 +48,9 @@ void hci_conn_add_sysfs(struct hci_conn *conn)
+
+ BT_DBG("conn %p", conn);
+
++ if (device_is_registered(&conn->dev))
++ return;
++
+ dev_set_name(&conn->dev, "%s:%d", hdev->name, conn->handle);
+
+ if (device_add(&conn->dev) < 0) {
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 48fbd0ae882bf..0f98c5d8c4de9 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -61,6 +61,9 @@ static void l2cap_send_disconn_req(struct l2cap_chan *chan, int err);
+
+ static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
+ struct sk_buff_head *skbs, u8 event);
++static void l2cap_retrans_timeout(struct work_struct *work);
++static void l2cap_monitor_timeout(struct work_struct *work);
++static void l2cap_ack_timeout(struct work_struct *work);
+
+ static inline u8 bdaddr_type(u8 link_type, u8 bdaddr_type)
+ {
+@@ -476,6 +479,9 @@ struct l2cap_chan *l2cap_chan_create(void)
+ write_unlock(&chan_list_lock);
+
+ INIT_DELAYED_WORK(&chan->chan_timer, l2cap_chan_timeout);
++ INIT_DELAYED_WORK(&chan->retrans_timer, l2cap_retrans_timeout);
++ INIT_DELAYED_WORK(&chan->monitor_timer, l2cap_monitor_timeout);
++ INIT_DELAYED_WORK(&chan->ack_timer, l2cap_ack_timeout);
+
+ chan->state = BT_OPEN;
+
+@@ -3319,10 +3325,6 @@ int l2cap_ertm_init(struct l2cap_chan *chan)
+ chan->rx_state = L2CAP_RX_STATE_RECV;
+ chan->tx_state = L2CAP_TX_STATE_XMIT;
+
+- INIT_DELAYED_WORK(&chan->retrans_timer, l2cap_retrans_timeout);
+- INIT_DELAYED_WORK(&chan->monitor_timer, l2cap_monitor_timeout);
+- INIT_DELAYED_WORK(&chan->ack_timer, l2cap_ack_timeout);
+-
+ skb_queue_head_init(&chan->srej_q);
+
+ err = l2cap_seq_list_init(&chan->srej_list, chan->tx_win);
+@@ -4306,6 +4308,12 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ }
+ }
+
++ chan = l2cap_chan_hold_unless_zero(chan);
++ if (!chan) {
++ err = -EBADSLT;
++ goto unlock;
++ }
++
+ err = 0;
+
+ l2cap_chan_lock(chan);
+@@ -4335,6 +4343,7 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ }
+
+ l2cap_chan_unlock(chan);
++ l2cap_chan_put(chan);
+
+ unlock:
+ mutex_unlock(&conn->chan_lock);
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 4bf4ea6cbb5ee..21e24da4847f0 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -902,7 +902,10 @@ static int rfcomm_sock_shutdown(struct socket *sock, int how)
+ lock_sock(sk);
+ if (!sk->sk_shutdown) {
+ sk->sk_shutdown = SHUTDOWN_MASK;
++
++ release_sock(sk);
+ __rfcomm_sock_close(sk);
++ lock_sock(sk);
+
+ if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
+ !(current->flags & PF_EXITING))
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index e60161bec850a..f16271a7ae2e8 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -274,6 +274,7 @@ static void bcm_can_tx(struct bcm_op *op)
+ struct sk_buff *skb;
+ struct net_device *dev;
+ struct canfd_frame *cf = op->frames + op->cfsiz * op->currframe;
++ int err;
+
+ /* no target device? => exit */
+ if (!op->ifindex)
+@@ -298,11 +299,11 @@ static void bcm_can_tx(struct bcm_op *op)
+ /* send with loopback */
+ skb->dev = dev;
+ can_skb_set_owner(skb, op->sk);
+- can_send(skb, 1);
++ err = can_send(skb, 1);
++ if (!err)
++ op->frames_abs++;
+
+- /* update statistics */
+ op->currframe++;
+- op->frames_abs++;
+
+ /* reached last frame? */
+ if (op->currframe >= op->nframes)
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index bcba61ef5b378..ac63604330036 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1168,8 +1168,8 @@ proto_again:
+ nhoff += sizeof(*vlan);
+ }
+
+- if (dissector_uses_key(flow_dissector,
+- FLOW_DISSECTOR_KEY_NUM_OF_VLANS)) {
++ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_NUM_OF_VLANS) &&
++ !(key_control->flags & FLOW_DIS_ENCAPSULATION)) {
+ struct flow_dissector_key_num_of_vlans *key_nvs;
+
+ key_nvs = skb_flow_dissector_target(flow_dissector,
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 69ac686c7cae3..864cd7ded2ca6 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -435,8 +435,10 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ if (copied + copy > len)
+ copy = len - copied;
+ copy = copy_page_to_iter(page, sge->offset, copy, iter);
+- if (!copy)
+- return copied ? copied : -EFAULT;
++ if (!copy) {
++ copied = copied ? copied : -EFAULT;
++ goto out;
++ }
+
+ copied += copy;
+ if (likely(!peek)) {
+@@ -456,7 +458,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ * didn't copy the entire length lets just break.
+ */
+ if (copy != sge->length)
+- return copied;
++ goto out;
+ sk_msg_iter_var_next(i);
+ }
+
+@@ -478,7 +480,9 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ }
+ msg_rx = sk_psock_peek_msg(psock);
+ }
+-
++out:
++ if (psock->work_state.skb && copied > 0)
++ schedule_work(&psock->work);
+ return copied;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
+diff --git a/net/core/stream.c b/net/core/stream.c
+index 06b36c730ce8a..2ee82115b919a 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -159,7 +159,8 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
+ *timeo_p = current_timeo;
+ }
+ out:
+- remove_wait_queue(sk_sleep(sk), &wait);
++ if (!sock_flag(sk, SOCK_DEAD))
++ remove_wait_queue(sk_sleep(sk), &wait);
+ return err;
+
+ do_error:
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index 7889e1ef7fad6..6e55fae4c6860 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -272,6 +272,10 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ err = -EMSGSIZE;
+ goto out_dev;
+ }
++ if (!size) {
++ err = 0;
++ goto out_dev;
++ }
+
+ hlen = LL_RESERVED_SPACE(dev);
+ tlen = dev->needed_tailroom;
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index ffd57523331fd..405a8c2aea641 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -42,6 +42,8 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
+ oif = inet->mc_index;
+ if (!saddr)
+ saddr = inet->mc_addr;
++ } else if (!oif) {
++ oif = inet->uc_index;
+ }
+ fl4 = &inet->cork.fl.u.ip4;
+ rt = ip_route_connect(fl4, usin->sin_addr.s_addr, saddr, oif,
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 935026f4c807e..170152772d332 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -110,7 +110,10 @@ static struct sk_buff *xfrm4_tunnel_gso_segment(struct xfrm_state *x,
+ struct sk_buff *skb,
+ netdev_features_t features)
+ {
+- return skb_eth_gso_segment(skb, features, htons(ETH_P_IP));
++ __be16 type = x->inner_mode.family == AF_INET6 ? htons(ETH_P_IPV6)
++ : htons(ETH_P_IP);
++
++ return skb_eth_gso_segment(skb, features, type);
+ }
+
+ static struct sk_buff *xfrm4_transport_gso_segment(struct xfrm_state *x,
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index b9d995b5ce24c..f5950a7172d61 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -729,8 +729,8 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ if (likely(remaining > 1))
+ remaining &= ~1U;
+
+- net_get_random_once(table_perturb,
+- INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
++ get_random_slow_once(table_perturb,
++ INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
+ index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);
+
+ offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32);
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index b75cac69bd7e6..7ade04ff972d7 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -83,6 +83,9 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ else
+ oif = NULL;
+
++ if (priv->flags & NFTA_FIB_F_IIF)
++ fl4.flowi4_oif = l3mdev_master_ifindex_rcu(oif);
++
+ if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+ nft_fib_store_result(dest, priv, nft_in(pkt));
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ab03977b65781..83fa8886f8685 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3042,6 +3042,8 @@ int tcp_disconnect(struct sock *sk, int flags)
+ tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ tcp_snd_cwnd_set(tp, TCP_INIT_CWND);
+ tp->snd_cwnd_cnt = 0;
++ tp->is_cwnd_limited = 0;
++ tp->max_packets_out = 0;
+ tp->window_clamp = 0;
+ tp->delivered = 0;
+ tp->delivered_ce = 0;
+@@ -4347,12 +4349,16 @@ static void __tcp_alloc_md5sig_pool(void)
+ * to memory. See smp_rmb() in tcp_get_md5sig_pool()
+ */
+ smp_wmb();
+- tcp_md5sig_pool_populated = true;
++ /* Paired with READ_ONCE() from tcp_alloc_md5sig_pool()
++ * and tcp_get_md5sig_pool().
++ */
++ WRITE_ONCE(tcp_md5sig_pool_populated, true);
+ }
+
+ bool tcp_alloc_md5sig_pool(void)
+ {
+- if (unlikely(!tcp_md5sig_pool_populated)) {
++ /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++ if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
+ mutex_lock(&tcp_md5sig_mutex);
+
+ if (!tcp_md5sig_pool_populated) {
+@@ -4363,7 +4369,8 @@ bool tcp_alloc_md5sig_pool(void)
+
+ mutex_unlock(&tcp_md5sig_mutex);
+ }
+- return tcp_md5sig_pool_populated;
++ /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++ return READ_ONCE(tcp_md5sig_pool_populated);
+ }
+ EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
+
+@@ -4379,7 +4386,8 @@ struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+ {
+ local_bh_disable();
+
+- if (tcp_md5sig_pool_populated) {
++ /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
++ if (READ_ONCE(tcp_md5sig_pool_populated)) {
+ /* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
+ smp_rmb();
+ return this_cpu_ptr(&tcp_md5sig_pool);
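
The tcp_md5sig_pool_populated changes are the lockless check, lock, re-check, publish idiom: the writer orders the pool stores before the flag with smp_wmb() and publishes via WRITE_ONCE(), while lockless readers pair READ_ONCE() with smp_rmb(). With C11 atomics the explicit barriers fold into release/acquire on the flag itself; a sketch under that translation (stand-in names throughout):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool populated;	/* stands in for tcp_md5sig_pool_populated */
static int pool_data;		/* stands in for the per-CPU pool */

static bool ensure_pool(void)
{
	if (!atomic_load_explicit(&populated, memory_order_acquire)) {
		pthread_mutex_lock(&init_mutex);
		if (!atomic_load_explicit(&populated, memory_order_relaxed)) {
			pool_data = 42;	/* initialize the pool... */
			/* release publishes pool_data before the flag */
			atomic_store_explicit(&populated, true,
					      memory_order_release);
		}
		pthread_mutex_unlock(&init_mutex);
	}
	return atomic_load_explicit(&populated, memory_order_acquire);
}

static int *get_pool(void)
{
	/* acquire pairs with the release store above */
	if (atomic_load_explicit(&populated, memory_order_acquire))
		return &pool_data;
	return NULL;
}
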
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 84314de754f87..a16139cacc454 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1875,15 +1875,20 @@ static void tcp_cwnd_validate(struct sock *sk, bool is_cwnd_limited)
+ const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
+ struct tcp_sock *tp = tcp_sk(sk);
+
+- /* Track the maximum number of outstanding packets in each
+- * window, and remember whether we were cwnd-limited then.
++ /* Track the strongest available signal of the degree to which the cwnd
++ * is fully utilized. If cwnd-limited then remember that fact for the
++ * current window. If not cwnd-limited then track the maximum number of
++ * outstanding packets in the current window. (If cwnd-limited then we
++ * chose to not update tp->max_packets_out to avoid an extra else
++ * clause with no functional impact.)
+ */
+- if (!before(tp->snd_una, tp->max_packets_seq) ||
+- tp->packets_out > tp->max_packets_out ||
+- is_cwnd_limited) {
+- tp->max_packets_out = tp->packets_out;
+- tp->max_packets_seq = tp->snd_nxt;
++ if (!before(tp->snd_una, tp->cwnd_usage_seq) ||
++ is_cwnd_limited ||
++ (!tp->is_cwnd_limited &&
++ tp->packets_out > tp->max_packets_out)) {
+ tp->is_cwnd_limited = is_cwnd_limited;
++ tp->max_packets_out = tp->packets_out;
++ tp->cwnd_usage_seq = tp->snd_nxt;
+ }
+
+ if (tcp_is_cwnd_limited(sk)) {
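
The rewritten tcp_cwnd_validate() keeps the strongest utilization signal per window: once is_cwnd_limited latches true it is not overwritten by later, weaker max_packets_out updates until snd_una passes cwnd_usage_seq and a new window begins. Its decision rule, reduced to plain unsigned arithmetic (ignoring the kernel's wraparound-safe before() comparison):

/* Sketch of the per-window "strongest signal" update rule above. */
struct cwnd_usage {
	int is_cwnd_limited;	/* latched for the current window */
	unsigned int max_packets_out;
	unsigned int cwnd_usage_seq;
};

static void track_cwnd_usage(struct cwnd_usage *u, unsigned int snd_una,
			     unsigned int snd_nxt, unsigned int packets_out,
			     int is_cwnd_limited)
{
	/* window rolled over once snd_una reaches the recorded sequence */
	int new_window = !(snd_una < u->cwnd_usage_seq);

	if (new_window || is_cwnd_limited ||
	    (!u->is_cwnd_limited && packets_out > u->max_packets_out)) {
		u->is_cwnd_limited = is_cwnd_limited;
		u->max_packets_out = packets_out;
		u->cwnd_usage_seq = snd_nxt;
	}
}
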
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 3a293838a91db..79d43548279cb 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -145,7 +145,10 @@ static struct sk_buff *xfrm6_tunnel_gso_segment(struct xfrm_state *x,
+ struct sk_buff *skb,
+ netdev_features_t features)
+ {
+- return skb_eth_gso_segment(skb, features, htons(ETH_P_IPV6));
++ __be16 type = x->inner_mode.family == AF_INET ? htons(ETH_P_IP)
++ : htons(ETH_P_IPV6);
++
++ return skb_eth_gso_segment(skb, features, type);
+ }
+
+ static struct sk_buff *xfrm6_transport_gso_segment(struct xfrm_state *x,
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 8970d0b4faeb4..1d7e520d9966c 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -41,6 +41,9 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
+ if (ipv6_addr_type(&fl6->daddr) & IPV6_ADDR_LINKLOCAL) {
+ lookup_flags |= RT6_LOOKUP_F_IFACE;
+ fl6->flowi6_oif = get_ifindex(dev ? dev : pkt->skb->dev);
++ } else if ((priv->flags & NFTA_FIB_F_IIF) &&
++ (netif_is_l3_master(dev) || netif_is_l3_slave(dev))) {
++ fl6->flowi6_oif = dev->ifindex;
+ }
+
+ if (ipv6_addr_type(&fl6->saddr) & IPV6_ADDR_UNICAST)
+@@ -197,7 +200,8 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ if (rt->rt6i_flags & (RTF_REJECT | RTF_ANYCAST | RTF_LOCAL))
+ goto put_rt_err;
+
+- if (oif && oif != rt->rt6i_idev->dev)
++ if (oif && oif != rt->rt6i_idev->dev &&
++ l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) != oif->ifindex)
+ goto put_rt_err;
+
+ nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 9ca25ae503b04..37484c26259d7 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3545,9 +3545,6 @@ static int ieee80211_set_csa_beacon(struct ieee80211_sub_if_data *sdata,
+ case NL80211_IFTYPE_MESH_POINT: {
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+
+- if (params->chandef.width != sdata->vif.bss_conf.chandef.width)
+- return -EINVAL;
+-
+ /* changes into another band are not supported */
+ if (sdata->vif.bss_conf.chandef.chan->band !=
+ params->chandef.chan->band)
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 369aeabb94fe2..8ef19f033773c 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -67,6 +67,7 @@ struct conntrack_gc_work {
+ struct delayed_work dwork;
+ u32 next_bucket;
+ u32 avg_timeout;
++ u32 count;
+ u32 start_time;
+ bool exiting;
+ bool early_drop;
+@@ -85,10 +86,12 @@ static DEFINE_MUTEX(nf_conntrack_mutex);
+ /* clamp timeouts to this value (TCP unacked) */
+ #define GC_SCAN_INTERVAL_CLAMP (300ul * HZ)
+
+-/* large initial bias so that we don't scan often just because we have
+- * three entries with a 1s timeout.
++/* Initial bias pretending we have 100 entries at the upper bound so we don't
++ * wakeup often just because we have three entries with a 1s timeout while still
++ * allowing non-idle machines to wakeup more often when needed.
+ */
+-#define GC_SCAN_INTERVAL_INIT INT_MAX
++#define GC_SCAN_INITIAL_COUNT 100
++#define GC_SCAN_INTERVAL_INIT GC_SCAN_INTERVAL_MAX
+
+ #define GC_SCAN_MAX_DURATION msecs_to_jiffies(10)
+ #define GC_SCAN_EXPIRED_MAX (64000u / HZ)
+@@ -1468,6 +1471,7 @@ static void gc_worker(struct work_struct *work)
+ unsigned int expired_count = 0;
+ unsigned long next_run;
+ s32 delta_time;
++ long count;
+
+ gc_work = container_of(work, struct conntrack_gc_work, dwork.work);
+
+@@ -1477,10 +1481,12 @@ static void gc_worker(struct work_struct *work)
+
+ if (i == 0) {
+ gc_work->avg_timeout = GC_SCAN_INTERVAL_INIT;
++ gc_work->count = GC_SCAN_INITIAL_COUNT;
+ gc_work->start_time = start_time;
+ }
+
+ next_run = gc_work->avg_timeout;
++ count = gc_work->count;
+
+ end_time = start_time + GC_SCAN_MAX_DURATION;
+
+@@ -1500,8 +1506,8 @@ static void gc_worker(struct work_struct *work)
+
+ hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
+ struct nf_conntrack_net *cnet;
+- unsigned long expires;
+ struct net *net;
++ long expires;
+
+ tmp = nf_ct_tuplehash_to_ctrack(h);
+
+@@ -1515,6 +1521,7 @@ static void gc_worker(struct work_struct *work)
+
+ gc_work->next_bucket = i;
+ gc_work->avg_timeout = next_run;
++ gc_work->count = count;
+
+ delta_time = nfct_time_stamp - gc_work->start_time;
+
+@@ -1530,8 +1537,8 @@ static void gc_worker(struct work_struct *work)
+ }
+
+ expires = clamp(nf_ct_expires(tmp), GC_SCAN_INTERVAL_MIN, GC_SCAN_INTERVAL_CLAMP);
++ expires = (expires - (long)next_run) / ++count;
+ next_run += expires;
+- next_run /= 2u;
+
+ if (nf_conntrack_max95 == 0 || gc_worker_skip_ct(tmp))
+ continue;
+@@ -1572,6 +1579,7 @@ static void gc_worker(struct work_struct *work)
+ delta_time = nfct_time_stamp - end_time;
+ if (delta_time > 0 && i < hashsz) {
+ gc_work->avg_timeout = next_run;
++ gc_work->count = count;
+ gc_work->next_bucket = i;
+ next_run = 0;
+ goto early_exit;
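
The avg_timeout/count pair above maintains an incremental mean of the clamped entry expiries, next_run += (expires - next_run) / ++count, seeded as if 100 entries at the maximum interval had already been seen, so a few short-lived entries cannot collapse the scan interval. The arithmetic on its own:

/* Incremental (running) mean with an initial bias, as the GC worker
 * uses above. Seeding count/avg pretends we already saw `bias` samples
 * at `seed`, damping the influence of the first real samples. */
struct biased_avg {
	long avg;
	long count;
};

static void biased_avg_init(struct biased_avg *a, long seed, long bias)
{
	a->avg = seed;
	a->count = bias;
}

static void biased_avg_add(struct biased_avg *a, long sample)
{
	a->avg += (sample - a->avg) / ++a->count;
}

/* Usage mirroring the patch: biased_avg_init(&a, GC_SCAN_INTERVAL_MAX, 100),
 * then biased_avg_add(&a, clamped_expiry) per conntrack entry. */
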
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 6c9d153afbeee..93c596e3b22b9 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -252,10 +252,17 @@ void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
+
+ upcall.mru = OVS_CB(skb)->mru;
+ error = ovs_dp_upcall(dp, skb, key, &upcall, 0);
+- if (unlikely(error))
+- kfree_skb(skb);
+- else
++ switch (error) {
++ case 0:
++ case -EAGAIN:
++ case -ERESTARTSYS:
++ case -EINTR:
+ consume_skb(skb);
++ break;
++ default:
++ kfree_skb(skb);
++ break;
++ }
+ stats_counter = &stats->n_missed;
+ goto out;
+ }
+@@ -551,8 +558,9 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb,
+ out:
+ if (err)
+ skb_tx_error(skb);
+- kfree_skb(user_skb);
+- kfree_skb(nskb);
++ consume_skb(user_skb);
++ consume_skb(nskb);
++
+ return err;
+ }
+
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 73ee2771093d6..d0ff413f697c3 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -166,10 +166,10 @@ void rds_tcp_reset_callbacks(struct socket *sock,
+ */
+ atomic_set(&cp->cp_state, RDS_CONN_RESETTING);
+ wait_event(cp->cp_waitq, !test_bit(RDS_IN_XMIT, &cp->cp_flags));
+- lock_sock(osock->sk);
+ /* reset receive side state for rds_tcp_data_recv() for osock */
+ cancel_delayed_work_sync(&cp->cp_send_w);
+ cancel_delayed_work_sync(&cp->cp_recv_w);
++ lock_sock(osock->sk);
+ if (tc->t_tinc) {
+ rds_inc_put(&tc->t_tinc->ti_inc);
+ tc->t_tinc = NULL;
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index db6b7373d16c3..34964145514e6 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -863,12 +863,17 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ }
+
+ list_del_init(&shkey->key_list);
+- sctp_auth_shkey_release(shkey);
+ list_add(&cur_key->key_list, sh_keys);
+
+- if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+- sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++ if (asoc && asoc->active_key_id == auth_key->sca_keynumber &&
++ sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL)) {
++ list_del_init(&cur_key->key_list);
++ sctp_auth_shkey_release(cur_key);
++ list_add(&shkey->key_list, sh_keys);
++ return -ENOMEM;
++ }
+
++ sctp_auth_shkey_release(shkey);
+ return 0;
+ }
+
+@@ -902,8 +907,13 @@ int sctp_auth_set_active_key(struct sctp_endpoint *ep,
+ return -EINVAL;
+
+ if (asoc) {
++ __u16 active_key_id = asoc->active_key_id;
++
+ asoc->active_key_id = key_id;
+- sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++ if (sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL)) {
++ asoc->active_key_id = active_key_id;
++ return -ENOMEM;
++ }
+ } else
+ ep->active_key_id = key_id;
+
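
Both sctp hunks apply the same rollback discipline: save the prior state (the displaced shkey, or the old active_key_id), attempt the re-key, and on -ENOMEM restore exactly what was saved so the endpoint is left untouched. The generic save/attempt/restore shape, sketched with made-up types:

#include <errno.h>

struct keyring {
	int active_key_id;
};

/* Stand-in for sctp_auth_asoc_init_active_key(); simulate failure. */
static int rekey(struct keyring *kr)
{
	(void)kr;
	return -ENOMEM;
}

static int set_active_key(struct keyring *kr, int key_id)
{
	int old_id = kr->active_key_id;		/* save */
	int err;

	kr->active_key_id = key_id;		/* attempt */
	err = rekey(kr);
	if (err)
		kr->active_key_id = old_id;	/* restore on failure */
	return err;
}
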
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 2206e6f8902d7..c3a0e37705691 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -548,12 +548,6 @@ static void unix_sock_destructor(struct sock *sk)
+
+ skb_queue_purge(&sk->sk_receive_queue);
+
+-#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+- if (u->oob_skb) {
+- kfree_skb(u->oob_skb);
+- u->oob_skb = NULL;
+- }
+-#endif
+ WARN_ON(refcount_read(&sk->sk_wmem_alloc));
+ WARN_ON(!sk_unhashed(sk));
+ WARN_ON(sk->sk_socket);
+@@ -598,6 +592,13 @@ static void unix_release_sock(struct sock *sk, int embrion)
+
+ unix_state_unlock(sk);
+
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++ if (u->oob_skb) {
++ kfree_skb(u->oob_skb);
++ u->oob_skb = NULL;
++ }
++#endif
++
+ wake_up_interruptible_all(&u->peer_wait);
+
+ if (skpair != NULL) {
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index d45d5366115a7..dc27635403932 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -204,6 +204,7 @@ void wait_for_unix_gc(void)
+ /* The external entry point: unix_gc() */
+ void unix_gc(void)
+ {
++ struct sk_buff *next_skb, *skb;
+ struct unix_sock *u;
+ struct unix_sock *next;
+ struct sk_buff_head hitlist;
+@@ -297,11 +298,30 @@ void unix_gc(void)
+
+ spin_unlock(&unix_gc_lock);
+
++ /* We need io_uring to clean its registered files, ignore all io_uring
++ * originated skbs. It's fine as io_uring doesn't keep references to
++ * other io_uring instances and so killing all other files in the cycle
++ * will put all io_uring references forcing it to go through normal
++ * release.path eventually putting registered files.
++ */
++ skb_queue_walk_safe(&hitlist, skb, next_skb) {
++ if (skb->scm_io_uring) {
++ __skb_unlink(skb, &hitlist);
++ skb_queue_tail(&skb->sk->sk_receive_queue, skb);
++ }
++ }
++
+ /* Here we are. Hitlist is filled. Die. */
+ __skb_queue_purge(&hitlist);
+
+ spin_lock(&unix_gc_lock);
+
++ /* There could be io_uring registered files, just push them back to
++ * the inflight list
++ */
++ list_for_each_entry_safe(u, next, &gc_candidates, link)
++ list_move_tail(&u->link, &gc_inflight_list);
++
+ /* All candidates should have been detached by now. */
+ BUG_ON(!list_empty(&gc_candidates));
+
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index ec2c2afbf0d06..3a12aee33e92f 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1342,7 +1342,7 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
+
+ void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
+ {
+- kfree(pkt->buf);
++ kvfree(pkt->buf);
+ kfree(pkt);
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index c7383ede794fc..d5c7a5aa68532 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2389,6 +2389,10 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ switch (iftype) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
++ if (!wdev->links[link].ap.beacon_interval)
++ continue;
++ chandef = wdev->links[link].ap.chandef;
++ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ if (!wdev->u.mesh.beacon_interval)
+ continue;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 7e311420aab9f..e24d62f8883a3 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -355,16 +355,15 @@ static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, u32 max_entr
+ return nb_pkts;
+ }
+
+-u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
++u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 nb_pkts)
+ {
+ struct xdp_sock *xs;
+- u32 nb_pkts;
+
+ rcu_read_lock();
+ if (!list_is_singular(&pool->xsk_tx_list)) {
+ /* Fallback to the non-batched version */
+ rcu_read_unlock();
+- return xsk_tx_peek_release_fallback(pool, max_entries);
++ return xsk_tx_peek_release_fallback(pool, nb_pkts);
+ }
+
+ xs = list_first_or_null_rcu(&pool->xsk_tx_list, struct xdp_sock, tx_list);
+@@ -373,12 +372,7 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
+ goto out;
+ }
+
+- max_entries = xskq_cons_nb_entries(xs->tx, max_entries);
+- nb_pkts = xskq_cons_read_desc_batch(xs->tx, pool, max_entries);
+- if (!nb_pkts) {
+- xs->tx->queue_empty_descs++;
+- goto out;
+- }
++ nb_pkts = xskq_cons_nb_entries(xs->tx, nb_pkts);
+
+ /* This is the backpressure mechanism for the Tx path. Try to
+ * reserve space in the completion queue for all packets, but
+@@ -386,12 +380,18 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
+ * packets. This avoids having to implement any buffering in
+ * the Tx path.
+ */
+- nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, pool->tx_descs, nb_pkts);
++ nb_pkts = xskq_prod_nb_free(pool->cq, nb_pkts);
+ if (!nb_pkts)
+ goto out;
+
+- xskq_cons_release_n(xs->tx, max_entries);
++ nb_pkts = xskq_cons_read_desc_batch(xs->tx, pool, nb_pkts);
++ if (!nb_pkts) {
++ xs->tx->queue_empty_descs++;
++ goto out;
++ }
++
+ __xskq_cons_release(xs->tx);
++ xskq_prod_write_addr_batch(pool->cq, pool->tx_descs, nb_pkts);
+ xs->sk.sk_write_space(&xs->sk);
+
+ out:
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index fb20bf7207cfb..c6fb6b7636582 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -205,6 +205,11 @@ static inline bool xskq_cons_read_desc(struct xsk_queue *q,
+ return false;
+ }
+
++static inline void xskq_cons_release_n(struct xsk_queue *q, u32 cnt)
++{
++ q->cached_cons += cnt;
++}
++
+ static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool,
+ u32 max)
+ {
+@@ -226,6 +231,8 @@ static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q, struct xsk_buff
+ cached_cons++;
+ }
+
++ /* Release valid plus any invalid entries */
++ xskq_cons_release_n(q, cached_cons - q->cached_cons);
+ return nb_entries;
+ }
+
+@@ -291,11 +298,6 @@ static inline void xskq_cons_release(struct xsk_queue *q)
+ q->cached_cons++;
+ }
+
+-static inline void xskq_cons_release_n(struct xsk_queue *q, u32 cnt)
+-{
+- q->cached_cons += cnt;
+-}
+-
+ static inline u32 xskq_cons_present_entries(struct xsk_queue *q)
+ {
+ /* No barriers needed since data is not accessed */
+@@ -350,21 +352,17 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
+ return 0;
+ }
+
+-static inline u32 xskq_prod_reserve_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+- u32 max)
++static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
++ u32 nb_entries)
+ {
+ struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
+- u32 nb_entries, i, cached_prod;
+-
+- nb_entries = xskq_prod_nb_free(q, max);
++ u32 i, cached_prod;
+
+ /* A, matches D */
+ cached_prod = q->cached_prod;
+ for (i = 0; i < nb_entries; i++)
+ ring->desc[cached_prod++ & q->ring_mask] = descs[i].addr;
+ q->cached_prod = cached_prod;
+-
+- return nb_entries;
+ }
+
+ static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
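
The reordered xsk_tx_peek_release_desc_batch() enforces reserve-before-consume: the batch is capped by free completion-queue slots via xskq_prod_nb_free() before any Tx descriptor is read, so a packet is never dequeued without a completion slot already guaranteed. A minimal two-ring sketch of that capping order (hypothetical ring layout, power-of-two size):

struct ring {
	unsigned int prod, cons, mask;	/* size = mask + 1 */
};

static unsigned int ring_entries(const struct ring *r)
{
	return r->prod - r->cons;	/* unsigned wraparound is intended */
}

static unsigned int ring_free(const struct ring *r)
{
	return r->mask + 1 - ring_entries(r);
}

/* How many packets can actually be sent: capped by what the caller
 * wants, then by tx occupancy, then, the backpressure step, by
 * completion-queue space, all *before* anything is consumed. */
static unsigned int tx_batch_size(const struct ring *tx,
				  const struct ring *cq,
				  unsigned int want)
{
	unsigned int n = want;

	if (n > ring_entries(tx))
		n = ring_entries(tx);
	if (n > ring_free(cq))
		n = ring_free(cq);
	return n;
}
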
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index b2f4ec9c537f0..aa5220565763c 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -24,7 +24,8 @@
+ #include "xfrm_inout.h"
+
+ struct xfrm_trans_tasklet {
+- struct tasklet_struct tasklet;
++ struct work_struct work;
++ spinlock_t queue_lock;
+ struct sk_buff_head queue;
+ };
+
+@@ -760,18 +761,22 @@ int xfrm_input_resume(struct sk_buff *skb, int nexthdr)
+ }
+ EXPORT_SYMBOL(xfrm_input_resume);
+
+-static void xfrm_trans_reinject(struct tasklet_struct *t)
++static void xfrm_trans_reinject(struct work_struct *work)
+ {
+- struct xfrm_trans_tasklet *trans = from_tasklet(trans, t, tasklet);
++ struct xfrm_trans_tasklet *trans = container_of(work, struct xfrm_trans_tasklet, work);
+ struct sk_buff_head queue;
+ struct sk_buff *skb;
+
+ __skb_queue_head_init(&queue);
++ spin_lock_bh(&trans->queue_lock);
+ skb_queue_splice_init(&trans->queue, &queue);
++ spin_unlock_bh(&trans->queue_lock);
+
++ local_bh_disable();
+ while ((skb = __skb_dequeue(&queue)))
+ XFRM_TRANS_SKB_CB(skb)->finish(XFRM_TRANS_SKB_CB(skb)->net,
+ NULL, skb);
++ local_bh_enable();
+ }
+
+ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
+@@ -789,8 +794,10 @@ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
+
+ XFRM_TRANS_SKB_CB(skb)->finish = finish;
+ XFRM_TRANS_SKB_CB(skb)->net = net;
++ spin_lock_bh(&trans->queue_lock);
+ __skb_queue_tail(&trans->queue, skb);
+- tasklet_schedule(&trans->tasklet);
++ spin_unlock_bh(&trans->queue_lock);
++ schedule_work(&trans->work);
+ return 0;
+ }
+ EXPORT_SYMBOL(xfrm_trans_queue_net);
+@@ -817,7 +824,8 @@ void __init xfrm_input_init(void)
+ struct xfrm_trans_tasklet *trans;
+
+ trans = &per_cpu(xfrm_trans_tasklet, i);
++ spin_lock_init(&trans->queue_lock);
+ __skb_queue_head_init(&trans->queue);
+- tasklet_setup(&trans->tasklet, xfrm_trans_reinject);
++ INIT_WORK(&trans->work, xfrm_trans_reinject);
+ }
+ }
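
Moving from a tasklet to a work_struct shifts the handler from softirq to process context, so the queue gains its own spinlock, the worker splices all pending skbs out under that lock, and the drain runs with BHs disabled to preserve the old atomicity. The splice-then-drain shape, in a userspace sketch with a pthread mutex standing in for the spinlock:

#include <pthread.h>
#include <stddef.h>

struct item {
	struct item *next;
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *queue_head;	/* singly linked pending list */

/* Producer: link the item in under the lock, then kick the worker. */
static void enqueue(struct item *it)
{
	pthread_mutex_lock(&queue_lock);
	it->next = queue_head;
	queue_head = it;
	pthread_mutex_unlock(&queue_lock);
	/* the schedule_work() equivalent would go here */
}

/* Worker: splice the whole list out under the lock, drain it outside,
 * so the lock is held O(1) regardless of backlog size. */
static void drain(void (*process)(struct item *))
{
	struct item *batch, *it;

	pthread_mutex_lock(&queue_lock);
	batch = queue_head;
	queue_head = NULL;
	pthread_mutex_unlock(&queue_lock);

	while ((it = batch)) {
		batch = it->next;
		process(it);
	}
}
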
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index cb40ff0ff28da..92ad336a83ab5 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -203,6 +203,7 @@ static void ipcomp_free_scratches(void)
+ vfree(*per_cpu_ptr(scratches, i));
+
+ free_percpu(scratches);
++ ipcomp_scratches = NULL;
+ }
+
+ static void * __percpu *ipcomp_alloc_scratches(void)
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index ece44b7350613..2bc08ace38a3b 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -100,8 +100,29 @@ echo-cmd = $(if $($(quiet)cmd_$(1)),\
+ quiet_redirect :=
+ silent_redirect := exec >/dev/null;
+
++# Delete the target on interruption
++#
++# GNU Make automatically deletes the target if it has already been changed by
++# the interrupted recipe. So, you can safely stop the build by Ctrl-C (Make
++# will delete incomplete targets), and resume it later.
++#
++# However, this does not work when the stderr is piped to another program, like
++# $ make >&2 | tee log
++# Make dies with SIGPIPE before cleaning the targets.
++#
++# To address it, we clean the target in signal traps.
++#
++# Make deletes the target when it catches SIGHUP, SIGINT, SIGQUIT, SIGTERM.
++# So, we cover them, and also SIGPIPE just in case.
++#
++# Of course, this is unneeded for phony targets.
++delete-on-interrupt = \
++ $(if $(filter-out $(PHONY), $@), \
++ $(foreach sig, HUP INT QUIT TERM PIPE, \
++ trap 'rm -f $@; trap - $(sig); kill -s $(sig) $$$$' $(sig);))
++
+ # printing commands
+-cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(cmd_$(1))
++cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(delete-on-interrupt) $(cmd_$(1))
+
+ ###
+ # if_changed - execute command if any prerequisite is newer than
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index 7c477ca7dc982..951cc60e5a903 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -85,10 +85,10 @@ $S
+ mkdir -p %{buildroot}/boot
+ %ifarch ia64
+ mkdir -p %{buildroot}/boot/efi
+- cp \$($MAKE image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE
++ cp \$($MAKE -s image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE
+ ln -s efi/vmlinuz-$KERNELRELEASE %{buildroot}/boot/
+ %else
+- cp \$($MAKE image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE
++ cp \$($MAKE -s image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE
+ %endif
+ $M $MAKE %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} modules_install
+ $MAKE %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+diff --git a/scripts/pahole-flags.sh b/scripts/pahole-flags.sh
+index 0d99ef17e4a52..d4f3d63cb4344 100755
+--- a/scripts/pahole-flags.sh
++++ b/scripts/pahole-flags.sh
+@@ -20,4 +20,8 @@ if [ "${pahole_ver}" -ge "122" ]; then
+ extra_paholeopt="${extra_paholeopt} -j"
+ fi
+
++if [ "${pahole_ver}" -ge "124" ]; then
++ extra_paholeopt="${extra_paholeopt} --skip_encoding_btf_enum64"
++fi
++
+ echo ${extra_paholeopt}
+diff --git a/scripts/selinux/install_policy.sh b/scripts/selinux/install_policy.sh
+index 2dccf141241d7..20af56ce245c5 100755
+--- a/scripts/selinux/install_policy.sh
++++ b/scripts/selinux/install_policy.sh
+@@ -78,7 +78,7 @@ cd /etc/selinux/dummy/contexts/files
+ $SF -F file_contexts /
+
+ mounts=`cat /proc/$$/mounts | \
+- egrep "ext[234]|jfs|xfs|reiserfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
++ grep -E "ext[234]|jfs|xfs|reiserfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
+ awk '{ print $2 '}`
+ $SF -F file_contexts $mounts
+
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index bde74fcecee38..3e0fbbd995342 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -750,22 +750,26 @@ int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name,
+ const struct evm_ima_xattr_data *xvalue = xattr_value;
+ int digsig = 0;
+ int result;
++ int err;
+
+ result = ima_protect_xattr(dentry, xattr_name, xattr_value,
+ xattr_value_len);
+ if (result == 1) {
+ if (!xattr_value_len || (xvalue->type >= IMA_XATTR_LAST))
+ return -EINVAL;
++
++ err = validate_hash_algo(dentry, xvalue, xattr_value_len);
++ if (err)
++ return err;
++
+ digsig = (xvalue->type == EVM_IMA_XATTR_DIGSIG);
+ } else if (!strcmp(xattr_name, XATTR_NAME_EVM) && xattr_value_len > 0) {
+ digsig = (xvalue->type == EVM_XATTR_PORTABLE_DIGSIG);
+ }
+ if (result == 1 || evm_revalidate_status(xattr_name)) {
+- result = validate_hash_algo(dentry, xvalue, xattr_value_len);
+- if (result)
+- return result;
+-
+ ima_reset_appraise_flags(d_backing_inode(dentry), digsig);
++ if (result == 1)
++ result = 0;
+ }
+ return result;
+ }
+diff --git a/sound/core/pcm_dmaengine.c b/sound/core/pcm_dmaengine.c
+index af6f717e1e7e6..c6ccb75036aec 100644
+--- a/sound/core/pcm_dmaengine.c
++++ b/sound/core/pcm_dmaengine.c
+@@ -131,12 +131,14 @@ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_set_config_from_dai_data);
+
+ static void dmaengine_pcm_dma_complete(void *arg)
+ {
++ unsigned int new_pos;
+ struct snd_pcm_substream *substream = arg;
+ struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
+
+- prtd->pos += snd_pcm_lib_period_bytes(substream);
+- if (prtd->pos >= snd_pcm_lib_buffer_bytes(substream))
+- prtd->pos = 0;
++ new_pos = prtd->pos + snd_pcm_lib_period_bytes(substream);
++ if (new_pos >= snd_pcm_lib_buffer_bytes(substream))
++ new_pos = 0;
++ prtd->pos = new_pos;
+
+ snd_pcm_period_elapsed(substream);
+ }
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index befa9809ff001..b1632ab432cf7 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -1835,10 +1835,8 @@ static int snd_rawmidi_free(struct snd_rawmidi *rmidi)
+
+ snd_info_free_entry(rmidi->proc_entry);
+ rmidi->proc_entry = NULL;
+- mutex_lock(&register_mutex);
+ if (rmidi->ops && rmidi->ops->dev_unregister)
+ rmidi->ops->dev_unregister(rmidi);
+- mutex_unlock(&register_mutex);
+
+ snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_INPUT]);
+ snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_OUTPUT]);
+diff --git a/sound/core/sound_oss.c b/sound/core/sound_oss.c
+index 7ed0a2a910352..2751bf2ff61bc 100644
+--- a/sound/core/sound_oss.c
++++ b/sound/core/sound_oss.c
+@@ -162,7 +162,6 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
+ mutex_unlock(&sound_oss_mutex);
+ return -ENOENT;
+ }
+- unregister_sound_special(minor);
+ switch (SNDRV_MINOR_OSS_DEVICE(minor)) {
+ case SNDRV_MINOR_OSS_PCM:
+ track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_AUDIO);
+@@ -174,12 +173,18 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
+ track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_DMMIDI1);
+ break;
+ }
+- if (track2 >= 0) {
+- unregister_sound_special(track2);
++ if (track2 >= 0)
+ snd_oss_minors[track2] = NULL;
+- }
+ snd_oss_minors[minor] = NULL;
+ mutex_unlock(&sound_oss_mutex);
++
++ /* call unregister_sound_special() outside sound_oss_mutex;
++ * otherwise may deadlock, as it can trigger the release of a card
++ */
++ unregister_sound_special(minor);
++ if (track2 >= 0)
++ unregister_sound_special(track2);
++
+ kfree(mptr);
+ return 0;
+ }
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index ec9cbb219bc14..dbc7dfd00c44a 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -422,6 +422,11 @@ static const struct config_entry config_table[] = {
+ .device = 0x51cd,
+ },
+ /* Alderlake-PS */
++ {
++ .flags = FLAG_SOF,
++ .device = 0x51c9,
++ .codec_hid = &essx_83x6,
++ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x51c9,
+diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c
+index 53a2b89f8983c..e63621bcb2142 100644
+--- a/sound/pci/hda/hda_beep.c
++++ b/sound/pci/hda/hda_beep.c
+@@ -118,6 +118,12 @@ static int snd_hda_beep_event(struct input_dev *dev, unsigned int type,
+ return 0;
+ }
+
++static void turn_on_beep(struct hda_beep *beep)
++{
++ if (beep->keep_power_at_enable)
++ snd_hda_power_up_pm(beep->codec);
++}
++
+ static void turn_off_beep(struct hda_beep *beep)
+ {
+ cancel_work_sync(&beep->beep_work);
+@@ -125,6 +131,8 @@ static void turn_off_beep(struct hda_beep *beep)
+ /* turn off beep */
+ generate_tone(beep, 0);
+ }
++ if (beep->keep_power_at_enable)
++ snd_hda_power_down_pm(beep->codec);
+ }
+
+ /**
+@@ -140,7 +148,9 @@ int snd_hda_enable_beep_device(struct hda_codec *codec, int enable)
+ enable = !!enable;
+ if (beep->enabled != enable) {
+ beep->enabled = enable;
+- if (!enable)
++ if (enable)
++ turn_on_beep(beep);
++ else
+ turn_off_beep(beep);
+ return 1;
+ }
+@@ -167,7 +177,8 @@ static int beep_dev_disconnect(struct snd_device *device)
+ input_unregister_device(beep->dev);
+ else
+ input_free_device(beep->dev);
+- turn_off_beep(beep);
++ if (beep->enabled)
++ turn_off_beep(beep);
+ return 0;
+ }
+
+diff --git a/sound/pci/hda/hda_beep.h b/sound/pci/hda/hda_beep.h
+index a25358a4807ab..db76e3ddba654 100644
+--- a/sound/pci/hda/hda_beep.h
++++ b/sound/pci/hda/hda_beep.h
+@@ -25,6 +25,7 @@ struct hda_beep {
+ unsigned int enabled:1;
+ unsigned int linear_tone:1; /* linear tone for IDT/STAC codec */
+ unsigned int playing:1;
++ unsigned int keep_power_at_enable:1; /* set by driver */
+ struct work_struct beep_work; /* scheduled task for beep event */
+ struct mutex mutex;
+ void (*power_hook)(struct hda_beep *beep, bool on);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index c239d9dbbaefe..63c0c84348d0e 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2747,9 +2747,6 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id)
+ */
+ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ return;
+- /* ditto during suspend/resume process itself */
+- if (snd_hdac_is_in_pm(&codec->core))
+- return;
+
+ check_presence_and_report(codec, pin_nid, dev_id);
+ }
+@@ -2933,9 +2930,6 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
+ */
+ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ return;
+- /* ditto during suspend/resume process itself */
+- if (snd_hdac_is_in_pm(&codec->core))
+- return;
+
+ snd_hdac_i915_set_bclk(&codec->bus->core);
+ check_presence_and_report(codec, pin_nid, dev_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2f335f0d8b4b5..86c23f1e8855f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8396,11 +8396,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ [ALC285_FIXUP_ASUS_G533Z_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+- { 0x14, 0x90170120 },
++ { 0x14, 0x90170152 }, /* Speaker Surround Playback Switch */
++ { 0x19, 0x03a19020 }, /* Mic Boost Volume */
++ { 0x1a, 0x03a11c30 }, /* Mic Boost Volume */
++ { 0x1e, 0x90170151 }, /* Rear jack, IN OUT EAPD Detect */
++ { 0x21, 0x03211420 },
+ { }
+ },
+- .chained = true,
+- .chain_id = ALC294_FIXUP_ASUS_G513_PINS,
+ },
+ [ALC294_FIXUP_ASUS_COEF_1B] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -9151,7 +9153,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
+- SND_PCI_QUIRK(0x1028, 0x087d, "Dell Precision 5530", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -9375,6 +9376,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -9396,6 +9398,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
++ SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 7f340f18599c9..a794a01a68ca6 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -4311,6 +4311,8 @@ static int stac_parse_auto_config(struct hda_codec *codec)
+ if (codec->beep) {
+ /* IDT/STAC codecs have linear beep tone parameter */
+ codec->beep->linear_tone = spec->linear_tone_beep;
++ /* keep power up while beep is enabled */
++ codec->beep->keep_power_at_enable = 1;
+ /* if no beep switch is available, make its own one */
+ caps = query_amp_caps(codec, nid, HDA_OUTPUT);
+ if (!(caps & AC_AMPCAP_MUTE)) {
+@@ -4444,28 +4446,6 @@ static int stac_suspend(struct hda_codec *codec)
+
+ return 0;
+ }
+-
+-static int stac_check_power_status(struct hda_codec *codec, hda_nid_t nid)
+-{
+-#ifdef CONFIG_SND_HDA_INPUT_BEEP
+- struct sigmatel_spec *spec = codec->spec;
+-#endif
+- int ret = snd_hda_gen_check_power_status(codec, nid);
+-
+-#ifdef CONFIG_SND_HDA_INPUT_BEEP
+- if (nid == spec->gen.beep_nid && codec->beep) {
+- if (codec->beep->enabled != spec->beep_power_on) {
+- spec->beep_power_on = codec->beep->enabled;
+- if (spec->beep_power_on)
+- snd_hda_power_up_pm(codec);
+- else
+- snd_hda_power_down_pm(codec);
+- }
+- ret |= spec->beep_power_on;
+- }
+-#endif
+- return ret;
+-}
+ #else
+ #define stac_suspend NULL
+ #endif /* CONFIG_PM */
+@@ -4478,7 +4458,6 @@ static const struct hda_codec_ops stac_patch_ops = {
+ .unsol_event = snd_hda_jack_unsol_event,
+ #ifdef CONFIG_PM
+ .suspend = stac_suspend,
+- .check_power_status = stac_check_power_status,
+ #endif
+ };
+
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 7fdef38ed8cd3..1bfba7ef51ce5 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -2196,6 +2196,7 @@ static int da7219_register_dai_clks(struct snd_soc_component *component)
+ dai_clk_lookup = clkdev_hw_create(dai_clk_hw, init.name,
+ "%s", dev_name(dev));
+ if (!dai_clk_lookup) {
++ clk_hw_unregister(dai_clk_hw);
+ ret = -ENOMEM;
+ goto err;
+ } else {
+@@ -2217,12 +2218,12 @@ static int da7219_register_dai_clks(struct snd_soc_component *component)
+ return 0;
+
+ err:
+- do {
++ while (--i >= 0) {
+ if (da7219->dai_clks_lookup[i])
+ clkdev_drop(da7219->dai_clks_lookup[i]);
+
+ clk_hw_unregister(&da7219->dai_clks_hw[i]);
+- } while (i-- > 0);
++ }
+
+ if (np)
+ kfree(da7219->clk_hw_data);
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index 55503ba480bb6..e162a08d99452 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -823,17 +823,23 @@ static int tx_macro_tx_mixer_put(struct snd_kcontrol *kcontrol,
+ struct tx_macro *tx = snd_soc_component_get_drvdata(component);
+
+ if (enable) {
++ if (tx->active_decimator[dai_id] == dec_id)
++ return 0;
++
+ set_bit(dec_id, &tx->active_ch_mask[dai_id]);
+ tx->active_ch_cnt[dai_id]++;
+ tx->active_decimator[dai_id] = dec_id;
+ } else {
++ if (tx->active_decimator[dai_id] == -1)
++ return 0;
++
+ tx->active_ch_cnt[dai_id]--;
+ clear_bit(dec_id, &tx->active_ch_mask[dai_id]);
+ tx->active_decimator[dai_id] = -1;
+ }
+ snd_soc_dapm_mixer_update_power(widget->dapm, kcontrol, enable, update);
+
+- return 0;
++ return 1;
+ }
+
+ static int tx_macro_enable_dec(struct snd_soc_dapm_widget *w,
+@@ -1019,9 +1025,12 @@ static int tx_macro_dec_mode_put(struct snd_kcontrol *kcontrol,
+ int path = e->shift_l;
+ struct tx_macro *tx = snd_soc_component_get_drvdata(component);
+
++ if (tx->dec_mode[path] == value)
++ return 0;
++
+ tx->dec_mode[path] = value;
+
+- return 0;
++ return 1;
+ }
+
+ static int tx_macro_get_bcs(struct snd_kcontrol *kcontrol,
+diff --git a/sound/soc/codecs/mt6359-accdet.c b/sound/soc/codecs/mt6359-accdet.c
+index c190628e29056..7f624854948c7 100644
+--- a/sound/soc/codecs/mt6359-accdet.c
++++ b/sound/soc/codecs/mt6359-accdet.c
+@@ -965,7 +965,7 @@ static int mt6359_accdet_probe(struct platform_device *pdev)
+ mutex_init(&priv->res_lock);
+
+ priv->accdet_irq = platform_get_irq(pdev, 0);
+- if (priv->accdet_irq) {
++ if (priv->accdet_irq >= 0) {
+ ret = devm_request_threaded_irq(&pdev->dev, priv->accdet_irq,
+ NULL, mt6359_accdet_irq,
+ IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+@@ -979,7 +979,7 @@ static int mt6359_accdet_probe(struct platform_device *pdev)
+
+ if (priv->caps & ACCDET_PMIC_EINT0) {
+ priv->accdet_eint0 = platform_get_irq(pdev, 1);
+- if (priv->accdet_eint0) {
++ if (priv->accdet_eint0 >= 0) {
+ ret = devm_request_threaded_irq(&pdev->dev,
+ priv->accdet_eint0,
+ NULL, mt6359_accdet_irq,
+@@ -994,7 +994,7 @@ static int mt6359_accdet_probe(struct platform_device *pdev)
+ }
+ } else if (priv->caps & ACCDET_PMIC_EINT1) {
+ priv->accdet_eint1 = platform_get_irq(pdev, 2);
+- if (priv->accdet_eint1) {
++ if (priv->accdet_eint1 >= 0) {
+ ret = devm_request_threaded_irq(&pdev->dev,
+ priv->accdet_eint1,
+ NULL, mt6359_accdet_irq,
+diff --git a/sound/soc/codecs/mt6660.c b/sound/soc/codecs/mt6660.c
+index ba11555796ad8..45e0df13afb9f 100644
+--- a/sound/soc/codecs/mt6660.c
++++ b/sound/soc/codecs/mt6660.c
+@@ -503,13 +503,17 @@ static int mt6660_i2c_probe(struct i2c_client *client)
+ dev_err(chip->dev, "read chip revision fail\n");
+ goto probe_fail;
+ }
+- pm_runtime_set_active(chip->dev);
+- pm_runtime_enable(chip->dev);
+
+ ret = devm_snd_soc_register_component(chip->dev,
+ &mt6660_component_driver,
+ &mt6660_codec_dai, 1);
++ if (!ret) {
++ pm_runtime_set_active(chip->dev);
++ pm_runtime_enable(chip->dev);
++ }
++
+ return ret;
++
+ probe_fail:
+ _mt6660_chip_power_on(chip, 0);
+ mutex_destroy(&chip->io_lock);
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 4cb788f3e5f71..7ae7a5249f952 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -34,6 +34,9 @@ struct tas2764_priv {
+
+ int v_sense_slot;
+ int i_sense_slot;
++
++ bool dac_powered;
++ bool unmuted;
+ };
+
+ static void tas2764_reset(struct tas2764_priv *tas2764)
+@@ -50,34 +53,22 @@ static void tas2764_reset(struct tas2764_priv *tas2764)
+ usleep_range(1000, 2000);
+ }
+
+-static int tas2764_set_bias_level(struct snd_soc_component *component,
+- enum snd_soc_bias_level level)
++static int tas2764_update_pwr_ctrl(struct tas2764_priv *tas2764)
+ {
+- struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
++ struct snd_soc_component *component = tas2764->component;
++ unsigned int val;
++ int ret;
+
+- switch (level) {
+- case SND_SOC_BIAS_ON:
+- snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_ACTIVE);
+- break;
+- case SND_SOC_BIAS_STANDBY:
+- case SND_SOC_BIAS_PREPARE:
+- snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_MUTE);
+- break;
+- case SND_SOC_BIAS_OFF:
+- snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_SHUTDOWN);
+- break;
++ if (tas2764->dac_powered)
++ val = tas2764->unmuted ?
++ TAS2764_PWR_CTRL_ACTIVE : TAS2764_PWR_CTRL_MUTE;
++ else
++ val = TAS2764_PWR_CTRL_SHUTDOWN;
+
+- default:
+- dev_err(tas2764->dev,
+- "wrong power level setting %d\n", level);
+- return -EINVAL;
+- }
++ ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
++ TAS2764_PWR_CTRL_MASK, val);
++ if (ret < 0)
++ return ret;
+
+ return 0;
+ }
+@@ -114,9 +105,7 @@ static int tas2764_codec_resume(struct snd_soc_component *component)
+ usleep_range(1000, 2000);
+ }
+
+- ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_ACTIVE);
++ ret = tas2764_update_pwr_ctrl(tas2764);
+
+ if (ret < 0)
+ return ret;
+@@ -150,14 +139,12 @@ static int tas2764_dac_event(struct snd_soc_dapm_widget *w,
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+- ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_MUTE);
++ tas2764->dac_powered = true;
++ ret = tas2764_update_pwr_ctrl(tas2764);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+- ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_SHUTDOWN);
++ tas2764->dac_powered = false;
++ ret = tas2764_update_pwr_ctrl(tas2764);
+ break;
+ default:
+ dev_err(tas2764->dev, "Unsupported event\n");
+@@ -202,17 +189,11 @@ static const struct snd_soc_dapm_route tas2764_audio_map[] = {
+
+ static int tas2764_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+- struct snd_soc_component *component = dai->component;
+- int ret;
+-
+- ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- mute ? TAS2764_PWR_CTRL_MUTE : 0);
++ struct tas2764_priv *tas2764 =
++ snd_soc_component_get_drvdata(dai->component);
+
+- if (ret < 0)
+- return ret;
+-
+- return 0;
++ tas2764->unmuted = !mute;
++ return tas2764_update_pwr_ctrl(tas2764);
+ }
+
+ static int tas2764_set_bitwidth(struct tas2764_priv *tas2764, int bitwidth)
+@@ -485,7 +466,7 @@ static struct snd_soc_dai_driver tas2764_dai_driver[] = {
+ .id = 0,
+ .playback = {
+ .stream_name = "ASI1 Playback",
+- .channels_min = 2,
++ .channels_min = 1,
+ .channels_max = 2,
+ .rates = TAS2764_RATES,
+ .formats = TAS2764_FORMATS,
+@@ -526,12 +507,6 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ if (ret < 0)
+ return ret;
+
+- ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL,
+- TAS2764_PWR_CTRL_MASK,
+- TAS2764_PWR_CTRL_MUTE);
+- if (ret < 0)
+- return ret;
+-
+ return 0;
+ }
+
+@@ -549,7 +524,6 @@ static const struct snd_soc_component_driver soc_component_driver_tas2764 = {
+ .probe = tas2764_codec_probe,
+ .suspend = tas2764_codec_suspend,
+ .resume = tas2764_codec_resume,
+- .set_bias_level = tas2764_set_bias_level,
+ .controls = tas2764_snd_controls,
+ .num_controls = ARRAY_SIZE(tas2764_snd_controls),
+ .dapm_widgets = tas2764_dapm_widgets,
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 541ef1cd3b74e..6b2ff38db5f9c 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -1983,8 +1983,8 @@ static int wcd9335_trigger(struct snd_pcm_substream *substream, int cmd,
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+- slim_stream_unprepare(dai_data->sruntime);
+ slim_stream_disable(dai_data->sruntime);
++ slim_stream_unprepare(dai_data->sruntime);
+ break;
+ default:
+ break;
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index f56907d0942db..28175c746b9ae 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1913,8 +1913,8 @@ static int wcd934x_trigger(struct snd_pcm_substream *substream, int cmd,
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+- slim_stream_unprepare(dai_data->sruntime);
+ slim_stream_disable(dai_data->sruntime);
++ slim_stream_unprepare(dai_data->sruntime);
+ break;
+ default:
+ break;
+diff --git a/sound/soc/codecs/wm5102.c b/sound/soc/codecs/wm5102.c
+index b034df47a5ef1..e92daeba11f20 100644
+--- a/sound/soc/codecs/wm5102.c
++++ b/sound/soc/codecs/wm5102.c
+@@ -2100,9 +2100,6 @@ static int wm5102_probe(struct platform_device *pdev)
+ regmap_update_bits(arizona->regmap, wm5102_digital_vu[i],
+ WM5102_DIG_VU, WM5102_DIG_VU);
+
+- pm_runtime_enable(&pdev->dev);
+- pm_runtime_idle(&pdev->dev);
+-
+ ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ "ADSP2 Compressed IRQ", wm5102_adsp2_irq,
+ wm5102);
+@@ -2135,6 +2132,9 @@ static int wm5102_probe(struct platform_device *pdev)
+ goto err_spk_irqs;
+ }
+
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_idle(&pdev->dev);
++
+ return ret;
+
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index 4ab7a672f8de8..1b0da02b5c79c 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -2458,9 +2458,6 @@ static int wm5110_probe(struct platform_device *pdev)
+ regmap_update_bits(arizona->regmap, wm5110_digital_vu[i],
+ WM5110_DIG_VU, WM5110_DIG_VU);
+
+- pm_runtime_enable(&pdev->dev);
+- pm_runtime_idle(&pdev->dev);
+-
+ ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1,
+ "ADSP2 Compressed IRQ", wm5110_adsp2_irq,
+ wm5110);
+@@ -2493,6 +2490,9 @@ static int wm5110_probe(struct platform_device *pdev)
+ goto err_spk_irqs;
+ }
+
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_idle(&pdev->dev);
++
+ return ret;
+
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm8997.c b/sound/soc/codecs/wm8997.c
+index 38ef631d1a1ff..c8c711e555c0e 100644
+--- a/sound/soc/codecs/wm8997.c
++++ b/sound/soc/codecs/wm8997.c
+@@ -1162,9 +1162,6 @@ static int wm8997_probe(struct platform_device *pdev)
+ regmap_update_bits(arizona->regmap, wm8997_digital_vu[i],
+ WM8997_DIG_VU, WM8997_DIG_VU);
+
+- pm_runtime_enable(&pdev->dev);
+- pm_runtime_idle(&pdev->dev);
+-
+ arizona_init_common(arizona);
+
+ ret = arizona_init_vol_limit(arizona);
+@@ -1183,6 +1180,9 @@ static int wm8997_probe(struct platform_device *pdev)
+ goto err_spk_irqs;
+ }
+
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_idle(&pdev->dev);
++
+ return ret;
+
+ err_spk_irqs:
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index a7784ac15dde6..0e2c785d911f1 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1617,7 +1617,9 @@ static int wm_adsp_buffer_init(struct wm_adsp *dsp)
+ if (list_empty(&dsp->buffer_list)) {
+ /* Fall back to legacy support */
+ ret = wm_adsp_buffer_parse_legacy(dsp);
+- if (ret)
++ if (ret == -ENODEV)
++ adsp_info(dsp, "Legacy support not available\n");
++ else if (ret)
+ adsp_warn(dsp, "Failed to parse legacy: %d\n", ret);
+ }
+
+diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
+index 8b61582753c86..9af4c4a35eb16 100644
+--- a/sound/soc/fsl/eukrea-tlv320.c
++++ b/sound/soc/fsl/eukrea-tlv320.c
+@@ -86,7 +86,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ int ret;
+ int int_port = 0, ext_port;
+ struct device_node *np = pdev->dev.of_node;
+- struct device_node *ssi_np = NULL, *codec_np = NULL;
++ struct device_node *ssi_np = NULL, *codec_np = NULL, *tmp_np = NULL;
+
+ eukrea_tlv320.dev = &pdev->dev;
+ if (np) {
+@@ -143,7 +143,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ }
+
+ if (machine_is_eukrea_cpuimx27() ||
+- of_find_compatible_node(NULL, NULL, "fsl,imx21-audmux")) {
++ (tmp_np = of_find_compatible_node(NULL, NULL, "fsl,imx21-audmux"))) {
+ imx_audmux_v1_configure_port(MX27_AUDMUX_HPCR1_SSI0,
+ IMX_AUDMUX_V1_PCR_SYN |
+ IMX_AUDMUX_V1_PCR_TFSDIR |
+@@ -158,10 +158,11 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ IMX_AUDMUX_V1_PCR_SYN |
+ IMX_AUDMUX_V1_PCR_RXDSEL(MX27_AUDMUX_HPCR1_SSI0)
+ );
++ of_node_put(tmp_np);
+ } else if (machine_is_eukrea_cpuimx25sd() ||
+ machine_is_eukrea_cpuimx35sd() ||
+ machine_is_eukrea_cpuimx51sd() ||
+- of_find_compatible_node(NULL, NULL, "fsl,imx31-audmux")) {
++ (tmp_np = of_find_compatible_node(NULL, NULL, "fsl,imx31-audmux"))) {
+ if (!np)
+ ext_port = machine_is_eukrea_cpuimx25sd() ?
+ 4 : 3;
+@@ -178,6 +179,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ IMX_AUDMUX_V2_PTCR_SYN,
+ IMX_AUDMUX_V2_PDCR_RXDSEL(int_port)
+ );
++ of_node_put(tmp_np);
+ } else {
+ if (np) {
+ /* The eukrea,asoc-tlv320 driver was explicitly
+diff --git a/sound/soc/sh/rcar/ctu.c b/sound/soc/sh/rcar/ctu.c
+index 6156445bcb69a..e39eb2ac7e955 100644
+--- a/sound/soc/sh/rcar/ctu.c
++++ b/sound/soc/sh/rcar/ctu.c
+@@ -171,7 +171,11 @@ static int rsnd_ctu_init(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv)
+ {
+- rsnd_mod_power_on(mod);
++ int ret;
++
++ ret = rsnd_mod_power_on(mod);
++ if (ret < 0)
++ return ret;
+
+ rsnd_ctu_activation(mod);
+
+diff --git a/sound/soc/sh/rcar/dvc.c b/sound/soc/sh/rcar/dvc.c
+index 5137e03a9d7c7..16befcbc312cb 100644
+--- a/sound/soc/sh/rcar/dvc.c
++++ b/sound/soc/sh/rcar/dvc.c
+@@ -186,7 +186,11 @@ static int rsnd_dvc_init(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv)
+ {
+- rsnd_mod_power_on(mod);
++ int ret;
++
++ ret = rsnd_mod_power_on(mod);
++ if (ret < 0)
++ return ret;
+
+ rsnd_dvc_activation(mod);
+
+diff --git a/sound/soc/sh/rcar/mix.c b/sound/soc/sh/rcar/mix.c
+index 3572c2c5686c7..1de0e085804cc 100644
+--- a/sound/soc/sh/rcar/mix.c
++++ b/sound/soc/sh/rcar/mix.c
+@@ -146,7 +146,11 @@ static int rsnd_mix_init(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv)
+ {
+- rsnd_mod_power_on(mod);
++ int ret;
++
++ ret = rsnd_mod_power_on(mod);
++ if (ret < 0)
++ return ret;
+
+ rsnd_mix_activation(mod);
+
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index 0ea84ae57c6ac..f832165e46bc0 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -463,11 +463,14 @@ static int rsnd_src_init(struct rsnd_mod *mod,
+ struct rsnd_priv *priv)
+ {
+ struct rsnd_src *src = rsnd_mod_to_src(mod);
++ int ret;
+
+ /* reset sync convert_rate */
+ src->sync.val = 0;
+
+- rsnd_mod_power_on(mod);
++ ret = rsnd_mod_power_on(mod);
++ if (ret < 0)
++ return ret;
+
+ rsnd_src_activation(mod);
+
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 43c5e27dc5c86..7ade6c5ed96ff 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -480,7 +480,9 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+
+ ssi->usrcnt++;
+
+- rsnd_mod_power_on(mod);
++ ret = rsnd_mod_power_on(mod);
++ if (ret < 0)
++ return ret;
+
+ rsnd_ssi_config_init(mod, io);
+
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 0c1de56248427..6359d00b7bdaa 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -723,7 +723,7 @@ static int soc_pcm_close(struct snd_pcm_substream *substream)
+ struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+
+ snd_soc_dpcm_mutex_lock(rtd);
+- soc_pcm_clean(rtd, substream, 0);
++ __soc_pcm_close(rtd, substream);
+ snd_soc_dpcm_mutex_unlock(rtd);
+ return 0;
+ }
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 17f2f3a982c38..7d9e62ab9d0e4 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -376,6 +376,10 @@ static int dmic_num_override = -1;
+ module_param_named(dmic_num, dmic_num_override, int, 0444);
+ MODULE_PARM_DESC(dmic_num, "SOF HDA DMIC number");
+
++static int mclk_id_override = -1;
++module_param_named(mclk_id, mclk_id_override, int, 0444);
++MODULE_PARM_DESC(mclk_id, "SOF SSP mclk_id");
++
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+ static bool hda_codec_use_common_hdmi = IS_ENABLED(CONFIG_SND_HDA_CODEC_HDMI);
+ module_param_named(use_common_hdmi, hda_codec_use_common_hdmi, bool, 0444);
+@@ -1433,6 +1437,13 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+
+ sof_pdata->tplg_filename = tplg_filename;
+ }
++
++ /* check if mclk_id should be modified from topology defaults */
++ if (mclk_id_override >= 0) {
++ dev_info(sdev->dev, "Overriding topology with MCLK %d from kernel_parameter\n", mclk_id_override);
++ sdev->mclk_id_override = true;
++ sdev->mclk_id_quirk = mclk_id_override;
++ }
+ }
+
+ /*
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index e97f50d5bcba1..b8ec302bc8871 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -1233,6 +1233,7 @@ static int sof_link_afe_load(struct snd_soc_component *scomp, struct snd_sof_dai
+ static int sof_link_ssp_load(struct snd_soc_component *scomp, struct snd_sof_dai_link *slink,
+ struct sof_ipc_dai_config *config, struct snd_sof_dai *dai)
+ {
++ struct snd_sof_dev *sdev = snd_soc_component_get_drvdata(scomp);
+ struct snd_soc_tplg_hw_config *hw_config = slink->hw_configs;
+ struct sof_dai_private_data *private = dai->private;
+ u32 size = sizeof(*config);
+@@ -1257,6 +1258,12 @@ static int sof_link_ssp_load(struct snd_soc_component *scomp, struct snd_sof_dai
+
+ config[i].hdr.size = size;
+
++ if (sdev->mclk_id_override) {
++ dev_dbg(scomp->dev, "tplg: overriding topology mclk_id %d by quirk %d\n",
++ config[i].ssp.mclk_id, sdev->mclk_id_quirk);
++ config[i].ssp.mclk_id = sdev->mclk_id_quirk;
++ }
++
+ /* copy differentiating hw configs to ipc structs */
+ config[i].ssp.mclk_rate = le32_to_cpu(hw_config[i].mclk_rate);
+ config[i].ssp.bclk_rate = le32_to_cpu(hw_config[i].bclk_rate);
+diff --git a/sound/soc/sof/mediatek/mt8195/mt8195.c b/sound/soc/sof/mediatek/mt8195/mt8195.c
+index 30111ab23bf5e..30ab2274fbf8f 100644
+--- a/sound/soc/sof/mediatek/mt8195/mt8195.c
++++ b/sound/soc/sof/mediatek/mt8195/mt8195.c
+@@ -634,4 +634,5 @@ static struct platform_driver snd_sof_of_mt8195_driver = {
+ module_platform_driver(snd_sof_of_mt8195_driver);
+
+ MODULE_IMPORT_NS(SND_SOC_SOF_XTENSA);
++MODULE_IMPORT_NS(SND_SOC_SOF_MTK_COMMON);
+ MODULE_LICENSE("Dual BSD/GPL");
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index d627092b399d7..643fd1036d60b 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -138,7 +138,7 @@ static const struct dmi_system_id community_key_platforms[] = {
+ .ident = "Google Chromebooks",
+ .callback = chromebook_use_community_key,
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Google"),
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Google"),
+ }
+ },
+ {},
+diff --git a/sound/soc/sof/sof-priv.h b/sound/soc/sof/sof-priv.h
+index f11f575fd1da2..544e5be9d10ef 100644
+--- a/sound/soc/sof/sof-priv.h
++++ b/sound/soc/sof/sof-priv.h
+@@ -585,6 +585,10 @@ struct snd_sof_dev {
+ /* to protect the ipc_rx_handler_list and dsp_state_handler_list list */
+ struct mutex client_event_handler_mutex;
+
++ /* quirks to override topology values */
++ bool mclk_id_override;
++ u16 mclk_id_quirk; /* same size as in IPC3 definitions */
++
+ void *private; /* core does not touch this */
+ };
+
+diff --git a/sound/soc/stm/stm32_adfsdm.c b/sound/soc/stm/stm32_adfsdm.c
+index 6ee714542b84a..c0f964891b584 100644
+--- a/sound/soc/stm/stm32_adfsdm.c
++++ b/sound/soc/stm/stm32_adfsdm.c
+@@ -334,8 +334,6 @@ static int stm32_adfsdm_probe(struct platform_device *pdev)
+
+ dev_set_drvdata(&pdev->dev, priv);
+
+- pm_runtime_enable(&pdev->dev);
+-
+ ret = devm_snd_soc_register_component(&pdev->dev,
+ &stm32_adfsdm_dai_component,
+ &priv->dai_drv, 1);
+@@ -365,9 +363,13 @@ static int stm32_adfsdm_probe(struct platform_device *pdev)
+ #endif
+
+ ret = snd_soc_add_component(component, NULL, 0);
+- if (ret < 0)
++ if (ret < 0) {
+ dev_err(&pdev->dev, "%s: Failed to register PCM platform\n",
+ __func__);
++ return ret;
++ }
++
++ pm_runtime_enable(&pdev->dev);
+
+ return ret;
+ }
+diff --git a/sound/soc/stm/stm32_i2s.c b/sound/soc/stm/stm32_i2s.c
+index ac5dff4d1677a..d9e622f4c4221 100644
+--- a/sound/soc/stm/stm32_i2s.c
++++ b/sound/soc/stm/stm32_i2s.c
+@@ -1135,8 +1135,6 @@ static int stm32_i2s_probe(struct platform_device *pdev)
+ return dev_err_probe(&pdev->dev, PTR_ERR(i2s->regmap),
+ "Regmap init error\n");
+
+- pm_runtime_enable(&pdev->dev);
+-
+ ret = snd_dmaengine_pcm_register(&pdev->dev, &stm32_i2s_pcm_config, 0);
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "PCM DMA register error\n");
+@@ -1179,6 +1177,8 @@ static int stm32_i2s_probe(struct platform_device *pdev)
+ FIELD_GET(I2S_VERR_MIN_MASK, val));
+ }
+
++ pm_runtime_enable(&pdev->dev);
++
+ return ret;
+
+ error:
+diff --git a/sound/soc/stm/stm32_spdifrx.c b/sound/soc/stm/stm32_spdifrx.c
+index 6f7882c4fe6ad..60be4894e5fdc 100644
+--- a/sound/soc/stm/stm32_spdifrx.c
++++ b/sound/soc/stm/stm32_spdifrx.c
+@@ -1001,8 +1001,6 @@ static int stm32_spdifrx_probe(struct platform_device *pdev)
+ udelay(2);
+ reset_control_deassert(rst);
+
+- pm_runtime_enable(&pdev->dev);
+-
+ pcm_config = &stm32_spdifrx_pcm_config;
+ ret = snd_dmaengine_pcm_register(&pdev->dev, pcm_config, 0);
+ if (ret)
+@@ -1035,6 +1033,8 @@ static int stm32_spdifrx_probe(struct platform_device *pdev)
+ FIELD_GET(SPDIFRX_VERR_MIN_MASK, ver));
+ }
+
++ pm_runtime_enable(&pdev->dev);
++
+ return ret;
+
+ error:
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 706d249a9ad6b..a5ed11ea11456 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -690,7 +690,7 @@ static bool get_alias_id(struct usb_device *dev, unsigned int *id)
+ return false;
+ }
+
+-static bool check_delayed_register_option(struct snd_usb_audio *chip, int iface)
++static int check_delayed_register_option(struct snd_usb_audio *chip)
+ {
+ int i;
+ unsigned int id, inum;
+@@ -699,14 +699,31 @@ static bool check_delayed_register_option(struct snd_usb_audio *chip, int iface)
+ if (delayed_register[i] &&
+ sscanf(delayed_register[i], "%x:%x", &id, &inum) == 2 &&
+ id == chip->usb_id)
+- return iface < inum;
++ return inum;
+ }
+
+- return false;
++ return -1;
+ }
+
+ static const struct usb_device_id usb_audio_ids[]; /* defined below */
+
++/* look for the last interface that matches with our ids and remember it */
++static void find_last_interface(struct snd_usb_audio *chip)
++{
++ struct usb_host_config *config = chip->dev->actconfig;
++ struct usb_interface *intf;
++ int i;
++
++ if (!config)
++ return;
++ for (i = 0; i < config->desc.bNumInterfaces; i++) {
++ intf = config->interface[i];
++ if (usb_match_id(intf, usb_audio_ids))
++ chip->last_iface = intf->altsetting[0].desc.bInterfaceNumber;
++ }
++ usb_audio_dbg(chip, "Found last interface = %d\n", chip->last_iface);
++}
++
+ /* look for the corresponding quirk */
+ static const struct snd_usb_audio_quirk *
+ get_alias_quirk(struct usb_device *dev, unsigned int id)
+@@ -813,6 +830,7 @@ static int usb_audio_probe(struct usb_interface *intf,
+ err = -ENODEV;
+ goto __error;
+ }
++ find_last_interface(chip);
+ }
+
+ if (chip->num_interfaces >= MAX_CARD_INTERFACES) {
+@@ -862,11 +880,11 @@ static int usb_audio_probe(struct usb_interface *intf,
+ chip->need_delayed_register = false; /* clear again */
+ }
+
+- /* we are allowed to call snd_card_register() many times, but first
+- * check to see if a device needs to skip it or do anything special
++ /* register card if we reach to the last interface or to the specified
++ * one given via option
+ */
+- if (!snd_usb_registration_quirk(chip, ifnum) &&
+- !check_delayed_register_option(chip, ifnum)) {
++ if (check_delayed_register_option(chip) == ifnum ||
++ usb_interface_claimed(usb_ifnum_to_if(dev, chip->last_iface))) {
+ err = snd_card_register(chip->card);
+ if (err < 0)
+ goto __error;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 5d105c44b46df..118b1fb7dc86c 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -39,6 +39,7 @@ struct snd_usb_iface_ref {
+ struct snd_usb_clock_ref {
+ unsigned char clock;
+ atomic_t locked;
++ int opened;
+ int rate;
+ struct list_head list;
+ };
+@@ -93,12 +94,13 @@ static inline unsigned get_usb_high_speed_rate(unsigned int rate)
+ */
+ static void release_urb_ctx(struct snd_urb_ctx *u)
+ {
+- if (u->buffer_size)
++ if (u->urb && u->buffer_size)
+ usb_free_coherent(u->ep->chip->dev, u->buffer_size,
+ u->urb->transfer_buffer,
+ u->urb->transfer_dma);
+ usb_free_urb(u->urb);
+ u->urb = NULL;
++ u->buffer_size = 0;
+ }
+
+ static const char *usb_error_string(int err)
+@@ -801,6 +803,7 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip,
+ ep = NULL;
+ goto unlock;
+ }
++ ep->clock_ref->opened++;
+ }
+
+ ep->cur_audiofmt = fp;
+@@ -924,8 +927,10 @@ void snd_usb_endpoint_close(struct snd_usb_audio *chip,
+ endpoint_set_interface(chip, ep, false);
+
+ if (!--ep->opened) {
+- if (ep->clock_ref && !atomic_read(&ep->clock_ref->locked))
+- ep->clock_ref->rate = 0;
++ if (ep->clock_ref) {
++ if (!--ep->clock_ref->opened)
++ ep->clock_ref->rate = 0;
++ }
+ ep->iface = 0;
+ ep->altsetting = 0;
+ ep->cur_audiofmt = NULL;
+@@ -1261,6 +1266,7 @@ static int sync_ep_set_params(struct snd_usb_endpoint *ep)
+ if (!ep->syncbuf)
+ return -ENOMEM;
+
++ ep->nurbs = SYNC_URBS;
+ for (i = 0; i < SYNC_URBS; i++) {
+ struct snd_urb_ctx *u = &ep->urb[i];
+ u->index = i;
+@@ -1280,8 +1286,6 @@ static int sync_ep_set_params(struct snd_usb_endpoint *ep)
+ u->urb->complete = snd_complete_urb;
+ }
+
+- ep->nurbs = SYNC_URBS;
+-
+ return 0;
+
+ out_of_memory:
+@@ -1633,8 +1637,7 @@ void snd_usb_endpoint_stop(struct snd_usb_endpoint *ep, bool keep_pending)
+ WRITE_ONCE(ep->sync_source->sync_sink, NULL);
+ stop_urbs(ep, false, keep_pending);
+ if (ep->clock_ref)
+- if (!atomic_dec_return(&ep->clock_ref->locked))
+- ep->clock_ref->rate = 0;
++ atomic_dec(&ep->clock_ref->locked);
+ }
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 5b4d8f5eade20..8c3b0be909eb0 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1728,48 +1728,6 @@ void snd_usb_audioformat_attributes_quirk(struct snd_usb_audio *chip,
+ }
+ }
+
+-/*
+- * registration quirk:
+- * the registration is skipped if a device matches with the given ID,
+- * unless the interface reaches to the defined one. This is for delaying
+- * the registration until the last known interface, so that the card and
+- * devices appear at the same time.
+- */
+-
+-struct registration_quirk {
+- unsigned int usb_id; /* composed via USB_ID() */
+- unsigned int interface; /* the interface to trigger register */
+-};
+-
+-#define REG_QUIRK_ENTRY(vendor, product, iface) \
+- { .usb_id = USB_ID(vendor, product), .interface = (iface) }
+-
+-static const struct registration_quirk registration_quirks[] = {
+- REG_QUIRK_ENTRY(0x0951, 0x16d8, 2), /* Kingston HyperX AMP */
+- REG_QUIRK_ENTRY(0x0951, 0x16ed, 2), /* Kingston HyperX Cloud Alpha S */
+- REG_QUIRK_ENTRY(0x0951, 0x16ea, 2), /* Kingston HyperX Cloud Flight S */
+- REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2), /* JBL Quantum 600 */
+- REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2), /* JBL Quantum 800 */
+- REG_QUIRK_ENTRY(0x0ecb, 0x1f4c, 2), /* JBL Quantum 400 */
+- REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2), /* JBL Quantum 400 */
+- REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2), /* JBL Quantum 600 */
+- REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2), /* JBL Quantum 800 */
+- { 0 } /* terminator */
+-};
+-
+-/* return true if skipping registration */
+-bool snd_usb_registration_quirk(struct snd_usb_audio *chip, int iface)
+-{
+- const struct registration_quirk *q;
+-
+- for (q = registration_quirks; q->usb_id; q++)
+- if (chip->usb_id == q->usb_id)
+- return iface < q->interface;
+-
+- /* Register as normal */
+- return false;
+-}
+-
+ /*
+ * driver behavior quirk flags
+ */
+diff --git a/sound/usb/quirks.h b/sound/usb/quirks.h
+index 31abb7cb01a52..f9bfd5ac7bab0 100644
+--- a/sound/usb/quirks.h
++++ b/sound/usb/quirks.h
+@@ -48,8 +48,6 @@ void snd_usb_audioformat_attributes_quirk(struct snd_usb_audio *chip,
+ struct audioformat *fp,
+ int stream);
+
+-bool snd_usb_registration_quirk(struct snd_usb_audio *chip, int iface);
+-
+ void snd_usb_init_quirk_flags(struct snd_usb_audio *chip);
+
+ #endif /* __USBAUDIO_QUIRKS_H */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index ffbb4b0d09a07..2c6575029b1cd 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -37,6 +37,7 @@ struct snd_usb_audio {
+ unsigned int quirk_flags;
+ unsigned int need_delayed_register:1; /* warn for delayed registration */
+ int num_interfaces;
++ int last_iface;
+ int num_suspended_intf;
+ int sample_rate_read_error;
+
+diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
+index f5dddf8ef404b..6d041d1f53959 100644
+--- a/tools/bpf/bpftool/btf_dumper.c
++++ b/tools/bpf/bpftool/btf_dumper.c
+@@ -426,7 +426,7 @@ static int btf_dumper_int(const struct btf_type *t, __u8 bit_offset,
+ *(char *)data);
+ break;
+ case BTF_INT_BOOL:
+- jsonw_bool(jw, *(int *)data);
++ jsonw_bool(jw, *(bool *)data);
+ break;
+ default:
+ /* shouldn't happen */
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index 9062ef2b87670..0881437587bad 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -435,6 +435,16 @@ int main(int argc, char **argv)
+
+ setlinebuf(stdout);
+
++#ifdef USE_LIBCAP
++ /* Libcap < 2.63 hooks before main() to compute the number of
++ * capabilities of the running kernel, and doing so it calls prctl()
++ * which may fail and set errno to non-zero.
++ * Let's reset errno to make sure this does not interfere with the
++ * batch mode.
++ */
++ errno = 0;
++#endif
++
+ last_do_help = do_help;
+ pretty_output = false;
+ json_output = false;
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 67dc010e9fe3b..63954a2e213da 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -1228,15 +1228,15 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ ctx = xsk->ctx;
+ umem = ctx->umem;
+
+- xsk_put_ctx(ctx, true);
+-
+- if (!ctx->refcount) {
++ if (ctx->refcount == 1) {
+ xsk_delete_bpf_maps(xsk);
+ close(ctx->prog_fd);
+ if (ctx->has_bpf_link)
+ close(ctx->link_fd);
+ }
+
++ xsk_put_ctx(ctx, true);
++
+ err = xsk_get_mmap_offsets(xsk->fd, &off);
+ if (!err) {
+ if (xsk->rx) {
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index c25e957c1e520..7e24b09b1163a 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -619,6 +619,11 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
+ Elf64_Xword entsize = symtab->sh.sh_entsize;
+ int max_idx, idx = sym->idx;
+ Elf_Scn *s, *t = NULL;
++ bool is_special_shndx = sym->sym.st_shndx >= SHN_LORESERVE &&
++ sym->sym.st_shndx != SHN_XINDEX;
++
++ if (is_special_shndx)
++ shndx = sym->sym.st_shndx;
+
+ s = elf_getscn(elf->elf, symtab->idx);
+ if (!s) {
+@@ -704,7 +709,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
+ }
+
+ /* setup extended section index magic and write the symbol */
+- if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++ if ((shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) || is_special_shndx) {
+ sym->sym.st_shndx = shndx;
+ if (!shndx_data)
+ shndx = 0;
+diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
+index 06c2cdfd8f2fa..892339da90cf4 100644
+--- a/tools/perf/arch/x86/util/intel-pt.c
++++ b/tools/perf/arch/x86/util/intel-pt.c
+@@ -871,7 +871,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
+ * User space tasks can migrate between CPUs, so when tracing
+ * selected CPUs, sideband for all CPUs is still needed.
+ */
+- need_system_wide_tracking = evlist->core.has_user_cpus &&
++ need_system_wide_tracking = opts->target.cpu_list &&
+ !intel_pt_evsel->core.attr.exclude_user;
+
+ tracking_evsel = evlist__add_aux_dummy(evlist, need_system_wide_tracking);
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 62b2f375a94dc..5a87c80ea3259 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -3878,6 +3878,7 @@ static const char * const intel_pt_info_fmts[] = {
+ [INTEL_PT_SNAPSHOT_MODE] = " Snapshot mode %"PRId64"\n",
+ [INTEL_PT_PER_CPU_MMAPS] = " Per-cpu maps %"PRId64"\n",
+ [INTEL_PT_MTC_BIT] = " MTC bit %#"PRIx64"\n",
++ [INTEL_PT_MTC_FREQ_BITS] = " MTC freq bits %#"PRIx64"\n",
+ [INTEL_PT_TSC_CTC_N] = " TSC:CTC numerator %"PRIu64"\n",
+ [INTEL_PT_TSC_CTC_D] = " TSC:CTC denominator %"PRIu64"\n",
+ [INTEL_PT_CYC_BIT] = " CYC bit %#"PRIx64"\n",
+@@ -3892,8 +3893,12 @@ static void intel_pt_print_info(__u64 *arr, int start, int finish)
+ if (!dump_trace)
+ return;
+
+- for (i = start; i <= finish; i++)
+- fprintf(stdout, intel_pt_info_fmts[i], arr[i]);
++ for (i = start; i <= finish; i++) {
++ const char *fmt = intel_pt_info_fmts[i];
++
++ if (fmt)
++ fprintf(stdout, fmt, arr[i]);
++ }
+ }
+
+ static void intel_pt_print_info_str(const char *name, const char *str)
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index b51c646c212e5..d5646c867c5b9 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -255,6 +255,9 @@ __add_event(struct list_head *list, int *idx,
+ struct perf_cpu_map *cpus = pmu ? perf_cpu_map__get(pmu->cpus) :
+ cpu_list ? perf_cpu_map__new(cpu_list) : NULL;
+
++ if (pmu)
++ perf_pmu__warn_invalid_formats(pmu);
++
+ if (pmu && attr->type == PERF_TYPE_RAW)
+ perf_pmu__warn_invalid_config(pmu, attr->config, name);
+
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 9a1c7e63e6630..6c1924883b183 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -1048,6 +1048,23 @@ err:
+ return NULL;
+ }
+
++void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu)
++{
++ struct perf_pmu_format *format;
++
++ /* fake pmu doesn't have format list */
++ if (pmu == &perf_pmu__fake)
++ return;
++
++ list_for_each_entry(format, &pmu->format, list)
++ if (format->value >= PERF_PMU_FORMAT_VALUE_CONFIG_END) {
++ pr_warning("WARNING: '%s' format '%s' requires 'perf_event_attr::config%d'"
++ "which is not supported by this version of perf!\n",
++ pmu->name, format->name, format->value);
++ return;
++ }
++}
++
+ static struct perf_pmu *pmu_find(const char *name)
+ {
+ struct perf_pmu *pmu;
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index 541889fa9f9c6..420d8b83c6905 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -17,6 +17,7 @@ enum {
+ PERF_PMU_FORMAT_VALUE_CONFIG,
+ PERF_PMU_FORMAT_VALUE_CONFIG1,
+ PERF_PMU_FORMAT_VALUE_CONFIG2,
++ PERF_PMU_FORMAT_VALUE_CONFIG_END,
+ };
+
+ #define PERF_PMU_FORMAT_BITS 64
+@@ -139,6 +140,7 @@ int perf_pmu__caps_parse(struct perf_pmu *pmu);
+
+ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
+ const char *name);
++void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
+
+ bool perf_pmu__has_hybrid(void);
+ int perf_pmu__match(char *pattern, char *name, char *tok);
+diff --git a/tools/perf/util/pmu.l b/tools/perf/util/pmu.l
+index a15d9fbd7c0ed..58b4926cfaca9 100644
+--- a/tools/perf/util/pmu.l
++++ b/tools/perf/util/pmu.l
+@@ -27,8 +27,6 @@ num_dec [0-9]+
+
+ {num_dec} { return value(10); }
+ config { return PP_CONFIG; }
+-config1 { return PP_CONFIG1; }
+-config2 { return PP_CONFIG2; }
+ - { return '-'; }
+ : { return ':'; }
+ , { return ','; }
+diff --git a/tools/perf/util/pmu.y b/tools/perf/util/pmu.y
+index bfd7e8509869b..283efe059819d 100644
+--- a/tools/perf/util/pmu.y
++++ b/tools/perf/util/pmu.y
+@@ -20,7 +20,7 @@ do { \
+
+ %}
+
+-%token PP_CONFIG PP_CONFIG1 PP_CONFIG2
++%token PP_CONFIG
+ %token PP_VALUE PP_ERROR
+ %type <num> PP_VALUE
+ %type <bits> bit_term
+@@ -47,18 +47,11 @@ PP_CONFIG ':' bits
+ $3));
+ }
+ |
+-PP_CONFIG1 ':' bits
++PP_CONFIG PP_VALUE ':' bits
+ {
+ ABORT_ON(perf_pmu__new_format(format, name,
+- PERF_PMU_FORMAT_VALUE_CONFIG1,
+- $3));
+-}
+-|
+-PP_CONFIG2 ':' bits
+-{
+- ABORT_ON(perf_pmu__new_format(format, name,
+- PERF_PMU_FORMAT_VALUE_CONFIG2,
+- $3));
++ $2,
++ $4));
+ }
+
+ bits:
+diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.c b/tools/testing/selftests/arm64/signal/testcases/testcases.c
+index 84c36bee4d82a..d98828cb542be 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/testcases.c
++++ b/tools/testing/selftests/arm64/signal/testcases/testcases.c
+@@ -33,7 +33,7 @@ bool validate_extra_context(struct extra_context *extra, char **err)
+ return false;
+
+ fprintf(stderr, "Validating EXTRA...\n");
+- term = GET_RESV_NEXT_HEAD(extra);
++ term = GET_RESV_NEXT_HEAD(&extra->head);
+ if (!term || term->magic || term->size) {
+ *err = "Missing terminator after EXTRA context";
+ return false;
+diff --git a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
+index 78c76496b14ad..b595556315bc3 100644
+--- a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
++++ b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
+@@ -3,6 +3,7 @@
+ #include <stdio.h>
+ #include <errno.h>
+ #include <string.h>
++#include <unistd.h>
+
+ #include <bpf/bpf.h>
+ #include <bpf/libbpf.h>
+@@ -137,6 +138,7 @@ static void __test_map_lookup_and_update_batch(bool is_pcpu)
+ free(keys);
+ free(values);
+ free(visited);
++ close(map_fd);
+ }
+
+ static void array_map_batch_ops(void)
+diff --git a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
+index f807d53fd8dd4..1230ccf901280 100644
+--- a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
++++ b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
+@@ -3,6 +3,7 @@
+ #include <stdio.h>
+ #include <errno.h>
+ #include <string.h>
++#include <unistd.h>
+
+ #include <bpf/bpf.h>
+ #include <bpf/libbpf.h>
+@@ -255,6 +256,7 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
+ free(visited);
+ if (!is_pcpu)
+ free(values);
++ close(map_fd);
+ }
+
+ void htab_map_batch_ops(void)
+diff --git a/tools/testing/selftests/bpf/map_tests/lpm_trie_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_batch_ops.c
+index 87d07b596e170..b66d56ddb7ef2 100644
+--- a/tools/testing/selftests/bpf/map_tests/lpm_trie_map_batch_ops.c
++++ b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_batch_ops.c
+@@ -7,6 +7,7 @@
+ #include <errno.h>
+ #include <string.h>
+ #include <stdlib.h>
++#include <unistd.h>
+
+ #include <bpf/bpf.h>
+ #include <bpf/libbpf.h>
+@@ -150,4 +151,5 @@ void test_lpm_trie_map_batch_ops(void)
+ free(keys);
+ free(values);
+ free(visited);
++ close(map_fd);
+ }
+diff --git a/tools/testing/selftests/bpf/progs/kprobe_multi.c b/tools/testing/selftests/bpf/progs/kprobe_multi.c
+index 08f95a8155d1b..98c3399e15c03 100644
+--- a/tools/testing/selftests/bpf/progs/kprobe_multi.c
++++ b/tools/testing/selftests/bpf/progs/kprobe_multi.c
+@@ -36,15 +36,13 @@ __u64 kretprobe_test6_result = 0;
+ __u64 kretprobe_test7_result = 0;
+ __u64 kretprobe_test8_result = 0;
+
+-extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
+-
+ static void kprobe_multi_check(void *ctx, bool is_return)
+ {
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return;
+
+ __u64 cookie = test_cookie ? bpf_get_attach_cookie(ctx) : 0;
+- __u64 addr = bpf_get_func_ip(ctx) - (CONFIG_X86_KERNEL_IBT ? 4 : 0);
++ __u64 addr = bpf_get_func_ip(ctx);
+
+ #define SET(__var, __addr, __cookie) ({ \
+ if (((const void *) addr == __addr) && \
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index cbebfaa7c1e82..4d42ffea00388 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -658,13 +658,13 @@ static void test_sockmap(unsigned int tasks, void *data)
+ {
+ struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_msg, *bpf_map_break;
+ int map_fd_msg = 0, map_fd_rx = 0, map_fd_tx = 0, map_fd_break;
++ struct bpf_object *parse_obj, *verdict_obj, *msg_obj;
+ int ports[] = {50200, 50201, 50202, 50204};
+ int err, i, fd, udp, sfd[6] = {0xdeadbeef};
+ u8 buf[20] = {0x0, 0x5, 0x3, 0x2, 0x1, 0x0};
+ int parse_prog, verdict_prog, msg_prog;
+ struct sockaddr_in addr;
+ int one = 1, s, sc, rc;
+- struct bpf_object *obj;
+ struct timeval to;
+ __u32 key, value;
+ pid_t pid[tasks];
+@@ -760,6 +760,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ i, udp);
+ goto out_sockmap;
+ }
++ close(udp);
+
+ /* Test update without programs */
+ for (i = 0; i < 6; i++) {
+@@ -822,27 +823,27 @@ static void test_sockmap(unsigned int tasks, void *data)
+
+ /* Load SK_SKB program and Attach */
+ err = bpf_prog_test_load(SOCKMAP_PARSE_PROG,
+- BPF_PROG_TYPE_SK_SKB, &obj, &parse_prog);
++ BPF_PROG_TYPE_SK_SKB, &parse_obj, &parse_prog);
+ if (err) {
+ printf("Failed to load SK_SKB parse prog\n");
+ goto out_sockmap;
+ }
+
+ err = bpf_prog_test_load(SOCKMAP_TCP_MSG_PROG,
+- BPF_PROG_TYPE_SK_MSG, &obj, &msg_prog);
++ BPF_PROG_TYPE_SK_MSG, &msg_obj, &msg_prog);
+ if (err) {
+ printf("Failed to load SK_SKB msg prog\n");
+ goto out_sockmap;
+ }
+
+ err = bpf_prog_test_load(SOCKMAP_VERDICT_PROG,
+- BPF_PROG_TYPE_SK_SKB, &obj, &verdict_prog);
++ BPF_PROG_TYPE_SK_SKB, &verdict_obj, &verdict_prog);
+ if (err) {
+ printf("Failed to load SK_SKB verdict prog\n");
+ goto out_sockmap;
+ }
+
+- bpf_map_rx = bpf_object__find_map_by_name(obj, "sock_map_rx");
++ bpf_map_rx = bpf_object__find_map_by_name(verdict_obj, "sock_map_rx");
+ if (!bpf_map_rx) {
+ printf("Failed to load map rx from verdict prog\n");
+ goto out_sockmap;
+@@ -854,7 +855,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ goto out_sockmap;
+ }
+
+- bpf_map_tx = bpf_object__find_map_by_name(obj, "sock_map_tx");
++ bpf_map_tx = bpf_object__find_map_by_name(verdict_obj, "sock_map_tx");
+ if (!bpf_map_tx) {
+ printf("Failed to load map tx from verdict prog\n");
+ goto out_sockmap;
+@@ -866,7 +867,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ goto out_sockmap;
+ }
+
+- bpf_map_msg = bpf_object__find_map_by_name(obj, "sock_map_msg");
++ bpf_map_msg = bpf_object__find_map_by_name(verdict_obj, "sock_map_msg");
+ if (!bpf_map_msg) {
+ printf("Failed to load map msg from msg_verdict prog\n");
+ goto out_sockmap;
+@@ -878,7 +879,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ goto out_sockmap;
+ }
+
+- bpf_map_break = bpf_object__find_map_by_name(obj, "sock_map_break");
++ bpf_map_break = bpf_object__find_map_by_name(verdict_obj, "sock_map_break");
+ if (!bpf_map_break) {
+ printf("Failed to load map tx from verdict prog\n");
+ goto out_sockmap;
+@@ -1124,7 +1125,9 @@ static void test_sockmap(unsigned int tasks, void *data)
+ }
+ close(fd);
+ close(map_fd_rx);
+- bpf_object__close(obj);
++ bpf_object__close(parse_obj);
++ bpf_object__close(msg_obj);
++ bpf_object__close(verdict_obj);
+ return;
+ out:
+ for (i = 0; i < 6; i++)
+@@ -1282,8 +1285,11 @@ static void test_map_in_map(void)
+ printf("Inner map mim.inner was not destroyed\n");
+ goto out_map_in_map;
+ }
++
++ close(fd);
+ }
+
++ bpf_object__close(obj);
+ return;
+
+ out_map_in_map:
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index e5992a6b5e096..92e466310e27b 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -1589,6 +1589,8 @@ static struct ifobject *ifobject_create(void)
+ if (!ifobj->umem)
+ goto out_umem;
+
++ ifobj->ns_fd = -1;
++
+ return ifobj;
+
+ out_umem:
+@@ -1600,6 +1602,8 @@ out_xsk_arr:
+
+ static void ifobject_delete(struct ifobject *ifobj)
+ {
++ if (ifobj->ns_fd != -1)
++ close(ifobj->ns_fd);
+ free(ifobj->umem);
+ free(ifobj->xsk_arr);
+ free(ifobj);
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 03b586760164a..31c3b6ebd388b 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -1466,6 +1466,13 @@ ipv4_udp_novrf()
+ run_cmd nettest -D -r ${a} -d ${NSA_DEV} -S -0 ${NSA_IP}
+ log_test_addr ${a} $? 0 "Client, device bind via IP_UNICAST_IF"
+
++ log_start
++ run_cmd_nsb nettest -D -s &
++ sleep 1
++ run_cmd nettest -D -r ${a} -d ${NSA_DEV} -S -0 ${NSA_IP} -U
++ log_test_addr ${a} $? 0 "Client, device bind via IP_UNICAST_IF, with connect()"
++
++
+ log_start
+ show_hint "Should fail 'Connection refused'"
+ run_cmd nettest -D -r ${a}
+@@ -1525,6 +1532,13 @@ ipv4_udp_novrf()
+ run_cmd nettest -D -d ${NSA_DEV} -S -r ${a}
+ log_test_addr ${a} $? 0 "Global server, device client via IP_UNICAST_IF, local connection"
+
++ log_start
++ run_cmd nettest -s -D &
++ sleep 1
++ run_cmd nettest -D -d ${NSA_DEV} -S -r ${a} -U
++ log_test_addr ${a} $? 0 "Global server, device client via IP_UNICAST_IF, local connection, with connect()"
++
++
+ # IPv4 with device bind has really weird behavior - it overrides the
+ # fib lookup, generates an rtable and tries to send the packet. This
+ # causes failures for local traffic at different places
+@@ -1550,6 +1564,15 @@ ipv4_udp_novrf()
+ sleep 1
+ run_cmd nettest -D -r ${a} -d ${NSA_DEV} -S
+ log_test_addr ${a} $? 1 "Global server, device client via IP_UNICAST_IF, local connection"
++
++ log_start
++ show_hint "Should fail since addresses on loopback are out of device scope"
++ run_cmd nettest -D -s &
++ sleep 1
++ run_cmd nettest -D -r ${a} -d ${NSA_DEV} -S -U
++ log_test_addr ${a} $? 1 "Global server, device client via IP_UNICAST_IF, local connection, with connect()"
++
++
+ done
+
+ a=${NSA_IP}
+@@ -3157,6 +3180,13 @@ ipv6_udp_novrf()
+ sleep 1
+ run_cmd nettest -6 -D -r ${a} -d ${NSA_DEV} -S
+ log_test_addr ${a} $? 1 "Global server, device client via IP_UNICAST_IF, local connection"
++
++ log_start
++ show_hint "Should fail 'No route to host' since addresses on loopback are out of device scope"
++ run_cmd nettest -6 -D -s &
++ sleep 1
++ run_cmd nettest -6 -D -r ${a} -d ${NSA_DEV} -S -U
++ log_test_addr ${a} $? 1 "Global server, device client via IP_UNICAST_IF, local connection, with connect()"
+ done
+
+ a=${NSA_IP6}
+diff --git a/tools/testing/selftests/net/nettest.c b/tools/testing/selftests/net/nettest.c
+index d9a6fd2cd9d31..7900fa98eccb1 100644
+--- a/tools/testing/selftests/net/nettest.c
++++ b/tools/testing/selftests/net/nettest.c
+@@ -127,6 +127,9 @@ struct sock_args {
+
+ /* ESP in UDP encap test */
+ int use_xfrm;
++
++ /* use send() and connect() instead of sendto */
++ int datagram_connect;
+ };
+
+ static int server_mode;
+@@ -979,6 +982,11 @@ static int send_msg(int sd, void *addr, socklen_t alen, struct sock_args *args)
+ log_err_errno("write failed sending msg to peer");
+ return 1;
+ }
++ } else if (args->datagram_connect) {
++ if (send(sd, msg, msglen, 0) < 0) {
++ log_err_errno("send failed sending msg to peer");
++ return 1;
++ }
+ } else if (args->ifindex && args->use_cmsg) {
+ if (send_msg_cmsg(sd, addr, alen, args->ifindex, args->version))
+ return 1;
+@@ -1659,7 +1667,7 @@ static int connectsock(void *addr, socklen_t alen, struct sock_args *args)
+ if (args->has_local_ip && bind_socket(sd, args))
+ goto err;
+
+- if (args->type != SOCK_STREAM)
++ if (args->type != SOCK_STREAM && !args->datagram_connect)
+ goto out;
+
+ if (args->password && tcp_md5sig(sd, addr, alen, args))
+@@ -1854,7 +1862,7 @@ static int ipc_parent(int cpid, int fd, struct sock_args *args)
+ return client_status;
+ }
+
+-#define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SCi6xL:0:1:2:3:Fbqf"
++#define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SUCi6xL:0:1:2:3:Fbqf"
+ #define OPT_FORCE_BIND_KEY_IFINDEX 1001
+ #define OPT_NO_BIND_KEY_IFINDEX 1002
+
+@@ -1891,6 +1899,7 @@ static void print_usage(char *prog)
+ " -I dev bind socket to given device name - server mode\n"
+ " -S use setsockopt (IP_UNICAST_IF or IP_MULTICAST_IF)\n"
+ " to set device binding\n"
++ " -U Use connect() and send() for datagram sockets\n"
+ " -f bind socket with the IP[V6]_FREEBIND option\n"
+ " -C use cmsg and IP_PKTINFO to specify device binding\n"
+ "\n"
+@@ -2074,6 +2083,9 @@ int main(int argc, char *argv[])
+ case 'x':
+ args.use_xfrm = 1;
+ break;
++ case 'U':
++ args.datagram_connect = 1;
++ break;
+ default:
+ print_usage(argv[0]);
+ return 1;
+diff --git a/tools/testing/selftests/tpm2/tpm2.py b/tools/testing/selftests/tpm2/tpm2.py
+index 057a4f49c79d9..c7363c6764fc6 100644
+--- a/tools/testing/selftests/tpm2/tpm2.py
++++ b/tools/testing/selftests/tpm2/tpm2.py
+@@ -371,6 +371,10 @@ class Client:
+ fcntl.fcntl(self.tpm, fcntl.F_SETFL, flags)
+ self.tpm_poll = select.poll()
+
++ def __del__(self):
++ if self.tpm:
++ self.tpm.close()
++
+ def close(self):
+ self.tpm.close()
+
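A note on the nettest changes above: the new -U option makes the datagram tests call connect() and then plain send() instead of sendto(), so address-scope and device-binding errors are reported once, at connect() time. The following is a minimal standalone sketch of that connected-UDP pattern, not code from the patch; the loopback address, port 12345, and payload are all illustrative:

	#include <arpa/inet.h>
	#include <netinet/in.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main(void)
	{
		struct sockaddr_in peer = {
			.sin_family = AF_INET,
			.sin_port = htons(12345),	/* illustrative port */
		};
		const char msg[] = "hello";
		int sd;

		inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

		sd = socket(AF_INET, SOCK_DGRAM, 0);
		if (sd < 0) {
			perror("socket");
			return 1;
		}

		/*
		 * connect() on a datagram socket just records the peer, so
		 * route and scope errors surface here instead of on every
		 * sendto(); this is the behavior the -U tests probe.
		 */
		if (connect(sd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
			perror("connect");
			close(sd);
			return 1;
		}

		/* Plain send(), no destination address needed. */
		if (send(sd, msg, sizeof(msg), 0) < 0)
			perror("send");

		close(sd);
		return 0;
	}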
* [gentoo-commits] proj/linux-patches:5.19 commit in: /
@ 2023-03-21 13:33 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2023-03-21 13:33 UTC
To: gentoo-commits
commit: 0f89cb690c57679f621b0fc647d2736205158c68
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar 21 12:58:39 2023 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar 21 13:33:32 2023 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0f89cb69
Fix config change from X86_X32 to X86_X32_ABI
Thanks to Frank Limpert
Bug: https://bugs.gentoo.org/902443
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 9e0701dd..9cb1eb0c 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -185,7 +185,7 @@
+config GENTOO_KERNEL_SELF_PROTECTION_COMMON
+ bool "Enable Kernel Self Protection Project Recommendations"
+
-+ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT && SECURITY && !ARCH_EPHEMERAL_INODES && RANDSTRUCT_PERFORMANCE
++ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32_ABI && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT && SECURITY && !ARCH_EPHEMERAL_INODES && RANDSTRUCT_PERFORMANCE
+
+ select BUG
+ select STRICT_KERNEL_RWX
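Background for this one-line fix: upstream renamed the x86 32-bit ABI option from X86_X32 to X86_X32_ABI during the 5.18 cycle, so a Kconfig dependency still spelling the old name can never be satisfied. The sketch below only illustrates that failure mode and is not kernel code; compile it with -DCONFIG_X86_X32_ABI to mimic a post-rename config:

	#include <stdio.h>

	int main(void)
	{
	#ifdef CONFIG_X86_X32		/* stale pre-5.18 name: never defined now */
		puts("old guard matched");
	#endif
	#ifdef CONFIG_X86_X32_ABI	/* current name since the rename */
		puts("new guard matched");
	#endif
		return 0;
	}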
Thread overview: 27+ messages
2022-08-31 13:33 [gentoo-commits] proj/linux-patches:5.19 commit in: / Mike Pagano
-- strict thread matches above, loose matches on Subject: below --
2023-03-21 13:33 Mike Pagano
2022-10-24 11:04 Mike Pagano
2022-10-15 10:04 Mike Pagano
2022-10-12 11:17 Mike Pagano
2022-10-05 11:56 Mike Pagano
2022-10-04 14:51 Mike Pagano
2022-09-28 9:55 Mike Pagano
2022-09-27 12:09 Mike Pagano
2022-09-27 12:02 Mike Pagano
2022-09-23 12:50 Mike Pagano
2022-09-23 12:38 Mike Pagano
2022-09-20 12:00 Mike Pagano
2022-09-15 10:29 Mike Pagano
2022-09-08 10:45 Mike Pagano
2022-09-05 12:02 Mike Pagano
2022-08-31 15:44 Mike Pagano
2022-08-31 12:11 Mike Pagano
2022-08-29 10:45 Mike Pagano
2022-08-25 17:37 Mike Pagano
2022-08-25 10:31 Mike Pagano
2022-08-21 16:55 Mike Pagano
2022-08-19 13:32 Mike Pagano
2022-08-17 14:30 Mike Pagano
2022-08-11 12:32 Mike Pagano
2022-08-02 18:20 Mike Pagano
2022-06-27 19:30 Mike Pagano